Merge remote-tracking branch 'upstream/main' into dev-1.32

pull/48640/head
Shedrack akintayo 2024-11-04 20:21:10 +00:00
commit db99d83333
27 changed files with 1030 additions and 278 deletions

View File

@ -81,5 +81,5 @@ Consistent with the CNCF Charter,

## Acknowledgements

This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 2.0 available at
http://contributor-covenant.org/version/2/0/code_of_conduct/
(https://contributor-covenant.org), version 2.0 available at
https://contributor-covenant.org/version/2/0/code_of_conduct/

View File

@ -5,12 +5,10 @@ slug: k8s-upstream-training-japan-spotlight
date: 2024-10-28
canonicalUrl: https://www.k8s.dev/blog/2024/10/28/k8s-upstream-training-japan-spotlight/
author: >
[Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)
[Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba) /
Organizing team of Kubernetes Upstream Training in Japan
---
## About our team
We are organizers of [Kubernetes Upstream Training in Japan](https://github.com/kubernetes-sigs/contributor-playground/tree/master/japan).
Our team is composed of members who actively contribute to Kubernetes, including individuals who hold roles such as member, reviewer, approver, and chair.

View File

@ -39,7 +39,7 @@ assigning a Pod to a specific node is called _binding_, and the process of selec
which node to use is called _scheduling_.
Once a Pod has been scheduled and is bound to a node, Kubernetes tries
to run that Pod on the node. The Pod runs on that node until it stops, or until the Pod
is [terminated](#pod-termination); if Kubernetes isn't able start the Pod on the selected
is [terminated](#pod-termination); if Kubernetes isn't able to start the Pod on the selected
node (for example, if the node crashes before the Pod starts), then that particular Pod
never starts.

View File

@ -97,6 +97,12 @@ maintain sidecar containers without affecting the primary application.
Sidecar containers share the same network and storage namespaces with the primary
container. This co-location allows them to interact closely and share resources.
From the Kubernetes perspective, graceful termination of sidecar containers is less important.
When other containers have used up all of the allotted graceful termination time, sidecar containers
receive the `SIGTERM`, followed by `SIGKILL`, sooner than might be expected.
Exit codes other than `0` (`0` indicates a successful exit) are therefore normal for sidecar containers
on Pod termination, and external tooling should generally ignore them.
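As an illustration (a minimal sketch; names and images are illustrative, and declaring an init
container with `restartPolicy: Always` is what marks it as a built-in sidecar), the grace period
below is shared by the whole Pod, so a slow main container can leave the sidecar almost none of it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar              # illustrative name
spec:
  terminationGracePeriodSeconds: 30   # shared by all containers in the Pod
  initContainers:
  - name: log-shipper                 # built-in sidecar: init container that keeps running
    image: alpine:3
    restartPolicy: Always
    command: ["sh", "-c", "tail -F /dev/null"]
  containers:
  - name: app
    image: nginx
    # If this container takes most of the 30s to shut down, the sidecar may be
    # sent SIGKILL almost immediately after its SIGTERM and exit non-zero.
```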
## Differences from init containers
Sidecar containers work alongside the main container, extending its functionality and

View File

@ -93,7 +93,7 @@ and editorial review of their PR.
## How merging works
When a pull request is merged to the branch used to publish content, that content is published to http://kubernetes.io. To ensure that
When a pull request is merged to the branch used to publish content, that content is published to https://kubernetes.io. To ensure that
the quality of our published content is high, we limit merging pull requests to
SIG Docs approvers. Here's how it works.

View File

@ -163,6 +163,48 @@ When reviewing, use the following as a starting point.
- Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code
blocks, tables, notes and images.
### Blog
- Early feedback on blog posts is welcome via a Google Doc or HackMD. Please request input early from the [#sig-docs-blog Slack channel](https://kubernetes.slack.com/archives/CJDHVD54J).
- Before reviewing blog PRs, be familiar with [Submitting blog posts and case studies](/docs/contribute/new-content/blogs-case-studies/).
- We are willing to mirror any blog article that was published to https://kubernetes.dev/blog/ (the contributor blog) provided that:
- the mirrored article has the same publication date as the original (it should have the same publication time too, but you can also set a time stamp up to 12 hours later for special cases)
- for PRs that arrive after the original article was merged to https://kubernetes.dev/, there haven't been
(and won't be) any articles published to the main blog between the time that the original and the
mirrored article publish.
This is because we don't want to add articles to people's feeds, such as RSS, except at the very end of their feed.
- the original article doesn't contravene any strongly recommended review guidelines or community norms.
- You should set the canonical URL for the mirrored article, to the URL of the original article
(you can use a preview to predict the URL and fill this in ahead of actual publication). Use the `canonicalUrl`
field in [front matter](https://gohugo.io/content-management/front-matter/) for this; see the front matter sketch after this list.
- Consider the target audience and whether the blog post is appropriate for kubernetes.io.
For example, if the target audience is Kubernetes contributors only, then kubernetes.dev
may be more appropriate;
or if the blog post is on general platform engineering, then it may be more suitable on another site.
This consideration applies to mirrored articles too; although we are willing to consider all valid
contributor articles for mirroring if a PR is opened, we don't mirror all of them.
- We only mark blog articles as maintained (`evergreen: true` in front matter) if the Kubernetes project
is happy to commit to maintaining them indefinitely. Some blog articles absolutely merit this, and we
always mark our release announcements evergreen. Check with other contributors if you are not sure
how to review on this point.
- The [content guide](/docs/contribute/style/content-guide/) applies unconditionally to blog articles
and the PRs that add them. Bear in mind that some restrictions in the guide state that they are only relevant to documentation; those restrictions don't apply to blog articles.
- The [style guide](/docs/contribute/style/style-guide/) largely also applies to blog PRs, but we make some exceptions.
- it is OK to use “we” in a blog article that has multiple authors, or where the article introduction clearly indicates that the author is writing on behalf of a specific group.
- we avoid using Kubernetes shortcodes for callouts (such as `{{</* caution */>}}`)
- statements about the future are OK, albeit we use them with care in official announcements on
behalf of Kubernetes
- code samples don't need to use the `{{</* code_sample */>}}` shortcode, and often it is better if they do not
- we are OK to have authors write an article in their own writing style, so long as most readers
would follow the point being made
- The [diagram guide](/docs/contribute/style/diagram-guide/) is aimed at Kubernetes documentation,
not blog articles. It is still good to align with it but:
- we prefer SVG over raster diagram formats, and also over Mermaid (you can still capture the Mermaid source in a comment)
- there is no need to caption diagrams as Figure 1, Figure 2 etc
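As referenced above, here is a sketch of front matter for a mirrored article (values are
illustrative; `canonicalUrl` points at the original on the contributor blog, and `evergreen`
is only set when the project commits to maintaining the post):

```yaml
---
layout: blog
title: "Example Mirrored Article"    # illustrative
date: 2024-10-28
canonicalUrl: https://www.kubernetes.dev/blog/2024/10/28/example-mirrored-article/
evergreen: false
---
```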
### Other
- Watch out for [trivial edits](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits);
@ -181,5 +223,4 @@ This lets the author know that this part of your feedback is non-critical.
If you are considering a pull request for approval and all the remaining feedback is
marked as a nit, you can merge the PR anyway. In that case, it's often useful to open
an issue about the remaining nits. Consider whether you're able to meet the requirements
for marking that new issue as a [Good First Issue](https://www.kubernetes.dev/docs/guide/help-wanted/#good-first-issue);
if you can, these are a good source.
for marking that new issue as a [Good First Issue](https://www.kubernetes.dev/docs/guide/help-wanted/#good-first-issue); if you can, these are a good source.

View File

@ -9,6 +9,10 @@ stages:
- stage: alpha
defaultValue: false
fromVersion: "1.30"
toVersion: "1.30"
- stage: beta
defaultValue: true
fromVersion: "1.31"
---
Enables support for recursive read-only mounts.
For more details, see [read-only mounts](/docs/concepts/storage/volumes/#read-only-mounts).
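As a sketch of what the feature enables once the gate is on (names and paths are illustrative;
`recursiveReadOnly` requires `readOnly: true` on the same mount):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rro-demo                  # illustrative
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
      recursiveReadOnly: Enabled  # make submounts read-only too
  volumes:
  - name: data
    hostPath:
      path: /var/lib/demo         # illustrative host path
```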

View File

@ -4,7 +4,7 @@ title: Pod Disruption Budget
full-link: /docs/concepts/workloads/pods/disruptions/
date: 2019-02-12
short_description: >
An object that limits the number of {{< glossary_tooltip text="Pods" term_id="pod" >}} of a replicated application, that are down simultaneously from voluntary disruptions.
An object that limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions.
aka:
- PDB
@ -17,8 +17,8 @@ tags:
A [Pod Disruption Budget](/docs/concepts/workloads/pods/disruptions/) allows an
application owner to create an object for a replicated application that ensures
a certain number or percentage of Pods with an assigned label will not be voluntarily
evicted at any point in time.
a certain number or percentage of {{< glossary_tooltip text="Pods" term_id="pod" >}}
with an assigned label will not be voluntarily evicted at any point in time.
<!--more-->
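A minimal sketch of such an object (names and values are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb        # illustrative
spec:
  minAvailable: 2          # keep at least 2 matching Pods up during voluntary disruptions
  selector:
    matchLabels:
      app: example
```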

View File

@ -42,15 +42,19 @@ Kubernetes requires PKI for the following operations:
### Kubelet's server and client certificates
To establish a secure connection and authenticate itself to the kubelet, the API Server
requires a client certificate and key pair.
In this scenario, there are two approaches for certificate usage:
using shared certificates or separate certificates;
In this scenario, there are two approaches for certificate usage:
* Shared Certificates: The kube-apiserver can utilize the same certificate and key pair it uses to authenticate its clients. This means that the existing certificates, such as `apiserver.crt` and `apiserver.key`, can be used for communicating with the kubelet servers.
* Shared Certificates: The kube-apiserver can utilize the same certificate and key pair it uses
to authenticate its clients. This means that the existing certificates, such as `apiserver.crt`
and `apiserver.key`, can be used for communicating with the kubelet servers.
* Separate Certificates: Alternatively, the kube-apiserver can generate a new client certificate and key pair to authenticate its communication with the kubelet servers. In this case, a distinct certificate named `kubelet-client.crt` and its corresponding private key, `kubelet-client.key` are created.
* Separate Certificates: Alternatively, the kube-apiserver can generate a new client certificate
and key pair to authenticate its communication with the kubelet servers. In this case,
a distinct certificate named `kubelet-client.crt` and its corresponding private key,
`kubelet-client.key` are created.
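For the separate-certificates approach, a minimal sketch using `openssl` might look like the
following (file names and validity are illustrative; the CN/O values follow the certificate
tables below, and `ca.crt`/`ca.key` are your Kubernetes CA):

```shell
# Generate a key and CSR for a dedicated kubelet client certificate
openssl genrsa -out kubelet-client.key 2048
openssl req -new -key kubelet-client.key \
  -subj "/O=system:masters/CN=kube-apiserver-kubelet-client" \
  -out kubelet-client.csr

# Sign it with the cluster CA
openssl x509 -req -in kubelet-client.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kubelet-client.crt -days 365
```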
{{< note >}}
`front-proxy` certificates are required only if you run kube-proxy to support
@ -80,7 +84,7 @@ multiple intermediate CAs, and delegate all further creation to Kubernetes itsel
Required CAs:
| path | Default CN | description |
| Path | Default CN | Description |
|------------------------|---------------------------|----------------------------------|
| ca.crt,key | kubernetes-ca | Kubernetes general CA |
| etcd/ca.crt,key | etcd-ca | For all etcd-related functions |
@ -111,7 +115,7 @@ Required certificates:
| kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
| kube-apiserver-etcd-client | etcd-ca | | client | |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`, `[1]` |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`[^1] |
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
@ -121,7 +125,7 @@ a less privileged group can be used. kubeadm uses the `kubeadm:cluster-admins` g
that purpose.
{{< /note >}}
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)
[^1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)
the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)
@ -155,22 +159,22 @@ For kubeadm users only:
Certificates should be placed in a recommended path (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)).
Paths should be specified using the given argument regardless of location.
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
|------------------------------|------------------------------|-----------------------------|-------------------------|------------------------------|-------------------------------------------|
| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
| kube-apiserver-kubelet-client| apiserver-kubelet-client.key | apiserver-kubelet-client.crt| kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
|------------|----------------------|-----------------------|---------|--------------|---------------|
| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
| kube-apiserver-kubelet-client | apiserver-kubelet-client.key | apiserver-kubelet-client.crt | kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
Same considerations apply for the service account key pair:
@ -206,11 +210,12 @@ you need to provide if you are generating all of your own keys and certificates:
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
```
## Configure certificates for user accounts
You must manually configure these administrator account and service accounts:
You must manually configure these administrator accounts and service accounts:
| filename | credential name | Default CN | O (in Subject) |
| Filename | Credential name | Default CN | O (in Subject) |
|-------------------------|----------------------------|-------------------------------------|------------------------|
| admin.conf | default-admin | kubernetes-admin | `<admin-group>` |
| super-admin.conf | default-super-admin | kubernetes-super-admin | system:masters |
@ -240,20 +245,21 @@ Another is in `super-admin.conf` that has `Subject: O = system:masters, CN = kub
This file is generated only on the node where `kubeadm init` was called.
{{< /note >}}
1. For each config, generate an x509 cert/key pair with the given CN and O.
1. For each configuration, generate an x509 certificate/key pair with the
given Common Name (CN) and Organization (O).
1. Run `kubectl` as follows for each config:
1. Run `kubectl` as follows for each configuration:
```
KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs
KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>
KUBECONFIG=<filename> kubectl config use-context default-system
```
These files are used as follows:
| filename | command | comment |
| Filename | Command | Comment |
|-------------------------|-------------------------|-----------------------------------------------------------------------|
| admin.conf | kubectl | Configures administrator user for the cluster |
| super-admin.conf | kubectl | Configures super administrator user for the cluster |
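For example, a quick sketch of exercising one of these files (path as recommended above):

```shell
KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes
```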

View File

@ -5,7 +5,6 @@ title: Validate node setup
weight: 30
---
## Node Conformance Test
*Node conformance test* is a containerized test framework that provides a system
@ -19,40 +18,42 @@ To run node conformance test, a node must satisfy the same prerequisites as a
standard Kubernetes node. At a minimum, the node should have the following
daemons installed:
* CRI-compatible container runtimes such as Docker, Containerd and CRI-O
* Kubelet
* CRI-compatible container runtimes such as Docker, containerd and CRI-O
* kubelet
## Running Node Conformance Test
To run the node conformance test, perform the following steps:
1. Work out the value of the `--kubeconfig` option for the kubelet; for example:
`--kubeconfig=/var/lib/kubelet/config.yaml`.
Because the test framework starts a local control plane to test the kubelet,
use `http://localhost:8080` as the URL of the API server.
There are some other kubelet command line parameters you may want to use:
* `--cloud-provider`: If you are using `--cloud-provider=gce`, you should
remove the flag to run the test.
2. Run the node conformance test with command:
1. Run the node conformance test with the following command:
```shell
# $CONFIG_DIR is the pod manifest path of your Kubelet.
# $LOG_DIR is the test output path.
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
registry.k8s.io/node-test:0.2
```
```shell
# $CONFIG_DIR is the pod manifest path of your kubelet.
# $LOG_DIR is the test output path.
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
registry.k8s.io/node-test:0.2
```
## Running Node Conformance Test for Other Architectures
Kubernetes also provides node conformance test docker images for other
architectures:
Arch | Image |
--------|:-----------------:|
amd64 | node-test-amd64 |
arm | node-test-arm |
arm64 | node-test-arm64 |
| Arch | Image |
|--------|:-----------------:|
| amd64 | node-test-amd64 |
| arm | node-test-arm |
| arm64 | node-test-arm64 |
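Assuming the architecture-specific images take the same tag and invocation as the generic
image above (an assumption, not verified here), running the arm64 variant would look like:

```shell
sudo docker run -it --rm --privileged --net=host \
  -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
  registry.k8s.io/node-test-arm64:0.2
```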
## Running Selected Test
@ -76,7 +77,8 @@ sudo docker run -it --rm --privileged --net=host \
registry.k8s.io/node-test:0.2
```
Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md).
Node conformance test is a containerized version of
[node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md).
By default, it runs all conformance tests.
Theoretically, you can run any node e2e test if you configure the container and

View File

@ -69,6 +69,8 @@ Using Kubernetes' native support for sidecar containers provides several benefit
special care is needed to handle this situation.
4. Also, with Jobs, built-in sidecar containers keep being restarted once they exit, even though regular containers would not be restarted under the Pod's `restartPolicy: Never`.
See [differences from init containers](/docs/concepts/workloads/pods/sidecar-containers/#differences-from-application-containers) to learn more about it.
## Adopting built-in sidecar containers
The `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is in beta state starting from Kubernetes version 1.29 and is enabled by default.
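A minimal sketch of adopting a built-in sidecar in a Job (names and images are illustrative;
declaring an init container with `restartPolicy: Always` is what makes it a sidecar):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-sidecar      # illustrative
spec:
  template:
    spec:
      restartPolicy: Never    # applies to the regular containers
      initContainers:
      - name: proxy           # built-in sidecar: runs for the Pod's lifetime
        image: alpine:3
        restartPolicy: Always
        command: ["sh", "-c", "sleep infinity"]
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]
```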

View File

@ -81,4 +81,4 @@ Consistent with the CNCF Charter, any substantive changes to this Code of Conduct

## Acknowledgements

This Code of Conduct is adapted from the Contributor Covenant ([http://contributor-covenant.org](http://contributor-covenant.org/)). Version 2.0 is available at [http://contributor-covenant.org/version/2/0/code_of_conduct/](http://contributor-covenant.org/version/2/0/code_of_conduct/)
This Code of Conduct is adapted from the Contributor Covenant ([https://contributor-covenant.org](https://contributor-covenant.org/)). Version 2.0 is available at [https://contributor-covenant.org/version/2/0/code_of_conduct/](https://contributor-covenant.org/version/2/0/code_of_conduct/)

View File

@ -164,6 +164,8 @@ You can also reach the maintainers of this project using:

- [Get an invite to this Slack](https://slack.k8s.io/)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)

You can contact the maintainers of the Portuguese localization in the Slack channel [#kubernetes-docs-pt](https://kubernetes.slack.com/messages/kubernetes-docs-pt).

## Contributing to the documentation

You can click the **Fork** button in the upper-right corner of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork and, when you are ready to send those changes to us, go to your fork and create a new **pull request** to let us know about it.

@ -189,20 +191,6 @@ If you need help at any point while contributing, the [Ambassadors for

| -------------------------- | -------------------------- | -------------------------- |
| Sreeram Venkitesh | @sreeram.venkitesh | @sreeram-venkitesh |

## Translations of `README.md`

| Language | Language |
| ------------------------- | -------------------------- |
| [German](README-de.md) | [Italian](README-it.md) |
| [Chinese](README-zh.md) | [Japanese](README-ja.md) |
| [Korean](README-ko.md) | [Polish](README-pl.md) |
| [Spanish](README-es.md) | [Portuguese](README-pt.md) |
| [French](README-fr.md) | [Russian](README-ru.md) |
| [Hindi](README-hi.md) | [Ukrainian](README-uk.md) |
| [Indonesian](README-id.md) | [Vietnamese](README-vi.md) |

You can contact the maintainers of the Portuguese localization in the Slack channel [#kubernetes-docs-pt](https://kubernetes.slack.com/messages/kubernetes-docs-pt).

# Code of conduct

Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).

View File

@ -0,0 +1,324 @@
---
layout: blog
title: 'Kubernetes 1.31: Prevent PersistentVolume Leaks When Deleting out of Order'
date: 2024-08-16
slug: kubernetes-1-31-prevent-persistentvolume-leaks-when-deleting-out-of-order
author: >
Deepak Kinni (Broadcom)
translator: >
[Michael Yao](https://github.com/windsonsea) (DaoCloud)
---
[PersistentVolumes](/docs/concepts/storage/persistent-volumes/) (or PVs for short) are
associated with a [Reclaim Policy](/docs/concepts/storage/persistent-volumes/#reclaim-policy).
The reclaim policy is used to determine the actions that need to be taken by the storage
backend on deletion of the PVC bound to a PV.
When the reclaim policy is `Delete`, the expectation is that the storage backend
releases the storage resource allocated for the PV. In essence, the reclaim
policy needs to be honored on PV deletion.

With the recent Kubernetes v1.31 release, a beta feature lets you configure your
cluster to behave that way and honor the configured reclaim policy.
## How did reclaim work in previous Kubernetes releases?

[PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#Introduction) (or PVC for short) is
a user's request for storage. A PV and PVC are considered [Bound](/docs/concepts/storage/persistent-volumes/#Binding)
if a newly created PV or a matching PV is found. The PVs themselves are
backed by volumes allocated by the storage backend.
Normally, if the volume is to be deleted, then the expectation is to delete the
PVC for a bound PV-PVC pair. However, there are no restrictions on deleting a PV
before deleting a PVC.

First, I'll demonstrate the behavior for clusters running an older version of Kubernetes.

#### Retrieve a PVC that is bound to a PV

Retrieve an existing PVC `example-vanilla-block-pvc`:
```shell
kubectl get pvc example-vanilla-block-pvc
```
The following output shows the PVC and its bound PV; the PV is shown under the `VOLUME` column:
```
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
example-vanilla-block-pvc Bound pvc-6791fdd4-5fad-438e-a7fb-16410363e3da 5Gi RWO example-vanilla-block-sc 19s
```
#### Delete PV

When I try to delete a bound PV, the kubectl session blocks and the `kubectl`
tool does not return control to the shell; for example:
```shell
kubectl delete pv pvc-6791fdd4-5fad-438e-a7fb-16410363e3da
```
```
persistentvolume "pvc-6791fdd4-5fad-438e-a7fb-16410363e3da" deleted
^C
```
#### Retrieving the PV
```shell
kubectl get pv pvc-6791fdd4-5fad-438e-a7fb-16410363e3da
```
It can be observed that the PV is in a `Terminating` state:
```
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-6791fdd4-5fad-438e-a7fb-16410363e3da 5Gi RWO Delete Terminating default/example-vanilla-block-pvc example-vanilla-block-sc 2m23s
```
#### Delete PVC
```shell
kubectl delete pvc example-vanilla-block-pvc
```
The following output is seen if the PVC gets successfully deleted:
```
persistentvolumeclaim "example-vanilla-block-pvc" deleted
```
The PV object from the cluster also gets deleted. When attempting to retrieve the PV,
it will be observed that the PV is no longer found:
```shell
kubectl get pv pvc-6791fdd4-5fad-438e-a7fb-16410363e3da
```
```
Error from server (NotFound): persistentvolumes "pvc-6791fdd4-5fad-438e-a7fb-16410363e3da" not found
```
Although the PV is deleted, the underlying storage resource is not deleted and
needs to be removed manually.

To sum up, the reclaim policy associated with the PersistentVolume is currently
ignored under certain circumstances. For a `Bound` PV-PVC pair, the ordering of PV-PVC
deletion determines whether the PV reclaim policy is honored. The reclaim policy
is honored if the PVC is deleted first; however, if the PV is deleted prior to
deleting the PVC, then the reclaim policy is not exercised. As a result of this behavior,
the associated storage asset in the external infrastructure is not removed.
## PV reclaim policy with Kubernetes v1.31

The new behavior ensures that the underlying storage object is deleted from the backend when users attempt to delete a PV manually.

#### How to enable the new behavior?

To take advantage of the new behavior, you must have upgraded your cluster to the v1.31 release of Kubernetes
and run the CSI [`external-provisioner`](https://github.com/kubernetes-csi/external-provisioner) version `5.0.1` or later.
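As a quick sketch of checking that prerequisite (the deployment name, namespace, and
container name vary by CSI driver and are illustrative here):

```shell
kubectl -n kube-system get deployment csi-driver-controller \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="csi-provisioner")].image}'
```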
#### How does it work?

For CSI volumes, the new behavior is achieved by adding a [finalizer](/docs/concepts/overview/working-with-objects/finalizers/) `external-provisioner.volume.kubernetes.io/finalizer`
on new and existing PVs. The finalizer is only removed after the storage from the backend is deleted.

Here is an example of a PV with the finalizer; notice the new finalizer in the finalizers list:
```shell
kubectl get pv pvc-a7b7e3ba-f837-45ba-b243-dec7d8aaed53 -o yaml
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
creationTimestamp: "2021-11-17T19:28:56Z"
finalizers:
- kubernetes.io/pv-protection
- external-provisioner.volume.kubernetes.io/finalizer
name: pvc-a7b7e3ba-f837-45ba-b243-dec7d8aaed53
resourceVersion: "194711"
uid: 087f14f2-4157-4e95-8a70-8294b039d30e
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: example-vanilla-block-pvc
namespace: default
resourceVersion: "194677"
uid: a7b7e3ba-f837-45ba-b243-dec7d8aaed53
csi:
driver: csi.vsphere.vmware.com
fsType: ext4
volumeAttributes:
storage.kubernetes.io/csiProvisionerIdentity: 1637110610497-8081-csi.vsphere.vmware.com
type: vSphere CNS Block Volume
volumeHandle: 2dacf297-803f-4ccc-afc7-3d3c3f02051e
persistentVolumeReclaimPolicy: Delete
storageClassName: example-vanilla-block-sc
volumeMode: Filesystem
status:
phase: Bound
```
The [finalizer](/docs/concepts/overview/working-with-objects/finalizers/) prevents this
PersistentVolume from being removed from the
cluster. As stated previously, the finalizer is only removed from the PV object
after it is successfully deleted from the storage backend. To learn more about
finalizers, please refer to [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/).

Similarly, the finalizer `kubernetes.io/pv-controller` is added to dynamically provisioned in-tree plugin volumes.
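A one-liner sketch for inspecting those finalizers on a PV (name as in the example above):

```shell
kubectl get pv pvc-a7b7e3ba-f837-45ba-b243-dec7d8aaed53 -o jsonpath='{.metadata.finalizers}'
```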
#### What about CSI migrated volumes?

The fix applies to CSI migrated volumes as well.

### Some caveats

The fix does not apply to statically provisioned in-tree plugin volumes.
### References
* [KEP-2644](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2644-honor-pv-reclaim-policy)
* [Volume leak issue](https://github.com/kubernetes-csi/external-provisioner/issues/546)
### How do I get involved?

The [SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact) on the Kubernetes Slack are a great way to reach out to the SIG Storage and migration working group teams.

Special thanks to the following people for the insightful reviews, thorough consideration, and valuable contributions:
* Fan Baofa (carlory)
* Jan Šafránek (jsafrane)
* Xing Yang (xing-yang)
* Matthew Wong (wongma7)
Join the [Kubernetes Storage Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-storage) if you're interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system. We're rapidly growing and always welcome new contributors.

View File

@ -0,0 +1,310 @@
---
layout: blog
title: "Kubernetes 1.31: Read Only Volumes Based On OCI Artifacts (alpha)"
date: 2024-08-16
slug: kubernetes-1-31-image-volume-source
author: Sascha Grunert
translator: >
[Michael Yao](https://github.com/windsonsea) (DaoCloud)
---
The Kubernetes community is moving towards fulfilling more Artificial
Intelligence (AI) and Machine Learning (ML) use cases in the future. While the
project has been designed to fulfill microservice architectures in the past,
it's now time to listen to the end users and introduce features which have a
stronger focus on AI/ML.
One of these requirements is to support [Open Container Initiative (OCI)](https://opencontainers.org)
compatible images and artifacts (referred to as OCI objects) directly as a native
volume source. This allows users to focus on OCI standards as well as enabling
them to store and distribute any content using OCI registries. A feature like
this gives the Kubernetes project a chance to grow into use cases which go
beyond running particular images.
Given that, the Kubernetes community is proud to present a new alpha feature
introduced in v1.31: the Image Volume Source
([KEP-4639](https://kep.k8s.io/4639)). This feature allows users to specify an
image reference as a volume in a pod while reusing it as a volume mount within
containers:
```yaml
kind: Pod
spec:
containers:
- …
volumeMounts:
- name: my-volume
mountPath: /path/to/directory
volumes:
- name: my-volume
image:
reference: my-image:tag
```
The above example would result in mounting `my-image:tag` to
`/path/to/directory` in the pod's container.
## Use cases

The goal of this enhancement is to stick as close as possible to the existing
[container image](/docs/concepts/containers/images/) implementation within the
kubelet, while introducing a new API surface to allow more extended use cases.
For example, users could share a configuration file among multiple containers in
a pod without including the file in the main image, so that they can minimize
security risks and the overall image size. They can also package and distribute
binary artifacts using OCI images and mount them directly into Kubernetes pods,
so that they can streamline their CI/CD pipeline, as an example.
Data scientists, MLOps engineers, or AI developers can mount large language
model weights or machine learning model weights in a pod alongside a
model-server, so that they can efficiently serve them without including them in
the model-server container image. They can package these in an OCI object to
take advantage of OCI distribution and ensure efficient model deployment. This
allows them to separate the model specifications/content from the executables
that process them.
Another use case is that security engineers can use a public image for a malware
scanner and mount in a volume of private (commercial) malware signatures, so
that they can load those signatures without baking their own combined image
(which might not be allowed by the copyright on the public image). Those files
work regardless of the OS or version of the scanner software.
But in the long term it will be up to **you** as an end user of this project to
outline further important use cases for the new feature.
[SIG Node](https://github.com/kubernetes/community/blob/54a67f5/sig-node/README.md)
is happy to receive any feedback or suggestions for further enhancements to
allow more advanced usage scenarios. Feel free to provide feedback by either
using the [Kubernetes Slack (#sig-node)](https://kubernetes.slack.com/messages/sig-node)
channel or the [SIG Node mailing list](https://groups.google.com/g/kubernetes-sig-node).
## Detailed example {#example}

The Kubernetes alpha feature gate [`ImageVolume`](/docs/reference/command-line-tools-reference/feature-gates)
needs to be enabled on the [API Server](/docs/reference/command-line-tools-reference/kube-apiserver)
as well as the [kubelet](/docs/reference/command-line-tools-reference/kubelet)
to make it functional.
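As a sketch (for manually managed components; how you set flags depends on your deployment
method), enabling the gate might look like:

```shell
kube-apiserver --feature-gates=ImageVolume=true ...
kubelet --feature-gates=ImageVolume=true ...
```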
If that's the case and the [container runtime](/docs/setup/production-environment/container-runtimes)
has support for the feature (like CRI-O ≥ v1.31), then an example `pod.yaml`
like this can be created:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pod
spec:
containers:
- name: test
image: registry.k8s.io/e2e-test-images/echoserver:2.3
volumeMounts:
- name: volume
mountPath: /volume
volumes:
- name: volume
image:
reference: quay.io/crio/artifact:v1
pullPolicy: IfNotPresent
```
The pod declares a new volume using the `image.reference` of
`quay.io/crio/artifact:v1`, which refers to an OCI object containing two files.
The `pullPolicy` behaves in the same way as for container images and allows the
following values:
- `Always`: the kubelet always attempts to pull the reference and the container
creation will fail if the pull fails.
- `Never`: the kubelet never pulls the reference and only uses a local image or
artifact. The container creation will fail if the reference isn't present.
- `IfNotPresent`: the kubelet pulls if the reference isn't already present on
disk. The container creation will fail if the reference isn't present and the
pull fails.
The `volumeMounts` field indicates that the container with the name `test`
should mount the volume under the path `/volume`.

If you now create the pod:
```shell
kubectl apply -f pod.yaml
```
And exec into it:
```shell
kubectl exec -it pod -- sh
```
Then you're able to investigate what has been mounted:
```console
/ # ls /volume
dir file
/ # cat /volume/file
2
/ # ls /volume/dir
file
/ # cat /volume/dir/file
1
```
**You managed to consume an OCI artifact using Kubernetes!**

The container runtime pulls the image (or artifact), mounts it to the
container and makes it finally available for direct usage. There are a bunch of
details in the implementation, which closely align with the existing image pull
behavior of the kubelet. For example:
- If a `:latest` tag as `reference` is provided, then the `pullPolicy` will
default to `Always`, while in any other case it will default to `IfNotPresent`
if unset.
- The volume gets re-resolved if the pod gets deleted and recreated, which means
that new remote content will become available on pod recreation. A failure to
resolve or pull the image during pod startup will block containers from
starting and may add significant latency. Failures will be retried using
normal volume backoff and will be reported on the pod reason and message.
- Pull secrets will be assembled in the same way as for the container image by
looking up node credentials, service account image pull secrets, and pod spec
image pull secrets.
- The OCI object gets mounted in a single directory by merging the manifest
layers in the same way as for container images.
- The volume is mounted as read-only (`ro`) and non-executable files
(`noexec`).
- Sub-path mounts for containers are not supported
(`spec.containers[*].volumeMounts.subpath`).
- The field `spec.securityContext.fsGroupChangePolicy` has no effect on this
volume type.
- The feature will also work with the [`AlwaysPullImages` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)
if enabled.
Thank you for reading through to the end of this blog post! SIG Node is proud and
happy to deliver this feature as part of Kubernetes v1.31.

As the writer of this blog post, I would like to emphasize my special thanks to
**all** involved individuals out there! You all rock, let's keep on hacking!
## Further reading

- [Use an Image Volume With a Pod](/docs/tasks/configure-pod-container/image-volumes)
- [`image` volume overview](/docs/concepts/storage/volumes/#image)

View File

@ -167,7 +167,7 @@ cloud providers:
The [descheduler](https://github.com/kubernetes-sigs/descheduler) can help you
consolidate Pods onto a smaller number of nodes, to help with automatic scale down
when the cluster has space capacity.
when the cluster has spare capacity.
-->
## Related components {#related-components}

View File

@ -904,6 +904,8 @@ Here are some examples of device plugin implementations:
Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices
* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for
hardware-assisted virtualization
* The [NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin), NVIDIA's
official device plugin to expose NVIDIA GPUs and monitor GPU health
* The [NVIDIA GPU device plugin for Container-Optimized OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
* The [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin)
* The [SocketCAN device plugin](https://github.com/collabora/k8s-socketcan)
@ -919,6 +921,7 @@ Here are some examples of device plugin implementations:
* [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) support
Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices
* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for hardware-assisted virtualization
* The [NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin), NVIDIA's official device plugin to expose NVIDIA GPUs and monitor GPU health
* The [NVIDIA GPU device plugin for Container-Optimized OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
* The [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin)
* The [SocketCAN device plugin](https://github.com/collabora/k8s-socketcan)

View File

@ -97,9 +97,9 @@ spec:
type: ClusterIP
```
<!--
but as it was explained before, the IP address 10.96.0.10 has not been reserved; if other Services are created
before or in parallel with dynamic allocation, there is a chance they can allocate this IP, hence,
you will not be able to create the DNS Service because it will fail with a conflict error.
But, as it was explained before, the IP address 10.96.0.10 has not been reserved.
If other Services are created before or in parallel with dynamic allocation, there is a chance they can allocate this IP.
Hence, you will not be able to create the DNS Service because it will fail with a conflict error.
-->
But, as explained before, the IP address 10.96.0.10 has not been reserved. If other Services are created
before or in parallel with dynamic allocation, there is a chance they can allocate this IP; hence, you will not be able to create the DNS Service because it will fail with a conflict error.

View File

@ -131,9 +131,9 @@ Figure 1. Review process steps.
2. Filter the pending PRs with the following labels (in combination):

- `cncf-cla: yes` (recommended): PRs raised by contributors who have not signed the CLA must not be merged.
See [Signing the CLA](/zh-cn/docs/contribute/new-content/#sign-the-cla) for more information.
- `language/en` (recommended): look only at PRs for the English language.
- `size/<size>`: filter PRs of a particular size.
If you are new, start with smaller PRs.
@ -239,7 +239,7 @@ When reviewing, use the following as a starting point.
- When a PR updates an existing page, focus your review on the parts of the page being updated.
Review the changed content for technical and editorial correctness.
If you find errors on the page that are not directly related to what the PR author set out to fix,
treat them as a separate issue (check first whether an existing issue already covers them).
- Watch out for PRs that **move** content. If an author renames a page or merges two pages,
we (Kubernetes SIG Docs) usually avoid asking that author to fix every grammar or spelling error
that might be spotted in the moved content.
- Are there overly complicated or obscure words that could be replaced with simpler ones?
- Are there words, terms, or phrases that could be replaced with non-discriminatory expressions?
@ -291,6 +291,99 @@ When reviewing, use the following as a starting point.
- Do the changes show up correctly in the Netlify preview?
Pay particular attention to lists, code blocks, tables, notes, and images.
### Blog

- Early feedback on blog posts is welcome via a Google Doc or HackMD. Please request input early from the [#sig-docs-blog Slack channel](https://kubernetes.slack.com/archives/CJDHVD54J).
- Before reviewing blog PRs, be familiar with [Submitting blog posts and case studies](/zh-cn/docs/contribute/new-content/blogs-case-studies/).
- We are willing to mirror any blog article that was published to https://kubernetes.dev/blog/ (the contributor blog) provided that:
- the mirrored article has the same publication date as the original (it should have the same publication time too, but you can also set a time stamp up to 12 hours later for special cases)
- for PRs that arrive after the original article was merged to https://kubernetes.dev/, there haven't been
(and won't be) any articles published to the main blog between the time that the original and the
mirrored article publish.
This is because we don't want to add articles to people's feeds, such as RSS, except at the very end of their feed.
- the original article doesn't contravene any strongly recommended review guidelines or community norms.
- You should set the canonical URL for the mirrored article, to the URL of the original article
(you can use a preview to predict the URL and fill this in ahead of actual publication). Use the `canonicalUrl`
field in [front matter](https://gohugo.io/content-management/front-matter/) for this.
- Consider the target audience and whether the blog post is appropriate for kubernetes.io.
For example, if the target audience is Kubernetes contributors only, then kubernetes.dev
may be more appropriate;
or if the blog post is on general platform engineering, then it may be more suitable on another site.
This consideration applies to mirrored articles too; although we are willing to consider all valid
contributor articles for mirroring if a PR is opened, we don't mirror all of them.
- We only mark blog articles as maintained (`evergreen: true` in front matter) if the Kubernetes project
is happy to commit to maintaining them indefinitely. Some blog articles absolutely merit this, and we
always mark our release announcements evergreen. Check with other contributors if you are not sure
how to review on this point.
- The [content guide](/zh-cn/docs/contribute/style/content-guide/) applies unconditionally to blog articles
and the PRs that add them. Bear in mind that some restrictions in the guide state that they are only relevant to documentation; those restrictions don't apply to blog articles.
- The [style guide](/zh-cn/docs/contribute/style/style-guide/) largely also applies to blog PRs, but we make some exceptions.
- it is OK to use “we” in a blog article that has multiple authors, or where the article introduction clearly indicates that the author is writing on behalf of a specific group.
- we avoid using Kubernetes shortcodes for callouts (such as `{{</* caution */>}}`)
- statements about the future are OK, albeit we use them with care in official announcements on
behalf of Kubernetes
- code samples don't need to use the `{{</* code_sample */>}}` shortcode, and often it is better if they do not
- we are OK to have authors write an article in their own writing style, so long as most readers
would follow the point being made
- The [diagram guide](/zh-cn/docs/contribute/style/diagram-guide/) is aimed at Kubernetes documentation,
not blog articles. It is still good to align with it but:
- we prefer SVG over raster diagram formats, and also over Mermaid (you can still capture the Mermaid source in a comment)
- there is no need to caption diagrams as Figure 1, Figure 2 etc
### Other

@ -318,8 +411,7 @@ This lets the author know that this part of your feedback is non-critical.

As a reviewer, if you find small, unimportant problems in a PR, such as typos or incorrect whitespace,
you can prefix your comments with `nit:`. This lets the author know that this part of your feedback is non-critical.

@ -329,4 +421,3 @@ if you can, these are a good source.

If you are considering a pull request for approval and all the remaining feedback is
marked as a nit, you can merge the PR anyway. In that case, it's often useful to open
an issue about the remaining nits. Consider whether you're able to meet the requirements
for marking that new issue as a [Good First Issue](https://www.kubernetes.dev/docs/guide/help-wanted/#good-first-issue); if you can, these are a good source.

View File

@ -4,7 +4,8 @@ title: Pod Disruption Budget
full-link: /zh-cn/docs/concepts/workloads/pods/disruptions/
date: 2019-02-12
short_description: >
A Pod Disruption Budget is an object that ensures that, during voluntary disruptions, the number of {{< glossary_tooltip text="Pods" term_id="pod" >}} of a replicated application does not fall below a certain number.
A Pod Disruption Budget is an object that ensures that, during voluntary disruptions,
the number of Pods of a replicated application does not fall below a certain number.
aka:
- PDB
@ -22,7 +23,7 @@ title: Pod Disruption Budget
full-link: /docs/concepts/workloads/pods/disruptions/
date: 2019-02-12
short_description: >
An object that limits the number of {{< glossary_tooltip text="Pods" term_id="pod" >}} of a replicated application, that are down simultaneously from voluntary disruptions.
An object that limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions.
aka:
- PDB
@ -37,13 +38,14 @@ tags:
<!--
A [Pod Disruption Budget](/docs/concepts/workloads/pods/disruptions/) allows an
application owner to create an object for a replicated application, that ensures
a certain number or percentage of Pods with an assigned label will not be voluntarily
evicted at any point in time.
a certain number or percentage of {{< glossary_tooltip text="Pods" term_id="pod" >}}
with an assigned label will not be voluntarily evicted at any point in time.
Involuntary disruptions cannot be prevented by PDBs; however they
do count against the budget.
-->
[Pod 干扰预算Pod Disruption BudgetPDB](/zh-cn/docs/concepts/workloads/pods/disruptions/)
使应用所有者能够为多实例应用创建一个对象,来确保一定数量的具有指定标签的 Pod 在任何时候都不会被主动驱逐。
使应用所有者能够为多实例应用创建一个对象,来确保一定数量的具有指定标签的
{{< glossary_tooltip text="Pod" term_id="pod" >}} 在任何时候都不会被主动驱逐。
<!--more-->
PDB 无法防止非主动的中断但是会计入预算budget

View File

@ -31,7 +31,7 @@ To run an nginx Deployment and expose the Deployment, see [kubectl create deploy
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
@ -79,7 +79,7 @@ deployment.apps/nginx-app env updated
<!--
`kubectl` commands print the type and name of the resource created or mutated, which can then be used in subsequent commands. You can expose a new Service after a Deployment is created.
-->
`kubectl` 命令打印创建或突变资源的类型和名称,然后可以在后续命令中使用。部署后,你可以公开新服务
`kubectl` 命令会打印所创建或变更的资源的类型和名称,这些信息可在后续命令中使用。创建 Deployment 之后,你可以公开新的 Service。
{{< /note >}}
<!--
@ -89,7 +89,7 @@ kubectl expose deployment nginx-app --port=80 --name=nginx-http
```
-->
```shell
# 通过服务公开端口
# 通过 Service 公开端口
kubectl expose deployment nginx-app --port=80 --name=nginx-http
```
```
@ -101,14 +101,14 @@ By using kubectl, you can create a [Deployment](/docs/concepts/workloads/control
-->
在 kubectl 命令中,我们创建了一个 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)
这将保证有 N 个运行 nginx 的 PodN 代表 spec 中声明的副本数,默认为 1
我们还创建了一个 [service](/zh-cn/docs/concepts/services-networking/service/),其选择算符与容器标签匹配。
查看[使用服务访问集群中的应用程序](/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster)获取更多信息。
我们还创建了一个 [Service](/zh-cn/docs/concepts/services-networking/service/),其选择算符与容器标签匹配。
查看[使用 Service 访问集群中的应用程序](/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster)获取更多信息。
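To verify that the Service's selector matched the Deployment's Pods, you can inspect the Service and its endpoints (a sketch reusing the `nginx-http` Service created above):

```shell
# Check the Service and the endpoints its selector picked up
kubectl get service nginx-http
kubectl get endpoints nginx-http
```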
<!--
By default images run in the background, similar to `docker run -d ...`. To run things in the foreground, use [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) to create pod:
-->
默认情况下镜像会在后台运行,与 `docker run -d ...` 类似,如果你想在前台运行,
使用 [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) 在前台运行 Pod:
使用 [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) 在前台运行 Pod
```shell
kubectl run [-i] [--tty] --attach <name> --image=<image>
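# For example, a one-off interactive Pod (the name and image are illustrative):
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh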
@ -139,7 +139,7 @@ To list what is currently running, see [kubectl get](/docs/reference/generated/k
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker ps -a
@ -175,7 +175,7 @@ To attach a process that is already running in a container, see [kubectl attach]
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker ps
@ -190,7 +190,10 @@ docker attach 55c103fa1296
...
```
<!--
kubectl:
-->
使用 kubectl 命令:
```shell
kubectl get pods
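# Then attach to a running container in the Pod
# (the Pod name shown is illustrative):
kubectl attach -it nginx-app-5jyvm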
@ -220,7 +223,7 @@ To execute a command in a container, see [kubectl exec](/docs/reference/generate
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker ps
@ -264,14 +267,17 @@ To use interactive commands.
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker exec -ti 55c103fa1296 /bin/sh
# exit
```
<!--
kubectl:
-->
使用 kubectl 命令:
```shell
kubectl exec -ti nginx-app-5jyvm -- /bin/sh
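# exit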
@ -293,7 +299,7 @@ To follow stdout/stderr of a process that is running, see [kubectl logs](/docs/r
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker logs -f a9e
@ -320,7 +326,7 @@ kubectl logs -f nginx-app-zibvs
There is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
-->
现在是时候提一下 Pod 和容器之间的细微差别了;默认情况下如果 Pod 中的进程退出 Pod 也不会终止,
相反它将会重启该进程。这类似于 docker run 时的 `--restart=always` 选项,这是主要差别。
相反它将会重启该进程。这类似于 `docker run` 时的 `--restart=always` 选项,这是主要差别。
在 Docker 中,进程的每个调用的输出都是被连接起来的,但是对于 Kubernetes每个调用都是分开的。
要查看以前在 Kubernetes 中执行的输出,请执行以下操作:
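A sketch of the corresponding command, reusing the Pod name from the logs example above:

```shell
kubectl logs --previous nginx-app-zibvs
```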
@ -350,7 +356,7 @@ To stop and delete a running process, see [kubectl delete](/docs/reference/gener
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker ps
@ -427,12 +433,12 @@ There is no direct analog of `docker login` in kubectl. If you are interested in
<!--
To get the version of client and server, see [kubectl version](/docs/reference/generated/kubectl/kubectl-commands/#version).
-->
如何查看客户端和服务端的版本?查看 [kubectl version](/docs/reference/generated/kubectl/kubectl-commands/#version)。
如何查看客户端和服务端的版本?查看 [kubectl version](/zh-cn/docs/reference/generated/kubectl/kubectl-commands/#version)。
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker version
@ -473,7 +479,7 @@ To get miscellaneous information about the environment and configuration, see [k
<!--
docker:
-->
使用 Docker 命令:
使用 docker 命令:
```shell
docker info
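# The closest kubectl equivalent reports cluster, rather than daemon, details:
kubectl cluster-info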

View File

@ -47,17 +47,17 @@ Kubernetes 需要 PKI 才能执行以下操作:
for each kubelet (every {{< glossary_tooltip text="node" term_id="node" >}} runs a kubelet)
* Optional server certificate for the [front-proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
-->
### 服务器证书
### 服务器证书 {#server-certificates}
* API 服务器端点的证书
* etcd 服务器的服务器证书
* 每个 kubelet 的服务器证书(每个 {{< glossary_tooltip text="节点" term_id="node" >}}运行一个 kubelet
* 每个 kubelet 的服务器证书(每个{{< glossary_tooltip text="节点" term_id="node" >}}运行一个 kubelet
* 可选的[前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/)的服务器证书
<!--
### Client certificates
-->
### 客户端证书
### 客户端证书 {#client-certificates}
<!--
* Client certificates for each kubelet, used to authenticate to the API server as a client of
@ -75,33 +75,31 @@ Kubernetes 需要 PKI 才能执行以下操作:
* 调度程序与 API 服务器进行安全通信的客户端证书
* 客户端证书(每个节点一个),用于 kube-proxy 向 API 服务器进行身份验证
* 集群管理员向 API 服务器进行身份验证的可选客户端证书
* [前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/)的客户端及服务端证书
* [前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/)的可选客户端证书
<!--
### Kubelet's server and client certificates
To establish a secure connection and authenticate itself to the kubelet, the API Server
requires a client certificate and key pair.
To establish a secure connection and authenticate itself to the kubelet, the API Server
requires a client certificate and key pair.
-->
### kubelet 的服务器和客户端证书
### kubelet 的服务器和客户端证书 {#kubelets-server-and-client-certificates}
为了建立安全连接并向 kubelet 进行身份验证API 服务器需要客户端证书和密钥对。
<!--
In this scenario, there are two approaches for certificate usage:
using shared certificates or separate certificates;
In this scenario, there are two approaches for certificate usage:
* Shared Certificates: The kube-apiserver can utilize the same certificate and key pair it uses to authenticate its clients.
This means that the existing certificates, such as `apiserver.crt` and `apiserver.key`,
can be used for communicating with the kubelet servers.
* Shared Certificates: The kube-apiserver can utilize the same certificate and key pair it uses
to authenticate its clients. This means that the existing certificates, such as `apiserver.crt`
and `apiserver.key`, can be used for communicating with the kubelet servers.
* Separate Certificates: Alternatively, the kube-apiserver can generate a new client certificate
and key pair to authenticate its communication with the kubelet servers.
In this case, a distinct certificate named `kubelet-client.crt` and its corresponding private key,
and key pair to authenticate its communication with the kubelet servers. In this case,
a distinct certificate named `kubelet-client.crt` and its corresponding private key,
`kubelet-client.key` are created.
-->
在此场景中,证书的使用有两种方法:
使用共享证书或单独证书;
* 共享证书kube-apiserver 可以使用与验证其客户端相同的证书和密钥对。
这意味着现有证书(例如 `apiserver.crt``apiserver.key`)可用于与 kubelet 服务器进行通信。
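A minimal sketch of the separate-certificates approach using openssl, assuming the `kubernetes-ca` files (`ca.crt`, `ca.key`) described below already exist; the CN and O values follow the `kube-apiserver-kubelet-client` row of the certificates table:

```shell
# Create a dedicated client key and CSR for API server -> kubelet traffic
openssl genrsa -out kubelet-client.key 2048
openssl req -new -key kubelet-client.key \
  -subj "/CN=kube-apiserver-kubelet-client/O=system:masters" \
  -out kubelet-client.csr

# Sign the CSR with the Kubernetes CA
openssl x509 -req -in kubelet-client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out kubelet-client.crt -days 365
```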
@ -165,7 +163,7 @@ multiple intermediate CAs, and delegate all further creation to Kubernetes itsel
<!--
Required CAs:
| path | Default CN | description |
| Path | Default CN | Description |
|------------------------|---------------------------|----------------------------------|
| ca.crt,key | kubernetes-ca | Kubernetes general CA |
| etcd/ca.crt,key | etcd-ca | For all etcd-related functions |
@ -173,6 +171,7 @@ Required CAs:
On top of the above CAs, it is also necessary to get a public/private key pair for service account
management, `sa.key` and `sa.pub`.
The following example illustrates the CA key and certificate files shown in the previous table:
-->
需要这些 CA
@ -183,10 +182,6 @@ management, `sa.key` and `sa.pub`.
| front-proxy-ca.crt、key | kubernetes-front-proxy-ca | 用于[前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/) |
上面的 CA 之外,还需要获取用于服务账号管理的密钥对,也就是 `sa.key``sa.pub`
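Such a key pair can be generated with openssl, for example (a sketch; any tool that produces an RSA key pair works):

```shell
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
```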
<!--
The following example illustrates the CA key and certificate files shown in the previous table:
-->
下面的例子说明了上表中所示的 CA 密钥和证书文件。
```console
@ -218,7 +213,7 @@ Required certificates:
| kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
| kube-apiserver-etcd-client | etcd-ca | | client | |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`, `[1]` |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`[^1] |
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
-->
@ -228,7 +223,7 @@ Required certificates:
| kube-etcd-peer | etcd-ca | | server、client | `<hostname>`、`<Host_IP>`、`localhost`、`127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
| kube-apiserver-etcd-client | etcd-ca | | client | |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`、`<Host_IP>`、`<advertise_IP>`、`[1]` |
| kube-apiserver | kubernetes-ca | | server | `<hostname>`、`<Host_IP>`、`<advertise_IP>`[^1] |
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
@ -243,7 +238,7 @@ that purpose.
{{< /note >}}
<!--
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)
[^1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/)
the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)
@ -251,7 +246,7 @@ where `kind` maps to one or more of the x509 key usage, which is also documented
`.spec.usages` of a [CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1#CertificateSigningRequest)
type:
-->
[1]: 用来连接到集群的不同 IP 或 DNS 名称
[^1]: 用来连接到集群的不同 IP 或 DNS 名称
(就像 [kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/) 为负载均衡所使用的固定
IP 或 DNS 名称:`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、
`kubernetes.default.svc.cluster`、`kubernetes.default.svc.cluster.local`)。
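When in doubt, you can list the names a serving certificate actually carries (a sketch using openssl against the kube-apiserver certificate):

```shell
openssl x509 -in apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
```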
@ -310,39 +305,39 @@ Paths should be specified using the given argument regardless of location.
使用)。无论使用什么位置,都应使用给定的参数指定路径。
<!--
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
|------------------------------|------------------------------|-----------------------------|-------------------------|------------------------------|-------------------------------------------|
| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
| kube-apiserver-kubelet-client| apiserver-kubelet-client.key | apiserver-kubelet-client.crt| kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
| --------- | ------------------ | ------------------- | ------- | ----------- | ------------ |
| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
| kube-apiserver-kubelet-client | apiserver-kubelet-client.key | apiserver-kubelet-client.crt | kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
-->
| 默认 CN | 建议的密钥路径 | 建议的证书路径 | 命令 | 密钥参数 | 证书参数 |
|------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------|
| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
| kube-apiserver-kubelet-client| apiserver-kubelet-client.key | apiserver-kubelet-client.crt| kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
| 默认 CN | 建议的密钥路径 | 建议的证书路径 | 命令 | 密钥参数 | 证书参数 |
|---------|-------------|--------------|-----|--------|---------|
| etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
| kube-apiserver-kubelet-client | apiserver-kubelet-client.key | apiserver-kubelet-client.crt | kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
| etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
| etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |
<!--
Same considerations apply for the service account key pair:
@ -402,7 +397,7 @@ You must manually configure these administrator account and service accounts:
你必须手动配置以下管理员账号和服务账号:
<!--
| filename | credential name | Default CN | O (in Subject) |
| Filename | Credential name | Default CN | O (in Subject) |
|-------------------------|----------------------------|-------------------------------------|------------------------|
| admin.conf | default-admin | kubernetes-admin | `<admin-group>` |
| super-admin.conf | default-super-admin | kubernetes-super-admin | system:masters |
@ -461,25 +456,34 @@ This file is generated only on the node where `kubeadm init` was called.
{{< /note >}}
<!--
1. For each config, generate an x509 cert/key pair with the given CN and O.
1. For each configuration, generate an x509 certificate/key pair with the
given Common Name (CN) and Organization (O).
1. Run `kubectl` as follows for each config:
1. Run `kubectl` as follows for each configuration:
-->
1. 对于每个配置,请都使用给定的 CN 和 O 生成 x509 证书/密钥偶对。
1. 对于每个配置,请都使用给定的通用名称CN和组织O生成 x509 证书/密钥对。
1. 为每个配置运行下面的 `kubectl` 命令:
```bash
KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs
KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>
KUBECONFIG=<filename> kubectl config use-context default-system
```
<!--
```
KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs
KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>
KUBECONFIG=<filename> kubectl config use-context default-system
```
-->
```bash
KUBECONFIG=<文件名> kubectl config set-cluster default-cluster --server=https://<主机ip>:6443 --certificate-authority <kubernetes-ca> --embed-certs
KUBECONFIG=<文件名> kubectl config set-credentials <凭据名称> --client-key <密钥路径>.pem --client-certificate <证书路径>.pem --embed-certs
KUBECONFIG=<文件名> kubectl config set-context default-system --cluster default-cluster --user <凭据名称>
KUBECONFIG=<文件名> kubectl config use-context default-system
```
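To confirm that a generated kubeconfig file works, point kubectl at it (a sketch, using `admin.conf` from the table below):

```shell
KUBECONFIG=admin.conf kubectl get nodes
```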
<!--
These files are used as follows:
| filename | command | comment |
| Filename | Command | Comment |
|-------------------------|-------------------------|-----------------------------------------------------------------------|
| admin.conf | kubectl | Configures administrator user for the cluster |
| super-admin.conf | kubectl | Configures super administrator user for the cluster |

View File

@ -22,11 +22,12 @@ verification and functionality test for a node. The test validates whether the
node meets the minimum requirements for Kubernetes; a node that passes the test
is qualified to join a Kubernetes cluster.
-->
**节点一致性测试** 是一个容器化的测试框架,提供了针对节点的系统验证和功能测试。
**节点一致性测试**是一个容器化的测试框架,提供了针对节点的系统验证和功能测试。
测试验证节点是否满足 Kubernetes 的最低要求;通过测试的节点有资格加入 Kubernetes 集群。
<!--
The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the test is qualified to join a Kubernetes cluster.
The test validates whether the node meets the minimum requirements for Kubernetes;
a node that passes the test is qualified to join a Kubernetes cluster.
-->
该测试主要检测节点是否满足 Kubernetes 的最低要求,通过检测的节点有资格加入 Kubernetes 集群。
@ -40,14 +41,15 @@ To run node conformance test, a node must satisfy the same prerequisites as a
standard Kubernetes node. At a minimum, the node should have the following
daemons installed:
-->
要运行节点一致性测试,节点必须满足与标准 Kubernetes 节点相同的前提条件。节点至少应安装以下守护程序:
要运行节点一致性测试,节点必须满足与标准 Kubernetes 节点相同的前提条件。
节点至少应安装以下守护程序:
<!--
* CRI-compatible container runtimes such as Docker, Containerd and CRI-O
* Kubelet
* CRI-compatible container runtimes such as Docker, Containerd and CRI-O
* kubelet
-->
* 与 CRI 兼容的容器运行时,例如 Docker、Containerd 和 CRI-O
* Kubelet
* kubelet
<!--
## Running Node Conformance Test
@ -65,8 +67,8 @@ To run the node conformance test, perform the following steps:
Because the test framework starts a local control plane to test the kubelet,
use `http://localhost:8080` as the URL of the API server.
There are some other kubelet command line parameters you may want to use:
* `--cloud-provider`: If you are using `--cloud-provider=gce`, you should
remove the flag to run the test.
* `--cloud-provider`: If you are using `--cloud-provider=gce`, you should
remove the flag to run the test.
-->
1. 得出 kubelet 的 `--kubeconfig` 的值;例如:`--kubeconfig=/var/lib/kubelet/config.yaml`。
由于测试框架启动了本地控制平面来测试 kubelet因此使用 `http://localhost:8080`
@ -75,20 +77,21 @@ To run the node conformance test, perform the following steps:
* `--cloud-provider`:如果使用 `--cloud-provider=gce`,需要移除这个参数来运行测试。
<!--
2. Run the node conformance test with command:
1. Run the node conformance test with command:
```shell
# $CONFIG_DIR is the pod manifest path of your Kubelet.
# $LOG_DIR is the test output path.
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
registry.k8s.io/node-test:0.2
```shell
# $CONFIG_DIR is the pod manifest path of your kubelet.
# $LOG_DIR is the test output path.
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
registry.k8s.io/node-test:0.2
```
```
-->
2. 使用以下命令运行节点一致性测试:
```shell
# $CONFIG_DIR 是你 Kubelet 的 pod manifest 路径。
# $CONFIG_DIR 是你 kubelet 的 Pod manifest 路径。
# $LOG_DIR 是测试的输出路径。
sudo docker run -it --rm --privileged --net=host \
-v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
@ -107,17 +110,17 @@ architectures:
Kubernetes 也为其他硬件体系结构的系统提供了节点一致性测试的 Docker 镜像:
<!--
Arch | Image |
--------|:-----------------:|
amd64 | node-test-amd64 |
arm | node-test-arm |
arm64 | node-test-arm64 |
| Arch | Image |
|--------|:-----------------:|
| amd64 | node-test-amd64 |
| arm | node-test-arm |
| arm64 | node-test-arm64 |
-->
架构 | 镜像 |
--------|:-----------------:|
amd64 | node-test-amd64 |
arm | node-test-arm |
arm64 | node-test-arm64 |
| 架构 | 镜像 |
|--------|:-----------------:|
| amd64 | node-test-amd64 |
| arm | node-test-arm |
| arm64 | node-test-arm64 |
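For example, on an arm64 node the invocation shown earlier changes only in the image name (a sketch, assuming the same `0.2` tag is published for that architecture):

```shell
sudo docker run -it --rm --privileged --net=host \
  -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
  registry.k8s.io/node-test-arm64:0.2
```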
<!--
## Running Selected Test
@ -158,14 +161,13 @@ sudo docker run -it --rm --privileged --net=host \
registry.k8s.io/node-test:0.2
```
<!--
Node conformance test is a containerized version of [node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md).
-->
节点一致性测试是[节点端到端测试](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md)的容器化版本。
<!--
Node conformance test is a containerized version of
[node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md).
By default, it runs all conformance tests.
-->
节点一致性测试是
[节点端到端测试](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/e2e-node-tests.md)的容器化版本。
默认情况下,它会运行所有一致性测试。
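To narrow the run, the test container honors a `FOCUS` environment variable (a sketch; `MirrorPod` is an illustrative test pattern):

```shell
# FOCUS restricts the run to tests whose names match the given pattern
sudo docker run -it --rm --privileged --net=host \
  -e FOCUS=MirrorPod \
  -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
  registry.k8s.io/node-test:0.2
```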
<!--
@ -173,7 +175,8 @@ Theoretically, you can run any node e2e test if you configure the container and
mount required volumes properly. But **it is strongly recommended to only run conformance
test**, because it requires much more complex configuration to run non-conformance test.
-->
理论上,只要合理地配置容器和挂载所需的卷,就可以运行任何的节点端到端测试用例。但是这里**强烈建议只运行一致性测试**,因为运行非一致性测试需要很多复杂的配置。
理论上,只要合理地配置容器和挂载所需的卷,就可以运行任何的节点端到端测试用例。
但是这里**强烈建议只运行一致性测试**,因为运行非一致性测试需要很多复杂的配置。
<!--
## Caveats
@ -187,6 +190,5 @@ test**, because it requires much more complex configuration to run non-conforman
* The test leaves dead containers on the node. These containers are created
during the functionality test.
-->
* 测试会在节点上遗留一些 Docker 镜像,包括节点一致性测试本身的镜像和功能测试相关的镜像。
* 测试会在节点上遗留一些死的容器。这些容器是在功能测试的过程中创建的。

View File

@ -114,6 +114,8 @@ During the two-month maintenance mode period, Release Managers may cut
additional maintenance releases to resolve:
- CVEs (under the advisement of the Security Response Committee)
- [Vulnerabilities](/docs/reference/issues-security/official-cve-feed/) that have an assigned
CVE ID (under the advisement of the Security Response Committee)
- dependency issues (including base image updates)
- critical core component issues
@ -127,6 +129,7 @@ dates for simplicity (every month has it).
在两个月的维护模式期间,发布管理员可能会发布额外的维护版本,以解决以下问题:
- CVE在安全响应委员会的建议下
- 已分配 CVE ID 的[漏洞](/zh-cn/docs/reference/issues-security/official-cve-feed/)(在安全响应委员会的建议下)
- 依赖问题(包括基础镜像更新)
- 关键核心组件问题

View File

@ -4,6 +4,9 @@
# schedule-builder -uc data/releases/schedule.yaml -e data/releases/eol.yaml
---
branches:
- endOfLifeDate: "2024-10-22"
finalPatchRelease: 1.28.15
release: "1.28"
- endOfLifeDate: "2024-07-16"
finalPatchRelease: 1.27.16
release: "1.27"

View File

@ -7,10 +7,13 @@ schedules:
- endOfLifeDate: "2025-10-28"
maintenanceModeStartDate: "2025-08-28"
next:
cherryPickDeadline: "2024-10-11"
cherryPickDeadline: "2024-11-08"
release: 1.31.3
targetDate: "2024-11-12"
previousPatches:
- cherryPickDeadline: "2024-10-11"
release: 1.31.2
targetDate: "2024-10-22"
previousPatches:
- cherryPickDeadline: "2024-09-06"
release: 1.31.1
targetDate: "2024-09-11"
@ -21,10 +24,13 @@ schedules:
- endOfLifeDate: "2025-06-28"
maintenanceModeStartDate: "2025-04-28"
next:
cherryPickDeadline: "2024-10-11"
cherryPickDeadline: "2024-11-08"
release: 1.30.7
targetDate: "2024-11-12"
previousPatches:
- cherryPickDeadline: "2024-10-11"
release: 1.30.6
targetDate: "2024-10-22"
previousPatches:
- cherryPickDeadline: "2024-09-06"
release: 1.30.5
targetDate: "2024-09-10"
@ -47,10 +53,13 @@ schedules:
- endOfLifeDate: "2025-02-28"
maintenanceModeStartDate: "2024-12-28"
next:
cherryPickDeadline: "2024-10-11"
cherryPickDeadline: "2024-11-08"
release: 1.29.11
targetDate: "2024-11-12"
previousPatches:
- cherryPickDeadline: "2024-10-11"
release: 1.29.10
targetDate: "2024-10-22"
previousPatches:
- cherryPickDeadline: "2024-09-06"
release: 1.29.9
targetDate: "2024-09-10"
@ -82,63 +91,10 @@ schedules:
targetDate: "2023-12-13"
release: "1.29"
releaseDate: "2023-12-13"
- endOfLifeDate: "2024-10-28"
maintenanceModeStartDate: "2024-08-28"
next:
cherryPickDeadline: "2024-10-11"
release: 1.28.15
targetDate: "2024-10-22"
previousPatches:
- cherryPickDeadline: "2024-09-06"
release: 1.28.14
targetDate: "2024-09-10"
- cherryPickDeadline: "2024-08-09"
release: 1.28.13
targetDate: "2024-08-14"
- cherryPickDeadline: "2024-07-12"
release: 1.28.12
targetDate: "2024-07-16"
- cherryPickDeadline: "2024-06-07"
release: 1.28.11
targetDate: "2024-06-11"
- cherryPickDeadline: "2024-05-10"
release: 1.28.10
targetDate: "2024-05-14"
- cherryPickDeadline: "2024-04-12"
release: 1.28.9
targetDate: "2024-04-16"
- cherryPickDeadline: "2024-03-08"
release: 1.28.8
targetDate: "2024-03-12"
- cherryPickDeadline: "2024-02-09"
release: 1.28.7
targetDate: "2024-02-14"
- cherryPickDeadline: "2023-01-12"
release: 1.28.6
targetDate: "2024-01-17"
- cherryPickDeadline: "2023-12-15"
release: 1.28.5
targetDate: "2023-12-20"
- note: Out of band release to fix [CVE-2023-5528](https://groups.google.com/g/kubernetes-announce/c/c3py6Fw0DTI/m/cScFSdk1BwAJ)
release: 1.28.4
targetDate: "2023-11-14"
- cherryPickDeadline: "2023-10-13"
release: 1.28.3
targetDate: "2023-10-18"
- cherryPickDeadline: "2023-09-08"
release: 1.28.2
targetDate: "2023-09-13"
- note: Unplanned release to include CVE fixes
release: 1.28.1
targetDate: "2023-08-23"
- release: 1.28.0
targetDate: "2023-08-15"
release: "1.28"
releaseDate: "2023-08-15"
upcoming_releases:
- cherryPickDeadline: "2024-10-11"
targetDate: "2024-10-22"
- cherryPickDeadline: "2024-11-08"
targetDate: "2024-11-12"
- cherryPickDeadline: "2024-12-06"
targetDate: "2024-12-10"
- cherryPickDeadline: "2025-01-10"
targetDate: "2025-01-14"

View File

@ -7,17 +7,18 @@
{{ if not $feature_gate_name }}
{{ if not $is_valid }}
{{ errorf "%q is not a valid feature-state, use one of %q" $state $valid_states }}
{{ else }}
{{- else -}}
{{- /* Display feature state notice */ -}}
<div class="feature-state-notice feature-{{ $state }}">
<span class="feature-state-name">{{ T "feature_state" }}</span>
<span class="feature-state-name">{{ T "feature_state" }}</span>
<code>{{ T "feature_state_kubernetes_label" }} {{ $for_k8s_version }} [{{ $state }}]</code>
</div>
{{ end }}
{{ else }}
<!-- When 'feature_gate_name' is specified, extract status of the feature gate -->
{{- $featureGateDescriptionFilesPath := "/docs/reference/command-line-tools-reference/feature-gates" -}}
{{- with index (where .Site.Sites "Language.Lang" "eq" "en") 0 -}}
{{- with .GetPage $featureGateDescriptionFilesPath -}}