In Kubernetes, features follow a defined process. First, as a twinkle in the eye of an interested developer. Maybe, then, sketched in online discussions, drawn on the online equivalent of a cafe napkin. This rough work typically becomes a [Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/0000-kep-process/README.md#kubernetes-enhancement-proposal-process) (KEP), and from there it usually turns into code.

For Kubernetes v1.20 and onwards, we're focusing on helping that code
---
layout: blog
title: "Kubernetes 1.28: A New (alpha) Mechanism For Safer Cluster Upgrades"
date: 2023-08-28
slug: kubernetes-1-28-feature-mixed-version-proxy-alpha
---

**Author:** Richa Banker (Google)

This blog describes the _mixed version proxy_, a new alpha feature in Kubernetes 1.28. The
mixed version proxy enables an HTTP request for a resource to be served by the correct API server
in cases where there are multiple API servers at varied versions in a cluster. For example,
this is useful during a cluster upgrade, or when you're rolling out the runtime configuration of
the cluster's control plane.

## What problem does this solve?

When a cluster undergoes an upgrade, the kube-apiservers existing at different versions in that scenario can serve different sets (groups, versions, resources) of built-in resources. A resource request made in this scenario may be served by any of the available apiservers, so the request can end up at an apiserver that is not aware of the requested resource; that apiserver then incorrectly answers with a 404 (Not Found) error. Furthermore, these incorrect 404 errors can have serious consequences, such as namespace deletion being blocked incorrectly or objects being garbage collected mistakenly.

## How do we solve the problem?

{{< figure src="/images/blog/2023-08-28-a-new-alpha-mechanism-for-safer-cluster-upgrades/mvp-flow-diagram.svg" class="diagram-large" >}}

The new feature “Mixed Version Proxy” gives the kube-apiserver the capability to proxy a request to a peer kube-apiserver that is aware of the requested resource and hence can serve the request. To do this, a new filter has been added to the handler chain in the API server's aggregation layer.

1. The new filter in the handler chain checks whether the request is for a group/version/resource that the apiserver doesn't know about (using the existing [StorageVersion API](https://github.com/kubernetes/kubernetes/blob/release-1.28/pkg/apis/apiserverinternal/types.go#L25-L37)). If so, it proxies the request to one of the apiservers listed in the ServerStorageVersion object. If the identified peer apiserver fails to respond (for example, because of a network connectivity problem, or a race between the request being received and the controller registering the apiserver-resource info in the ServerStorageVersion object), then a 503 ("Service Unavailable") error is served.

2. To prevent indefinite proxying of the request, a (new for v1.28) HTTP header `X-Kubernetes-APIServer-Rerouted: true` is added to the original request once it is determined that the request cannot be served by the original API server. Setting that header marks that the original API server couldn't handle the request and it should therefore be proxied. If a destination peer API server sees this header, it never proxies the request further.

3. To set the network location of a kube-apiserver that peers will use to proxy requests, the value passed in the `--advertise-address` flag or (when `--advertise-address` is unspecified) the `--bind-address` flag is used. For users with network configurations that would not allow communication between peer kube-apiservers using the addresses specified in these flags, there is an option to pass in the correct peer address via the `--peer-advertise-ip` and `--peer-advertise-port` flags that are introduced in this feature.
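The flow described above can be sketched as follows. This is a simplified illustration in Python; all function and object names are invented for the example, and the handling of an already-rerouted request that still cannot be served locally is an approximation, not the actual kube-apiserver implementation:

```python
def handle(request, local_resources, storage_versions):
    """Simplified sketch of the mixed version proxy filter's decision logic.

    request          -- dict with a "gvr" key and a "headers" dict
    local_resources  -- set of group/version/resource keys served locally
    storage_versions -- map of gvr -> list of peer apiservers that serve it
    """
    gvr = request["gvr"]
    if gvr in local_resources:
        return "served locally"
    # A peer already rerouted this request once; never proxy it again.
    if request["headers"].get("X-Kubernetes-APIServer-Rerouted") == "true":
        return "404 Not Found"
    peers = storage_versions.get(gvr, [])
    if not peers:
        return "404 Not Found"
    # Mark the request so the destination peer does not proxy it further.
    request["headers"]["X-Kubernetes-APIServer-Rerouted"] = "true"
    try:
        return proxy_to(peers[0], request)
    except OSError:
        # Peer unreachable: network issue or a registration race.
        return "503 Service Unavailable"

def proxy_to(peer, request):
    # Stand-in for the real HTTP proxying step.
    return f"proxied to {peer}"
```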
## How do I enable this feature?

The following steps are required to enable the feature:

* Download the [latest Kubernetes project](/releases/download/) (version `v1.28.0` or later)
* Switch on the feature gate with the command line flag `--feature-gates=UnknownVersionInteroperabilityProxy=true` on the kube-apiservers
* Pass the CA bundle that will be used by the source kube-apiserver to authenticate the destination kube-apiserver's serving certs, using the flag `--peer-ca-file` on the kube-apiservers. Note: this flag is required for the feature to work and has no default value.
* Pass the correct IP and port of the local kube-apiserver that will be used by peers to connect to this kube-apiserver while proxying a request. Use the flags `--peer-advertise-ip` and `--peer-advertise-port` to the kube-apiservers upon startup. If unset, the value passed to either `--advertise-address` or `--bind-address` is used. If those too are unset, the host's default interface will be used.
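Put together, the relevant kube-apiserver flags might look like the fragment below. This is an illustrative sketch only; the file path, IP address, and port are placeholders to adapt to your cluster:

```shell
# Illustrative fragment; paths and addresses are placeholders.
kube-apiserver \
  --feature-gates=UnknownVersionInteroperabilityProxy=true \
  --peer-ca-file=/etc/kubernetes/pki/peer-ca.crt \
  --peer-advertise-ip=10.0.0.11 \
  --peer-advertise-port=6443
```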
## What’s missing?

Currently we only proxy resource requests to a peer kube-apiserver when it is determined that the local apiserver cannot serve them. Next, we need to address how discovery requests should work in these scenarios. Right now we are planning to have the following capabilities for beta:

* Merged discovery across all kube-apiservers
* Use an egress dialer for network connections made to peer kube-apiservers

## How can I learn more?

- Read the [Mixed Version Proxy documentation](/docs/concepts/architecture/mixed-version-proxy)
- Read [KEP-4020: Unknown Version Interoperability Proxy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/4020-unknown-version-interoperability-proxy)

## How can I get involved?

Reach us on [Slack](https://slack.k8s.io/): [#sig-api-machinery](https://kubernetes.slack.com/messages/sig-api-machinery), or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery).

Huge thanks to the contributors that have helped in the design, implementation, and review of this feature: Daniel Smith, Han Kang, Joe Betz, Jordan Liggitt, Antonio Ojea, David Eads, and Ben Luddy!
---
layout: blog
title: "Gateway API v0.8.0: Introducing Service Mesh Support"
date: 2023-08-29T10:00:00-08:00
slug: gateway-api-v0-8
---

***Authors:*** Flynn (Buoyant), John Howard (Google), Keith Mattix (Microsoft), Michael Beaumont (Kong), Mike Morris (independent), Rob Scott (Google)

We are thrilled to announce the v0.8.0 release of Gateway API! With this
release, Gateway API support for service mesh has reached [Experimental
status][status]. We look forward to your feedback!

We're especially delighted to announce that Kuma 2.3+, Linkerd 2.14+, and Istio
1.16+ are all fully-conformant implementations of Gateway API service mesh
support.

## Service mesh support in Gateway API

While the initial focus of Gateway API was always ingress (north-south)
traffic, it was clear almost from the beginning that the same basic routing
concepts should also be applicable to service mesh (east-west) traffic. In
2022, the Gateway API subproject started the [GAMMA initiative][gamma], a
dedicated vendor-neutral workstream, specifically to examine how best to fit
service mesh support into the framework of the Gateway API resources, without
requiring users of Gateway API to relearn everything they understand about the
API.

Over the last year, GAMMA has dug deeply into the challenges and possible
solutions around using Gateway API for service mesh. The end result is a small
number of [enhancement proposals][geps] that subsume many hours of thought and
debate, and provide a minimum viable path to allow Gateway API to be used for
service mesh.

### How will mesh routing work when using Gateway API?

You can find all the details in the [Gateway API Mesh routing
documentation][mesh-routing] and [GEP-1426], but the short version for Gateway
API v0.8.0 is that an HTTPRoute can now have a `parentRef` that is a Service,
rather than just a Gateway. We anticipate future GEPs in this area as we gain
more experience with service mesh use cases -- binding to a Service makes it
possible to use the Gateway API with a service mesh, but there are several
interesting use cases that remain difficult to cover.

As an example, you might use an HTTPRoute to do an A-B test in the mesh as
follows:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: bar-route
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: demo-app
    port: 5000
  rules:
  - matches:
    - headers:
      - type: Exact
        name: env
        value: v1
    backendRefs:
    - name: demo-app-v1
      port: 5000
  - backendRefs:
    - name: demo-app-v2
      port: 5000
```
Any request to port 5000 of the `demo-app` Service that has the header `env:
v1` will be routed to `demo-app-v1`, while any request without that header
will be routed to `demo-app-v2` -- and since this is being handled by the
service mesh, not the ingress controller, the A/B test can happen anywhere in
the application's call graph.

### How do I know this will be truly portable?

Gateway API has been investing heavily in conformance tests across all
features it supports, and mesh is no exception. One of the challenges that the
GAMMA initiative ran into is that many of these tests were strongly tied to
the idea that a given implementation provides an ingress controller. Many
service meshes don't, and requiring a GAMMA-conformant mesh to also implement
an ingress controller seemed impractical at best. This resulted in work
restarting on Gateway API _conformance profiles_, as discussed in [GEP-1709].

The basic idea of conformance profiles is that we can define subsets of the
Gateway API, and allow implementations to choose (and document) which subsets
they conform to. GAMMA is adding a new profile, named `Mesh` and described in
[GEP-1686], which checks only the mesh functionality as defined by GAMMA. At
this point, Kuma 2.3+, Linkerd 2.14+, and Istio 1.16+ are all conformant with
the `Mesh` profile.

## What else is in Gateway API v0.8.0?

This release is all about preparing Gateway API for the upcoming v1.0 release,
where HTTPRoute, Gateway, and GatewayClass will graduate to GA. There are two
main changes related to this: CEL validation and API version changes.
### CEL Validation

The first major change is that Gateway API v0.8.0 is the start of a transition
from webhook validation to [CEL validation][cel] using information built into
the CRDs. That will mean different things depending on the version of
Kubernetes you're using:

#### Kubernetes 1.25+

CEL validation is fully supported, and almost all validation is implemented in
CEL. (The sole exception is that header names in header modifier filters can
only do case-insensitive validation. There is more information in [issue
2277].)

We recommend _not_ using the validating webhook on these Kubernetes versions.

#### Kubernetes 1.23 and 1.24

CEL validation is not supported, but Gateway API v0.8.0 CRDs can still be
installed. When you upgrade to Kubernetes 1.25+, the validation included in
these CRDs will automatically take effect.

We recommend continuing to use the validating webhook on these Kubernetes
versions.

#### Kubernetes 1.22 and older

Gateway API only commits to support for the [5 most recent versions of
Kubernetes][supported-versions]. As such, these versions are no longer
supported by Gateway API, and unfortunately Gateway API v0.8.0 cannot be
installed on them, since CRDs containing CEL validation will be rejected.

### API Version Changes

As we prepare for a v1.0 release that will graduate Gateway, GatewayClass, and
HTTPRoute to the `v1` API version from `v1beta1`, we are continuing the process
of moving away from `v1alpha2` for resources that have graduated to `v1beta1`.
For more information on this change and everything else included in this
release, refer to the [v0.8.0 release notes][v0.8.0 release notes].
## How can I get started with Gateway API?

Gateway API represents the future of load balancing, routing, and service mesh
APIs in Kubernetes. There are already more than 20 [implementations][impl]
available (including both ingress controllers and service meshes) and the list
keeps growing.

If you're interested in getting started with Gateway API, take a look at the
[API concepts documentation][concepts] and check out some of the
[Guides][guides] to try it out. Because this is a CRD-based API, you can
install the latest version on any Kubernetes 1.23+ cluster.
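For example, installing the `standard` channel CRDs is a single `kubectl` command. The URL below is a sketch for the v0.8.0 release; check the release assets under [install-crds] for the exact file name of the release you want:

```shell
# Installs the Gateway API v0.8.0 standard channel CRDs.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.0/standard-install.yaml
```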
If you're specifically interested in helping to contribute to Gateway API, we
would love to have you! Please feel free to [open a new issue][issue] on the
repository, or join in the [discussions][disc]. Also check out the [community
page][community], which includes links to the Slack channel and community
meetings. We look forward to seeing you!
## Further Reading

- [GEP-1324] provides an overview of the GAMMA goals and some important
  definitions. This GEP is well worth a read for its discussion of the problem
  space.
- [GEP-1426] defines how to use Gateway API route resources, such as
  HTTPRoute, to manage traffic within a service mesh.
- [GEP-1686] builds on the work of [GEP-1709] to define a _conformance
  profile_ for service meshes to be declared conformant with Gateway API.

Although these are [Experimental][status] patterns, note that they are available
in the [`standard` release channel][ch], since the GAMMA initiative has not
needed to introduce new resources or fields to date.
[gamma]:https://gateway-api.sigs.k8s.io/concepts/gamma/
[status]:https://gateway-api.sigs.k8s.io/geps/overview/#status
[ch]:https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard
[cel]:/docs/reference/using-api/cel/
[crd]:/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
[concepts]:https://gateway-api.sigs.k8s.io/concepts/api-overview/
[geps]:https://gateway-api.sigs.k8s.io/contributing/enhancement-requests/
[guides]:https://gateway-api.sigs.k8s.io/guides/getting-started/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
[install-crds]:https://gateway-api.sigs.k8s.io/guides/getting-started/#install-the-crds
[issue]:https://github.com/kubernetes-sigs/gateway-api/issues/new/choose
[disc]:https://github.com/kubernetes-sigs/gateway-api/discussions
[community]:https://gateway-api.sigs.k8s.io/contributing/community/
[mesh-routing]:https://gateway-api.sigs.k8s.io/concepts/gamma/#how-the-gateway-api-works-for-service-mesh
[GEP-1426]:https://gateway-api.sigs.k8s.io/geps/gep-1426/
[GEP-1324]:https://gateway-api.sigs.k8s.io/geps/gep-1324/
[GEP-1686]:https://gateway-api.sigs.k8s.io/geps/gep-1686/
[GEP-1709]:https://gateway-api.sigs.k8s.io/geps/gep-1709/
[issue 2277]:https://github.com/kubernetes-sigs/gateway-api/issues/2277
[supported-versions]:https://gateway-api.sigs.k8s.io/concepts/versioning/#supported-versions
[v0.8.0 release notes]:https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.8.0
[versioning docs]:https://gateway-api.sigs.k8s.io/concepts/versioning/
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md) is an [Easegress](https://megaease.com/easegress/) based API gateway that can run as an ingress controller.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
  lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [FortiADC Ingress Controller](https://docs.fortinet.com/document/fortiadc/7.0.0/fortiadc-ingress-controller/742835/fortiadc-ingress-controller-overview) supports the Kubernetes Ingress resources and allows you to manage FortiADC objects from Kubernetes
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
  which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
@ -258,7 +258,7 @@ overlays), the `emptyDir` may run out of capacity before this limit.
|
|||
{{< note >}}
|
||||
If the `SizeMemoryBackedVolumes` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled,
|
||||
you can specify a size for memory backed volumes. If no size is specified, memory
|
||||
backed volumes are sized to 50% of the memory on a Linux host.
|
||||
backed volumes are sized to node allocatable memory.
|
||||
{{< /note>}}
|
||||
|
||||
#### emptyDir configuration example
|
||||
|
|
and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
template.

A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/architecture/garbage-collection/#owners-dependents)
field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning
ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
knows of the state of the Pods it is maintaining and plans accordingly.
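For instance, a Pod owned by a ReplicaSet carries metadata along these lines. This is an illustrative fragment; the ReplicaSet name and UID are placeholders:

```yaml
# Illustrative fragment of a Pod owned by a ReplicaSet named "frontend".
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend
    uid: 0d9f7f9a-0000-0000-0000-000000000000
    controller: true
    blockOwnerDeletion: true
```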
  over the package builds. This means that anything before v1.24.0 will only be
  available in the Google-hosted repository.
- There's a dedicated package repository for each Kubernetes minor version.
  When upgrading to a different minor release, you must bear in mind that
  the package repository details also change.

{{< /note >}}
## {{% heading "prerequisites" %}}

This document assumes that you're already using the Kubernetes community-owned
package repositories. If that's not the case, it's strongly recommended to migrate
to the Kubernetes package repositories.

### Verifying if the Kubernetes package repositories are used

If you're unsure whether you're using the Kubernetes package repositories or the
Google-hosted repository, take the following steps to verify:

{{< tabs name="k8s_install_versions" >}}
```
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/deb/ /
```

**You're using the Kubernetes package repositories and this guide applies to you.**
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.

{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
Print the contents of the file that defines the Kubernetes `yum` repository:

```shell
cat /etc/yum.repos.d/kubernetes.repo
```

If you see a `baseurl` similar to the `baseurl` in the output below:

```
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl
```

**You're using the Kubernetes package repositories and this guide applies to you.**
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.

{{% /tab %}}
{{< /tabs >}}
it can also be one of:

- `pkgs.k8s.io`
- `pkgs.kubernetes.io`
- `packages.kubernetes.io`
{{< /note >}}

<!-- steps -->
1. Open the file that defines the Kubernetes `apt` repository using a text editor of your choice:

   ```shell
   nano /etc/apt/sources.list.d/kubernetes.list
   ```

   You should see a single line with the URL that contains your current Kubernetes
   minor version. For example, if you're using v{{< skew currentVersionAddMinor -1 "." >}},
   you should see this:

   ```
   deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/deb/ /
   ```

1. Change the version in the URL to **the next available minor release**, for example:

   ```
   deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /
   ```

1. Save the file and exit your text editor. Continue following the relevant upgrade instructions.
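   If you prefer not to open an editor, the same change can be made non-interactively with `sed`. This is a sketch only; the versions shown are examples to substitute with your current and target releases, and on a real node you would point `sed` at `/etc/apt/sources.list.d/kubernetes.list` instead of the throwaway copy used here:

   ```shell
   # Demonstration on a throwaway copy of the repo definition;
   # versions are examples only -- adjust before running.
   repo=$(mktemp)
   echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' > "$repo"
   sed -i 's|/v1.28/|/v1.29/|' "$repo"
   cat "$repo"
   ```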
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
1. Open the file that defines the Kubernetes `yum` repository using a text editor of your choice:

   ```shell
   nano /etc/yum.repos.d/kubernetes.repo
   ```

   You should see a file with two URLs that contain your current Kubernetes
   minor version. For example, if you're using v{{< skew currentVersionAddMinor -1 "." >}},
   you should see this:

   ```
   [kubernetes]
   name=Kubernetes
   baseurl=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/
   enabled=1
   gpgcheck=1
   gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/repodata/repomd.xml.key
   exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
   ```

1. Change the version in these URLs to **the next available minor release**, for example:

   ```
   [kubernetes]
   name=Kubernetes
   baseurl=https://pkgs.k8s.io/core:/stable:/v{{< param "version" >}}/rpm/
   enabled=1
   gpgcheck=1
   gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< param "version" >}}/rpm/repodata/repomd.xml.key
   exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
   ```

1. Save the file and exit your text editor. Continue following the relevant upgrade instructions.
{{% /tab %}}
{{< /tabs >}}
## Define a gRPC liveness probe

{{< feature-state for_k8s_version="v1.27" state="stable" >}}

If your application implements the
[gRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md),
  `127.0.0.1`. Be sure to configure your gRPC endpoint to listen on the Pod's IP address.
- Built-in probes do not support any authentication parameters (like `-tls`).
- There are no error codes for built-in probes. All errors are considered as probe failures.
- If the `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does **not**
  respect the `timeoutSeconds` setting (which defaults to 1s), while the built-in probe would fail on timeout.

## Use a named port
was set.

In 1.25 and above, users can specify a probe-level `terminationGracePeriodSeconds`
as part of the probe specification. When both a pod- and probe-level
`terminationGracePeriodSeconds` are set, the kubelet will use the probe-level value.
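For example, a manifest might combine the two levels as in the fragment below. This is an illustrative sketch; the image, path, port, and durations are placeholders:

```yaml
# Illustrative fragment: on liveness probe failure, the probe-level
# terminationGracePeriodSeconds (60) takes precedence over the
# Pod-level value (3600).
spec:
  terminationGracePeriodSeconds: 3600   # pod-level
  containers:
  - name: test
    image: registry.k8s.io/busybox
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      terminationGracePeriodSeconds: 60  # probe-level override
```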
{{< note >}}
Beginning in Kubernetes 1.25, the `ProbeTerminationGracePeriod` feature is enabled
by default. For users choosing to disable this feature, please note the following:

* The `ProbeTerminationGracePeriod` feature gate is only available on the API Server.
  The kubelet always honors the probe-level `terminationGracePeriodSeconds` field if
  it is present on a Pod.

* If you have existing Pods where the `terminationGracePeriodSeconds` field is set and
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| September 2023        | 2023-09-08           | 2023-09-13  |
| October 2023          | 2023-10-13           | 2023-10-18  |
| November 2023         | N/A                  | N/A         |
|
@ -0,0 +1,575 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "使用 PriorityClass 确保你的关键任务 Pod 免遭驱逐"
|
||||
date: 2023-01-12
|
||||
slug: protect-mission-critical-pods-priorityclass
|
||||
description: "Pod 优先级和抢占有助于通过决定调度和驱逐的顺序来确保关键任务 Pod 在资源紧缩的情况下正常运行。"
|
||||
---
|
||||
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Protect Your Mission-Critical Pods From Eviction With PriorityClass"
|
||||
date: 2023-01-12
|
||||
slug: protect-mission-critical-pods-priorityclass
|
||||
description: "Pod priority and preemption help to make sure that mission-critical pods are up in the event of a resource crunch by deciding order of scheduling and eviction."
|
||||
-->
|
||||
|
||||
**作者**:Sunny Bhambhani (InfraCloud Technologies)
|
||||
|
||||
<!--
|
||||
**Author:** Sunny Bhambhani (InfraCloud Technologies)
|
||||
-->
|
||||
|
||||
**译者**:Wilson Wu (DaoCloud)
|
||||
|
||||
<!--
|
||||
Kubernetes has been widely adopted, and many organizations use it as their de-facto orchestration engine for running workloads that need to be created and deleted frequently.
|
||||
-->
|
||||
Kubernetes 已被广泛使用,许多组织将其用作事实上的编排引擎,用于运行需要频繁被创建和删除的工作负载。
|
||||
|
||||
<!--
|
||||
Therefore, proper scheduling of the pods is key to ensuring that application pods are up and running within the Kubernetes cluster without any issues. This article delves into the use cases around resource management by leveraging the [PriorityClass](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) object to protect mission-critical or high-priority pods from getting evicted and making sure that the application pods are up, running, and serving traffic.
|
||||
-->
|
||||
因此,是否能对 Pod 进行合适的调度是确保应用 Pod 在 Kubernetes 集群中正常启动并运行的关键。
|
||||
本文深入探讨围绕资源管理的使用场景,利用 [PriorityClass](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
|
||||
对象来保护关键或高优先级 Pod 免遭驱逐并确保应用 Pod 正常启动、运行以及提供流量服务。
|
||||
|
||||
<!--
|
||||
## Resource management in Kubernetes
|
||||
-->
|
||||
## Kubernetes 中的资源管理 {#resource-management-in-kubernetes}
|
||||
|
||||
<!--
|
||||
The control plane consists of multiple components, out of which the scheduler (usually the built-in [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)) is one of the components which is responsible for assigning a node to a pod.
|
||||
-->
|
||||
控制平面由多个组件组成,其中调度程序(通常是内置的 [kube-scheduler](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/)
|
||||
)是一个负责为 Pod 分配节点的组件。
|
||||
|
||||
<!--
|
||||
Whenever a pod is created, it enters a "pending" state, after which the scheduler determines which node is best suited for the placement of the new pod.
|
||||
-->
|
||||
当 Pod 被创建时,它就会进入“Pending”状态,之后调度程序会确定哪个节点最适合放置这个新 Pod。
|
||||
|
||||
<!--
|
||||
In the background, the scheduler runs as an infinite loop looking for pods without a `nodeName` set that are [ready for scheduling](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/). For each Pod that needs scheduling, the scheduler tries to decide which node should run that Pod.
|
||||
-->
|
||||
在后台,调度程序以无限循环的方式运行,并寻找没有设置 `nodeName`
|
||||
且[准备好进行调度](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)的 Pod。
|
||||
对于每个需要调度的 Pod,调度程序会尝试决定哪个节点应该运行该 Pod。
|
||||
|
||||
<!--
|
||||
If the scheduler cannot find any node, the pod remains in the pending state, which is not ideal.
|
||||
-->
|
||||
如果调度程序找不到任何节点,Pod 就会保持在这个不理想的挂起状态下。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
To name a few, `nodeSelector`, `taints and tolerations`, `nodeAffinity`, the rank of nodes based on available resources (for example, CPU and memory), and several other criteria are used to determine the pod's placement.
|
||||
-->
|
||||
举几个例子,可以用 `nodeSelector`、污点与容忍度、`nodeAffinity`、
|
||||
基于可用资源(例如 CPU 和内存)的节点排序以及其他若干判别标准来确定将 Pod 放到哪个节点。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The below diagram, from point number 1 through 4, explains the request flow:
|
||||
-->
|
||||
下图从第 1 点到第 4 点解释了请求流程:
|
||||
|
||||
<!--
|
||||
{{< figure src=kube-scheduler.svg alt="A diagram showing the scheduling of three Pods that a client has directly created." title="Scheduling in Kubernetes">}}
|
||||
-->
|
||||
{{< figure src=kube-scheduler.svg alt="由客户端直接创建的三个 Pod 的调度示意图。" title="Kubernetes 中的调度">}}
|
||||
|
||||
<!--
|
||||
## Typical use cases
|
||||
-->
|
||||
## 典型使用场景 {#typical-use-cases}
|
||||
|
||||
<!--
|
||||
Below are some real-life scenarios where control over the scheduling and eviction of pods may be required.
|
||||
-->
|
||||
以下是一些可能需要控制 Pod 调度和驱逐的真实场景。
|
||||
|
||||
<!--
|
||||
1. Let's say the pod you plan to deploy is critical, and you have some resource constraints. An example would be the DaemonSet of an infrastructure component like Grafana Loki. The Loki pods must run before other pods can on every node. In such cases, you could ensure resource availability by manually identifying and deleting the pods that are not required or by adding a new node to the cluster. Both these approaches are unsuitable since the former would be tedious to execute, and the latter could involve an expenditure of time and money.
|
||||
|
||||
2. Another use case could be a single cluster that holds the pods for the below environments with associated priorities:
|
||||
- Production (`prod`): top priority
|
||||
- Preproduction (`preprod`): intermediate priority
|
||||
- Development (`dev`): least priority
|
||||
|
||||
In the event of high resource consumption in the cluster, there is competition for CPU and memory resources on the nodes. While cluster-level autoscaling _may_ add more nodes, it takes time. In the interim, if there are no further nodes to scale the cluster, some Pods could remain in a Pending state, or the service could be degraded as they compete for resources. If the kubelet does evict a Pod from the node, that eviction would be random because the kubelet doesn’t have any special information about which Pods to evict and which to keep.
|
||||
|
||||
3. A third example could be a microservice backed by a queuing application or a database running into a resource crunch and the queue or database getting evicted. In such a case, all the other services would be rendered useless until the database can serve traffic again.
|
||||
-->
|
||||
1. 假设你计划部署的 Pod 很关键,并且你有一些资源限制。比如 Grafana Loki 等基础设施组件的 DaemonSet。
|
||||
Loki Pod 必须先于每个节点上的其他 Pod 运行。在这种情况下,你可以通过手动识别并删除不需要的 Pod 或向集群添加新节点来确保资源可用性。
|
||||
但是这两种方法都不合适,因为前者执行起来很乏味,而后者可能需要花费时间和金钱。
|
||||
|
||||
2. 另一个使用场景是单个集群中承载了以下环境的 Pod,并且各环境具有相应的优先级:
|
||||
- 生产环境(`prod`):最高优先级
|
||||
- 预生产环境(`preprod`):中等优先级
|
||||
- 开发环境(`dev`):最低优先级
|
||||
|
||||
当集群资源消耗较高时,节点上会出现 CPU 和内存资源的竞争。虽然集群自动缩放可能会添加更多节点,但这需要时间。
|
||||
在此期间,如果没有更多节点来扩展集群,某些 Pod 可能会保持 Pending 状态,或者服务可能会因争夺资源而被降级。
|
||||
如果 kubelet 决定从节点中驱逐一个 Pod,那么该驱逐将是随机的,因为 kubelet 不具有关于要驱逐哪些 Pod 以及要保留哪些 Pod 的任何特殊信息。
|
||||
|
||||
3. 第三个示例是依赖队列应用或数据库的微服务:当遇到资源紧缩时,队列或数据库可能被驱逐。
|
||||
在这种情况下,所有其他服务都将变得毫无用处,直到数据库可以再次提供流量。
|
||||
|
||||
<!--
|
||||
There can also be other scenarios where you want to control the order of scheduling or order of eviction of pods.
|
||||
-->
|
||||
还可能存在你希望控制 Pod 调度顺序或驱逐顺序的其他场景。
|
||||
|
||||
<!--
|
||||
## PriorityClasses in Kubernetes
|
||||
-->
|
||||
## Kubernetes 中的 PriorityClass {#priorityclasses-in-kubernetes}
|
||||
|
||||
<!--
|
||||
PriorityClass is a cluster-wide API object in Kubernetes and part of the `scheduling.k8s.io/v1` API group. It contains a mapping of the PriorityClass name (defined in `.metadata.name`) and an integer value (defined in `.value`). This represents the value that the scheduler uses to determine Pod's relative priority.
|
||||
-->
|
||||
PriorityClass 是 Kubernetes 中集群范围的 API 对象,也是 `scheduling.k8s.io/v1` API 组的一部分。
|
||||
它包含 PriorityClass 名称(在 `.metadata.name` 中定义)和一个整数值(在 `.value` 中定义)之间的映射。
|
||||
整数值表示调度程序用来确定 Pod 相对优先级的值。
|
||||
|
||||
<!--
|
||||
Additionally, when you create a cluster using kubeadm or a managed Kubernetes service (for example, Azure Kubernetes Service), Kubernetes uses PriorityClasses to safeguard the pods that are hosted on the control plane nodes. This ensures that critical cluster components such as CoreDNS and kube-proxy can run even if resources are constrained.
|
||||
-->
|
||||
此外,当你使用 kubeadm 或托管 Kubernetes 服务(例如 Azure Kubernetes Service)创建集群时,
|
||||
Kubernetes 使用 PriorityClass 来保护控制平面节点上托管的 Pod。这种设置可以确保即使资源有限,
|
||||
CoreDNS 和 kube-proxy 等关键集群组件仍然可以运行。
|
||||
|
||||
<!--
|
||||
This availability of pods is achieved through the use of a special PriorityClass that ensures the pods are up and running and that the overall cluster is not affected.
|
||||
-->
|
||||
Pod 的这种可用性是通过使用特殊的 PriorityClass 来实现的,该 PriorityClass 可确保 Pod 正常运行并且整个集群不受影响。
|
||||
|
||||
```console
|
||||
$ kubectl get priorityclass
|
||||
NAME VALUE GLOBAL-DEFAULT AGE
|
||||
system-cluster-critical 2000000000 false 82m
|
||||
system-node-critical 2000001000 false 82m
|
||||
```
|
||||
|
||||
<!--
|
||||
The diagram below shows exactly how it works with the help of an example, which will be detailed in the upcoming section.
|
||||
-->
|
||||
下图通过一个示例展示其确切工作原理,下一节详细介绍这一原理。
|
||||
|
||||
<!--
|
||||
{{< figure src="decision-tree.svg" alt="A flow chart that illustrates how the kube-scheduler prioritizes new Pods and potentially preempts existing Pods" title="Pod scheduling and preemption">}}
|
||||
-->
|
||||
{{< figure src="decision-tree.svg" alt="此流程图说明了 kube-scheduler 如何对新 Pod 进行优先级排序并可能对现有 Pod 进行抢占" title="Pod 调度和抢占">}}
|
||||
|
||||
<!--
|
||||
### Pod priority and preemption
|
||||
-->
|
||||
### Pod 优先级和抢占 {#pod-priority-and-preemption}
|
||||
|
||||
<!--
|
||||
[Pod preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) is a Kubernetes feature that allows the cluster to preempt pods (removing an existing Pod in favor of a new Pod) on the basis of priority. [Pod priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) indicates the importance of a pod relative to other pods while scheduling. If there aren't enough resources to run all the current pods, the scheduler tries to evict lower-priority pods over high-priority ones.
|
||||
-->
|
||||
[Pod 抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption)是 Kubernetes 的一项功能,
|
||||
允许集群基于优先级抢占 Pod(删除现有 Pod 以支持新 Pod)。
|
||||
[Pod 优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)表示调度时 Pod 相对于其他 Pod 的重要性。
|
||||
如果没有足够的资源来运行当前所有 Pod,调度程序会尝试驱逐优先级较低的 Pod,而不是优先级高的 Pod。
|
||||
|
||||
<!--
|
||||
Also, when a healthy cluster experiences a node failure, typically, lower-priority pods get preempted to create room for higher-priority pods on the available node. This happens even if the cluster can bring up a new node automatically since pod creation is usually much faster than bringing up a new node.
|
||||
-->
|
||||
此外,当健康集群遇到节点故障时,通常情况下,较低优先级的 Pod 会被抢占,以便在可用节点上为较高优先级的 Pod 腾出空间。
|
||||
即使集群可以自动创建新节点,也会发生这种情况,因为 Pod 创建通常比创建新节点快得多。
|
||||
|
||||
<!--
|
||||
### PriorityClass requirements
|
||||
-->
|
||||
### PriorityClass 的前提条件 {#priorityclass-requirements}
|
||||
|
||||
<!--
|
||||
Before you set up PriorityClasses, there are a few things to consider.
|
||||
-->
|
||||
在配置 PriorityClass 之前,需要考虑一些事项。
|
||||
|
||||
<!--
|
||||
1. Decide which PriorityClasses are needed. For instance, based on environment, type of pods, type of applications, etc.
|
||||
2. The default PriorityClass resource for your cluster. The pods without a `priorityClassName` will be treated as priority 0.
|
||||
3. Use a consistent naming convention for all PriorityClasses.
|
||||
4. Make sure that the pods for your workloads are running with the right PriorityClass.
|
||||
-->
|
||||
1. 决定哪些 PriorityClass 是需要的。例如,基于环境、Pod 类型、应用类型等。
|
||||
2. 集群中默认的 PriorityClass 资源。当 Pod 没有设置 `priorityClassName` 时,优先级将被视为 0。
|
||||
3. 对所有 PriorityClass 使用一致的命名约定。
|
||||
4. 确保工作负载的 Pod 正在使用正确的 PriorityClass。
|
||||
|
||||
<!--
|
||||
## PriorityClass hands-on example
|
||||
-->
|
||||
## PriorityClass 的动手示例 {#priorityclass-hands-on-example}
|
||||
|
||||
<!--
|
||||
Let’s say there are 3 application pods: one for prod, one for preprod, and one for development. Below are three sample YAML manifest files for each of those.
|
||||
-->
|
||||
假设有 3 个应用 Pod:一个用于生产(prod),一个用于预生产(preprod),一个用于开发(development)。
|
||||
下面是这三个示例的 YAML 清单文件。
|
||||
|
||||
```yaml
|
||||
---
|
||||
# 开发环境(dev)
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: dev-nginx
|
||||
labels:
|
||||
env: dev
|
||||
spec:
|
||||
containers:
|
||||
- name: dev-nginx
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
memory: "256Mi"
|
||||
cpu: "0.2"
|
||||
limits:
|
||||
memory: ".5Gi"
|
||||
cpu: "0.5"
|
||||
```
|
||||
|
||||
```yaml
|
||||
---
|
||||
# 预生产环境(preprod)
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: preprod-nginx
|
||||
labels:
|
||||
env: preprod
|
||||
spec:
|
||||
containers:
|
||||
- name: preprod-nginx
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
memory: "1.5Gi"
|
||||
cpu: "1.5"
|
||||
limits:
|
||||
memory: "2Gi"
|
||||
cpu: "2"
|
||||
```
|
||||
|
||||
```yaml
|
||||
---
|
||||
# 生产环境(prod)
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: prod-nginx
|
||||
labels:
|
||||
env: prod
|
||||
spec:
|
||||
containers:
|
||||
- name: prod-nginx
|
||||
image: nginx
|
||||
resources:
|
||||
requests:
|
||||
memory: "2Gi"
|
||||
cpu: "2"
|
||||
limits:
|
||||
memory: "2Gi"
|
||||
cpu: "2"
|
||||
```
|
||||
|
||||
<!--
|
||||
You can create these pods with the `kubectl create -f <FILE.yaml>` command, and then check their status using the `kubectl get pods` command. You can see if they are up and look ready to serve traffic:
|
||||
-->
|
||||
你可以使用 `kubectl create -f <FILE.yaml>` 命令创建这些 Pod,然后使用 `kubectl get pods` 命令检查它们的状态。
|
||||
你可以查看它们是否已启动并准备好提供流量:
|
||||
|
||||
```console
|
||||
$ kubectl get pods --show-labels
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
dev-nginx 1/1 Running 0 55s env=dev
|
||||
preprod-nginx 1/1 Running 0 55s env=preprod
|
||||
prod-nginx 0/1 Pending 0 55s env=prod
|
||||
```
|
||||
|
||||
<!--
|
||||
Bad news. The pod for the Production environment is still Pending and isn't serving any traffic.
|
||||
-->
|
||||
坏消息是生产环境的 Pod 仍处于 `Pending` 状态,并且不能提供任何流量。
|
||||
|
||||
<!--
|
||||
Let's see why this is happening:
|
||||
-->
|
||||
让我们看看为什么会发生这种情况:
|
||||
|
||||
```console
|
||||
$ kubectl get events
|
||||
...
|
||||
...
|
||||
5s Warning FailedScheduling pod/prod-nginx 0/2 nodes are available: 1 Insufficient cpu, 2 Insufficient memory.
|
||||
```
|
||||
|
||||
<!--
|
||||
In this example, there is only one worker node, and that node has a resource crunch.
|
||||
-->
|
||||
在此示例中,只有一个工作节点,并且该节点存在资源紧缩。
|
||||
|
||||
<!--
|
||||
Now, let's look at how PriorityClass can help in this situation since prod should be given higher priority than the other environments.
|
||||
-->
|
||||
现在,让我们看看在这种情况下 PriorityClass 如何提供帮助,因为生产环境应该比其他环境具有更高的优先级。
|
||||
|
||||
<!--
|
||||
## PriorityClass API
|
||||
-->
|
||||
## PriorityClass 的 API {#priorityclass-api}
|
||||
|
||||
<!--
|
||||
Before creating PriorityClasses based on these requirements, let's see what a basic manifest for a PriorityClass looks like and outline some prerequisites:
|
||||
-->
|
||||
在根据这些需求创建 PriorityClass 之前,让我们看看 PriorityClass 的基本清单是什么样的,
|
||||
并给出一些先决条件:
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: PRIORITYCLASS_NAME
|
||||
value: 0 # any integer value between -1000000000 and 1000000000
|
||||
description: >-
|
||||
(Optional) description goes here!
|
||||
globalDefault: false # or true. Only one PriorityClass can be the global default.
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: PRIORITYCLASS_NAME
|
||||
value: 0 # -1000000000 到 1000000000 之间的任何整数值
|
||||
description: >-
|
||||
(可选)描述内容!
|
||||
globalDefault: false # 或 true。只有一个 PriorityClass 可以作为全局默认值。
|
||||
```
|
||||
|
||||
<!--
|
||||
Below are some prerequisites for PriorityClasses:
|
||||
-->
|
||||
以下是 PriorityClass 的一些先决条件:
|
||||
|
||||
<!--
|
||||
- The name of a PriorityClass must be a valid DNS subdomain name.
|
||||
- When you make your own PriorityClass, the name should not start with `system-`, as those names are reserved by Kubernetes itself (for example, they are used for two built-in PriorityClasses).
|
||||
- Its absolute value should be between -1000000000 and 1000000000 (1 billion).
|
||||
- Larger numbers are reserved by PriorityClasses such as `system-cluster-critical` (this Pod is critically important to the cluster) and `system-node-critical` (the node critically relies on this Pod). `system-node-critical` is a higher priority than `system-cluster-critical`, because a cluster-critical Pod can only work well if the node where it is running has all its node-level critical requirements met.
|
||||
- There are two optional fields:
|
||||
- `globalDefault`: When true, this PriorityClass is used for pods where a `priorityClassName` is not specified. Only one PriorityClass with `globalDefault` set to true can exist in a cluster. If there is no PriorityClass defined with globalDefault set to true, all the pods with no priorityClassName defined will be treated with 0 priority (i.e. the least priority).
|
||||
- `description`: A string with a meaningful value so that people know when to use this PriorityClass.
|
||||
-->
|
||||
- PriorityClass 的名称必须是有效的 DNS 子域名。
|
||||
- 当你创建自己的 PriorityClass 时,名称不应以 `system-` 开头,因为这类名称是被 Kubernetes
|
||||
本身保留的(例如,它们被用于两个内置的 PriorityClass)。
|
||||
- 其绝对值应在 -1000000000 到 1000000000(10 亿)之间。
|
||||
- 较大的数值由 PriorityClass 保留,例如 `system-cluster-critical`(此 Pod 对集群至关重要)以及 `system-node-critical`(节点严重依赖此 Pod)。
|
||||
`system-node-critical` 的优先级高于 `system-cluster-critical`,因为集群级别关键 Pod 只有在其运行的节点满足其所有节点级别关键要求时才能正常工作。
|
||||
- 额外两个可选字段:
|
||||
- `globalDefault`:当为 true 时,此 PriorityClass 用于未设置 `priorityClassName` 的 Pod。
|
||||
集群中只能存在一个 `globalDefault` 设置为 true 的 PriorityClass。
|
||||
如果没有 PriorityClass 的 globalDefault 设置为 true,则所有未定义 priorityClassName 的 Pod 都将被视为 0 优先级(即最低优先级)。
|
||||
- `description`:具备有意义值的字符串,以便人们知道何时使用此 PriorityClass。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Adding a PriorityClass with `globalDefault` set to `true` does not mean it will apply the same to the existing pods that are already running. This will be applicable only to the pods that came into existence after the PriorityClass was created.
|
||||
-->
|
||||
添加一个将 `globalDefault` 设置为 `true` 的 PriorityClass 并不意味着它将同样应用于已在运行的现有 Pod。
|
||||
这仅适用于创建 PriorityClass 之后出现的 Pod。
|
||||
{{< /note >}}
|
||||
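举例来说,一个将 `globalDefault` 设为 `true` 的 PriorityClass 大致如下(`default-pc` 这个名称只是示意,并非本文前面创建的对象):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: default-pc          # 假设的名称,仅作示意
value: 1000
globalDefault: true         # 集群中只能有一个全局默认的 PriorityClass
description: >-
  未指定 priorityClassName 的新建 Pod 将默认使用此优先级。
```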
|
||||
<!--
|
||||
### PriorityClass in action
|
||||
-->
|
||||
### PriorityClass 的实际应用 {#priorityclass-in-action}
|
||||
|
||||
<!--
|
||||
Here's an example. Next, create some environment-specific PriorityClasses:
|
||||
-->
|
||||
这里有一个例子。接下来,创建一些针对环境的 PriorityClass:
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: dev-pc
|
||||
value: 1000000
|
||||
globalDefault: false
|
||||
description: >-
|
||||
(Optional) This priority class should only be used for all development pods.
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: dev-pc
|
||||
value: 1000000
|
||||
globalDefault: false
|
||||
description: >-
|
||||
(可选)此 PriorityClass 只能用于所有开发环境(dev)Pod。
|
||||
```
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: preprod-pc
|
||||
value: 2000000
|
||||
globalDefault: false
|
||||
description: >-
|
||||
(Optional) This priority class should only be used for all preprod pods.
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: preprod-pc
|
||||
value: 2000000
|
||||
globalDefault: false
|
||||
description: >-
|
||||
(可选)此 PriorityClass 只能用于所有预生产环境(preprod)Pod。
|
||||
```
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: prod-pc
|
||||
value: 4000000
|
||||
globalDefault: false
|
||||
description: >-
|
||||
(Optional) This priority class should only be used for all prod pods.
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
name: prod-pc
|
||||
value: 4000000
|
||||
globalDefault: false
|
||||
description: >-
|
||||
(可选)此 PriorityClass 只能用于所有生产环境(prod)Pod。
|
||||
```
|
||||
|
||||
<!--
|
||||
Use `kubectl create -f <FILE.YAML>` command to create a pc and `kubectl get pc` to check its status.
|
||||
-->
|
||||
使用 `kubectl create -f <FILE.YAML>` 命令创建 PriorityClass 并使用 `kubectl get pc` 检查其状态。
|
||||
|
||||
```console
|
||||
$ kubectl get pc
|
||||
NAME VALUE GLOBAL-DEFAULT AGE
|
||||
dev-pc 1000000 false 3m13s
|
||||
preprod-pc 2000000 false 2m3s
|
||||
prod-pc 4000000 false 7s
|
||||
system-cluster-critical 2000000000 false 82m
|
||||
system-node-critical 2000001000 false 82m
|
||||
```
|
||||
|
||||
<!--
|
||||
The new PriorityClasses are in place now. A small change is needed in the pod manifest or pod template (in a ReplicaSet or Deployment). In other words, you need to specify the priority class name at `.spec.priorityClassName` (which is a string value).
|
||||
-->
|
||||
新的 PriorityClass 现已就位。需要对 Pod 清单或 Pod 模板(在 ReplicaSet 或 Deployment 中)进行一些小的修改。
|
||||
换句话说,你需要在 `.spec.priorityClassName`(这是一个字符串值)中指定 PriorityClass 名称。
|
||||
|
||||
<!--
|
||||
First update the previous production pod manifest file to have a PriorityClass assigned, then delete the Production pod and recreate it. You can't edit the priority class for a Pod that already exists.
|
||||
-->
|
||||
首先更新之前的生产环境 Pod 清单文件以分配 PriorityClass,然后删除生产环境 Pod 并重新创建它。你无法编辑已存在 Pod 的优先级类别。
|
||||
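例如,更新后的 prod-nginx 清单大致如下(在前文清单的基础上仅增加 `priorityClassName` 字段,指向前面创建的 `prod-pc`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-nginx
  labels:
    env: prod
spec:
  priorityClassName: prod-pc   # 指定前面创建的 PriorityClass
  containers:
  - name: prod-nginx
    image: nginx
    resources:
      requests:
        memory: "2Gi"
        cpu: "2"
      limits:
        memory: "2Gi"
        cpu: "2"
```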
|
||||
<!--
|
||||
In my cluster, when I tried this, here's what happened. First, that change seems successful; the status of pods has been updated:
|
||||
-->
|
||||
在我的集群中,当我尝试此操作时,发生了以下情况。首先,这种改变似乎是成功的;Pod 的状态已被更新:
|
||||
|
||||
```console
|
||||
$ kubectl get pods --show-labels
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
dev-nginx 1/1 Terminating 0 55s env=dev
|
||||
preprod-nginx 1/1 Running 0 55s env=preprod
|
||||
prod-nginx 0/1 Pending 0 55s env=prod
|
||||
```
|
||||
|
||||
<!--
|
||||
The dev-nginx pod is getting terminated. Once that is successfully terminated and there are enough resources for the prod pod, the control plane can schedule the prod pod:
|
||||
-->
|
||||
dev-nginx Pod 即将被终止。一旦成功终止并且有足够的资源用于 prod Pod,控制平面就可以对 prod Pod 进行调度:
|
||||
|
||||
```console
|
||||
Warning FailedScheduling pod/prod-nginx 0/2 nodes are available: 1 Insufficient cpu, 2 Insufficient memory.
|
||||
Normal Preempted pod/dev-nginx by default/prod-nginx on node node01
|
||||
Normal Killing pod/dev-nginx Stopping container dev-nginx
|
||||
Normal Scheduled pod/prod-nginx Successfully assigned default/prod-nginx to node01
|
||||
Normal Pulling pod/prod-nginx Pulling image "nginx"
|
||||
Normal Pulled pod/prod-nginx Successfully pulled image "nginx"
|
||||
Normal Created pod/prod-nginx Created container prod-nginx
|
||||
Normal Started pod/prod-nginx Started container prod-nginx
|
||||
```
|
||||
|
||||
<!--
|
||||
## Enforcement
|
||||
-->
|
||||
## 强制执行 {#enforcement}
|
||||
|
||||
<!--
|
||||
When you set up PriorityClasses, they exist just how you defined them. However, people (and tools) that make changes to your cluster are free to set any PriorityClass, or to not set any PriorityClass at all. However, you can use other Kubernetes features to make sure that the priorities you wanted are actually applied.
|
||||
-->
|
||||
配置 PriorityClass 时,它们会按照你所定义的方式存在。
|
||||
但是,对集群进行变更的人员(和工具)可以自由设置任意 PriorityClass,
|
||||
或者根本不设置任何 PriorityClass。然而,你可以使用其他 Kubernetes 功能来确保你想要的优先级被实际应用起来。
|
||||
|
||||
<!--
|
||||
As an alpha feature, you can define a [ValidatingAdmissionPolicy](/blog/2022/12/20/validating-admission-policies-alpha/) and a ValidatingAdmissionPolicyBinding so that, for example, Pods that go into the `prod` namespace must use the `prod-pc` PriorityClass. With another ValidatingAdmissionPolicyBinding you ensure that the `preprod` namespace uses the `preprod-pc` PriorityClass, and so on. In *any* cluster, you can enforce similar controls using external projects such as [Kyverno](https://kyverno.io/) or [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/), through validating admission webhooks.
|
||||
-->
|
||||
作为一项 Alpha 级别功能,你可以定义一个 [ValidatingAdmissionPolicy](/blog/2022/12/20/validating-admission-policies-alpha/)
|
||||
和一个 ValidatingAdmissionPolicyBinding,例如,进入 `prod` 命名空间的 Pod 必须使用 `prod-pc` PriorityClass。
|
||||
通过另一个 ValidatingAdmissionPolicyBinding,你可以确保 `preprod` 命名空间使用 `preprod-pc` PriorityClass,依此类推。
|
||||
在*任何*集群中,你可以使用外部项目,例如 [Kyverno](https://kyverno.io/) 或 [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/) 通过验证准入 Webhook 实施类似的控制。
|
||||
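例如,使用 Kyverno 时,一个可能的 ClusterPolicy 草案如下(字段细节请以 Kyverno 官方文档为准,这里仅作示意):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-prod-priorityclass
spec:
  validationFailureAction: Enforce   # 也可设为 Audit,仅记录违规而不拒绝
  rules:
  - name: check-prod-priorityclass
    match:
      any:
      - resources:
          kinds: ["Pod"]
          namespaces: ["prod"]
    validate:
      message: "prod 命名空间中的 Pod 必须使用 prod-pc PriorityClass。"
      pattern:
        spec:
          priorityClassName: prod-pc
```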
|
||||
<!--
|
||||
However you do it, Kubernetes gives you options to make sure that the PriorityClasses are used how you wanted them to be, or perhaps just to [warn](https://open-policy-agent.github.io/gatekeeper/website/docs/violations/#warn-enforcement-action) users when they pick an unsuitable option.
|
||||
-->
|
||||
无论你如何操作,Kubernetes 都会为你提供选项,确保 PriorityClass 的用法如你所愿,
|
||||
或者只是当用户选择不合适的选项时做出[警告](https://open-policy-agent.github.io/gatekeeper/website/docs/violations/#warn-enforcement-action)。
|
||||
|
||||
<!--
|
||||
## Summary
|
||||
-->
|
||||
## 总结 {#summary}
|
||||
|
||||
<!--
|
||||
The above example and its events show you what this feature of Kubernetes brings to the table, along with several scenarios where you can use this feature. To reiterate, this helps ensure that mission-critical pods are up and available to serve the traffic and, in the case of a resource crunch, determines cluster behavior.
|
||||
-->
|
||||
上面的示例及其事件向你展示了 Kubernetes 此功能带来的好处,以及可以使用此功能的几种场景。
|
||||
重申一下,这一机制有助于确保关键任务 Pod 启动并可用于提供流量,并在资源紧张的情况下确定集群行为。
|
||||
|
||||
<!--
|
||||
It gives you some power to decide the order of scheduling and order of [preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption) for Pods. Therefore, you need to define the PriorityClasses sensibly. For example, if you have a cluster autoscaler to add nodes on demand, make sure to run it with the `system-cluster-critical` PriorityClass. You don't want to get in a situation where the autoscaler has been preempted and there are no new nodes coming online.
|
||||
-->
|
||||
它赋予你一定的权力来决定 Pod 的调度顺序和[抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption)顺序。
|
||||
因此,你需要明智地定义 PriorityClass。例如,如果你有一个集群自动缩放程序来按需添加节点,
|
||||
请确保使用 `system-cluster-critical` PriorityClass 运行它。你不希望遇到自动缩放器 Pod 被抢占导致没有新节点上线的情况。
|
||||
|
||||
<!--
|
||||
If you have any queries or feedback, feel free to reach out to me on [LinkedIn](http://www.linkedin.com/in/sunnybhambhani).
|
||||
-->
|
||||
如果你有任何疑问或反馈,可以随时通过 [LinkedIn](http://www.linkedin.com/in/sunnybhambhani) 与我联系。
|
|
@ -298,19 +298,19 @@ DaemonSet 通常提供节点本地的服务,即使节点上的负载应用已
|
|||
|
||||
A Node's status contains the following information:
|
||||
|
||||
* [Addresses](/docs/reference/node/node-status/#addresses)
|
||||
* [Conditions](/docs/reference/node/node-status/#condition)
|
||||
* [Capacity and Allocatable](/docs/reference/node/node-status/#capacity)
|
||||
* [Info](/docs/reference/node/node-status/#info)
|
||||
-->
|
||||
## 节点状态 {#node-status}
|
||||
|
||||
一个节点的状态包含以下信息:
|
||||
|
||||
* [地址(Addresses)](/zh-cn/docs/reference/node/node-status/#addresses)
|
||||
* [状况(Condition)](/zh-cn/docs/reference/node/node-status/#condition)
|
||||
* [容量与可分配(Capacity)](/zh-cn/docs/reference/node/node-status/#capacity)
|
||||
* [信息(Info)](/zh-cn/docs/reference/node/node-status/#info)
|
||||
|
||||
<!--
|
||||
You can use `kubectl` to view a Node's status and other details:
|
||||
|
@ -326,9 +326,9 @@ kubectl describe node <节点名称>
|
|||
```
|
||||
|
||||
<!--
|
||||
See [Node Status](/docs/reference/node/node-status/) for more details.
|
||||
-->
|
||||
更多细节参见 [Node Status](/zh-cn/docs/reference/node/node-status)。
|
||||
|
||||
<!--
|
||||
## Node heartbeats
|
||||
|
@ -345,13 +345,13 @@ Kubernetes 节点发送的心跳帮助你的集群确定每个节点的可用性
|
|||
对于节点,有两种形式的心跳:
|
||||
|
||||
<!--
|
||||
* Updates to the [`.status`](/docs/reference/node/node-status/) of a Node.
|
||||
* [Lease](/docs/concepts/architecture/leases/) objects
|
||||
within the `kube-node-lease`
|
||||
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
|
||||
Each Node has an associated Lease object.
|
||||
-->
|
||||
* 更新节点的 [`.status`](/zh-cn/docs/reference/node/node-status/)
|
||||
* `kube-node-lease` {{<glossary_tooltip term_id="namespace" text="名字空间">}}中的
|
||||
[Lease(租约)](/zh-cn/docs/concepts/architecture/leases/)对象。
|
||||
每个节点都有一个关联的 Lease 对象。
|
||||
|
|
|
@ -47,6 +47,7 @@ debug the exact same code locally if needed.
|
|||
|
||||
这让你可以获取在云中运行的容器镜像,并且如果有需要的话,在本地调试完全相同的代码。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
A ConfigMap is not designed to hold large chunks of data. The data stored in a
|
||||
ConfigMap cannot exceed 1 MiB. If you need to store settings that are
|
||||
|
@ -56,7 +57,7 @@ separate database or file service.
|
|||
ConfigMap 在设计上不是用来保存大量数据的。在 ConfigMap 中保存的数据不可超过
|
||||
1 MiB。如果你需要保存超出此尺寸限制的数据,你可能希望考虑挂载存储卷
|
||||
或者使用独立的数据库或者文件服务。
|
||||
|
||||
{{< /note >}}
|
||||
<!--
|
||||
## ConfigMap object
|
||||
|
||||
|
|
|
@ -99,12 +99,10 @@ See [Information security for Secrets](#information-security-for-secrets) for mo
|
|||
<!--
|
||||
## Uses for Secrets
|
||||
|
||||
There are three main ways for a Pod to use a Secret:
|
||||
- As [files](#using-secrets-as-files-from-a-pod) in a
|
||||
{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of
|
||||
its containers.
|
||||
- As [container environment variable](#using-secrets-as-environment-variables).
|
||||
- By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod.
|
||||
You can use Secrets for purposes such as the following:
|
||||
- [Set environment variables for a container](/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data).
|
||||
- [Provide credentials such as SSH keys or passwords to Pods](/docs/tasks/inject-data-application/distribute-credentials-secure/#provide-prod-test-creds).
|
||||
- [Allow the kubelet to pull container images from private registries](/docs/tasks/configure-pod-container/pull-image-private-registry/).
|
||||
|
||||
The Kubernetes control plane also uses Secrets; for example,
|
||||
[bootstrap token Secrets](#bootstrap-token-secrets) are a mechanism to
|
||||
|
@ -112,17 +110,105 @@ help automate node registration.
|
|||
-->
|
||||
## Secret 的使用 {#uses-for-secrets}
|
||||
|
||||
Pod 可以用三种方式之一来使用 Secret:

- 作为挂载到一个或多个容器上的{{< glossary_tooltip text="卷" term_id="volume" >}}中的[文件](#using-secrets-as-files-from-a-pod)。
- 作为[容器的环境变量](#using-secrets-as-environment-variables)。
- 由 [kubelet 在为 Pod 拉取镜像时使用](#using-imagepullsecrets)。

你可以将 Secret 用于以下场景:

- [设置容器的环境变量](/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data)。
- [向 Pod 提供 SSH 密钥或密码等凭据](/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure/#provide-prod-test-creds)。
- [允许 kubelet 从私有镜像仓库中拉取镜像](/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry/)。
|
||||
|
||||
Kubernetes 控制面也使用 Secret;
|
||||
例如,[引导令牌 Secret](#bootstrap-token-secrets)
|
||||
是一种帮助自动化节点注册的机制。
|
||||
|
||||
<!--
|
||||
### Use case: dotfiles in a secret volume
|
||||
|
||||
You can make your data "hidden" by defining a key that begins with a dot.
|
||||
This key represents a dotfile or "hidden" file. For example, when the following secret
|
||||
is mounted into a volume, `secret-volume`, the volume will contain a single file,
|
||||
called `.secret-file`, and the `dotfile-test-container` will have this file
|
||||
present at the path `/etc/secret-volume/.secret-file`.
|
||||
-->
|
||||
### 使用场景:在 Secret 卷中带句点的文件 {#use-case-dotfiles-in-a-secret-volume}
|
||||
|
||||
通过定义以句点(`.`)开头的主键,你可以“隐藏”你的数据。
|
||||
这些主键代表的是以句点开头的文件或“隐藏”文件。
|
||||
例如,当以下 Secret 被挂载到 `secret-volume` 卷上时,该卷中会包含一个名为
|
||||
`.secret-file` 的文件,并且容器 `dotfile-test-container`
|
||||
中此文件位于路径 `/etc/secret-volume/.secret-file` 处。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Files beginning with dot characters are hidden from the output of `ls -l`;
|
||||
you must use `ls -la` to see them when listing directory contents.
|
||||
-->
|
||||
以句点开头的文件会在 `ls -l` 的输出中被隐藏起来;
|
||||
列举目录内容时你必须使用 `ls -la` 才能看到它们。
|
||||
{{< /note >}}
|
||||
|
||||
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dotfile-secret
data:
  .secret-file: dmFsdWUtMg0KDQo=
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-dotfiles-pod
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: dotfile-secret
  containers:
    - name: dotfile-test-container
      image: registry.k8s.io/busybox
      command:
        - ls
        - "-l"
        - "/etc/secret-volume"
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/etc/secret-volume"
```

<!--
### Use case: Secret visible to one container in a Pod

Consider a program that needs to handle HTTP requests, do some complex business
logic, and then sign some messages with an HMAC. Because it has complex
application logic, there might be an unnoticed remote file reading exploit in
the server, which could expose the private key to an attacker.
-->
### 使用场景:仅对 Pod 中一个容器可见的 Secret {#use-case-secret-visible-to-one-container-in-a-pod}

考虑一个需要处理 HTTP 请求,执行某些复杂的业务逻辑,之后使用 HMAC
来对某些消息进行签名的程序。因为这一程序的应用逻辑很复杂,
其中可能包含未被注意到的远程服务器文件读取漏洞,
这种漏洞可能会把私钥暴露给攻击者。

<!--
This could be divided into two processes in two containers: a frontend container
which handles user interaction and business logic, but which cannot see the
private key; and a signer container that can see the private key, and responds
to simple signing requests from the frontend (for example, over localhost networking).
-->
这一程序可以分隔成两个容器中的两个进程:前端容器要处理用户交互和业务逻辑,
但无法看到私钥;签名容器可以看到私钥,并对来自前端的简单签名请求作出响应
(例如,通过本地主机网络)。

<!--
With this partitioned approach, an attacker now has to trick the application
server into doing something rather arbitrary, which may be harder than getting
it to read a file.
-->
采用这种划分的方法,攻击者现在必须欺骗应用服务器来做一些其他操作,
而这些操作可能要比读取一个文件要复杂很多。
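
<!--
The split described above can be sketched as a single Pod in which only the signer
container mounts the Secret. The names used here (`hmac-signer`, `signing-key`, the
images and commands) are illustrative assumptions, not part of the original example.
-->
上述职责划分可以用一个 Pod 来示意:只有签名容器挂载 Secret,前端容器看不到私钥。
下面清单中的名称(`hmac-signer`、`signing-key`、镜像与命令)均为示意性假设,并非原文示例:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hmac-demo-pod
spec:
  volumes:
    - name: signing-key
      secret:
        secretName: hmac-signing-key
  containers:
    - name: frontend             # 处理用户交互和业务逻辑;未挂载 Secret,看不到私钥
      image: registry.k8s.io/busybox
      command: ["sleep", "3600"]
    - name: hmac-signer          # 仅此容器挂载私钥,响应来自前端的签名请求
      image: registry.k8s.io/busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: signing-key
          readOnly: true
          mountPath: "/etc/signing-key"
```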

<!--
### Alternatives to Secrets

@@ -287,7 +373,7 @@ The `DATA` column shows the number of data items stored in the Secret.
In this case, `0` means you have created an empty Secret.
-->
`DATA` 列显示 Secret 中保存的数据条目个数。
在这个例子中,`0` 意味着你刚刚创建了一个空的 Secret。

<!--
### Service account token Secrets

@@ -423,12 +509,13 @@ for information on referencing service account credentials from within Pods.
<!--
### Docker config Secrets

If you are creating a Secret to store credentials for accessing a container image registry,
you must use one of the following `type` values for that Secret:
-->
### Docker 配置 Secret {#docker-config-secrets}

如果你要创建 Secret 用来存放用于访问容器镜像仓库的凭据,则必须选用以下 `type`
值之一来创建 Secret:

- `kubernetes.io/dockercfg`
- `kubernetes.io/dockerconfigjson`
@@ -537,14 +624,19 @@ Docker configuration file):
}
```

{{< caution >}}
<!--
The `auth` value there is base64 encoded; it is obscured but not secret.
Anyone who can read that Secret can learn the registry access bearer token.

It is suggested to use [credential providers](/docs/tasks/administer-cluster/kubelet-credential-provider/) to dynamically and securely provide pull secrets on-demand.
-->
`auth` 值是 base64 编码的,其内容被屏蔽但未被加密。
任何能够读取该 Secret 的人都可以了解镜像库的访问令牌。

建议使用[凭据提供程序](/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider/)来动态、
安全地按需提供拉取 Secret。
{{< /caution >}}

<!--
### Basic authentication Secret

@@ -669,6 +761,8 @@ When using this type of Secret, the `tls.key` and the `tls.crt` key must be prov
in the `data` (or `stringData`) field of the Secret configuration, although the API
server doesn't actually validate the values for each key.

As an alternative to using `stringData`, you can use the `data` field to provide the base64 encoded certificate and private key. Refer to [Constraints on Secret names and data](#restriction-names-data) for more on this.

The following YAML contains an example config for a TLS Secret:
-->
### TLS Secret

@@ -680,6 +774,9 @@ TLS Secret 的一种典型用法是为 [Ingress](/zh-cn/docs/concepts/services-n
当使用此类型的 Secret 时,Secret 配置中的 `data` (或 `stringData`)字段必须包含
`tls.key` 和 `tls.crt` 主键,尽管 API 服务器实际上并不会对每个键的取值作进一步的合法性检查。

作为使用 `stringData` 的替代方法,你可以使用 `data` 字段来指定 base64 编码的证书和私钥。
有关详细信息,请参阅 [Secret 名称和数据的限制](#restriction-names-data)。

下面的 YAML 包含一个 TLS Secret 的配置示例:

```yaml
@@ -688,11 +785,13 @@ kind: Secret
metadata:
  name: secret-tls
type: kubernetes.io/tls
stringData:
  # 此例中的数据被截断
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
  tls.key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ...
```

@@ -720,36 +819,11 @@ kubectl create secret tls my-tls-secret \
```

<!--
The public/private key pair must exist beforehand. The public key certificate for `--cert` must be .PEM encoded
and must match the given private key for `--key`.
-->
公钥/私钥对必须事先存在,`--cert` 的公钥证书必须采用 .PEM 编码,
并且必须与 `--key` 的给定私钥匹配。

<!--
### Bootstrap token Secrets

@@ -1030,7 +1104,6 @@ spec:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:

@@ -1549,99 +1622,6 @@ spec:
    image: myClientImage
```

<!--
## Immutable Secrets {#secret-immutable}
-->

@@ -18,8 +18,8 @@ weight: 60
<!-- overview -->

<!--
Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} is a key security control
to ensure that cluster users and workloads have only the access to resources required to
execute their roles. It is important to ensure that, when designing permissions for cluster
users, the cluster administrator understands the areas where privilege escalation could occur,
to reduce the risk of excessive access leading to security incidents.

@@ -48,28 +48,28 @@ Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}}
### 最小特权 {#least-privilege}

<!--
Ideally, minimal RBAC rights should be assigned to users and service accounts. Only permissions
explicitly required for their operation should be used. While each cluster will be different,
some general rules that can be applied are:
-->
理想情况下,分配给用户和服务帐户的 RBAC 权限应该是最小的。
仅应使用操作明确需要的权限。虽然每个集群会有所不同,但可以应用的一些常规规则:

<!--
- Assign permissions at the namespace level where possible. Use RoleBindings as opposed to
  ClusterRoleBindings to give users rights only within a specific namespace.
- Avoid providing wildcard permissions when possible, especially to all resources.
  As Kubernetes is an extensible system, providing wildcard access gives rights
  not just to all object types that currently exist in the cluster, but also to all object types
  which are created in the future.
- Administrators should not use `cluster-admin` accounts except where specifically needed.
  Providing a low privileged account with
  [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
  can avoid accidental modification of cluster resources.
- Avoid adding users to the `system:masters` group. Any user who is a member of this group
  bypasses all RBAC rights checks and will always have unrestricted superuser access, which cannot be
  revoked by removing RoleBindings or ClusterRoleBindings. As an aside, if a cluster is
  using an authorization webhook, membership of this group also bypasses that webhook (requests
  from users who are members of that group are never sent to the webhook).
-->
- 尽可能在命名空间级别分配权限。授予用户在特定命名空间中的权限时使用 RoleBinding
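
<!--
As a sketch of namespace-scoped least privilege, the following RoleBinding grants the
built-in `view` ClusterRole within a single namespace only; the subject and namespace
names are illustrative assumptions.
-->
作为在命名空间级别分配最小权限的一个示意,下面的 RoleBinding 仅在单个命名空间中授予内置的
`view` ClusterRole;其中的主体名称和命名空间名称均为示意性假设:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only
  namespace: dev-team-1          # 权限仅限于此命名空间
subjects:
  - kind: User
    name: jane                   # 示意用户名
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                     # 内置的只读 ClusterRole
  apiGroup: rbac.authorization.k8s.io
```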
@@ -92,20 +92,20 @@ some general rules that can be applied are :

<!--
Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions
(for example, any of the rights listed under [privilege escalation risks](#privilege-escalation-risks)).
In cases where a workload requires powerful permissions, consider the following practices:

- Limit the number of nodes running powerful pods. Ensure that any DaemonSets you run
  are necessary and are run with least privilege to limit the blast radius of container escapes.
- Avoid running powerful pods alongside untrusted or publicly-exposed ones. Consider using
  [Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/),
  [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or
  [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
  to ensure pods don't run alongside untrusted or less-trusted Pods. Pay special attention to
  situations where less-trustworthy Pods are not meeting the **Restricted** Pod Security Standard.
-->
理想情况下,不应为 Pod 分配具有强大权限(例如,在[特权提级的风险](#privilege-escalation-risks)中列出的任一权限)
的服务帐户。如果工作负载需要比较大的权限,请考虑以下做法:

- 限制运行此类 Pod 的节点数量。确保你运行的任何 DaemonSet 都是必需的,
  并且以最小权限运行,以限制容器逃逸的影响范围。

@@ -119,9 +119,9 @@ In cases where a workload requires powerful permissions, consider the following
<!--
### Hardening

Kubernetes defaults to providing access which may not be required in every cluster. Reviewing
the RBAC rights provided by default can provide opportunities for security hardening.
In general, changes should not be made to rights provided to `system:` accounts; some options
to harden cluster rights exist:
-->
### 加固 {#hardening}
@@ -148,7 +148,7 @@ Kubernetes 默认提供访问权限并非是每个集群都需要的。
<!--
### Periodic review

It is vital to periodically review the Kubernetes RBAC settings for redundant entries and
possible privilege escalations.
If an attacker is able to create a user account with the same name as a deleted user,
they can automatically inherit all the rights of the deleted user, especially the

@@ -166,7 +166,7 @@ rights assigned to that user.
Within Kubernetes RBAC there are a number of privileges which, if granted, can allow a user or a service account
to escalate their privileges in the cluster or affect systems outside the cluster.

This section is intended to provide visibility of the areas where cluster operators
should take care, to ensure that they do not inadvertently allow for more access to clusters than intended.
-->
## Kubernetes RBAC - 权限提权的风险 {#privilege-escalation-risks}
@@ -222,8 +222,9 @@ or other (third party) mechanisms to implement that enforcement.
可以运行特权 Pod 的用户可以利用该访问权限获得节点访问权限,
并可能进一步提升他们的特权。如果你不完全信任某用户或其他主体,
不相信他们能够创建比较安全且相互隔离的 Pod,你应该强制实施 **Baseline**
或 **Restricted** Pod 安全标准。你可以使用
[Pod 安全性准入](/zh-cn/docs/concepts/security/pod-security-admission/)或其他(第三方)
机制来强制实施这些限制。

<!--
For these reasons, namespaces should be used to separate resources requiring different levels of
@@ -248,12 +249,13 @@ to the underlying host filesystem(s) on the associated node. Granting that abili
这意味着 Pod 将可以访问对应节点上的下层主机文件系统。授予该能力会带来安全风险。

<!--
There are many ways a container with unrestricted access to the host filesystem can escalate privileges, including
reading data from other containers, and abusing the credentials of system services, such as Kubelet.

You should only allow access to create PersistentVolume objects for:
-->
不受限制地访问主机文件系统的容器可以通过多种方式提升特权,包括从其他容器读取数据以及滥用系统服务
(例如 kubelet)的凭据。

你应该只允许以下实体具有创建 PersistentVolume 对象的访问权限:

@@ -268,7 +270,7 @@ You should only allow access to create PersistentVolume objects for:
这通常由 Kubernetes 提供商或操作员在安装 CSI 驱动程序时进行设置。

<!--
Where access to persistent storage is required, trusted administrators should create
PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage.
-->
在需要访问持久存储的地方,受信任的管理员应创建 PersistentVolume,而受约束的用户应使用
@@ -277,21 +279,21 @@ PersistentVolumeClaim 来访问该存储。
<!--
### Access to `proxy` subresource of Nodes

Users with access to the proxy sub-resource of node objects have rights to the Kubelet API,
which allows for command execution on every pod on the node(s) to which they have rights.
This access bypasses audit logging and admission control, so care should be taken before
granting rights to this resource.
-->
### 访问 Node 的 `proxy` 子资源 {#access-to-proxy-subresource-of-nodes}

有权访问 Node 对象的 proxy 子资源的用户有权访问 kubelet API,
这允许在他们有权访问的节点上的所有 Pod 上执行命令。
此访问绕过审计日志记录和准入控制,因此在授予对此资源的权限前应小心。

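
<!--
For illustration, a ClusterRole carrying this risky access would look like the sketch
below; the role name is an illustrative assumption.
-->
作为示意,携带这一高风险访问权限的 ClusterRole 形如下面的清单;其中的角色名称为示意性假设:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-proxy-access        # 示意名称;授予前应仔细评估
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy"]   # 可访问 kubelet API,绕过审计日志与准入控制
    verbs: ["get", "create"]
```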
<!--
### Escalate verb

Generally, the RBAC system prevents users from creating clusterroles with more rights than the user possesses.
The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update),
users with this right can effectively escalate their privileges.
-->

@@ -305,8 +307,8 @@ users with this right can effectively escalate their privileges.
<!--
### Bind verb

Similar to the `escalate` verb, granting users this right allows for the bypass of Kubernetes
in-built protections against privilege escalation, allowing users to create bindings to
roles with rights they do not already have.
-->
### bind 动词 {#bind-verb}
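
<!--
If the `bind` verb must be granted, it can be narrowed with `resourceNames` so that only
specific roles may be bound; the role and namespace names below are illustrative assumptions.
-->
如果确实需要授予 `bind` 动词,可以通过 `resourceNames` 将其收窄为只能绑定特定的角色;
下面清单中的角色名称和命名空间名称均为示意性假设:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: limited-binder
  namespace: dev-team-1
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["rolebindings"]
    verbs: ["create"]            # 允许创建 RoleBinding
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterroles"]
    verbs: ["bind"]
    resourceNames: ["view"]      # 但仅允许绑定内置的 view 角色
```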
@@ -317,8 +319,8 @@ roles with rights they do not already have.
<!--
### Impersonate verb

This verb allows users to impersonate and gain the rights of other users in the cluster.
Care should be taken when granting it, to ensure that excessive permissions cannot be gained
via one of the impersonated accounts.
-->
### impersonate 动词 {#impersonate-verb}
@@ -329,9 +331,9 @@ via one of the impersonated accounts.
<!--
### CSRs and certificate issuing

The CSR API allows for users with `create` rights to CSRs and `update` rights on `certificatesigningrequests/approval`
where the signer is `kubernetes.io/kube-apiserver-client` to create new client certificates
which allow users to authenticate to the cluster. Those client certificates can have arbitrary
names including duplicates of Kubernetes system components. This will effectively allow for privilege escalation.
-->
### CSR 和证书颁发 {#csrs-and-certificate-issuing}
@@ -346,8 +348,8 @@ CSR API 允许用户拥有 `create` CSR 的权限和 `update`
<!--
### Token request

Users with `create` rights on `serviceaccounts/token` can create TokenRequests to issue
tokens for existing service accounts.
-->
### 令牌请求 {#token-request}

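
<!--
The permission in question can be expressed as the following rule sketch; the role and
namespace names are illustrative assumptions.
-->
这里所说的权限可以用下面的规则来示意;其中的角色名称和命名空间名称为示意性假设:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: token-requester
  namespace: dev-team-1
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]            # 可为该命名空间中现有的 ServiceAccount 签发令牌
```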
@@ -357,8 +359,8 @@ TokenRequest 来发布现有服务帐户的令牌。
<!--
### Control admission webhooks

Users with control over `validatingwebhookconfigurations` or `mutatingwebhookconfigurations`
can control webhooks that can read any object admitted to the cluster, and in the case of
mutating webhooks, also mutate admitted objects.
-->
### 控制准入 Webhook {#control-admission-webhooks}
@@ -371,6 +373,7 @@ mutating webhooks, also mutate admitted objects.
## Kubernetes RBAC - denial of service risks {#denial-of-service-risks}

### Object creation denial-of-service {#object-creation-dos}

Users who have rights to create objects in a cluster may be able to create sufficiently large
objects to create a denial of service condition either based on the size or number of objects, as discussed in
[etcd used by Kubernetes is vulnerable to OOM attack](https://github.com/kubernetes/kubernetes/issues/107325). This may be

@@ -397,4 +400,4 @@ to limit the quantity of objects which can be created.
* To learn more about RBAC, see the [RBAC documentation](/docs/reference/access-authn-authz/rbac/).
-->

* 了解有关 RBAC 的更多信息,请参阅 [RBAC 文档](/zh-cn/docs/reference/access-authn-authz/rbac/)。

@@ -203,10 +203,10 @@ repository means the secret is available to everyone who can read the manifest.
同时将该 Secret 数据编码为 base64,
那么共享此文件或将其检入一个源代码仓库就意味着有权读取该清单的所有人都能使用该 Secret。

{{< caution >}}
<!--
Base64 encoding is _not_ an encryption method; it provides no additional
confidentiality over plain text.
-->
Base64 编码**不是**一种加密方法,它没有为纯文本提供额外的保密机制。
{{< /caution >}}

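
<!--
A quick way to see that base64 is reversible encoding rather than encryption is that
decoding needs no key at all. The sample string below is an illustrative value,
not data taken from this page.
-->
要直观地看到 base64 只是可逆的编码而不是加密,可以注意到解码不需要任何密钥。
下面的示例字符串仅为示意性取值:

```shell
# base64 解码不需要密钥;任何能读到编码值的人都能还原原文
encoded="MWYyZDFlMmU2N2Rm"
printf '%s' "$encoded" | base64 --decode
# 输出:1f2d1e2e67df
```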

@@ -92,7 +92,7 @@ Kubernetes 作为一个项目,目前支持和维护
<!--
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
  lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [FortiADC Ingress Controller](https://docs.fortinet.com/document/fortiadc/7.0.0/fortiadc-ingress-controller/742835/fortiadc-ingress-controller-overview) supports the Kubernetes Ingress resources and allows you to manage FortiADC objects from Kubernetes
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
  which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for

@@ -105,7 +105,7 @@ Kubernetes 作为一个项目,目前支持和维护
* F5 BIG-IP 的
  [用于 Kubernetes 的容器 Ingress 服务](https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest)
  让你能够使用 Ingress 来配置 F5 BIG-IP 虚拟服务器。
* [FortiADC Ingress 控制器](https://docs.fortinet.com/document/fortiadc/7.0.0/fortiadc-ingress-controller/742835/fortiadc-ingress-controller-overview)
  支持 Kubernetes Ingress 资源,并允许你从 Kubernetes 管理 FortiADC 对象。
* [Gloo](https://gloo.solo.io) 是一个开源的、基于 [Envoy](https://www.envoyproxy.io) 的
  Ingress 控制器,能够提供 API 网关功能。

@@ -31,8 +31,8 @@ hide_summary: true # Listed separately in section index

<!--
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful completions. When a specified number
of successful completions is reached, the task (i.e., Job) is complete. Deleting a Job will clean up
the Pods it created. Suspending a Job will delete its active Pods until the Job
is resumed again.

@@ -65,7 +65,7 @@ Job 会创建一个或者多个 Pod,并将继续重试 Pod 的执行,直到
<!--
## Running an example Job

Here is an example Job config. It computes π to 2000 places and prints it out.
It takes around 10s to complete.
-->
## 运行示例 Job {#running-an-example-job}

@@ -215,7 +215,7 @@ pi-5rwd7
```

<!--
Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
with the name from each Pod in the returned list.

View the standard output of one of the pods:

@@ -253,9 +253,9 @@ The output is similar to this:
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields.

When the control plane creates new Pods for a Job, the `.metadata.name` of the
Job is part of the basis for naming those Pods. The name of a Job must be a valid
[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
the name should follow the more restrictive rules for a
[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
Even when the name is a DNS subdomain, the name must be no longer than 63

@@ -284,17 +284,21 @@ Job 配置还需要一个 [`.spec` 节](https://git.k8s.io/community/contributor
Job labels will have `batch.kubernetes.io/` prefix for `job-name` and `controller-uid`.
-->
Job 标签将为 `job-name` 和 `controller-uid` 加上 `batch.kubernetes.io/` 前缀。

<!--
|
||||
### Pod Template
|
||||
|
||||
The `.spec.template` is the only required field of the `.spec`.
|
||||
|
||||
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.
|
||||
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates).
|
||||
It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}},
|
||||
except it is nested and does not have an `apiVersion` or `kind`.
|
||||
|
||||
In addition to required fields for a Pod, a pod template in a Job must specify appropriate
|
||||
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
|
||||
|
||||
Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed.
|
||||
Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
|
||||
equal to `Never` or `OnFailure` is allowed.
|
||||
-->
|
||||
### Pod 模板 {#pod-template}
|
||||
|
||||
|
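结合上文对 Pod 模板的要求(合适的标签与受限的重启策略),下面给出一个最小的示意清单;其中名称、镜像与命令均为假设的占位值:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pod-template-sketch  # 假设的名称,仅作示意
spec:
  template:                  # .spec.template 是 .spec 中唯一的必需字段
    spec:
      containers:
      - name: main
        image: busybox:1.36  # 占位镜像
        command: ["sh", "-c", "echo hello"]
      restartPolicy: Never   # 只允许 Never 或 OnFailure
```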
<!--
### Pod selector

The `.spec.selector` field is optional. In almost all cases you should not specify it.
See section [specifying your own pod selector](#specifying-your-own-pod-selector).
-->
### Pod 选择算符 {#pod-selector}

字段 `.spec.selector` 是可选的。在绝大多数场合,你都不需要为其赋值。
参阅[设置自己的 Pod 选择算符](#specifying-your-own-pod-selector)。

<!--
### Parallel execution for Jobs {#parallel-jobs}
   - when using `.spec.completionMode="Indexed"`, each Pod gets a different index in the range 0 to `.spec.completions-1`.
1. Parallel Jobs with a *work queue*:
   - do not specify `.spec.completions`, default to `.spec.parallelism`.
   - the Pods must coordinate amongst themselves or an external service to determine
     what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
   - each Pod is independently capable of determining whether or not all its peers are done,
     and thus that the entire Job is done.
   - when _any_ Pod from the Job terminates with success, no new Pods are created.
   - once at least one Pod has terminated with success and all Pods are terminated,
     then the Job is completed with success.
   - once any Pod has exited with success, no other Pod should still be doing any work
     for this task or writing any output. They should all be in the process of exiting.
-->
1. 非并行 Job:
   - 通常只启动一个 Pod,除非该 Pod 失败。
   所有 Pod 都应启动退出过程。

<!--
For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset.
When both are unset, both are defaulted to 1.

For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed.
You can set `.spec.parallelism`, or leave it unset and it will default to 1.
For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
a non-negative integer.

For more information about how to make use of the different types of job,
see the [job patterns](#job-patterns) section.
-->
对于**非并行**的 Job,你可以不设置 `spec.completions` 和 `spec.parallelism`。
这两个属性都不设置时,均取默认值 1。
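为了说明上述字段的组合方式,下面是一个**具有确定完成计数**的并行 Job 的示意片段;其中名称、镜像与数值均为演示用的假设:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: fixed-count-sketch   # 假设的名称,仅作示意
spec:
  completions: 10            # 需要 10 次成功完成
  parallelism: 3             # 任意时刻最多并行运行 3 个 Pod
  template:
    spec:
      containers:
      - name: worker
        image: busybox:1.36  # 占位镜像
        command: ["sh", "-c", "echo 处理一个工作条目"]
      restartPolicy: Never
```

对于**工作队列**型 Job,则应省略 `completions`,只设置 `parallelism`。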

<!--
- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of
  remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however.
- If the Job {{< glossary_tooltip term_id="controller" >}} has not had time to react.
- If the Job controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.),
completion is homologous to each other. Note that Jobs that have null
`.spec.completions` are implicitly `NonIndexed`.
- `Indexed`: the Pods of a Job get an associated completion index from 0 to
  `.spec.completions-1`. The index is available through four mechanisms:
  - The Pod annotation `batch.kubernetes.io/job-completion-index`.
  - The Pod label `batch.kubernetes.io/job-completion-index` (for v1.28 and later). Note
    the feature gate `PodIndexLabel` must be enabled to use this label, and it is enabled
    by default.
  - As part of the Pod hostname, following the pattern `$(job-name)-$(index)`.
    When you use an Indexed Job in combination with a
    {{< glossary_tooltip term_id="Service" >}}, Pods within the Job can use
设值时认为 Job 已经完成。换言之,每个 Job 完成事件都是独立无关且同质的。
要注意的是,当 `.spec.completions` 取值为 null 时,Job 被隐式处理为 `NonIndexed`。
- `Indexed`:Job 的 Pod 会获得对应的完成索引,取值为 0 到 `.spec.completions-1`。
  该索引可以通过四种方式获取:
  - Pod 注解 `batch.kubernetes.io/job-completion-index`。
  - Pod 标签 `batch.kubernetes.io/job-completion-index`(适用于 v1.28 及更高版本)。
    请注意,必须启用 `PodIndexLabel` 特性门控才能使用此标签,该门控默认被启用。
  - 作为 Pod 主机名的一部分,遵循模式 `$(job-name)-$(index)`。
    当你同时使用带索引的 Job(Indexed Job)与 {{< glossary_tooltip term_id="Service" >}},
    Job 中的 Pod 可以通过 DNS 使用确切的主机名互相寻址。
    有关如何配置的更多信息,请参阅[带 Pod 间通信的 Job](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/)。
  - 对于容器化的任务,在环境变量 `JOB_COMPLETION_INDEX` 中。

<!--
The Job is considered complete when there is one successfully completed Pod
for each index. For more information about how to use this mode, see
[Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/).
-->
当每个索引都对应一个成功完成的 Pod 时,Job 被认为是已完成的。
关于如何使用这种模式的更多信息,可参阅
[用带索引的 Job 执行基于静态任务分配的并行处理](/zh-cn/docs/tasks/job/indexed-parallel-processing-static/)。
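下面的示意清单展示了 `Indexed` 完成模式,以及容器如何通过 `JOB_COMPLETION_INDEX` 环境变量读取自己的索引;名称与镜像为假设的占位值:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-sketch       # 假设的名称,仅作示意
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed
  template:
    spec:
      containers:
      - name: worker
        image: busybox:1.36  # 占位镜像
        # 每个 Pod 从内置环境变量中读取自己的完成索引(0 到 4)
        command: ["sh", "-c", "echo 正在处理索引 $JOB_COMPLETION_INDEX"]
      restartPolicy: Never
```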
## Handling Pod and container failures

A container in a Pod may fail for a number of reasons, such as because the process in it exited with
a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this
happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays
on the node, but the container is re-run. Therefore, your program needs to handle the case when it is
restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`.
See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`.
-->
<!--
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller
starts a new Pod. This means that your application needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
caused by previous runs.
-->
整个 Pod 也可能会失败,且原因各不相同。
请参阅 [Pod 回退失效策略](#pod-backoff-failure-policy)。
但你可以通过设置 Job 的 [Pod 失效策略](#pod-failure-policy)自定义对 Pod 失效的处理方式。

<!--
Additionally, you can choose to count the pod failures independently for each
index of an [Indexed](#completion-mode) Job by setting the `.spec.backoffLimitPerIndex` field
(for more information, see [backoff limit per index](#backoff-limit-per-index)).
-->
此外,你可以通过设置 `.spec.backoffLimitPerIndex` 字段,
选择为 [Indexed](#completion-mode) Job 的每个索引独立计算 Pod 失败次数
(细节参阅[逐索引的回退限制](#backoff-limit-per-index))。

<!--
Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and
`.spec.template.spec.restartPolicy = "Never"`, the same program may
sometimes be started twice.

If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
-->
注意,即使你将 `.spec.parallelism` 设置为 1,且将 `.spec.completions` 设置为
1,并且 `.spec.template.spec.restartPolicy` 设置为 "Never",同一程序仍然有可能被启动两次。
exponential back-off delay (10s, 20s, 40s ...) capped at six minutes.

The number of retries is calculated in two ways:

- The number of Pods with `.status.phase = "Failed"`.
- When using `restartPolicy = "OnFailure"`, the number of retries in all the
  containers of Pods with `.status.phase` equal to `Pending` or `Running`.
{{< note >}}
<!--
If your job has `restartPolicy = "OnFailure"`, keep in mind that your Pod running the Job
will be terminated once the job backoff limit has been reached. This can make debugging
the Job's executable more difficult. We suggest setting
`restartPolicy = "Never"` when debugging the Job or using a logging system to ensure output
from failed Jobs is not lost inadvertently.
-->
或者使用日志系统来确保失效 Job 的输出不会意外遗失。
{{< /note >}}

<!--
### Backoff limit per index {#backoff-limit-per-index}
-->
### 逐索引的回退限制 {#backoff-limit-per-index}

{{< feature-state for_k8s_version="v1.28" state="alpha" >}}

{{< note >}}
<!--
You can only configure the backoff limit per index for an [Indexed](#completion-mode) Job if you
have the `JobBackoffLimitPerIndex` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled in your cluster.
-->
只有在集群中启用了 `JobBackoffLimitPerIndex`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/),
才能为 [Indexed](#completion-mode) Job 配置逐索引的回退限制。
{{< /note >}}

<!--
When you run an [indexed](#completion-mode) Job, you can choose to handle retries
for pod failures independently for each index. To do so, set the
`.spec.backoffLimitPerIndex` to specify the maximal number of pod failures
per index.
-->
运行 [Indexed](#completion-mode) Job 时,你可以选择对每个索引独立处理 Pod 失败的重试。
为此,可以设置 `.spec.backoffLimitPerIndex` 来指定每个索引的最大 Pod 失败次数。

<!--
When the per-index backoff limit is exceeded for an index, Kubernetes considers the index as failed and adds it to the
`.status.failedIndexes` field. The succeeded indexes, those with successfully
executed pods, are recorded in the `.status.completedIndexes` field, regardless of whether you set
the `backoffLimitPerIndex` field.
-->
当某个索引超过逐索引的回退限制后,Kubernetes 将视该索引为已失败,并将其添加到 `.status.failedIndexes` 字段中。
无论你是否设置了 `backoffLimitPerIndex` 字段,已成功执行的索引(具有成功执行的 Pod)将被记录在
`.status.completedIndexes` 字段中。

<!--
Note that a failing index does not interrupt execution of other indexes.
Once all indexes finish for a Job where you specified a backoff limit per index,
if at least one of those indexes did fail, the Job controller marks the overall
Job as failed, by setting the Failed condition in the status. The Job gets
marked as failed even if some, potentially nearly all, of the indexes were
processed successfully.
-->
请注意,失败的索引不会中断其他索引的执行。一旦在指定了逐索引回退限制的 Job 中的所有索引完成,
如果其中至少有一个索引失败,Job 控制器会通过在状态中设置 Failed 状况将整个 Job 标记为失败。
即使其中一些(可能几乎全部)索引已被成功处理,该 Job 也会被标记为失败。

<!--
You can additionally limit the maximal number of indexes marked failed by
setting the `.spec.maxFailedIndexes` field.
When the number of failed indexes exceeds the `maxFailedIndexes` field, the
Job controller triggers termination of all remaining running Pods for that Job.
Once all pods are terminated, the entire Job is marked failed by the Job
controller, by setting the Failed condition in the Job status.
-->
你还可以通过设置 `.spec.maxFailedIndexes` 字段来限制标记为失败的最大索引数。
当失败的索引数量超过 `maxFailedIndexes` 字段时,Job 控制器会对该 Job
的运行中的所有余下 Pod 触发终止操作。一旦所有 Pod 被终止,Job 控制器将通过设置 Job
状态中的 Failed 状况将整个 Job 标记为失败。

<!--
Here is an example manifest for a Job that defines a `backoffLimitPerIndex`:
-->
以下是定义 `backoffLimitPerIndex` 的 Job 示例清单:

{{< codenew file="/controllers/job-backoff-limit-per-index-example.yaml" >}}

<!--
In the example above, the Job controller allows for one restart for each
of the indexes. When the total number of failed indexes exceeds 5, then
the entire Job is terminated.

Once the job is finished, the Job status looks as follows:
-->
在上面的示例中,Job 控制器允许每个索引重新启动一次。
当失败的索引总数超过 5 个时,整个 Job 将被终止。

Job 完成后,该 Job 的状态如下所示:

```sh
kubectl get -o yaml job job-backoff-limit-per-index-example
```

<!--
# 1 succeeded pod for each of 5 succeeded indexes
# 2 failed pods (1 retry) for each of 5 failed indexes
-->
```yaml
status:
  completedIndexes: 1,3,5,7,9
  failedIndexes: 0,2,4,6,8
  succeeded: 5 # 5 个成功的索引各有 1 个成功的 Pod
  failed: 10 # 5 个失败的索引各有 2 个失败的 Pod(每个重试 1 次)
  conditions:
  - message: Job has failed indexes
    reason: FailedIndexes
    status: "True"
    type: Failed
```

<!--
Additionally, you may want to use the per-index backoff along with a
[pod failure policy](#pod-failure-policy). When using
per-index backoff, there is a new `FailIndex` action available which allows you to
avoid unnecessary retries within an index.
-->
此外,你可能想要结合使用逐索引回退与 [Pod 失败策略](#pod-failure-policy)。
在使用逐索引回退时,有一个新的 `FailIndex` 操作可用,它让你避免就某个索引进行不必要的重试。

<!--
### Pod failure policy {#pod-failure-policy}
-->
enabled in your cluster. Additionally, it is recommended
to enable the `PodDisruptionConditions` feature gate in order to be able to detect and handle
Pod disruption conditions in the Pod failure policy (see also:
[Pod disruption conditions](/docs/concepts/workloads/pods/disruptions#pod-disruption-conditions)).
Both feature gates are available in Kubernetes {{< skew currentVersion >}}.
-->
只有你在集群中启用了
`JobPodFailurePolicy` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/),
在某些情况下,你可能希望更好地控制 Pod 失效的处理方式,
而不是仅限于 [Pod 回退失效策略](#pod-backoff-failure-policy)所提供的控制能力,
后者是基于 Job 的 `.spec.backoffLimit` 实现的。以下是一些使用场景:

<!--
* To optimize costs of running workloads by avoiding unnecessary Pod restarts,
  you can terminate a Job as soon as one of its Pods fails with an exit code
  indicating a software bug.
* To guarantee that your Job finishes even if there are disruptions, you can
  ignore Pod failures caused by disruptions (such as {{< glossary_tooltip text="preemption" term_id="preemption" >}},
  {{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
  or {{< glossary_tooltip text="taint" term_id="taint" >}}-based eviction) so
  that they don't count towards the `.spec.backoffLimit` limit of retries.
-->

<!--
These are some requirements and semantics of the API:

- if you want to use a `.spec.podFailurePolicy` field for a Job, you must
  also define that Job's pod template with `.spec.restartPolicy` set to `Never`.
- the Pod failure policy rules you specify under `spec.podFailurePolicy.rules`
    should not be incremented and a replacement Pod should be created.
  - `Count`: use to indicate that the Pod should be handled in the default way.
    The counter towards the `.spec.backoffLimit` should be incremented.
  - `FailIndex`: use this action along with [backoff limit per index](#backoff-limit-per-index)
    to avoid unnecessary retries within the index of a failed pod.
-->
下面是此 API 的一些要求和语义:
- 如果你想在 Job 中使用 `.spec.podFailurePolicy` 字段,
  - `FailJob`:表示 Pod 的任务应标记为 Failed,并且所有正在运行的 Pod 应被终止。
  - `Ignore`:表示 `.spec.backoffLimit` 的计数器不应该增加,应该创建一个替换的 Pod。
  - `Count`:表示 Pod 应该以默认方式处理。`.spec.backoffLimit` 的计数器应该增加。
  - `FailIndex`:表示使用此操作以及[逐索引回退限制](#backoff-limit-per-index)来避免就失败的 Pod
    的索引进行不必要的重试。

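结合上述动作语义,下面是一个 Pod 失败策略的示意清单;其中名称、镜像与退出码均为演示用的假设:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pod-failure-policy-sketch  # 假设的名称,仅作示意
spec:
  completions: 8
  parallelism: 2
  backoffLimit: 6
  template:
    spec:
      restartPolicy: Never         # 使用 podFailurePolicy 时必须设置为 Never
      containers:
      - name: main
        image: busybox:1.36        # 占位镜像
        command: ["sh", "-c", "echo 执行任务"]
  podFailurePolicy:
    rules:
    - action: FailJob              # 退出码 42 在此假设为不可重试的软件缺陷
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    - action: Ignore               # 因节点干扰导致的失败不计入 backoffLimit
      onPodConditions:
      - type: DisruptionTarget
```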
{{< note >}}
<!--
这确保已删除的 Pod 的 Finalizer 被 Job 控制器移除。
{{< /note >}}

{{< note >}}
<!--
Starting with Kubernetes v1.28, when Pod failure policy is used, the Job controller recreates
terminating Pods only once these Pods reach the terminal `Failed` phase. This behavior is similar
to `podReplacementPolicy: Failed`. For more information, see [Pod replacement policy](#pod-replacement-policy).
-->
自 Kubernetes v1.28 开始,当使用 Pod 失败策略时,Job 控制器仅在这些 Pod 达到终止的
`Failed` 阶段时才会重新创建终止中的 Pod。这种行为类似于 `podReplacementPolicy: Failed`。
细节参阅 [Pod 替换策略](#pod-replacement-policy)。
{{< /note >}}
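作为对照,下面的片段示意了显式设置 `podReplacementPolicy: Failed` 的写法;这里假设集群已启用 v1.28 的 `JobPodReplacementPolicy` 特性门控,名称与镜像为占位值:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pod-replacement-sketch   # 假设的名称,仅作示意
spec:
  podReplacementPolicy: Failed   # 仅在原 Pod 完全进入 Failed 阶段后才创建替换 Pod
  template:
    spec:
      containers:
      - name: main
        image: busybox:1.36      # 占位镜像
        command: ["sh", "-c", "echo 执行任务"]
      restartPolicy: Never
```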
<!--
## Job termination and cleanup

When a Job completes, no more Pods are created, but the Pods are [usually](#pod-backoff-failure-policy) not deleted either.
Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output.
The job object also remains after it is completed so that you can view its status. It is up to the user to delete
old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`).
When you delete the job using `kubectl`, all the pods it created are deleted too.
-->
## Job 终止与清理 {#job-termination-and-cleanup}

当使用 `kubectl` 来删除 Job 时,该 Job 所创建的 Pod 也会被删除。

<!--
By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`)
or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the
`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will
be marked as failed and any running Pods will be terminated.

Another way to terminate a Job is by setting an active deadline.
Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status
will become `type: Failed` with `reason: DeadlineExceeded`.
-->
默认情况下,Job 会持续运行,除非某个 Pod 失败(`restartPolicy=Never`)
或者某个容器出错退出(`restartPolicy=OnFailure`)。
并且 Job 的状态更新为 `type: Failed` 及 `reason: DeadlineExceeded`。

<!--
Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`.
Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once
it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.

Example:
-->
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```
<!--
Note that both the Job spec and the [Pod template spec](/docs/concepts/workloads/pods/init-containers/#detailed-behavior)
within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level.

Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job itself:
there is no automatic Job restart once the Job status is `type: Failed`.
That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds`
and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve.
-->
注意 Job 规约和 Job 中的
[Pod 模板规约](/zh-cn/docs/concepts/workloads/pods/init-containers/#detailed-behavior)
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```
<!--
## Job patterns

The Job object can be used to support reliable parallel execution of Pods. The Job object is not
designed to support closely-communicating parallel processes, as commonly found in scientific
computing. It does support parallel processing of a set of independent but related *work items*.
These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a
NoSQL database to scan, and so on.
-->
数据库中要扫描的主键范围等等。

<!--
In a complex system, there may be multiple different sets of work items. Here we are just
considering one set of work items that the user wants to manage together — a *batch job*.

There are several different patterns for parallel computation, each with strengths and weaknesses.
并行计算的模式有好多种,每种都有自己的强项和弱点。这里要权衡的因素有:

<!--
- One Job object for each work item, vs. a single Job object for all work items. The latter is
  better for large numbers of work items. The former creates some overhead for the user and for the
  system to manage large numbers of Job objects.
- Number of pods created equals number of work items, vs. each Pod can process multiple work items.
  The former typically requires less modification to existing code and containers. The latter
  is better for large numbers of work items, for similar reasons to the previous bullet.
- Several approaches use a work queue. This requires running a queue service,
  and modifications to the existing program or container to make it use the work queue.
  Other approaches are easier to adapt to an existing containerised application.
-->
| [Queue with Variable Pod Count] | ✓ | ✓ | |
| [Indexed Job with Static Work Assignment] | ✓ | | ✓ |
| [Job Template Expansion] | | | ✓ |
| [Job with Pod-to-Pod Communication] | ✓ | sometimes | sometimes |
-->
下面是对这些权衡的汇总,第 2 到 4 列对应上面的权衡比较。
模式的名称对应了相关示例和更详细描述的链接。

<!--
When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
This means that all pods for a task will have the same command line and the same
image, the same volumes, and (almost) the same environment variables. These patterns
are different ways to arrange for pods to work on different things.

This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
并在 Job 恢复执行时复位。

<!--
When you suspend a Job, any running Pods that don't have a status of `Completed`
will be [terminated](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
with a SIGTERM signal. The Pod's graceful termination period will be honored and
your Pod must handle this signal in this period. This may involve saving
progress for later or undoing changes. Pods terminated this way will not count
@ -1272,27 +1435,15 @@ Job 被恢复执行时,Pod 创建操作立即被重启执行。
|
|||
|
||||
{{< feature-state for_k8s_version="v1.27" state="stable" >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
In order to use this behavior, you must enable the `JobMutableNodeSchedulingDirectives`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/).
|
||||
It is enabled by default.
|
||||
-->
|
||||
为了使用此功能,你必须在 [API 服务器](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)上启用
|
||||
`JobMutableNodeSchedulingDirectives` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。
|
||||
默认情况下启用。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
In most cases a parallel job will want the pods to run with constraints,
|
||||
In most cases, a parallel job will want the pods to run with constraints,
|
||||
like all in the same zone, or all either on GPU model x or y but not a mix of both.
|
||||
-->
|
||||
在大多数情况下,并行作业会希望 Pod 在一定约束条件下运行,
|
||||
比如所有的 Pod 都在同一个区域,或者所有的 Pod 都在 GPU 型号 x 或 y 上,而不是两者的混合。
|
||||
|
||||
<!--
|
||||
The [suspend](#suspending-a-job) field is the first step towards achieving those semantics. Suspend allows a
|
||||
custom queue controller to decide when a job should start; However, once a job is unsuspended,
|
||||
a custom queue controller has no influence on where the pods of a job will actually land.
|
||||
-->
|
||||
|
@ -1302,8 +1453,8 @@ suspend 允许自定义队列控制器,以决定工作何时开始;然而,
|
|||
|
||||
<!--
|
||||
This feature allows updating a Job's scheduling directives before it starts, which gives custom queue
|
||||
controllers the ability to influence pod placement while at the same time offloading actual
|
||||
pod-to-node assignment to kube-scheduler. This is allowed only for suspended Jobs that have never
|
||||
been unsuspended before.
|
||||
-->
|
||||
此特性允许在 Job 开始之前更新调度指令,从而为定制队列提供影响 Pod
|
||||
|
@ -1313,7 +1464,7 @@ been unsuspended before.
|
|||
这仅适用于从未暂停的 Job。
|
||||
|
||||
<!--
|
||||
The fields in a Job's pod template that can be updated are node affinity, node selector,
|
||||
tolerations, labels, annotations and [scheduling gates](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/).
|
||||
-->
|
||||
Job 的 Pod 模板中可以更新的字段是节点亲和性、节点选择器、容忍、标签、注解和
|
||||
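As a sketch of the idea (the Job name and zone label are hypothetical), a custom queue controller might constrain a still-suspended Job to one zone before unsuspending it:

```yaml
# A suspended Job whose pod template scheduling directives can still be updated,
# because the Job has never been unsuspended.
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-job
spec:
  suspend: true
  template:
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: zone-a   # updatable while suspended
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo work"]
```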
|
@ -1339,12 +1490,12 @@ To do this, you can specify the `.spec.selector` of the Job.
|
|||
为了实现这点,你可以手动设置 Job 的 `spec.selector` 字段。
|
||||
|
||||
<!--
|
||||
Be very careful when doing this. If you specify a label selector which is not
|
||||
unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated
|
||||
job may be deleted, or this Job may count other Pods as completing it, or one or both
|
||||
Jobs may refuse to create Pods or run to completion. If a non-unique selector is
|
||||
chosen, then other controllers (e.g. ReplicationController) and their Pods may behave
|
||||
in unpredictable ways too. Kubernetes will not stop you from making a mistake when
|
||||
specifying `.spec.selector`.
|
||||
-->
|
||||
做这个操作时请务必小心。
|
||||
|
@ -1359,7 +1510,7 @@ Kubernetes 不会在你设置 `.spec.selector` 时尝试阻止你犯这类错误
|
|||
<!--
|
||||
Here is an example of a case when you might want to use this feature.
|
||||
|
||||
Say Job `old` is already running. You want existing Pods
|
||||
to keep running, but you want the rest of the Pods it creates
|
||||
to use a different pod template and for the Job to have a new name.
|
||||
You cannot update the Job because these fields are not updatable.
|
||||
|
@ -1428,7 +1579,7 @@ spec:
|
|||
```
|
||||
|
||||
<!--
|
||||
The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
|
||||
`manualSelector: true` tells the system that you know what you are doing and to allow this
|
||||
mismatch.
|
||||
-->
|
||||
|
@ -1478,13 +1629,92 @@ scaling an indexed Job, such as MPI, Horovod, Ray, and PyTorch training jobs.
|
|||
弹性索引 Job 的使用场景包括需要扩展索引 Job 的批处理工作负载,例如 MPI、Horovod、Ray
|
||||
和 PyTorch 训练作业。
|
||||
|
||||
<!--
|
||||
### Delayed creation of replacement pods {#pod-replacement-policy}
|
||||
-->
|
||||
### 延迟创建替换 Pod {#pod-replacement-policy}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="alpha" >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
You can only set `podReplacementPolicy` on Jobs if you enable the `JobPodReplacementPolicy`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
-->
|
||||
你只有在启用了 `JobPodReplacementPolicy`
|
||||
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)后,
|
||||
才能为 Job 设置 `podReplacementPolicy`。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
By default, the Job controller recreates Pods as soon as they either fail or are terminating (have a deletion timestamp).
|
||||
This means that, at a given time, when some of the Pods are terminating, the number of running Pods for a Job
|
||||
can be greater than `parallelism` or greater than one Pod per index (if you are using an Indexed Job).
|
||||
-->
|
||||
默认情况下,当 Pod 失败或正在终止(具有删除时间戳)时,Job 控制器会立即重新创建 Pod。
|
||||
这意味着,在某个时间点上,当一些 Pod 正在终止时,为 Job 正运行中的 Pod 数量可以大于 `parallelism`
|
||||
或超出每个索引一个 Pod(如果使用 Indexed Job)。
|
||||
|
||||
<!--
|
||||
You may choose to create replacement Pods only when the terminating Pod is fully terminal (has `status.phase: Failed`).
|
||||
To do this, set `.spec.podReplacementPolicy: Failed`.
|
||||
The default replacement policy depends on whether the Job has a `podFailurePolicy` set.
|
||||
With no Pod failure policy defined for a Job, omitting the `podReplacementPolicy` field selects the
|
||||
`TerminatingOrFailed` replacement policy:
|
||||
the control plane creates replacement Pods immediately upon Pod deletion
|
||||
(as soon as the control plane sees that a Pod for this Job has `deletionTimestamp` set).
|
||||
For Jobs with a Pod failure policy set, the default `podReplacementPolicy` is `Failed`, and no other
|
||||
value is permitted.
|
||||
See [Pod failure policy](#pod-failure-policy) to learn more about Pod failure policies for Jobs.
|
||||
-->
|
||||
你可以选择仅在终止过程中的 Pod 完全终止(具有 `status.phase: Failed`)时才创建替换 Pod。
|
||||
为此,可以设置 `.spec.podReplacementPolicy: Failed`。
|
||||
默认的替换策略取决于 Job 是否设置了 `podFailurePolicy`。对于没有定义 Pod 失败策略的 Job,
|
||||
省略 `podReplacementPolicy` 字段相当于选择 `TerminatingOrFailed` 替换策略:
|
||||
控制平面在 Pod 删除时立即创建替换 Pod(只要控制平面发现该 Job 的某个 Pod 被设置了 `deletionTimestamp`)。
|
||||
对于设置了 Pod 失败策略的 Job,默认的 `podReplacementPolicy` 是 `Failed`,不允许其他值。
|
||||
请参阅 [Pod 失败策略](#pod-failure-policy)以了解更多关于 Job 的 Pod 失败策略的信息。
|
||||
|
||||
```yaml
|
||||
kind: Job
|
||||
metadata:
|
||||
name: new
|
||||
...
|
||||
spec:
|
||||
podReplacementPolicy: Failed
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
Provided your cluster has the feature gate enabled, you can inspect the `.status.terminating` field of a Job.
|
||||
The value of the field is the number of Pods owned by the Job that are currently terminating.
|
||||
-->
|
||||
如果你的集群启用了此特性门控,你可以检查 Job 的 `.status.terminating` 字段。
|
||||
该字段值是当前处于终止过程中的、由该 Job 拥有的 Pod 的数量。
|
||||
|
||||
```shell
|
||||
kubectl get jobs/myjob -o yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
# .metadata and .spec omitted
|
||||
# three Pods are terminating and have not yet reached the Failed phase
|
||||
-->
|
||||
```yaml
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
# .metadata 和 .spec 被省略
|
||||
status:
|
||||
terminating: 3 # 三个 Pod 正在终止且还未达到 Failed 阶段
|
||||
```
|
||||
|
||||
<!--
|
||||
## Alternatives
|
||||
|
||||
### Bare Pods
|
||||
|
||||
When the node that a Pod is running on reboots or fails, the pod is terminated
|
||||
and will not be restarted. However, a Job will create new Pods to replace terminated ones.
|
||||
For this reason, we recommend that you use a Job rather than a bare Pod, even if your application
|
||||
requires only a single Pod.
|
||||
-->
|
||||
|
@ -1522,7 +1752,7 @@ Job 管理的是那些希望被终止的 Pod(例如,批处理作业)。
|
|||
### Single Job starts controller Pod
|
||||
|
||||
Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort
|
||||
of custom controller for those Pods. This allows the most flexibility, but may be somewhat
|
||||
complicated to get started with and offers less integration with Kubernetes.
|
||||
-->
|
||||
### 单个 Job 启动控制器 Pod {#single-job-starts-controller-pod}
|
||||
|
@ -1534,8 +1764,8 @@ complicated to get started with and offers less integration with Kubernetes.
|
|||
|
||||
<!--
|
||||
One example of this pattern would be a Job which starts a Pod which runs a script that in turn
|
||||
starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/master/staging/spark/README.md)), runs a spark
|
||||
driver, and then cleans up.
|
||||
starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/master/staging/spark/README.md)),
|
||||
runs a spark driver, and then cleans up.
|
||||
|
||||
An advantage of this approach is that the overall process gets the completion guarantee of a Job
|
||||
object, but maintains complete control over what Pods are created and how work is assigned to them.
|
||||
|
@ -1552,10 +1782,10 @@ object, but maintains complete control over what Pods are created and how work i
|
|||
<!--
|
||||
* Learn about [Pods](/docs/concepts/workloads/pods).
|
||||
* Read about different ways of running Jobs:
|
||||
* [Coarse Parallel Processing Using a Work Queue](/docs/tasks/job/coarse-parallel-processing-work-queue/)
|
||||
* [Fine Parallel Processing Using a Work Queue](/docs/tasks/job/fine-parallel-processing-work-queue/)
|
||||
* Use an [indexed Job for parallel processing with static work assignment](/docs/tasks/job/indexed-parallel-processing-static/)
|
||||
* Create multiple Jobs based on a template: [Parallel Processing using Expansions](/docs/tasks/job/parallel-processing-expansion/)
|
||||
* Follow the links within [Clean up finished jobs automatically](#clean-up-finished-jobs-automatically)
|
||||
to learn more about how your cluster can clean up completed and / or failed tasks.
|
||||
* `Job` is part of the Kubernetes REST API.
|
||||
|
|
|
@ -81,13 +81,13 @@ You can pass information from available Container-level fields using
|
|||
<!--
|
||||
### Information available via `fieldRef` {#downwardapi-fieldRef}
|
||||
|
||||
For most Pod-level fields, you can provide them to a container either as
|
||||
For some Pod-level fields, you can provide them to a container either as
|
||||
an environment variable or using a `downwardAPI` volume. The fields available
|
||||
via either mechanism are:
|
||||
-->
|
||||
### 可通过 `fieldRef` 获得的信息 {#downwardapi-fieldRef}
|
||||
|
||||
对于大多数 Pod 级别的字段,你可以将它们作为环境变量或使用 `downwardAPI` 卷提供给容器。
|
||||
对于某些 Pod 级别的字段,你可以将它们作为环境变量或使用 `downwardAPI` 卷提供给容器。
|
||||
通过这两种机制可用的字段有:
|
||||
|
||||
<!--
|
||||
|
@ -152,6 +152,15 @@ The following information is available through environment variables
|
|||
`status.hostIP`
|
||||
: Pod 所在节点的主 IP 地址
|
||||
|
||||
<!--
|
||||
`status.hostIPs`
|
||||
: the IP addresses are the dual-stack version of `status.hostIP`; the first one is always the same as `status.hostIP`.
|
||||
The field is available if you enable the `PodHostIPs` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
-->
|
||||
`status.hostIPs`
|
||||
: 这组 IP 地址是 `status.hostIP` 的双协议栈版本,第一个 IP 始终与 `status.hostIP` 相同。
|
||||
该字段在启用了 `PodHostIPs` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)后可用。
|
||||
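A minimal sketch of exposing these fields as environment variables (the Pod and variable names are hypothetical; `status.hostIPs` assumes the `PodHostIPs` feature gate is enabled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP HOST_IPS=$HOST_IPS"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: HOST_IPS          # dual-stack list; first entry equals status.hostIP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIPs
```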
|
||||
<!--
|
||||
`status.podIP`
|
||||
: the pod's primary IP address (usually, its IPv4 address)
|
||||
|
@ -159,6 +168,13 @@ The following information is available through environment variables
|
|||
`status.podIP`
|
||||
: Pod 的主 IP 地址(通常是其 IPv4 地址)
|
||||
|
||||
<!--
|
||||
`status.podIPs`
|
||||
: the IP addresses are the dual-stack version of `status.podIP`; the first one is always the same as `status.podIP`
|
||||
-->
|
||||
`status.podIPs`
|
||||
: 这组 IP 地址是 `status.podIP` 的双协议栈版本,第一个 IP 始终与 `status.podIP` 相同。
|
||||
|
||||
<!--
|
||||
The following information is available through a `downwardAPI` volume
|
||||
`fieldRef`, **but not as environment variables**:
|
||||
|
|
|
@ -151,7 +151,10 @@ class first,second,third white
|
|||
<!--
|
||||
Figure 1. Getting started for a new contributor.
|
||||
|
||||
Figure 1 outlines a roadmap for new contributors. You can follow some or all of the steps for `Sign up` and `Review`. Now you are ready to open PRs that achieve your contribution objectives with some listed under `Open PR`. Again, questions are always welcome!
|
||||
Figure 1 outlines a roadmap for new contributors. You can follow some or
|
||||
all of the steps for `Sign up` and `Review`. Now you are ready to open PRs
|
||||
that achieve your contribution objectives with some listed under `Open PR`.
|
||||
Again, questions are always welcome!
|
||||
-->
|
||||
图 1. 新手入门指示。
|
||||
|
||||
|
@ -240,6 +243,27 @@ Figure 2. Preparation for your first contribution.
|
|||
- 了解[页面内容类型](/zh-cn/docs/contribute/style/page-content-types/)和
|
||||
[Hugo 短代码](/zh-cn/docs/contribute/style/hugo-shortcodes/)。
|
||||
|
||||
<!--
|
||||
## Getting help when contributing
|
||||
|
||||
Making your first contribution can be overwhelming. The
|
||||
[New Contributor Ambassadors](https://github.com/kubernetes/website#new-contributor-ambassadors)
|
||||
are there to walk you through making your first few contributions.
|
||||
You can reach out to them in the [Kubernetes Slack](https://slack.k8s.io/)
|
||||
preferably in the `#sig-docs` channel. There is also the
|
||||
[New Contributors Meet and Greet call](https://www.kubernetes.dev/resources/calendar/)
|
||||
that happens on the first Tuesday of every month. You can interact with
|
||||
the New Contributor Ambassadors and get your queries resolved here.
|
||||
-->
|
||||
## 贡献时获取帮助
|
||||
|
||||
做出第一个贡献可能会让人感觉比较困难。
|
||||
[新贡献者大使](https://github.com/kubernetes/website#new-contributor-ambassadors)
|
||||
将引导你完成最初的一些贡献。你可以在 [Kubernetes Slack](https://slack.k8s.io/)
|
||||
中联系他们,最好是在 `#sig-docs` 频道中。还有每月第一个星期二举行的
|
||||
[新贡献者见面会](https://www.kubernetes.dev/resources/calendar/),
|
||||
你可以在此处与新贡献者大使互动并解决你的疑问。
|
||||
|
||||
<!--
|
||||
## Next steps
|
||||
|
||||
|
|
|
@ -167,7 +167,7 @@ To submit a blog post, follow these steps:
|
|||
- Blog posts should be relevant to Kubernetes users.
|
||||
|
||||
- Topics related to participation in or results of Kubernetes SIGs activities are always on
|
||||
topic (see the work in the [Contributor Comms Team](https://github.com/kubernetes/community/blob/master/communication/contributor-comms/storytelling-resources/blog-guidelines.md#upstream-marketing-blog-guidelines)
|
||||
topic (see the work in the [Contributor Comms Team](https://github.com/kubernetes/community/blob/master/communication/contributor-comms/blogging-resources/blog-guidelines.md#contributor-comms-blog-guidelines)
|
||||
for support on these posts).
|
||||
- The components of Kubernetes are purposely modular, so tools that use existing integration
|
||||
points like CNI and CSI are on topic.
|
||||
|
@ -181,7 +181,7 @@ To submit a blog post, follow these steps:
|
|||
-->
|
||||
- 博客内容应该对 Kubernetes 用户有用。
|
||||
- 与参与 Kubernetes SIG 活动相关,或者与这类活动的结果相关的主题通常是切题的。
|
||||
请参考 [贡献者沟通(Contributor Comms)团队](https://github.com/kubernetes/community/blob/master/communication/contributor-comms/storytelling-resources/blog-guidelines.md#upstream-marketing-blog-guidelines)的工作以获得对此类博文的支持。
|
||||
请参考 [贡献者沟通(Contributor Comms)团队](https://github.com/kubernetes/community/blob/master/communication/contributor-comms/blogging-resources/blog-guidelines.md#contributor-comms-blog-guidelines)的工作以获得对此类博文的支持。
|
||||
- Kubernetes 的组件都有意设计得模块化,因此使用类似 CNI、CSI 等集成点的工具通常都是切题的。
|
||||
- 关于其他 CNCF 项目的博客可能切题也可能不切题。
|
||||
我们建议你在提交草稿之前与博客团队联系。
|
||||
|
|
|
@ -405,4 +405,3 @@ Examples of published tool reference pages are:
|
|||
- 了解[样式指南](/zh-cn/docs/contribute/style/style-guide/)
|
||||
- 了解[内容指南](/zh-cn/docs/contribute/style/content-guide/)
|
||||
- 了解[内容组织](/zh-cn/docs/contribute/style/content-organization/)
|
||||
|
||||
|
|
|
@ -553,7 +553,7 @@ To specify the Kubernetes version for a task or tutorial page, include
|
|||
[任务模板](/zh-cn/docs/contribute/style/page-content-types/#task)
|
||||
或[教程模板](/zh-cn/docs/contribute/style/page-content-types/#tutorial)
|
||||
的 `prerequisites` 小节定义 Kubernetes 版本。
|
||||
页面保存之后,`prerequisites` 小节会显示为 **开始之前**。
|
||||
页面保存之后,`prerequisites` 小节会显示为**开始之前**。
|
||||
|
||||
如果要为任务或教程页面指定 Kubernetes 版本,可以在文件的前言部分包含
|
||||
`min-kubernetes-server-version` 信息。
|
||||
|
@ -700,7 +700,7 @@ The output is:
|
|||
|
||||
使用短代码 `{{</* note */>}}` 来突出显示某种提示或者有助于读者的信息。
|
||||
|
||||
例如:
|
||||
例如:
|
||||
|
||||
```
|
||||
{{</* note */>}}
|
||||
|
@ -714,7 +714,7 @@ The output is:
|
|||
<!--
|
||||
You can _still_ use Markdown inside these callouts.
|
||||
-->
|
||||
在这类短代码中仍然 _可以_ 使用 Markdown 语法。
|
||||
在这类短代码中仍然**可以**使用 Markdown 语法。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -1234,8 +1234,7 @@ This page teaches you how to use pods. | In this page, we are going to learn abo
|
|||
|
||||
### 避免使用“我们”
|
||||
|
||||
在句子中使用“我们”会让人感到困惑,因为读者可能不知道这里的
|
||||
“我们”指的是谁。
|
||||
在句子中使用“我们”会让人感到困惑,因为读者可能不知道这里的“我们”指的是谁。
|
||||
|
||||
{{< table caption = "要避免的模式" >}}
|
||||
可以 | 不可以
|
||||
|
@ -1283,7 +1282,7 @@ is the [Deprecated API migration guide](/docs/reference/using-api/deprecation-gu
|
|||
### 避免关于将来的陈述
|
||||
|
||||
要避免对将来作出承诺或暗示。如果你需要讨论的是 Alpha 功能特性,
|
||||
可以将相关文字放在一个单独的标题下,标示为 Alpha 版本信息。
|
||||
可以将相关文字放在一个单独的标题下,标识为 Alpha 版本信息。
|
||||
|
||||
此规则的一个例外是对未来版本中计划移除的已废弃功能选项的文档。
|
||||
此类文档的例子之一是[已弃用 API 迁移指南](/zh-cn/docs/reference/using-api/deprecation-guide/)。
|
||||
|
@ -1341,6 +1340,20 @@ These steps ... | These simple steps ...
|
|||
这些步骤... | 这些简单的步骤...
|
||||
{{< /table >}}
|
||||
|
||||
<!--
|
||||
### EditorConfig file
|
||||
The Kubernetes project maintains an EditorConfig file that sets common style preferences in text editors
|
||||
such as VS Code. You can use this file if you want to ensure that your contributions are consistent with
|
||||
the rest of the project. To view the file, refer to
|
||||
[`.editorconfig`](https://github.com/kubernetes/website/blob/main/.editorconfig) in the repository root.
|
||||
-->
|
||||
### 编辑器配置文件
|
||||
|
||||
Kubernetes 项目维护一个 EditorConfig 文件,用于设置文本编辑器(例如 VS Code)中的常见样式首选项。
|
||||
如果你想确保你的贡献与项目的其余部分样式保持一致,则可以使用此文件。
|
||||
要查看该文件,请参阅项目仓库根目录的
|
||||
[`.editorconfig`](https://github.com/kubernetes/website/blob/main/.editorconfig)。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
|
|
|
@ -231,7 +231,7 @@ the topic. In your topic file, use the `codenew` shortcode:
|
|||
其中 `<LANG>` 是该主题的语言。在主题文件中使用 `codenew` 短代码:
|
||||
|
||||
```none
|
||||
{{</* codenew file="<RELPATH>/my-example-yaml>" */>}}
|
||||
{{%/* codenew file="<RELPATH>/my-example-yaml>" */%}}
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -245,19 +245,9 @@ file located at `/content/en/examples/pods/storage/gce-volume.yaml`.
|
|||
文件。
|
||||
|
||||
```none
|
||||
{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
|
||||
{{%/* codenew file="pods/storage/gce-volume.yaml" */%}}
|
||||
```
|
||||
|
||||
<!--
|
||||
To show raw Hugo shortcodes as in the above example and prevent Hugo
|
||||
from interpreting them, use C-style comments directly after the `<` and before
|
||||
the `>` characters. View the code for this page for an example.
|
||||
-->
|
||||
{{< note >}}
|
||||
要展示上述示例中的原始 Hugo 短代码并避免 Hugo 对其进行解释,
|
||||
请直接在 `<` 字符之后和 `>` 字符之前使用 C 样式注释。请查看此页面的代码。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
## Showing how to create an API object from a configuration file
|
||||
|
||||
|
|
|
@ -4,7 +4,10 @@ content_type: concept
|
|||
weight: 10
|
||||
card:
|
||||
name: contribute
|
||||
weight: 20
|
||||
weight: 15
|
||||
anchors:
|
||||
- anchor: "#opening-an-issue"
|
||||
title: 提出内容改进建议
|
||||
---
|
||||
<!--
|
||||
title: Suggesting content improvements
|
||||
|
@ -12,13 +15,18 @@ content_type: concept
|
|||
weight: 10
|
||||
card:
|
||||
name: contribute
|
||||
weight: 20
|
||||
weight: 15
|
||||
anchors:
|
||||
- anchor: "#opening-an-issue"
|
||||
title: Suggest content improvements
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
If you notice an issue with Kubernetes documentation or have an idea for new content, then open an issue. All you need is a [GitHub account](https://github.com/join) and a web browser.
|
||||
If you notice an issue with Kubernetes documentation or have an idea for new content,
|
||||
then open an issue. All you need is a [GitHub account](https://github.com/join) and
|
||||
a web browser.
|
||||
|
||||
In most cases, new work on Kubernetes documentation begins with an issue in GitHub. Kubernetes contributors
|
||||
then review, categorize and tag issues as needed. Next, you or another member
|
||||
|
@ -115,4 +123,3 @@ fellow contributors. For example, "The docs are terrible" is not
|
|||
例如:`Introduced by #987654`。
|
||||
- 遵从[行为准则](/zh-cn/community/code-of-conduct/)。尊重同行贡献者。
|
||||
例如,"The docs are terrible" 就是无用且无礼的反馈。
|
||||
|
||||
|
|
|
@ -65,40 +65,40 @@ cards:
|
|||
description: "查看常见任务以及如何使用简单步骤执行它们。"
|
||||
button: "查看任务"
|
||||
button_path: "/zh-cn/docs/tasks"
|
||||
# - name: training
|
||||
# title: "Training"
|
||||
# description: "Get certified in Kubernetes and make your cloud native projects successful!"
|
||||
# button: "View training"
|
||||
# button_path: "/training"
|
||||
# - name: reference
|
||||
# title: Look up reference information
|
||||
# description: Browse terminology, command line syntax, API resource types, and setup tool documentation.
|
||||
# button: View Reference
|
||||
# button_path: /docs/reference
|
||||
- name: training
|
||||
title: "培训"
|
||||
description: "通过 Kubernetes 认证,助你的云原生项目成功!"
|
||||
button: "查看培训"
|
||||
button_path: "/zh-cn/training"
|
||||
- name: reference
|
||||
title: 查阅参考信息
|
||||
description: 浏览术语、命令行语法、API 资源类型和安装工具文档。
|
||||
button: 查看参考
|
||||
button_path: /zh-cn/docs/reference
|
||||
# - name: training
|
||||
# title: "Training"
|
||||
# description: "Get certified in Kubernetes and make your cloud native projects successful!"
|
||||
# button: "View training"
|
||||
# button_path: "/training"
|
||||
# - name: contribute
|
||||
# title: Contribute to the docs
|
||||
# title: Contribute to the Kubernetes
|
||||
# description: Anyone can contribute, whether you’re new to the project or you’ve been around a long time.
|
||||
# button: Contribute to the docs
|
||||
# button: Find out how to help
|
||||
# button_path: /docs/contribute
|
||||
# - name: release-notes
|
||||
# title: K8s Release Notes
|
||||
# description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes.
|
||||
# button: "Download Kubernetes"
|
||||
# button_path: "/zh-cn/docs/setup/release/notes"
|
||||
- name: training
|
||||
title: "培训"
|
||||
description: "通过 Kubernetes 认证,助你的云原生项目成功!"
|
||||
button: "查看培训"
|
||||
button_path: "/zh-cn/training"
|
||||
- name: contribute
|
||||
title: 为文档作贡献
|
||||
title: 为 Kubernetes 作贡献
|
||||
description: 任何人,无论对该项目熟悉与否,都能贡献自己的力量。
|
||||
button: 为文档作贡献
|
||||
button: 了解如何提供帮助
|
||||
button_path: /zh-cn/docs/contribute
|
||||
- name: Download
|
||||
title: 下载 Kubernetes
|
||||
|
|
|
@ -149,7 +149,6 @@ operator to use or manage a cluster.
|
|||
* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/),
|
||||
[kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) and
|
||||
[kubelet credential providers (v1)](/docs/reference/config-api/kubelet-credentialprovider.v1/)
|
||||
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/),
|
||||
[kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) and
|
||||
[kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
|
||||
* [kube-controller-manager configuration (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/)
|
||||
|
@ -179,7 +178,6 @@ operator to use or manage a cluster.
|
|||
* [kubelet 凭据驱动 (v1alpha1)](/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/)、
|
||||
[kubelet 凭据驱动 (v1beta1)](/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) 和
|
||||
[kubelet 凭据驱动 (v1)](/zh-cn/docs/reference/config-api/kubelet-credentialprovider.v1/)
|
||||
* [kube-scheduler 配置 (v1beta2)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta2/)、
|
||||
[kube-scheduler 配置 (v1beta3)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/) 和
|
||||
[kube-scheduler 配置 (v1)](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1/)
|
||||
* [kube-controller-manager 配置 (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/)
|
||||
|
@ -194,10 +192,12 @@ operator to use or manage a cluster.
|
|||
## Config API for kubeadm
|
||||
|
||||
* [v1beta3](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
* [v1beta4](/docs/reference/config-api/kubeadm-config.v1beta4/)
|
||||
-->
|
||||
## kubeadm 的配置 API {#config-api-for-kubeadm}
|
||||
|
||||
* [v1beta3](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
* [v1beta4](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta4/)
|
||||
|
||||
<!--
|
||||
## Design Docs
|
||||
|
|
|
@ -103,7 +103,7 @@ Event 结构包含可出现在 API 审计日志中的所有信息。
|
|||
</tr>
|
||||
|
||||
<tr><td><code>user</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#userinfo-v1-authentication-k8s-io"><code>authentication/v1.UserInfo</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
|
@ -114,7 +114,7 @@ Event 结构包含可出现在 API 审计日志中的所有信息。
|
|||
</tr>
|
||||
|
||||
<tr><td><code>impersonatedUser</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#userinfo-v1-authentication-k8s-io"><code>authentication/v1.UserInfo</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
|
@ -189,7 +189,7 @@ Note: All but the last IP can be arbitrarily set by the client.
|
|||
</tr>
|
||||
|
||||
<tr><td><code>responseStatus</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#status-v1-meta"><code>meta/v1.Status</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#status-v1-meta"><code>meta/v1.Status</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
|
@ -243,7 +243,7 @@ Note: All but the last IP can be arbitrarily set by the client.
|
|||
</tr>
|
||||
|
||||
<tr><td><code>requestReceivedTimestamp</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--Time the request reached the apiserver.-->
|
||||
|
@ -254,7 +254,7 @@ Note: All but the last IP can be arbitrarily set by the client.
|
|||
</tr>
|
||||
|
||||
<tr><td><code>stageTimestamp</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
|
@ -309,7 +309,7 @@ EventList 是审计事件(Event)的列表。
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>EventList</code></td></tr>
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted"><!--No description provided.-->列表结构元数据</span>
|
||||
|
@ -351,7 +351,7 @@ Policy 定义的是审计日志的配置以及不同类型请求的日志记录
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>Policy</code></td></tr>
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
|
@ -440,7 +440,7 @@ PolicyList 是由审计策略(Policy)组成的列表。
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>PolicyList</code></td></tr>
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted"><!--No description provided.-->列表结构元数据。</span>
|
||||
|
@ -494,19 +494,13 @@ GroupResources 代表的是某 API 组中的资源类别。
|
|||
<td>
|
||||
<!--
|
||||
Resources is a list of resources this rule applies to.
|
||||
<p>For example:</p>
|
||||
<ul>
|
||||
<li><code>pods</code> matches pods.</li>
|
||||
<li><code>pods/log</code> matches the log subresource of pods.</li>
|
||||
<li><code>*</code> matches all resources and their subresources.</li>
|
||||
<li><code>pods/*</code> matches all subresources of pods.</li>
|
||||
<li><code>*/scale</code> matches all scale subresources.</li>
|
||||
</ul>
|
||||
<p>For example:
|
||||
'pods' matches pods.
|
||||
'pods/log' matches the log subresource of pods.
|
||||
'*' matches all resources and their subresources.
|
||||
'pods/*' matches all subresources of pods.
|
||||
'*/scale' matches all scale subresources.</p>
|
||||
-->
|
||||
<p>
|
||||
字段 resources 是此规则所适用的资源的列表。
|
||||
</p>
|
||||
<br/>
|
||||
<p>例如:</p>
|
||||
<ul>
|
||||
<li><code>pods</code> 匹配 Pod;</li>
|
||||
|
@ -773,12 +767,10 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
<td>
|
||||
<!--
|
||||
NonResourceURLs is a set of URL paths that should be audited.
|
||||
<code>*</code>s are allowed, but only as the full, final step in the path.
|
||||
Examples:</p>
|
||||
<ul>
|
||||
<li>"/metrics" - Log requests for apiserver metrics</li>
|
||||
<li>"/healthz*" - Log all health checks</li>
|
||||
</ul>
|
||||
'*'s are allowed, but only as the full, final step in the path.
|
||||
Examples:
|
||||
"/metrics" - Log requests for apiserver metrics
|
||||
"/healthz*" - Log all health checks</p>
|
||||
-->
|
||||
|
||||
<p>
|
||||
|
@ -864,4 +856,3 @@ Stage defines the stages in request handling that audit events may be generated.
|
|||
-->
|
||||
Stage 定义在请求处理过程中可以生成审计事件的阶段。
|
||||
</p>
|
||||
|
||||
|
|
|
@ -29,10 +29,8 @@ Package v1 is the v1 version of the API.
|
|||
## `EncryptionConfiguration` {#apiserver-config-k8s-io-v1-EncryptionConfiguration}
|
||||
|
||||
<!--
|
||||
EncryptionConfiguration stores the complete configuration for encryption providers.
|
||||
It also allows the use of wildcards to specify the resources that should be encrypted.
|
||||
Use <code>*.<group></code> to encrypt all resources within a group or <code>*.*</code> to encrypt all resources.
|
||||
<code>*.</code> can be used to encrypt all resource in the core group. <code>*.*</code> will encrypt all
|
||||
|
||||
resources, even custom resources that are added after API server start.
|
||||
Use of wildcards that overlap within the same resource list or across multiple
|
||||
entries are not allowed since part of the configuration would be ineffective.
|
||||
|
@ -399,10 +397,10 @@ ResourceConfiguration 中保存资源配置。
|
|||
<p>
|
||||
<!--
|
||||
resources is a list of kubernetes resources which have to be encrypted. The resource names are derived from <code>resource</code> or <code>resource.group</code> of the group/version/resource.
|
||||
eg: <code>pandas.awesome.bears.example</code> is a custom resource with 'group': <code>awesome.bears.example</code>, 'resource': <code>pandas</code>.
|
||||
Use <code>*.*</code> to encrypt all resources and <code>*.<group></code> to encrypt all resources in a specific group.
|
||||
eg: <code>*.awesome.bears.example</code> will encrypt all resources in the group <code>awesome.bears.example</code>.
|
||||
eg: <code>*.</code> will encrypt all resources in the core group (such as pods, configmaps, etc).
|
||||
|
||||
-->
|
||||
<code>resources</code> 是必须要加密的 Kubernetes 资源的列表。
|
||||
资源名称来自于组/版本/资源的 <code>resource</code> 或 <code>resource.group</code>。
|
||||
|
|
|
@ -259,7 +259,7 @@ itself should at least be protected via file permissions.
|
|||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
<tr><td><code>expirationTimestamp</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#time-v1-meta"><code>meta/v1.Time</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--ExpirationTimestamp indicates a time when the provided credentials expire.-->
|
||||
|
@ -295,4 +295,3 @@ itself should at least be protected via file permissions.
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -290,7 +290,7 @@ exec 插件本身至少应通过文件访问许可来实施保护。</p>
|
|||
|
||||
|
||||
<tr><td><code>expirationTimestamp</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#time-v1-meta"><code>meta/v1.Time</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!-- ExpirationTimestamp indicates a time when the provided credentials expire. -->
|
||||
|
@ -331,5 +331,3 @@ exec 插件本身至少应通过文件访问许可来实施保护。</p>
|
|||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
|
|
@ -26,7 +26,7 @@ package: imagepolicy.k8s.io/v1alpha1
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>ImageReview</code></td></tr>
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
|
@ -207,4 +207,3 @@ appropriate prefix).</p>
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -25,6 +25,256 @@ auto_generated: true
|
|||
- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1-VolumeBindingArgs)
|
||||
|
||||
|
||||
## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
ClientConnectionConfiguration contains details for constructing a client.
|
||||
-->
|
||||
<p>ClientConnectionConfiguration 中包含用来构造客户端所需的细节。</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>kubeconfig</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
kubeconfig is the path to a KubeConfig file.
|
||||
-->
|
||||
<p><code>kubeconfig</code> 字段为指向 KubeConfig 文件的路径。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>acceptContentTypes</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
|
||||
default value of 'application/json'. This field will control all connections to the server used by a particular
|
||||
client.
|
||||
-->
|
||||
<p>
|
||||
<code>acceptContentTypes</code> 定义的是客户端与服务器建立连接时要发送的 Accept 头部,
|
||||
这里的设置值会覆盖默认值 "application/json"。此字段会影响某特定客户端与服务器的所有连接。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>contentType</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
contentType is the content type used when sending data to the server from this client.
|
||||
-->
|
||||
<p>
|
||||
<code>contentType</code> 包含的是此客户端向服务器发送数据时使用的内容类型(Content Type)。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>qps</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>float32</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
qps controls the number of queries per second allowed for this connection.
|
||||
-->
|
||||
<p><code>qps</code> 控制此连接允许的每秒查询次数。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>burst</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
burst allows extra queries to accumulate when a client is exceeding its rate.
|
||||
-->
|
||||
<p><code>burst</code> 允许在客户端超出其速率限制时可以累积的额外查询个数。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
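上表中 `qps` 与 `burst` 的语义可以用一个令牌桶(token bucket)来说明:桶以每秒 `qps` 个令牌的速度补充,最多累积 `burst` 个令牌。下面是一个纯示意性的 Python 草图(并非 client-go 的实际实现):

```python
import time

class TokenBucket:
    """qps/burst 语义的示意实现:以 qps 速率补充令牌,容量上限为 burst。"""

    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)       # 初始时桶是满的
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # 按经过的时间补充令牌,但不超过 burst。
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

其中 `burst` 决定了在速率被用满之后仍可立即发出的额外请求数;超出部分需要等待令牌补充。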
|
||||
|
||||
## `DebuggingConfiguration` {#DebuggingConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
DebuggingConfiguration holds configuration for Debugging related features.
|
||||
-->
|
||||
<p>DebuggingConfiguration 包含与调试功能相关的配置。</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>enableProfiling</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
enableProfiling enables profiling via web interface host:port/debug/pprof/
|
||||
-->
|
||||
<p><code>enableProfiling</code> 字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>enableContentionProfiling</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
enableContentionProfiling enables block profiling, if
|
||||
enableProfiling is true.
|
||||
-->
|
||||
<p><code>enableContentionProfiling</code> 字段在
|
||||
<code>enableProfiling</code> 为 true 时启用阻塞分析。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `LeaderElectionConfiguration` {#LeaderElectionConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
LeaderElectionConfiguration defines the configuration of leader election
|
||||
clients for components that can run with leader election enabled.
|
||||
-->
|
||||
<p>
|
||||
LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举客户端的配置。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>leaderElect</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
leaderElect enables a leader election client to gain leadership
|
||||
before executing the main loop. Enable this when running replicated
|
||||
components for high availability.
|
||||
-->
|
||||
<p>
|
||||
<code>leaderElect</code> 允许领导者选举客户端在进入主循环执行之前先获得领导者角色。
|
||||
运行多副本组件时启用此功能有助于提高可用性。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>leaseDuration</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
leaseDuration is the duration that non-leader candidates will wait
|
||||
after observing a leadership renewal until attempting to acquire
|
||||
leadership of a led but unrenewed leader slot. This is effectively the
|
||||
maximum duration that a leader can be stopped before it is replaced
|
||||
by another candidate. This is only applicable if leader election is
|
||||
enabled.
|
||||
-->
|
||||
<p>
|
||||
<code>leaseDuration</code> 是非领导角色候选者在观察到需要领导席位更新时要等待的时间;
|
||||
只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要被刷新的席位。
|
||||
这里的设置值本质上意味着某个领导者在被另一个候选者替换掉之前可以停止运行的最长时长。
|
||||
只有当启用了领导者选举时此字段才有意义。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>renewDeadline</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
renewDeadline is the interval between attempts by the acting master to
|
||||
renew a leadership slot before it stops leading. This must be less
|
||||
than or equal to the lease duration. This is only applicable if leader
|
||||
election is enabled.
|
||||
-->
|
||||
<p>
|
||||
<code>renewDeadline</code> 设置的是当前领导者在停止扮演领导角色之前需要刷新领导状态的时间间隔。
|
||||
此值必须小于或等于租约期限的长度。只有当启用了领导者选举时此字段才有意义。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>retryPeriod</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
retryPeriod is the duration the clients should wait between attempting
|
||||
acquisition and renewal of a leadership. This is only applicable if
|
||||
leader election is enabled.
|
||||
-->
|
||||
<p>
|
||||
<code>retryPeriod</code> 是客户端在连续两次尝试获得或者刷新领导状态之间需要等待的时长。
|
||||
只有当启用了领导者选举时此字段才有意义。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceLock</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
resourceLock indicates the resource object type that will be used to lock
|
||||
during leader election cycles.
|
||||
-->
|
||||
<p><code>resourceLock</code> 字段给出在领导者选举期间要作为锁来使用的资源对象类型。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceName</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
resourceName indicates the name of resource object that will be used to lock
|
||||
during leader election cycles.
|
||||
-->
|
||||
<p><code>resourceName</code> 字段给出在领导者选举期间要作为锁来使用的资源对象名称。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceNamespace</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
resourceNamespace indicates the namespace of the resource object that will be used to lock
|
||||
during leader election cycles.
|
||||
-->
|
||||
<p><code>resourceNamespace</code> 字段给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
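上表中几个时长字段之间存在约束关系(例如 `renewDeadline` 必须小于或等于 `leaseDuration`)。下面用 Python 做一个纯示意性的检查草图(并非组件的实际校验逻辑;示例中的 15s/10s/2s 取值仅作演示):

```python
from datetime import timedelta

def validate_leader_election(lease_duration: timedelta,
                             renew_deadline: timedelta,
                             retry_period: timedelta) -> list:
    """示意性的约束检查;并非 kube-scheduler 的实际校验代码。"""
    problems = []
    # 正文要求:renewDeadline 必须小于或等于 leaseDuration。
    if renew_deadline > lease_duration:
        problems.append("renewDeadline must be <= leaseDuration")
    # retryPeriod 若不小于 renewDeadline,领导者将来不及在期限内刷新状态。
    if retry_period >= renew_deadline:
        problems.append("retryPeriod should be < renewDeadline")
    return problems

# 一组满足约束的取值(示例值)
print(validate_leader_election(timedelta(seconds=15),
                               timedelta(seconds=10),
                               timedelta(seconds=2)))   # []
```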
|
||||
|
||||
## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1-DefaultPreemptionArgs}
|
||||
|
||||
<!--
|
||||
|
@ -262,6 +512,24 @@ with the extender. These extenders are shared by all scheduler profiles.
|
|||
所有调度器方案(profile)会共享此扩展模块列表。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>delayCacheUntilActive</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
DelayCacheUntilActive specifies when to start caching. If this is true and leader election is enabled,
|
||||
the scheduler will wait to fill informer caches until it is the leader. Doing so will have slower
|
||||
failover with the benefit of lower memory overhead while waiting to become leader.
|
||||
Defaults to false.
|
||||
-->
|
||||
DelayCacheUntilActive 指定何时开始缓存。如果此字段设置为 true 并且启用了领导者选举,
|
||||
则调度器会等到自己成为领导者之后才填充 Informer 缓存。这样做会减慢故障转移速度,
|
||||
但好处是在等待成为领导者期间内存开销更低。
|
||||
默认为 false。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
@ -280,7 +548,7 @@ NodeAffinityArgs holds arguments to configure the NodeAffinity plugin.
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>NodeAffinityArgs</code></td></tr>
|
||||
|
||||
<tr><td><code>addedAffinity</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
|
@ -401,7 +669,7 @@ PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread pl
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>PodTopologySpreadArgs</code></td></tr>
|
||||
|
||||
<tr><td><code>defaultConstraints</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
|
@ -1474,259 +1742,3 @@ UtilizationShapePoint represents single point of priority function shape.
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
ClientConnectionConfiguration contains details for constructing a client.
|
||||
-->
|
||||
<p>ClientConnectionConfiguration 中包含用来构造客户端所需的细节。</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>kubeconfig</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
kubeconfig is the path to a KubeConfig file.
|
||||
-->
|
||||
<p><code>kubeconfig</code> 字段为指向 KubeConfig 文件的路径。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>acceptContentTypes</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
|
||||
default value of 'application/json'. This field will control all connections to the server used by a particular
|
||||
client.
|
||||
-->
|
||||
<p>
|
||||
<code>acceptContentTypes</code> 定义的是客户端与服务器建立连接时要发送的 Accept 头部,
|
||||
这里的设置值会覆盖默认值 "application/json"。此字段会影响某特定客户端与服务器的所有连接。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>contentType</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
contentType is the content type used when sending data to the server from this client.
|
||||
-->
|
||||
<p>
|
||||
<code>contentType</code> 包含的是此客户端向服务器发送数据时使用的内容类型(Content Type)。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>qps</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>float32</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
qps controls the number of queries per second allowed for this connection.
|
||||
-->
|
||||
<p><code>qps</code> 控制此连接允许的每秒查询次数。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>burst</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
burst allows extra queries to accumulate when a client is exceeding its rate.
|
||||
-->
|
||||
<p><code>burst</code> 允许在客户端超出其速率限制时可以累积的额外查询个数。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `DebuggingConfiguration` {#DebuggingConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
DebuggingConfiguration holds configuration for Debugging related features.
|
||||
-->
|
||||
<p>DebuggingConfiguration 保存与调试功能相关的配置。</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>enableProfiling</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
enableProfiling enables profiling via web interface host:port/debug/pprof/
|
||||
-->
|
||||
<p><code>enableProfiling</code> 字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>enableContentionProfiling</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
enableContentionProfiling enables block profiling, if
|
||||
enableProfiling is true.
|
||||
-->
|
||||
<p><code>enableContentionProfiling</code> 字段在
|
||||
<code>enableProfiling</code> 为 true 时启用阻塞分析。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `LeaderElectionConfiguration` {#LeaderElectionConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
LeaderElectionConfiguration defines the configuration of leader election
|
||||
clients for components that can run with leader election enabled.
|
||||
-->
|
||||
<p>
|
||||
LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举客户端的配置。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>leaderElect</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
leaderElect enables a leader election client to gain leadership
|
||||
before executing the main loop. Enable this when running replicated
|
||||
components for high availability.
|
||||
-->
|
||||
<p>
|
||||
<code>leaderElect</code> 允许领导者选举客户端在进入主循环执行之前先获得领导者角色。
|
||||
运行多副本组件时启用此功能有助于提高可用性。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>leaseDuration</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
leaseDuration is the duration that non-leader candidates will wait
|
||||
after observing a leadership renewal until attempting to acquire
|
||||
leadership of a led but unrenewed leader slot. This is effectively the
|
||||
maximum duration that a leader can be stopped before it is replaced
|
||||
by another candidate. This is only applicable if leader election is
|
||||
enabled.
|
||||
-->
|
||||
<p>
|
||||
<code>leaseDuration</code> 是非领导角色候选者在观察到需要领导席位更新时要等待的时间;
|
||||
只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要被刷新的席位。
|
||||
这里的设置值本质上意味着某个领导者在被另一个候选者替换掉之前可以停止运行的最长时长。
|
||||
只有当启用了领导者选举时此字段才有意义。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>renewDeadline</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
renewDeadline is the interval between attempts by the acting master to
|
||||
renew a leadership slot before it stops leading. This must be less
|
||||
than or equal to the lease duration. This is only applicable if leader
|
||||
election is enabled.
|
||||
-->
|
||||
<p>
|
||||
<code>renewDeadline</code> 设置的是当前领导者在停止扮演领导角色之前需要刷新领导状态的时间间隔。
|
||||
此值必须小于或等于租约期限的长度。只有当启用了领导者选举时此字段才有意义。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>retryPeriod</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
retryPeriod is the duration the clients should wait between attempting
|
||||
acquisition and renewal of a leadership. This is only applicable if
|
||||
leader election is enabled.
|
||||
-->
|
||||
<p>
|
||||
<code>retryPeriod</code> 是客户端在连续两次尝试获得或者刷新领导状态之间需要等待的时长。
|
||||
只有当启用了领导者选举时此字段才有意义。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceLock</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
resourceLock indicates the resource object type that will be used to lock
|
||||
during leader election cycles.
|
||||
-->
|
||||
<p><code>resourceLock</code> 字段给出在领导者选举期间要作为锁来使用的资源对象类型。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceName</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
resourceName indicates the name of resource object that will be used to lock
|
||||
during leader election cycles.
|
||||
-->
|
||||
<p><code>resourceName</code> 字段给出在领导者选举期间要作为锁来使用的资源对象名称。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceNamespace</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
resourceNamespace indicates the namespace of the resource object that will be used to lock
|
||||
during leader election cycles.
|
||||
-->
|
||||
<p><code>resourceNamespace</code> 字段给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
File diff suppressed because it is too large
|
@ -281,7 +281,7 @@ NodeAffinityArgs holds arguments to configure the NodeAffinity plugin.
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>NodeAffinityArgs</code></td></tr>
|
||||
|
||||
<tr><td><code>addedAffinity</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
|
@ -402,7 +402,7 @@ PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread pl
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>PodTopologySpreadArgs</code></td></tr>
|
||||
|
||||
<tr><td><code>defaultConstraints</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
|
@ -1458,7 +1458,6 @@ UtilizationShapePoint represents single point of priority function shape.
|
|||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
|
@ -1536,7 +1535,6 @@ default value of 'application/json'. This field will control all connections to
|
|||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
|
@ -1579,7 +1577,6 @@ enableProfiling is true.
|
|||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
<!--
|
||||
|
|
|
@ -87,6 +87,56 @@ PriorityLevelConfigurationSpec 指定一个优先级的配置。
|
|||
取值为 `"Limited"` 意味着 (a) 此优先级的请求遵从这些限制且
|
||||
(b) 服务器某些受限的容量仅可用于此优先级。必需。
|
||||
|
||||
- **exempt** (ExemptPriorityLevelConfiguration)
|
||||
|
||||
<!--
|
||||
`exempt` specifies how requests are handled for an exempt priority level. This field MUST be empty if `type` is `"Limited"`. This field MAY be non-empty if `type` is `"Exempt"`. If empty and `type` is `"Exempt"` then the default values for `ExemptPriorityLevelConfiguration` apply.
|
||||
-->
|
||||
|
||||
`exempt` 指定了对于豁免优先级的请求如何处理。
|
||||
如果 `type` 取值为 `"Limited"`,则此字段必须为空。
|
||||
如果 `type` 取值为 `"Exempt"`,则此字段可以非空。
|
||||
如果为空且 `type` 取值为 `"Exempt"`,则应用 `ExemptPriorityLevelConfiguration` 的默认值。
|
||||
|
||||
<!--
|
||||
<a name="ExemptPriorityLevelConfiguration"></a>
|
||||
*ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the `spec`.*
|
||||
-->
|
||||
|
||||
<a name="ExemptPriorityLevelConfiguration"></a>
|
||||
**ExemptPriorityLevelConfiguration 描述豁免请求处理的可配置方面。
|
||||
在强制豁免配置对象中,与 `spec` 中的其余部分不同,此处字段的取值可以被授权用户修改。**
|
||||
|
||||
- **exempt.lendablePercent** (int32)
|
||||
|
||||
<!--
|
||||
`lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. This value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.
|
||||
-->
|
||||
|
||||
`lendablePercent` 规定该级别的 NominalCL 可被其他优先级租借的百分比。
|
||||
此字段的值必须在 0 到 100 之间,包括 0 和 100,默认为 0。
|
||||
其他级别可以从该级别借用的席位数被称为此级别的 LendableConcurrencyLimit(LendableCL),定义如下。
|
||||
|
||||
LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )
|
||||
|
||||
- **exempt.nominalConcurrencyShares** (int32)
|
||||
|
||||
<!--
|
||||
`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values:
|
||||
-->
|
||||
|
||||
`nominalConcurrencyShares`(NCS)也被用来计算该级别的 NominalConcurrencyLimit(NominalCL)。
|
||||
字段值是为该优先级保留的执行席位的数量。这一设置不限制此优先级别的调度行为,
|
||||
但会通过借用机制影响其他优先级。服务器的并发限制(ServerCL)会按照各个优先级的 NCS 值按比例分配:
|
||||
|
||||
NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)
|
||||
|
||||
<!--
|
||||
Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero.
|
||||
-->
|
||||
|
||||
较大的数字意味着更大的标称并发限制,且将影响其他优先级。此字段的默认值为零。
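上面给出的 NominalCL 与 LendableCL 公式可以用一段简短的 Python 验算(纯示意;其中的优先级名称与数值均为虚构):

```python
import math

def nominal_cl(server_cl, ncs):
    """NominalCL(i) = ceil(ServerCL * NCS(i) / sum_ncs),与正文公式一致。"""
    total = sum(ncs.values())
    return {name: math.ceil(server_cl * v / total) for name, v in ncs.items()}

def lendable_cl(nominal, lendable_percent):
    """LendableCL(i) = round(NominalCL(i) * lendablePercent(i) / 100.0)。"""
    return round(nominal * lendable_percent / 100.0)

# 假设 ServerCL = 600,两个级别的 NCS 各为 30(虚构示例)
limits = nominal_cl(600, {"level-a": 30, "level-b": 30})
# 每个级别得到 300 个席位;若 level-a 的 lendablePercent 为 50,
# 则它可出借 lendable_cl(300, 50) == 150 个席位。
```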
|
||||
|
||||
<!--
|
||||
- **limited** (LimitedPriorityLevelConfiguration)
|
||||
|
||||
|
@ -94,8 +144,8 @@ PriorityLevelConfigurationSpec 指定一个优先级的配置。
|
|||
|
||||
<a name="LimitedPriorityLevelConfiguration"></a>
|
||||
*LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:
|
||||
|
||||
- How are requests for this priority level limited?
|
||||
- What should be done with requests that exceed the limit?*
|
||||
-->
|
||||
- **limited** (LimitedPriorityLevelConfiguration)
|
||||
|
||||
|
@ -104,8 +154,9 @@ PriorityLevelConfigurationSpec 指定一个优先级的配置。
|
|||
|
||||
<a name="LimitedPriorityLevelConfiguration"></a>
|
||||
LimitedPriorityLevelConfiguration 指定如何处理需要被限制的请求。它解决两个问题:
|
||||
|
||||
|
||||
- 如何限制此优先级的请求?
|
||||
- 应如何处理超出此限制的请求?
|
||||
|
||||
<!--
|
||||
- **limited.borrowingLimitPercent** (int32)
|
||||
|
@ -119,7 +170,7 @@ PriorityLevelConfigurationSpec 指定一个优先级的配置。
|
|||
|
||||
- **limited.borrowingLimitPercent** (int32)
|
||||
|
||||
`borrowingLimitPercent` 配置如果存在,则可用来限制此优先级可以从其他优先级中租借多少资源。
|
||||
该限制被称为该级别的 BorrowingConcurrencyLimit(BorrowingCL),它限制了该级别可以同时租借的资源总数。
|
||||
该字段保存了该限制与该级别标称并发限制之比。当此字段非空时,必须为正整数,并按以下方式计算限制值:
|
||||
|
||||
|
@ -137,7 +188,7 @@ PriorityLevelConfigurationSpec 指定一个优先级的配置。
|
|||
|
||||
- **limited.lendablePercent** (int32)
|
||||
|
||||
`lendablePercent` 规定了 NominalCL 可被其他优先级租借资源数百分比。
|
||||
此字段的值必须在 0 到 100 之间,包括 0 和 100,默认为 0。
|
||||
其他级别可以从该级别借用的资源数被称为此级别的 LendableConcurrencyLimit(LendableCL),定义如下。
|
||||
|
||||
|
@ -229,21 +280,21 @@ PriorityLevelConfigurationSpec 指定一个优先级的配置。
|
|||
|
||||
`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values:
|
||||
|
||||
NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)
|
||||
|
||||
Bigger numbers mean a larger nominal concurrency limit, at the expense of every other Limited priority level. This field has a default value of 30.
|
||||
Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of 30.
|
||||
-->
|
||||
|
||||
- **limited.nominalConcurrencyShares** (int32)
|
||||
|
||||
`nominalConcurrencyShares`(NCS)用于计算该优先级级别的标称并发限制(NominalCL)。
|
||||
NCS 表示可以在此优先级级别同时运行的席位数量上限,包括来自本优先级级别的请求,
|
||||
以及从此优先级级别租借席位的其他级别的请求。
|
||||
服务器的并发度限制(ServerCL)根据 NCS 值按比例分配给各 Limited 优先级级别:
|
||||
`nominalConcurrencyShares`(NCS)用于计算该优先级的标称并发限制(NominalCL)。
|
||||
NCS 表示可以在此优先级同时运行的席位数量上限,包括来自本优先级的请求,
|
||||
以及从此优先级租借席位的其他级别的请求。
|
||||
服务器的并发度限制(ServerCL)根据 NCS 值按比例分配给各 Limited 优先级:
|
||||
|
||||
NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[limited priority level k] NCS(k)
|
||||
NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)
|
||||
|
||||
较大的数字意味着更大的标称并发限制(NominalCL),但是这将牺牲其他 Limited 优先级级别的资源。该字段的默认值为 30。
|
||||
较大的数字意味着更大的标称并发限制,但是这将牺牲其他优先级的资源。该字段的默认值为 30。
|
||||
|
||||
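正文中的公式 `NominalCL(i) = ceil(ServerCL * NCS(i) / sum_ncs)` 可以用下面的 Python 片段演示(示例中的 ServerCL 与各优先级的 NCS 数值均为假设):

```python
import math


def nominal_cls(server_cl: int, ncs: dict[str, int]) -> dict[str, int]:
    """按 NCS 值的比例把服务器并发度限制(ServerCL)分配给各 Limited 优先级。"""
    sum_ncs = sum(ncs.values())
    return {name: math.ceil(server_cl * v / sum_ncs) for name, v in ncs.items()}


# 假设 ServerCL 为 600,三个 Limited 优先级的 NCS 分别为 30、30、40:
print(nominal_cls(600, {"workload-low": 30, "workload-high": 30, "leader-election": 40}))
# → {'workload-low': 180, 'workload-high': 180, 'leader-election': 240}
```

由于计算时使用了向上取整,各级别 NominalCL 之和可能略大于 ServerCL。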
## PriorityLevelConfigurationStatus {#PriorityLevelConfigurationStatus}
|
||||
|
||||
|
@ -381,11 +432,11 @@ GET /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **name** (**路径参数**): string,必需
|
||||
- **name**(**路径参数**):string,必需
|
||||
|
||||
PriorityLevelConfiguration 的名称
|
||||
PriorityLevelConfiguration 的名称。
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
|
@ -416,11 +467,11 @@ GET /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **name** (**路径参数**): string,必需
|
||||
- **name**(**路径参数**):string,必需
|
||||
|
||||
PriorityLevelConfiguration 的名称
|
||||
PriorityLevelConfiguration 的名称。
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
|
@ -459,47 +510,47 @@ GET /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **allowWatchBookmarks** (**查询参数**): boolean
|
||||
- **allowWatchBookmarks**(**查询参数**):boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
|
||||
|
||||
- **continue** (**查询参数**): string
|
||||
- **continue**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
|
||||
|
||||
- **fieldSelector** (**查询参数**): string
|
||||
- **fieldSelector**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
|
||||
|
||||
- **labelSelector** (**查询参数**): string
|
||||
- **labelSelector**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
|
||||
|
||||
- **limit** (**查询参数**): integer
|
||||
- **limit**(**查询参数**):integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
- **resourceVersion** (**查询参数**): string
|
||||
- **resourceVersion**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
|
||||
|
||||
- **resourceVersionMatch** (**查询参数**): string
|
||||
- **resourceVersionMatch**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
|
||||
|
||||
- **sendInitialEvents** (**查询参数**): boolean
|
||||
- **sendInitialEvents**(**查询参数**):boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
|
||||
|
||||
- **timeoutSeconds** (**查询参数**): integer
|
||||
- **timeoutSeconds**(**查询参数**):integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
|
||||
|
||||
- **watch** (**查询参数**): boolean
|
||||
- **watch**(**查询参数**):boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
|
||||
|
||||
|
@ -534,19 +585,19 @@ POST /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations
|
|||
|
||||
- **body**: <a href="{{< ref "../cluster-resources/priority-level-configuration-v1beta3#PriorityLevelConfiguration" >}}">PriorityLevelConfiguration</a>,必需
|
||||
|
||||
- **dryRun** (**查询参数**): string
|
||||
- **dryRun**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **fieldManager** (**查询参数**): string
|
||||
- **fieldManager**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
|
||||
|
||||
- **fieldValidation** (**查询参数**): string
|
||||
- **fieldValidation**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
|
@ -585,25 +636,25 @@ PUT /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **name** (**路径参数**): string,必需
|
||||
- **name**(**路径参数**):string,必需
|
||||
|
||||
PriorityLevelConfiguration 的名称
|
||||
PriorityLevelConfiguration 的名称。
|
||||
|
||||
- **body**: <a href="{{< ref "../cluster-resources/priority-level-configuration-v1beta3#PriorityLevelConfiguration" >}}">PriorityLevelConfiguration</a>,必需
|
||||
|
||||
- **dryRun** (**查询参数**): string
|
||||
- **dryRun**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **fieldManager** (**查询参数**): string
|
||||
- **fieldManager**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
|
||||
|
||||
- **fieldValidation** (**查询参数**): string
|
||||
- **fieldValidation**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
|
@ -640,25 +691,25 @@ PUT /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **name** (**路径参数**): string,必需
|
||||
- **name**(**路径参数**):string,必需
|
||||
|
||||
PriorityLevelConfiguration 的名称
|
||||
PriorityLevelConfiguration 的名称。
|
||||
|
||||
- **body**: <a href="{{< ref "../cluster-resources/priority-level-configuration-v1beta3#PriorityLevelConfiguration" >}}">PriorityLevelConfiguration</a>,必需
|
||||
|
||||
- **dryRun** (**查询参数**): string
|
||||
- **dryRun**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **fieldManager** (**查询参数**): string
|
||||
- **fieldManager**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
|
||||
|
||||
- **fieldValidation** (**查询参数**): string
|
||||
- **fieldValidation**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
|
@ -696,29 +747,29 @@ PATCH /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{na
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **name** (**路径参数**): string,必需
|
||||
- **name**(**路径参数**):string,必需
|
||||
|
||||
PriorityLevelConfiguration 的名称
|
||||
PriorityLevelConfiguration 的名称。
|
||||
|
||||
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
|
||||
|
||||
- **dryRun** (**查询参数**): string
|
||||
- **dryRun**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **fieldManager** (**查询参数**): string
|
||||
- **fieldManager**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
|
||||
|
||||
- **fieldValidation** (**查询参数**): string
|
||||
- **fieldValidation**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
|
||||
|
||||
- **force** (**查询参数**): boolean
|
||||
- **force**(**查询参数**):boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
|
@ -756,29 +807,29 @@ PATCH /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{na
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **name** (**路径参数**): string,必需
|
||||
- **name**(**路径参数**):string,必需
|
||||
|
||||
PriorityLevelConfiguration 的名称
|
||||
PriorityLevelConfiguration 的名称。
|
||||
|
||||
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
|
||||
|
||||
- **dryRun** (**查询参数**): string
|
||||
- **dryRun**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **fieldManager** (**查询参数**): string
|
||||
- **fieldManager**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
|
||||
|
||||
- **fieldValidation** (**查询参数**): string
|
||||
- **fieldValidation**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
|
||||
|
||||
- **force** (**查询参数**): boolean
|
||||
- **force**(**查询参数**):boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
|
@ -815,25 +866,25 @@ DELETE /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{n
|
|||
-->
|
||||
#### 参数
|
||||
|
||||
- **name** (**路径参数**): string,必需
|
||||
- **name**(**路径参数**):string,必需
|
||||
|
||||
PriorityLevelConfiguration 的名称
|
||||
PriorityLevelConfiguration 的名称。
|
||||
|
||||
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
|
||||
|
||||
- **dryRun** (**查询参数**): string
|
||||
- **dryRun**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **gracePeriodSeconds** (**查询参数**): integer
|
||||
- **gracePeriodSeconds**(**查询参数**):integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
- **propagationPolicy** (**查询参数**): string
|
||||
- **propagationPolicy**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
|
||||
|
||||
|
@ -878,51 +929,51 @@ DELETE /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations
|
|||
|
||||
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
|
||||
|
||||
- **continue** (**查询参数**): string
|
||||
- **continue**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
|
||||
|
||||
- **dryRun** (**查询参数**): string
|
||||
- **dryRun**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
|
||||
|
||||
- **fieldSelector** (**查询参数**): string
|
||||
- **fieldSelector**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
|
||||
|
||||
- **gracePeriodSeconds** (**查询参数**): integer
|
||||
- **gracePeriodSeconds**(**查询参数**):integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
|
||||
|
||||
- **labelSelector** (**查询参数**): string
|
||||
- **labelSelector**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
|
||||
|
||||
- **limit** (**查询参数**): integer
|
||||
- **limit**(**查询参数**):integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
|
||||
|
||||
- **pretty** (**查询参数**): string
|
||||
- **pretty**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
|
||||
|
||||
- **propagationPolicy** (**查询参数**): string
|
||||
- **propagationPolicy**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
|
||||
|
||||
- **resourceVersion** (**查询参数**): string
|
||||
- **resourceVersion**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
|
||||
|
||||
- **resourceVersionMatch** (**查询参数**): string
|
||||
- **resourceVersionMatch**(**查询参数**):string
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
|
||||
|
||||
- **sendInitialEvents** (**查询参数**): boolean
|
||||
- **sendInitialEvents**(**查询参数**):boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
|
||||
|
||||
- **timeoutSeconds** (**查询参数**): integer
|
||||
- **timeoutSeconds**(**查询参数**):integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
|
||||
|
||||
|
|
|
@ -3,12 +3,24 @@ title: 众所周知的标签、注解和污点
|
|||
content_type: concept
|
||||
weight: 40
|
||||
no_list: true
|
||||
card:
|
||||
name: reference
|
||||
weight: 30
|
||||
anchors:
|
||||
- anchor: "#labels-annotations-and-taints-used-on-api-objects"
|
||||
title: 标签、注解和污点
|
||||
---
|
||||
<!--
|
||||
title: Well-Known Labels, Annotations and Taints
|
||||
content_type: concept
|
||||
weight: 40
|
||||
no_list: true
|
||||
card:
|
||||
name: reference
|
||||
weight: 30
|
||||
anchors:
|
||||
- anchor: "#labels-annotations-and-taints-used-on-api-objects"
|
||||
title: Labels, annotations and taints
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -25,7 +37,36 @@ Kubernetes 将所有标签和注解保留在 `kubernetes.io` 和 `k8s.io `名字
|
|||
|
||||
<!--
|
||||
## Labels, annotations and taints used on API objects
|
||||
-->
|
||||
## API 对象上使用的标签、注解和污点 {#labels-annotations-and-taints-used-on-api-objects}
|
||||
|
||||
### apf.kubernetes.io/autoupdate-spec
|
||||
|
||||
<!--
|
||||
Type: Annotation
|
||||
|
||||
Example: `apf.kubernetes.io/autoupdate-spec: "true"`
|
||||
|
||||
Used on: [`FlowSchema` and `PriorityLevelConfiguration` Objects](/concepts/cluster-administration/flow-control/#defaults)
|
||||
|
||||
If this annotation is set to true on a FlowSchema or PriorityLevelConfiguration, the `spec` for that object
|
||||
is managed by the kube-apiserver. If the API server does not recognize an APF object, and you annotate it
|
||||
for automatic update, the API server deletes the entire object. Otherwise, the API server does not manage the
|
||||
object spec.
|
||||
For more details, read [Maintenance of the Mandatory and Suggested Configuration Objects](/docs/concepts/cluster-administration/flow-control/#maintenance-of-the-mandatory-and-suggested-configuration-objects).
|
||||
-->
|
||||
类别:注解
|
||||
|
||||
例子:`apf.kubernetes.io/autoupdate-spec: "true"`
|
||||
|
||||
用于:[`FlowSchema` 和 `PriorityLevelConfiguration` 对象](/zh-cn/concepts/cluster-administration/flow-control/#defaults)
|
||||
|
||||
如果在 FlowSchema 或 PriorityLevelConfiguration 上将此注解设置为 true,
|
||||
那么该对象的 `spec` 将由 kube-apiserver 进行管理。如果 API 服务器不识别 APF 对象,
|
||||
并且你对其添加了自动更新的注解,则 API 服务器将删除整个对象。否则,API 服务器不管理对象规约。
|
||||
更多细节参阅[维护强制性和建议的配置对象](/zh-cn/docs/concepts/cluster-administration/flow-control/#maintenance-of-the-mandatory-and-suggested-configuration-objects)。
|
||||
|
||||
<!--
|
||||
### app.kubernetes.io/component
|
||||
|
||||
Type: Label
|
||||
|
@ -38,8 +79,6 @@ The component within the application architecture.
|
|||
|
||||
One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).
|
||||
-->
|
||||
## API 对象上使用的标签、注解和污点 {#labels-annotations-and-taints-used-on-api-objects}
|
||||
|
||||
### app.kubernetes.io/component {#app-kubernetes-io-component}
|
||||
|
||||
类别:标签
|
||||
|
@ -449,6 +488,36 @@ The value must be in the format `<toolname>/<semver>`.
|
|||
工具应该拒绝改变属于其他工具 ApplySets。
|
||||
该值必须采用 `<toolname>/<semver>` 格式。
|
||||
|
||||
### apps.kubernetes.io/pod-index (beta) {#apps-kubernetes.io-pod-index}
|
||||
|
||||
<!--
|
||||
Type: Label
|
||||
|
||||
Example: `apps.kubernetes.io/pod-index: "0"`
|
||||
|
||||
Used on: Pod
|
||||
|
||||
When a StatefulSet controller creates a Pod for the StatefulSet, it sets this label on that Pod.
|
||||
The value of the label is the ordinal index of the pod being created.
|
||||
|
||||
See [Pod Index Label](/docs/concepts/workloads/controllers/statefulset/#pod-index-label)
|
||||
in the StatefulSet topic for more details. Note the [PodIndexLabel](/docs/reference/command-line-tools-reference/feature-gates/) feature gate must be enabled
|
||||
for this label to be added to pods.
|
||||
-->
|
||||
类别:标签
|
||||
|
||||
例子:`apps.kubernetes.io/pod-index: "0"`
|
||||
|
||||
用于:Pod
|
||||
|
||||
当 StatefulSet 控制器为 StatefulSet 创建 Pod 时,该控制器会在 Pod 上设置这个标签。
|
||||
标签的值是正在创建的 Pod 的序号索引。
|
||||
|
||||
更多细节参阅 StatefulSet 主题中的
|
||||
[Pod 索引标签](/zh-cn/docs/concepts/workloads/controllers/statefulset/#pod-index-label)。
|
||||
请注意,[PodIndexLabel](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 特性门控必须被启用,
|
||||
才能将此标签添加到 Pod 上。
|
||||
|
||||
<!--
|
||||
### cluster-autoscaler.kubernetes.io/safe-to-evict
|
||||
|
||||
|
@ -1723,6 +1792,45 @@ The control plane adds this label to an Endpoints object when the owning Service
|
|||
|
||||
当拥有的 Service 是无头类型时,控制平面将此标签添加到 Endpoints 对象。
|
||||
|
||||
<!--
|
||||
### service.kubernetes.io/topology-aware-hints (deprecated) {#servicekubernetesiotopology-aware-hints}
|
||||
|
||||
Example: `service.kubernetes.io/topology-aware-hints: "Auto"`
|
||||
|
||||
Used on: Service
|
||||
-->
|
||||
### service.kubernetes.io/topology-aware-hints(已弃用) {#servicekubernetesiotopology-aware-hints}
|
||||
|
||||
例子:`service.kubernetes.io/topology-aware-hints: "Auto"`
|
||||
|
||||
用于:Service
|
||||
|
||||
<!--
|
||||
This annotation was used for enabling _topology aware hints_ on Services. Topology aware
|
||||
hints have since been renamed: the concept is now called
|
||||
[topology aware routing](/docs/concepts/services-networking/topology-aware-routing/).
|
||||
Setting the annotation to `Auto`, on a Service, configured the Kubernetes control plane to
|
||||
add topology hints on EndpointSlices associated with that Service. You can also explicitly
|
||||
set the annotation to `Disabled`.
|
||||
|
||||
If you are running a version of Kubernetes older than {{< skew currentVersion >}},
|
||||
check the documentation for that Kubernetes version to see how topology aware routing
|
||||
works in that release.
|
||||
|
||||
There are no other valid values for this annotation. If you don't want topology aware hints
|
||||
for a Service, don't add this annotation.
|
||||
-->
|
||||
此注解曾用于在 Service 中启用**拓扑感知提示(topology aware hints)**。
|
||||
不过,拓扑感知提示后来已被更名,
|
||||
此概念现在名为[拓扑感知路由(topology aware routing)](/zh-cn/docs/concepts/services-networking/topology-aware-routing/)。
|
||||
在 Service 上将该注解设置为 `Auto` 会配置 Kubernetes 控制平面,
|
||||
以将拓扑提示添加到该 Service 关联的 EndpointSlice 上。你也可以显式地将该注解设置为 `Disabled`。
|
||||
|
||||
如果你使用的是早于 {{< skew currentVersion >}} 的 Kubernetes 版本,
|
||||
请查阅该版本对应的文档,了解其拓扑感知路由的工作方式。
|
||||
|
||||
此注解没有其他有效值。如果你不希望为 Service 启用拓扑感知提示,不要添加此注解。
|
||||
|
||||
<!--
|
||||
### kubernetes.io/service-name {#kubernetesioservice-name}
|
||||
|
||||
|
@ -2029,18 +2137,18 @@ kubelet 会在 Node 上设置此注解以表示从命令行标志(`--node-ip`
|
|||
<!--
|
||||
### batch.kubernetes.io/job-completion-index
|
||||
|
||||
Type: Annotation
|
||||
Type: Annotation, Label
|
||||
|
||||
Example: `batch.kubernetes.io/job-completion-index: "3"`
|
||||
|
||||
Used on: Pod
|
||||
|
||||
The Job controller in the kube-controller-manager sets this annotation for Pods
|
||||
The Job controller in the kube-controller-manager sets this as a label and annotation for Pods
|
||||
created with Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode).
|
||||
-->
|
||||
### batch.kubernetes.io/job-completion-index {#batch-kubernetes-io-job-completion-index}
|
||||
|
||||
类别:注解
|
||||
类别:注解、标签
|
||||
|
||||
例子:`batch.kubernetes.io/job-completion-index: "3"`
|
||||
|
||||
|
@ -2048,7 +2156,38 @@ created with Indexed [completion mode](/docs/concepts/workloads/controllers/job/
|
|||
|
||||
kube-controller-manager 中的 Job 控制器为使用 Indexed
|
||||
[完成模式](/zh-cn/docs/concepts/workloads/controllers/job/#completion-mode)创建的 Pod
|
||||
设置此注解。
|
||||
设置此标签和注解。
|
||||
|
||||
<!--
|
||||
Note the [PodIndexLabel](/docs/reference/command-line-tools-reference/feature-gates/) feature gate must be enabled
|
||||
for this to be added as a pod **label**, otherwise it will just be an annotation.
|
||||
-->
|
||||
请注意,[PodIndexLabel](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 特性门控必须被启用,
|
||||
才能将其添加为 Pod 的**标签**,否则它只会用作注解。
|
||||
|
||||
### batch.kubernetes.io/cronjob-scheduled-timestamp
|
||||
|
||||
<!--
|
||||
Type: Annotation
|
||||
|
||||
Example: `batch.kubernetes.io/cronjob-scheduled-timestamp: "2016-05-19T03:00:00-07:00"`
|
||||
|
||||
Used on: Jobs and Pods controlled by CronJobs
|
||||
|
||||
This annotation is used to record the original (expected) creation timestamp for a Job,
|
||||
when that Job is part of a CronJob.
|
||||
The control plane sets the value to that timestamp in RFC3339 format. If the Job belongs to a CronJob
|
||||
with a timezone specified, then the timestamp is in that timezone. Otherwise, the timestamp is in controller-manager's local time.
|
||||
-->
|
||||
类别:注解
|
||||
|
||||
例子:`batch.kubernetes.io/cronjob-scheduled-timestamp: "2016-05-19T03:00:00-07:00"`
|
||||
|
||||
用于:CronJob 所控制的 Job 和 Pod
|
||||
|
||||
此注解在 Job 是 CronJob 的一部分时用于记录 Job 的原始(预期)创建时间戳。
|
||||
控制平面会将该值设置为 RFC3339 格式的时间戳。如果 Job 属于设置了时区的 CronJob,
|
||||
则时间戳以该时区为基准。否则,时间戳以 controller-manager 的本地时间为准。
|
||||
|
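上文提到的带时区偏移的 RFC3339 格式时间戳可以用下面的 Python 片段示意,这只是对格式本身的演示,并非控制器的实现:

```python
from datetime import datetime, timedelta, timezone

# 假设 CronJob 指定的时区为 UTC-7,预期调度时间为 2016-05-19 03:00:00
scheduled = datetime(2016, 5, 19, 3, 0, 0, tzinfo=timezone(timedelta(hours=-7)))

# 带时区的 isoformat() 输出正是正文示例中使用的 RFC3339 形式
print(scheduled.isoformat())  # → 2016-05-19T03:00:00-07:00
```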
||||
<!--
|
||||
### kubectl.kubernetes.io/default-container
|
||||
|
@ -2101,6 +2240,34 @@ annotation instead. Kubernetes versions 1.25 and newer ignore this annotation.
|
|||
Kubernetes v1.25 及更高版本将忽略此注解。
|
||||
{{< /note >}}
|
||||
|
||||
### kubectl.kubernetes.io/last-applied-configuration
|
||||
|
||||
<!--
|
||||
Type: Annotation
|
||||
|
||||
Example: _see following snippet_
|
||||
-->
|
||||
类别:注解
|
||||
|
||||
例子:**参见以下代码片段**
|
||||
|
||||
```yaml
|
||||
kubectl.kubernetes.io/last-applied-configuration: >
|
||||
  {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"example","namespace":"default"},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/name":"foo"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"foo"}},"spec":{"containers":[{"image":"container-registry.example/foo-bar:1.42","name":"foo-bar","ports":[{"containerPort":42}]}]}}}}
|
||||
```
|
||||
|
||||
<!--
|
||||
Used on: all objects
|
||||
|
||||
The kubectl command line tool uses this annotation as a legacy mechanism
|
||||
to track changes. That mechanism has been superseded by
|
||||
[Server-side apply](/docs/reference/using-api/server-side-apply/).
|
||||
-->
|
||||
用于:所有对象
|
||||
|
||||
kubectl 命令行工具使用此注解作为一种旧的机制来跟踪变更。
|
||||
该机制已被[服务器端应用](/zh-cn/docs/reference/using-api/server-side-apply/)取代。
|
||||
|
||||
<!--
|
||||
### endpoints.kubernetes.io/over-capacity
|
||||
|
||||
|
|
|
@ -3,4 +3,8 @@ title: "创建 Kubeadm"
|
|||
weight: 10
|
||||
toc_hide: true
|
||||
---
|
||||
|
||||
<!--
|
||||
title: "Kubeadm Generated"
|
||||
weight: 10
|
||||
toc_hide: true
|
||||
-->
|
||||
|
|
|
@ -1,18 +1,7 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Generate certificate keys
|
||||
-->
|
||||
生成证书密钥
|
||||
生成证书密钥。
|
||||
|
||||
<!--
|
||||
### Synopsis
|
||||
|
@ -29,7 +18,7 @@ the "init" command.
|
|||
You can also use "kubeadm init --upload-certs" without specifying a certificate key and it will generate and print one for you.
|
||||
-->
|
||||
你也可以使用 `kubeadm init --upload-certs` 而无需指定证书密钥;
|
||||
命令将为你生成并打印一个证书密钥。
|
||||
此命令将为你生成并打印一个证书密钥。
|
||||
|
||||
```
|
||||
kubeadm certs certificate-key [flags]
|
||||
|
@ -56,7 +45,7 @@ kubeadm certs certificate-key [flags]
|
|||
<!--
|
||||
help for certificate-key
|
||||
-->
|
||||
certificate-key 操作的帮助命令
|
||||
certificate-key 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -92,4 +81,3 @@ certificate-key 操作的帮助命令
|
|||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -1,18 +1,7 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Check certificates expiration for a Kubernetes cluster
|
||||
-->
|
||||
为一个 Kubernetes 集群检查证书的到期时间
|
||||
为一个 Kubernetes 集群检查证书的到期时间。
|
||||
|
||||
<!--
|
||||
### Synopsis
|
||||
|
@ -31,7 +20,7 @@ kubeadm certs check-expiration [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
### 选项
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
<colgroup>
|
||||
|
@ -44,14 +33,14 @@ kubeadm certs check-expiration [flags]
|
|||
<!--
|
||||
<td colspan="2">--cert-dir string Default: "/etc/kubernetes/pki"</td>
|
||||
-->
|
||||
<td colspan="2">--cert-dir string 默认值: "/etc/kubernetes/pki"</td>
|
||||
<td colspan="2">--cert-dir string 默认值:"/etc/kubernetes/pki"</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!--
|
||||
<p>The path where to save the certificates</p>
|
||||
-->
|
||||
<p>保存证书的路径</p>
|
||||
<p>保存证书的路径。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -63,7 +52,7 @@ kubeadm certs check-expiration [flags]
|
|||
<!--
|
||||
<p>Path to a kubeadm configuration file.</p>
|
||||
-->
|
||||
<p>kubeadm 配置文件的路径</p>
|
||||
<p>到 kubeadm 配置文件的路径。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -75,7 +64,7 @@ kubeadm certs check-expiration [flags]
|
|||
<!--
|
||||
<p>help for check-expiration</p>
|
||||
-->
|
||||
<p>check-expiration 的帮助命令</p>
|
||||
<p>check-expiration 操作的帮助命令。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -84,7 +73,7 @@ kubeadm certs check-expiration [flags]
|
|||
<!--
|
||||
--kubeconfig string Default: "/etc/kubernetes/admin.conf"
|
||||
-->
|
||||
--kubeconfig string 默认为: "/etc/kubernetes/admin.conf"
|
||||
--kubeconfig string 默认值:"/etc/kubernetes/admin.conf"
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
|
@ -93,7 +82,7 @@ kubeadm certs check-expiration [flags]
|
|||
<p>The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.</p>
|
||||
-->
|
||||
<p>在和集群连接时使用该 kubeconfig 文件。
|
||||
如果该标志没有设置,那么将会在一些标准的位置去搜索存在的 kubeconfig 文件。</p>
|
||||
如果此标志未被设置,那么将会在一些标准的位置去搜索存在的 kubeconfig 文件。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -103,7 +92,7 @@ kubeadm certs check-expiration [flags]
|
|||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
### 继承于父命令的选项
|
||||
### 继承于父命令的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
<colgroup>
|
||||
|
@ -119,12 +108,10 @@ kubeadm certs check-expiration [flags]
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!--
|
||||
<p>[EXPERIMENTAL] The path to the 'real' host root filesystem.</p>
|
||||
-->
|
||||
-->
|
||||
<p>[实验] 到'真实'主机根文件系统的路径。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
|
|
@ -18,7 +18,7 @@ Generates keys and certificate signing requests (CSRs) for all the certificates
|
|||
<!--
|
||||
This command is designed for use in [Kubeadm External CA Mode](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing.
|
||||
-->
|
||||
此命令设计用于 [Kubeadm 外部 CA 模式](https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode)。
|
||||
此命令设计用于 [Kubeadm 外部 CA 模式](https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode)。
|
||||
它生成你可以提交给外部证书颁发机构进行签名的 CSR。
|
||||
|
||||
<!--
|
||||
|
@ -44,7 +44,7 @@ kubeadm certs generate-csr [flags]
|
|||
```
|
||||
-->
|
||||
```
|
||||
# 以下命令将为所有控制平面证书和 kubeconfig 文件生成密钥和 CSR :
|
||||
# 以下命令将为所有控制平面证书和 kubeconfig 文件生成密钥和 CSR:
|
||||
kubeadm certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s/pki
|
||||
```
|
||||
|
||||
|
@ -64,33 +64,60 @@ kubeadm certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s
|
|||
<td colspan="2">--cert-dir string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!-- td></td><td style="line-height: 130%; word-wrap: break-word;">The path where to save the certificates</td-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>保存证书的路径</p></td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
The path where to save the certificates
|
||||
-->
|
||||
保存证书的路径。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--config string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!-- td></td><td style="line-height: 130%; word-wrap: break-word;">Path to a kubeadm configuration file.</td -->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>kubeadm 配置文件的路径。</p></td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
Path to a kubeadm configuration file.
|
||||
-->
|
||||
指向 kubeadm 配置文件的路径。
|
||||
</p></td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">-h, --help</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!-- td></td><td style="line-height: 130%; word-wrap: break-word;">help for generate-csr</td -->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>generate-csr 命令的帮助</p></td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
help for generate-csr
|
||||
-->
|
||||
generate-csr 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<!-- td colspan="2">-kubeconfig-dir string Default: "/etc/kubernetes"</td -->
|
||||
<td colspan="2">--kubeconfig-dir string 默认值:"/etc/kubernetes"</td>
|
||||
<td colspan="2">
|
||||
<!--
|
||||
--kubeconfig-dir string Default: "/etc/kubernetes"
|
||||
-->
|
||||
--kubeconfig-dir string 默认值:"/etc/kubernetes"
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!-- td></td><td style="line-height: 130%; word-wrap: break-word;">The path where to save the kubeconfig file.</td-->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>保存 kubeconfig 文件的路径。</p></td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
The path where to save the kubeconfig file.
|
||||
-->
|
||||
保存 kubeconfig 文件的路径。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
|
@ -112,10 +139,15 @@ kubeadm certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s
|
|||
<td colspan="2">--rootfs string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<!-- <td></td><td style="line-height: 130%; word-wrap: break-word;">[EXPERIMENTAL] The path to the 'real' host root filesystem.</td> -->
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[实验] 到'真实'主机根文件系统的路径。</p></td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
[EXPERIMENTAL] The path to the 'real' host root filesystem.
|
||||
-->
|
||||
[试验性] 指向“真实”主机根文件系统的路径。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -1,18 +1,7 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Renew the certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself
|
||||
-->
|
||||
续订 kubeconfig 文件中嵌入的证书,供管理员 和 kubeadm 自身使用。
|
||||
续订 kubeconfig 文件中嵌入的证书,供管理员和 kubeadm 自身使用。
|
||||
|
||||
<!--
|
||||
### Synopsis
|
||||
|
@ -22,7 +11,7 @@ Renew the certificate embedded in the kubeconfig file for the admin to use and f
|
|||
<!--
|
||||
Renew the certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself.
|
||||
-->
|
||||
续订 kubeconfig 文件中嵌入的证书,供管理员 和 kubeadm 自身使用。
|
||||
续订 kubeconfig 文件中嵌入的证书,供管理员和 kubeadm 自身使用。
|
||||
|
||||
<!--
|
||||
Renewals run unconditionally, regardless of certificate expiration date; extra attributes such as SANs will be based on the existing file/certificates, there is no need to resupply them.
|
||||
|
@ -47,7 +36,6 @@ kubeadm certs renew admin.conf [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -59,7 +47,9 @@ kubeadm certs renew admin.conf [flags]
|
|||
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<!-- --cert-dir string Default: "/etc/kubernetes/pki" -->
|
||||
<!--
|
||||
--cert-dir string Default: "/etc/kubernetes/pki"
|
||||
-->
|
||||
--cert-dir string 默认值:"/etc/kubernetes/pki"
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -77,8 +67,10 @@ kubeadm certs renew admin.conf [flags]
|
|||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!-- <p>Path to a kubeadm configuration file.</p> -->
|
||||
<p>kubeadm 配置文件的路径。</p>
|
||||
<!--
|
||||
<p>Path to a kubeadm configuration file.</p>
|
||||
-->
|
||||
<p>指向 kubeadm 配置文件的路径。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -87,8 +79,10 @@ kubeadm certs renew admin.conf [flags]
|
|||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!-- <p>help for admin.conf</p> -->
|
||||
<p>admin.conf 子操作的帮助命令</p>
|
||||
<!--
|
||||
<p>help for admin.conf</p>
|
||||
-->
|
||||
<p>admin.conf 操作的帮助命令。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -97,7 +91,9 @@ kubeadm certs renew admin.conf [flags]
|
|||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!-- <p>The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.</p> -->
|
||||
<!--
|
||||
<p>The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.</p>
|
||||
-->
|
||||
<p>与集群通信时使用的 kubeconfig 文件。
|
||||
如果未设置该参数,则可以在一组标准位置中搜索现有的 kubeconfig 文件。</p>
|
||||
</td>
|
||||
|
@ -111,7 +107,7 @@ kubeadm certs renew admin.conf [flags]
|
|||
<!--
|
||||
Use the Kubernetes certificate API to renew certificates
|
||||
-->
|
||||
使用 Kubernetes 证书 API 续订证书
|
||||
使用 Kubernetes 证书 API 续订证书。
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -144,4 +140,3 @@ Use the Kubernetes certificate API to renew certificates
|
|||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -1,14 +1,3 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Interact with container images used by kubeadm
|
||||
-->
|
||||
|
@ -17,13 +6,11 @@ Interact with container images used by kubeadm
|
|||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
Interact with container images used by kubeadm.
|
||||
-->
|
||||
|
||||
与 kubeadm 使用的容器镜像交互。
|
||||
|
||||
```
|
||||
|
@ -33,7 +20,6 @@ kubeadm config images [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -52,7 +38,7 @@ kubeadm config images [flags]
|
|||
help for images
|
||||
-->
|
||||
<p>
|
||||
images 的帮助命令
|
||||
images 的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -60,7 +46,6 @@ images 的帮助命令
|
|||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
|
|
|
@ -1,20 +1,3 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
<!--
|
||||
该文件是使用通用[生成器](https://github.com/kubernetes-sigs/reference-docs/) 从组件的 Go 源代码自动生成的。
|
||||
要了解如何生成参考文档,请阅读[贡献参考文档](/docs/contribute/generate-ref-docs/)。
|
||||
要更新参考内容,请遵循[贡献上游](/docs/contribute/generate-ref-docs/contribute-upstream/)指南。
|
||||
你可以针对 [reference-docs](https://github.com/kubernetes-sigs/reference-docs/) 项目提交文档格式问题。
|
||||
-->
|
||||
|
||||
<!--
|
||||
Print a list of images kubeadm will use. The configuration file is used in case any images or image repositories are customized
|
||||
-->
|
||||
|
@ -89,7 +72,7 @@ kubeadm 配置文件的路径。
|
|||
Output format. One of: text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
|
||||
-->
|
||||
<p>
|
||||
输出格式:text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file 其中之一
|
||||
输出格式:text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file 其中之一。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -100,13 +83,14 @@ Output format. One of: text|json|yaml|go-template|go-template-file|template|temp
|
|||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!--
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>EtcdLearnerMode=true|false (ALPHA - default=false)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - default=false)
|
||||
-->
|
||||
<p>
|
||||
一组键值对(key=value),用于描述各种特征。选项是:
|
||||
<br/>PublicKeysECDSA=true|false (ALPHA - 默认=false)
|
||||
<br/>RootlessControlPlane=true|false (ALPHA - 默认=false)
|
||||
<br/>PublicKeysECDSA=true|false (ALPHA - 默认=false)
|
||||
一组键值对(key=value),用于描述各种特性。这些选项是:<br/>
|
||||
EtcdLearnerMode=true|false (ALPHA - 默认值=false)<br/>
|
||||
PublicKeysECDSA=true|false (ALPHA - 默认值=false)<br/>
|
||||
RootlessControlPlane=true|false (ALPHA - 默认值=false)<br/>
|
||||
UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - 默认值=false)
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -120,22 +104,26 @@ A set of key=value pairs that describe feature gates for various features. Optio
|
|||
help for list
|
||||
-->
|
||||
<p>
|
||||
list 操作的帮助命令
|
||||
list 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<!-- --image-repository string Default: "registry.k8s.io" -->
|
||||
<!--
|
||||
--image-repository string Default: "registry.k8s.io"
|
||||
-->
|
||||
--image-repository string 默认值:"registry.k8s.io"
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!-- Choose a container registry to pull control plane images from -->
|
||||
<!--
|
||||
Choose a container registry to pull control plane images from
|
||||
-->
|
||||
<p>
|
||||
选择要从中拉取控制平面镜像的容器仓库
|
||||
选择要从中拉取控制平面镜像的容器仓库。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -154,7 +142,7 @@ list 操作的帮助命令
|
|||
Choose a specific Kubernetes version for the control plane.
|
||||
-->
|
||||
<p>
|
||||
为控制平面选择一个特定的 Kubernetes 版本
|
||||
为控制平面选择一个特定的 Kubernetes 版本。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
|
|
@ -1,20 +1,3 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
<!--
|
||||
该文件是使用通用[生成器](https://github.com/kubernetes-sigs/reference-docs/) 从组件的 Go 源代码自动生成的。
|
||||
要了解如何生成参考文档,请阅读[贡献参考文档](/docs/contribute/generate-ref-docs/)。
|
||||
要更新参考内容,请遵循[贡献上游](/docs/contribute/generate-ref-docs/contribute-upstream/)指南。
|
||||
你可以针对 [reference-docs](https://github.com/kubernetes-sigs/reference-docs/) 项目提交文档格式问题。
|
||||
-->
|
||||
|
||||
<!--
|
||||
Pull images used by kubeadm
|
||||
-->
|
||||
|
@ -23,13 +6,11 @@ Pull images used by kubeadm
|
|||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
Pull images used by kubeadm.
|
||||
-->
|
||||
|
||||
拉取 kubeadm 使用的镜像。
|
||||
|
||||
```
|
||||
|
@ -39,7 +20,6 @@ kubeadm config images pull [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -83,12 +63,13 @@ Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this
|
|||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!--
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>EtcdLearnerMode=true|false (ALPHA - default=false)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>EtcdLearnerMode=true|false (ALPHA - default=false)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - default=false)
|
||||
-->
|
||||
一系列键值对(key=value),用于描述各种特征。可选项是:
|
||||
<br/>EtcdLearnerMode=true|false (ALPHA - 默认值=false)
|
||||
<br/>PublicKeysECDSA=true|false (ALPHA - 默认值=false)
|
||||
<br/>RootlessControlPlane=true|false (ALPHA - 默认值=false)
|
||||
一系列键值对(key=value),用于描述各种特性。可选项是:<br/>
|
||||
EtcdLearnerMode=true|false (ALPHA - 默认值=false)<br/>
|
||||
PublicKeysECDSA=true|false (ALPHA - 默认值=false)<br/>
|
||||
RootlessControlPlane=true|false (ALPHA - 默认值=false)<br/>
|
||||
UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - 默认值=false)
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -101,7 +82,7 @@ A set of key=value pairs that describe feature gates for various features. Optio
|
|||
help for pull
|
||||
-->
|
||||
<p>
|
||||
pull 操作的帮助命令
|
||||
pull 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -120,7 +101,7 @@ pull 操作的帮助命令
|
|||
Choose a container registry to pull control plane images from
|
||||
-->
|
||||
<p>
|
||||
选择用于拉取控制平面镜像的容器仓库
|
||||
选择用于拉取控制平面镜像的容器仓库。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -150,7 +131,6 @@ Choose a specific Kubernetes version for the control plane.
|
|||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
|
||||
### 从父命令继承的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
|
|
@ -1,14 +1,3 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Read an older version of the kubeadm configuration API types from a file, and output the similar config object for the newer version
|
||||
-->
|
||||
|
@ -17,7 +6,6 @@ Read an older version of the kubeadm configuration API types from a file, and ou
|
|||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
|
@ -26,8 +14,9 @@ locally in the CLI tool without ever touching anything in the cluster.
|
|||
In this version of kubeadm, the following API versions are supported:
|
||||
- kubeadm.k8s.io/v1beta3
|
||||
-->
|
||||
此命令允许你在 CLI 工具中将本地旧版本的配置对象转换为最新支持的版本,而无需变更集群中的任何内容。
|
||||
在此版本的 kubeadm 中,支持以下 API 版本:
|
||||
|
||||
此命令允许你在 CLI 工具中将本地旧版本的配置对象转换为最新支持的版本,而无需变更集群中的任何内容。在此版本的 kubeadm 中,支持以下 API 版本:
|
||||
- kubeadm.k8s.io/v1beta3
|
||||
|
||||
<!--
|
||||
|
@ -37,15 +26,14 @@ read, deserialized, defaulted, converted, validated, and re-serialized when writ
|
|||
--new-config if specified.
|
||||
-->
|
||||
|
||||
因此,无论您在此处传递 --old-config 参数的版本是什么,当写入到 stdout 或 --new-config (如果已指定)时,
|
||||
因此,无论你在此处传递 --old-config 参数的版本是什么,当写入到 stdout 或 --new-config (如果已指定)时,
|
||||
都会对 API 对象进行读取、反序列化、默认值填充、转换、验证和重新序列化。
|
||||
|
||||
<!--
|
||||
In other words, the output of this command is what kubeadm actually would read internally if you
|
||||
submitted this file to "kubeadm init"
|
||||
-->
|
||||
|
||||
换句话说,如果您将此文件传递给 "kubeadm init",则该命令的输出就是 kubeadm 实际上在内部读取的内容。
|
||||
换句话说,如果你将此文件传递给 "kubeadm init",则该命令的输出就是 kubeadm 实际上在内部读取的内容。
|
||||
|
||||
```
|
||||
kubeadm config migrate [flags]
|
||||
|
@ -54,7 +42,6 @@ kubeadm config migrate [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -72,7 +59,7 @@ kubeadm config migrate [flags]
|
|||
<!--
|
||||
<p>help for migrate</p>
|
||||
-->
|
||||
<p>migrate 操作的帮助信息</p>
|
||||
<p>migrate 操作的帮助信息。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -106,7 +93,6 @@ kubeadm config migrate [flags]
|
|||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
|
||||
### 从父命令继承的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -118,7 +104,9 @@ kubeadm config migrate [flags]
|
|||
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<!-- kubeconfig string Default: "/etc/kubernetes/admin.conf" -->
|
||||
<!--
|
||||
--kubeconfig string Default: "/etc/kubernetes/admin.conf"
|
||||
-->
|
||||
--kubeconfig string 默认值:"/etc/kubernetes/admin.conf"
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -145,4 +133,3 @@ kubeadm config migrate [flags]
|
|||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -0,0 +1,119 @@
|
|||
<!--
|
||||
Print default reset configuration, that can be used for 'kubeadm reset'
|
||||
-->
|
||||
打印默认的 reset 配置,该配置可用于 'kubeadm reset' 命令。
|
||||
|
||||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
This command prints objects such as the default reset configuration that is used for 'kubeadm reset'.
|
||||
-->
|
||||
此命令打印 'kubeadm reset' 所用的默认 reset 配置等这类对象。
|
||||
|
||||
<!--
|
||||
Note that sensitive values like the Bootstrap Token fields are replaced with placeholder values like "abcdef.0123456789abcdef" in order to pass validation but
|
||||
not perform the real computation for creating a token.
|
||||
-->
|
||||
请注意,诸如启动引导令牌(Bootstrap Token)字段这类敏感值已替换为 "abcdef.0123456789abcdef"
|
||||
这类占位符值,以便通过合法性检查,但不会真正执行创建令牌的计算。
|
||||
|
||||
```
|
||||
kubeadm config print reset-defaults [flags]
|
||||
```
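下面是一个假设性的调用示例,展示如何在打印默认 reset 配置的同时附带 KubeletConfiguration 组件配置;该命令需要在安装了 kubeadm 的机器上运行:

```shell
# 假设性示例:打印默认 reset 配置,并附带 KubeletConfiguration 组件配置。
# 需要在安装了 kubeadm 的机器上运行。
kubeadm config print reset-defaults --component-configs KubeletConfiguration
```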
|
||||
|
||||
<!--
|
||||
### Options
|
||||
-->
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
<colgroup>
|
||||
<col span="1" style="width: 10px;" />
|
||||
<col span="1" />
|
||||
</colgroup>
|
||||
<tbody>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--component-configs strings</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
A comma-separated list for component config API objects to print the default values for. Available values: [KubeProxyConfiguration KubeletConfiguration]. If this flag is not set, no component configs will be printed.
|
||||
-->
|
||||
要打印默认值的组件配置 API 对象的逗号分隔列表。
|
||||
可用值:[KubeProxyConfiguration KubeletConfiguration]。
|
||||
如果此参数未被设置,则不会打印任何组件配置。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">-h, --help</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
help for reset-defaults
|
||||
-->
|
||||
reset-defaults 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
### 从父命令继承的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
<colgroup>
|
||||
<col span="1" style="width: 10px;" />
|
||||
<col span="1" />
|
||||
</colgroup>
|
||||
<tbody>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">
|
||||
<!--
|
||||
--kubeconfig string Default: "/etc/kubernetes/admin.conf"
|
||||
-->
|
||||
--kubeconfig string 默认值:"/etc/kubernetes/admin.conf"
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
|
||||
-->
|
||||
与集群通信时所使用的 kubeconfig 文件。
|
||||
如果该参数未被设置,则可以在一组标准位置中搜索现有的 kubeconfig 文件。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--rootfs string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
[EXPERIMENTAL] The path to the 'real' host root filesystem.
|
||||
-->
|
||||
[试验性] 指向“真实”主机根文件系统的路径。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
|
|
@ -0,0 +1,138 @@
|
|||
|
||||
<!--
|
||||
Read a file containing the kubeadm configuration API and report any validation problems
|
||||
-->
|
||||
读取包含 kubeadm 配置 API 的文件,并报告所有验证问题。
|
||||
|
||||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
This command lets you validate a kubeadm configuration API file and report any warnings and errors.
|
||||
If there are no errors the exit status will be zero, otherwise it will be non-zero.
|
||||
Any unmarshaling problems such as unknown API fields will trigger errors. Unknown API versions and
|
||||
fields with invalid values will also trigger errors. Any other errors or warnings may be reported
|
||||
depending on contents of the input file.
|
||||
-->
|
||||
这个命令允许你验证 kubeadm 配置 API 文件并报告所有警告和错误。
|
||||
如果没有错误,退出状态将为零;否则,将为非零。
|
||||
任何反序列化问题(例如未知的 API 字段)都会触发错误。
|
||||
未知的 API 版本和具有无效值的字段也会触发错误。
|
||||
取决于输入文件的内容,还可能报告其他错误或警告。
|
||||
|
||||
<!--
|
||||
In this version of kubeadm, the following API versions are supported:
|
||||
-->
|
||||
在这个版本的 kubeadm 中,支持以下 API 版本:
|
||||
|
||||
- kubeadm.k8s.io/v1beta3
|
||||
|
||||
```
|
||||
kubeadm config validate [flags]
|
||||
```
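作为示意,下面先构造一个最小的 v1beta3 配置文件(其中 kubernetesVersion 等字段值仅为假设),随后即可用 validate 子命令检查它;若配置合法,命令以零状态码退出:

```shell
# 假设性示例:构造一个最小的 kubeadm 配置文件并对其进行验证。
# kubernetesVersion 等字段值仅为示意。
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
EOF

# 在安装了 kubeadm 的机器上运行;配置合法时退出状态为零:
# kubeadm config validate --config kubeadm-config.yaml && echo "配置有效"
```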
|
||||
|
||||
<!--
|
||||
### Options
|
||||
-->
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
<colgroup>
|
||||
<col span="1" style="width: 10px;" />
|
||||
<col span="1" />
|
||||
</colgroup>
|
||||
<tbody>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--allow-experimental-api</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
Allow validation of experimental, unreleased APIs.
|
||||
-->
|
||||
允许验证试验性的、未发布的 API。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--config string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
Path to a kubeadm configuration file.
|
||||
-->
|
||||
指向 kubeadm 配置文件的路径。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">-h, --help</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
help for validate
|
||||
-->
|
||||
validate 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
### 从父命令继承而来的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
<colgroup>
|
||||
<col span="1" style="width: 10px;" />
|
||||
<col span="1" />
|
||||
</colgroup>
|
||||
<tbody>
|
||||
|
||||
<tr>
|
||||
<!--
|
||||
<td colspan="2">--kubeconfig string Default: "/etc/kubernetes/admin.conf"</td>
|
||||
-->
|
||||
<td colspan="2">--kubeconfig string 默认值:"/etc/kubernetes/admin.conf"</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
|
||||
-->
|
||||
在与集群通信时要使用的 kubeconfig 文件。
|
||||
如果此标志未被设置,则会在一组标准位置中搜索现有的 kubeconfig 文件。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--rootfs string</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<p>
|
||||
<!--
|
||||
[EXPERIMENTAL] The path to the 'real' host root filesystem.
|
||||
-->
|
||||
[试验性] 指向“真实”主机根文件系统的路径。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
|
|
@ -1,14 +1,3 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Run this command in order to set up the Kubernetes control plane
|
||||
-->
|
||||
|
@ -17,19 +6,16 @@ Run this command in order to set up the Kubernetes control plane
|
|||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
Run this command in order to set up the Kubernetes control plane
|
||||
-->
|
||||
|
||||
运行此命令来搭建 Kubernetes 控制平面节点。
|
||||
|
||||
<!--
|
||||
The "init" command executes the following phases:
|
||||
-->
|
||||
|
||||
"init" 命令执行以下阶段:
|
||||
|
||||
```
|
||||
|
@ -51,13 +37,13 @@ kubeconfig Generate all kubeconfig files necessary to establis
|
|||
/kubelet Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
|
||||
/controller-manager Generate a kubeconfig file for the controller manager to use
|
||||
/scheduler Generate a kubeconfig file for the scheduler to use
|
||||
kubelet-start Write kubelet settings and (re)start the kubelet
|
||||
etcd Generate static Pod manifest file for local etcd
|
||||
/local Generate the static Pod manifest file for a local, single-node local etcd instance
|
||||
control-plane Generate all static Pod manifest files necessary to establish the control plane
|
||||
/apiserver Generates the kube-apiserver static Pod manifest
|
||||
/controller-manager Generates the kube-controller-manager static Pod manifest
|
||||
/scheduler Generates the kube-scheduler static Pod manifest
|
||||
etcd Generate static Pod manifest file for local etcd
|
||||
/local Generate the static Pod manifest file for a local, single-node local etcd instance
|
||||
kubelet-start Write kubelet settings and (re)start the kubelet
|
||||
upload-config Upload the kubeadm and kubelet configuration to a ConfigMap
|
||||
/kubeadm Upload the kubeadm ClusterConfiguration to a ConfigMap
|
||||
/kubelet Upload the kubelet component config to a ConfigMap
|
||||
|
@ -66,7 +52,7 @@ mark-control-plane Mark a node as a control-plane
|
|||
bootstrap-token Generates bootstrap tokens used to join a node to a cluster
|
||||
kubelet-finalize Updates settings relevant to the kubelet after TLS bootstrap
|
||||
/experimental-cert-rotation Enable kubelet client certificate rotation
|
||||
addon Install required addons for passing Conformance tests
|
||||
addon Install required addons for passing conformance tests
|
||||
/coredns Install the CoreDNS addon to a Kubernetes cluster
|
||||
/kube-proxy Install the kube-proxy addon to a Kubernetes cluster
|
||||
show-join-command Show the join command for control-plane and worker node
|
||||
|
@ -79,7 +65,6 @@ kubeadm init [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -206,7 +191,8 @@ Specify a stable IP address or DNS name for the control plane.
|
|||
Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
|
||||
-->
|
||||
<p>
|
||||
要连接的 CRI 套接字的路径。如果为空,则 kubeadm 将尝试自动检测此值;仅当安装了多个 CRI 或具有非标准 CRI 插槽时,才使用此选项。
|
||||
要连接的 CRI 套接字的路径。如果为空,则 kubeadm 将尝试自动检测此值;
|
||||
仅当安装了多个 CRI 或具有非标准 CRI 套接字时,才使用此选项。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -231,12 +217,17 @@ Don't apply any changes; just output what would be done.
|
|||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!--
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>EtcdLearnerMode=true|false (ALPHA - default=false)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>
|
||||
EtcdLearnerMode=true|false (ALPHA - default=false)<br/>
|
||||
PublicKeysECDSA=true|false (ALPHA - default=false)<br/>
|
||||
RootlessControlPlane=true|false (ALPHA - default=false)<br/>
|
||||
UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - default=false)
|
||||
-->
|
||||
一组用来描述各种功能特性的键值(key=value)对。选项是:
|
||||
<br/>EtcdLearnerMode=true|false (ALPHA - 默认值=false)
|
||||
<br/>PublicKeysECDSA=true|false (ALPHA - 默认值=false)
|
||||
<br/>RootlessControlPlane=true|false (ALPHA - 默认值=false)
|
||||
一组用来描述各种功能特性的键值(key=value)对。选项是:<br/>
|
||||
EtcdLearnerMode=true|false (ALPHA - 默认值=false)<br/>
|
||||
PublicKeysECDSA=true|false (ALPHA - 默认值=false)<br/>
|
||||
RootlessControlPlane=true|false (ALPHA - 默认值=false)<br/>
|
||||
UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - 默认值=false)
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -249,7 +240,7 @@ A set of key=value pairs that describe feature gates for various features. Optio
|
|||
help for init
|
||||
-->
|
||||
<p>
|
||||
init 操作的帮助命令
|
||||
init 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -282,7 +273,7 @@ A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedU
|
|||
Choose a container registry to pull control plane images from
|
||||
-->
|
||||
<p>
|
||||
选择用于拉取控制平面镜像的容器仓库
|
||||
选择用于拉取控制平面镜像的容器仓库。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -350,7 +341,7 @@ Path to a directory that contains files named "target[suffix][+patchtype].e
|
|||
Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
|
||||
-->
|
||||
<p>
|
||||
指明 pod 网络可以使用的 IP 地址段。如果设置了这个参数,控制平面将会为每一个节点自动分配 CIDRs。
|
||||
指明 Pod 网络可以使用的 IP 地址段。如果设置了这个参数,控制平面将会为每一个节点自动分配 CIDR。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -369,7 +360,7 @@ Specify range of IP addresses for the pod network. If set, the control plane wil
|
|||
Use alternative range of IP address for service VIPs.
|
||||
-->
|
||||
<p>
|
||||
为服务的虚拟 IP 地址另外指定 IP 地址段
|
||||
为服务的虚拟 IP 地址另外指定 IP 地址段。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -414,7 +405,7 @@ Don't print the key used to encrypt the control-plane certificates.
|
|||
List of phases to be skipped
|
||||
-->
|
||||
<p>
|
||||
要跳过的阶段列表
|
||||
要跳过的阶段列表。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -442,7 +433,8 @@ Skip printing of the default bootstrap token generated by 'kubeadm init'.
|
|||
The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
|
||||
-->
|
||||
<p>
|
||||
这个令牌用于建立控制平面节点与工作节点间的双向通信。格式为 [a-z0-9]{6}.[a-z0-9]{16} - 示例:abcdef.0123456789abcdef
这个令牌用于建立控制平面节点与工作节点间的双向信任。
|
||||
格式为 [a-z0-9]{6}.[a-z0-9]{16} - 示例:abcdef.0123456789abcdef
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -461,7 +453,7 @@ The token to use for establishing bidirectional trust between nodes and control-
|
|||
The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire
|
||||
-->
|
||||
<p>
|
||||
令牌被自动删除之前的持续时间(例如 1 s,2 m,3 h)。如果设置为 '0',则令牌将永不过期
|
||||
令牌被自动删除之前的持续时间(例如 1s,2m,3h)。如果设置为 '0',则令牌将永不过期。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -486,7 +478,6 @@ Upload control-plane certificates to the kubeadm-certs Secret.
|
|||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
|
||||
### 从父命令继承的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -512,4 +503,3 @@ Upload control-plane certificates to the kubeadm-certs Secret.
|
|||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -1,30 +1,17 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Install all the addons
|
||||
-->
|
||||
安装所有插件
|
||||
安装所有插件。
|
||||
|
||||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
Install all the addons
|
||||
-->
|
||||
|
||||
安装所有插件(addon)
|
||||
安装所有插件(addon)。
|
||||
|
||||
```
|
||||
kubeadm init phase addon all [flags]
|
||||
|
@ -33,7 +20,6 @@ kubeadm init phase addon all [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -124,12 +110,17 @@ Don't apply any changes; just output what would be done.
|
|||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">
|
||||
<!--
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>EtcdLearnerMode=true|false (ALPHA - default=false)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)
|
||||
A set of key=value pairs that describe feature gates for various features. Options are:<br/>
|
||||
EtcdLearnerMode=true|false (ALPHA - default=false)<br/>
|
||||
PublicKeysECDSA=true|false (ALPHA - default=false)<br/>
|
||||
RootlessControlPlane=true|false (ALPHA - default=false)<br/>
|
||||
UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - default=false)
|
||||
-->
|
||||
一组键值对(key=value),描述了各种特征。选项包括:
|
||||
<br/>EtcdLearnerMode=true|false (ALPHA - 默认值=false)
|
||||
<br/>PublicKeysECDSA=true|false (ALPHA - 默认值=false)
|
||||
<br/>RootlessControlPlane=true|false (ALPHA - 默认值=false)
|
||||
一组键值对(key=value),用来描述各种特性门控。选项包括:<br/>
|
||||
EtcdLearnerMode=true|false (ALPHA - 默认值=false)<br/>
|
||||
PublicKeysECDSA=true|false (ALPHA - 默认值=false)<br/>
|
||||
RootlessControlPlane=true|false (ALPHA - 默认值=false)<br/>
|
||||
UpgradeAddonsBeforeControlPlane=true|false (DEPRECATED - 默认值=false)
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -142,7 +133,7 @@ A set of key=value pairs that describe feature gates for various features. Optio
|
|||
help for all
|
||||
-->
|
||||
<p>
|
||||
all 操作的帮助命令
|
||||
all 操作的帮助命令。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -161,7 +152,7 @@ all 操作的帮助命令
|
|||
Choose a container registry to pull control plane images from
|
||||
-->
|
||||
<p>
|
||||
选择用于拉取控制平面镜像的容器仓库
|
||||
选择用于拉取控制平面镜像的容器仓库。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -260,7 +251,6 @@ Use alternative domain for services, e.g. "myorg.internal".
|
|||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
|
||||
### 继承于父命令的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -286,4 +276,3 @@ Use alternative domain for services, e.g. "myorg.internal".
|
|||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -1,14 +1,3 @@
|
|||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
<!--
|
||||
Install the kube-proxy addon to a Kubernetes cluster
|
||||
-->
|
||||
|
@ -17,13 +6,11 @@ Install the kube-proxy addon to a Kubernetes cluster
|
|||
<!--
|
||||
### Synopsis
|
||||
-->
|
||||
|
||||
### 概要
|
||||
|
||||
<!--
|
||||
Install the kube-proxy addon components via the API server.
|
||||
-->
|
||||
|
||||
通过 API 服务器安装 kube-proxy 附加组件。
|
||||
|
||||
```
|
||||
|
@ -33,7 +20,6 @@ kubeadm init phase addon kube-proxy [flags]
|
|||
<!--
|
||||
### Options
|
||||
-->
|
||||
|
||||
### 选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -116,7 +102,7 @@ kubeadm init phase addon kube-proxy [flags]
|
|||
<!--
|
||||
<p>help for kube-proxy</p>
|
||||
-->
|
||||
<p>kube-proxy 操作的帮助命令</p>
|
||||
<p>kube-proxy 操作的帮助命令。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -133,7 +119,7 @@ kubeadm init phase addon kube-proxy [flags]
|
|||
<!--
|
||||
<p>Choose a container registry to pull control plane images from</p>
|
||||
-->
|
||||
<p>选择用于拉取控制平面镜像的容器仓库</p>
|
||||
<p>选择用于拉取控制平面镜像的容器仓库。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -190,7 +176,7 @@ kubeadm init phase addon kube-proxy [flags]
|
|||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>
|
||||
<!--Print the addon manifests to STDOUT instead of installing them
|
||||
-->
|
||||
向 STDOUT 打印插件清单,而非安装这些插件
|
||||
向 STDOUT 打印插件清单,而非安装这些插件。
|
||||
</p></td>
|
||||
</tr>
|
||||
|
||||
|
@ -200,7 +186,6 @@ kubeadm init phase addon kube-proxy [flags]
|
|||
<!--
|
||||
### Options inherited from parent commands
|
||||
-->
|
||||
|
||||
### 继承于父命令的选项
|
||||
|
||||
<table style="width: 100%; table-layout: fixed;">
|
||||
|
@ -224,4 +209,3 @@ kubeadm init phase addon kube-proxy [flags]
|
|||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
@ -3,7 +3,6 @@ title: kubeadm init
|
|||
content_type: concept
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!--
|
||||
reviewers:
|
||||
- luxas
|
||||
|
@ -133,7 +132,7 @@ following steps:
|
|||
Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed.
|
||||
-->
|
||||
8. 通过 API 服务器安装一个 DNS 服务器 (CoreDNS) 和 kube-proxy 附加组件。
|
||||
在 Kubernetes 版本 1.11 和更高版本中,CoreDNS 是默认的 DNS 服务器。
|
||||
在 Kubernetes v1.11 和更高版本中,CoreDNS 是默认的 DNS 服务器。
|
||||
请注意,尽管已部署 DNS 服务器,但直到安装 CNI 时才调度它。
|
||||
|
||||
{{< warning >}}
|
||||
|
@ -148,7 +147,6 @@ following steps:
|
|||
|
||||
Kubeadm allows you to create a control-plane node in phases using the `kubeadm init phase` command.
|
||||
-->
|
||||
|
||||
### 在 kubeadm 中使用 init 阶段 {#init-phases}
|
||||
|
||||
Kubeadm 允许你使用 `kubeadm init phase` 命令分阶段创建控制平面节点。
|
||||
|
@ -228,7 +226,7 @@ Alternatively, you can use the `skipPhases` field under `InitConfiguration`.
|
|||
<!--
|
||||
The config file is still considered beta and may change in future versions.
|
||||
-->
|
||||
配置文件的功能仍然处于 beta 状态并且在将来的版本中可能会改变。
|
||||
配置文件的功能仍然处于 Beta 状态并且在将来的版本中可能会改变。
|
||||
{{< /caution >}}
|
||||
|
||||
<!--
|
||||
|
@ -299,6 +297,15 @@ List of feature gates:
|
|||
-->
|
||||
特性门控的列表:
|
||||
|
||||
<!--
|
||||
{{< table caption="kubeadm feature gates" >}}
|
||||
Feature | Default | Alpha | Beta | GA
|
||||
:-------|:--------|:------|:-----|:----
|
||||
`PublicKeysECDSA` | `false` | 1.19 | - | -
|
||||
`RootlessControlPlane` | `false` | 1.22 | - | -
|
||||
`UnversionedKubeletConfigMap` | `true` | 1.22 | 1.23 | 1.25
|
||||
{{< /table >}}
|
||||
-->
|
||||
{{< table caption="kubeadm 特性门控" >}}
|
||||
特性 | 默认值 | Alpha | Beta | GA
|
||||
:-------|:--------|:------|:-----|:----
|
||||
|
@ -327,8 +334,8 @@ switch between the RSA and ECDSA algorithms on the fly or during upgrades.
|
|||
-->
|
||||
`PublicKeysECDSA`
|
||||
: 可用于创建集群时使用 ECDSA 证书而不是默认 RSA 算法。
|
||||
支持用 `kubeadm certs renew` 更新现有 ECDSA 证书,
|
||||
但你不能在集群运行期间或升级期间切换 RSA 和 ECDSA 算法。
|
||||
支持用 `kubeadm certs renew` 更新现有 ECDSA 证书,
|
||||
但你不能在集群运行期间或升级期间切换 RSA 和 ECDSA 算法。
|
||||
|
||||
<!--
|
||||
`RootlessControlPlane`
|
||||
|
@ -339,9 +346,9 @@ you upgrade to a newer version of Kubernetes.
|
|||
-->
|
||||
`RootlessControlPlane`
|
||||
: 设置此标志来配置 kubeadm 所部署的控制平面组件中的静态 Pod 容器
|
||||
`kube-apiserver`、`kube-controller-manager`、`kube-scheduler` 和 `etcd` 以非 root 用户身份运行。
|
||||
如果未设置该标志,则这些组件以 root 身份运行。
|
||||
你可以在升级到更新版本的 Kubernetes 之前更改此特性门控的值。
|
||||
`kube-apiserver`、`kube-controller-manager`、`kube-scheduler` 和 `etcd` 以非 root 用户身份运行。
|
||||
如果未设置该标志,则这些组件以 root 身份运行。
|
||||
你可以在升级到更新版本的 Kubernetes 之前更改此特性门控的值。
|
||||
|
||||
<!--
|
||||
`UnversionedKubeletConfigMap`
|
||||
|
@ -356,14 +363,74 @@ if that does not succeed, kubeadm falls back to using the legacy (versioned) nam
|
|||
-->
|
||||
`UnversionedKubeletConfigMap`
|
||||
: 此标志控制 kubeadm 存储 kubelet 配置数据的 {{<glossary_tooltip text="ConfigMap" term_id="configmap" >}} 的名称。
|
||||
在未指定此标志或设置为 `true` 的情况下,此 ConfigMap 被命名为 `kubelet-config`。
|
||||
如果将此标志设置为 `false`,则此 ConfigMap 的名称会包括 Kubernetes 的主要版本和次要版本(例如:`kubelet-config-{{< skew currentVersion >}}`)。
|
||||
Kubeadm 会确保用于读写 ConfigMap 的 RBAC 规则适合你设置的值。
|
||||
当 kubeadm 写入此 ConfigMap 时(在 `kubeadm init` 或 `kubeadm upgrade apply` 期间),
|
||||
kubeadm 根据 `UnversionedKubeletConfigMap` 的设置值来执行操作。
|
||||
当读取此 ConfigMap 时(在 `kubeadm join`、`kubeadm reset`、`kubeadm upgrade ...` 期间),
|
||||
kubeadm 尝试首先使用无版本(后缀)的 ConfigMap 名称;
|
||||
如果不成功,kubeadm 将回退到使用该 ConfigMap 的旧(带版本号的)名称。
|
||||
在未指定此标志或设置为 `true` 的情况下,此 ConfigMap 被命名为 `kubelet-config`。
|
||||
如果将此标志设置为 `false`,则此 ConfigMap 的名称会包括 Kubernetes 的主要版本和次要版本
|
||||
(例如:`kubelet-config-{{< skew currentVersion >}}`)。
|
||||
kubeadm 会确保用于读写 ConfigMap 的 RBAC 规则适合你设置的值。
|
||||
当 kubeadm 写入此 ConfigMap 时(在 `kubeadm init` 或 `kubeadm upgrade apply` 期间),
|
||||
kubeadm 根据 `UnversionedKubeletConfigMap` 的设置值来执行操作。
|
||||
当读取此 ConfigMap 时(在 `kubeadm join`、`kubeadm reset`、`kubeadm upgrade ...` 期间),
|
||||
kubeadm 尝试首先使用无版本(后缀)的 ConfigMap 名称;
|
||||
如果不成功,kubeadm 将回退到使用该 ConfigMap 的旧(带版本号的)名称。
|
||||
|
||||
<!--
|
||||
List of deprecated feature gates:
|
||||
-->
|
||||
已弃用特性门控的列表:
|
||||
|
||||
<!--
|
||||
{{< table caption="kubeadm deprecated feature gates" >}}
|
||||
Feature | Default
|
||||
:-------|:--------
|
||||
`UpgradeAddonsBeforeControlPlane` | `false`
|
||||
{{< /table >}}
|
||||
-->
|
||||
{{< table caption="kubeadm 弃用的特性门控" >}}
|
||||
特性 | 默认值
|
||||
:-------|:--------
|
||||
`UpgradeAddonsBeforeControlPlane` | `false`
|
||||
{{< /table >}}
|
||||
|
||||
<!--
|
||||
Feature gate descriptions:
|
||||
-->
|
||||
特性门控描述:
|
||||
|
||||
<!--
|
||||
`UpgradeAddonsBeforeControlPlane`
|
||||
: This is a **disabled** feature gate that was introduced for Kubernetes v1.28,
in order to allow reactivating a legacy and deprecated behavior during cluster upgrade.
|
||||
For kubeadm versions prior to v1.28, kubeadm upgrades cluster addons
|
||||
(including CoreDNS and kube-proxy) immediately during `kubeadm upgrade apply`,
|
||||
regardless of whether there are other control plane instances that have not been upgraded.
|
||||
This may cause compatibility problems. Since v1.28, kubeadm defaults to a mode that
|
||||
always checks whether all the control plane instances have been upgraded before starting
|
||||
to upgrade the addons. This behavior is applied to both `kubeadm upgrade apply` and
|
||||
`kubeadm upgrade node`. kubeadm determines whether a control plane instance
|
||||
has been upgraded by checking whether the image of the kube-apiserver Pod has
|
||||
been upgraded. You must perform control plane instance upgrades sequentially or
|
||||
at least ensure that the last control plane instance upgrade is not started until
|
||||
all the other control plane instances have been upgraded completely, and the addons
|
||||
upgrade will be performed after the last control plane instance is upgraded.
|
||||
The deprecated `UpgradeAddonsBeforeControlPlane` feature gate gives you a chance
|
||||
to keep the old upgrade behavior. You should not need this old behavior; if you do,
|
||||
you should consider changing your cluster or upgrade processes, as this
|
||||
feature gate will be removed in a future release.
|
||||
-->
|
||||
`UpgradeAddonsBeforeControlPlane`
|
||||
: 这是一个在 Kubernetes v1.28 中引入的默认禁用的特性门控,
|
||||
目的是在集群升级期间允许重新激活旧版且已弃用的行为。对于早于 v1.28 的 kubeadm 版本,
|
||||
在 `kubeadm upgrade apply` 期间会立即升级集群插件(包括 CoreDNS 和 kube-proxy),
|
||||
而不管是否有其他未升级的控制平面实例。这可能导致兼容性问题。从 v1.28 开始,
|
||||
kubeadm 默认采用的模式是在开始升级插件之前始终检查是否所有控制平面实例都已完成升级。
|
||||
此行为适用于 `kubeadm upgrade apply` 和 `kubeadm upgrade node`。
|
||||
kubeadm 通过检查 kube-apiserver Pod 的镜像来确定控制平面实例是否已升级。
|
||||
你必须按顺序执行控制平面实例的升级,
|
||||
或者至少确保在所有其他控制平面实例完全升级之前不启动最后一个控制平面实例的升级,
|
||||
并且在最后一个控制平面实例升级完成后再执行插件的升级。
|
||||
这个弃用的 `UpgradeAddonsBeforeControlPlane` 特性门控使你有机会保留旧的升级行为。
|
||||
你不应该需要这种旧的行为;如果确实需要,请考虑更改集群或升级流程,
|
||||
因为此特性门控将在未来的版本中被移除。
|
||||
|
||||
<!--
|
||||
### Adding kube-proxy parameters {#kube-proxy}
|
||||
|
|
|
@ -261,6 +261,18 @@ the `cgroupDriver` field under `KubeletConfiguration`, kubeadm defaults it to `s
|
|||
`KubeletConfiguration` 下设置 `cgroupDriver` 字段,kubeadm 默认使用 `systemd`。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
In Kubernetes v1.28, with the `KubeletCgroupDriverFromCRI`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
enabled and a container runtime that supports the `RuntimeConfig` CRI RPC,
|
||||
the kubelet automatically detects the appropriate cgroup driver from the runtime,
|
||||
and ignores the `cgroupDriver` setting within the kubelet configuration.
|
||||
-->
|
||||
在 Kubernetes v1.28 中,启用 `KubeletCgroupDriverFromCRI`
|
||||
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)结合支持
|
||||
`RuntimeConfig` CRI RPC 的容器运行时,kubelet 会自动从运行时检测适当的 Cgroup
|
||||
驱动程序,并忽略 kubelet 配置中的 `cgroupDriver` 设置。
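
在未启用自动检测时,cgroup 驱动需要在 kubelet 配置中显式设置。一个最小示例草案
(`KubeletConfiguration` 来自 `kubelet.config.k8s.io/v1beta1`;此处以 kubeadm
默认的 `systemd` 为例):

```yaml
# 示例(草案):显式设置 kubelet 的 cgroup 驱动
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```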
|
||||
|
||||
<!--
|
||||
If you configure `systemd` as the cgroup driver for the kubelet, you must also
|
||||
configure `systemd` as the cgroup driver for the container runtime. Refer to
|
||||
|
@ -437,6 +449,14 @@ When using kubeadm, manually configure the
|
|||
当使用 kubeadm 时,请手动配置
|
||||
[kubelet 的 cgroup 驱动](/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver)。
|
||||
|
||||
<!--
|
||||
In Kubernetes v1.28, you can enable automatic detection of the
|
||||
cgroup driver as an alpha feature. See [systemd cgroup driver](#systemd-cgroup-driver)
|
||||
for more details.
|
||||
-->
|
||||
在 Kubernetes v1.28 中,你可以启用 cgroup 驱动自动检测这一 Alpha 级别特性。
|
||||
详情参阅 [systemd cgroup 驱动](#systemd-cgroup-driver)。
|
||||
|
||||
<!--
|
||||
#### Overriding the sandbox (pause) image {#override-pause-image-containerd}
|
||||
|
||||
|
@ -509,6 +529,14 @@ in sync.
|
|||
你还应该注意当使用 CRI-O 时,并且 CRI-O 的 cgroup 设置为 `cgroupfs` 时,必须将 `conmon_cgroup` 设置为值 `pod`。
|
||||
通常需要保持 kubelet 的 cgroup 驱动配置(通常通过 kubeadm 完成)和 CRI-O 同步。
|
||||
|
||||
<!--
|
||||
In Kubernetes v1.28, you can enable automatic detection of the
|
||||
cgroup driver as an alpha feature. See [systemd cgroup driver](#systemd-cgroup-driver)
|
||||
for more details.
|
||||
-->
|
||||
在 Kubernetes v1.28 中,你可以启用 cgroup 驱动自动检测这一 Alpha 级别特性。
|
||||
详情参阅 [systemd cgroup 驱动](#systemd-cgroup-driver)。
|
||||
|
||||
<!--
|
||||
For CRI-O, the CRI socket is `/var/run/crio/crio.sock` by default.
|
||||
-->
|
||||
|
|
|
@ -426,12 +426,13 @@ including tools for logging, monitoring, network policy, visualization, and cont
|
|||
* Learn more about `kOps` [advanced usage](https://kops.sigs.k8s.io/) for tutorials,
|
||||
best practices and advanced configuration options.
|
||||
* Follow `kOps` community discussions on Slack:
|
||||
[community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors).
|
||||
[community discussions](https://kops.sigs.k8s.io/contributing/#other-ways-to-communicate-with-the-contributors).
|
||||
(visit https://slack.k8s.io/ for an invitation to this Slack workspace).
|
||||
* Contribute to `kOps` by addressing or raising an issue [GitHub Issues](https://github.com/kubernetes/kops/issues).
|
||||
-->
|
||||
* 了解有关 Kubernetes 的[概念](/zh-cn/docs/concepts/)和
|
||||
[`kubectl`](/zh-cn/docs/reference/kubectl/) 的更多信息。
|
||||
* 参阅 `kOps` [进阶用法](https://kops.sigs.k8s.io/) 获取教程、最佳实践和进阶配置选项。
|
||||
* 通过 Slack:[社区讨论](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors)
|
||||
参与 `kOps` 社区讨论。
|
||||
* 通过 Slack:[社区讨论](https://kops.sigs.k8s.io/contributing/#other-ways-to-communicate-with-the-contributors)
|
||||
参与 `kOps` 社区讨论。(访问 https://slack.k8s.io/ 获取此 Slack 工作空间的邀请)
|
||||
* 通过解决或提出一个 [GitHub Issue](https://github.com/kubernetes/kops/issues) 来为 `kOps` 做贡献。
|
||||
|
|
|
@ -136,16 +136,16 @@ For detailed instructions and other prerequisites, see [Installing kubeadm](/doc
|
|||
|
||||
{{< note >}}
|
||||
<!--
|
||||
If you have already installed kubeadm, run
|
||||
`apt-get update && apt-get upgrade` or
|
||||
`yum update` to get the latest version of kubeadm.
|
||||
If you have already installed kubeadm, see the first two steps of the
|
||||
[Upgrading Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes) document for instructions on how to upgrade kubeadm.
|
||||
|
||||
When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
|
||||
kubeadm to tell it what to do. This crashloop is expected and normal.
|
||||
After you initialize your control-plane, the kubelet runs normally.
|
||||
-->
|
||||
如果你已经安装了kubeadm,执行 `apt-get update && apt-get upgrade` 或 `yum update`
|
||||
以获取 kubeadm 的最新版本。
|
||||
如果你已经安装了 kubeadm,
|
||||
请查看[升级 Linux 节点](/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes)文档的前两步,
|
||||
了解如何升级 kubeadm 的说明。
|
||||
|
||||
升级时,kubelet 每隔几秒钟重新启动一次,
|
||||
在 crashloop 状态中等待 kubeadm 发布指令。crashloop 状态是正常现象。
|
||||
|
|
|
@ -0,0 +1,243 @@
|
|||
---
|
||||
title: 更改 Kubernetes 软件包仓库
|
||||
content_type: task
|
||||
weight: 120
|
||||
---
|
||||
<!--
|
||||
title: Changing The Kubernetes Package Repository
|
||||
content_type: task
|
||||
weight: 120
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
This page explains how to switch from one Kubernetes package repository to another
|
||||
when upgrading Kubernetes minor releases. Unlike deprecated Google-hosted
|
||||
repositories, the Kubernetes package repositories are structured in a way that
|
||||
there's a dedicated package repository for each Kubernetes minor version.
|
||||
-->
|
||||
本文阐述如何在升级 Kubernetes 小版本时从一个软件包仓库切换到另一个。
|
||||
与弃用的 Google 托管仓库不同,Kubernetes 软件包仓库的结构是每个 Kubernetes
|
||||
小版本都有一个专门的软件包仓库。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
<!--
|
||||
This document assumes that you're already using the Kubernetes community-owned
|
||||
package repositories. If that's not the case, it's strongly recommended to migrate
to the Kubernetes package repositories.
|
||||
-->
|
||||
本文假设你已经在使用 Kubernetes 社区所拥有的软件包仓库。
|
||||
如果不是这种情况,强烈建议迁移到 Kubernetes 软件包仓库。
|
||||
|
||||
<!--
|
||||
### Verifying if the Kubernetes package repositories are used
|
||||
|
||||
If you're unsure if you're using the Kubernetes package repositories or the
|
||||
Google-hosted repository, take the following steps to verify:
|
||||
-->
|
||||
### 验证是否正在使用 Kubernetes 软件包仓库
|
||||
|
||||
如果你不确定自己是在使用 Kubernetes 软件包仓库还是在使用 Google 托管的仓库,
|
||||
可以执行以下步骤进行验证:
|
||||
|
||||
{{< tabs name="k8s_install_versions" >}}
|
||||
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
|
||||
|
||||
<!--
|
||||
Print the contents of the file that defines the Kubernetes `apt` repository:
|
||||
|
||||
```shell
|
||||
# On your system, this configuration file could have a different name
|
||||
pager /etc/apt/sources.list.d/kubernetes.list
|
||||
```
|
||||
|
||||
If you see a line similar to:
|
||||
-->
|
||||
打印定义 Kubernetes `apt` 仓库的文件的内容:
|
||||
|
||||
```shell
|
||||
# 在你的系统上,此配置文件可能具有不同的名称
|
||||
pager /etc/apt/sources.list.d/kubernetes.list
|
||||
```
|
||||
|
||||
如果你看到类似以下的一行:
|
||||
|
||||
```
|
||||
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/deb/ /
|
||||
```
|
||||
|
||||
<!--
|
||||
**You're using the Kubernetes package repositories and this guide applies to you.**
|
||||
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.
|
||||
-->
|
||||
**你正在使用 Kubernetes 软件包仓库,本指南适用于你。**
|
||||
否则,强烈建议迁移到 Kubernetes 软件包仓库。
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS、RHEL 或 Fedora" %}}
|
||||
|
||||
<!--
|
||||
Print the contents of the file that defines the Kubernetes `yum` repository:
|
||||
|
||||
```shell
|
||||
# On your system, this configuration file could have a different name
|
||||
cat /etc/yum.repos.d/kubernetes.repo
|
||||
```
|
||||
|
||||
If you see `baseurl` similar to the `baseurl` in the output below:
|
||||
-->
|
||||
打印定义 Kubernetes `yum` 仓库的文件的内容:
|
||||
|
||||
```shell
|
||||
# 在你的系统上,此配置文件可能具有不同的名称
|
||||
cat /etc/yum.repos.d/kubernetes.repo
|
||||
```
|
||||
|
||||
如果你看到的 `baseurl` 类似以下输出中的 `baseurl`:
|
||||
|
||||
```
|
||||
[kubernetes]
|
||||
name=Kubernetes
|
||||
baseurl=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/repodata/repomd.xml.key
|
||||
exclude=kubelet kubeadm kubectl
|
||||
```
|
||||
|
||||
<!--
|
||||
**You're using the Kubernetes package repositories and this guide applies to you.**
|
||||
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.
|
||||
-->
|
||||
**你正在使用 Kubernetes 软件包仓库,本指南适用于你。**
|
||||
否则,强烈建议迁移到 Kubernetes 软件包仓库。
|
||||
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The URL used for the Kubernetes package repositories is not limited to `pkgs.k8s.io`,
|
||||
it can also be one of:
|
||||
-->
|
||||
Kubernetes 软件包仓库所用的 URL 不仅限于 `pkgs.k8s.io`,还可以是以下之一:
|
||||
|
||||
- `pkgs.k8s.io`
|
||||
- `pkgs.kubernetes.io`
|
||||
- `packages.kubernetes.io`
|
||||
{{</ note >}}
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
<!--
|
||||
## Switching to another Kubernetes package repository
|
||||
|
||||
This step should be done upon upgrading from one to another Kubernetes minor
|
||||
release in order to get access to the packages of the desired Kubernetes minor
|
||||
version.
|
||||
-->
|
||||
## 切换到其他 Kubernetes 软件包仓库 {#switching-to-another-kubernetes-package-repository}
|
||||
|
||||
在从一个 Kubernetes 小版本升级到另一个版本时,应执行此步骤以获取所需 Kubernetes 小版本的软件包访问权限。
|
||||
|
||||
{{< tabs name="k8s_install_versions" >}}
|
||||
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
|
||||
|
||||
<!--
|
||||
1. Open the file that defines the Kubernetes `apt` repository using a text editor of your choice:
|
||||
-->
|
||||
1. 使用你所选择的文本编辑器打开定义 Kubernetes `apt` 仓库的文件:
|
||||
|
||||
```shell
|
||||
nano /etc/apt/sources.list.d/kubernetes.list
|
||||
```
|
||||
|
||||
<!--
|
||||
You should see a single line with the URL that contains your current Kubernetes
|
||||
minor version. For example, if you're using v{{< skew currentVersionAddMinor -1 "." >}},
|
||||
you should see this:
|
||||
-->
|
||||
你应该看到一行包含当前 Kubernetes 小版本的 URL。
|
||||
例如,如果你正在使用 v{{< skew currentVersionAddMinor -1 "." >}},你应该看到类似以下的输出:
|
||||
|
||||
```
|
||||
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/deb/ /
|
||||
```
|
||||
|
||||
<!--
|
||||
2. Change the version in the URL to **the next available minor release**, for example:
|
||||
-->
|
||||
2. 将 URL 中的版本更改为**下一个可用的小版本**,例如:
|
||||
|
||||
```
|
||||
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /
|
||||
```
|
||||
|
||||
<!--
|
||||
3. Save the file and exit your text editor. Continue following the relevant upgrade instructions.
|
||||
-->
|
||||
3. 保存文件并退出文本编辑器。继续按照相关的升级说明进行操作。
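
上述第 1~3 步的手工编辑也可以用 `sed` 脚本化。下面是一个先在本地副本上验证的示例草案
(文件内容与 `v1.27` → `v1.28` 的版本号均为演示用的假设;确认结果无误后再对真实的
`/etc/apt/sources.list.d/kubernetes.list` 执行同样的命令):

```shell
# 在本地副本上演示版本替换(版本号为假设值)
printf '%s\n' \
  'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /' \
  > kubernetes.list

# 将 URL 中的小版本改为下一个可用版本
sed -i 's|/core:/stable:/v1\.27/|/core:/stable:/v1.28/|' kubernetes.list
cat kubernetes.list
```

确认输出中的版本号正确后,再应用到真实文件。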
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS、RHEL 或 Fedora" %}}
|
||||
|
||||
<!--
|
||||
1. Open the file that defines the Kubernetes `yum` repository using a text editor of your choice:
|
||||
-->
|
||||
1. 使用你所选择的文本编辑器打开定义 Kubernetes `yum` 仓库的文件:
|
||||
|
||||
```shell
|
||||
nano /etc/yum.repos.d/kubernetes.repo
|
||||
```
|
||||
|
||||
<!--
|
||||
You should see a file with two URLs that contain your current Kubernetes
|
||||
minor version. For example, if you're using v{{< skew currentVersionAddMinor -1 "." >}},
|
||||
you should see this:
|
||||
-->
|
||||
你应该看到一个文件包含当前 Kubernetes 小版本的两个 URL。
|
||||
例如,如果你正在使用 v{{< skew currentVersionAddMinor -1 "." >}},你应该看到类似以下的输出:
|
||||
|
||||
```
|
||||
[kubernetes]
|
||||
name=Kubernetes
|
||||
baseurl=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/repodata/repomd.xml.key
|
||||
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
|
||||
```
|
||||
|
||||
<!--
|
||||
2. Change the version in these URLs to **the next available minor release**, for example:
|
||||
-->
|
||||
2. 将这些 URL 中的版本更改为**下一个可用的小版本**,例如:
|
||||
|
||||
```
|
||||
[kubernetes]
|
||||
name=Kubernetes
|
||||
baseurl=https://pkgs.k8s.io/core:/stable:/v{{< param "version" >}}/rpm/
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< param "version" >}}/rpm/repodata/repomd.xml.key
|
||||
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
|
||||
```
|
||||
|
||||
<!--
|
||||
3. Save the file and exit your text editor. Continue following the relevant upgrade instructions.
|
||||
-->
|
||||
3. 保存文件并退出文本编辑器。继续按照相关的升级说明进行操作。
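
与 apt 类似,`yum` 仓库文件中的版本替换也可以脚本化。示例草案(文件内容与版本号为演示用的假设;
注意 `baseurl` 和 `gpgkey` 两处都要更新,因此使用 `g` 标志):

```shell
# 在本地副本上演示 yum 仓库版本替换(版本号为假设值)
printf '%s\n' \
  '[kubernetes]' \
  'name=Kubernetes' \
  'baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/' \
  'gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key' \
  > kubernetes.repo

# baseurl 和 gpgkey 两行都包含版本号,使用 g 标志全部替换
sed -i 's|/core:/stable:/v1\.27/|/core:/stable:/v1.28/|g' kubernetes.repo
grep 'v1.28' kubernetes.repo
```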
|
||||
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
* See how to [Upgrade Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/).
|
||||
* See how to [Upgrade Windows nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/).
|
||||
-->
|
||||
* 参见如何[升级 Linux 节点的说明](/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/)。
|
||||
* 参见如何[升级 Windows 节点的说明](/zh-cn/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/)。
|
|
@ -371,7 +371,7 @@ kubectl describe pod goproxy
|
|||
-->
|
||||
## 定义 gRPC 存活探针
|
||||
|
||||
{{< feature-state for_k8s_version="v1.24" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="stable" >}}
|
||||
|
||||
<!--
|
||||
If your application implements the
|
||||
|
@ -868,7 +868,7 @@ was set.
|
|||
|
||||
<!--
|
||||
In 1.25 and above, users can specify a probe-level `terminationGracePeriodSeconds`
|
||||
as part of the probe specification. When both a pod- and probe-level
|
||||
as part of the probe specification. When both a pod- and probe-level
|
||||
`terminationGracePeriodSeconds` are set, the kubelet will use the probe-level value.
|
||||
-->
|
||||
在 1.25 及以上版本中,用户可以指定一个探针层面的 `terminationGracePeriodSeconds`
|
||||
|
@ -880,13 +880,13 @@ as part of the probe specification. When both a pod- and probe-level
|
|||
Beginning in Kubernetes 1.25, the `ProbeTerminationGracePeriod` feature is enabled
|
||||
by default. For users choosing to disable this feature, please note the following:
|
||||
|
||||
* The `ProbeTerminationGracePeriod` feature gate is only available on the API Server.
|
||||
* The `ProbeTerminationGracePeriod` feature gate is only available on the API Server.
|
||||
The kubelet always honors the probe-level `terminationGracePeriodSeconds` field if
|
||||
it is present on a Pod.
|
||||
-->
|
||||
{{< note >}}
|
||||
从 Kubernetes 1.25 开始,默认启用 `ProbeTerminationGracePeriod` 特性。
|
||||
选择禁用此特性的用户,请注意以下事项:
|
||||
选择禁用此特性的用户,请注意以下事项:
|
||||
|
||||
* `ProbeTerminationGracePeriod` 特性门控只能用在 API 服务器上。
|
||||
kubelet 始终优先选用探针级别 `terminationGracePeriodSeconds` 字段
|
||||
|
@ -914,7 +914,7 @@ by default. For users choosing to disable this feature, please note the followin
|
|||
<!--
|
||||
For example:
|
||||
-->
|
||||
例如:
|
||||
例如:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
|
|
|
@ -4,6 +4,10 @@ description: 设置监控和日志记录以对集群进行故障排除或调试
|
|||
weight: 40
|
||||
content_type: concept
|
||||
no_list: true
|
||||
card:
|
||||
name: tasks
|
||||
weight: 999
|
||||
title: 寻求帮助
|
||||
---
|
||||
<!--
|
||||
title: "Monitoring, Logging, and Debugging"
|
||||
|
@ -14,6 +18,10 @@ reviewers:
|
|||
- davidopp
|
||||
content_type: concept
|
||||
no_list: true
|
||||
card:
|
||||
name: tasks
|
||||
weight: 999
|
||||
title: Getting help
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
|
|
@ -0,0 +1,25 @@
|
|||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: job-backoff-limit-per-index-example
|
||||
spec:
|
||||
completions: 10
|
||||
parallelism: 3
|
||||
completionMode: Indexed # 此特性所必需的字段
|
||||
backoffLimitPerIndex: 1 # 每个索引最大失败次数
|
||||
maxFailedIndexes: 5 # 终止 Job 执行之前失败索引的最大个数
|
||||
template:
|
||||
spec:
|
||||
restartPolicy: Never # 此特性所必需的字段
|
||||
containers:
|
||||
- name: example
|
||||
image: python
|
||||
command: # 作业失败,因为至少有一个索引失败(此处所有偶数索引均失败),
|
||||
# 但由于未超过 maxFailedIndexes,所以所有索引都会被执行
|
||||
- python3
|
||||
- -c
|
||||
- |
|
||||
import os, sys
|
||||
print("Hello world")
|
||||
if int(os.environ.get("JOB_COMPLETION_INDEX")) % 2 == 0:
|
||||
sys.exit(1)
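
清单中的内联脚本恰好让偶数索引失败。下面用一个本地小脚本复现这一逻辑,便于理解失败索引的分布
(函数名为演示用的假设):

```python
# 复现清单中内联脚本的退出逻辑:偶数索引失败(退出码 1),奇数索引成功(退出码 0)
def exit_code(index: int) -> int:
    return 1 if index % 2 == 0 else 0

# completions: 10,因此完成索引为 0~9
failed = [i for i in range(10) if exit_code(i) == 1]
print(failed)
```

偶数索引共 5 个失败,未超过 `maxFailedIndexes: 5`,因此所有索引都会被执行。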
|
|
@ -1,4 +1,9 @@
|
|||
branches:
|
||||
- release: "1.24"
|
||||
finalPatchRelease: "1.24.17"
|
||||
endOfLifeDate: 2023-07-28
|
||||
note: >-
|
||||
1.24.17 was released in August 2023 (after the EOL date) to fix CVE-2023-3676 and CVE-2023-3955
|
||||
- release: "1.23"
|
||||
finalPatchRelease: "1.23.17"
|
||||
endOfLifeDate: 2023-02-28
|
||||
|
|
|
@@ -5,12 +5,17 @@ schedules:
   - release: 1.28
     releaseDate: 2023-08-15
     next:
-      release: 1.28.1
-      cherryPickDeadline: 2023-09-08
-      targetDate: 2023-09-13
+      release: 1.28.2
+      cherryPickDeadline: 2023-09-08
+      targetDate: 2023-09-13
     maintenanceModeStartDate: 2024-08-28
     endOfLifeDate: 2024-10-28
     previousPatches:
+      - release: 1.28.1
+        cherryPickDeadline: N/A
+        targetDate: 2023-08-23
+        note: >-
+          Unplanned release to include CVE fixes
       - release: 1.28.0
         targetDate: 2023-08-15
   - release: 1.27
@@ -18,10 +23,13 @@ schedules:
     maintenanceModeStartDate: 2024-04-28
     endOfLifeDate: 2024-06-28
     next:
-      release: 1.27.5
-      cherryPickDeadline: 2023-08-04
-      targetDate: 2023-08-23
+      release: 1.27.6
+      cherryPickDeadline: 2023-09-08
+      targetDate: 2023-09-13
     previousPatches:
+      - release: 1.27.5
+        cherryPickDeadline: 2023-08-04
+        targetDate: 2023-08-23
       - release: 1.27.4
         cherryPickDeadline: 2023-07-14
         targetDate: 2023-07-19
@@ -44,10 +52,13 @@ schedules:
     maintenanceModeStartDate: 2023-12-28
     endOfLifeDate: 2024-02-28
     next:
-      release: 1.26.8
-      cherryPickDeadline: 2023-08-04
-      targetDate: 2023-08-23
+      release: 1.26.9
+      cherryPickDeadline: 2023-09-08
+      targetDate: 2023-09-13
     previousPatches:
+      - release: 1.26.8
+        cherryPickDeadline: 2023-08-04
+        targetDate: 2023-08-23
       - release: 1.26.7
         cherryPickDeadline: 2023-07-14
         targetDate: 2023-07-19
@@ -79,10 +90,13 @@ schedules:
     maintenanceModeStartDate: 2023-08-28
     endOfLifeDate: 2023-10-28
     next:
-      release: 1.25.13
-      cherryPickDeadline: 2023-08-04
-      targetDate: 2023-08-23
+      release: 1.25.14
+      cherryPickDeadline: 2023-09-08
+      targetDate: 2023-09-13
     previousPatches:
+      - release: 1.25.13
+        cherryPickDeadline: 2023-08-04
+        targetDate: 2023-08-23
       - release: 1.25.12
         cherryPickDeadline: 2023-07-14
         targetDate: 2023-07-19
@@ -128,69 +142,3 @@ schedules:
       - release: 1.25.0
         cherryPickDeadline: ""
         targetDate: 2022-08-23
-  - release: 1.24
-    releaseDate: 2022-05-03
-    maintenanceModeStartDate: 2023-05-28
-    endOfLifeDate: 2023-07-28
-    next:
-      release: N/A
-      cherryPickDeadline: ""
-      targetDate: ""
-    previousPatches:
-      - release: 1.24.16
-        cherryPickDeadline: 2023-07-14
-        targetDate: 2023-07-19
-      - release: 1.24.15
-        cherryPickDeadline: 2023-06-09
-        targetDate: 2023-06-14
-      - release: 1.24.14
-        cherryPickDeadline: 2023-05-12
-        targetDate: 2023-05-17
-      - release: 1.24.13
-        cherryPickDeadline: 2023-04-07
-        targetDate: 2023-04-12
-      - release: 1.24.12
-        cherryPickDeadline: 2023-03-10
-        targetDate: 2023-03-15
-      - release: 1.24.11
-        cherryPickDeadline: 2023-02-10
-        targetDate: 2023-02-15
-        note: >-
-          [Some container images might be **unsigned** due to a temporary issue with the promotion process](https://groups.google.com/a/kubernetes.io/g/dev/c/MwSx761slM0/m/4ajkeUl0AQAJ)
-      - release: 1.24.10
-        cherryPickDeadline: 2023-01-13
-        targetDate: 2023-01-18
-      - release: 1.24.9
-        cherryPickDeadline: 2022-12-02
-        targetDate: 2022-12-08
-      - release: 1.24.8
-        cherryPickDeadline: 2022-11-04
-        targetDate: 2022-11-09
-      - release: 1.24.7
-        cherryPickDeadline: 2022-10-07
-        targetDate: 2022-10-12
-      - release: 1.24.6
-        cherryPickDeadline: 2022-09-20
-        targetDate: 2022-09-21
-        note: >-
-          [Out-of-Band release to fix the regression introduced in 1.24.5](https://groups.google.com/a/kubernetes.io/g/dev/c/tA6LNOQTR4Q/m/zL73maPTAQAJ)
-      - release: 1.24.5
-        cherryPickDeadline: 2022-09-09
-        targetDate: 2022-09-14
-        note: >-
-          [Regression](https://groups.google.com/a/kubernetes.io/g/dev/c/tA6LNOQTR4Q/m/zL73maPTAQAJ)
-      - release: 1.24.4
-        cherryPickDeadline: 2022-08-12
-        targetDate: 2022-08-17
-      - release: 1.24.3
-        cherryPickDeadline: 2022-07-08
-        targetDate: 2022-07-13
-      - release: 1.24.2
-        cherryPickDeadline: 2022-06-10
-        targetDate: 2022-06-15
-      - release: 1.24.1
-        cherryPickDeadline: 2022-05-20
-        targetDate: 2022-05-24
-      - release: 1.24.0
-        cherryPickDeadline: ""
-        targetDate: 2022-05-03