Merge pull request #45431 from chanieljdan/merged-main-dev-1.30
Merge main branch into dev-1.30
commit c268f09715

@ -22,21 +22,139 @@ external components communicate with one another.
The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes
(for example: Pods, Namespaces, ConfigMaps, and Events).

Most operations can be performed through the [kubectl](/docs/reference/kubectl/)
command-line interface or other command-line tools, such as
[kubeadm](/docs/reference/setup-tools/kubeadm/), which in turn use the API.
However, you can also access the API directly using REST calls.

Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/)
if you are writing an application using the Kubernetes API.
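As a sketch of what a direct REST call looks like, the URL path for a resource follows a fixed pattern; the helper below and the namespace and resource names in it are illustrative, not part of any official client:

```python
# Build the request path for a Kubernetes resource.
# Core-group resources (group "", e.g. Pods) live under /api/<version>;
# everything else lives under /apis/<group>/<version>.
def resource_path(version, resource, group="", namespace=None, name=None):
    prefix = f"/apis/{group}/{version}" if group else f"/api/{version}"
    parts = [prefix]
    if namespace:
        parts.append(f"namespaces/{namespace}")
    parts.append(resource)
    if name:
        parts.append(name)
    return "/".join(parts)

# Listing Pods in the "default" namespace:
print(resource_path("v1", "pods", namespace="default"))
# → /api/v1/namespaces/default/pods

# Reading a single Deployment:
print(resource_path("v1", "deployments", group="apps",
                    namespace="default", name="my-app"))
# → /apis/apps/v1/namespaces/default/deployments/my-app
```

A real client sends an authenticated HTTP request (for example, via `kubectl proxy`) to such a path on the API server.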
Each Kubernetes cluster publishes the specification of the APIs that the cluster serves.
There are two mechanisms that Kubernetes uses to publish these API specifications; both are useful
to enable automatic interoperability. For example, the `kubectl` tool fetches and caches the API
specification for enabling command-line completion and other features.
The two supported mechanisms are as follows:

- [The Discovery API](#discovery-api) provides information about the Kubernetes APIs:
  API names, resources, versions, and supported operations. This is a Kubernetes-specific
  term, as it is a separate API from the Kubernetes OpenAPI.
  It is intended to be a brief summary of the available resources and it does not
  detail specific schemas for the resources. For reference about resource schemas,
  please refer to the OpenAPI document.

- The [Kubernetes OpenAPI Document](#openapi-specification) provides (full)
  [OpenAPI v2.0 and 3.0 schemas](https://www.openapis.org/) for all Kubernetes API
  endpoints. OpenAPI v3 is the preferred method for accessing OpenAPI, as it provides
  a more comprehensive and accurate view of the API. It includes all the available
  API paths, as well as all resources consumed and produced for every operation
  on every endpoint. It also includes any extensibility components that a cluster supports.
  The data is a complete specification and is significantly larger than that from the
  Discovery API.
## Discovery API

Kubernetes publishes a list of all group versions and resources supported via
the Discovery API. This includes the following for each resource:

- Name
- Cluster or namespaced scope
- Endpoint URL and supported verbs
- Alternative names
- Group, version, kind

The API is available in both aggregated and unaggregated forms. Aggregated
discovery serves two endpoints, while unaggregated discovery serves a
separate endpoint for each group version.
### Aggregated discovery

{{< feature-state state="beta" for_k8s_version="v1.27" >}}

Kubernetes offers beta support for aggregated discovery, publishing
all resources supported by a cluster through two endpoints (`/api` and
`/apis`). Requesting these endpoints drastically reduces the number of
requests sent to fetch the discovery data from the cluster. You can
access the data by requesting the respective endpoints with an `Accept`
header indicating the aggregated discovery resource:
`Accept: application/json;v=v2beta1;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList`.

Without indicating the resource type using the `Accept` header, the default
response for the `/api` and `/apis` endpoints is an unaggregated discovery
document.

The [discovery document](https://github.com/kubernetes/kubernetes/blob/release-v{{< skew currentVersion >}}/api/discovery/aggregated_v2beta1.json)
for the built-in resources can be found in the Kubernetes GitHub repository.
This GitHub document can be used as a reference for the base set of available resources
if a Kubernetes cluster is not available to query.

The endpoint also supports ETag and protobuf encoding.
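As a sketch of a client for this endpoint (assuming a cluster reachable through `kubectl proxy` on its default port 8001; the helper names are illustrative), fetching and flattening the aggregated document might look like:

```python
import json
import urllib.request

# The Accept header that selects the aggregated discovery format.
ACCEPT = "application/json;v=v2beta1;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList"

def fetch_aggregated_discovery(base_url="http://localhost:8001"):
    """Fetch /apis as an APIGroupDiscoveryList (assumes `kubectl proxy` is running)."""
    req = urllib.request.Request(base_url + "/apis", headers={"Accept": ACCEPT})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def flatten_group_versions(doc):
    """Turn an APIGroupDiscoveryList into a list of 'group/version' strings.
    Items without a group name (the core group) yield just the version."""
    out = []
    for item in doc.get("items", []):
        group = item.get("metadata", {}).get("name", "")
        for v in item.get("versions", []):
            out.append(f"{group}/{v['version']}" if group else v["version"])
    return out
```

A single request to each of `/api` and `/apis` with this header replaces the per-group-version requests described below.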
### Unaggregated discovery

Without discovery aggregation, discovery is published in levels, with the root
endpoints publishing discovery information for downstream documents.

A list of all group versions supported by a cluster is published at
the `/api` and `/apis` endpoints. Example:

```
{
  "kind": "APIGroupList",
  "apiVersion": "v1",
  "groups": [
    {
      "name": "apiregistration.k8s.io",
      "versions": [
        {
          "groupVersion": "apiregistration.k8s.io/v1",
          "version": "v1"
        }
      ],
      "preferredVersion": {
        "groupVersion": "apiregistration.k8s.io/v1",
        "version": "v1"
      }
    },
    {
      "name": "apps",
      "versions": [
        {
          "groupVersion": "apps/v1",
          "version": "v1"
        }
      ],
      "preferredVersion": {
        "groupVersion": "apps/v1",
        "version": "v1"
      }
    },
    ...
  ]
}
```

Additional requests are needed to obtain the discovery document for each group version at
`/apis/<group>/<version>` (for example:
`/apis/rbac.authorization.k8s.io/v1alpha1`), which advertises the list of
resources served under a particular group version. These endpoints are used by
kubectl to fetch the list of resources supported by a cluster.
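That fan-out can be sketched as follows: given an `APIGroupList` like the example above, derive the per-group-version discovery URLs that a client would fetch next (the helper name is illustrative):

```python
def discovery_urls(api_group_list):
    """Given an APIGroupList (as served at /apis), return the URL of each
    group-version discovery document a client would fetch next."""
    urls = []
    for group in api_group_list.get("groups", []):
        for version in group.get("versions", []):
            urls.append("/apis/" + version["groupVersion"])
    return urls

# A trimmed APIGroupList, matching the example shown earlier.
group_list = {
    "kind": "APIGroupList",
    "apiVersion": "v1",
    "groups": [
        {"name": "apps",
         "versions": [{"groupVersion": "apps/v1", "version": "v1"}],
         "preferredVersion": {"groupVersion": "apps/v1", "version": "v1"}},
    ],
}
print(discovery_urls(group_list))  # → ['/apis/apps/v1']
```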
<!-- body -->

## OpenAPI interface definition

For details about the OpenAPI specifications, see the [OpenAPI documentation](https://www.openapis.org/).

Kubernetes serves both OpenAPI v2.0 and OpenAPI v3.0. OpenAPI v3 is the
preferred method of accessing the OpenAPI because it offers a more comprehensive
(lossless) representation of Kubernetes resources. Due to limitations of OpenAPI
version 2, certain fields are dropped from the published OpenAPI, including but not
limited to `default`, `nullable`, and `oneOf`.

### OpenAPI V2

The Kubernetes API server serves an aggregated OpenAPI v2 spec via the
@ -74,11 +192,6 @@ request headers as follows:
</tbody>
</table>

### OpenAPI V3
@ -149,7 +262,20 @@ Refer to the table below for accepted request headers.
</tbody>
</table>

A Golang implementation to fetch the OpenAPI V3 is provided in the package
[`k8s.io/client-go/openapi3`](https://pkg.go.dev/k8s.io/client-go/openapi3).

Kubernetes {{< skew currentVersion >}} publishes
OpenAPI v2.0 and v3.0; there are no plans to support 3.1 in the near future.
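For orientation, the OpenAPI v3 root endpoint (`/openapi/v3`) serves an index mapping each group-version to a hash-stamped URL; a small sketch of resolving that index (the index literal and hashes here are illustrative):

```python
def openapi_v3_url(index, group_version):
    """Given the /openapi/v3 index document, return the server-relative URL
    of the OpenAPI v3 spec for one group-version (e.g. 'apis/apps/v1')."""
    entry = index.get("paths", {}).get(group_version)
    return entry["serverRelativeURL"] if entry else None

# Trimmed example of an /openapi/v3 index (hash values are fabricated).
index = {
    "paths": {
        "api/v1": {"serverRelativeURL": "/openapi/v3/api/v1?hash=abc123"},
        "apis/apps/v1": {"serverRelativeURL": "/openapi/v3/apis/apps/v1?hash=def456"},
    }
}
print(openapi_v3_url(index, "apis/apps/v1"))  # → /openapi/v3/apis/apps/v1?hash=def456
```

The hash query parameter enables long-lived client-side caching of each per-group spec.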
### Protobuf serialization

Kubernetes implements an alternative Protobuf based serialization format that
is primarily intended for intra-cluster communication. For more information
about this format, see the [Kubernetes Protobuf serialization](https://git.k8s.io/design-proposals-archive/api-machinery/protobuf.md)
design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go
packages that define the API objects.

## Persistence
@ -238,8 +364,6 @@ ways that require deleting all existing alpha objects prior to upgrade.
Refer to [API versions reference](/docs/reference/using-api/#api-versioning)
for more details on the API version level definitions.

## API Extension

The Kubernetes API can be extended in one of two ways:
@ -116,17 +116,20 @@ The copy is called a "fork". | The copy is called a "fork."

## Inline code formatting

### Use code style for inline code, commands {#code-style-inline-code}

For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``). However, API kinds such as StatefulSet
or ConfigMap are written verbatim (no backticks); this allows using possessive
apostrophes.

{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a Pod. | The "kubectl run" command creates a Pod.
The kubelet on each node acquires a Lease… | The kubelet on each node acquires a `Lease`…
A PersistentVolume represents durable storage… | A `PersistentVolume` represents durable storage…
The CustomResourceDefinition's `.spec.group` field… | The `CustomResourceDefinition.spec.group` field…
For declarative management, use `kubectl apply`. | For declarative management, use "kubectl apply".
Enclose code samples with triple backticks. (\`\`\`)| Enclose code samples with any other syntax.
Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**.
@ -191,37 +194,60 @@ Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.1
Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
{{< /table >}}

However, consider quoting values where there is a risk that readers might confuse the value
with an API kind.
## Referring to Kubernetes API resources

This section talks about how we reference API resources in the documentation.

### Clarification about "resource"

Kubernetes uses the word _resource_ to refer to API resources. For example,
the URL path `/apis/apps/v1/namespaces/default/deployments/my-app` represents a
Deployment named "my-app" in the "default"
{{< glossary_tooltip text="namespace" term_id="namespace" >}}. In HTTP jargon,
{{< glossary_tooltip text="namespace" term_id="namespace" >}} is a resource -
the same way that all web URLs identify a resource.

Kubernetes documentation also uses "resource" to talk about CPU and memory
requests and limits. It's very often a good idea to refer to API resources
as "API resources"; that helps to avoid confusion with CPU and memory resources,
or with other kinds of resource.

If you are using the lowercase plural form of a resource name, such as
`deployments` or `configmaps`, provide extra written context to help readers
understand what you mean. If you are using the term in a context where the
UpperCamelCase name could work too, and there is a risk of ambiguity,
consider using the API kind in UpperCamelCase.
### When to use Kubernetes API terminologies

The different Kubernetes API terminologies are:

- _API kinds_: the name used in the API URL (such as `pods`, `namespaces`).
  API kinds are sometimes also called _resource types_.
- _API resource_: a single instance of an API kind (such as `pod`, `secret`).
- _Object_: a resource that serves as a "record of intent". An object is a desired
  state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
  All objects in the Kubernetes API are also resources.

For clarity, you can add "resource" or "object" when referring to an API resource in Kubernetes
documentation.
An example: write "a Secret object" instead of "a Secret".
If it is clear just from the capitalization, you don't need to add the extra word.

Consider rephrasing when that change helps avoid misunderstandings. A common situation is
when you want to start a sentence with an API kind, such as "Secret"; because English
and other languages capitalize at the start of sentences, readers cannot tell whether you
mean the API kind or the general concept. Rewording can help.
### API resource names

Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case),
also known as PascalCase. Do not write API kinds with code formatting.

For inline code in an HTML document, use the `<code>` tag. In a Markdown document, use the backtick (`` ` ``).

Don't split an API object name into separate words. For example, use PodTemplateList, not Pod Template List.

For more information about PascalCase and code formatting, please review the related guidance on
[Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects)
@ -237,7 +263,7 @@ guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/
{{< table caption = "Do and Don't - Don't include the command prompt" >}}
Do | Don't
:--| :-----
`kubectl get pods` | `$ kubectl get pods`
{{< /table >}}

### Separate commands from output
@ -55,10 +55,12 @@ Example CEL expressions:
| `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration |
| `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' |
| `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |
| `type(self) == string ? self == '99%' : self == 42` | Validate an int-or-string field for both the int and string cases |
| `self.metadata.name == 'singleton'` | Validate that an object's name matches a specific value (making it a singleton) |
| `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint |
| `self.names.size() == self.details.size() && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
| `self.details.all(key, key.matches('^[a-zA-Z]*$'))` | Validate the keys of the 'details' map |
| `self.details.all(key, self.details[key].matches('^[a-zA-Z]*$'))` | Validate the values of the 'details' map |
{{< /table >}}
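As a sketch of where such rules live (the field names `spec` and `health` are illustrative, not prescribed), a rule like the 'health' example above is attached to a CustomResourceDefinition schema under `x-kubernetes-validations`:

```yaml
# Hypothetical CRD schema fragment: the custom resource's spec has a
# 'health' string field, validated with the CEL rule from the table above.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        - rule: "self.health.startsWith('ok')"
          message: "health must start with 'ok'"
      properties:
        health:
          type: string
```

Within each rule, `self` is bound to the value of the schema node the rule is attached to.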
## CEL options, language features, and libraries
@ -133,7 +133,7 @@ description: |-
and look for the <code>Image</code> field:</p>
<p><code><b>kubectl describe pods</b></code></p>
<p>To update the image of the application to version 2, use the <code>set image</code> subcommand, followed by the deployment name and the new image version:</p>
<p><code><b>kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=docker.io/jocatalin/kubernetes-bootcamp:v2</b></code></p>
<p>The command notified the Deployment to use a different image for your app and initiated a rolling update. Check the status of the new Pods, and view the old one terminating with the <code>get pods</code> subcommand:</p>
<p><code><b>kubectl get pods</b></code></p>
</div>
@ -11,6 +11,8 @@ spec:
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas > 50"
      messageExpression: "'Deployment spec.replicas set to ' + string(object.spec.replicas)"
  auditAnnotations:
    - key: "high-replica-count"
      valueExpression: "'Deployment spec.replicas set to ' + string(object.spec.replicas)"
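For context, the fragment above belongs inside a ValidatingAdmissionPolicy; a complete manifest might look like the following sketch (the policy name, `failurePolicy`, and `apiVersion` are illustrative and may differ by cluster version):

```yaml
# Hypothetical complete policy wrapping the fragment above: it records an
# audit annotation whenever a Deployment is created or updated with more
# than 50 replicas.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-policy.example.com
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas > 50"
      messageExpression: "'Deployment spec.replicas set to ' + string(object.spec.replicas)"
  auditAnnotations:
    - key: "high-replica-count"
      valueExpression: "'Deployment spec.replicas set to ' + string(object.spec.replicas)"
```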
@ -0,0 +1,87 @@
---
title: Ingress Controllers
description: >-
  In order for an [Ingress](/docs/concepts/services-networking/ingress/) to work in your cluster,
  there must be an _ingress controller_ running.
  You need to select at least one ingress controller and make sure it is set up in your cluster.
  This page lists common ingress controllers that you can deploy.
content_type: concept
weight: 50
---

<!-- overview -->
For the Ingress resource to work, the cluster must have an ingress controller running.

Unlike other types of controllers which run as part of the `kube-controller-manager` binary, ingress controllers are not started automatically with a cluster. Use this page to choose the ingress controller implementation that best fits your cluster.

Kubernetes as a project supports and maintains the [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme), and
[nginx](https://git.k8s.io/ingress-nginx/README.md#readme) ingress controllers.

<!-- body -->

## Additional controllers

{{% thirdparty-content %}}

* [AKS Application Gateway Ingress Controller](https://docs.microsoft.com/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Alibaba Cloud MSE Ingress](https://www.alibabacloud.com/help/en/mse/user-guide/overview-of-mse-ingress-gateways) is an ingress controller that configures the [Alibaba Cloud Native Gateway](https://www.alibabacloud.com/help/en/mse/product-overview/cloud-native-gateway-overview?spm=a2c63.p38356.0.0.20563003HJK9is), which is a commercial version of [Higress](https://github.com/alibaba/higress).
* [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) is an [Apache APISIX](https://github.com/apache/apisix)-based ingress controller.
* [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) provides L4-L7 load balancing using [VMware NSX Advanced Load Balancer](https://avinetworks.com/).
* [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe) is a [BFE](https://www.bfe-networks.net)-based ingress controller.
* [Cilium Ingress Controller](https://docs.cilium.io/en/stable/network/servicemesh/ingress/) is an ingress controller powered by [Cilium](https://cilium.io/).
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/)-based ingress controller.
* [Emissary-Ingress](https://www.getambassador.io/products/api-gateway) is an [Envoy](https://www.envoyproxy.io)-based API gateway and ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io)-based API gateway that can run as an ingress controller.
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md) is an [Easegress](https://megaease.com/easegress/)-based API gateway that can run as an ingress controller.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
  lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [FortiADC Ingress Controller](https://docs.fortinet.com/document/fortiadc/7.0.0/fortiadc-ingress-controller/742835/fortiadc-ingress-controller-overview) supports the Kubernetes Ingress resource and lets you manage FortiADC objects from Kubernetes.
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
  which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
  [HAProxy](https://www.haproxy.org/#desc).
* [Higress](https://github.com/alibaba/higress) is an [Envoy](https://www.envoyproxy.io)-based API gateway that can run as an ingress controller.
* The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme)
  is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc).
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
  is an [Istio](https://istio.io/)-based ingress controller.
* The [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme)
  is an ingress controller driving [Kong Gateway](https://konghq.com/kong/).
* [Kusk Gateway](https://kusk.kubeshop.io/) is an OpenAPI-driven ingress controller based on [Envoy](https://www.envoyproxy.io).
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
  works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) web server (as a proxy).
* The [ngrok Kubernetes Ingress Controller](https://github.com/ngrok/kubernetes-ingress-controller) is an open-source controller for adding secure public access to your K8s services using the [ngrok platform](https://ngrok.com).
* The [OCI Native Ingress Controller](https://github.com/oracle/oci-native-ingress-controller#readme) is an ingress controller for Oracle Cloud Infrastructure which lets you manage the [OCI load balancer](https://docs.oracle.com/en-us/iaas/Content/Balance/home.htm).
* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) is an HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
* The [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with custom resources to bring API management capabilities to Ingress. Tyk Operator works with the open-source Tyk Gateway and the Tyk Cloud control plane.
* [Voyager](https://voyagermesh.com) is an ingress controller for
  [HAProxy](https://www.haproxy.org/#desc).
* [Wallarm Ingress Controller](https://www.wallarm.com/solutions/waf-for-kubernetes) is an ingress controller that provides WAAP (WAF) and API Security capabilities.

## Using multiple Ingress controllers

You can deploy any number of ingress controllers using [ingress class](/docs/concepts/services-networking/ingress/#ingress-class)
within a cluster. Note the `.metadata.name` of your ingress class resource. When you create an Ingress, you need that name to specify the `ingressClassName` field on your Ingress object (refer to the [IngressSpec v1 reference](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec)). `ingressClassName` replaces the older [annotation method](/docs/concepts/services-networking/ingress/#deprecated-annotation).

If you do not specify an IngressClass for an Ingress, and your cluster has exactly one IngressClass marked as default, Kubernetes [applies](/docs/concepts/services-networking/ingress/#default-ingress-class) the cluster's default IngressClass to the Ingress.
You mark an IngressClass as default by setting the [`ingressclass.kubernetes.io/is-default-class` annotation](/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class) on that IngressClass, with the string value `"true"`.
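As a minimal sketch of selecting a controller this way (the names `example-class`, `example-ingress`, and `example-service` are illustrative), an Ingress references its IngressClass by name:

```yaml
# Hypothetical Ingress selecting the IngressClass named "example-class";
# that name must match the .metadata.name of an IngressClass in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: example-class
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```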

Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers operate slightly differently.

{{< note >}}
Make sure you review your ingress controller's documentation to understand the caveats of choosing it.
{{< /note >}}

## {{% heading "whatsnext" %}}

* Learn more about [Ingress](/docs/concepts/services-networking/ingress/).
* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube).
@ -1022,7 +1022,7 @@ To turn off the `vsphereVolume` plugin so that it is not loaded by the

## Using subPath {#using-subpath}

Sometimes, it is useful to share one volume for multiple uses in a single Pod.
The `volumeMounts[*].subPath` property specifies a sub-path inside the referenced volume instead of its root.

The following example shows how to configure a Pod with a LAMP stack (Linux Apache MySQL PHP) using a single, shared volume. This sample `subPath` configuration is not recommended for production use.
@ -1198,7 +1198,7 @@ For more details, see the [FlexVolume](https://github.com/kubernetes/community/b

Mount propagation allows sharing volumes mounted by a container to other containers in the same Pod, or even to other Pods on the same node.

Mount propagation of a volume is controlled by the `mountPropagation` field in `containers[*].volumeMounts`. Its values are:

- `None` - This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the container will be visible on the host. This is the default mode.
@ -808,7 +808,7 @@ The `CSIMigration` feature for Portworx has been added but is disabled by default

## Using subPath {#using-subpath}

Sometimes, it is useful to share one volume for multiple uses in a single pod. The `volumeMounts[*].subPath` property specifies a sub-path inside the referenced volume instead of its root.

The following example shows how to configure a Pod with a LAMP stack (Linux, Apache, MySQL, and PHP) using a single, shared volume. This sample `subPath` configuration is not recommended for production use.
@ -972,7 +972,7 @@ FlexVolume driver maintainers should implement a CSI driver and help to

Mount propagation allows sharing volumes mounted by a container to other containers in the same pod, or even to other pods on the same node.

Mount propagation of a volume is controlled by the `mountPropagation` field in `containers[*].volumeMounts`. Its values are:

* `None` - This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the container will be visible on the host. This is the default mode.
@ -0,0 +1,270 @@
---
layout: blog
title: "Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging"
slug: contextual-logging-in-kubernetes-1-29
date: 2023-12-20T09:30:00-08:00
canonicalUrl: https://www.kubernetes.dev/blog/2023/12/20/contextual-logging/
---

**Authors**: [Mengjiao Liu](https://github.com/mengjiao-liu/) (DaoCloud), [Patrick Ohly](https://github.com/pohly) (Intel)

**Translator**: [Mengjiao Liu](https://github.com/mengjiao-liu/) (DaoCloud)
On behalf of the [Structured Logging Working Group](https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md)
and [SIG Instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme),
we are pleased to announce that the contextual logging feature
introduced in Kubernetes v1.24 has now been successfully migrated to
two components (kube-scheduler and kube-controller-manager)
as well as some directories. This feature aims to provide more useful logs
for better troubleshooting of Kubernetes and to empower developers to enhance Kubernetes.
## What is contextual logging?

[Contextual logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging)
is based on the [go-logr](https://github.com/go-logr/logr#a-minimal-logging-api-for-go) API.
The key idea is that libraries are passed a logger instance by their caller
and use that for logging instead of accessing a global logger.
The binary decides the logging implementation, not the libraries.
The go-logr API is designed around structured logging and supports attaching
additional information to a logger.
This enables additional use cases:

- The caller can attach additional information to a logger:
  - [WithName](https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName) adds a "logger" key, with the names concatenated by a dot as the value
  - [WithValues](https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues) adds key/value pairs

  When passing this extended logger into a function, and the function uses it
  instead of the global logger, the additional information is then included
  in all log entries, without having to modify the code that generates the log entries.
  This is useful in highly parallel applications, where it can become hard to identify
  all log entries for a certain operation because the output from different operations gets interleaved.

- When running unit tests, log output can be associated with the current test.
  Then, when a test fails, only the log output of the failed test gets shown by `go test`.
  That output can also be more verbose by default, because it will not get shown for successful tests.
  Tests can be run in parallel without interleaving their output.
One of the design decisions for contextual logging was to allow attaching a logger as a value to a `context.Context`.
Since the logger encapsulates all aspects of the intended logging for the call,
it is *part* of the context, and not just *using* it. A practical advantage is that many APIs
already have a `ctx` parameter or can add one. This provides additional advantages, like being able to
get rid of `context.TODO()` calls inside the functions.
## How to use it

The contextual logging feature is alpha starting from Kubernetes v1.24,
so it requires the `ContextualLogging` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
If you want to test the feature while it is alpha, you need to enable this feature gate
on the `kube-controller-manager` and the `kube-scheduler`.
For the `kube-scheduler`, there is one thing to note: in addition to enabling
the `ContextualLogging` feature gate, instrumentation also depends on log verbosity.
To avoid slowing down the scheduler with the logging instrumentation for contextual logging added for 1.29,
it is important to choose carefully when to add additional information:

- At `-v3` or lower, only `WithValues("pod")` is used, once per scheduling cycle.
  This has the intended effect that all log messages for the cycle include the pod information.
  Once contextual logging is GA, the "pod" key/value pairs can be removed from all log calls.
- At `-v4` or higher, richer log entries get produced, where `WithValues` is also used for the node (when applicable)
  and `WithName` is used for the current operation and plugin.
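The verbosity gating described above can be illustrated with a minimal sketch. The `vLogger` type below mimics go-logr's `V(n)` threshold check in stdlib-only form; it is not the real API, and returning the rendered entry as a string is purely for illustration:

```go
package main

import "fmt"

// vLogger mimics go-logr's verbosity gating: V(n) raises the level of a
// message, and the message is emitted only if the level is at or below
// the configured threshold (the -v command-line setting).
type vLogger struct{ threshold, level int }

func (l vLogger) V(n int) vLogger { return vLogger{l.threshold, l.level + n} }

// Info returns the rendered entry, or "" when the message is suppressed.
func (l vLogger) Info(msg string) string {
	if l.level > l.threshold {
		return ""
	}
	return "I " + msg
}

func main() {
	log := vLogger{threshold: 3} // running at -v3, the production setting discussed above
	fmt.Println(log.Info("per-cycle pod info: always emitted"))
	fmt.Println(log.V(4).Info("per-plugin detail: suppressed at -v3") == "")
}
```

Attaching the cheap, per-cycle information unconditionally while guarding the per-plugin `WithName`/`WithValues` calls behind a `V(4)`-style check is what keeps the production-verbosity overhead negligible.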
Here is an example that demonstrates the effect:

> I1113 08:43:37.029524 87144 default_binder.go:53] "Attempting to bind pod to node" **logger="Bind.DefaultBinder"** **pod**="kube-system/coredns-69cbfb9798-ms4pq" **node**="127.0.0.1"
The immediate benefit is that the operation and plugin name are visible in `logger`.
`pod` and `node` are already logged as parameters in individual log calls in `kube-scheduler` code.
Once contextual logging is supported by more packages outside of `kube-scheduler`,
they will also be visible there (for example, client-go). Once it is GA,
log calls can be simplified to avoid repeating those values.
In `kube-controller-manager`, `WithName` is used to add the user-visible controller name to log output,
for example:

> I1113 08:43:29.284360 87141 graph_builder.go:285] "garbage controller monitor not synced: no monitors" **logger="garbage-collector-controller"**

The `logger="garbage-collector-controller"` was added by the `kube-controller-manager` core
when instantiating that controller and appears in all of its log entries - at least as long as the code
that it calls supports contextual logging. Further work is needed to convert shared packages like client-go.
## Performance impact

Supporting contextual logging in a package, i.e. accepting a logger from a caller, is cheap.
No performance impact was observed for the `kube-scheduler`. As noted above,
adding `WithName` and `WithValues` needs to be done more carefully.

In Kubernetes 1.29, enabling contextual logging at production verbosity (`-v3` or lower)
caused no measurable slowdown for the `kube-scheduler`, and none is expected for the `kube-controller-manager` either.
At debug levels, a 28% slowdown for some test cases is still reasonable, given that the resulting logs make debugging easier.
For details, see the [discussion around promoting the feature to beta](https://github.com/kubernetes/enhancements/pull/4219#issuecomment-1807811995).
## Impact on downstream users

Log output is not part of the Kubernetes API and changes regularly in each release,
whether it is because developers work on the code or because of the ongoing conversion
to structured and contextual logging.

If downstream users have dependencies on specific logs,
they need to be aware of how this change affects them.
## Further reading

- Read the [Contextual Logging in Kubernetes 1.24](https://www.kubernetes.dev/blog/2022/05/25/contextual-logging/) article.
- Read [KEP-3077: contextual logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging).
## Get involved

If you're interested in getting involved, we always welcome new contributors to join us.
Contextual logging provides a fantastic opportunity for you to contribute to Kubernetes development and make a meaningful impact.
By joining the [Structured Logging WG](https://github.com/kubernetes/community/tree/master/wg-structured-logging),
you can actively participate in the development of Kubernetes and make your first contribution.
It's a great way to learn and engage with the community while gaining valuable experience.

We encourage you to explore the repository and familiarize yourself with the ongoing discussions and projects.
It's a collaborative environment where you can exchange ideas, ask questions, and work together with other contributors.

If you have any questions or need guidance, don't hesitate to reach out to us
on our [public Slack channel](https://kubernetes.slack.com/messages/wg-structured-logging).
If you're not already part of that Slack workspace, you can visit [https://slack.k8s.io/](https://slack.k8s.io/)
for an invitation.

We would like to express our gratitude to all the contributors who provided excellent reviews,
shared valuable insights, and assisted in the implementation of this feature (in alphabetical order):
- Aldo Culquicondor ([alculquicondor](https://github.com/alculquicondor))
- Andy Goldstein ([ncdc](https://github.com/ncdc))
- Feruzjon Muyassarov ([fmuyassarov](https://github.com/fmuyassarov))
- Freddie ([freddie400](https://github.com/freddie400))
- JUN YANG ([yangjunmyfm192085](https://github.com/yangjunmyfm192085))
- Kante Yin ([kerthcet](https://github.com/kerthcet))
- Kiki ([carlory](https://github.com/carlory))
- Lucas Severo Alves ([knelasevero](https://github.com/knelasevero))
- Maciej Szulik ([soltysh](https://github.com/soltysh))
- Mengjiao Liu ([mengjiao-liu](https://github.com/mengjiao-liu))
- Naman Lakhwani ([Namanl2001](https://github.com/Namanl2001))
- Oksana Baranova ([oxxenix](https://github.com/oxxenix))
- Patrick Ohly ([pohly](https://github.com/pohly))
- songxiao-wang87 ([songxiao-wang87](https://github.com/songxiao-wang87))
- Tim Allclair ([tallclair](https://github.com/tallclair))
- ZhangYu ([Octopusjust](https://github.com/Octopusjust))
- Ziqi Zhao ([fatsheep9146](https://github.com/fatsheep9146))
- Zac ([249043822](https://github.com/249043822))
@ -13,7 +13,7 @@ weight: 220

<!-- overview -->

{{< feature-state feature_gate_name="UnknownVersionInteroperabilityProxy" >}}

<!--
Kubernetes {{< skew currentVersion >}} includes an alpha feature that lets an
@ -342,7 +342,7 @@ For nodes there are two forms of heartbeats:

Heartbeats, sent by Kubernetes nodes, help your cluster determine the
availability of each node, and take action when failures are detected.

For nodes there are two forms of heartbeats:

<!--
* Updates to the [`.status`](/docs/reference/node/node-status/) of a Node.
@ -442,7 +442,7 @@ the same time:

- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
  (default 0.01) per second.
-->
- If the fraction of unhealthy nodes exceeds `--unhealthy-zone-threshold`
  (default 0.55), the eviction rate is reduced.
- If the cluster is small (i.e. has less than or equal to
  `--large-cluster-size-threshold` nodes - default 50), evictions are stopped.
@ -534,7 +534,7 @@ If you want to explicitly reserve resources for non-Pod processes, see
-->
## Node topology {#node-topology}

{{< feature-state feature_gate_name="TopologyManager" >}}

<!--
If you have enabled the `TopologyManager`
@ -552,7 +552,7 @@ for more information.
-->
## Graceful node shutdown {#graceful-node-shutdown}

{{< feature-state feature_gate_name="GracefulNodeShutdown" >}}

<!--
The kubelet attempts to detect node system shutdown and terminates pods running on the node.
@ -707,7 +707,7 @@ Message: Pod was terminated in response to imminent node shutdown.
-->
### Graceful node shutdown based on Pod priority {#pod-priority-graceful-node-shutdown}

{{< feature-state feature_gate_name="GracefulNodeShutdownBasedOnPodPriority" >}}

<!--
To provide more flexibility during graceful node shutdown around the ordering
@ -868,7 +868,7 @@ the kubelet subsystem emits `graceful_shutdown_start_time_seconds` and
-->
## Non-graceful node shutdown handling {#non-graceful-node-shutdown}

{{< feature-state feature_gate_name="NodeOutOfServiceVolumeDetach" >}}

<!--
A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
@ -955,7 +955,7 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
-->
## Swap memory management {#swap-memory}

{{< feature-state feature_gate_name="NodeSwap" >}}

<!--
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
@ -979,7 +979,7 @@ of Secret objects that were written to tmpfs now could be swapped to disk.
A user can also optionally configure `memorySwap.swapBehavior` in order to
specify how a node will use swap memory. For example,
-->
A user can also optionally configure `memorySwap.swapBehavior` in order to
specify how a node will use swap memory. For example:

```yaml
memorySwap:
@ -1051,7 +1051,7 @@ see the blog-post about [Kubernetes 1.28: NodeSwap graduates to Beta1](/blog/202
[KEP-2400](https://github.com/kubernetes/enhancements/issues/4128) and its
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
-->
Swap is supported only with **cgroup v2**; cgroup v1 is not supported.

For more information, help with testing, and to provide feedback, see the blog post about
[Kubernetes 1.28: NodeSwap graduates to Beta1](/zh-cn/blog/2023/08/24/swap-linux-beta/),
@ -29,7 +29,7 @@ the Pod deploys to, for example, to ensure that a Pod ends up on a node with an
or to co-locate Pods from two different services that communicate a lot into the same availability zone.
-->
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
**restricted** to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
or to prefer to run on particular nodes. There are several ways to do this, and the
recommended approaches all use
[label selectors](/zh-cn/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Often, you do not need to set any such constraints, because the scheduler will
automatically do a reasonable placement (for example, spreading your Pods across nodes,
@ -278,7 +278,7 @@ to repel Pods from specific nodes.
If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied
for the Pod to be scheduled onto a node.
-->
If you specify both `nodeSelector` and `nodeAffinity`, **both** must be satisfied
for the Pod to be scheduled onto a candidate node.

<!--
@ -568,7 +568,7 @@ uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.

<!--
The affinity rule specifies that the scheduler is allowed to place the example Pod
on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
where other Pods have been labeled with `security=S1`.
For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
@ -576,8 +576,7 @@ assign the Pod to any node within Zone V, as long as there is at least one Pod w
Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
-->
The affinity rule specifies that the scheduler is allowed to place the example Pod
on a node only if that node belongs to a specific
[zone](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)
where other Pods have been labeled with `security=S1`.
For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
consisting of nodes labeled with `topology.kubernetes.io/zone=V`, then as long as
there is at least one Pod within Zone V already labeled with `security=S1`,
@ -586,7 +585,7 @@ labels in Zone V, the scheduler will not assign the example Pod to any node in t

<!--
The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
where other Pods have been labeled with `security=S2`.
For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
@ -594,8 +593,7 @@ assigning the Pod to any node within Zone R, as long as there is at least one Po
Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
scheduling into Zone R if there are no Pods with `security=S2` labels.
-->
The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
on a node if that node belongs to a specific
[zone](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)
where other Pods have been labeled with `security=S2`.
For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
consisting of nodes labeled with `topology.kubernetes.io/zone=R`, then as long as
there is at least one Pod within Zone R already labeled with `security=S2`,
@ -676,6 +674,178 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
Note that an empty `namespaceSelector` (`{}`) matches all namespaces, while a null or empty
`namespaces` list and a null `namespaceSelector` means "the namespace of the Pod where the rule is defined".
#### matchLabelKeys

{{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}

{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
The `matchLabelKeys` field is an alpha-level field and is disabled by default in
Kubernetes {{< skew currentVersion >}}.
When you want to use it, you have to enable it via the
`MatchLabelKeysInPodAffinity` [feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/).
{{< /note >}}
Kubernetes includes an optional `matchLabelKeys` field for Pod affinity
or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels
when satisfying the Pod (anti)affinity.

The keys are used to look up values from the pod labels; those key-value labels are combined
(using `AND`) with the match restrictions defined using the `labelSelector` field. The combined
filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
A common use case is to use `matchLabelKeys` with `pod-template-hash` (set on Pods
managed as part of a Deployment, where the value is unique for each revision).
Using `pod-template-hash` in `matchLabelKeys` allows you to target the Pods that belong
to the same revision as the incoming Pod, so that a rolling upgrade won't break affinity.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-server
...
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - database
            topologyKey: topology.kubernetes.io/zone
            # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
            # If you update the Deployment, the replacement Pods follow their own affinity rules
            # (if there are any defined in the new Pod template)
            matchLabelKeys:
            - pod-template-hash
```
#### mismatchLabelKeys

{{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}

{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
The `mismatchLabelKeys` field is an alpha-level field and is disabled by default in
Kubernetes {{< skew currentVersion >}}.
When you want to use it, you have to enable it via the
`MatchLabelKeysInPodAffinity` [feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/).
{{< /note >}}
Kubernetes includes an optional `mismatchLabelKeys` field for Pod affinity
or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels
when satisfying the Pod (anti)affinity.

One example use case is to ensure Pods go to the topology domain (node, zone, etc.) where only
Pods from the same tenant or team are scheduled in. In other words, you want to avoid running
Pods from two different tenants on the same topology domain at the same time.
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    # Assume that all relevant Pods have a "tenant" label set
    tenant: tenant-a
...
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # ensure that pods associated with this tenant land on the correct node pool
      - matchLabelKeys:
        - tenant
        topologyKey: node-pool
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # ensure that pods associated with this tenant can't schedule to nodes used for another tenant
      - mismatchLabelKeys:
        - tenant # whatever the value of the "tenant" label for this Pod, prevent
                 # scheduling to nodes in any pool where any Pod from a different
                 # tenant is running.
        labelSelector:
          # We have to have the labelSelector which selects only Pods with the tenant label,
          # otherwise this Pod would hate Pods from daemonsets as well, for example,
          # which aren't supposed to have the tenant label.
          matchExpressions:
          - key: tenant
            operator: Exists
        topologyKey: node-pool
```
<!--
#### More practical use-cases
@ -807,7 +977,7 @@ where each web server is co-located with a cache, on three separate nodes.
The overall effect is that each cache instance is likely to be accessed by a single client, that
is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
-->
The overall effect is that each cache instance is likely to be accessed by a single client
that is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
@ -858,7 +1028,8 @@ Some of the limitations of using `nodeName` to select nodes are:

<!--
`nodeName` is intended for use by custom schedulers or advanced use cases where
you need to bypass any configured schedulers. Bypassing the schedulers might lead to
failed Pods if the assigned Nodes get oversubscribed. You can use [node affinity](#node-affinity) or the
[`nodeSelector` field](#nodeselector) to assign a Pod to a specific Node without bypassing the schedulers.
-->
`nodeName` is intended for use by custom schedulers or advanced use cases where
you need to bypass any configured schedulers. Bypassing the schedulers might lead to
failed Pods if the assigned Nodes get oversubscribed.
@ -370,7 +370,7 @@ When you set `setHostnameAsFQDN: true` in the Pod spec, the kubelet writes the P

<!--
In Linux, the hostname field of the kernel (the `nodename` field of `struct utsname`) is limited to 64 characters.

If a Pod enables this feature and its FQDN is longer than 64 characters, it will fail to start. The Pod will remain in `Pending` status (`ContainerCreating` as seen by `kubectl`) generating error events, such as Failed to construct FQDN from Pod hostname and cluster domain, FQDN `long-FQDN` is too long (64 characters is the max, 70 characters requested). One way of improving user experience for this scenario is to create an [admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) to control FQDN size when users create top level objects, for example, Deployment.
-->
In Linux, the hostname field of the kernel (the `nodename` field of `struct utsname`) is limited to 64 characters.
@ -381,7 +381,7 @@ Pod 会一直出于 `Pending` 状态(通过 `kubectl` 所看到的 `ContainerC
|
|||
`long-FQDN` is too long (64 characters is the max, 70 characters requested)."
|
||||
(无法基于 Pod 主机名和集群域名构造 FQDN,FQDN `long-FQDN` 过长,至多 64 个字符,请求字符数为 70)。
|
||||
对于这种场景而言,改善用户体验的一种方式是创建一个
|
||||
[准入 Webhook 控制器](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks),
|
||||
[准入 Webhook 控制器](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks),
|
||||
在用户创建顶层对象(如 Deployment)的时候控制 FQDN 的长度。
|
||||
{{< /note >}}
|
||||
|
||||
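下面是一个与上述说明对应的最小示意(名称均为假设,并非本文档原有示例):
保持 `hostname` 与 `subdomain` 简短,可以避免拼接出的 FQDN 超过 64 个字符的限制。

```yaml
# 示意:启用 setHostnameAsFQDN 的 Pod;hostname 与 subdomain 为假设值。
# FQDN 形如 <hostname>.<subdomain>.<namespace>.svc.<cluster-domain>,需不超过 64 个字符。
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo
spec:
  hostname: fqdn-demo                # 保持简短,避免 FQDN 超长
  subdomain: demo-subdomain          # 假设已存在同名的 Headless Service
  setHostnameAsFQDN: true
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
```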

@ -1100,7 +1100,7 @@ can define your own (provider specific) annotations on the Service that specify
-->
#### 混合协议类型的负载均衡器

{{< feature-state feature_gate_name="MixedProtocolLBService" >}}

<!--
By default, for LoadBalancer type of Services, when there is more than one port defined, all

@ -1197,7 +1197,7 @@ Unprefixed names are reserved for end-users.
-->
#### 指定负载均衡器状态的 IPMode {#load-balancer-ip-mode}

{{< feature-state feature_gate_name="LoadBalancerIPMode" >}}

<!--
Starting as Alpha in Kubernetes 1.29,

@ -197,12 +197,14 @@ When a default `StorageClass` exists in a cluster and a user creates a
`DefaultStorageClass` 准入控制器会自动向其中添加指向默认存储类的 `storageClassName` 字段。

<!--
Note that if you set the `storageclass.kubernetes.io/is-default-class`
annotation to true on more than one StorageClass in your cluster, and you then
create a `PersistentVolumeClaim` with no `storageClassName` set, Kubernetes
uses the most recently created default StorageClass.
-->
请注意,如果你在集群的多个 StorageClass 上将 `storageclass.kubernetes.io/is-default-class`
注解设置为 true,之后又创建了未指定 `storageClassName` 的 `PersistentVolumeClaim`,
Kubernetes 会使用最新创建的默认 StorageClass。
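下面是一个最小示意(名称与制备器均为假设,并非本文档原有示例),
展示如何通过该注解将某个 StorageClass 标记为默认存储类;集群中应只将一个 StorageClass 标记为默认:

```yaml
# 示意:将 StorageClass 标记为默认存储类;provisioner 为假设值。
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/external-provisioner   # 假设的制备器名称
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```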

<!--
## Topology Awareness

@ -3,7 +3,6 @@ title: 投射卷
content_type: concept
weight: 21 # 跟在持久卷之后
---

<!--
reviewers:
- marosset

@ -35,6 +34,7 @@ Currently, the following types of volume sources can be projected:
* [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi)
* [`configMap`](/docs/concepts/storage/volumes/#configmap)
* [`serviceAccountToken`](#serviceaccounttoken)
* [`clusterTrustBundle`](#clustertrustbundle)
-->
## 介绍 {#introduction}

@ -46,6 +46,7 @@ Currently, the following types of volume sources can be projected:
* [`downwardAPI`](/zh-cn/docs/concepts/storage/volumes/#downwardapi)
* [`configMap`](/zh-cn/docs/concepts/storage/volumes/#configmap)
* [`serviceAccountToken`](#serviceaccounttoken)
* [`clusterTrustBundle`](#clustertrustbundle)

<!--
All sources are required to be in the same namespace as the Pod. For more details,
@ -133,6 +134,66 @@ volume mount will not receive updates for those volume sources.
形式使用投射卷源的容器无法收到对应卷源的更新。
{{< /note >}}

<!--
## clusterTrustBundle projected volumes {#clustertrustbundle}
-->
## clusterTrustBundle 投射卷 {#clustertrustbundle}

{{< feature-state for_k8s_version="v1.29" state="alpha" >}}

{{< note >}}
<!--
To use this feature in Kubernetes {{< skew currentVersion >}}, you must enable support for ClusterTrustBundle objects with the `ClusterTrustBundle` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and `--runtime-config=certificates.k8s.io/v1alpha1/clustertrustbundles=true` kube-apiserver flag, then enable the `ClusterTrustBundleProjection` feature gate.
-->
要在 Kubernetes {{< skew currentVersion >}} 中使用此特性,你必须通过 `ClusterTrustBundle`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)和
`--runtime-config=certificates.k8s.io/v1alpha1/clustertrustbundles=true` kube-apiserver
标志启用对 ClusterTrustBundle 对象的支持,然后才能启用 `ClusterTrustBundleProjection` 特性门控。
{{< /note >}}

<!--
The `clusterTrustBundle` projected volume source injects the contents of one or more [ClusterTrustBundle](/docs/reference/access-authn-authz/certificate-signing-requests#cluster-trust-bundles) objects as an automatically-updating file in the container filesystem.
-->
`clusterTrustBundle` 投射卷源将一个或多个
[ClusterTrustBundle](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests#cluster-trust-bundles)
对象的内容作为一个自动更新的文件注入到容器文件系统中。

<!--
ClusterTrustBundles can be selected either by [name](/docs/reference/access-authn-authz/certificate-signing-requests#ctb-signer-unlinked) or by [signer name](/docs/reference/access-authn-authz/certificate-signing-requests#ctb-signer-linked).
-->
ClusterTrustBundle 可以通过[名称](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests#ctb-signer-unlinked)
或[签名者名称](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests#ctb-signer-linked)被选中。

<!--
To select by name, use the `name` field to designate a single ClusterTrustBundle object.

To select by signer name, use the `signerName` field (and optionally the
`labelSelector` field) to designate a set of ClusterTrustBundle objects that use
the given signer name. If `labelSelector` is not present, then all
ClusterTrustBundles for that signer are selected.
-->
要按名称选择,可以使用 `name` 字段指定单个 ClusterTrustBundle 对象。

要按签名者名称选择,可以使用 `signerName` 字段(也可选用 `labelSelector` 字段)
指定一组使用给定签名者名称的 ClusterTrustBundle 对象。
如果 `labelSelector` 不存在,则针对该签名者的所有 ClusterTrustBundle 将被选中。

<!--
The kubelet deduplicates the certificates in the selected ClusterTrustBundle objects, normalizes the PEM representations (discarding comments and headers), reorders the certificates, and writes them into the file named by `path`. As the set of selected ClusterTrustBundles or their content changes, kubelet keeps the file up-to-date.
-->
kubelet 会对所选 ClusterTrustBundle 对象中的证书进行去重,规范化 PEM 表示(丢弃注释和头部),
重新排序证书,并将这些证书写入由 `path` 指定的文件中。
随着所选 ClusterTrustBundle 的集合或其内容发生变化,kubelet 会保持更新此文件。

<!--
By default, the kubelet will prevent the pod from starting if the named ClusterTrustBundle is not found, or if `signerName` / `labelSelector` do not match any ClusterTrustBundles. If this behavior is not what you want, then set the `optional` field to `true`, and the pod will start up with an empty file at `path`.
-->
默认情况下,如果找不到指定的 ClusterTrustBundle,或者 `signerName` / `labelSelector`
与所有 ClusterTrustBundle 都不匹配,kubelet 将阻止 Pod 启动。如果这不是你想要的行为,
可以将 `optional` 字段设置为 `true`,Pod 将使用 `path` 处的空白文件启动。

{{% code_sample file="pods/storage/projected-clustertrustbundle.yaml" %}}

<!--
## SecurityContext interactions
-->
@ -257,4 +318,3 @@ the Linux only `RunAsUser` option with Windows Pods.
Pod 会一直阻塞在 `ContainerCreating` 状态。因此,建议不要在 Windows
节点上使用仅针对 Linux 的 `RunAsUser` 选项。
{{< /note >}}

@ -27,8 +27,6 @@ with [volumes](/docs/concepts/storage/volumes/) and
<!-- body -->

<!--
## Introduction

A StorageClass provides a way for administrators to describe the "classes" of
storage they offer. Different classes might map to quality-of-service levels,
or to backup policies, or to arbitrary policies determined by the cluster

@ -36,20 +34,18 @@ administrators. Kubernetes itself is unopinionated about what classes
represent. This concept is sometimes called "profiles" in other storage
systems.
-->
## 介绍 {#introduction}

StorageClass 为管理员提供了描述存储"类"的方法。
不同的类可能会映射到不同的服务质量等级或备份策略,或是由集群管理员制定的任意策略。
Kubernetes 本身并不清楚各种类代表的什么。这个类的概念在其他存储系统中有时被称为"配置文件"。

<!--
## The StorageClass API

Each StorageClass contains the fields `provisioner`, `parameters`, and
`reclaimPolicy`, which are used when a PersistentVolume belonging to the
class needs to be dynamically provisioned.
-->
## StorageClass API {#the-storageclass-api}

每个 StorageClass 都包含 `provisioner`、`parameters` 和 `reclaimPolicy` 字段,
这些字段会在 StorageClass 需要动态制备 PersistentVolume 时使用到。

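作为上述字段的一个最小示意(`provisioner` 与 `parameters` 的取值均为假设,并非本文档原有示例):

```yaml
# 示意:包含 provisioner、parameters 与 reclaimPolicy 的 StorageClass。
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs   # 假设的外部制备器名称
parameters:
  server: nfs-server.example.com        # 传递给制备器的假设参数
  path: /share
reclaimPolicy: Retain
```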
@ -28,7 +28,7 @@ of operating system.
-->
在许多组织中,所运行的很大一部分服务和应用是 Windows 应用。
[Windows 容器](https://aka.ms/windowscontainers)提供了一种封装进程和包依赖项的方式,
从而简化了 DevOps 实践,令 Windows 应用同样遵从云原生模式。

对于同时使用 Windows 应用和 Linux 应用的组织而言,他们不必寻找不同的编排系统来管理其工作负载,
使其跨部署的运营效率得以大幅提升,而不必关心所用的操作系统。

@ -48,7 +48,7 @@ multiple operating systems.
While you can only run the {{< glossary_tooltip text="control plane" term_id="control-plane" >}} on Linux,
you can deploy worker nodes running either Windows or Linux.
-->
## Kubernetes 中的 Windows 节点 {#windows-nodes-in-k8s}

若要在 Kubernetes 中启用对 Windows 容器的编排,可以在现有的 Linux 集群中包含 Windows 节点。
在 Kubernetes 上调度 {{< glossary_tooltip text="Pod" term_id="pod" >}} 中的 Windows 容器与调度基于 Linux 的容器类似。

@ -60,13 +60,14 @@ you can deploy worker nodes running either Windows or Linux.
<!--
Windows {{< glossary_tooltip text="nodes" term_id="node" >}} are
[supported](#windows-os-version-support) provided that the operating system is
Windows Server 2019 or Windows Server 2022.

This document uses the term *Windows containers* to mean Windows containers with
process isolation. Kubernetes does not support running Windows containers with
[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container).
-->
支持 Windows {{< glossary_tooltip text="节点" term_id="node" >}}的前提是操作系统为
Windows Server 2019 或 Windows Server 2022。

本文使用术语 **Windows 容器**表示具有进程隔离能力的 Windows 容器。
Kubernetes 不支持使用
@ -85,7 +86,7 @@ including:
[HostProcess Containers](/docs/tasks/configure-pod-container/create-hostprocess-pod/) offer similar functionality.
* TerminationGracePeriod: requires containerD
-->
## 兼容性与局限性 {#limitations}

某些节点层面的功能特性仅在使用特定[容器运行时](#container-runtime)时才可用;
另外一些特性则在 Windows 节点上不可用,包括:

@ -109,7 +110,8 @@ functionality which are outlined in this section.
Windows 节点并不支持共享命名空间的所有功能特性。
有关更多详细信息,请参考 [API 兼容性](#api)。

有关 Kubernetes 测试时所使用的 Windows 版本的详细信息,请参考
[Windows 操作系统版本兼容性](#windows-os-version-support)。

从 API 和 kubectl 的角度来看,Windows 容器的行为与基于 Linux 的容器非常相似。
然而,在本节所概述的一些关键功能上,二者存在一些显著差异。

@ -120,7 +122,7 @@ Windows 节点并不支持共享命名空间的所有功能特性。
Key Kubernetes elements work the same way in Windows as they do in Linux. This
section refers to several key workload abstractions and how they map to Windows.
-->
### 与 Linux 比较 {#comparison-with-Linux-similarities}

Kubernetes 关键组件在 Windows 上的工作方式与在 Linux 上相同。
本节介绍几个关键的工作负载抽象及其如何映射到 Windows。

@ -140,6 +142,7 @@ Kubernetes 关键组件在 Windows 上的工作方式与在 Linux 上相同。
你不可以在同一个 Pod 中部署 Windows 和 Linux 容器。
Pod 中的所有容器都调度到同一 Node 上,每个 Node 代表一个特定的平台和体系结构。
Windows 容器支持以下 Pod 能力、属性和事件:

<!--
* Single or multiple containers per Pod with process isolation and volume sharing
* Pod `status` fields
@ -257,7 +260,7 @@ Pod、工作负载资源和 Service 是在 Kubernetes 上管理 Windows 工作

Some kubelet command line options behave differently on Windows, as described below:
-->
### kubelet 的命令行选项 {#kubelet-compatibility}

某些 kubelet 命令行选项在 Windows 上的行为不同,如下所述:

@ -296,7 +299,7 @@ and container runtime. Some workload properties were designed for Linux, and fai

At a high level, these OS concepts are different:
-->
### API 兼容性 {#api}

由于操作系统和容器运行时的缘故,Kubernetes API 在 Windows 上的工作方式存在细微差异。
某些工作负载属性是为 Linux 设计的,无法在 Windows 上运行。
@ -367,7 +370,7 @@ work between Windows and Linux:
node. They should be applied to all containers as a best practice if the operator
wants to avoid overprovisioning entirely.
-->
#### 容器规约的字段兼容性 {#compatibility-v1-pod-spec-containers}

以下列表记录了 Pod 容器规约在 Windows 和 Linux 之间的工作方式差异:

@ -437,7 +440,7 @@ The following list documents differences between how Pod specifications work bet
which are not implemented on Windows. Windows cannot share process namespaces or
the container's root filesystem. Only the network can be shared.
-->
#### Pod 规约的字段兼容性 {#compatibility-v1-pod}

以下列表记录了 Pod 规约在 Windows 和 Linux 之间的工作方式差异:

@ -446,7 +449,7 @@ The following list documents differences between how Pod specifications work bet
* `dnsPolicy` - Windows 不支持将 Pod `dnsPolicy` 设为 `ClusterFirstWithHostNet`,
  因为未提供主机网络。Pod 始终用容器网络运行。
* `podSecurityContext` [参见下文](#compatibility-v1-pod-spec-containers-securitycontext)
* `shareProcessNamespace` - 这是一个 Beta 版功能特性,依赖于 Windows 上未实现的 Linux 命名空间。
  Windows 无法共享进程命名空间或容器的根文件系统(root filesystem)。
  只能共享网络。
<!--

@ -471,7 +474,7 @@ The following list documents differences between how Pod specifications work bet
  最后使用正常的 Windows 关机行为终止所有进程。
  5 秒默认值实际上位于[容器内](https://github.com/moby/moby/issues/25982#issuecomment-426441183)的
  Windows 注册表中,因此在构建容器时可以覆盖这个值。
* `volumeDevices` - 这是一个 Beta 版功能特性,未在 Windows 上实现。
  Windows 无法将原始块设备挂接到 Pod。
* `volumes`
  * 如果你定义一个 `emptyDir` 卷,则你无法将卷源设为 `memory`。
@ -485,7 +488,7 @@ The following list documents differences between how Pod specifications work bet
The kubelet can now request that pods running on Windows nodes use the host's network namespace instead
of creating a new pod network namespace. To enable this functionality pass `--feature-gates=WindowsHostNetwork=true` to the kubelet.
-->
#### hostNetwork 的字段兼容性 {#compatibility-v1-pod-spec-containers-hostnetwork}

{{< feature-state for_k8s_version="v1.26" state="alpha" >}}

@ -505,9 +508,9 @@ This functionality requires a container runtime that supports this functionality
Only the `securityContext.runAsNonRoot` and `securityContext.windowsOptions` from the Pod
[`securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) fields work on Windows.
-->
#### Pod 安全上下文的字段兼容性 {#compatibility-v1-pod-spec-containers-securitycontext}

Pod 的 [`securityContext`](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
中只有 `securityContext.runAsNonRoot` 和 `securityContext.windowsOptions` 字段在 Windows 上生效。
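下面是一个最小示意(镜像与用户名仅作说明,并非本文档原有示例),展示在 Windows Pod 上生效的这两个字段:

```yaml
# 示意:Windows Pod 上生效的安全上下文字段。
apiVersion: v1
kind: Pod
metadata:
  name: windows-app
spec:
  securityContext:
    runAsNonRoot: true
    windowsOptions:
      runAsUserName: "ContainerUser"   # Windows 容器内置的非管理员用户
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2022   # 仅作示意
```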

<!--
@ -518,7 +521,7 @@ The node problem detector (see
has preliminary support for Windows.
For more information, visit the project's [GitHub page](https://github.com/kubernetes/node-problem-detector#windows).
-->
## 节点问题检测器 {#node-problem-detector}

节点问题检测器(参考[节点健康监测](/zh-cn/docs/tasks/debug/debug-cluster/monitor-node-health/))初步支持 Windows。
有关更多信息,请访问该项目的 [GitHub 页面](https://github.com/kubernetes/node-problem-detector#windows)。

@ -534,7 +537,7 @@ containers, share a common network endpoint (same IPv4 and / or IPv6 address, sa
network port spaces). Kubernetes uses pause containers to allow for worker containers
crashing or restarting without losing any of the networking configuration.
-->
## Pause 容器 {#pause-container}

在 Kubernetes Pod 中,首先创建一个基础容器或 "pause" 容器来承载容器。
在 Linux 中,构成 Pod 的 cgroup 和命名空间维持持续存在需要一个进程;

@ -577,7 +580,7 @@ into each node in the cluster so that Pods can run there.

The following container runtimes work with Windows:
-->
## 容器运行时 {#container-runtime}

你需要将{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}安装到集群中的每个节点,
这样 Pod 才能在这些节点上运行。
@ -596,7 +599,7 @@ as the container runtime for Kubernetes nodes that run Windows.

Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd).
-->
### ContainerD

{{< feature-state for_k8s_version="v1.20" state="stable" >}}

@ -624,7 +627,7 @@ is available as a container runtime for all Windows Server 2019 and later versio

See [Install MCR on Windows Servers](https://docs.mirantis.com/mcr/20.10/install/mcr-windows.html) for more information.
-->
### Mirantis 容器运行时 {#mcr}

[Mirantis 容器运行时](https://docs.mirantis.com/mcr/20.10/overview.html)(MCR)
可作为所有 Windows Server 2019 和更高版本的容器运行时。
@ -641,7 +644,7 @@ operating system of Windows Server 2019 are fully supported.
For Kubernetes v{{< skew currentVersion >}}, operating system compatibility for Windows nodes (and Pods)
is as follows:
-->
## Windows 操作系统版本兼容性 {#windows-os-version-support}

Windows 节点会应用严格的兼容性规则,要求主机操作系统版本必须与容器基础镜像操作系统版本匹配。

@ -654,13 +657,79 @@ Windows Server LTSC release
: Windows Server 2022

Windows Server SAC release
: Windows Server version 20H2

<!--
The Kubernetes [version-skew policy](/docs/setup/release/version-skew-policy/) also applies.
-->
也适用 Kubernetes [版本偏差策略](/zh-cn/releases/version-skew-policy/)。

<!--
## Hardware recommendations and considerations {#windows-hardware-recommendations}
-->
## 硬件建议和注意事项 {#windows-hardware-recommendations}

{{% thirdparty-content %}}

{{< note >}}
<!--
The hardware specifications outlined here should be regarded as sensible default values.
They are not intended to represent minimum requirements or specific recommendations for production environments.
Depending on the requirements for your workload these values may need to be adjusted.
-->
这里列出的硬件规格应被视为合理的默认值。
它们并不代表生产环境的最低要求或具体推荐。
根据你的工作负载要求,这些值可能需要进行调整。
{{< /note >}}

<!--
- 64-bit processor 4 CPU cores or more, capable of supporting virtualization
- 8GB or more of RAM
- 50GB or more of free disk space
-->
- 64 位处理器,4 核或更多的 CPU,能够支持虚拟化
- 8GB 或更多的 RAM
- 50GB 或更多的可用磁盘空间

<!--
Refer to
[Hardware requirements for Windows Server Microsoft documentation](https://learn.microsoft.com/en-us/windows-server/get-started/hardware-requirements)
for the most up-to-date information on minimum hardware requirements. For guidance on deciding on resources for
production worker nodes refer to [Production worker nodes Kubernetes documentation](https://kubernetes.io/docs/setup/production-environment/#production-worker-nodes).
-->
有关最新的最低硬件要求信息,
请参考[微软文档:Windows Server 的硬件要求](https://learn.microsoft.com/zh-cn/windows-server/get-started/hardware-requirements)。
有关决定生产工作节点资源的指导信息,
请参考 [Kubernetes 文档:生产用工作节点](https://kubernetes.io/zh-cn/docs/setup/production-environment/#production-worker-nodes)。

<!--
To optimize system resources, if a graphical user interface is not required,
it may be preferable to use a Windows Server OS installation that excludes
the [Windows Desktop Experience](https://learn.microsoft.com/en-us/windows-server/get-started/install-options-server-core-desktop-experience)
installation option, as this configuration typically frees up more system
resources.
-->
为了优化系统资源,如果图形用户界面不是必需的,最好选择一个不包含
[Windows 桌面体验](https://learn.microsoft.com/zh-cn/windows-server/get-started/install-options-server-core-desktop-experience)安装选项的
Windows Server 操作系统安装包,因为这种配置通常会释放更多的系统资源。

<!--
In assessing disk space for Windows worker nodes, take note that Windows container images are typically larger than
Linux container images, with container image sizes ranging
from [300MB to over 10GB](https://techcommunity.microsoft.com/t5/containers/nano-server-x-server-core-x-server-which-base-image-is-the-right/ba-p/2835785)
for a single image. Additionally, take note that the `C:` drive in Windows containers represents a virtual free size of
20GB by default, which is not the actual consumed space, but rather the disk size for which a single container can grow
to occupy when using local storage on the host.
See [Containers on Windows - Container Storage Documentation](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-storage#storage-limits)
for more detail.
-->
在估算 Windows 工作节点的磁盘空间时,需要注意 Windows 容器镜像通常比 Linux 容器镜像更大,
单个镜像的大小范围从 [300MB 到超过 10GB](https://techcommunity.microsoft.com/t5/containers/nano-server-x-server-core-x-server-which-base-image-is-the-right/ba-p/2835785)。
此外,需要注意 Windows 容器中的 `C:` 驱动器默认呈现的虚拟剩余空间为 20GB,
这不是实际的占用空间,而是使用主机上的本地存储时单个容器可以最多占用的磁盘大小。
有关更多详细信息,
请参见[在 Windows 上运行容器 - 容器存储文档](https://learn.microsoft.com/zh-cn/virtualization/windowscontainers/manage-containers/container-storage#storage-limits)。

<!--
## Getting help and troubleshooting {#troubleshooting}

@ -675,7 +744,7 @@ troubleshooting assistance from other contributors. Follow the
instructions in the
SIG Windows [contributing guide on gathering logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs).
-->
## 获取帮助和故障排查 {#troubleshooting}

对 Kubernetes 集群进行故障排查的主要帮助来源应始于[故障排查](/zh-cn/docs/tasks/debug/)页面。

@ -695,14 +764,12 @@ reported previously and comment with your experience on the issue and add additi
logs. SIG Windows channel on the Kubernetes Slack is also a great avenue to get some initial support and
troubleshooting ideas prior to creating a ticket.
-->
### 报告问题和功能请求 {#report-issue-and-feature-request}

如果你发现疑似 bug,或者你想提出功能请求,请按照
[SIG Windows 贡献指南](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#reporting-issues-and-feature-requests)
新建一个 Issue。你应该先搜索 Issue 列表,以防之前报告过这个问题,凭你对该问题的经验添加评论,
并随附日志信息。Kubernetes Slack 上的 SIG Windows 频道也是一个很好的途径,
可以在创建工单之前获得一些初始支持和故障排查思路。

## {{% heading "whatsnext" %}}

@ -715,7 +782,7 @@ plane to manage the cluster it, and nodes to run your workloads.

The Kubernetes [cluster API](https://cluster-api.sigs.k8s.io/) project also provides means to automate deployment of Windows nodes.
-->
## 部署工具 {#deployment-tools}

kubeadm 工具帮助你部署 Kubernetes 集群,提供管理集群的控制平面以及运行工作负载的节点。

@ -732,7 +799,7 @@ Information on the different Windows Server servicing channels
including their support models can be found at
[Windows Server servicing channels](https://docs.microsoft.com/en-us/windows-server/get-started/servicing-channels-comparison).
-->
## Windows 分发渠道 {#windows-distribution-channels}

有关 Windows 分发渠道的详细阐述,请参考
[Microsoft 文档](https://docs.microsoft.com/zh-cn/windows-server/get-started-19/servicing-channels-19)。

@ -0,0 +1,297 @@
---
title: 自动扩缩工作负载
description: >-
  通过自动扩缩,你可以用某种方式自动更新你的工作负载。
  这使你的集群能够更灵活、更高效地应对资源需求的变化。
content_type: concept
weight: 40
---
<!--
title: Autoscaling Workloads
description: >-
  With autoscaling, you can automatically update your workloads in one way or another. This allows your cluster to react to changes in resource demand more elastically and efficiently.
content_type: concept
weight: 40
-->

<!-- overview -->

<!--
In Kubernetes, you can _scale_ a workload depending on the current demand of resources.
This allows your cluster to react to changes in resource demand more elastically and efficiently.

When you scale a workload, you can either increase or decrease the number of replicas managed by
the workload, or adjust the resources available to the replicas in-place.

The first approach is referred to as _horizontal scaling_, while the second is referred to as
_vertical scaling_.

There are manual and automatic ways to scale your workloads, depending on your use case.
-->
在 Kubernetes 中,你可以根据当前的资源需求**扩缩**工作负载。
这让你的集群可以更灵活、更高效地面对资源需求的变化。

当你扩缩工作负载时,你可以增加或减少工作负载所管理的副本数量,或者就地调整副本的可用资源。

第一种手段称为**水平扩缩**,第二种称为**垂直扩缩**。

扩缩工作负载有手动和自动两种方式,这取决于你的使用情况。

<!-- body -->

<!--
## Scaling workloads manually
-->
## 手动扩缩工作负载 {#scaling-workloads-manually}

<!--
Kubernetes supports _manual scaling_ of workloads. Horizontal scaling can be done
using the `kubectl` CLI.
For vertical scaling, you need to _patch_ the resource definition of your workload.

See below for examples of both strategies.
-->
Kubernetes 支持工作负载的手动扩缩。水平扩缩可以使用 `kubectl` 命令行工具完成。
对于垂直扩缩,你需要**更新(patch)**工作负载的资源定义。

这两种策略的示例见下文。

<!--
- **Horizontal scaling**: [Running multiple instances of your app](/docs/tutorials/kubernetes-basics/scale/scale-intro/)
- **Vertical scaling**: [Resizing CPU and memory resources assigned to containers](/docs/tasks/configure-pod-container/resize-container-resources)
-->
- **水平扩缩**:[运行应用程序的多个实例](/zh-cn/docs/tutorials/kubernetes-basics/scale/scale-intro/)
- **垂直扩缩**:[调整分配给容器的 CPU 和内存资源](/zh-cn/docs/tasks/configure-pod-container/resize-container-resources)

<!--
## Scaling workloads automatically
-->
## 自动扩缩工作负载 {#scaling-workloads-automatically}

<!--
Kubernetes also supports _automatic scaling_ of workloads, which is the focus of this page.
-->
Kubernetes 也支持工作负载的**自动扩缩**,这也是本页的重点。

<!--
The concept of _Autoscaling_ in Kubernetes refers to the ability to automatically update an
object that manages a set of Pods (for example a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}).
-->
在 Kubernetes 中,**自动扩缩**的概念是指自动更新某个管理一组 Pod 的对象
(例如 {{< glossary_tooltip text="Deployment" term_id="deployment" >}})的能力。

<!--
### Scaling workloads horizontally
-->
### 水平扩缩工作负载 {#scaling-workloads-horizontally}

<!--
In Kubernetes, you can automatically scale a workload horizontally using a _HorizontalPodAutoscaler_ (HPA).
-->
在 Kubernetes 中,你可以使用 HorizontalPodAutoscaler(HPA)实现工作负载的自动水平扩缩。

<!--
It is implemented as a Kubernetes API resource and a {{< glossary_tooltip text="controller" term_id="controller" >}}
and periodically adjusts the number of {{< glossary_tooltip text="replicas" term_id="replica" >}}
in a workload to match observed resource utilization such as CPU or memory usage.
-->
它以 Kubernetes API 资源和{{< glossary_tooltip text="控制器" term_id="controller" >}}的方式实现,
并定期调整工作负载中{{< glossary_tooltip text="副本" term_id="replica" >}}的数量,
以匹配观测到的资源利用率(如 CPU 或内存用量)。

<!--
There is a [walkthrough tutorial](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough) of configuring a HorizontalPodAutoscaler for a Deployment.
-->
这里有一个为 Deployment 配置 HorizontalPodAutoscaler 的[演练教程](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)。
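下面是一个最小示意(工作负载名称为假设,并非本文档原有示例),
展示一个基于 CPU 平均利用率、在 2 到 10 个副本之间扩缩的 HorizontalPodAutoscaler:

```yaml
# 示意:针对假设的 Deployment "my-app" 的 HPA。
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # 假设的工作负载名称
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```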

<!--
### Scaling workloads vertically
-->
### 垂直扩缩工作负载 {#scaling-workloads-vertically}

{{< feature-state for_k8s_version="v1.25" state="stable" >}}

<!--
You can automatically scale a workload vertically using a _VerticalPodAutoscaler_ (VPA).
Different to the HPA, the VPA doesn't come with Kubernetes by default, but is a separate project
that can be found [on GitHub](https://github.com/kubernetes/autoscaler/tree/9f87b78df0f1d6e142234bb32e8acbd71295585a/vertical-pod-autoscaler).
-->
你可以使用 VerticalPodAutoscaler(VPA)实现工作负载的自动垂直扩缩。
不同于 HPA,VPA 并不随 Kubernetes 默认提供,而是一个独立的项目,
可以在 [GitHub](https://github.com/kubernetes/autoscaler/tree/9f87b78df0f1d6e142234bb32e8acbd71295585a/vertical-pod-autoscaler) 上找到。

<!--
Once installed, it allows you to create {{< glossary_tooltip text="CustomResourceDefinitions" term_id="customresourcedefinition" >}}
(CRDs) for your workloads which define _how_ and _when_ to scale the resources of the managed replicas.
-->
安装后,你可以为工作负载创建 {{< glossary_tooltip text="CustomResourceDefinitions" term_id="customresourcedefinition" >}}(CRD),
用来定义**如何**以及**何时**扩缩被管理副本的资源。

{{< note >}}
<!--
You will need to have the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
installed to your cluster for the HPA to work.
-->
你需要在集群中安装 [Metrics Server](https://github.com/kubernetes-sigs/metrics-server),这样你的 HPA 才能正常工作。
{{< /note >}}

<!--
|
||||
At the moment, the VPA can operate in four different modes:
|
||||
-->
|
||||
目前,VPA 可以有四种不同的运行模式:
|
||||
|
||||
<!--
|
||||
{{< table caption="Different modes of the VPA" >}}
|
||||
Mode | Description
|
||||
:----|:-----------
|
||||
`Auto` | Currently `Recreate`, might change to in-place updates in the future
|
||||
`Recreate` | The VPA assigns resource requests on pod creation as well as updates them on existing pods by evicting them when the requested resources differ significantly from the new recommendation
|
||||
`Initial` | The VPA only assigns resource requests on pod creation and never changes them later.
|
||||
`Off` | The VPA does not automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.
|
||||
{{< /table >}}
|
||||
-->
|
||||
{{< table caption="VPA 的不同模式" >}}
|
||||
模式 | 描述
|
||||
:----|:-----------
|
||||
`Auto` | 目前是 `Recreate`,将来可能改为就地更新
|
||||
`Recreate` | VPA 会在创建 Pod 时分配资源请求,并且当请求的资源与新的建议值区别很大时通过驱逐 Pod 的方式来更新现存的 Pod
|
||||
`Initial` | VPA 只在创建 Pod 时分配资源请求,之后不再更改
|
||||
`Off` | VPA 不会自动更改 Pod 的资源需求,建议值仍会计算并可在 VPA 对象中查看
|
||||
{{< /table >}}
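<!--
For example (assuming the VPA CRDs are installed in the cluster; the workload
name `my-app` is an illustrative assumption), a VerticalPodAutoscaler using the
`Auto` mode from the table above could look like:
-->
例如(假设集群中已安装 VPA 的 CRD;工作负载名称 `my-app` 为示意性假设),
一个使用上表中 `Auto` 模式的 VerticalPodAutoscaler 可以写成:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # 假设的目标工作负载名称
  updatePolicy:
    updateMode: "Auto"      # 对应上表中的某种模式
```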
|
||||
|
||||
<!--
|
||||
#### Requirements for in-place resizing
|
||||
-->
|
||||
#### 就地调整的要求 {#requirements-for-in-place-resizing}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
|
||||
|
||||
<!--
|
||||
Resizing a workload in-place **without** restarting the {{< glossary_tooltip text="Pods" term_id="pod" >}}
or its {{< glossary_tooltip text="Containers" term_id="container" >}} requires Kubernetes version 1.27 or later.<br />
Additionally, the `InPlacePodVerticalScaling` feature gate needs to be enabled.
|
||||
-->
|
||||
要在**不**重启 {{< glossary_tooltip text="Pod" term_id="pod" >}} 或其中的{{< glossary_tooltip text="容器" term_id="container" >}}的情况下就地调整工作负载,需要 Kubernetes 1.27 或更高版本。
此外,还需要启用 `InPlacePodVerticalScaling` 特性门控。
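<!--
With the feature gate enabled, a Pod can declare per-resource resize policies.
A sketch (the container name and resource values are illustrative assumptions):
-->
启用该特性门控后,Pod 可以为每种资源声明调整策略。
下面是一个示意(容器名称与资源数值均为假设值):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: app                        # 假设的容器名称
    image: nginx
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired     # CPU 可在不重启容器的情况下调整
    - resourceName: memory
      restartPolicy: RestartContainer
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
```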
|
||||
|
||||
{{< feature-gate-description name="InPlacePodVerticalScaling" >}}
|
||||
|
||||
<!--
|
||||
### Autoscaling based on cluster size
|
||||
-->
|
||||
### 根据集群规模自动扩缩 {#autoscaling-based-on-cluster-size}
|
||||
|
||||
<!--
|
||||
For workloads that need to be scaled based on the size of the cluster (for example
|
||||
`cluster-dns` or other system components), you can use the
|
||||
[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).<br />
|
||||
Just like the VPA, it is not part of the Kubernetes core, but hosted as its
|
||||
own project on GitHub.
|
||||
-->
|
||||
对于需要根据集群规模实现扩缩的工作负载(例如:`cluster-dns` 或者其他系统组件),
|
||||
你可以使用 [Cluster Proportional Autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler)。
|
||||
与 VPA 一样,这个项目不是 Kubernetes 核心项目的一部分,它在 GitHub 上有自己的项目。
|
||||
|
||||
<!--
|
||||
The Cluster Proportional Autoscaler watches the number of schedulable {{< glossary_tooltip text="nodes" term_id="node" >}}
|
||||
and cores and scales the number of replicas of the target workload accordingly.
|
||||
-->
|
||||
Cluster Proportional Autoscaler 会监视可调度{{< glossary_tooltip text="节点" term_id="node" >}}和核心的数量,
并相应地扩缩目标工作负载的副本数量。
|
||||
|
||||
<!--
|
||||
If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using
|
||||
the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler).
|
||||
The project is **currently in beta** and can be found on GitHub.
|
||||
-->
|
||||
如果副本的数量需要保持不变,你可以使用 [Cluster Proportional Vertical Autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler) 根据集群规模对工作负载进行垂直扩缩。
|
||||
这个项目目前处于 **beta** 阶段,你可以在 GitHub 上找到它。
|
||||
|
||||
<!--
|
||||
While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
|
||||
adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores
|
||||
in the cluster.
|
||||
-->
|
||||
Cluster Proportional Autoscaler 扩缩的是工作负载的副本数量,而 Cluster Proportional Vertical Autoscaler
则根据集群中节点和/或核心的数量调整工作负载(例如 Deployment 或 DaemonSet)的资源请求。
|
||||
|
||||
<!--
|
||||
### Event driven Autoscaling
|
||||
-->
|
||||
### 事件驱动型自动扩缩 {#event-driven-autoscaling}
|
||||
|
||||
<!--
|
||||
It is also possible to scale workloads based on events, for example using the
|
||||
[_Kubernetes Event Driven Autoscaler_ (**KEDA**)](https://keda.sh/).
|
||||
-->
|
||||
通过事件驱动实现工作负载的扩缩也是可行的,
|
||||
例如使用 [Kubernetes Event Driven Autoscaler (**KEDA**)](https://keda.sh/)。
|
||||
|
||||
<!--
|
||||
KEDA is a CNCF graduated project enabling you to scale your workloads based on the number
|
||||
of events to be processed, for example the amount of messages in a queue. There exists
|
||||
a wide range of adapters for different event sources to choose from.
|
||||
-->
|
||||
KEDA 是 CNCF 的毕业项目,能让你根据要处理事件的数量对工作负载进行扩缩,例如队列中消息的数量。
|
||||
有多种针对不同事件源的适配器可供选择。
|
||||
|
||||
<!--
|
||||
### Autoscaling based on schedules
|
||||
-->
|
||||
### 根据计划自动扩缩 {#autoscaling-based-on-schedules}
|
||||
|
||||
<!--
|
||||
Another strategy for scaling your workloads is to **schedule** the scaling operations, for example in order to
|
||||
reduce resource consumption during off-peak hours.
|
||||
-->
|
||||
扩缩工作负载的另一种策略是**计划**进行扩缩,例如在非高峰时段减少资源消耗。
|
||||
|
||||
<!--
|
||||
Similar to event driven autoscaling, such behavior can be achieved using KEDA in conjunction with
|
||||
its [`Cron` scaler](https://keda.sh/docs/2.13/scalers/cron/). The `Cron` scaler allows you to define schedules
|
||||
(and time zones) for scaling your workloads in or out.
|
||||
-->
|
||||
与事件驱动型自动扩缩相似,这种行为可以结合使用 KEDA 及其 [`Cron` 扩缩器](https://keda.sh/docs/2.13/scalers/cron/)来实现。
`Cron` 扩缩器允许你定义计划(和时区),以实现工作负载的扩容或缩容。
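<!--
A sketch of such a KEDA `ScaledObject` (the Deployment name, time zone, and
schedule below are illustrative assumptions):
-->
下面是这样一个 KEDA `ScaledObject` 的示意(其中的 Deployment 名称、时区和时间计划均为假设值):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
spec:
  scaleTargetRef:
    name: my-deployment            # 假设的目标 Deployment 名称
  triggers:
  - type: cron
    metadata:
      timezone: Asia/Shanghai      # 时区(示意值)
      start: 0 8 * * *             # 每天 08:00 扩容
      end: 0 20 * * *              # 每天 20:00 缩容
      desiredReplicas: "10"
```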
|
||||
|
||||
<!--
|
||||
## Scaling cluster infrastructure
|
||||
-->
|
||||
## 扩缩集群基础设施 {#scaling-cluster-infrastructure}
|
||||
|
||||
<!--
|
||||
If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself.
|
||||
-->
|
||||
如果扩缩工作负载无法满足你的需求,你也可以扩缩集群基础设施本身。
|
||||
|
||||
<!--
|
||||
Scaling the cluster infrastructure normally means adding or removing {{< glossary_tooltip text="nodes" term_id="node" >}}.
|
||||
This can be done using one of two available autoscalers:
|
||||
-->
|
||||
扩缩集群基础设施通常是指增加或移除{{< glossary_tooltip text="节点" term_id="node" >}}。
|
||||
这可以通过以下两种自动扩缩器中的任意一种实现:
|
||||
|
||||
- [**Cluster Autoscaler**](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
|
||||
- [**Karpenter**](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file)
|
||||
|
||||
<!--
|
||||
Both scalers work by watching for pods marked as _unschedulable_ and for _underutilized_ nodes, and
then adding or removing nodes as needed.
|
||||
-->
|
||||
这两种扩缩器的工作原理都是监测被标记为**无法调度(unschedulable)**的 Pod 和**利用率不足(underutilized)**的节点,
然后根据需要增加或移除节点。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
- Learn more about scaling horizontally
|
||||
- [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/)
|
||||
- [HorizontalPodAutoscaler Walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
|
||||
- [Resize Container Resources In-Place](/docs/tasks/configure-pod-container/resize-container-resources/)
|
||||
- [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
|
||||
-->
|
||||
- 了解有关横向扩缩的更多信息
|
||||
- [扩缩 StatefulSet](/zh-cn/docs/tasks/run-application/scale-stateful-set/)
|
||||
- [HorizontalPodAutoscaler 演练](/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
|
||||
- [调整分配给容器的 CPU 和内存资源](/zh-cn/docs/tasks/configure-pod-container/resize-container-resources/)
|
||||
- [自动扩缩集群 DNS 服务](/zh-cn/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
|
|
|
|||
|
||||
<!--
|
||||
As well as application containers, a Pod can contain
{{< glossary_tooltip text="init containers" term_id="init-container" >}} that run
during Pod startup. You can also inject
{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
for debugging a running Pod.
|
||||
-->
|
||||
除了应用容器,Pod 还可以包含在 Pod 启动期间运行的
{{< glossary_tooltip text="Init 容器" term_id="init-container" >}}。
你也可以注入{{< glossary_tooltip text="临时性容器" term_id="ephemeral-container" >}}来调试正在运行的 Pod。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
|
|||
|
||||
Pod 类似于共享名字空间并共享文件系统卷的一组容器。
|
||||
|
||||
<!--
|
||||
Pods in a Kubernetes cluster are used in two main ways:
|
||||
|
||||
* **Pods that run a single container**. The "one-container-per-Pod" model is the
|
||||
most common Kubernetes use case; in this case, you can think of a Pod as a
|
||||
wrapper around a single container; Kubernetes manages Pods rather than managing
|
||||
the containers directly.
|
||||
* **Pods that run multiple containers that need to work together**. A Pod can
|
||||
encapsulate an application composed of
|
||||
[multiple co-located containers](#how-pods-manage-multiple-containers) that are
|
||||
tightly coupled and need to share resources. These co-located containers
|
||||
form a single cohesive unit.
|
||||
-->
|
||||
Kubernetes 集群中的 Pod 主要有两种用法:
|
||||
|
||||
* **运行单个容器的 Pod**。"每个 Pod 一个容器"模型是最常见的 Kubernetes 用例;
|
||||
在这种情况下,可以将 Pod 看作单个容器的包装器,并且 Kubernetes 直接管理 Pod,而不是容器。
|
||||
* **运行多个协同工作的容器的 Pod**。
|
||||
Pod 可以封装由紧密耦合且需要共享资源的[多个并置容器](#how-pods-manage-multiple-containers)组成的应用。
|
||||
这些位于同一位置的容器构成一个内聚单元。
|
||||
|
||||
<!--
|
||||
Grouping multiple co-located and co-managed containers in a single Pod is a
|
||||
relatively advanced use case. You should use this pattern only in specific
|
||||
instances in which your containers are tightly coupled.
|
||||
|
||||
You don't need to run multiple containers to provide replication (for resilience
|
||||
or capacity); if you need multiple replicas, see
|
||||
[Workload management](/docs/concepts/workloads/controllers/).
|
||||
-->
|
||||
将多个并置、同管的容器组织到一个 Pod 中是一种相对高级的使用场景。
|
||||
只有在一些场景中,容器之间紧密关联时你才应该使用这种模式。
|
||||
|
||||
你不需要运行多个容器来扩展副本(为了弹性或容量);
|
||||
如果你需要多个副本,请参阅[工作负载管理](/zh-cn/docs/concepts/workloads/controllers/)。
|
||||
|
||||
<!--
|
||||
## Using Pods
|
||||
|
||||
|
|
|||
term_id="deployment" >}} or {{< glossary_tooltip text="Job" term_id="job" >}}.
|
||||
If your Pods need to track state, consider the
|
||||
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} resource.
|
||||
|
||||
-->
|
||||
通常你不需要直接创建 Pod,甚至单实例 Pod。相反,你会使用诸如
|
||||
{{< glossary_tooltip text="Deployment" term_id="deployment" >}} 或
|
||||
|
|
|||
如果 Pod 需要跟踪状态,可以考虑
|
||||
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} 资源。
|
||||
|
||||
|
||||
|
||||
<!--
|
||||
Each Pod is meant to run a single instance of a given application. If you want to
|
||||
scale your application horizontally (to provide more overall resources by running
|
||||
|
|
|||
参见 [Pod 和控制器](#pods-and-controllers)以了解 Kubernetes
|
||||
如何使用工作负载资源及其控制器以实现应用的扩缩和自动修复。
|
||||
|
||||
|
||||
<!--
|
||||
Pods natively provide two kinds of shared resources for their constituent containers:
|
||||
[networking](#pod-networking) and [storage](#pod-storage).
|
||||
|
|
|||
{{< glossary_tooltip text="Secret" term_id="secret" >}} 等)。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
### Pods manage multiple containers {#how-pods-manage-multiple-containers}
|
||||
|
||||
Pods are designed to support multiple cooperating processes (as containers) that form
|
||||
a cohesive unit of service. The containers in a Pod are automatically co-located and
|
||||
co-scheduled on the same physical or virtual machine in the cluster. The containers
|
||||
can share resources and dependencies, communicate with one another, and coordinate
|
||||
when and how they are terminated.
|
||||
-->
|
||||
### Pod 管理多个容器 {#how-pods-manage-multiple-containers}
|
||||
|
||||
Pod 被设计成支持构造内聚的服务单元的多个协作进程(形式为容器)。
|
||||
Pod 中的容器被自动并置到集群中的同一物理机或虚拟机上,并可以一起进行调度。
|
||||
容器之间可以共享资源和依赖、彼此通信、协调何时以及何种方式终止自身。
|
||||
|
||||
<!--intentionally repeats some text from earlier in the page, with more detail -->
|
||||
|
||||
<!--
|
||||
Pods in a Kubernetes cluster are used in two main ways:
|
||||
|
||||
* **Pods that run a single container**. The "one-container-per-Pod" model is the
|
||||
most common Kubernetes use case; in this case, you can think of a Pod as a
|
||||
wrapper around a single container; Kubernetes manages Pods rather than managing
|
||||
the containers directly.
|
||||
* **Pods that run multiple containers that need to work together**. A Pod can
|
||||
encapsulate an application composed of
|
||||
multiple co-located containers that are
|
||||
tightly coupled and need to share resources. These co-located containers
|
||||
form a single cohesive unit of service—for example, one container serving data
|
||||
stored in a shared volume to the public, while a separate
|
||||
{{< glossary_tooltip text="sidecar container" term_id="sidecar-container" >}}
|
||||
refreshes or updates those files.
|
||||
The Pod wraps these containers, storage resources, and an ephemeral network
|
||||
identity together as a single unit.
|
||||
-->
|
||||
Kubernetes 集群中的 Pod 主要有两种用法:
|
||||
|
||||
* **运行单个容器的 Pod**。"每个 Pod 一个容器" 模型是最常见的 Kubernetes 用例;
|
||||
在这种情况下,可以将 Pod 看作单个容器的包装器。Kubernetes 直接管理 Pod,而不是容器。
|
||||
* **运行多个需要协同工作的容器的 Pod**。
|
||||
Pod 可以封装由多个紧密耦合且需要共享资源的并置容器组成的应用。
|
||||
这些位于同一位置的容器可能形成单个内聚的服务单元 —— 一个容器将文件从共享卷提供给公众,
|
||||
而另一个单独的{{< glossary_tooltip text="边车容器" term_id="sidecar-container" >}}则刷新或更新这些文件。
|
||||
Pod 将这些容器和存储资源打包为一个可管理的实体。
|
||||
|
||||
<!--
|
||||
For example, you might have a container that
|
||||
acts as a web server for files in a shared volume, and a separate
|
||||
[sidecar container](/docs/concepts/workloads/pods/sidecar-containers/)
|
||||
that updates those files from a remote source, as in the following diagram:
|
||||
-->
|
||||
例如,你可能有一个容器,为共享卷中的文件提供 Web 服务器支持,以及一个单独的
|
||||
[边车(Sidecar)](/zh-cn/docs/concepts/workloads/pods/sidecar-containers/)
|
||||
容器负责从远端更新这些文件,如下图所示:
|
||||
|
||||
{{< figure src="/zh-cn/docs/images/pod.svg" alt="Pod 创建示意图" class="diagram-medium" >}}
|
||||
|
||||
<!--
|
||||
Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}}
|
||||
as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}.
|
||||
By default, init containers run and complete before the app containers are started.
|
||||
-->
|
||||
有些 Pod 具有 {{< glossary_tooltip text="Init 容器" term_id="init-container" >}}和
|
||||
{{< glossary_tooltip text="应用容器" term_id="app-container" >}}。
|
||||
Init 容器默认会在启动应用容器之前运行并完成。
|
||||
|
||||
<!--
|
||||
You can also have [sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/)
|
||||
that provide auxiliary services to the main application Pod (for example: a service mesh).
|
||||
-->
|
||||
你还可以拥有为主应用 Pod 提供辅助服务的
|
||||
[边车容器](/zh-cn/docs/concepts/workloads/pods/sidecar-containers/)(例如:服务网格)。
|
||||
|
||||
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
<!--
|
||||
Enabled by default, the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
allows you to specify `restartPolicy: Always` for init containers.
|
||||
Setting the `Always` restart policy ensures that the init containers where you set it are
|
||||
treated as _sidecars_ that are kept running during the entire lifetime of the Pod.
|
||||
See [Sidecar containers and restartPolicy](/docs/concepts/workloads/pods/init-containers/#sidecar-containers-and-restartpolicy)
|
||||
for more details.
|
||||
-->
|
||||
启用 `SidecarContainers` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)(默认启用)允许你为
|
||||
Init 容器指定 `restartPolicy: Always`。设置重启策略为 `Always` 会确保设置的 Init 容器被视为**边车**,
|
||||
并在 Pod 的整个生命周期内保持运行。
|
||||
更多细节参阅[边车容器和重启策略](/zh-cn/docs/concepts/workloads/pods/init-containers/#sidecar-containers-and-restartpolicy)。
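<!--
A sketch of a Pod using this (the container names, image, and commands are
illustrative assumptions): the init container keeps running as a sidecar for the
whole lifetime of the Pod.
-->
下面是使用这一机制的 Pod 示意(其中的容器名称、镜像和命令均为假设值):
该 Init 容器会作为边车在 Pod 的整个生命周期内持续运行。

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
  - name: app                      # 假设的主应用容器
    image: alpine:latest
    command: ['sh', '-c', 'while true; do echo hello >> /opt/logs.txt; sleep 1; done']
    volumeMounts:
    - name: data
      mountPath: /opt
  initContainers:
  - name: log-shipper              # 假设的边车容器
    image: alpine:latest
    restartPolicy: Always          # 使其被视为边车并持续运行
    command: ['sh', '-c', 'tail -F /opt/logs.txt']
    volumeMounts:
    - name: data
      mountPath: /opt
  volumes:
  - name: data
    emptyDir: {}
```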
|
||||
|
||||
<!--
|
||||
## Container probes
|
||||
|
||||
|
|
|
|
|||
-->
|
||||
## 使用 cgroup v2 的内存 QoS {#memory-qos-with-cgroup-v2}
|
||||
|
||||
{{< feature-state feature_gate_name="MemoryQoS" >}}
|
||||
|
||||
<!--
|
||||
Memory QoS uses the memory controller of cgroup v2 to guarantee memory resources in Kubernetes.
|
||||
|
|
|
|
|||
-->
|
||||
|
||||
<!-- overview -->
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
<!--
|
||||
Sidecar containers are the secondary containers that run along with the main
|
||||
|
|
|||
<!--
|
||||
## Enabling sidecar containers
|
||||
|
||||
Enabled by default with Kubernetes 1.29, a
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) named
|
||||
`SidecarContainers` allows you to specify a `restartPolicy` for containers listed in a
|
||||
Pod's `initContainers` field. These restartable _sidecar_ containers are independent from
|
||||
|
|
|||
-->
|
||||
## 启用边车容器 {#enabling-sidecar-containers}
|
||||
|
||||
Kubernetes 1.29 默认启用一个名为 `SidecarContainers`
|
||||
的[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)允许你为
|
||||
Pod 的 `initContainers` 字段中列出的容器指定 `restartPolicy`。这些可重启的**边车**容器与同一
|
||||
Pod 内的其他 [Init 容器](/zh-cn/docs/concepts/workloads/pods/init-containers/)及主应用容器相互独立。
|
||||
|
|
|||
|
||||
{{% code_sample language="yaml" file="application/job/job-sidecar.yaml" %}}
|
||||
|
||||
|
||||
|
||||
<!--
|
||||
## Differences from regular containers
|
||||
|
||||
|
|
|
|
|||
|
||||
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
|
||||
|
||||
<!--
|
||||
### Feature state retrieval from description file
|
||||
|
||||
To dynamically determine the state of the feature, make use of the `feature_gate_name`
|
||||
shortcode parameter. The feature state details will be extracted from the corresponding feature gate
|
||||
description file located in `content/en/docs/reference/command-line-tools-reference/feature-gates/`.
|
||||
For example:
|
||||
-->
|
||||
### 从描述文件中检索特性状态
|
||||
|
||||
要动态确定特性的状态,请使用 `feature_gate_name` 短代码参数,此参数将从
|
||||
`content/en/docs/reference/command-line-tools-reference/feature-gates/`
|
||||
中相应的特性门控描述文件中提取特性的详细状态信息。
|
||||
|
||||
例如:
|
||||
|
||||
```
|
||||
{{</* feature-state feature_gate_name="NodeSwap" */>}}
|
||||
```
|
||||
|
||||
<!--
|
||||
Renders to:
|
||||
-->
|
||||
会转换为:
|
||||
|
||||
{{< feature-state feature_gate_name="NodeSwap" >}}
|
||||
|
||||
<!--
|
||||
## Feature gate description
|
||||
|
||||
|
|
|
|
|||
<a href="#audit-k8s-io-v1-Level"><code>Level</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
AuditLevel at which event was generated
|
||||
-->
|
||||
<p>
|
||||
生成事件所对应的审计级别。
|
||||
</p>
|
||||
|
|
|||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/types#UID"><code>k8s.io/apimachinery/pkg/types.UID</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
Unique audit ID, generated for each request.
|
||||
-->
|
||||
<p>
|
||||
为每个请求所生成的唯一审计 ID。
|
||||
</p>
|
||||
|
|
|||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Stage of the request handling when this event instance was generated.
|
||||
-->
|
||||
生成此事件时请求的处理阶段。
|
||||
</p>
|
||||
</td>
|
||||
|
|
|||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
RequestURI is the request URI as sent by the client to a server.
|
||||
-->
|
||||
requestURI 是客户端发送到服务器端的请求 URI。
|
||||
</p>
|
||||
</td>
|
||||
|
|
|||
</tr>
|
||||
|
||||
<tr><td><code>user</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#userinfo-v1-authentication-k8s-io"><code>authentication/v1.UserInfo</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Authenticated user information.
|
||||
-->
|
||||
关于认证用户的信息。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>impersonatedUser</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#userinfo-v1-authentication-k8s-io"><code>authentication/v1.UserInfo</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Impersonated user information.
|
||||
-->
|
||||
关于所伪装(impersonated)的用户的信息。
|
||||
</p>
|
||||
</td>
|
||||
|
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
|
||||
<tr><td><code>userAgent</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
UserAgent records the user agent string reported by the client.
|
||||
Note that the UserAgent is provided by the client, and must not be trusted.
|
||||
-->
|
||||
userAgent 中记录客户端所报告的用户代理(User Agent)字符串。
|
||||
注意 userAgent 信息是由客户端提供的,一定不要信任。
|
||||
</p>
|
||||
|
|
|||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Object reference this request is targeted at.
|
||||
Does not apply for List-type requests, or non-resource requests.
|
||||
-->
|
||||
此请求所指向的对象引用。对于 List 类型的请求或者非资源请求,此字段可忽略。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>responseStatus</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#status-v1-meta"><code>meta/v1.Status</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
The response status, populated even when the ResponseObject is not a Status type.
|
||||
For successful responses, this will only include the Code and StatusSuccess.
|
||||
For non-status type error responses, this will be auto-populated with the error Message.
|
||||
-->
|
||||
响应的状态,当 responseObject 不是 Status 类型时被赋值。
|
||||
对于成功的请求,此字段仅包含 code 和 statusSuccess。
|
||||
对于非 Status 类型的错误响应,此字段会被自动赋值为出错信息。
|
||||
|
|
|||
Omitted for non-resource requests. Only logged at Request Level and higher.
|
||||
-->
|
||||
来自请求的 API 对象,以 JSON 格式呈现。requestObject 在请求中按原样记录
|
||||
(可能会采用 JSON 重新编码),之后会进入版本转换、默认值填充、准入控制以及配置信息合并等阶段。
|
||||
此对象为外部版本化的对象类型,甚至其自身可能并不是一个合法的对象。对于非资源请求,此字段被忽略。
|
||||
只有当审计级别为 Request 或更高的时候才会记录。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
||||
<tr><td><code>responseObject</code><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#Unknown"><code>k8s.io/apimachinery/pkg/runtime.Unknown</code></a>
|
||||
</td>
|
||||
|
|
|||
to the external type, and serialized as JSON. Omitted for non-resource requests. Only logged
|
||||
at Response Level.
|
||||
-->
|
||||
响应中包含的 API 对象,以 JSON 格式呈现。responseObject 是在被转换为外部类型并序列化为
|
||||
JSON 格式之后才被记录的。对于非资源请求,此字段会被忽略。
|
||||
只有审计级别为 Response 时才会记录。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>requestReceivedTimestamp</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
Time the request reached the apiserver.
|
||||
-->
|
||||
<p>
|
||||
请求到达 API 服务器时的时间。
|
||||
</p>
|
||||
|
|
|||
</tr>
|
||||
|
||||
<tr><td><code>stageTimestamp</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--Time the request reached current audit stage.-->
|
||||
<!--
|
||||
Time the request reached current audit stage.
|
||||
-->
|
||||
请求到达当前审计阶段时的时间。
|
||||
</p>
|
||||
</td>
|
||||
|
@ -278,8 +296,7 @@ Note: All but the last IP can be arbitrarily set by the client.
|
|||
should be short. Annotations are included in the Metadata level.
|
||||
-->
|
||||
annotations 是一个随审计事件一起保存的、无结构的键-值映射。
|
||||
该事件可以由请求处理链路上的插件来设置,包括身份认证插件、鉴权插件以及
|
||||
准入控制插件等。
|
||||
该事件可以由请求处理链路上的插件来设置,包括身份认证插件、鉴权插件以及准入控制插件等。
|
||||
注意这些注解是针对审计事件本身的,与所提交的对象中的 metadata.annotations
|
||||
之间不存在对应关系。
|
||||
映射中的键名应该唯一性地标识生成该事件的组件,从而避免名字上的冲突
|
||||
|
@ -309,7 +326,7 @@ EventList 是审计事件(Event)的列表。
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>EventList</code></td></tr>
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted"><!--No description provided.-->列表结构元数据</span>
|
||||
|
@ -351,11 +368,13 @@ Policy 定义的是审计日志的配置以及不同类型请求的日志记录
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>Policy</code></td></tr>
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--ObjectMeta is included for interoperability with API infrastructure.-->
|
||||
<!--
|
||||
ObjectMeta is included for interoperability with API infrastructure.
|
||||
-->
|
||||
包含 <code>metadata</code> 字段是为了便于与 API 基础设施之间实现互操作。
|
||||
</p>
|
||||
<!--Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field.-->
|
||||
|
@ -368,15 +387,15 @@ Policy 定义的是审计日志的配置以及不同类型请求的日志记录
|
|||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--Rules specify the audit Level a request should be recorded at.
|
||||
<!--
|
||||
Rules specify the audit Level a request should be recorded at.
|
||||
A request may match multiple rules, in which case the FIRST matching rule is used.
|
||||
The default audit level is None, but can be overridden by a catch-all rule at the end of the list.
|
||||
PolicyRules are strictly ordered.
|
||||
-->
|
||||
字段 rules 设置请求要被记录的审计级别(level)。
|
||||
每个请求可能会与多条规则相匹配;发生这种状况时遵从第一条匹配规则。
|
||||
默认的审计级别是 None,不过可以在列表的末尾使用一条全抓(catch-all)规则
|
||||
重载其设置。
|
||||
默认的审计级别是 None,不过可以在列表的末尾使用一条全抓(catch-all)规则重载其设置。
|
||||
列表中的规则(PolicyRule)是严格有序的。
|
||||
</p>
|
||||
</td>
|
||||
|
@ -398,7 +417,6 @@ PolicyRules are strictly ordered.
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
|
||||
<tr>
|
||||
<td>
|
||||
<code>omitManagedFields</code><br/>
|
||||
|
@ -440,7 +458,7 @@ PolicyList 是由审计策略(Policy)组成的列表。
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>PolicyList</code></td></tr>
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted"><!--No description provided.-->列表结构元数据。</span>
|
||||
|
@ -481,8 +499,10 @@ GroupResources 代表的是某 API 组中的资源类别。
|
|||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--Group is the name of the API group that contains the resources.
|
||||
The empty string represents the core API group.-->
|
||||
<!--
|
||||
Group is the name of the API group that contains the resources.
|
||||
The empty string represents the core API group.
|
||||
-->
|
||||
字段 group 给出包含资源的 API 组的名称。
|
||||
空字符串代表 <code>core</code> API 组。
|
||||
</td>
|
||||
|
@ -493,34 +513,33 @@ GroupResources 代表的是某 API 组中的资源类别。
|
|||
</td>
|
||||
<td>
|
||||
<!--
|
||||
Resources is a list of resources this rule applies to.
|
||||
<p>For example:
|
||||
'pods' matches pods.
|
||||
'pods/log' matches the log subresource of pods.
|
||||
'*' matches all resources and their subresources.
|
||||
'pods/*' matches all subresources of pods.
|
||||
'*/scale' matches all scale subresources.</p>
|
||||
<p>Resources is a list of resources this rule applies to.</p>
|
||||
<p>For example:</p>
|
||||
<ul>
|
||||
<li><code>pods</code> matches pods.</li>
|
||||
<li><code>pods/log</code> matches the log subresource of pods.</li>
|
||||
<li><code>*</code> matches all resources and their subresources.</li>
|
||||
<li><code>pods/*</code> matches all subresources of pods.</li>
|
||||
<li><code>*/scale</code> matches all scale subresources.</li>
|
||||
</ul>
|
||||
-->
|
||||
<p>例如:</p>
|
||||
<ul>
|
||||
<li><code>pods</code> 匹配 Pod;</li>
|
||||
<li><code>pods/log</code> 匹配 Pod 的 log 子资源;</li>
|
||||
<li><code>*</code> 匹配所有资源及其子资源;</li>
|
||||
<li><code>pods/*</code> 匹配 Pod 的所有子资源;</li>
|
||||
<p><code>resources</code> 是此规则所适用的资源的列表。</p>
|
||||
<p>例如:</p>
|
||||
<ul>
|
||||
<li><code>pods</code> 匹配 Pod。</li>
|
||||
<li><code>pods/log</code> 匹配 Pod 的 log 子资源。</li>
|
||||
<li><code>*</code> 匹配所有资源及其子资源。</li>
|
||||
<li><code>pods/*</code> 匹配 Pod 的所有子资源。</li>
|
||||
<li><code>*/scale</code> 匹配所有的 scale 子资源。</li>
|
||||
</ul>
|
||||
</ul>
|
||||
|
||||
<!--If wildcard is present, the validation rule will ensure resources do not
|
||||
overlap with each other.
|
||||
|
||||
An empty list implies all resources and subresources in this API groups apply.-->
|
||||
<p>
|
||||
如果存在通配符,则合法性检查逻辑会确保 resources 中的条目不会彼此重叠。
|
||||
</p>
|
||||
<br/>
|
||||
<p>
|
||||
空的列表意味着规则适用于该 API 组中的所有资源及其子资源。
|
||||
</p>
|
||||
<!--
|
||||
<p>If wildcard is present, the validation rule will ensure resources do not
|
||||
overlap with each other.</p>
|
||||
<p>An empty list implies all resources and subresources in this API groups apply.</p>
|
||||
-->
|
||||
<p>如果存在通配符,则合法性检查逻辑会确保 resources 中的条目不会彼此重叠。</p>
|
||||
<p>空的列表意味着规则适用于该 API 组中的所有资源及其子资源。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
|
@ -621,8 +640,10 @@ ObjectReference 包含的是用来检查或修改所引用对象时将需要的
|
|||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--APIGroup is the name of the API group that contains the referred object.
|
||||
The empty string represents the core API group.-->
|
||||
<!--
|
||||
APIGroup is the name of the API group that contains the referred object.
|
||||
The empty string represents the core API group.
|
||||
-->
|
||||
<p>
|
||||
字段 apiGroup 给出包含所引用对象的 API 组的名称。
|
||||
空字符串代表 <code>core</code> API 组。
|
||||
|
@ -634,7 +655,9 @@ ObjectReference 包含的是用来检查或修改所引用对象时将需要的
|
|||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--APIVersion is the version of the API group that contains the referred object.-->
|
||||
<!--
|
||||
APIVersion is the version of the API group that contains the referred object.
|
||||
-->
|
||||
<p>
|
||||
字段 apiVersion 是包含所引用对象的 API 组的版本。
|
||||
</p>
|
||||
|
@ -685,7 +708,9 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
<a href="#audit-k8s-io-v1-Level"><code>Level</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--The Level that requests matching this rule are recorded at.-->
|
||||
<!--
|
||||
The Level that requests matching this rule are recorded at.
|
||||
-->
|
||||
<p>
|
||||
与此规则匹配的请求所对应的日志记录级别(Level)。
|
||||
</p>
|
||||
|
@ -697,8 +722,10 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--The users (by authenticated user name) this rule applies to.
|
||||
An empty list implies every user.-->
|
||||
<!--
|
||||
The users (by authenticated user name) this rule applies to.
|
||||
An empty list implies every user.
|
||||
-->
|
||||
根据身份认证所确定的用户名的列表,给出此规则所适用的用户。
|
||||
空列表意味着适用于所有用户。
|
||||
</p>
|
||||
|
@ -710,9 +737,11 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--The user groups this rule applies to. A user is considered matching
|
||||
<!--
|
||||
The user groups this rule applies to. A user is considered matching
|
||||
if it is a member of any of the UserGroups.
|
||||
An empty list implies every user group.-->
|
||||
An empty list implies every user group.
|
||||
-->
|
||||
此规则所适用的用户组的列表。如果用户是所列用户组中任一用户组的成员,则视为匹配。
|
||||
空列表意味着适用于所有用户组。
|
||||
</p>
|
||||
|
@ -738,7 +767,9 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
<a href="#audit-k8s-io-v1-GroupResources"><code>[]GroupResources</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--Resources that this rule matches. An empty list implies all kinds in all API groups.-->
|
||||
<!--
|
||||
Resources that this rule matches. An empty list implies all kinds in all API groups.
|
||||
-->
|
||||
<p>
|
||||
此规则所适用的资源类别列表。
|
||||
空列表意味着适用于 API 组中的所有资源类别。
|
||||
|
@ -751,9 +782,11 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--Namespaces that this rule matches.
|
||||
<!--
|
||||
Namespaces that this rule matches.
|
||||
The empty string "" matches non-namespaced resources.
|
||||
An empty list implies every namespace.-->
|
||||
An empty list implies every namespace.
|
||||
-->
|
||||
此规则所适用的名字空间列表。
|
||||
空字符串("")意味着适用于非名字空间作用域的资源。
|
||||
空列表意味着适用于所有名字空间。
|
||||
|
@ -766,20 +799,21 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
</td>
|
||||
<td>
|
||||
<!--
|
||||
NonResourceURLs is a set of URL paths that should be audited.
|
||||
*s are allowed, but only as the full, final step in the path.
|
||||
Examples:
|
||||
"/metrics" - Log requests for apiserver metrics
|
||||
"/healthz*" - Log all health checks</p>
|
||||
<p>NonResourceURLs is a set of URL paths that should be audited.
|
||||
<code>*</code>s are allowed, but only as the full, final step in the path.
|
||||
Examples:</p>
|
||||
<ul>
|
||||
<li><code>/metrics</code> - Log requests for apiserver metrics</li>
|
||||
<li><code>/healthz*</code> - Log all health checks</li>
|
||||
</ul>
|
||||
-->
|
||||
|
||||
<p>
|
||||
字段 nonResourceURLs 给出一组需要被审计的 URL 路径。
|
||||
允许使用 <code>*</code>,但只能作为路径中最后一个完整分段。
|
||||
例如:
|
||||
</p>
|
||||
<li>"/metrics" - 记录对 API 服务器度量值(metrics)的所有请求;</li>
|
||||
<li>"/healthz*" - 记录所有健康检查请求。</li>
|
||||
<code>nonResourceURLs</code> 是一组需要被审计的 URL 路径。
|
||||
允许使用 <code>*</code>,但只能作为路径中最后一个完整分段。
|
||||
例如:</p>
|
||||
<ul>
|
||||
<li>"/metrics" - 记录对 API 服务器度量值(metrics)的所有请求;</li>
|
||||
<li>"/healthz*" - 记录所有健康检查。</li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -802,34 +836,36 @@ PolicyRule 包含一个映射,基于元数据将请求映射到某审计级别
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
|
||||
<tr>
|
||||
<td><code>omitManagedFields</code><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
OmitManagedFields indicates whether to omit the managed fields of the request
|
||||
and response bodies from being written to the API audit log.
|
||||
a value of 'true' will drop the managed fields from the API audit log
|
||||
a value of 'false' indicates that the managed fields should be included in the API audit log
|
||||
Note that the value, if specified, in this rule will override the global default
|
||||
If a value is not specified then the global default specified in
|
||||
Policy.OmitManagedFields will stand.
|
||||
-->
|
||||
<p>
|
||||
omitManagedFields 决定将请求和响应主体写入 API 审计日志时,是否省略其托管字段。
|
||||
</p>
|
||||
<ul>
|
||||
<li>值为 'true' 将从 API 审计日志中删除托管字段</li>
|
||||
<li>
|
||||
值为 'false' 表示托管字段应包含在 API 审计日志中
|
||||
请注意,如果设置了此规则中的值,则它会覆盖全局默认值。
|
||||
如果未指定,则使用 policy.omitManagedFields 中指定的全局默认值。
|
||||
</li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><code>omitManagedFields</code><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
<p>OmitManagedFields indicates whether to omit the managed fields of the request
|
||||
and response bodies from being written to the API audit log.</p>
|
||||
<ul>
|
||||
<li>a value of 'true' will drop the managed fields from the API audit log</li>
|
||||
<li>a value of 'false' indicates that the managed fields should be included
|
||||
in the API audit log
|
||||
Note that the value, if specified, in this rule will override the global default
|
||||
If a value is not specified then the global default specified in
|
||||
Policy.OmitManagedFields will stand.</li>
|
||||
</ul>
|
||||
-->
|
||||
<p>
|
||||
<code>omitManagedFields</code> 决定将请求和响应主体写入 API 审计日志时,是否省略其托管字段。
|
||||
</p>
|
||||
<ul>
|
||||
<li>值为 'true' 将从 API 审计日志中删除托管字段</li>
|
||||
<li>
|
||||
值为 'false' 表示托管字段应包含在 API 审计日志中
|
||||
请注意,如果设置了此规则中的值,则它会覆盖全局默认值。
|
||||
如果未指定,则使用 policy.omitManagedFields 中指定的全局默认值。
|
||||
</li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
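上表中的 Policy、PolicyRule、GroupResources 与 nonResourceURLs 等字段可以组合成一个审计策略文件。下面是一个示意性示例,其中的审计级别与资源选择均为假设的取值:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# 规则严格有序:请求匹配到第一条规则后即按该规则的级别记录
rules:
  # 以 RequestResponse 级别记录对 Pod 的操作
  - level: RequestResponse
    resources:
      - group: ""            # 空字符串代表 core API 组
        resources: ["pods"]
  # 以 Metadata 级别记录健康检查等非资源请求
  - level: Metadata
    nonResourceURLs:
      - "/healthz*"
      - "/metrics"
  # 末尾的全抓(catch-all)规则,覆盖默认的 None 级别
  - level: Metadata
    omitManagedFields: true
```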
|
||||
|
|
|
@ -39,7 +39,7 @@ FormatOptions 包含为不同日志格式提供的选项。
|
|||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>json</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>json</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="#JSONOptions"><code>JSONOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -47,7 +47,7 @@ FormatOptions 包含为不同日志格式提供的选项。
|
|||
[Alpha] JSON contains options for logging format "json".
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
<p>[Alpha] json 包含 "json" 日志格式的选项。
|
||||
<p>[Alpha] <code>json</code> 包含 "json" 日志格式的选项。
|
||||
只有 LoggingAlphaOptions 特性门控被启用时才可用。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -73,12 +73,13 @@ JSONOptions 包含为 "json" 日志格式提供的选项。
|
|||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
<tr><td><code>splitStream</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>splitStream</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--[Alpha] SplitStream redirects error messages to stderr while
|
||||
<!--
|
||||
[Alpha] SplitStream redirects error messages to stderr while
|
||||
info messages go to stdout, with buffering. The default is to write
|
||||
both to stdout, without buffering. Only available when
|
||||
the LoggingAlphaOptions feature gate is enabled.
|
||||
|
@ -91,7 +92,7 @@ the LoggingAlphaOptions feature gate is enabled.
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>infoBufferSize</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>infoBufferSize</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -135,7 +136,7 @@ LoggingConfiguration 包含日志选项。
|
|||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>format</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>format</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -149,7 +150,7 @@ default value of format is `text`
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>flushFrequency</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>flushFrequency</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="#TimeOrMetaDuration"><code>TimeOrMetaDuration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -168,7 +169,7 @@ Ignored if the selected logging backend writes log messages without buffering.
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>verbosity</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>verbosity</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="#VerbosityLevel"><code>VerbosityLevel</code></a>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -185,7 +186,7 @@ are always logged.
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>vmodule</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>vmodule</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -200,7 +201,7 @@ Only supported for "text" log format.
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>options</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>options</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="#FormatOptions"><code>FormatOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -324,12 +325,10 @@ TracingConfiguration provides versioned configuration for OpenTelemetry tracing
|
|||
-->
|
||||
<p>TracingConfiguration 为 OpenTelemetry 追踪客户端提供版本化的配置信息。</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">字段</th><th>描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
|
||||
<tr><td><code>endpoint</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
|
@ -352,7 +351,7 @@ Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.
|
|||
Recommended is unset. If unset, sampler respects its parent span's sampling
|
||||
rate, but otherwise never samples.
|
||||
-->
|
||||
<p>samplingRatePerMillion 是每百万 span 要采集的样本数。推荐不设置。
|
||||
<p><code>samplingRatePerMillion</code> 是每百万 span 要采集的样本数。推荐不设置。
|
||||
如果不设置,则采样器优先使用其父级 span 的采样率,否则不采样。</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -412,11 +411,10 @@ Kubelet 从磁盘上读取这些配置信息,并根据 CredentialProvider 类
|
|||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubelet.config.k8s.io/v1beta1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>CredentialProviderConfig</code></td></tr>
|
||||
|
||||
|
||||
|
||||
<tr><td><code>providers</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="#kubelet-config-k8s-io-v1beta1-CredentialProvider"><code>[]CredentialProvider</code></a>
|
||||
</td>
|
||||
|
@ -452,7 +450,7 @@ KubeletConfiguration 中包含 Kubelet 的配置。
|
|||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubelet.config.k8s.io/v1beta1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>KubeletConfiguration</code></td></tr>
|
||||
<tr><td><code>enableServer</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>enableServer</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -622,7 +620,8 @@ Default: ""
|
|||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--tlsPrivateKeyFile is the file containing x509 private key matching tlsCertFile.
|
||||
<!--
|
||||
tlsPrivateKeyFile is the file containing x509 private key matching tlsCertFile.
|
||||
Default: ""
|
||||
-->
|
||||
<p><code>tlsPrivateKeyFile</code> 是一个包含与 <code>tlsCertFile</code>
|
||||
|
@ -638,12 +637,12 @@ Default: ""
|
|||
<!--
|
||||
tlsCipherSuites is the list of allowed cipher suites for the server.
|
||||
Note that TLS 1.3 ciphersuites are not configurable.
|
||||
Values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants).
|
||||
Values are from tls package constants (https://pkg.go.dev/crypto/tls#pkg-constants).
|
||||
Default: nil
|
||||
-->
|
||||
<p><code>tlsCipherSuites</code> 是一个字符串列表,其中包含服务器所接受的加密包名称。
|
||||
请注意,TLS 1.3 密码套件是不可配置的。
|
||||
列表中的每个值来自于 <code>tls</code> 包中定义的常数(https://golang.org/pkg/crypto/tls/#pkg-constants)。</p>
|
||||
列表中的每个值来自于 <code>tls</code> 包中定义的常数(https://pkg.go.dev/crypto/tls#pkg-constants)。</p>
|
||||
<p>默认值:nil</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -652,12 +651,13 @@ Default: ""
|
|||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--tlsMinVersion is the minimum TLS version supported.
|
||||
Values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants).
|
||||
<!--
|
||||
tlsMinVersion is the minimum TLS version supported.
|
||||
Values are from tls package constants (https://pkg.go.dev/crypto/tls#pkg-constants).
|
||||
Default: ""
|
||||
-->
|
||||
<p><code>tlsMinVersion</code> 给出所支持的最小 TLS 版本。
|
||||
字段取值来自于 <code>tls</code> 包中的常数定义(https://golang.org/pkg/crypto/tls/#pkg-constants)。</p>
|
||||
字段取值来自于 <code>tls</code> 包中的常数定义(https://pkg.go.dev/crypto/tls#pkg-constants)。</p>
|
||||
<p>默认值:""</p>
|
||||
</td>
|
||||
</tr>
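作为示意,下面的 KubeletConfiguration 片段展示了上述 TLS 相关字段的一种组合方式(证书路径为假设值,密码套件名称取自 Go <code>tls</code> 包的常数):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCertFile: /var/lib/kubelet/pki/kubelet.crt        # 假设的证书路径
tlsPrivateKeyFile: /var/lib/kubelet/pki/kubelet.key  # 与 tlsCertFile 匹配的私钥
tlsMinVersion: VersionTLS12
tlsCipherSuites:                                     # TLS 1.3 密码套件不可配置
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```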
|
||||
|
@ -666,7 +666,8 @@ Default: ""
|
|||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--rotateCertificates enables client certificate rotation. The Kubelet will request a
|
||||
<!--
|
||||
rotateCertificates enables client certificate rotation. The Kubelet will request a
|
||||
new certificate from the certificates.k8s.io API. This requires an approver to approve the
|
||||
certificate signing requests.
|
||||
Default: false
|
||||
|
@ -1005,15 +1006,34 @@ Default: 40
|
|||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--imageMinimumGCAge is the minimum age for an unused image before it is
|
||||
<!--
|
||||
imageMinimumGCAge is the minimum age for an unused image before it is
|
||||
garbage collected.
|
||||
Default: "2m"
|
||||
-->
|
||||
<p><code>imageMinimumGCAge</code> 是对未使用镜像进行垃圾搜集之前允许其存在的时长。</p>
|
||||
<p><code>imageMinimumGCAge</code> 是对未使用镜像进行垃圾收集之前允许其存在的时长。</p>
|
||||
<p>默认值:"2m"</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td>
|
||||
<code>imageMaximumGCAge</code><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
imageMaximumGCAge is the maximum age an image can be unused before it is garbage collected.
|
||||
The default of this field is "0s", which disables this field--meaning images won't be garbage
|
||||
collected based on being unused for too long.
|
||||
Default: "0s" (disabled)
|
||||
-->
|
||||
<p><code>imageMaximumGCAge</code> 是镜像在被垃圾收集之前可以保持未使用状态的最长时长。
|
||||
此字段的默认值为 "0s",表示禁用此字段,这意味着镜像不会因为过长时间不使用而被垃圾收集。
|
||||
默认值:"0s"(已禁用)</p>
|
||||
</td>
|
||||
</tr>
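下面的片段演示了镜像垃圾收集相关字段的一种可能配置(取值仅作示意):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageMinimumGCAge: "2m"          # 未使用镜像至少存在 2 分钟后才可能被回收
imageMaximumGCAge: "24h"         # 未使用超过 24 小时的镜像会被回收;"0s" 表示禁用
imageGCHighThresholdPercent: 85  # 磁盘用量超过 85% 时触发镜像垃圾收集
imageGCLowThresholdPercent: 80   # 回收到磁盘用量低于 80% 为止
```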
|
||||
|
||||
<tr><td><code>imageGCHighThresholdPercent</code><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
|
@ -1180,7 +1200,8 @@ Default: nil
|
|||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--cpuManagerReconcilePeriod is the reconciliation period for the CPU Manager.
|
||||
<!--
|
||||
cpuManagerReconcilePeriod is the reconciliation period for the CPU Manager.
|
||||
Requires the CPUManager feature gate to be enabled.
|
||||
Default: "10s"
|
||||
-->
|
||||
|
@ -1333,7 +1354,7 @@ themselves if they should try to access their own Service. Values:</p>
|
|||
Generally, one must set <code>--hairpin-mode=hairpin-veth</code> to achieve hairpin NAT,
|
||||
because promiscuous-bridge assumes the existence of a container bridge named cbr0.
|
||||
Default: "promiscuous-bridge"
|
||||
-->
|
||||
-->
|
||||
<p>一般而言,用户必须设置 <code>--hairpin-mode=hairpin-veth</code> 才能实现发夹模式的网络地址转译
|
||||
(NAT),因为混杂模式的网桥要求存在一个名为 <code>cbr0</code> 的容器网桥。</p>
|
||||
<p>默认值:"promiscuous-bridge"</p>
|
||||
|
@ -1393,8 +1414,7 @@ If set to the empty string, will override the default and effectively disable DN
|
|||
Default: "/etc/resolv.conf"
|
||||
-->
|
||||
<p><code>resolvConf</code> 是一个域名解析配置文件,用作容器 DNS 解析配置的基础。</p>
|
||||
<p>如果此值设置为空字符串,则会覆盖 DNS 解析的默认配置,
|
||||
本质上相当于禁用了 DNS 查询。</p>
|
||||
<p>如果此值设置为空字符串,则会覆盖 DNS 解析的默认配置,本质上相当于禁用了 DNS 查询。</p>
|
||||
<p>默认值:"/etc/resolv.conf"</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -1582,7 +1602,8 @@ Default:
|
|||
<code>map[string]string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--evictionSoft is a map of signal names to quantities that defines soft eviction thresholds.
|
||||
<!--
|
||||
evictionSoft is a map of signal names to quantities that defines soft eviction thresholds.
|
||||
For example: <code>{"memory.available": "300Mi"}</code>.
|
||||
Default: nil
|
||||
-->
|
||||
|
@ -1596,7 +1617,8 @@ Default: nil
|
|||
<code>map[string]string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--evictionSoftGracePeriod is a map of signal names to quantities that defines grace
|
||||
<!--
|
||||
evictionSoftGracePeriod is a map of signal names to quantities that defines grace
|
||||
periods for each soft eviction signal. For example: <code>{"memory.available": "30s"}</code>.
|
||||
Default: nil
|
||||
-->
|
||||
|
@ -1892,12 +1914,12 @@ Default: nil
|
|||
<p><code>kubeReserved</code> 是一组<code>资源名称=资源数量</code>对,
|
||||
用来描述为 Kubernetes 系统组件预留的资源(例如:'cpu=200m,memory=150G')。
|
||||
目前支持 CPU、内存和根文件系统的本地存储。
|
||||
更多细节可参见 https://kubernetes.io/zh/docs/concepts/configuration/manage-resources-containers/。</p>
|
||||
更多细节可参见 https://kubernetes.io/zh-cn/docs/concepts/configuration/manage-resources-containers/。</p>
|
||||
<p>默认值:Nil</p>
|
||||
</td>
|
||||
</tr>
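例如,可以按如下方式为系统守护进程和 Kubernetes 系统组件预留资源(数量仅为假设的示例值):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:               # 为 Kubernetes 系统组件预留的资源
  cpu: "200m"
  memory: "512Mi"
  ephemeral-storage: "1Gi"
systemReserved:             # 为操作系统守护进程预留的资源
  cpu: "100m"
  memory: "256Mi"
```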
|
||||
|
||||
<tr><td><code>reservedSystemCPUs</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>reservedSystemCPUs</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -2069,7 +2091,7 @@ memcg 通知机制来确定是否超出内存逐出阈值,而不是使用轮
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>logging</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>logging</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="#LoggingConfiguration"><code>LoggingConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -2080,7 +2102,7 @@ for more information.
|
|||
Default:
|
||||
Format: text
|
||||
-->
|
||||
<p><code>logging</code>设置日志机制选项。更多的详细信息科参阅
|
||||
<p><code>logging</code>设置日志机制选项。更多的详细信息可参阅
|
||||
<a href="https://github.com/kubernetes/component-base/blob/master/logs/options.go">日志选项</a>。</p>
|
||||
<p>默认值:</p>
|
||||
<code><pre>Format: text</pre></code>
|
||||
|
@ -2118,7 +2140,8 @@ EnableSystemLogHandler has to be enabled in addition for this feature to work.
|
|||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--shutdownGracePeriod specifies the total duration that the node should delay the
|
||||
<!--
|
||||
shutdownGracePeriod specifies the total duration that the node should delay the
|
||||
shutdown and total grace period for pod termination during a node shutdown.
|
||||
Default: "0s"
|
||||
-->
|
||||
|
@ -2132,7 +2155,8 @@ Pod 提供的宽限期限的总时长。</p>
|
|||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--shutdownGracePeriodCriticalPods specifies the duration used to terminate critical
|
||||
<!--
|
||||
shutdownGracePeriodCriticalPods specifies the duration used to terminate critical
|
||||
pods during a node shutdown. This should be less than shutdownGracePeriod.
|
||||
For example, if shutdownGracePeriod=30s, and shutdownGracePeriodCriticalPods=10s,
|
||||
during a node shutdown the first 20 seconds would be reserved for gracefully
|
||||
|
@ -2309,7 +2333,7 @@ Default: 0.8
|
|||
</tr>
|
||||
|
||||
<tr><td><code>registerWithTaints</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#taint-v1-core"><code>[]core/v1.Taint</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#taint-v1-core"><code>[]core/v1.Taint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
|
@ -2341,8 +2365,10 @@ Default: true
|
|||
<a href="#TracingConfiguration"><code>TracingConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!-- Tracing specifies the versioned configuration for OpenTelemetry tracing clients.
|
||||
See https://kep.k8s.io/2832 for more details. -->
|
||||
<!--
|
||||
Tracing specifies the versioned configuration for OpenTelemetry tracing clients.
|
||||
See https://kep.k8s.io/2832 for more details.
|
||||
-->
|
||||
<p>tracing 为 OpenTelemetry 追踪客户端设置版本化的配置信息。
|
||||
参阅 https://kep.k8s.io/2832 了解更多细节。</p>
|
||||
</td>
|
||||
|
@ -2370,7 +2396,7 @@ Default: true
|
|||
默认值:true</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>containerRuntimeEndpoint</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>containerRuntimeEndpoint</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -2422,7 +2448,7 @@ SerializedNodeConfigSource 允许对 `v1.NodeConfigSource` 执行序列化操作
|
|||
<tr><td><code>kind</code><br/>string</td><td><code>SerializedNodeConfigSource</code></td></tr>
|
||||
|
||||
<tr><td><code>source</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
|
@ -2488,9 +2514,9 @@ and URL path.
|
|||
<!--
|
||||
Each entry in matchImages is a pattern which can optionally contain a port and a path.
|
||||
Globs can be used in the domain, but not in the port or the path. Globs are supported
|
||||
as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.
|
||||
Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match
|
||||
a single subdomain segment, so *.io does not match *.k8s.io.
|
||||
as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.
|
||||
Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match
|
||||
a single subdomain segment, so *.io does not match *.k8s.io.
|
||||
-->
|
||||
<p><code>matchImages</code> 中的每个条目都是一个模式字符串,其中可以包含端口号和路径。
|
||||
域名部分可以包含统配符,但端口或路径部分不可以。通配符可以用作子域名,例如
|
||||
|
@ -2896,7 +2922,7 @@ MemoryReservation 为每个 NUMA 节点设置不同类型的内存预留。
|
|||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>numaNode</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>numaNode</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -2905,8 +2931,8 @@ MemoryReservation 为每个 NUMA 节点设置不同类型的内存预留。
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>limits</code> <B>[必需]</B><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
|
||||
<tr><td><code>limits</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--span class="text-muted">No description provided.</span-->
|
||||
|
@ -2987,7 +3013,7 @@ ShutdownGracePeriodByPodPriority 基于 Pod 关联的优先级类数值来为其
|
|||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>priority</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>priority</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
|
@ -2998,7 +3024,7 @@ ShutdownGracePeriodByPodPriority 基于 Pod 关联的优先级类数值来为其
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>shutdownGracePeriodSeconds</code> <B>[必需]</B><br/>
|
||||
<tr><td><code>shutdownGracePeriodSeconds</code> <B><!-- [Required] -->[必需]</B><br/>
|
||||
<code>int64</code>
|
||||
</td>
|
||||
<td>
|
||||
|
|
|
@ -0,0 +1,53 @@
---
title: 副本(Replica)
id: replica
date: 2023-06-11
full_link:
short_description: >
  Replicas 是 Pod 的副本,通过维护相同的实例确保可用性、可扩缩性和容错性。
aka:
tags:
- fundamental
- workload
---

<!--
title: Replica
id: replica
date: 2023-06-11
full_link:
short_description: >
  Replicas are copies of pods, ensuring availability, scalability, and fault tolerance by maintaining identical instances.
aka:
tags:
- fundamental
- workload
-->

<!--
A copy or duplicate of a {{< glossary_tooltip text="Pod" term_id="pod" >}} or
a set of pods. Replicas ensure high availability, scalability, and fault tolerance
by maintaining multiple identical instances of a pod.
-->
单个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 或一组 Pod 的复制拷贝。
Replicas 通过维护多个相同的 Pod 实例保证了高可用性、可扩缩性和容错性。

<!--more-->
<!--
Replicas are commonly used in Kubernetes to achieve the desired application state and reliability.
They enable workload scaling and distribution across multiple nodes in a cluster.

By defining the number of replicas in a Deployment or ReplicaSet, Kubernetes ensures that
the specified number of instances are running, automatically adjusting the count as needed.

Replica management allows for efficient load balancing, rolling updates, and
self-healing capabilities in a Kubernetes cluster.
-->
Kubernetes 中通常使用副本来实现期望的应用状态和可靠性。
它们可以在集群的多个节点上扩缩和分配工作负载。

在 Deployment 或 ReplicaSet 中定义副本数量,Kubernetes 会确保运行所期望数量的实例,
并且会根据需要自动调整这个数量。

副本管理在 Kubernetes 集群中提供了高效的负载均衡、滚动更新和自愈能力。
@ -36,21 +36,6 @@ kubeadm config print join-defaults [flags]

</colgroup>
<tbody>

<tr>
<td colspan="2">--component-configs strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
A comma-separated list for component config API objects to print the default values for. Available values: [KubeProxyConfiguration KubeletConfiguration]. If this flag is not set, no component configs will be printed.
-->
<p>
以逗号分隔的组件配置 API 对象的列表,打印其默认值。可用值:[KubeProxyConfiguration KubeletConfiguration]。
如果未设置此参数,则不会打印任何组件配置。
</p>
</td>
</tr>

<tr>
<td colspan="2">-h, --help</td>
</tr>
@ -29,29 +29,13 @@ kubeadm config print reset-defaults [flags]

-->
### 选项

<table style="width: 100%; table-layout: fixed;">
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>

<tr>
<td colspan="2">--component-configs strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<p>
<!--
A comma-separated list for component config API objects to print the default values for. Available values: [KubeProxyConfiguration KubeletConfiguration]. If this flag is not set, no component configs will be printed.
-->
组件配置 API 对象的逗号分隔列表,打印其默认值。
可用值:[KubeProxyConfiguration KubeletConfiguration]。
如果此参数未被设置,则不会打印任何组件配置。
</p>
</td>
</tr>

<tr>
<td colspan="2">-h, --help</td>
</tr>

@ -74,7 +58,7 @@ reset-defaults 操作的帮助命令。

-->
### 从父命令继承的选项

<table style="width: 100%; table-layout: fixed;">
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
@ -19,11 +19,6 @@ If both files already exist, kubeadm skips the generation step and existing file

如果两个文件都已存在,则 kubeadm 将跳过生成步骤,使用现有文件。

<!--
Alpha Disclaimer: this command is currently alpha.
-->
Alpha 免责声明:此命令目前处于 Alpha 阶段。

```
kubeadm init phase certs etcd-ca [flags]
```
@ -18,11 +18,6 @@ If both files already exist, kubeadm skips the generation step and existing file

-->
如果两个文件都已存在,则 kubeadm 将跳过生成步骤,使用现有文件。

<!--
Alpha Disclaimer: this command is currently alpha.
-->
Alpha 免责声明:此命令当前为 Alpha 功能。

```
kubeadm init phase certs etcd-healthcheck-client [flags]
```
@ -9,15 +9,14 @@ Generate a private key for signing service account tokens along with its public

### 概要

<!--
Generate the private key for signing service account tokens along with its public key, and save them into sa.key and sa.pub files. If both files already exist, kubeadm skips the generation step and existing files will be used.
Generate the private key for signing service account tokens along with its public key, and save them into sa.key and sa.pub files.
-->
生成用来签署服务账号令牌的私钥及其公钥,并将其保存到 sa.key 和 sa.pub 文件中。
如果两个文件都已存在,则 kubeadm 会跳过生成步骤,而将使用现有文件。

<!--
Alpha Disclaimer: this command is currently alpha.
If both files already exist, kubeadm skips the generation step and existing files will be used.
-->
Alpha 免责声明:此命令当前为 Alpha 阶段。
如果两个文件都已存在,则 kubeadm 会跳过生成步骤,而将使用现有文件。

```
kubeadm init phase certs sa [flags]
@ -30,7 +30,7 @@ control plane's API server component remains available.

[通用表达式语言 (Common Expression Language, CEL)](https://github.com/google/cel-go)
用于声明 Kubernetes API 的验证规则、策略规则和其他限制或条件。

CEL 表达式在{{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}中直接进行评估,
CEL 表达式在 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}中直接进行处理,
这使得 CEL 成为许多可扩展性用例的便捷替代方案,而无需使用类似 Webhook 这种进程外机制。
只要控制平面的 API 服务器组件保持可用状态,你的 CEL 表达式就会继续执行。
@ -49,7 +49,7 @@ single expression that evaluates to a single value. CEL expressions are

typically short "one-liners" that inline well into the string fields of Kubernetes
API resources.
-->
## 语言概述 {#language-overview}
## 语言概述 {#language-overview}

[CEL 语言](https://github.com/google/cel-spec/blob/master/doc/langdef.md)的语法直观简单,
类似于 C、C++、Java、JavaScript 和 Go 中的表达式。
@ -68,7 +68,7 @@ different variables. See the API documentation of the API fields to learn which

variables are available for that field.
-->
对 CEL 程序的输入是各种 “变量”。包含 CEL 的每个 Kubernetes API 字段都在 API
文档中声明了字段可使用哪些变量。例如,在 CustomResourceDefinitions 的
文档中声明了字段可使用哪些变量。例如,在 CustomResourceDefinition 的
`x-kubernetes-validations[i].rules` 字段中,`self` 和 `oldSelf` 变量可用,
并且分别指代要由 CEL 表达式验证的自定义资源数据的前一个状态和当前状态。
其他 Kubernetes API 字段可能声明不同的变量。请查阅 API 字段的 API 文档以了解该字段可使用哪些变量。
@ -112,23 +112,60 @@ CEL 表达式示例:

{{< /table >}}

<!--
## CEL community libraries
## CEL options, language features, and libraries

Kubernetes CEL expressions have access to the following CEL community libraries:
CEL is configured with the following options, libraries and language features, introduced at the specified Kubernetes versions:
-->
## CEL 社区库 {#cel-community-libraries}
## CEL 选项、语言特性和库 {#cel-options-language-features-and-libraries}

Kubernetes CEL 表达式能够访问以下 CEL 社区库:
CEL 配置了以下选项、库和语言特性,这些特性是在所列的 Kubernetes 版本中引入的:

<!--
- CEL standard functions, defined in the [list of standard definitions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions)
- CEL standard [macros](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros)
- CEL [extended string function library](https://pkg.go.dev/github.com/google/cel-go/ext#Strings)
| CEL option, library or language feature | Included | Availability |
|------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|
| [Standard macros](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros) | `has`, `all`, `exists`, `exists_one`, `map`, `filter` | All Kubernetes versions |
| [Standard functions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions) | See [official list of standard definitions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions) | All Kubernetes versions |
| [Homogeneous Aggregate Literals](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#HomogeneousAggregateLiterals) | | All Kubernetes versions |
| [Default UTC Time Zone](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#DefaultUTCTimeZone) | | All Kubernetes versions |
| [Eagerly Validate Declarations](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#EagerlyValidateDeclarations) | | All Kubernetes versions |
| [extended strings library](https://pkg.go.dev/github.com/google/cel-go/ext#Strings), Version 1 | `charAt`, `indexOf`, `lastIndexOf`, `lowerAscii`, `upperAscii`, `replace`, `split`, `join`, `substring`, `trim` | All Kubernetes versions |
| Kubernetes list library | See [Kubernetes list library](#kubernetes-list-library) | All Kubernetes versions |
| Kubernetes regex library | See [Kubernetes regex library](#kubernetes-regex-library) | All Kubernetes versions |
| Kubernetes URL library | See [Kubernetes URL library](#kubernetes-url-library) | All Kubernetes versions |
| Kubernetes authorizer library | See [Kubernetes authorizer library](#kubernetes-authorizer-library) | All Kubernetes versions |
| Kubernetes quantity library | See [Kubernetes quantity library](#kubernetes-quantity-library) | Kubernetes versions 1.29+ |
| CEL optional types | See [CEL optional types](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#OptionalTypes) | Kubernetes versions 1.29+ |
| CEL CrossTypeNumericComparisons | See [CEL CrossTypeNumericComparisons](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#CrossTypeNumericComparisons) | Kubernetes versions 1.29+ |
-->
- [标准定义列表](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions)中定义的
  CEL 标准函数
- CEL 标准[宏](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros)
- CEL [扩展字符串函数库](https://pkg.go.dev/github.com/google/cel-go/ext#Strings)
| CEL 选项、库或语言特性 | 包含的内容 | 可用性 |
| ------------------- | -------- | ----- |
| [标准宏](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros) | `has`、`all`、`exists`、`exists_one`、`map`、`filter` | 所有 Kubernetes 版本 |
| [标准函数](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions) | 参见[官方标准定义列表](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions) | 所有 Kubernetes 版本 |
| [同质聚合字面量](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#HomogeneousAggregateLiterals) | | 所有 Kubernetes 版本 |
| [默认 UTC 时区](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#DefaultUTCTimeZone) | | 所有 Kubernetes 版本 |
| [迫切验证声明](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#EagerlyValidateDeclarations) | | 所有 Kubernetes 版本 |
| [扩展字符串库](https://pkg.go.dev/github.com/google/cel-go/ext#Strings),v1 | `charAt`、`indexOf`、`lastIndexOf`、`lowerAscii`、`upperAscii`、`replace`、`split`、`join`、`substring`、`trim` | 所有 Kubernetes 版本 |
| Kubernetes 列表库 | 参见 [Kubernetes 列表库](#kubernetes-list-library) | 所有 Kubernetes 版本 |
| Kubernetes 正则表达式库 | 参见 [Kubernetes 正则表达式库](#kubernetes-regex-library) | 所有 Kubernetes 版本 |
| Kubernetes URL 库 | 参见 [Kubernetes URL 库](#kubernetes-url-library) | 所有 Kubernetes 版本 |
| Kubernetes 鉴权组件库 | 参见 [Kubernetes 鉴权组件库](#kubernetes-authorizer-library) | 所有 Kubernetes 版本 |
| Kubernetes 数量库 | 参见 [Kubernetes 数量库](#kubernetes-quantity-library) | Kubernetes v1.29+ |
| CEL 可选类型 | 参见 [CEL 可选类型](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#OptionalTypes) | Kubernetes v1.29+ |
| CEL CrossTypeNumericComparisons | 参见 [CEL CrossTypeNumericComparisons](https://pkg.go.dev/github.com/google/cel-go@v0.17.4/cel#CrossTypeNumericComparisons) | Kubernetes v1.29+ |

<!--
CEL functions, features and language settings support Kubernetes control plane
rollbacks. For example, _CEL Optional Values_ was introduced at Kubernetes 1.29
and so only API servers at that version or newer will accept write requests to
CEL expressions that use _CEL Optional Values_. However, when a cluster is
rolled back to Kubernetes 1.28 CEL expressions using "CEL Optional Values" that
are already stored in API resources will continue to evaluate correctly.
-->
CEL 函数、特性和语言设置支持 Kubernetes 控制平面回滚。
例如,__CEL 可选值(Optional Values)__ 是在 Kubernetes 1.29 引入的,因此只有该版本或更新的
API 服务器才会接受使用 __CEL Optional Values__ 的 CEL 表达式的写入请求。
但是,当集群回滚到 Kubernetes 1.28 时,已经存储在 API 资源中的使用了
"CEL Optional Values" 的 CEL 表达式将继续正确评估。

<!--
## Kubernetes CEL libraries
@ -136,7 +173,7 @@ Kubernetes CEL 表达式能够访问以下 CEL 社区库:

In addition to the CEL community libraries, Kubernetes includes CEL libraries
that are available everywhere CEL is used in Kubernetes.
-->
## Kubernetes CEL 库 {#kubernetes-cel-libraries}
## Kubernetes CEL 库 {#kubernetes-cel-libraries}

除了 CEL 社区库之外,Kubernetes 还包含一些自己的 CEL 库,这些库在 Kubernetes 中所有使用 CEL 的地方都可用。
@ -151,7 +188,7 @@ The list library also includes `min`, `max` and `sum`. Sum is supported on all

number types as well as the duration type. Min and max are supported on all
comparable types.
-->
### Kubernetes 列表库 {#kubernetes-list-library}
### Kubernetes 列表库 {#kubernetes-list-library}

列表库包括 `indexOf` 和 `lastIndexOf`,这两个函数的功能类似于同名的字符串函数。
这些函数返回提供的元素在列表中的第一个或最后一个位置索引。
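上述列表库函数的语义可以用下面这个假设性的 Python 草图来说明(仅为示意,并非 Kubernetes 的真实实现;此处假定找不到元素时返回 -1,具体行为以官方文档为准):

```python
# 用 Python 演示 CEL 列表库中几个函数的大致语义(示意代码,非真实实现)
def index_of(lst, elem):
    # 对应 CEL 的 indexOf:返回元素首次出现的位置索引,找不到时返回 -1(假定行为)
    try:
        return lst.index(elem)
    except ValueError:
        return -1

def last_index_of(lst, elem):
    # 对应 CEL 的 lastIndexOf:从后向前扫描,返回元素最后一次出现的位置索引
    for i in range(len(lst) - 1, -1, -1):
        if lst[i] == elem:
            return i
    return -1

values = [1, 2, 3, 2]
print(index_of(values, 2))       # 1
print(last_index_of(values, 2))  # 3
# min、max、sum 对应列表库中的同名函数
print(min(values), max(values), sum(values))  # 1 3 8
```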
@ -205,7 +242,7 @@ regex operations.

Examples:
-->
### Kubernetes 正则表达式库 {#kubernete-regex-library}
### Kubernetes 正则表达式库 {#kubernete-regex-library}

除了 CEL 标准库提供的 `matches` 函数外,正则表达式库还提供了 `find` 和 `findAll`,
使得更多种类的正则表达式运算成为可能。
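`find` 与 `findAll` 的区别可以用 Python 的 `re` 模块粗略地演示如下(仅为语义示意,并非 Kubernetes 的实现;此处假定 `find` 在无匹配时返回空字符串):

```python
import re

# 用 Python 的 re 模块演示 find 与 findAll 的大致语义(示意代码)
def find(s, pattern):
    # 类似 CEL 的 find:返回第一个匹配的子串,找不到时返回空串(假定行为)
    m = re.search(pattern, s)
    return m.group(0) if m else ""

def find_all(s, pattern, limit=-1):
    # 类似 CEL 的 findAll:返回所有(或前 limit 个)匹配的子串组成的列表
    matches = [m.group(0) for m in re.finditer(pattern, s)]
    return matches if limit < 0 else matches[:limit]

text = "abc 123 def 456"
print(find(text, r"\d+"))      # 123
print(find_all(text, r"\d+"))  # ['123', '456']
```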
@ -245,7 +282,7 @@ To make it easier and safer to process URLs, the following functions have been a

- `url(string) URL` converts a string to a URL or results in an error if the
string is not a valid URL.
-->
### Kubernetes URL 库 {#kubernetes-url-library}
### Kubernetes URL 库 {#kubernetes-url-library}

为了更轻松、更安全地处理 URL,添加了以下函数:
@ -296,7 +333,7 @@ the authorizer may be used to perform authorization checks for the principal

API resource checks are performed as follows:
-->
### Kubernetes 鉴权组件库
### Kubernetes 鉴权组件库 {#kubernetes-authorizer-library}

在 API 中使用 CEL 表达式,可以使用类型为 `Authorizer` 的变量,
这个鉴权组件可用于对请求的主体(已认证用户)执行鉴权检查。
@ -318,7 +355,7 @@ API 资源检查的过程如下:

注意这些函数将返回接收者的类型,并且可以串接起来:
- `ResourceCheck.subresource(string) ResourceCheck`
- `ResourceCheck.namespace(string) ResourceCheck`
- `ResourceCheck.name(string) ResourceCheck`
- `ResourceCheck.name(string) ResourceCheck`
3. 调用 `ResourceCheck.check(verb string) Decision` 来执行鉴权检查。
4. 调用 `allowed() bool` 或 `reason() string` 来查验鉴权检查的结果。
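上述"返回接收者类型、可以串接"的调用风格,可以用下面这个假设性的 Python 草图来说明(仅为示意,并非 Kubernetes 的真实 API;`Decision` 的取值逻辑为桩实现):

```python
# 用 Python 草拟鉴权检查的链式(fluent)调用风格(示意代码)
class Decision:
    def __init__(self, allowed, reason):
        self._allowed, self._reason = allowed, reason
    def allowed(self):
        return self._allowed
    def reason(self):
        return self._reason

class ResourceCheck:
    def __init__(self, group, resource):
        self.group, self.resource = group, resource
        self.ns = self.res_name = self.sub = None

    def namespace(self, ns):
        self.ns = ns
        return self  # 返回接收者自身,因此可以继续串接

    def name(self, name):
        self.res_name = name
        return self

    def subresource(self, sub):
        self.sub = sub
        return self

    def check(self, verb):
        # 真实实现会委托给 API 服务器的鉴权机制;这里总是返回"允许",仅演示调用形态
        return Decision(allowed=True, reason="stub")

decision = ResourceCheck("apps", "deployments").namespace("default").name("web").check("get")
print(decision.allowed())  # True
```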
@ -331,7 +368,7 @@ Non-resource authorization performed are used as follows:

-->
对非资源访问的鉴权过程如下:

1. 仅指定路径:`Authorizer.path(string) PathCheck`
1. 仅指定路径:`Authorizer.path(string) PathCheck`。
1. 调用 `PathCheck.check(httpVerb string) Decision` 来执行鉴权检查。
1. 调用 `allowed() bool` 或 `reason() string` 来查验鉴权检查的结果。
@ -366,6 +403,98 @@ godoc for more information.

更多信息请参阅 Go 文档:
[Kubernetes Authz library](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz)。

<!--
### Kubernetes quantity library

Kubernetes 1.28 adds support for manipulating quantity strings (ex 1.5G, 512k, 20Mi)
-->
### Kubernetes 数量库 {#kubernetes-quantity-library}

Kubernetes 1.28 添加了对数量字符串(例如 1.5G、512k、20Mi)的操作支持。

<!--
- `isQuantity(string)` checks if a string is a valid Quantity according to [Kubernetes'
  resource.Quantity](https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity).
- `quantity(string) Quantity` converts a string to a Quantity or results in an error if the
  string is not a valid quantity.
-->
- `isQuantity(string)` 根据 [Kubernetes 的 resource.Quantity](https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity),
  检查字符串是否是有效的 Quantity。
- `quantity(string) Quantity` 将字符串转换为 Quantity,如果字符串不是有效的数量,则会报错。

<!--
Once parsed via the `quantity` function, the resulting Quantity object has the
following library of member functions:
-->
一旦通过 `quantity` 函数解析,得到的 Quantity 对象将具有以下成员函数库:
<!--
{{< table caption="Available member functions of a Quantity" >}}
| Member Function | CEL Return Value | Description |
|-------------------------------|-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `isInteger()` | bool | returns true if and only if asInteger is safe to call without an error |
| `asInteger()` | int | returns a representation of the current value as an int64 if possible or results in an error if conversion would result in overflow or loss of precision. |
| `asApproximateFloat()` | float | returns a float64 representation of the quantity which may lose precision. If the value of the quantity is outside the range of a float64 +Inf/-Inf will be returned. |
| `sign()` | int | Returns `1` if the quantity is positive, `-1` if it is negative. `0` if it is zero |
| `add(<Quantity>)` | Quantity | Returns sum of two quantities |
| `add(<int>)` | Quantity | Returns sum of quantity and an integer |
| `sub(<Quantity>)` | Quantity | Returns difference between two quantities |
| `sub(<int>)` | Quantity | Returns difference between a quantity and an integer |
| `isLessThan(<Quantity>)` | bool | Returns true if and only if the receiver is less than the operand |
| `isGreaterThan(<Quantity>)` | bool | Returns true if and only if the receiver is greater than the operand |
| `compareTo(<Quantity>)` | int | Compares receiver to operand and returns 0 if they are equal, 1 if the receiver is greater, or -1 if the receiver is less than the operand |
{{< /table >}}
-->
{{< table caption="Quantity 的可用成员函数" >}}
| 成员函数 | CEL 返回值 | 描述 |
| ------------------------ | --------- | --- |
| `isInteger()` | bool | 仅当 asInteger 可以被安全调用且不出错时,才返回 true |
| `asInteger()` | int | 将当前值作为 int64 的表示返回,如果转换会导致溢出或精度丢失,则会报错 |
| `asApproximateFloat()` | float | 返回数量的 float64 表示,可能会丢失精度。如果数量的值超出了 float64 的范围,则返回 +Inf/-Inf |
| `sign()` | int | 如果数量为正,则返回 1;如果数量为负,则返回 -1;如果数量为零,则返回 0 |
| `add(<Quantity>)` | Quantity | 返回两个数量的和 |
| `add(<int>)` | Quantity | 返回数量和整数的和 |
| `sub(<Quantity>)` | Quantity | 返回两个数量的差 |
| `sub(<int>)` | Quantity | 返回数量减去整数的差 |
| `isLessThan(<Quantity>)` | bool | 如果接收值小于操作数,则返回 true |
| `isGreaterThan(<Quantity>)`| bool | 如果接收值大于操作数,则返回 true |
| `compareTo(<Quantity>)` | int | 将接收值与操作数进行比较,如果它们相等,则返回 0;如果接收值大于操作数,则返回 1;如果接收值小于操作数,则返回 -1 |
{{< /table >}}

<!--
Examples:
-->
例如:

<!--
{{< table caption="Examples of CEL expressions using quantity library functions" >}}
| CEL Expression | Purpose |
|---------------------------------------------------------------------------|-------------------------------------------------------|
| `quantity("500000G").isInteger()` | Test if conversion to integer would throw an error |
| `quantity("50k").asInteger()` | Precise conversion to integer |
| `quantity("9999999999999999999999999999999999999G").asApproximateFloat()` | Lossy conversion to float |
| `quantity("50k").add("20k")` | Add two quantities |
| `quantity("50k").sub(20000)` | Subtract an integer from a quantity |
| `quantity("50k").add(20).sub(quantity("100k")).sub(-50000)` | Chain adding and subtracting integers and quantities |
| `quantity("200M").compareTo(quantity("0.2G"))` | Compare two quantities |
| `quantity("150Mi").isGreaterThan(quantity("100Mi"))` | Test if a quantity is greater than the receiver |
| `quantity("50M").isLessThan(quantity("100M"))` | Test if a quantity is less than the receiver |
{{< /table >}}
-->
{{< table caption="使用数量库函数的 CEL 表达式示例" >}}
| CEL 表达式 | 用途 |
|---------------------------------------------------------------------------| -------------------- |
| `quantity("500000G").isInteger()` | 测试转换为整数是否会报错 |
| `quantity("50k").asInteger()` | 精确转换为整数 |
| `quantity("9999999999999999999999999999999999999G").asApproximateFloat()` | 松散转换为浮点数 |
| `quantity("50k").add("20k")` | 两个数量相加 |
| `quantity("50k").sub(20000)` | 从数量中减去整数 |
| `quantity("50k").add(20).sub(quantity("100k")).sub(-50000)` | 链式相加和减去整数和数量 |
| `quantity("200M").compareTo(quantity("0.2G"))` | 比较两个数量 |
| `quantity("150Mi").isGreaterThan(quantity("100Mi"))` | 测试数量是否大于接收值 |
| `quantity("50M").isLessThan(quantity("100M"))` | 测试数量是否小于接收值 |
{{< /table >}}
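上面几个示例的计算结果,可以用下面这个粗略模拟数量字符串解析的 Python 草图来验证(仅为示意,远未覆盖 `resource.Quantity` 的全部语法与精度语义):

```python
# 用 Python 粗略模拟数量字符串的解析与运算(示意代码,非真实实现)
SUFFIXES = {
    "k": 10**3, "M": 10**6, "G": 10**9,      # 十进制后缀
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,   # 二进制后缀
}

def parse_quantity(s):
    # 先尝试较长的后缀("Mi" 要先于 "M" 匹配)
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if s.endswith(suffix):
            return float(s[: -len(suffix)]) * SUFFIXES[suffix]
    return float(s)

# 对应 quantity("50k").add("20k")
print(parse_quantity("50k") + parse_quantity("20k"))     # 70000.0
# 对应 quantity("200M").compareTo(quantity("0.2G")) == 0
print(parse_quantity("200M") == parse_quantity("0.2G"))  # True
# 对应 quantity("150Mi").isGreaterThan(quantity("100Mi"))
print(parse_quantity("150Mi") > parse_quantity("100Mi"))  # True
```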
<!--
## Type checking

@ -376,16 +505,16 @@ example, [CustomResourceDefinitions Validation

Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
are fully type checked.
-->
## 类型检查 {#type-checking}
## 类型检查 {#type-checking}

CEL 是一种[逐渐类型化的语言](https://github.com/google/cel-spec/blob/master/doc/langdef.md#gradual-type-checking)。

一些 Kubernetes API 字段包含完全经过类型检查的 CEL 表达式。
例如,[CustomResourceDefinitions 验证规则](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)就是完全经过类型检查的。
例如,[CustomResourceDefinition 验证规则](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)就是完全经过类型检查的。
<!--
Some Kubernetes API fields contain partially type checked CEL expressions. A
partially type checked expression is an experessions where some of the variables
partially type checked expression is an expression where some of the variables
are statically typed but others are dynamically typed. For example, in the CEL
expressions of
[ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)
@ -397,7 +526,7 @@ that `object` refers to, because `object` is dynamically typed.

-->
一些 Kubernetes API 字段包含部分经过类型检查的 CEL 表达式。
部分经过类型检查的表达式是指一些变量是静态类型,而另一些变量是动态类型的表达式。
例如在 [ValidatingAdmissionPolicies](/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/)
例如在 [ValidatingAdmissionPolicy](/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/)
的 CEL 表达式中,`request` 变量是有类型的,但 `object` 变量是动态类型的。
因此,包含 `request.namex` 的表达式将无法通过类型检查,因为 `namex` 字段未定义。
然而,即使对于 `object` 所引用的资源种类没有定义 `namex` 字段,
@ -418,7 +547,7 @@ has(object.namex) ? object.namex == 'special' : request.name == 'special'

<!--
## Type system integration
-->
## 类型系统集成 {#type-system-integration}
## 类型系统集成 {#type-system-integration}

<!--
{{< table caption="Table showing the relationship between OpenAPIv3 types and CEL types" >}}
@ -511,7 +640,7 @@ Only Kubernetes resource property names of the form

names are escaped according to the following rules when accessed in the
expression:
-->
## 转义 {#escaping}
## 转义 {#escaping}

仅形如 `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` 的 Kubernetes 资源属性名可以从 CEL 中访问。
当在表达式中访问可访问的属性名时,会根据以下规则进行转义:
@ -578,7 +707,7 @@ excessive resource consumption during evaluation. CEL's resource constraint

features are used to prevent CEL evaluation from consuming excessive API server
resources.
-->
## 资源约束 {#resource-constraints}
## 资源约束 {#resource-constraints}

CEL 不是图灵完备的,提供了多种生产安全控制手段来限制执行时间。
CEL 的**资源约束**特性提供了关于表达式复杂性的反馈,并帮助保护 API 服务器免受过度的资源消耗。
@ -625,7 +754,7 @@ expression. If the CEL interpreter executes too many instructions, the runtime

cost budget will be exceeded, execution of the expressions will be halted, and
an error will result.
-->
### 运行时成本预算 {#runtime-cost-budget}
### 运行时成本预算 {#runtime-cost-budget}

所有由 Kubernetes 评估的 CEL 表达式都受到运行时成本预算的限制。
运行时成本预算是通过在解释 CEL 表达式时增加成本单元计数器来计算实际 CPU 利用率的估算值。
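"解释器每执行一步就累加成本单元、超出预算即中止"这一思路,可以用下面这个假设性的 Python 草图来说明(仅为示意,与 API 服务器中真实的成本核算无关):

```python
# 用 Python 草拟"运行时成本预算"的思路(示意代码)
class BudgetExceeded(Exception):
    pass

class CostTracker:
    def __init__(self, budget):
        self.budget = budget
        self.spent = 0

    def charge(self, units=1):
        # 每执行一步就累加成本;超出预算即抛出异常、中止求值
        self.spent += units
        if self.spent > self.budget:
            raise BudgetExceeded(f"cost budget of {self.budget} exceeded")

def evaluate_all(items, predicate, tracker):
    # 模拟对列表逐项求值:每处理一个元素记一个成本单元
    for item in items:
        tracker.charge()
        if not predicate(item):
            return False
    return True

tracker = CostTracker(budget=5)
try:
    evaluate_all(range(100), lambda x: x >= 0, tracker)
except BudgetExceeded as e:
    print(e)  # cost budget of 5 exceeded
```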
@ -657,7 +786,7 @@ expression to the API resources. This feature offers a stronger assurance that

CEL expressions written to the API resource will be evaluate at runtime without
exceeding the runtime cost budget.
-->
### 估算的成本限制 {#estimated-cost-limits}
### 估算的成本限制 {#estimated-cost-limits}

对于某些 Kubernetes 资源,API 服务器还可能检查 CEL 表达式的最坏情况估计运行时间是否过于昂贵而无法执行。
如果是,则 API 服务器会拒绝包含 CEL 表达式的创建或更新操作,以防止 CEL 表达式被写入 API 资源。
@ -174,9 +174,9 @@ for database debugging.

```

<!--
27017 is the TCP port allocated to MongoDB on the internet.
27017 is the official TCP port for MongoDB.
-->
27017 是分配给 MongoDB 的互联网 TCP 端口。
27017 是 MongoDB 的官方 TCP 端口。

<!--
## Forward a local port to a port on the Pod
@ -134,10 +134,10 @@ Here is the configuration file for the application Deployment:

```

<!--
Make a note of the NodePort value for the service. For example,
Make a note of the NodePort value for the Service. For example,
in the preceding output, the NodePort value is 31496.
-->
注意服务中的 NodePort 值。例如在上面的输出中,NodePort 值是 31496。
注意 Service 中的 NodePort 值。例如在上面的输出中,NodePort 值是 31496。

<!--
1. List the pods that are running the Hello World application:
@ -486,7 +486,6 @@ For example:

    ports:
    - name: liveness-port
      containerPort: 8080
      hostPort: 8080

    livenessProbe:
      httpGet:

@ -520,7 +519,6 @@ So, the previous example would become:

    ports:
    - name: liveness-port
      containerPort: 8080
      hostPort: 8080

    livenessProbe:
      httpGet:

@ -883,7 +881,6 @@ spec:

    ports:
    - name: liveness-port
      containerPort: 8080
      hostPort: 8080

    livenessProbe:
      httpGet:
@ -100,7 +100,7 @@ spec:

```

<!--
## Clusters containing different types of GPUs
## Manage clusters with different types of GPUs

If different nodes in your cluster have different types of GPUs, then you
can use [Node Labels and Node Selectors](/docs/tasks/configure-pod-container/assign-pods-nodes/)
@ -108,7 +108,7 @@ to schedule pods to appropriate nodes.

For example:
-->
## 集群内存在不同类型的 GPU {#clusters-containing-different-types-of-gpus}
## 管理配有不同类型 GPU 的集群 {#manage-clusters-with-different-types-of-gpus}

如果集群内部的不同节点上有不同类型的 NVIDIA GPU,
那么你可以使用[节点标签和节点选择器](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/)来将
@ -116,6 +116,13 @@ Pod 调度到合适的节点上。

例如:

<!--
```shell
# Label your nodes with the accelerator type they have.
kubectl label nodes node1 accelerator=example-gpu-x100
kubectl label nodes node2 accelerator=other-gpu-k915
```
-->
```shell
# 为你的节点加上它们所拥有的加速器类型的标签
kubectl label nodes node1 accelerator=example-gpu-x100
@ -134,18 +141,92 @@ a different label key if you prefer.

## 自动节点标签 {#node-labeller}

<!--
If you're using AMD GPU devices, you can deploy
[Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller).
Node Labeller is a {{< glossary_tooltip text="controller" term_id="controller" >}} that automatically
labels your nodes with GPU device properties.

Similar functionality for NVIDIA is provided by
[GPU feature discovery](https://github.com/NVIDIA/gpu-feature-discovery/blob/main/README.md).
As an administrator, you can automatically discover and label all your GPU enabled nodes
by deploying Kubernetes [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) (NFD).
NFD detects the hardware features that are available on each node in a Kubernetes cluster.
Typically, NFD is configured to advertise those features as node labels, but NFD can also add extended resources, annotations, and node taints.
NFD is compatible with all [supported versions](/releases/version-skew-policy/#supported-versions) of Kubernetes.
By default NFD creates the [feature labels](https://kubernetes-sigs.github.io/node-feature-discovery/master/usage/features.html) for the detected features.
Administrators can leverage NFD to also taint nodes with specific features, so that only pods that request those features can be scheduled on those nodes.
-->
如果你在使用 AMD GPU,你可以部署
[Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller),
它是一个 {{< glossary_tooltip text="控制器" term_id="controller" >}},
会自动给节点打上 GPU 设备属性标签。
作为管理员,你可以通过部署 Kubernetes
[Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) (NFD)
来自动发现所有启用 GPU 的节点并为其打标签。NFD 检测 Kubernetes 集群中每个节点上可用的硬件特性。
通常,NFD 被配置为以节点标签的形式公布这些特性,但 NFD 也可以添加扩展资源、注解和节点污点。
NFD 兼容所有[支持版本](/zh-cn/releases/version-skew-policy/#supported-versions)的 Kubernetes。
NFD 默认会为检测到的特性创建[特性标签](https://kubernetes-sigs.github.io/node-feature-discovery/master/usage/features.html)。
管理员可以利用 NFD 对具有某些具体特性的节点添加污点,以便只有请求这些特性的 Pod 可以被调度到这些节点上。

对于 NVIDIA GPU,[GPU feature discovery](https://github.com/NVIDIA/gpu-feature-discovery/blob/main/README.md)
提供了类似功能。
<!--
You also need a plugin for NFD that adds appropriate labels to your nodes; these might be generic
labels or they could be vendor specific. Your GPU vendor may provide a third party
plugin for NFD; check their documentation for more details.
-->
你还需要一个 NFD 插件,将适当的标签添加到你的节点上;
这些标签可以是通用的,也可以是供应商特定的。你的 GPU 供应商可能会为 NFD 提供第三方插件;
更多细节请查阅他们的文档。
<!--
{{< highlight yaml "linenos=false,hl_lines=7-18" >}}
apiVersion: v1
kind: Pod
metadata:
  name: example-vector-add
spec:
  restartPolicy: OnFailure
  # You can use Kubernetes node affinity to schedule this Pod onto a node
  # that provides the kind of GPU that its container needs in order to work
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "gpu.gpu-vendor.example/installed-memory"
                operator: Gt # (greater than)
                values: ["40535"]
              - key: "feature.node.kubernetes.io/pci-10.present" # NFD Feature label
                operator: In
                values: ["true"] # (optional) only schedule on nodes with PCI device 10
  containers:
    - name: example-vector-add
      image: "registry.example/example-vector-add:v42"
      resources:
        limits:
          gpu-vendor.example/example-gpu: 1 # requesting 1 GPU
{{< /highlight >}}
-->
{{< highlight yaml "linenos=false,hl_lines=7-18" >}}
apiVersion: v1
kind: Pod
metadata:
  name: example-vector-add
spec:
  restartPolicy: OnFailure
  # 你可以使用 Kubernetes 节点亲和性将此 Pod 调度到提供其容器正常工作所需的那种 GPU 的节点上
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "gpu.gpu-vendor.example/installed-memory"
                operator: Gt #(大于)
                values: ["40535"]
              - key: "feature.node.kubernetes.io/pci-10.present" # NFD 特性标签
                operator: In
                values: ["true"] #(可选)仅调度到具有 PCI 设备 10 的节点上
  containers:
    - name: example-vector-add
      image: "registry.example/example-vector-add:v42"
      resources:
        limits:
          gpu-vendor.example/example-gpu: 1 # 请求 1 个 GPU
{{< /highlight >}}

<!--
#### GPU vendor implementations

- [Intel](https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/gpu_plugin/README.html)
- [NVIDIA](https://github.com/NVIDIA/gpu-feature-discovery/#readme)
-->
#### GPU 供应商实现

- [Intel](https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/gpu_plugin/README.html)
- [NVIDIA](https://github.com/NVIDIA/gpu-feature-discovery/#readme)

@@ -188,7 +188,7 @@ kubernetes-bootcamp 1/1 1 1 11m
<p>我们应该有 1 个 Pod。如果没有,请再次运行该命令。结果显示:</p>
<ul>
<li><em>NAME</em> 列出 Deployment 在集群中的名称。</li>
<li><em>READY</em> 显示当前/预期(CURRENT/DESIRED)副本数的比例。</li>
<li><em>UP-TO-DATE</em> 显示为了达到预期状态而被更新的副本的数量。</li>
<li><em>AVAILABLE</em> 显示应用程序有多少个副本对你的用户可用。</li>
<li><em>AGE</em> 显示应用程序的运行时间。</li>
@@ -43,19 +43,19 @@ the target localization.

<!--
[NAT](https://en.wikipedia.org/wiki/Network_address_translation)
: Network address translation

[Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
: Replacing the source IP on a packet; in this page, that usually means replacing with the IP address of a node.

[Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
: Replacing the destination IP on a packet; in this page, that usually means replacing with the IP address of a {{< glossary_tooltip term_id="pod" >}}

[VIP](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: A virtual IP address, such as the one assigned to every {{< glossary_tooltip text="Service" term_id="service" >}} in Kubernetes

[kube-proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: A network daemon that orchestrates Service VIP management on every node
-->
[NAT](https://zh.wikipedia.org/wiki/%E7%BD%91%E7%BB%9C%E5%9C%B0%E5%9D%80%E8%BD%AC%E6%8D%A2)
: 网络地址转换
@@ -89,6 +89,7 @@ IP of requests it receives through an HTTP header. You can create it as follows:
```shell
kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4
```

<!--
The output is:
-->

@@ -130,6 +131,7 @@ kube-proxy,则从集群内发送到 ClusterIP 的数据包永远不会进行
```console
kubectl get nodes
```

<!--
The output is similar to this:
-->

@@ -341,6 +343,7 @@ Visually:
* Pod 的回复被发送回给客户端

用图表示:

{{< figure src="/zh-cn/docs/images/tutor-service-nodePort-fig01.svg" alt="图 1:源 IP NodePort" class="diagram-large" caption="图 1:使用 SNAT 的源 IP(Type=NodePort)" link="https://mermaid.live/edit#pako:eNqNkV9rwyAUxb-K3LysYEqS_WFYKAzat9GHdW9zDxKvi9RoMIZtlH732ZjSbE970cu5v3s86hFqJxEYfHjRNeT5ZcUtIbXRaMNN2hZ5vrYRqt52cSXV-4iMSuwkZiYtyX739EqWaahMQ-V1qPxDVLNOvkYrO6fj2dupWMR2iiT6foOKdEZoS5Q2hmVSStoH7w7IMqXUVOefWoaG3XVftHbGeZYVRbH6ZXJ47CeL2-qhxvt_ucTe1SUlpuMN6CX12XeGpLdJiaMMFFr0rdAyvvfxjHEIDbbIgcVSohKDCRy4PUV06KQIuJU6OA9MCdMjBTEEt_-2NbDgB7xAGy3i97VJPP0ABRmcqg" >}}

<!--

@@ -368,6 +371,7 @@ Set the `service.spec.externalTrafficPolicy` field as follows:
```shell
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

<!--
The output is:
-->

@@ -385,6 +389,7 @@ Now, re-run the test:
```shell
for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
```

<!--
The output is similar to:
-->

@@ -447,6 +452,7 @@ You can test this by exposing the source-ip-app through a load balancer:
```shell
kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
```

<!--
The output is:
-->

@@ -550,6 +556,7 @@ serving the health check at `/healthz`. You can test this:
```shell
kubectl get pod -o wide -l app=source-ip-app
```

<!--
The output is similar to this:
-->
@@ -0,0 +1,28 @@
apiVersion: v1
kind: Pod
metadata:
  name: sa-ctb-name-test
spec:
  containers:
  - name: container-test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: root-certificates-vol
      mountPath: "/root-certificates"
      readOnly: true
  serviceAccountName: default
  volumes:
  - name: root-certificates-vol
    projected:
      sources:
      - clusterTrustBundle:
          name: example
          path: example-roots.pem
      - clusterTrustBundle:
          signerName: "example.com/mysigner"
          labelSelector:
            matchLabels:
              version: live
          path: mysigner-roots.pem
          optional: true
@@ -92,7 +92,7 @@
{{- end -}}
{{- if eq $seenPatchVersionInfoCount 0 -}}
<!-- fallback patch version to .0 -->
{{- printf "%s.0" $currentVersion -}}
{{- end -}}
{{- end -}}
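上面的模板用 `%s.0` 把字符串形式的版本号与回退补丁号 `.0` 拼接起来
(此前的 `%.2f.0` 会把版本号当作浮点数格式化)。下面用 shell 的
`printf` 类比示意这一格式化行为(其中的版本号 `"1.30"` 仅为假设的示例值):

```shell
# 假设当前版本为字符串 "1.30"(示例值),用 %s 拼接出回退的补丁版本号
currentVersion="1.30"
printf '%s.0\n' "$currentVersion"
# 输出:1.30.0
```

对字符串使用 `%s` 可以原样保留版本号文本,不会引入浮点数舍入或格式差异。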
|