Merge branch 'kubernetes:main' into juliafmorgado-kops

commit 507cf1df9b
@@ -216,12 +216,14 @@ aliases:
     - aisonaku
     - potapy4
     - dianaabv
     - shurup
   sig-docs-ru-reviews: # PR reviews for Russian content
     - Arhell
     - msheldyakov
     - aisonaku
     - potapy4
     - dianaabv
     - shurup
   sig-docs-pl-owners: # Admins for Polish content
     - mfilocha
     - nvtkaszpir
@@ -20,7 +20,7 @@ for the past 2 years. I describe Kluctl as "The missing glue to put together
 large Kubernetes deployments, composed of multiple smaller parts
 (Helm/Kustomize/...) in a manageable and unified way."
 
-To get a basic understanding of Kluctl, I suggest to visit the [kluctl.io](https://kluctlio)
+To get a basic understanding of Kluctl, I suggest to visit the [kluctl.io](https://kluctl.io)
 website and read through the documentation and tutorials, for example the
 [microservices demo tutorial](https://kluctl.io/docs/guides/tutorials/microservices-demo/).
 As an alternative, you can watch [Hands-on Introduction to kluctl](https://www.youtube.com/watch?v=9LoYLjDjOdg)
@@ -346,7 +346,8 @@ or 400 megabytes (`400M`).
 In the following example, the Pod has two containers. Each container has a request of
 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
 storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
-a limit of 8GiB of local ephemeral storage.
+a limit of 8GiB of local ephemeral storage. 500Mi of that limit could be
+consumed by the `emptyDir` volume.
 
 ```yaml
 apiVersion: v1
@@ -377,7 +378,8 @@ spec:
       mountPath: "/tmp"
   volumes:
   - name: ephemeral
-    emptyDir: {}
+    emptyDir:
+      sizeLimit: 500Mi
 ```
 
 ### How Pods with ephemeral-storage requests are scheduled
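
Pieced together, the manifest changed by the two hunks above would read roughly as follows. This is a sketch: the Pod name, container name, and image are hypothetical placeholders, since the diff only shows fragments.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                       # hypothetical name, not shown in the diff
spec:
  containers:
  - name: app                          # hypothetical name
    image: registry.example/app:v1     # hypothetical image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
    volumeMounts:
    - name: ephemeral
      mountPath: "/tmp"
  volumes:
  - name: ephemeral
    emptyDir:
      sizeLimit: 500Mi    # up to 500Mi of the ephemeral-storage limit
```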
@@ -1,6 +1,6 @@
 ---
 title: "Scheduling, Preemption and Eviction"
-weight: 90
+weight: 95
 content_type: concept
 description: >
   In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes
@@ -1,6 +1,6 @@
 ---
 title: "Security"
-weight: 81
+weight: 85
 description: >
   Concepts for keeping your cloud-native workload secure.
 ---
@@ -19,9 +19,6 @@ to represent them in Kubernetes. The dynamic provisioning feature eliminates
 the need for cluster administrators to pre-provision storage. Instead, it
 automatically provisions storage when it is requested by users.
 
-
-
-
 <!-- body -->
 
 ## Background
@@ -116,7 +113,7 @@ can enable this behavior by:
   is enabled on the API server.
 
 An administrator can mark a specific `StorageClass` as default by adding the
-`storageclass.kubernetes.io/is-default-class` [annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it.
+[`storageclass.kubernetes.io/is-default-class` annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it.
 When a default `StorageClass` exists in a cluster and a user creates a
 `PersistentVolumeClaim` with `storageClassName` unspecified, the
 `DefaultStorageClass` admission controller automatically adds the
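
As a quick reference, the annotation named above can be set with `kubectl patch`; a sketch, assuming a hypothetical StorageClass named `standard`:

```shell
# Mark the (hypothetical) StorageClass "standard" as the cluster default
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```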
@@ -128,9 +125,9 @@ be created.
 
 ## Topology Awareness
 
-In [Multi-Zone](/docs/setup/multiple-zones) clusters, Pods can be spread across
+In [Multi-Zone](/docs/setup/best-practices/multiple-zones/) clusters, Pods can be spread across
 Zones in a Region. Single-Zone storage backends should be provisioned in the Zones where
-Pods are scheduled. This can be accomplished by setting the [Volume Binding
-Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).
+Pods are scheduled. This can be accomplished by setting the
+[Volume Binding Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).
 
 
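
The Volume Binding Mode linked above is set on the StorageClass itself; a minimal sketch, with a hypothetical class name and provisioner for illustration:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware          # hypothetical name
provisioner: csi.example.com    # hypothetical provisioner
# Delay binding and provisioning until a Pod is scheduled, so the volume
# is created in the same zone as the Pod that consumes it.
volumeBindingMode: WaitForFirstConsumer
```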
@@ -297,18 +297,18 @@ the following types of volumes:
 * {{< glossary_tooltip text="csi" term_id="csi" >}}
 * flexVolume (deprecated)
 * gcePersistentDisk
-* glusterfs
+* glusterfs (deprecated)
 * rbd
 * portworxVolume
 
 You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.
 
-``` yaml
+```yaml
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
-  name: gluster-vol-default
-provisioner: kubernetes.io/glusterfs
+  name: example-vol-default
+provisioner: vendor-name.example/magicstorage
 parameters:
   resturl: "http://192.168.10.100:8080"
   restuser: ""
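
With `allowVolumeExpansion: true` on the class, an existing claim is grown simply by raising its storage request; a sketch, assuming a hypothetical PVC named `data` bound to such a class:

```shell
# Request a larger size on an existing claim; the resize happens asynchronously
kubectl patch pvc data -p '{"spec": {"resources": {"requests": {"storage": "8Gi"}}}}'
```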
@@ -616,7 +616,7 @@ The following volume types support mount options:
 * `cephfs`
 * `cinder` (**deprecated** in v1.18)
 * `gcePersistentDisk`
-* `glusterfs`
+* `glusterfs` (**deprecated** in v1.25)
 * `iscsi`
 * `nfs`
 * `rbd`
@@ -338,7 +338,7 @@ using `allowedTopologies`.
 [allowedTopologies](#allowed-topologies)
 {{< /note >}}
 
-### Glusterfs
+### Glusterfs (deprecated) {#glusterfs}
 
 ```yaml
 apiVersion: storage.k8s.io/v1
@@ -335,12 +335,20 @@ Some uses for an `emptyDir` are:
 * holding files that a content-manager container fetches while a webserver
   container serves the data
 
-Depending on your environment, `emptyDir` volumes are stored on whatever medium that backs the
-node such as disk or SSD, or network storage. However, if you set the `emptyDir.medium` field
-to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead.
-While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on
-node reboot and any files you write count against your container's
-memory limit.
+The `emptyDir.medium` field controls where `emptyDir` volumes are stored. By
+default `emptyDir` volumes are stored on whatever medium that backs the node
+such as disk, SSD, or network storage, depending on your environment. If you set
+the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed
+filesystem) for you instead. While tmpfs is very fast, be aware that unlike
+disks, tmpfs is cleared on node reboot and any files you write count against
+your container's memory limit.
+
+
+A size limit can be specified for the default medium, which limits the capacity
+of the `emptyDir` volume. The storage is allocated from [node ephemeral
+storage](docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage).
+If that is filled up from another source (for example, log files or image
+overlays), the `emptyDir` may run out of capacity before this limit.
 
 {{< note >}}
 If the `SizeMemoryBackedVolumes` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled,
@@ -364,7 +372,8 @@ spec:
       name: cache-volume
   volumes:
   - name: cache-volume
-    emptyDir: {}
+    emptyDir:
+      sizeLimit: 500Mi
 ```
 
 ### fc (fibre channel) {#fc}
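
For comparison with the disk-backed example in this hunk, the memory-backed variant described above differs by one field; a sketch, with hypothetical Pod and container names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-cache-demo      # hypothetical name
spec:
  containers:
  - name: app                  # hypothetical name
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory     # tmpfs: fast, but cleared on node reboot and
      sizeLimit: 500Mi   # counted against the container's memory limit
```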
@@ -534,7 +543,7 @@ spec:
     revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
 ```
 
-### glusterfs (deprecated)
+### glusterfs (deprecated) {#glusterfs}
 
 {{< feature-state for_k8s_version="v1.25" state="deprecated" >}}
@@ -542,7 +551,7 @@ A `glusterfs` volume allows a [Glusterfs](https://www.gluster.org) (an open
 source networked filesystem) volume to be mounted into your Pod. Unlike
 `emptyDir`, which is erased when a Pod is removed, the contents of a
 `glusterfs` volume are preserved and the volume is merely unmounted. This
-means that a glusterfs volume can be pre-populated with data, and that data can
+means that a `glusterfs` volume can be pre-populated with data, and that data can
 be shared between pods. GlusterFS can be mounted by multiple writers
 simultaneously.
 
@@ -1,6 +1,6 @@
 ---
 title: "Workloads"
-weight: 50
+weight: 55
 description: >
   Understand Pods, the smallest deployable compute object in Kubernetes, and the higher-level abstractions that help you to run them.
 no_list: true
@@ -1,6 +1,6 @@
 ---
 title: API Access Control
-weight: 15
+weight: 30
 no_list: true
 ---
 
@@ -6,7 +6,8 @@ reviewers:
 - erictune
 - janetkuo
 - thockin
-title: Using Admission Controllers
+title: Admission Controllers Reference
+linkTitle: Admission Controllers
 content_type: concept
 weight: 30
 ---
@@ -18,9 +19,19 @@ This page provides an overview of Admission Controllers.
 <!-- body -->
 ## What are they?
 
-An admission controller is a piece of code that intercepts requests to the
+An _admission controller_ is a piece of code that intercepts requests to the
 Kubernetes API server prior to persistence of the object, but after the request
-is authenticated and authorized. The controllers consist of the
+is authenticated and authorized.
+
+Admission controllers may be _validating_, _mutating_, or both. Mutating
+controllers may modify related objects to the requests they admit; validating controllers may not.
+
+Admission controllers limit requests to create, delete, modify objects. Admission
+controllers can also block custom verbs, such as a request connect to a Pod via
+an API server proxy. Admission controllers do _not_ (and cannot) block requests
+to read (**get**, **watch** or **list**) objects.
+
+The admission controllers in Kubernetes {{< skew currentVersion >}} consist of the
 [list](#what-does-each-admission-controller-do) below, are compiled into the
 `kube-apiserver` binary, and may only be configured by the cluster
 administrator. In that list, there are two special controllers:
@@ -29,10 +40,7 @@ mutating and validating (respectively)
 [admission control webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
 which are configured in the API.
 
-Admission controllers may be "validating", "mutating", or both. Mutating
-controllers may modify related objects to the requests they admit; validating controllers may not.
-
-Admission controllers limit requests to create, delete, modify objects or connect to proxy. They do not limit requests to read objects.
 ## Admission control phases
 
 The admission control process proceeds in two phases. In the first phase,
 mutating admission controllers are run. In the second phase, validating
@@ -52,7 +60,7 @@ other admission controllers.
 
 ## Why do I need them?
 
-Many advanced features in Kubernetes require an admission controller to be enabled in order
+Several important features of Kubernetes require an admission controller to be enabled in order
 to properly support the feature. As a result, a Kubernetes API server that is not properly
 configured with the right set of admission controllers is an incomplete server and will not
 support all the features you expect.
@@ -91,7 +99,7 @@ To see which admission plugins are enabled:
 kube-apiserver -h | grep enable-admission-plugins
 ```
 
-In the current version, the default ones are:
+In Kubernetes {{< skew currentVersion >}}, the default ones are:
 
 ```shell
 CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook
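
The defaults above can be adjusted per cluster: plugins are switched on and off with the `--enable-admission-plugins` and `--disable-admission-plugins` API server flags. A sketch; the flag names are real, the particular plugin selection is illustrative:

```shell
# Enable one extra plugin on top of the defaults, and disable another
# (remaining kube-apiserver flags elided)
kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,ImagePolicyWebhook \
  --disable-admission-plugins=DefaultStorageClass
```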
@@ -103,18 +111,18 @@ CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultI
 
 {{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
 
-This admission controller allows all pods into the cluster. It is deprecated because
+This admission controller allows all pods into the cluster. It is **deprecated** because
 its behavior is the same as if there were no admission controller at all.
 
 ### AlwaysDeny {#alwaysdeny}
 
 {{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
 
-Rejects all requests. AlwaysDeny is DEPRECATED as it has no real meaning.
+Rejects all requests. AlwaysDeny is **deprecated** as it has no real meaning.
 
 ### AlwaysPullImages {#alwayspullimages}
 
-This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
+This admission controller modifies every new Pod to force the image pull policy to `Always`. This is useful in a
 multitenant cluster so that users can be assured that their private images can only be used by those
 who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
 node, any pod from any user can use it by knowing the image's name (assuming the Pod is
@@ -124,8 +132,8 @@ required.
 
 ### CertificateApproval {#certificateapproval}
 
-This admission controller observes requests to 'approve' CertificateSigningRequest resources and performs additional
-authorization checks to ensure the approving user has permission to `approve` certificate requests with the
+This admission controller observes requests to approve CertificateSigningRequest resources and performs additional
+authorization checks to ensure the approving user has permission to **approve** certificate requests with the
 `spec.signerName` requested on the CertificateSigningRequest resource.
 
 See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
@@ -134,7 +142,7 @@ information on the permissions required to perform different actions on Certific
 ### CertificateSigning {#certificatesigning}
 
 This admission controller observes updates to the `status.certificate` field of CertificateSigningRequest resources
-and performs an additional authorization checks to ensure the signing user has permission to `sign` certificate
+and performs an additional authorization checks to ensure the signing user has permission to **sign** certificate
 requests with the `spec.signerName` requested on the CertificateSigningRequest resource.
 
 See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
@@ -159,7 +167,7 @@ must revisit their `IngressClass` objects and mark only one as default (with the
 "ingressclass.kubernetes.io/is-default-class"). This admission controller ignores any `Ingress`
 updates; it acts only on creation.
 
-See the [ingress](/docs/concepts/services-networking/ingress/) documentation for more about ingress
+See the [Ingress](/docs/concepts/services-networking/ingress/) documentation for more about ingress
 classes and how to mark one as default.
 
 ### DefaultStorageClass {#defaultstorageclass}
@@ -181,7 +189,7 @@ storage classes and how to mark a storage class as default.
 
 This admission controller sets the default forgiveness toleration for pods to tolerate
 the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters
-`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
+`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
 have toleration for taints `node.kubernetes.io/not-ready:NoExecute` or
 `node.kubernetes.io/unreachable:NoExecute`.
 The default value for `default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` is 5 minutes.
@@ -206,7 +214,7 @@ This admission controller is disabled by default.
 {{< feature-state for_k8s_version="v1.13" state="alpha" >}}
 
 This admission controller mitigates the problem where the API server gets flooded by
-event requests. The cluster admin can specify event rate limits by:
+requests to store new Events. The cluster admin can specify event rate limits by:
 
 * Enabling the `EventRateLimit` admission controller;
 * Referencing an `EventRateLimit` configuration file from the file provided to the API
@@ -223,7 +231,7 @@ plugins:
 
 There are four types of limits that can be specified in the configuration:
 
-* `Server`: All event requests received by the API server share a single bucket.
+* `Server`: All Event requests (creation or modifications) received by the API server share a single bucket.
 * `Namespace`: Each namespace has a dedicated bucket.
 * `User`: Each user is allocated a bucket.
 * `SourceAndObject`: A bucket is assigned by each combination of source and
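
The configuration file referenced above follows the `eventratelimit.admission.k8s.io/v1alpha1` schema; a minimal sketch, with illustrative qps/burst values:

```yaml
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
# one shared bucket for every Event request the API server receives
- type: Server
  qps: 50
  burst: 100
# a dedicated bucket per namespace
- type: Namespace
  qps: 10
  burst: 50
```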
@@ -266,7 +274,7 @@ The ImagePolicyWebhook admission controller allows a backend webhook to make adm
 
 This admission controller is disabled by default.
 
-#### Configuration File Format
+#### Configuration file format {#imagereview-config-file-format}
 
 ImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.
 This file may be json or yaml and has the following format:
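
The format block itself falls outside this hunk; for orientation, the documented shape is roughly as follows (a sketch; the path and TTL values are illustrative):

```yaml
imagePolicy:
  kubeConfigFile: /path/to/kubeconfig/for/backend  # illustrative path
  allowTTL: 50        # seconds to cache an approval
  denyTTL: 50         # seconds to cache a denial
  retryBackoff: 500   # milliseconds between retries
  defaultAllow: true  # verdict when the webhook backend is unreachable
```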
@@ -377,8 +385,8 @@ An example request body:
 }
 ```
 
-The remote service is expected to fill the `ImageReviewStatus` field of the request and
-respond to either allow or disallow access. The response body's `spec` field is ignored and
+The remote service is expected to fill the `status` field of the request and
+respond to either allow or disallow access. The response body's `spec` field is ignored, and
 may be omitted. A permissive response would return:
 
 ```json
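
The response JSON itself is cut off by the hunk; for orientation, a permissive `imagepolicy.k8s.io/v1alpha1` response is of roughly this shape:

```json
{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "status": {
    "allowed": true
  }
}
```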
@@ -529,9 +537,9 @@ permissions required to operate correctly.
 ### OwnerReferencesPermissionEnforcement {#ownerreferencespermissionenforcement}
 
 This admission controller protects the access to the `metadata.ownerReferences` of an object
-so that only users with "delete" permission to the object can change it.
+so that only users with **delete** permission to the object can change it.
 This admission controller also protects the access to `metadata.ownerReferences[x].blockOwnerDeletion`
-of an object, so that only users with "update" permission to the `finalizers`
+of an object, so that only users with **update** permission to the `finalizers`
 subresource of the referenced *owner* can change it.
 
 ### PersistentVolumeClaimResize {#persistentvolumeclaimresize}
@@ -568,12 +576,12 @@ For more information about persistent volume claims, see [PersistentVolumeClaims
 {{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
 
 This admission controller automatically attaches region or zone labels to PersistentVolumes
-as defined by the cloud provider (for example, GCE or AWS).
+as defined by the cloud provider (for example, Azure or GCP).
 It helps ensure the Pods and the PersistentVolumes mounted are in the same
 region and/or zone.
 If the admission controller doesn't support automatic labelling your PersistentVolumes, you
 may need to add the labels manually to prevent pods from mounting volumes from
-a different zone. PersistentVolumeLabel is DEPRECATED and labeling persistent volumes has been taken over by
+a different zone. PersistentVolumeLabel is **deprecated** as labeling for persistent volumes has been taken over by
 the {{< glossary_tooltip text="cloud-controller-manager" term_id="cloud-controller-manager" >}}.
 
 This admission controller is disabled by default.
@@ -745,7 +753,8 @@ pod privileges.
 
 This admission controller implements automation for
 [serviceAccounts](/docs/tasks/configure-pod-container/configure-service-account/).
-We strongly recommend using this admission controller if you intend to make use of Kubernetes
+The Kubernetes project strongly recommends enabling this admission controller.
+You should enable this admission controller if you intend to make any use of Kubernetes
 `ServiceAccount` objects.
 
 ### StorageObjectInUseProtection
@@ -1,4 +1,4 @@
 ---
 title: Component tools
-weight: 60
+weight: 120
 ---
@@ -1,7 +1,7 @@
 ---
 title: kubelet
 content_type: tool-reference
-weight: 28
+weight: 20
 ---
 
 ## {{% heading "synopsis" %}}
@@ -16,7 +16,7 @@ through various mechanisms (primarily through the apiserver) and ensures that
 the containers described in those PodSpecs are running and healthy. The
 kubelet doesn't manage containers which were not created by Kubernetes.
 
-Other than from a PodSpec from the apiserver, there are three ways that a
+Other than from a PodSpec from the apiserver, there are two ways that a
 container manifest can be provided to the Kubelet.
 
 - File: Path passed as a flag on the command line. Files under this path will be
|
|||
and is configurable via a flag.
|
||||
- HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This
|
||||
endpoint is checked every 20 seconds (also configurable with a flag).
|
||||
- HTTP server: The kubelet can also listen for HTTP and respond to a simple API
|
||||
(underspec'd currently) to submit a new manifest.
|
||||
|
||||
```
|
||||
kubelet [flags]
|
||||
|
|
|
@@ -1,5 +1,5 @@
 ---
 title: Configuration APIs
-weight: 65
+weight: 130
 ---
 
@@ -1,4 +1,4 @@
 ---
 title: Instrumentation
-weight: 40
+weight: 60
 ---
@@ -1,4 +1,4 @@
 ---
 title: Kubernetes Issues and Security
-weight: 40
+weight: 70
 ---
@@ -1,7 +1,7 @@
 ---
 title: Command line tool (kubectl)
 content_type: reference
-weight: 60
+weight: 110
 no_list: true
 card:
   name: reference
@@ -3,6 +3,7 @@ title: kubectl Usage Conventions
 reviewers:
 - janetkuo
 content_type: concept
+weight: 60
 ---
 
 <!-- overview -->
@@ -4,6 +4,7 @@ content_type: concept
 reviewers:
 - brendandburns
 - thockin
+weight: 50
 ---
 
 <!-- overview -->
@@ -1,6 +1,7 @@
 ---
 title: JSONPath Support
 content_type: concept
+weight: 40
 ---
 
 <!-- overview -->
@@ -1,6 +1,6 @@
 ---
 title: "Kubernetes API"
-weight: 30
+weight: 50
 ---
 
 <!-- overview -->
@@ -1,7 +1,7 @@
 ---
 title: Well-Known Labels, Annotations and Taints
 content_type: concept
-weight: 20
+weight: 40
 no_list: true
 ---
 
@@ -1,6 +1,6 @@
 ---
 title: "Audit Annotations"
-weight: 1
+weight: 10
 ---
 
 <!-- overview -->
@@ -1,4 +1,4 @@
 ---
 title: Node Reference Information
-weight: 40
+weight: 80
 ---
@@ -1,6 +1,7 @@
 ---
 title: Articles on dockershim Removal and on Using CRI-compatible Runtimes
 content_type: reference
+weight: 20
 ---
 <!-- overview -->
 This is a list of articles and other pages that are either
@@ -1,7 +1,7 @@
 ---
 title: Ports and Protocols
 content_type: reference
-weight: 50
+weight: 90
 ---
 
 When running Kubernetes in an environment with strict network boundaries, such
@@ -1,5 +1,5 @@
 ---
 title: Scheduling
-weight: 70
+weight: 140
 toc-hide: true
 ---
@@ -3,6 +3,7 @@ title: Scheduling Policies
 content_type: concept
 sitemap:
   priority: 0.2 # Scheduling priorities are deprecated
+weight: 30
 ---
 
 <!-- overview -->
@@ -1,4 +1,4 @@
 ---
 title: Setup tools
-weight: 50
+weight: 100
 ---
@@ -3,7 +3,7 @@ title: Other Tools
 reviewers:
 - janetkuo
 content_type: concept
-weight: 80
+weight: 150
 no_list: true
 ---
 
@@ -1,6 +1,7 @@
 ---
 title: Mapping from dockercli to crictl
 content_type: reference
+weight: 10
 ---
 
 {{% thirdparty-content %}}
@@ -5,7 +5,7 @@ reviewers:
 - lavalamp
 - jbeda
 content_type: concept
-weight: 10
+weight: 20
 no_list: true
 card:
   name: reference
@@ -54,7 +54,7 @@ For example, you can use the OpenSSL command line tool to issue a X.509 certific
 using the cluster CA certificate `/etc/kubernetes/pki/ca.crt` from a control-plane host.
 
 ```bash
-openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key -out konnectivity.csr
+openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key
 openssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256
 SERVER=$(kubectl config view -o jsonpath='{.clusters..server}')
 kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true
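
The corrected `openssl req` line drops the duplicated `-out konnectivity.csr`. After signing, the issued certificate can be checked against the cluster CA; a sketch:

```bash
# Confirm the certificate chains to the cluster CA, and inspect its subject
openssl verify -CAfile /etc/kubernetes/pki/ca.crt konnectivity.crt
openssl x509 -noout -subject -in konnectivity.crt
```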
@@ -1202,12 +1202,54 @@ Service:
 kubectl delete svc nginx
 ```
 
-Delete the persistent storage media for the PersistentVolumes used in this tutorial.
 
+```shell
+kubectl get pvc
+```
+```
+NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+www-web-0   Bound    pvc-2bf00408-d366-4a12-bad0-1869c65d0bee   1Gi        RWO            standard       25m
+www-web-1   Bound    pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4   1Gi        RWO            standard       24m
+www-web-2   Bound    pvc-cba6cfa6-3a47-486b-a138-db5930207eaf   1Gi        RWO            standard       15m
+www-web-3   Bound    pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752   1Gi        RWO            standard       15m
+www-web-4   Bound    pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e   1Gi        RWO            standard       14m
+```
+
+```shell
+kubectl get pv
+```
+```
+NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
+pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752   1Gi        RWO            Delete           Bound    default/www-web-3   standard                15m
+pvc-2bf00408-d366-4a12-bad0-1869c65d0bee   1Gi        RWO            Delete           Bound    default/www-web-0   standard                25m
+pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e   1Gi        RWO            Delete           Bound    default/www-web-4   standard                14m
+pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4   1Gi        RWO            Delete           Bound    default/www-web-1   standard                24m
+pvc-cba6cfa6-3a47-486b-a138-db5930207eaf   1Gi        RWO            Delete           Bound    default/www-web-2   standard                15m
+```
+
+```shell
+kubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4
+```
+
+```
+persistentvolumeclaim "www-web-0" deleted
+persistentvolumeclaim "www-web-1" deleted
+persistentvolumeclaim "www-web-2" deleted
+persistentvolumeclaim "www-web-3" deleted
+persistentvolumeclaim "www-web-4" deleted
+```
+
+```shell
+kubectl get pvc
+```
+
+```
+No resources found in default namespace.
+```
 {{< note >}}
+You also need to delete the persistent storage media for the PersistentVolumes
+used in this tutorial.
+
+
+Follow the necessary steps, based on your environment, storage configuration,
+and provisioning method, to ensure that all storage is reclaimed.
-{{< /note >}}
+{{< /note >}}
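
Since the volumes in the listing above use the `Delete` reclaim policy, removing the claims should also remove the underlying PersistentVolumes; a quick check (a sketch):

```shell
# With a Delete reclaim policy, no PersistentVolumes should remain
kubectl get pv
```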
@@ -58,7 +58,7 @@ no_list: true
 - *एपिसर्वर के लिए लोड बैलेंसर कॉन्फ़िगर करें*: विभिन्न नोड्स पर चल रहे एपिसर्वर सर्विस इंस्टेंस के लिए बाहरी एपीआई अनुरोधों को वितरित करने के लिए लोड बैलेंसर को कॉन्फ़िगर करें।
   विवरण के लिए [एक बाहरी लोड बैलेंसर बनाना](/docs/tasks/access-application-cluster/create-external-load-balancer/) देखें।
 - *अलग और बैकअप etcd सेवा*: अतिरिक्त सुरक्षा और उपलब्धता के लिए etcd सेवाएं या तो अन्य कंट्रोल प्लेन सेवाओं के समान मशीनों पर चल सकती हैं या अलग मशीनों पर चल सकती हैं। क्योंकि etcd क्लस्टर कॉन्फ़िगरेशन डेटा संग्रहीत करता है, etcd डेटाबेस का बैकअप नियमित रूप से किया जाना चाहिए ताकि यह सुनिश्चित हो सके कि यदि आवश्यक हो तो आप उस डेटाबेस की मरम्मत कर सकते हैं।
-  etcd को कॉन्फ़िगर करने और उपयोग करने के विवरण के लिए [etcd अक्सर पूछे जाने वाले प्रश्न](https://etcd.io/docs/v3.4/faq/) देखें।
+  etcd को कॉन्फ़िगर करने और उपयोग करने के विवरण के लिए [etcd अक्सर पूछे जाने वाले प्रश्न](https://etcd.io/docs/v3.5/faq/) देखें।
   विवरण के लिए [कुबेरनेट्स के लिए ऑपरेटिंग etcd क्लस्टर](/docs/tasks/administer-cluster/configure-upgrade-etcd/) और [kubeadm के साथ एक उच्च उपलब्धता etcd क्लस्टर स्थापित करें](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) देखें।
 - *मल्टीपल कण्ट्रोल प्लेन सिस्टम बनाएं*: उच्च उपलब्धता के लिए, कण्ट्रोल प्लेन एक मशीन तक सीमित नहीं होना चाहिए। यदि कण्ट्रोल प्लेन सेवाएं एक init सेवा (जैसे systemd) द्वारा चलाई जाती हैं, तो प्रत्येक सेवा को कम से कम तीन मशीनों पर चलना चाहिए। हालाँकि, कुबेरनेट्स में पॉड्स के रूप में कण्ट्रोल प्लेन सेवाएं चलाना सुनिश्चित करता है कि आपके द्वारा अनुरोधित सेवाओं की प्रतिकृति संख्या हमेशा उपलब्ध रहेगी।
   अनुसूचक फॉल्ट सहने वाला होना चाहिए, लेकिन अत्यधिक उपलब्ध नहीं होना चाहिए। कुबेरनेट्स सेवाओं के नेता चुनाव करने के लिए कुछ डिप्लॉयमेंट उपकरण [राफ्ट](https://raft.github.io/) सर्वसम्मति एल्गोरिथ्म की स्थापना करते हैं। यदि प्राथमिक चला जाता है, तो दूसरी सेवा स्वयं को चुनती है और कार्यभार संभालती है।
@@ -199,7 +199,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
 enabled=1
 gpgcheck=1
 repo_gpgcheck=1
-gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
+gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
 EOF
 
 # SELinuxをpermissiveモードに設定する(効果的に無効化する)
@@ -72,7 +72,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
 enabled=1
 gpgcheck=1
 repo_gpgcheck=1
-gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
+gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
 EOF
 yum install -y kubectl
 {{< /tab >}}
@@ -111,7 +111,7 @@ kubectl get events | grep Created
 proc attrを調べることで、コンテナのルートプロセスが正しいプロファイルで実行されているかどうかを直接確認することもできます。
 
 ```shell
-kubectl exec <pod_name> cat /proc/1/attr/current
+kubectl exec <pod_name> -- cat /proc/1/attr/current
 ```
 ```
 k8s-apparmor-example-deny-write (enforce)
@@ -0,0 +1,184 @@
+---
+title: Controlando Acesso à API do Kubernetes
+content_type: concept
+---
+
+<!-- overview -->
+Esta página fornece uma visão geral do controle de acesso à API do Kubernetes.
+
+<!-- body -->
+Usuários podem acessar a [API do Kubernetes](/docs/concepts/overview/kubernetes-api/) utilizando `kubectl`,
+bibliotecas, ou executando requisições REST. Ambos, usuários humanos e
+[Contas de serviço do Kubernetes](/docs/tasks/configure-pod-container/configure-service-account/) podem ser autorizados
+a acessar à API.
+Quando uma requisição chega à API, ela passa por diversos estágios,
+ilustrados no seguinte diagrama:
+
+![Diagrama dos passos do tratamento de uma requisição à API do Kubernetes](/images/docs/admin/access-control-overview.svg)
+
+## Segurança na camada de transporte
+
+Em um cluster Kubernetes típico, a API fica disponível na porta 443, protegida por segurança na camada de transporte (TLS).
+O servidor de API apresenta um certificado. Este certificado pode ser assinado utilizando
+uma autoridade privada de certificados (CA), ou baseado em uma infraestrutura de chave pública ligada
+a uma autoridade de certificados reconhecida publicamente.
+
+Se o seu cluster utiliza uma autoridade privada de certificados, voce precisa de uma cópia do certificado
+da autoridade de certificados (CA) dentro do arquivo de configuração `~/.kube/config`, no lado do cliente, para que
+voce possa confiar na conexão e tenha a garantia de que não há interceptação de tráfego.
+
+O seu cliente pode apresentar o certificado de cliente TLS neste estágio.
+
+## Autenticação
+
+Uma vez em que a segurança na camada de transporte (TLS) é estabelecida, a requisição HTTP move para o passo de autenticação.
+Isto é demonstrado no passo **1** no diagrama acima.
+O script de criação do cluster ou configurações de administração configuram o servidor de API para executar
+um ou mais módulos autenticadores.
+
+Autenticadores são descritos em maiores detalhes em
+[Autenticação](/pt-br/docs/reference/access-authn-authz/authentication/).
+
+A entrada para o passo de autenticação é a requisição HTTP completa; no entanto, tipicamente
+são verificados os cabeçalhos e/ou o certificado de cliente.
+
+Módulos de autenticação incluem certificados de cliente, senhas, tokens simples,
+tokens de auto-inicialização e JSON Web Tokens (utilizados para contas de serviço).
+
+Vários módulos de autenticação podem ser especificados, em que cada um será verificado em sequência,
+até que um deles tenha sucesso.
+
+Se a requisição não pode ser autenticada, será rejeitada com o código de status HTTP 401 (não autorizado).
+Caso contrário, o usuário é autenticado com um "nome de usuário" específico e o nome de usuário
+está disponível para as etapas subsequentes para usar em suas decisões. Alguns autenticadores
+também fornecem as associações de grupo do usuário, enquanto outros autenticadores
+não o fazem.
+
+Enquanto o Kubernetes usa nomes de usuário para decisões de controle de acesso e no registro de requisições,
+ele não possui um objeto `user` nem armazena nomes de usuários ou outras informações sobre
+usuários em sua API.
+
+## Autorização
+
+Após a requisição ser autenticada como originada de um usuário específico, a requisição deve ser autorizada. Isso é mostrado no passo **2** no diagrama.
+
+Uma requisição deve incluir o nome do usuário requerente, a ação requisitada e o objeto afetado pela ação. A requisição é autorizada se uma
+política existente declarar que o usuário tem as devidas permissões para concluir a ação requisitada.
+
+Por exemplo, se Bob possui a política abaixo, então ele somente poderá ler pods no namespace `projectCaribou`:
+
+```json
+{
+  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
+  "kind": "Policy",
+  "spec": {
+    "user": "bob",
+    "namespace": "projectCaribou",
+    "resource": "pods",
+    "readonly": true
+  }
+}
+```
+Se Bob fizer a seguinte requisição, a requisição será autorizada porque ele tem permissão para ler objetos no namespace `projectCaribou`:
+```json
+{
+  "apiVersion": "authorization.k8s.io/v1beta1",
+  "kind": "SubjectAccessReview",
+  "spec": {
+    "resourceAttributes": {
+      "namespace": "projectCaribou",
+      "verb": "get",
+      "group": "unicorn.example.org",
+      "resource": "pods"
+    }
+  }
+}
+```
+Se Bob fizer uma requisição para escrever (`create` ou `update`) em objetos no namespace `projectCaribou`, sua autorização será negada. Se Bob fizer uma requisição para ler (`get`) objetos em um namespace diferente, como `projectFish`, sua autorização será negada.
+
+A autorização do Kubernetes requer que você use atributos comuns a REST para interagir com os sistemas de controle de acesso existentes em toda uma organização ou em todo o provedor de nuvem utilizado. É importante usar a formatação REST porque esses sistemas de controle podem interagir com outras APIs além da API do Kubernetes.
+
+O Kubernetes oferece suporte a vários módulos de autorização, como o modo de controle de acesso baseado em atributos (ABAC), o modo de controle de acesso baseado em função (RBAC) e o modo Webhook. Quando um administrador cria um cluster, ele configura os módulos de autorização que devem ser utilizados no servidor de API. Se mais de um módulo de autorização for configurado, o Kubernetes verificará cada módulo e, se algum módulo autorizar a requisição, a requisição poderá prosseguir. Se todos os módulos negarem a requisição, a requisição será negada (com código de status HTTP 403 - Acesso Proibido).
+
+Para saber mais sobre a autorização do Kubernetes, incluindo detalhes sobre como criar políticas usando os módulos de autorização compatíveis, consulte [Visão Geral de Autorização](/pt-br/docs/reference/access-authn-authz/authorization/).
+
+## Controle de admissão
+
+Os módulos de controle de admissão são módulos de software que podem modificar ou rejeitar requisições.
+Além dos atributos disponíveis para os módulos de Autorização, os módulos do controlador de admissão
+podem acessar o conteúdo do objeto que está sendo criado ou modificado.
+
+Os controladores de admissão atuam em requisições que criam, modificam, excluem ou age como um proxy para outro objeto.
+Os controladores de admissão não agem em requisições que apenas leem objetos.
+Quando vários controladores de admissão são configurados, eles são chamados em ordem.
+
+Isso é mostrado como etapa **3** no diagrama.
+
+Ao contrário dos módulos de autenticação e autorização, se algum módulo controlador de admissão
+rejeita, a solicitação é imediatamente rejeitada.
+
+Além de rejeitar objetos, os controladores de admissão também podem definir valores padrão complexos para
+campos.
+
+Os módulos de Controle de Admissão disponíveis são descritos em [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).
+
+Após uma requisição passar por todos os controladores de admissão, ela é validada usando as rotinas de validação
+para o objeto de API correspondente e, em seguida, gravados no armazenamento de objetos (mostrado como etapa **4** no diagrama).
+
+## Auditoria
+
+A auditoria do Kubernetes fornece um conjunto de registros cronológicos relevantes para a segurança que documentam a sequência de ações em um cluster.
+O cluster audita as atividades geradas pelos usuários, pelos aplicativos que usam a API do Kubernetes e pela própria camada de gerenciamento.
+
+Para mais informações, consulte [Auditing](/docs/tasks/debug/debug-cluster/audit/).
+
+## Portas e IPs do servidor de API
+
+A discussão anterior se aplica a requisições enviadas para a porta segura do servidor de API
+(o caso típico). O servidor de API pode realmente servir em 2 portas.
+
+Por padrão, o servidor da API Kubernetes atende HTTP em 2 portas:
+
+1. Porta `localhost`:
+
+   - destina-se a testes e auto-inicialização e a outros componentes do nó mestre
+     (scheduler, controller-manager) para falar com a API
+   - sem segurança na camada de transporte (TLS)
+   - o padrão é a porta 8080
+   - IP padrão é localhost, mude com a flag `--insecure-bind-address`.
+   - a requisição **ignora** os módulos de autenticação e autorização .
+   - requisição tratada pelo(s) módulo(s) de controle de admissão.
+   - protegido pela necessidade de ter acesso ao host
+
+2. “Porta segura”:
+
+   - utilize sempre que possível
+   - utiliza segurança na camada de transporte (TLS). Defina o certificado com `--tls-cert-file` e a chave com a flag `--tls-private-key-file`.
+   - o padrão é a porta 6443, mude com a flag `--secure-port`.
+   - IP padrão é a primeira interface de rede não localhost, mude com a flag `--bind-address`.
+   - requisição tratada pelos módulos de autenticação e autorização.
+   - requisição tratada pelo(s) módulo(s) de controle de admissão.
+   - módulos de autenticação e autorização executados.
+
+## {{% heading "whatsnext" %}}
+
+Consulte mais documentação sobre autenticação, autorização e controle de acesso à API:
+
+- [Autenticação](/pt-br/docs/reference/access-authn-authz/authentication/)
+- [Autenticando com Tokens de Inicialização](/pt-br/docs/reference/access-authn-authz/bootstrap-tokens/)
+- [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
+- [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
+- [Autorização](/pt-br/docs/reference/access-authn-authz/authorization/)
+- [Using RBAC Authorization](/docs/reference/access-authn-authz/rbac/)
+- [Using ABAC Authorization](/docs/reference/access-authn-authz/abac/)
+- [Using Node Authorization](/docs/reference/access-authn-authz/node/)
+- [Webhook Mode](/docs/reference/access-authn-authz/webhook/)
+- [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/)
+  - incluindo [Approval or rejection of Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection)
+    e [Signing certificates](/docs/reference/access-authn-authz/certificate-signing-requests/#signing)
+- Contas de serviço
+  - [Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/)
+  - [Managing Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
+
+Você pode aprender mais sobre:
+- como os pods podem usar [Secrets](/docs/concepts/configuration/secret/) para obter credenciais de API.
@@ -0,0 +1,30 @@
+---
+title: Provedor de Nuvem
+id: cloud-provider
+date: 2018-04-12
+short_description: >
+  Uma organização que oferece uma plataforma de computação em nuvem.
+
+aka:
+- Provedor de Serviços em Nuvem
+tags:
+- community
+---
+Uma empresa ou outra organização que oferece uma plataforma de computação em nuvem.
+
+<!--more-->
+
+Os provedores de nuvem, às vezes chamados de Provedores de Serviços em Nuvem (CSPs),
+oferecem plataformas ou serviços de computação em nuvem.
+
+Muitos provedores de nuvem oferecem infraestrutura gerenciada
+(também chamada de Infraestrutura como Serviço ou IaaS).
+Com a infraestrutura gerenciada, o provedor de nuvem é responsável pelos servidores,
+armazenamento e rede enquanto você gerencia as camadas em cima disso,
+tal como a execução de um cluster Kubernetes.
+
+Você também pode encontrar o Kubernetes como um serviço gerenciado;
+às vezes chamado de Plataforma como Serviço ou PaaS.
+Com o Kubernetes gerenciado, o seu provedor de nuvem é responsável pela camada de gerenciamento do Kubernetes,
+bem como pelos {{< glossary_tooltip term_id="node" text="nós" >}} e pela infraestrutura em que eles dependem:
+rede, armazenamento e provavelmente outros elementos, tal como balanceadores de carga.
@@ -166,7 +166,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
 enabled=1
 gpgcheck=1
 repo_gpgcheck=1
-gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
+gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
 exclude=kubelet kubeadm kubectl
 EOF
 
@@ -0,0 +1,9 @@
+---
+title: Сертификаты
+content_type: concept
+weight: 20
+---
+
+<!-- overview -->
+
+Чтобы узнать, как генерировать сертификаты для кластера, см. раздел [Сертификаты](/docs/tasks/administer-cluster/certificates/).
@@ -0,0 +1,280 @@
+---
+title: Генерация сертификатов вручную
+content_type: task
+weight: 20
+---
+
+<!-- overview -->
+
+При аутентификации с помощью клиентского сертификата сертификаты можно генерировать вручную с помощью `easyrsa`, `openssl` или `cfssl`.
+
+<!-- body -->
+
+### easyrsa
+
+**easyrsa** поддерживает ручную генерацию сертификатов для кластера.
+
+1. Скачайте, распакуйте и инициализируйте пропатченную версию `easyrsa3`.
+
+   ```shell
+   curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
+   tar xzf easy-rsa.tar.gz
+   cd easy-rsa-master/easyrsa3
+   ./easyrsa init-pki
+   ```
+1. Создайте новый центр сертификации (certificate authority, CA). `--batch` задает автоматический режим;
+   `--req-cn` указывает общее имя (Common Name, CN) для нового корневого сертификата CA.
+
+   ```shell
+   ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
+   ```
+
+1. Сгенерируйте сертификат и ключ сервера.
+
+   Аргумент `--subject-alt-name` задает допустимые IP-адреса и DNS-имена, с которых будет осуществляться доступ к серверу API. `MASTER_CLUSTER_IP` обычно является первым IP из CIDR сервиса, который указан в качестве аргумента `--service-cluster-ip-range` как для сервера API, так и для менеджера контроллеров. Аргумент `--days` задает количество дней, через которое истекает срок действия сертификата. В приведенном ниже примере предполагается, что `cluster.local` используется в качестве доменного имени по умолчанию.
+
+   ```shell
+   ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
+   "IP:${MASTER_CLUSTER_IP},"\
+   "DNS:kubernetes,"\
+   "DNS:kubernetes.default,"\
+   "DNS:kubernetes.default.svc,"\
+   "DNS:kubernetes.default.svc.cluster,"\
+   "DNS:kubernetes.default.svc.cluster.local" \
+   --days=10000 \
+   build-server-full server nopass
+   ```
+
+1. Скопируйте `pki/ca.crt`, `pki/issued/server.crt` и `pki/private/server.key` в свою директорию.
+
+1. Заполните и добавьте следующие параметры в параметры запуска сервера API:
+
+   ```shell
+   --client-ca-file=/yourdirectory/ca.crt
+   --tls-cert-file=/yourdirectory/server.crt
+   --tls-private-key-file=/yourdirectory/server.key
+   ```
+
+### openssl
+
+**openssl** поддерживает ручную генерацию сертификатов для кластера.
+
+1. Сгенерируйте 2048-разрядный ключ ca.key:
+
+   ```shell
+   openssl genrsa -out ca.key 2048
+   ```
+
+1. На основе ca.key сгенерируйте ca.crt (используйте `-days` для установки времени действия сертификата):
+
+   ```shell
+   openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
+   ```
+
+1. Сгенерируйте 2048-битный ключ server.key:
+
+   ```shell
+   openssl genrsa -out server.key 2048
+   ```
+
+1. Создайте файл конфигурации для генерации запроса на подписание сертификата (Certificate Signing Request, CSR).
+
+   Замените значения, помеченные угловыми скобками (например, `<MASTER_IP>`), нужными значениями перед сохранением в файл (например, `csr.conf`). Обратите внимание, что значение для `MASTER_CLUSTER_IP` – это cluster IP сервиса для сервера API, как описано в предыдущем подразделе. В приведенном ниже примере предполагается, что `cluster.local` используется в качестве доменного имени по умолчанию.
+
+   ```ini
+   [ req ]
+   default_bits = 2048
+   prompt = no
+   default_md = sha256
+   req_extensions = req_ext
+   distinguished_name = dn
+
+   [ dn ]
+   C = <country>
+   ST = <state>
+   L = <city>
+   O = <organization>
+   OU = <organization unit>
+   CN = <MASTER_IP>
+
+   [ req_ext ]
+   subjectAltName = @alt_names
+
+   [ alt_names ]
+   DNS.1 = kubernetes
+   DNS.2 = kubernetes.default
+   DNS.3 = kubernetes.default.svc
+   DNS.4 = kubernetes.default.svc.cluster
+   DNS.5 = kubernetes.default.svc.cluster.local
+   IP.1 = <MASTER_IP>
+   IP.2 = <MASTER_CLUSTER_IP>
+
+   [ v3_ext ]
+   authorityKeyIdentifier=keyid,issuer:always
+   basicConstraints=CA:FALSE
+   keyUsage=keyEncipherment,dataEncipherment
+   extendedKeyUsage=serverAuth,clientAuth
+   subjectAltName=@alt_names
+   ```
+
+1. Сгенерируйте запрос на подписание сертификата, используя файл конфигурации:
+
+   ```shell
+   openssl req -new -key server.key -out server.csr -config csr.conf
+   ```
+
+1. С помощью ca.key, ca.crt и server.csr сгенерируйте сертификат сервера:
+
+   ```shell
+   openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
+   -CAcreateserial -out server.crt -days 10000 \
+   -extensions v3_ext -extfile csr.conf
+   ```
+
+1. Используйте следующую команду, чтобы просмотреть запрос на подписание сертификата:
+
+   ```shell
+   openssl req -noout -text -in ./server.csr
+   ```
+
+1. Используйте следующую команду, чтобы просмотреть сертификат:
+
+   ```shell
+   openssl x509 -noout -text -in ./server.crt
+   ```
+
+Наконец, добавьте эти параметры в конфигурацию запуска сервера API.
+
+### cfssl
+
+**cfssl** – еще один инструмент для генерации сертификатов.
+
+1. Скачайте, распакуйте и подготовьте пакеты, как показано ниже.
+
+   Обратите внимание, что команды необходимо адаптировать под архитектуру и используемую версию cfssl.
+
+   ```shell
+   curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
+   chmod +x cfssl
+   curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
+   chmod +x cfssljson
+   curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
+   chmod +x cfssl-certinfo
+   ```
+
+1. Создайте директорию для хранения артефактов и инициализируйте cfssl:
+
+   ```shell
+   mkdir cert
+   cd cert
+   ../cfssl print-defaults config > config.json
+   ../cfssl print-defaults csr > csr.json
+   ```
+
+1. Создайте JSON-конфиг для генерации файла CA (например, `ca-config.json`):
+
+   ```json
+   {
+     "signing": {
+       "default": {
+         "expiry": "8760h"
+       },
+       "profiles": {
+         "kubernetes": {
+           "usages": [
+             "signing",
+             "key encipherment",
+             "server auth",
+             "client auth"
+           ],
+           "expiry": "8760h"
+         }
+       }
+     }
+   }
+   ```
+
+1. Создайте JSON-конфиг для запроса на подписание сертификата (CSR) (например,
+   `ca-csr.json`). Замените значения, помеченные угловыми скобками, на нужные.
+
+   ```json
+   {
+     "CN": "kubernetes",
+     "key": {
+       "algo": "rsa",
+       "size": 2048
+     },
+     "names":[{
+       "C": "<country>",
+       "ST": "<state>",
+       "L": "<city>",
+       "O": "<organization>",
+       "OU": "<organization unit>"
+     }]
+   }
+   ```
+
+1. Сгенерируйте ключ CA (`ca-key.pem`) и сертификат (`ca.pem`):
+
+   ```shell
+   ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca
+   ```
+
+1. Создайте конфигурационный JSON-файл для генерации ключей и сертификатов для сервера API (например, `server-csr.json`). Замените значения, помеченные угловыми скобками, на нужные. `MASTER_CLUSTER_IP` – это cluster IP сервиса для сервера API, как описано в предыдущем подразделе. В приведенном ниже примере предполагается, что `cluster.local` используется в качестве доменного имени по умолчанию.
+
+   ```json
+   {
+     "CN": "kubernetes",
+     "hosts": [
+       "127.0.0.1",
+       "<MASTER_IP>",
+       "<MASTER_CLUSTER_IP>",
+       "kubernetes",
+       "kubernetes.default",
+       "kubernetes.default.svc",
+       "kubernetes.default.svc.cluster",
+       "kubernetes.default.svc.cluster.local"
+     ],
+     "key": {
+       "algo": "rsa",
+       "size": 2048
+     },
+     "names": [{
+       "C": "<country>",
+       "ST": "<state>",
+       "L": "<city>",
+       "O": "<organization>",
+       "OU": "<organization unit>"
+     }]
+   }
+   ```
+
+1. Сгенерируйте ключ и сертификат для сервера API (по умолчанию они сохраняются в файлах `server-key.pem` и `server.pem` соответственно):
+
+   ```shell
+   ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
+   --config=ca-config.json -profile=kubernetes \
+   server-csr.json | ../cfssljson -bare server
+   ```
+
+## Распространение самоподписанного сертификата CA
+
+Клиентский узел может отказаться признавать самоподписанный сертификат CA действительным. В случае его использования не в production или в инсталляциях, защищенных межсетевым экраном, самоподписанный сертификат CA можно распространить среди всех клиентов и обновить локальный список действительных сертификатов.
+
+Для этого на каждом клиенте выполните следующие операции:
+
+```shell
+sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
+sudo update-ca-certificates
+```
+
+```none
+Updating certificates in /etc/ssl/certs...
+1 added, 0 removed; done.
+Running hooks in /etc/ca-certificates/update.d....
+done.
+```
+
+## API для сертификатов
+
+Для создания сертификатов x509, которые будут использоваться для аутентификации, можно воспользоваться API `certificates.k8s.io`. Для дополнительной информации см. [Управление TLS-сертификатами в кластере](/docs/tasks/tls/managing-tls-in-a-cluster).
@@ -39,6 +39,6 @@ Your anonymity will be protected. -->
 </p>
 </div>
 
-<div id="cncf_coc_container">
+<div id="cncf-code-of-conduct">
 {{< include "/static/cncf-code-of-conduct.md" >}}
 </div>
@@ -149,14 +149,18 @@ For self-registration, the kubelet is started with the following options:
 
 <!--
 - `--kubeconfig` - Path to credentials to authenticate itself to the API server.
-- `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}} to read metadata about itself.
+- `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}}
+  to read metadata about itself.
 - `--register-node` - Automatically register with the API server.
-- `--register-with-taints` - Register the node with the given list of {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
+- `--register-with-taints` - Register the node with the given list of
+  {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
 
-  No-op if `register-node` is false.
+  No-op if `register-node` is false.
 - `--node-ip` - IP address of the node.
-- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
-- `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
+- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node
+  in the cluster (see label restrictions enforced by the
+  [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
+- `--node-status-update-frequency` - Specifies how often kubelet posts its node status to the API server.
 -->
 - `--kubeconfig` - 用于向 API 服务器执行身份认证所用的凭据的路径。
 - `--cloud-provider` - 与某{{< glossary_tooltip text="云驱动" term_id="cloud-provider" >}}
@ -167,16 +171,16 @@ For self-registration, the kubelet is started with the following options:
|
|||
- `--node-ip` - 节点 IP 地址。
|
||||
- `--node-labels` - 在集群中注册节点时要添加的{{< glossary_tooltip text="标签" term_id="label" >}}。
|
||||
(参见 [NodeRestriction 准入控制插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)所实施的标签限制)。
|
||||
- `--node-status-update-frequency` - 指定 kubelet 向控制面发送状态的频率。
|
||||
- `--node-status-update-frequency` - 指定 kubelet 向 API 服务器发送其节点状态的频率。
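Putting these flags together, a self-registering kubelet might be started along the following lines (a sketch only; the kubeconfig path, taint, label, and interval values are hypothetical):

```shell
kubelet --kubeconfig=/var/lib/kubelet/kubeconfig \
  --register-node=true \
  --register-with-taints=dedicated=experimental:NoSchedule \
  --node-ip=10.0.0.4 \
  --node-labels=environment=staging \
  --node-status-update-frequency=10s
```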
<!--
When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
kubelets are only authorized to create/modify their own Node resource.
[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
are enabled, kubelets are only authorized to create/modify their own Node resource.
-->
When the [Node authorization mode](/zh-cn/docs/reference/access-authn-authz/node/) and the
[NodeRestriction admission plugin](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
`kubelet` is only authorized to create or modify its own Node resource.
When the [Node authorization mode](/zh-cn/docs/reference/access-authn-authz/node/) and the
[NodeRestriction admission plugin](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
kubelets are only authorized to create/modify their own Node resource.

{{< note >}}
<!--

@ -297,8 +301,10 @@ You can use `kubectl` to view a Node's status and other details:
kubectl describe node <node-name>
```

<!-- Each section is described in detail below. -->
Each section is described in detail below.
<!--
Each section of the output is described below.
-->
Each section of the output is described in detail below.

<!--
### Addresses

@ -310,9 +316,11 @@ The usage of these fields varies depending on your cloud provider or bare metal
The usage of these fields varies depending on your cloud provider or bare-metal configuration.

<!--
* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet `-hostname-override` parameter.
* ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
* InternalIP: Typichostnameally the IP address of the node that is routable only within the cluster.
* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet
  `--hostname-override` parameter.
* ExternalIP: Typically the IP address of the node that is externally routable (available from
  outside the cluster).
* InternalIP: Typically the IP address of the node that is routable only within the cluster.
-->
* HostName: Reported by the node's kernel. Can be overridden via the kubelet `--hostname-override` parameter.
* ExternalIP: Typically the IP address of the node that is externally routable (reachable from outside the cluster).
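To make the shape of these fields concrete, a Node's `.status.addresses` list typically looks something like the following sketch (all values are made up for illustration):

```yaml
status:
  addresses:
  - type: HostName
    address: node-1          # hypothetical hostname
  - type: InternalIP
    address: 10.0.0.4        # routable only inside the cluster
  - type: ExternalIP
    address: 203.0.113.10    # routable from outside the cluster
```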
@ -443,7 +451,7 @@ for more details.
<!--
### Capacity and Allocatable {#capacity}

Describes the resources available on the node: CPU, memory and the maximum
Describes the resources available on the node: CPU, memory, and the maximum
number of pods that can be scheduled onto the node.
-->
### Capacity and Allocatable {#capacity}

@ -632,7 +640,7 @@ the same time:

<!--
The reason these policies are implemented per availability zone is because one
availability zone might become partitioned from the master while the others remain
availability zone might become partitioned from the control plane while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,
then the eviction mechanism does not take per-zone unavailability into account.
-->

@ -675,8 +683,8 @@ that the scheduler won't place Pods onto unhealthy nodes.
<!--
## Resource capacity tracking {#node-capacity}

Node objects track information about the Node's resource capacity (for example: the amount
of memory available, and the number of CPUs).
Node objects track information about the Node's resource capacity: for example, the amount
of memory available and the number of CPUs.
Nodes that [self register](#self-registration-of-nodes) report their capacity during
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
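As a rough sketch of what this capacity information looks like in a Node object (illustrative values only; `allocatable` is derived by the kubelet from `capacity` minus any reservations):

```yaml
status:
  capacity:
    cpu: "8"
    memory: 32Gi
    pods: "110"
  allocatable:
    cpu: 7500m
    memory: 30Gi
    pods: "110"
```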
@ -690,11 +698,11 @@ Node 对象会跟踪节点上资源的容量(例如可用内存和 CPU 数量

<!--
The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
there are enough resources for all the pods on a node. The scheduler checks that the sum
of the requests of containers on the node is no greater than the node capacity.
The sum of requests includes all containers started by the kubelet, but excludes any
there are enough resources for all the Pods on a Node. The scheduler checks that the sum
of the requests of containers on the node is no greater than the node's capacity.
That sum of requests includes all containers managed by the kubelet, but excludes any
containers started directly by the container runtime, and also excludes any
process running outside of the kubelet's control.
processes running outside of the kubelet's control.
-->
The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}
ensures that there are enough resources for all the Pods on a node.

@ -704,7 +712,7 @@ Kubernetes {{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}}

{{< note >}}
<!--
If you want to explicitly reserve resources for non-Pod processes, follow this tutorial to
If you want to explicitly reserve resources for non-Pod processes, see
[reserve resources for system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
-->
If you want to explicitly reserve resources for non-Pod processes.

@ -749,7 +757,7 @@ kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的所
[Pod termination process](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).

<!--
The graceful node shutdown feature depends on systemd since it takes advantage of
The Graceful node shutdown feature depends on systemd since it takes advantage of
[systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to
delay the node shutdown with a given duration.
-->

@ -768,9 +776,10 @@ enabled by default in 1.21.

<!--
Note that by default, both configuration options described below,
`ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are set to zero,
`shutdownGracePeriod` and `shutdownGracePeriodCriticalPods` are set to zero,
thus not activating the graceful node shutdown functionality.
To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
To activate the feature, the two kubelet config settings should be configured appropriately and
set to non-zero values.
-->
Note that by default, both configuration options described below, `shutdownGracePeriod` and
`shutdownGracePeriodCriticalPods`, are set to zero, so the graceful node shutdown
functionality is not activated.

@ -780,7 +789,8 @@ To activate the feature, the two kubelet config settings should be configured ap
During a graceful shutdown, kubelet terminates pods in two phases:

1. Terminate regular pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
   running on the node.
-->
During a graceful node shutdown, the kubelet terminates Pods in two phases:

@ -788,11 +798,16 @@ During a graceful shutdown, kubelet terminates pods in two phases:
2. Terminate [critical Pods](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.

<!--
Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
* `ShutdownGracePeriod`:
  * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
* `ShutdownGracePeriodCriticalPods`:
  * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`.
Graceful node shutdown feature is configured with two
[`KubeletConfiguration`](/docs/tasks/administer-cluster/kubelet-config-file/) options:
* `shutdownGracePeriod`:
  * Specifies the total duration that the node should delay the shutdown by. This is the total
    grace period for pod termination for both regular and
    [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
* `shutdownGracePeriodCriticalPods`:
  * Specifies the duration used to terminate
    [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
    during a node shutdown. This value should be less than `shutdownGracePeriod`.
-->
The graceful node shutdown feature corresponds to two
[`KubeletConfiguration`](/zh-cn/docs/tasks/administer-cluster/kubelet-config-file/) options:

@ -805,8 +820,8 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/
duration. This value should be less than `shutdownGracePeriod`.

<!--
For example, if `ShutdownGracePeriod=30s`, and
`ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
For example, if `shutdownGracePeriod=30s`, and
`shutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved
for gracefully terminating normal pods, and the last 10 seconds would be
reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
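Putting those two settings together, the kubelet config file fragment for this example would look roughly like the following sketch (adjust the durations to your own needs):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# total delay applied to the node shutdown
shutdownGracePeriod: 30s
# portion of that delay reserved for critical pods
shutdownGracePeriodCriticalPods: 10s
```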
@ -877,13 +892,11 @@ these pods will be stuck in terminating status on the shutdown node forever.
<!--
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
or `NoSchedule` effect to a Node marking it out-of-service.
If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/
command-line-tools-reference/feature-gates/) is enabled on
`kube-controller-manager`, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if there are no matching tolerations on
it and volume detach operations for the pods terminating on the node will happen
immediately. This allows the Pods on the out-of-service node to recover quickly on a
different node.
If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
detach operations for the pods terminating on the node will happen immediately. This allows the
Pods on the out-of-service node to recover quickly on a different node.
-->
To mitigate the above situation, a user can manually add the
`node.kubernetes.io/out-of-service` taint with either the `NoExecute` or `NoSchedule` effect to a Node, marking it as out of service.
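For instance, marking a node out-of-service could look like the following sketch (the node name is a placeholder, and the taint value shown here is an arbitrary example, as only the key and effect are significant):

```shell
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```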
@ -906,11 +919,11 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
<!--
{{< note >}}
- Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified
that the node is already in shutdown or power off state (not in the middle of
restarting).
  that the node is already in shutdown or power off state (not in the middle of
  restarting).
- The user is required to manually remove the out-of-service taint after the pods are
moved to a new node and the user has checked that the shutdown node has been
recovered since the user was the one who originally added the taint.
  moved to a new node and the user has checked that the shutdown node has been
  recovered since the user was the one who originally added the taint.
{{< /note >}}
-->
{{< note >}}

@ -1108,6 +1121,14 @@ must be set to false.
along with either the `--fail-swap-on` command line flag, or with `failSwapOn` set to false in the
[configuration](/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration).

{{< warning >}}
<!--
When the memory swap feature is turned on, Kubernetes data such as the content
of Secret objects that were written to tmpfs now could be swapped to disk.
-->
When the memory swap feature is turned on, Kubernetes data (such as the content
of Secret objects that were written to tmpfs) could be swapped to disk.
{{< /warning >}}

<!--
A user can also optionally configure `memorySwap.swapBehavior` in order to
specify how a node will use swap memory. For example,
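The hunk is cut off at this point; for orientation only, a kubelet configuration using this setting might look roughly like the sketch below. The `LimitedSwap` value is an assumption, shown as one of the behaviors documented for this field at the time:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap   # assumed example value
```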
@ -189,45 +189,7 @@ Here's an example Pod that uses values from `game-demo` to configure a Pod:

Here's an example Pod that uses values from `game-demo` to configure a Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    env:
    # Define the environment variables
    - name: PLAYER_INITIAL_LIVES # Note that the name here differs from the key name in the ConfigMap
      valueFrom:
        configMapKeyRef:
          name: game-demo           # This value comes from the ConfigMap
          key: player_initial_lives # The key to fetch
    - name: UI_PROPERTIES_FILE_NAME
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: ui_properties_file_name
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  # You can set volumes at the Pod level, then mount them into containers inside the Pod
  - name: config
    configMap:
      # Provide the name of the ConfigMap you want to mount
      name: game-demo
      # An array of keys from the ConfigMap to create as files
      items:
      - key: "game.properties"
        path: "game.properties"
      - key: "user-interface.properties"
        path: "user-interface.properties"
```
{{< codenew file="configmap/configure-pod.yaml" >}}

<!--
A ConfigMap doesn't differentiate between single line property values and

@ -47,7 +47,7 @@ If you must use an untrusted kubeconfig file, inspect it carefully first, much a
By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory.
You can specify other kubeconfig files by setting the `KUBECONFIG` environment
variable or by setting the
[`-kubeconfig`](/docs/reference/generated/kubectl/kubectl/) flag.
[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/) flag.
-->
By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory.
You can specify other kubeconfig files by setting the `KUBECONFIG` environment variable or by setting the

@ -163,7 +163,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
Here are the rules that `kubectl` uses when it merges kubeconfig files.

<!--
1. If the `-kubeconfig` flag is set, use only the specified file. Do not merge.
1. If the `--kubeconfig` flag is set, use only the specified file. Do not merge.
   Only one instance of this flag is allowed.

   Otherwise, if the `KUBECONFIG` environment variable is set, use it as a

@ -203,7 +203,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
<!--
1. Determine the context to use based on the first hit in this chain:

   1. Use the `-context` command-line flag if it exists.
   1. Use the `--context` command-line flag if it exists.
   2. Use the `current-context` from the merged kubeconfig files.
-->
2. Determine the context to use based on the first match in this chain.

@ -288,20 +288,20 @@ kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位
<!--
## Proxy

You can configure `kubectl` to use proxy by setting `proxy-url` in the kubeconfig file, like:
You can configure `kubectl` to use a proxy per cluster using `proxy-url` in your kubeconfig file, like this:
-->
## Proxy

You can configure `kubectl` to use a proxy by setting `proxy-url` in the `kubeconfig` file, for example:
You can configure `kubectl` to use a proxy per cluster by setting `proxy-url` in the `kubeconfig` file, for example:

```yaml
apiVersion: v1
kind: Config

proxy-url: https://proxy.host:3128

clusters:
- cluster:
    proxy-url: http://proxy.example.org:3128
    server: https://k8s.example.org/k8s/clusters/c-xxyyzz
  name: development

users:

@ -310,7 +310,6 @@ users:
contexts:
- context:
  name: development

```

## {{% heading "whatsnext" %}}

@ -318,6 +317,6 @@ contexts:
<!--
* [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)
--->
-->
* [Configure Access to Multiple Clusters](/zh-cn/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)

@ -107,13 +107,13 @@ When you use a policy APIs that is [stable](/docs/reference/using-api/#api-versi
[defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs.
For these reasons, policy APIs are recommended over *configuration files* and *command arguments* where suitable.
-->
**Built-in policy APIs**, such as [ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/),
Policy APIs such as [ResourceQuota](/zh-cn/docs/concepts/policy/resource-quotas/),
[NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/),
and role-based access control ([RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/))
and the like are built-in Kubernetes APIs for configuring policy options declaratively.
and the like are **built-in policy APIs**, Kubernetes APIs for configuring policy options declaratively.
These APIs are typically usable even with hosted Kubernetes services and managed Kubernetes installations.
The built-in policy APIs follow the same conventions as other Kubernetes resources, such as Pods.
When you use a [stable version of](/zh-cn/docs/reference/using-api/#api-versioning) of a policy API,
When you use a [stable version](/zh-cn/docs/reference/using-api/#api-versioning) of a policy API,
it benefits from a [defined support policy](/zh-cn/docs/reference/using-api/deprecation-policy/), like other Kubernetes APIs.
For these reasons, where suitable, approaches based on policy APIs should be preferred over *configuration files* and *command arguments*.

@ -164,7 +164,7 @@ A controller is a client of the Kubernetes API. When Kubernetes is the client an
out to a remote service, Kubernetes calls this a *webhook*. The remote service is called
a *webhook backend*. As with custom controllers, webhooks do add a point of failure.
-->
There is a specific {{< glossary_tooltip term_id="controller" text="controller" >}} pattern for writing client programs
that work well with Kubernetes. A controller typically reads an object's `.spec`, possibly does things,
and then updates the object's `.status`.

@ -203,7 +203,7 @@ and by kubectl (see [Extend kubectl with plugins](/docs/tasks/extend-kubectl/kub
This diagram shows the extension points in a Kubernetes cluster and the
clients that access it.
-->
### Extension points
### Extension points {#extension-points}

The diagram below shows these extension points in a Kubernetes cluster and the clients that access it.

@ -375,7 +375,7 @@ Operator 模式用于管理特定的应用;通常,这些应用需要维护
You can also create your own custom APIs and control loops to manage other resources (for example, storage)
or to define policies (for example, access control restrictions).

<!--
### Changing Built-in Resources
### Changing built-in resources

When you extend the Kubernetes API by adding custom resources, the added resources always fall
into new API Groups. You cannot replace or change existing API groups.

@ -1,9 +1,15 @@
---
title: Device Plugins
description: Use the Kubernetes device plugin framework to implement plugins for GPUs, NICs, FPGAs, InfiniBand, and similar resources that require vendor-specific setup.
description: Device plugins let you configure your cluster with support for devices or resources that require vendor-specific setup, such as GPUs, NICs, FPGAs, or non-volatile main memory.
content_type: concept
weight: 20
---
<!--
title: Device Plugins
description: Device plugins let you configure your cluster with support for devices or resources that require vendor-specific setup, such as GPUs, NICs, FPGAs, or non-volatile main memory.
content_type: concept
weight: 20
-->

<!-- overview -->
{{< feature-state for_k8s_version="v1.10" state="beta" >}}

@ -20,7 +26,8 @@ and other similar computing resources that may require vendor specific initializ
and setup.
-->
Kubernetes provides a
[device plugin framework](https://git.k8s.io/design-proposals-archive/resource-management/device-plugin.md) that you can use to advertise system hardware resources to the {{< glossary_tooltip term_id="kubelet" >}}.
[device plugin framework](https://git.k8s.io/design-proposals-archive/resource-management/device-plugin.md)
that you can use to advertise system hardware resources to the {{< glossary_tooltip term_id="kubelet" >}}.

Instead of customizing the code for Kubernetes itself, vendors can implement a device plugin
that you deploy either manually or as a {{< glossary_tooltip term_id="daemonset" >}}.
The targeted devices include GPUs, high-performance NICs, FPGAs,

@ -66,15 +73,15 @@ to advertise that the node has 2 "Foo" devices installed and available.

* The Unix socket of the device plugin.
* The API version of the device plugin.
* The `ResourceName` it wants to advertise. Here the `ResourceName` needs to follow the
  [extended resource naming scheme](/zh-cn/docs/concepts/configuration/manage-resources-containers/#extended-resources),
* The `ResourceName` it wants to advertise. Here the `ResourceName`
  needs to follow the [extended resource naming scheme](/zh-cn/docs/concepts/configuration/manage-resources-containers/#extended-resources),
  such as `vendor-domain/resourcetype`. (For example, an NVIDIA GPU is advertised as `nvidia.com/gpu`.)

After a successful registration, the device plugin sends the kubelet the list of devices it manages,
and the kubelet is then in charge of advertising those resources to the API server
as part of the kubelet node status update.
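To make that concrete, an extended resource advertised this way ends up in the Node's `.status.capacity`; the sketch below mirrors the `hardware-vendor.example/foo` example described next:

```yaml
status:
  capacity:
    hardware-vendor.example/foo: "2"   # two healthy devices reported by the plugin
```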
For example, after a device plugin registers `hardware-vendor.example/foo` with the kubelet and reports two
healthy devices on the node, the node status is updated to advertise that the node has 2
For example, after a device plugin registers `hardware-vendor.example/foo` with the kubelet
and reports two healthy devices on the node, the node status is updated to advertise that the node has 2
"Foo" devices installed and available.
<!--

@ -139,11 +146,10 @@ The general workflow of a device plugin includes the following steps:

* Initialization. During this phase, the device plugin performs vendor-specific
  initialization and setup to make sure the devices are in a ready state.
* The plugin starts a gRPC service, with a Unix socket
  under the host path `/var/lib/kubelet/device-plugins/`, that implements the following interface:
* The plugin starts a gRPC service, with a Unix socket under
  the host path `/var/lib/kubelet/device-plugins/`, that implements the following interface:

<!--

```gRPC
service DevicePlugin {
      // GetDevicePluginOptions returns options to be communicated with Device Manager.

@ -208,8 +214,8 @@ The general workflow of a device plugin includes the following steps:
always call `GetDevicePluginOptions()` to see which optional functions are
available, before calling any of them directly.
-->
Plugins are not required to provide useful implementations for
`GetPreferredAllocation()` or `PreStartContainer()`; the `DevicePluginOptions`
Plugins are not required to provide useful implementations for `GetPreferredAllocation()` or `PreStartContainer()`;
the `DevicePluginOptions`
message returned by a call to `GetDevicePluginOptions()` should indicate whether these calls
are available. The `kubelet` always calls `GetDevicePluginOptions()` to check which optional
functions exist, before calling any of them directly.
{{< /note >}}

@ -242,7 +248,7 @@ kubelet instance. In the current implementation, a new kubelet instance deletes
under `/var/lib/kubelet/device-plugins` when it starts. A device plugin can monitor the deletion
of its Unix socket and re-register itself upon such an event.
-->
### Handling kubelet restarts
### Handling kubelet restarts {#handling-kubelet-restarts}

A device plugin is expected to detect kubelet restarts and re-register itself with the new
kubelet instance. In the current implementation, a new kubelet instance deletes the existing Unix sockets under `/var/lib/kubelet/device-plugins`

@ -265,12 +271,12 @@ in the plugin's
If you choose the DaemonSet approach you can rely on Kubernetes to: place the device plugin's
Pod onto Nodes, to restart the daemon Pod after failure, and to help automate upgrades.
-->
## Device plugin deployment
## Device plugin deployment {#device-plugin-deployments}

You can deploy your device plugin as an operating system package for the node, as a DaemonSet, or manually.

The canonical directory `/var/lib/kubelet/device-plugins` requires privileged access, so a
device plugin must run in an authorized, privileged security context.
The canonical directory `/var/lib/kubelet/device-plugins` requires privileged access,
so a device plugin must run in an authorized, privileged security context.
If you're deploying a device plugin as a DaemonSet, the `/var/lib/kubelet/device-plugins` directory must be declared in the plugin's
[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
and mounted into the plugin as a {{< glossary_tooltip term_id="volume" >}}.
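A minimal sketch of the relevant part of such a DaemonSet Pod template might look like this (the plugin name and image are hypothetical):

```yaml
spec:
  containers:
  - name: example-device-plugin          # hypothetical plugin name
    image: example.com/device-plugin:1.0 # hypothetical image
    securityContext:
      privileged: true
    volumeMounts:
    - name: device-plugin
      mountPath: /var/lib/kubelet/device-plugins
  volumes:
  - name: device-plugin
    hostPath:
      path: /var/lib/kubelet/device-plugins
```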
@ -292,7 +298,7 @@ a Kubernetes release with a newer device plugin API version, upgrade your device
to support both versions before upgrading these nodes. Taking that approach will
ensure the continuous functioning of the device allocations during the upgrade.
-->
## API compatibility
## API compatibility {#api-compatibility}

Kubernetes device plugin support is still in beta, so the API may change in incompatible ways
before the stable release. As a project, Kubernetes recommends that device plugin developers:

@ -306,9 +312,12 @@ Kubernetes 设备插件支持还处于 beta 版本。所以在稳定版本出来

<!--
## Monitoring Device Plugin Resources
-->
## Monitoring device plugin resources {#monitoring-device-plugin-resources}

{{< feature-state for_k8s_version="v1.15" state="beta" >}}

<!--
In order to monitor resources provided by device plugins, monitoring agents need to be able to
discover the set of devices that are in-use on the node and obtain metadata to describe which
container the metric should be associated with. [Prometheus](https://prometheus.io/) metrics

@ -316,10 +325,6 @@ exposed by device monitoring agents should follow the
[Kubernetes Instrumentation Guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/instrumentation.md),
identifying containers using `pod`, `namespace`, and `container` prometheus labels.
-->
## Monitoring device plugin resources

{{< feature-state for_k8s_version="v1.15" state="beta" >}}

To monitor resources provided by device plugins, monitoring agents need to be able to
discover the set of devices that are in use on the node and obtain metadata to describe
which container a metric should be associated with.
Metrics exposed by a device monitoring agent to [Prometheus](https://prometheus.io/) should follow the

@ -398,8 +403,8 @@ message ContainerDevices {
}
```

<!--
{{< note >}}
<!--
cpu_ids in the `ContainerResources` in the `List` endpoint correspond to exclusive CPUs allocated
to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the `List`
endpoint needs to be used in conjunction with the `GetAllocatableResources` endpoint as explained

@ -407,9 +412,7 @@ below:
1. Call `GetAllocatableResources` to get a list of all the allocatable CPUs
2. Call `GetCpuIds` on all `ContainerResources` in the system
3. Subtract out all of the CPUs from the `GetCpuIds` calls from the `GetAllocatableResources` call
{{< /note >}}
-->
{{< note >}}
The cpu_ids in the `ContainerResources` of the `List` endpoint correspond to the exclusive CPUs allocated to a particular container.
To account for CPUs that belong to the shared pool, the `List` endpoint needs to be used together with the `GetAllocatableResources` endpoint, as described below:

@ -418,6 +421,9 @@ below:
3. Subtract the CPUs obtained from the `GetCpuIds` calls from those obtained from the `GetAllocatableResources` call.
{{< /note >}}

<!--
### `GetAllocatableResources` gRPC endpoint {#grpc-endpoint-getallocatableresources}
-->
### `GetAllocatableResources` gRPC endpoint {#grpc-endpoint-getallocatableresources}

{{< feature-state state="beta" for_k8s_version="v1.23" >}}

@ -448,7 +454,6 @@ update and Kubelet needs to be restarted to reflect the correct resource capacit
the kubelet needs to be restarted to reflect the correct resource capacity and allocatable resources.
{{< /note >}}

```gRPC
// AllocatableResourcesResponses contains information about all the devices known to the kubelet
message AllocatableResourcesResponse {

@ -504,8 +509,8 @@ gRPC 服务通过 `/var/lib/kubelet/pod-resources/kubelet.sock` 的 UNIX 套接
so a monitoring agent must run in an authorized security context.
If a device monitoring agent runs as a DaemonSet, it must be declared in the plugin's
[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
that the `/var/lib/kubelet/pod-resources` directory is mounted into
the device monitoring agent as a {{< glossary_tooltip text="volume" term_id="volume" >}}.
that the `/var/lib/kubelet/pod-resources`
directory is mounted into the device monitoring agent as a {{< glossary_tooltip text="volume" term_id="volume" >}}.

Support for the "PodResourcesLister service" requires the `KubeletPodResources`
[feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
@ -513,21 +518,20 @@ gRPC 服务通过 `/var/lib/kubelet/pod-resources/kubelet.sock` 的 UNIX 套接

<!--
## Device plugin integration with the Topology Manager
-->
## Device plugin integration with the Topology Manager {#device-plugin-integration-with-the-topology-manager}

{{< feature-state for_k8s_version="v1.18" state="beta" >}}

<!--
The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a `TopologyInfo` struct.
-->
## Device plugin integration with the Topology Manager

{{< feature-state for_k8s_version="v1.18" state="beta" >}}

The Topology Manager is a kubelet component that allows resources to be coordinated in a topology-aligned manner.
To do this, the Device Plugin API was extended to include a `TopologyInfo` struct.

```gRPC
message TopologyInfo {
repeated NUMANode nodes = 1;
    repeated NUMANode nodes = 1;
}

message NUMANode {

@ -562,10 +566,14 @@ NUMA 节点列表表示设备插件没有该设备的 NUMA 亲和偏好。
```
pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.TopologyInfo{Nodes: []*pluginapi.NUMANode{&pluginapi.NUMANode{ID: 0,},}}}
```

<!--
## Device plugin examples {#examples}
-->
## Device plugin examples {#examples}

{{% thirdparty-content %}}

<!--
Here are some examples of device plugin implementations:

* The [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)

@ -578,8 +586,6 @@ Here are some examples of device plugin implementations:
* The [SR-IOV Network device plugin](https://github.com/intel/sriov-network-device-plugin)
* The [Xilinx FPGA device plugins](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-device-plugin) for Xilinx FPGA devices
-->
## Device plugin examples {#examples}

Here are some examples of device plugin implementations:

* The [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)

@ -600,7 +606,7 @@ Here are some examples of device plugin implementations:
* Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
* Read about using [hardware acceleration for TLS ingress](/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes
-->
* See [Scheduling GPU resources](/zh-cn/docs/tasks/manage-gpus/scheduling-gpus/) to learn about using device plugins
* See how to [advertise extended resources on a node](/zh-cn/docs/tasks/administer-cluster/extended-resource-node/)
* Learn about the [Topology Manager](/zh-cn/docs/tasks/administer-cluster/topology-manager/)
* Read about using [hardware acceleration for TLS Ingress](/zh-cn/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes

@ -88,15 +88,15 @@ Some types of these controllers are:

* Node controller: Responsible for noticing and responding when nodes go down.
* Job controller: Watches for Job objects that represent one-off tasks, then creates
  Pods to run those tasks to completion.
* Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
* Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
* EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
* ServiceAccount controller: Create default ServiceAccounts for new namespaces.
-->
These controllers include:

* Node controller: responsible for noticing and responding when nodes go down.
* Job controller: watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
* Endpoints controller: populates the Endpoints object (that is, joins Services & Pods).
* Service Account & Token controllers: create default accounts and API access tokens for new namespaces.
* EndpointSlice controller: populates EndpointSlice objects (to provide a link between Services and Pods).
* ServiceAccount controller: creates default ServiceAccounts for new namespaces.

<!--
### cloud-controller-manager

@ -0,0 +1,281 @@
---
title: Kubernetes API Server Bypass Risks
description: >
  Security architecture information relating to the API server and other components
content_type: concept
weight: 90
---

<!--
title: Kubernetes API Server Bypass Risks
description: >
  Security architecture information relating to the API server and other components
content_type: concept
weight: 90
-->

<!-- overview -->
<!--
The Kubernetes API server is the main point of entry to a cluster for external parties
(users and services) interacting with it.
-->
The Kubernetes API server is the main point of entry to a cluster for external parties
(users and services) interacting with it.

<!--
As part of this role, the API server has several key built-in security controls, such as
audit logging and {{< glossary_tooltip text="admission controllers" term_id="admission-controller" >}}. However, there are ways to modify the configuration
or content of the cluster that bypass these controls.
-->
As part of this role, the API server has several key built-in security controls, such as
audit logging and {{< glossary_tooltip text="admission controllers" term_id="admission-controller" >}}.
However, there are ways to modify the configuration or content of the cluster that bypass these controls.

<!--
This page describes the ways in which the security controls built into the
Kubernetes API server can be bypassed, so that cluster operators
and security architects can ensure that these bypasses are appropriately restricted.
-->
This page describes the ways in which the security controls built into the
Kubernetes API server can be bypassed, so that cluster operators
and security architects can ensure that these bypasses are appropriately restricted.

<!-- body -->
<!--
## Static Pods {#static-pods}
-->
## Static Pods {#static-pods}

<!--
The {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} on each node loads and
directly manages any manifests that are stored in a named directory or fetched from
a specific URL as [*static Pods*](/docs/tasks/configure-pod-container/static-pod) in
your cluster. The API server doesn't manage these static Pods. An attacker with write
access to this location could modify the configuration of static pods loaded from that
source, or could introduce new static Pods.
-->
The {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} on each node loads and
directly manages any manifests that are stored in a named directory, or fetched from a specific URL,
as [*static Pods*](/zh-cn/docs/tasks/configure-pod-container/static-pod) in your cluster.
The API server doesn't manage these static Pods. An attacker with write access to this location
could modify the configuration of static Pods loaded from that source, or could introduce new static Pods.

<!--
Static Pods are restricted from accessing other objects in the Kubernetes API. For example,
you can't configure a static Pod to mount a Secret from the cluster. However, these Pods can
take other security sensitive actions, such as using `hostPath` mounts from the underlying
node.
-->
Static Pods are restricted from accessing other objects in the Kubernetes API. For example,
you can't configure a static Pod to mount a Secret from the cluster. However, these Pods can
take other security-sensitive actions, such as using `hostPath` mounts from the underlying node.

<!--
By default, the kubelet creates a {{< glossary_tooltip text="mirror pod" term_id="mirror-pod">}}
so that the static Pods are visible in the Kubernetes API. However, if the attacker uses an invalid
namespace name when creating the Pod, it will not be visible in the Kubernetes API and can only
be discovered by tooling that has access to the affected host(s).
-->
By default, the kubelet creates a {{< glossary_tooltip text="mirror pod" term_id="mirror-pod">}}
so that static Pods are visible in the Kubernetes API. However, if the attacker uses an invalid
namespace name when creating the Pod, it will not be visible in the Kubernetes API and can only
be discovered by tooling that has access to the affected host(s).

<!--
If a static Pod fails admission control, the kubelet won't register the Pod with the
API server. However, the Pod still runs on the node. For more information, refer to
[kubeadm issue #1541](https://github.com/kubernetes/kubeadm/issues/1541#issuecomment-487331701).
-->
If a static Pod fails admission control, the kubelet won't register the Pod with the API server.
However, the Pod still runs on the node. For more information, refer to
[kubeadm issue #1541](https://github.com/kubernetes/kubeadm/issues/1541#issuecomment-487331701).

<!--
### Mitigations {#static-pods-mitigations}
-->
### Mitigations {#static-pods-mitigations}

<!--
- Only [enable the kubelet static Pod manifest functionality](/docs/tasks/configure-pod-container/static-pod/#static-pod-creation)
  if required by the node.
- If a node uses the static Pod functionality, restrict filesystem access to the static Pod manifest directory
  or URL to users who need the access.
- Restrict access to kubelet configuration parameters and files to prevent an attacker setting
  a static Pod path or URL.
- Regularly audit and centrally report all access to directories or web storage locations that host
  static Pod manifests and kubelet configuration files.
-->
- Only [enable the kubelet static Pod manifest functionality](/zh-cn/docs/tasks/configure-pod-container/static-pod/#static-pod-creation) if required by the node.
- If a node uses the static Pod functionality, restrict filesystem access to the static Pod manifest directory or URL to the users who need it.
- Restrict access to kubelet configuration parameters and files to prevent an attacker from setting a static Pod path or URL.
- Regularly audit and centrally report all access to directories or web storage locations that host static Pod manifests and kubelet configuration files.
<!--
## The kubelet API {#kubelet-api}
-->
## The kubelet API {#kubelet-api}

<!--
The kubelet provides an HTTP API that is typically exposed on TCP port 10250 on cluster
worker nodes. The API might also be exposed on control plane nodes depending on the Kubernetes
distribution in use. Direct access to the API allows for disclosure of information about
the pods running on a node, the logs from those pods, and execution of commands in
every container running on the node.
-->
The kubelet provides an HTTP API that is typically exposed on TCP port 10250 on cluster worker nodes.
In some Kubernetes distributions, the API may also be exposed on control plane nodes.
Direct access to the API allows for disclosure of information about the Pods running on a node,
the logs from those Pods, and execution of commands in every container running on the node.

<!--
When Kubernetes cluster users have RBAC access to `Node` object sub-resources, that access
serves as authorization to interact with the kubelet API. The exact access depends on
which sub-resource access has been granted, as detailed in [kubelet authorization](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization).
-->
When Kubernetes cluster users have RBAC access to `Node` object sub-resources, that access
serves as authorization to interact with the kubelet API. The exact access depends on
which sub-resource access has been granted, as detailed in
[kubelet authorization](/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authorization).

<!--
Direct access to the kubelet API is not subject to admission control and is not logged
by Kubernetes audit logging. An attacker with direct access to this API may be able to
bypass controls that detect or prevent certain actions.
-->
Direct access to the kubelet API is not subject to admission control and is not logged
by Kubernetes audit logging. An attacker with direct access to this API may be able to
bypass controls that detect or prevent certain actions.

<!--
The kubelet API can be configured to authenticate requests in a number of ways.
By default, the kubelet configuration allows anonymous access. Most Kubernetes providers
change the default to use webhook and certificate authentication. This lets the control plane
ensure that the caller is authorized to access the `nodes` API resource or sub-resources.
The default anonymous access doesn't make this assertion with the control plane.
-->
The kubelet API can be configured to authenticate requests in a number of ways.
By default, the kubelet configuration allows anonymous access. Most Kubernetes providers
change the default to use webhook and certificate authentication. This lets the control plane
ensure that the caller is authorized to access the `Node` API resource or sub-resources;
the default anonymous access doesn't make this assertion with the control plane.

<!--
### Mitigations
-->
### Mitigations {#mitigations}

<!--
- Restrict access to sub-resources of the `nodes` API object using mechanisms such as
  [RBAC](/docs/reference/access-authn-authz/rbac/). Only grant this access when required,
  such as by monitoring services.
- Restrict access to the kubelet port. Only allow specified and trusted IP address
  ranges to access the port.
- [Ensure that kubelet authentication is set to webhook or certificate mode](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication).
- Ensure that the unauthenticated "read-only" Kubelet port is not enabled on the cluster.
-->
- Restrict access to sub-resources of the `Node` API object using mechanisms such as [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/).
  Only grant this access when required, for example for monitoring services.
- Restrict access to the kubelet port. Only allow specified and trusted IP address ranges to access the port.
- [Ensure that kubelet authentication is set to webhook or certificate mode](/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication).
- Ensure that the unauthenticated "read-only" kubelet port is not enabled on the cluster.
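As a sketch of the first item, a narrowly scoped ClusterRole for a monitoring agent might grant read-only access to just the node sub-resources it needs; the role name is hypothetical, and the exact sub-resources depend on your agent:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-kubelet-reader   # hypothetical name
rules:
- apiGroups: [""]
  # read-only access to node stats and metrics sub-resources only
  resources: ["nodes/stats", "nodes/metrics"]
  verbs: ["get"]
```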
<!--
## The etcd API
-->
## The etcd API {#the-etcd-api}

<!--
Kubernetes clusters use etcd as a datastore. The `etcd` service listens on TCP port 2379.
The only clients that need access are the Kubernetes API server and any backup tooling
that you use. Direct access to this API allows for disclosure or modification of any
data held in the cluster.
-->
Kubernetes clusters use etcd as a datastore. The `etcd` service listens on TCP port 2379.
The only clients that need access are the Kubernetes API server and any backup tooling that you use.
Direct access to this API allows for disclosure or modification of any data held in the cluster.

<!--
Access to the etcd API is typically managed by client certificate authentication.
Any certificate issued by a certificate authority that etcd trusts allows full access
to the data stored inside etcd.
-->
Access to the etcd API is typically managed by client certificate authentication.
Any certificate issued by a certificate authority that etcd trusts allows full access
to the data stored inside etcd.

<!--
Direct access to etcd is not subject to Kubernetes admission control and is not logged
by Kubernetes audit logging. An attacker who has read access to the API server's
etcd client certificate private key (or can create a new trusted client certificate) can gain
cluster admin rights by accessing cluster secrets or modifying access rules. Even without
elevating their Kubernetes RBAC privileges, an attacker who can modify etcd can retrieve any API object
or create new workloads inside the cluster.
-->
Direct access to etcd is not subject to Kubernetes admission control and is not logged
by Kubernetes audit logging. An attacker with read access to the private key of the API server's
etcd client certificate (or who can create a new trusted client certificate) can gain
cluster administrator rights by accessing cluster Secrets or modifying access rules. Even without
elevating their Kubernetes RBAC privileges, an attacker who can modify etcd can retrieve any API object
or create new workloads inside the cluster.

<!--
Many Kubernetes providers configure
etcd to use mutual TLS (both client and server verify each other's certificate for authentication).
There is no widely accepted implementation of authorization for the etcd API, although
the feature exists. Since there is no authorization model, any certificate
with client access to etcd can be used to gain full access to etcd. Typically, etcd client certificates
that are only used for health checking can also grant full read and write access.
-->
Many Kubernetes providers configure etcd to use mutual TLS
(both client and server verify each other's certificate for authentication).
Although the feature exists, there is no widely accepted implementation of authorization for the etcd API.
Since there is no authorization model, any certificate with client access to etcd can be used
to gain full access to etcd. Typically, etcd client certificates that are only used for health checking
can also grant full read and write access.

<!--
### Mitigations {#etcd-api-mitigations}
-->
### Mitigations {#etcd-api-mitigations}

<!--
- Ensure that the certificate authority trusted by etcd is used only for the purposes of
  authentication to that service.
- Control access to the private key for the etcd server certificate, and to the API server's
  client certificate and key.
- Consider restricting access to the etcd port at a network level, to only allow access
  from specified and trusted IP address ranges.
-->
- Ensure that the certificate authority trusted by etcd is used only for the purposes of authentication to that service.
- Control access to the private key for the etcd server certificate, and to the API server's client certificate and key.
- Consider restricting access to the etcd port at a network level, to only allow access from specified and trusted IP address ranges.
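For orientation, the mutual TLS relationship between the API server and etcd is typically wired up with flags along these lines (a sketch; the paths follow common kubeadm conventions and may differ in your installation):

```shell
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```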
<!--
## Container runtime socket {#runtime-socket}
-->
## Container runtime socket {#runtime-socket}

<!--
On each node in a Kubernetes cluster, access to interact with containers is controlled
by the container runtime (or runtimes, if you have configured more than one). Typically,
the container runtime exposes a Unix socket that the kubelet can access. An attacker with
access to this socket can launch new containers or interact with running containers.
-->
On each node in a Kubernetes cluster, access to interact with containers is controlled
by the container runtime. Typically, the container runtime exposes a Unix socket that the
kubelet can access. An attacker with access to this socket can launch new containers or
interact with running containers.

<!--
At the cluster level, the impact of this access depends on whether the containers that
run on the compromised node have access to Secrets or other confidential
data that an attacker could use to escalate privileges to other worker nodes or to
control plane components.
-->
At the cluster level, the impact of this access depends on whether the containers that
run on the compromised node have access to Secrets or other confidential data that an
attacker could use to escalate privileges to other worker nodes or to control plane components.

<!--
### Mitigations {#runtime-socket-mitigations}
-->
### Mitigations {#runtime-socket-mitigations}

<!--
- Ensure that you tightly control filesystem access to container runtime sockets.
  When possible, restrict this access to the `root` user.
- Isolate the kubelet from other components running on the node, using
  mechanisms such as Linux kernel namespaces.
- Ensure that you restrict or forbid the use of [`hostPath` mounts](/docs/concepts/storage/volumes/#hostpath)
  that include the container runtime socket, either directly or by mounting a parent
  directory. Also `hostPath` mounts must be set as read-only to mitigate risks
  of attackers bypassing directory restrictions.
- Restrict user access to nodes, and especially restrict superuser access to nodes.
-->
- Ensure that you tightly control filesystem access to the container runtime socket.
  When possible, restrict this access to the `root` user.
- Isolate the kubelet from other components running on the node, using mechanisms such as Linux kernel namespaces.
- Ensure that you restrict or forbid the use of [`hostPath` mounts](/zh-cn/docs/concepts/storage/volumes/#hostpath)
  that include the container runtime socket, either directly or by mounting a parent directory.
  Also, `hostPath` mounts must be set as read-only to mitigate the risk of attackers bypassing directory restrictions.
- Restrict user access to nodes, and especially restrict superuser access to nodes.

@ -99,13 +99,13 @@ Here are links to some of the popular cloud providers' security documentation:

IaaS Provider        | Link |
-------------------- | ------------ |
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
Amazon Web Services | https://aws.amazon.com/security/ |
Google Cloud Platform | https://cloud.google.com/security/ |
Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety.html |
Amazon Web Services | https://aws.amazon.com/security |
Google Cloud Platform | https://cloud.google.com/security |
Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety |
IBM Cloud | https://www.ibm.com/cloud/security |
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
Oracle Cloud Infrastructure | https://www.oracle.com/security/ |
VMWare VSphere | https://www.vmware.com/security/hardening-guides.html |
Oracle Cloud Infrastructure | https://www.oracle.com/security |
VMWare VSphere | https://www.vmware.com/security/hardening-guides |

{{< /table >}}

@ -216,7 +216,7 @@ Area of Concern for Containers | Recommendation |
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation.
-->
## Containers {#container}

@ -84,7 +84,7 @@ predefined Pod Security Standard levels you want to use for a namespace. The lab
defines what action the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
takes if a potential violation is detected:
-->
## Pod Security Admission labels for namespaces
## Pod Security Admission labels for namespaces {#pod-security-admission-labels-for-namespaces}

Once the feature is enabled, or the webhook is installed, you can configure namespaces to define the
Pod Security admission control mode for each namespace.
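A minimal sketch of such a namespace configuration, assuming the `baseline` level purely as an example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # hypothetical namespace
  labels:
    # enforce: reject Pods that violate the chosen Pod Security Standard
    pod-security.kubernetes.io/enforce: baseline
    # warn: surface violations of a stricter level without rejecting the Pod
    pod-security.kubernetes.io/warn: restricted
```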
@ -0,0 +1,749 @@
---
title: Security Checklist
description: >
  Baseline checklist for ensuring security in Kubernetes clusters.
content_type: concept
weight: 100
---
<!--
title: Security Checklist
description: >
  Baseline checklist for ensuring security in Kubernetes clusters.
content_type: concept
weight: 100
-->

<!-- overview -->
<!--
This checklist aims at providing a basic list of guidance with links to more
comprehensive documentation on each topic. It does not claim to be exhaustive
and is meant to evolve.

On how to read and use this document:

- The order of topics does not reflect an order of priority.
- Some checklist items are detailed in the paragraph below the list of each section.
-->
This checklist aims to provide a basic list of guidance, with links to more
comprehensive documentation on each topic. It does not claim to be exhaustive
and is meant to evolve.

On how to read and use this document:
- The order of topics does not reflect an order of priority.
- Some checklist items are detailed in the paragraph below the list of each section.

{{< caution >}}
<!--
Checklists are **not** sufficient for attaining a good security posture on their
own. A good security posture requires constant attention and improvement, but a
checklist can be the first step on the never-ending journey towards security
preparedness. Some of the recommendations in this checklist may be too
restrictive or too lax for your specific security needs. Since Kubernetes
security is not "one size fits all", each category of checklist items should be
evaluated on its merits.
-->
Checklists are **not** sufficient for attaining a good security posture on their own.
A good security posture requires constant attention and improvement; a checklist can be
the first step on the never-ending journey towards security preparedness.
Some of the recommendations in this checklist may be too restrictive or too lax for
your specific security needs. Since Kubernetes security is not "one size fits all",
each category of checklist items should be evaluated on its merits.
{{< /caution >}}

<!-- body -->

<!--
## Authentication & Authorization

- [ ] `system:masters` group is not used for user or component authentication after bootstrapping.
- [ ] The kube-controller-manager is running with `--use-service-account-credentials`
  enabled.
- [ ] The root certificate is protected (either an offline CA, or a managed
  online CA with effective access controls).
- [ ] Intermediate and leaf certificates have an expiry date no more than 3
  years in the future.
- [ ] A process exists for periodic access review, and reviews occur no more
  than 24 months apart.
- [ ] The [Role Based Access Control Good Practices](/docs/concepts/security/rbac-good-practices/)
  is followed for guidance related to authentication and authorization.
-->
## Authentication & Authorization {#authentication-authorization}

- [ ] The `system:masters` group is not used for user or component authentication after bootstrapping.
- [ ] The kube-controller-manager is running with `--use-service-account-credentials` enabled.
- [ ] The root certificate is protected (either an offline CA, or a managed online CA with effective access controls).
- [ ] Intermediate and leaf certificates have an expiry date no more than 3 years in the future.
- [ ] A process exists for periodic access review, and reviews occur no more than 24 months apart.
- [ ] The [Role Based Access Control Good Practices](/zh-cn/docs/concepts/security/rbac-good-practices/) are followed for guidance related to authentication and authorization.

<!--
After bootstrapping, neither users nor components should authenticate to the
Kubernetes API as `system:masters`. Similarly, running all of
kube-controller-manager as `system:masters` should be avoided. In fact,
`system:masters` should only be used as a break-glass mechanism, as opposed to
an admin user.
-->
After bootstrapping, neither users nor components should authenticate to the
Kubernetes API as `system:masters`. Similarly, running any kube-controller-manager
as `system:masters` should be avoided. In fact, `system:masters` should only be used
as a break-glass mechanism, as opposed to an admin user.

<!--
## Network security

- [ ] CNI plugins in-use supports network policies.
- [ ] Ingress and egress network policies are applied to all workloads in the
  cluster.
- [ ] Default network policies within each namespace, selecting all pods, denying
  everything, are in place.
- [ ] If appropriate, a service mesh is used to encrypt all communications inside of the cluster.
- [ ] The Kubernetes API, kubelet API and etcd are not exposed publicly on Internet.
- [ ] Access from the workloads to the cloud metadata API is filtered.
- [ ] Use of LoadBalancer and ExternalIPs is restricted.
-->
## Network security {#network-security}

- [ ] The CNI plugin in use supports network policies.
- [ ] Ingress and egress network policies are applied to all workloads in the cluster.
- [ ] Default network policies within each namespace, selecting all Pods and denying everything, are in place.
- [ ] If appropriate, a service mesh is used to encrypt all communications inside the cluster.
- [ ] The Kubernetes API, kubelet API, and etcd are not exposed publicly on the Internet.
- [ ] Access from workloads to the cloud metadata API is filtered.
- [ ] Use of LoadBalancer and ExternalIPs is restricted.

<!--
A number of [Container Network Interface (CNI) plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
provide the functionality to
restrict network resources that pods may communicate with. This is most commonly done
through [Network Policies](/docs/concepts/services-networking/network-policies/)
which provide a namespaced resource to define rules. Default network policies
blocking everything egress and ingress, in each namespace, selecting all the
pods, can be useful to adopt an allow list approach, ensuring that no workload
is missed.
-->
A number of [Container Network Interface (CNI) plugins](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
provide the functionality to restrict the network resources that Pods may communicate with.
This is most commonly done through [network policies](/zh-cn/docs/concepts/services-networking/network-policies/),
which provide a namespaced resource to define rules.
Default network policies that block all egress and ingress, in each namespace, selecting all Pods,
can be useful for adopting an allow-list approach, ensuring that no workload is missed.
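As an illustrative sketch, a default deny-all policy for one namespace could look like this (the namespace name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-namespace   # placeholder
spec:
  podSelector: {}                # selects every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress or egress rules defined: all traffic is denied by default
```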
|
||||
|
||||
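
As a concrete starting point, a minimal default-deny policy for one namespace might look like the following sketch (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace   # illustrative; create one per namespace
spec:
  podSelector: {}           # an empty selector matches every Pod in the namespace
  policyTypes:              # listing both types denies all ingress and egress by default
    - Ingress
    - Egress
```

Individual workloads then need their own allow-listing policies on top of this baseline.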

Not all CNI plugins provide encryption in transit. If the chosen plugin lacks
this feature, an alternative solution could be to use a service mesh to provide
that functionality.

The etcd datastore of the control plane should have controls to limit access
and should not be publicly exposed on the Internet. Furthermore, mutual TLS
(mTLS) should be used to communicate securely with it. The certificate
authority for this should be unique to etcd.

External Internet access to the Kubernetes API server should be restricted so
that the API is not exposed publicly. Be careful, as many managed Kubernetes
distributions expose the API server publicly by default. You can then use a
bastion host to access the server.

Access to the [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
API should be restricted and not publicly exposed; the default authentication
and authorization settings, when no configuration file is specified with the
`--config` flag, are overly permissive.

If a cloud provider is used for hosting Kubernetes, access from pods to the
cloud metadata API `169.254.169.254` should also be restricted or blocked if
it is not needed, because it may leak information.

For restricted LoadBalancer and ExternalIPs use, see
[CVE-2020-8554: Man in the middle using LoadBalancer or ExternalIPs](https://github.com/kubernetes/kubernetes/issues/97076)
and the [DenyServiceExternalIPs admission controller](/docs/reference/access-authn-authz/admission-controllers/#denyserviceexternalips)
for further information.

## Pod security {#pod-security}

- [ ] RBAC rights to `create`, `update`, `patch`, `delete` workloads are only granted if necessary.
- [ ] An appropriate Pod Security Standards policy is applied for all namespaces and enforced.
- [ ] A memory limit is set for the workloads, with a limit equal to or inferior to the request.
- [ ] A CPU limit might be set on sensitive workloads.
- [ ] For nodes that support it, Seccomp is enabled with an appropriate syscall profile for programs.
- [ ] For nodes that support it, AppArmor or SELinux is enabled with an appropriate profile for programs.

RBAC authorization is crucial, but
[cannot be granular enough to have authorization on the Pods' resources](/docs/concepts/security/rbac-good-practices/#workload-creation)
(or on any resource that manages Pods). The only granularity is the API verbs
on the resource itself, for example, `create` on Pods. Without additional
admission control, the authorization to create these resources allows direct,
unrestricted access to the schedulable nodes of a cluster.

The [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
define three different policies, privileged, baseline and restricted, that
limit how fields can be set in the `PodSpec` regarding security. These
standards can be enforced at the namespace level with the new
[Pod Security](/docs/concepts/security/pod-security-admission/) admission,
enabled by default, or by a third-party admission webhook. Please note that,
contrary to the removed PodSecurityPolicy admission it replaces,
[Pod Security](/docs/concepts/security/pod-security-admission/)
admission can be easily combined with admission webhooks and external services.

The Pod Security admission `restricted` policy, the most restrictive policy of
the [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
set, [can operate in several modes](/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces),
`warn`, `audit` or `enforce`, to gradually apply the most appropriate
[security context](/docs/tasks/configure-pod-container/security-context/)
according to security best practices. Nevertheless, a Pod's
[security context](/docs/tasks/configure-pod-container/security-context/)
should be separately investigated to limit the privileges and access the Pod
may have on top of the predefined security standards, for specific use cases.
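
For example, a namespace could enforce the `restricted` standard while also surfacing warnings and audit annotations, with labels along these lines (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                # illustrative namespace name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```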

For a hands-on tutorial on [Pod Security](/docs/concepts/security/pod-security-admission/),
see the blog post
[Kubernetes 1.23: Pod Security Graduates to Beta](/blog/2021/12/09/pod-security-admission-beta/).

[Memory and CPU limits](/docs/concepts/configuration/manage-resources-containers/)
should be set in order to restrict the memory and CPU resources a pod can
consume on a node, and therefore prevent potential DoS attacks from malicious
or breached workloads. Such a policy can be enforced by an admission
controller. Please note that CPU limits will throttle usage and thus can have
unintended effects on auto-scaling features or efficiency, i.e., running the
process in best effort with the CPU resources available.

{{< caution >}}
A memory limit superior to the request can expose the whole node to OOM issues.
{{< /caution >}}
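
Tying the checklist items together, a container spec that sets a memory limit equal to its request, plus a CPU limit, could look like this sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-limited-app           # illustrative name
spec:
  containers:
    - name: app
      image: registry.example/app:1.0  # illustrative image
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "256Mi"   # equal to the request, avoiding the OOM caveat above
          cpu: "500m"       # optional; note the throttling trade-off described above
```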

### Enabling Seccomp {#enabling-seccomp}

Seccomp can improve the security of your workloads by reducing the Linux
kernel syscall attack surface available inside containers. The seccomp filter
mode leverages BPF to create allow or deny lists of specific syscalls, named
profiles. Those seccomp profiles can be enabled on individual workloads;
[a security tutorial is available](/docs/tutorials/security/seccomp/). In
addition, the [Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator)
is a project that facilitates the management and use of seccomp in clusters.

For historical context, please note that Docker has been using
[a default seccomp profile](https://docs.docker.com/engine/security/seccomp/)
to allow only a restricted set of syscalls since 2016, starting with
[Docker Engine 1.10](https://www.docker.com/blog/docker-engine-1-10-security/),
but Kubernetes still does not confine workloads by default. The default
seccomp profile can be found
[in containerd](https://github.com/containerd/containerd/blob/main/contrib/seccomp/seccomp_default.go)
as well. Fortunately, [Seccomp Default](/blog/2021/08/25/seccomp-default/), a
new alpha feature that uses a default seccomp profile for all workloads, can
now be enabled and tested.
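
To pin an individual workload to the runtime's default profile without relying on that cluster-wide feature, the Pod-level `securityContext` can be used, as in this sketch (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-default-demo           # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # use the container runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example/app:1.0  # illustrative image
```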

{{< note >}}
Seccomp is only available on Linux nodes.
{{< /note >}}

### Enabling AppArmor or SELinux {#enabling-appArmor-or-SELinux}

#### AppArmor

[AppArmor](https://apparmor.net/) is a Linux kernel security module that can
provide an easy way to implement Mandatory Access Control (MAC) and better
auditing through system logs. To [enable AppArmor in Kubernetes](/docs/tutorials/security/apparmor/),
at least version 1.4 is required. Like seccomp, AppArmor is also configured
through profiles, where each profile is either running in enforcing mode,
which blocks access to disallowed resources, or complain mode, which only
reports violations. AppArmor profiles are enforced on a per-container basis,
with an annotation, allowing for processes to gain just the right privileges.
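
A per-container AppArmor profile is selected with an annotation keyed by the container name, as in this sketch (the profile `my-apparmor-profile` is hypothetical and must already be loaded on the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo                  # illustrative name
  annotations:
    # key format: container.apparmor.security.beta.kubernetes.io/<container_name>
    container.apparmor.security.beta.kubernetes.io/app: localhost/my-apparmor-profile
spec:
  containers:
    - name: app
      image: registry.example/app:1.0  # illustrative image
```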

{{< note >}}
AppArmor is only available on Linux nodes, and enabled in
[some Linux distributions](https://gitlab.com/apparmor/apparmor/-/wikis/home#distributions-and-ports).
{{< /note >}}

#### SELinux

[SELinux](https://github.com/SELinuxProject/selinux-notebook/blob/main/src/selinux_overview.md)
is also a Linux kernel security module that can provide a mechanism for
supporting access control security policies, including Mandatory Access
Controls (MAC). SELinux labels can be assigned to containers or pods
[via their `securityContext` section](/docs/tasks/configure-pod-container/security-context/#assign-selinux-labels-to-a-container).
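
For example, assigning an SELinux level via the Pod `securityContext` might look like this sketch (the MCS label is illustrative and must be valid for your policy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo                   # illustrative name
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"            # illustrative MCS label
  containers:
    - name: app
      image: registry.example/app:1.0  # illustrative image
```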

{{< note >}}
SELinux is only available on Linux nodes, and enabled in
[some Linux distributions](https://en.wikipedia.org/wiki/Security-Enhanced_Linux#Implementations).
{{< /note >}}

## Pod placement {#pod-placement}

- [ ] Pod placement is done in accordance with the tiers of sensitivity of the application.
- [ ] Sensitive applications are running isolated on nodes or with specific sandboxed runtimes.

Pods that are on different tiers of sensitivity, for example, an application
pod and the Kubernetes API server, should be deployed onto separate nodes. The
purpose of node isolation is to prevent an application container breakout from
directly providing access to applications with a higher level of sensitivity,
from which an attacker could easily pivot within the cluster. This separation
should be enforced to prevent pods accidentally being deployed onto the same
node. It could be enforced with the following features (a combined sketch
follows the list):

[Node Selectors](/docs/concepts/scheduling-eviction/assign-pod-node/)
: Key-value pairs, as part of the pod specification, that specify which nodes
  to deploy onto. These can be enforced at the namespace and cluster level with
  the [PodNodeSelector](/docs/reference/access-authn-authz/admission-controllers/#podnodeselector)
  admission controller.

[PodTolerationRestriction](/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction)
: An admission controller that allows administrators to restrict permitted
  [tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
  within a namespace. Pods within a namespace may only utilize the tolerations
  specified on the namespace object annotation keys that provide a set of
  default and allowed tolerations.

[RuntimeClass](/docs/concepts/containers/runtime-class/)
: RuntimeClass is a feature for selecting the container runtime configuration.
  The container runtime configuration is used to run a Pod's containers and
  can provide more or less isolation from the host at the cost of performance
  overhead.
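
Combining these features, a sensitive workload might be pinned to dedicated nodes and a sandboxed runtime with a spec like this sketch (the node label and the `gvisor` RuntimeClass are illustrative and must exist in your cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-app                  # illustrative name
spec:
  nodeSelector:
    sensitivity: high                  # illustrative node label for the isolated node pool
  runtimeClassName: gvisor             # illustrative RuntimeClass providing a sandboxed runtime
  containers:
    - name: app
      image: registry.example/app:1.0  # illustrative image
```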

## Secrets {#secrets}

- [ ] ConfigMaps are not used to hold confidential data.
- [ ] Encryption at rest is configured for the Secret API.
- [ ] If appropriate, a mechanism to inject secrets stored in third-party storage is deployed and available.
- [ ] Service account tokens are not mounted in pods that don't require them.
- [ ] [Bound service account token volume](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)
  is in use instead of non-expiring tokens.

Secrets required for pods should be stored within Kubernetes Secrets, as
opposed to alternatives such as ConfigMaps. Secret resources stored within
etcd should be [encrypted at rest](/docs/tasks/administer-cluster/encrypt-data/).

Pods needing secrets should have these automatically mounted through volumes,
preferably stored in memory as with the [`emptyDir.medium` option](/docs/concepts/storage/volumes/#emptydir).
A mechanism can be used to also inject secrets from third-party storage as a
volume, like the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/).
This should be done preferentially as compared to providing the pod's service
account RBAC access to Secrets, which would allow adding secrets into the pod
as environment variables or files. Please note that the environment variable
method might be more prone to leakage due to crash dumps in logs and the
non-confidential nature of environment variables in Linux, as opposed to the
permission mechanism on files.
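
A minimal sketch of mounting a Secret as a read-only file volume (the Secret and Pod names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo             # illustrative name
spec:
  containers:
    - name: app
      image: registry.example/app:1.0  # illustrative image
      volumeMounts:
        - name: creds
          mountPath: "/etc/creds"
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-credentials    # illustrative Secret name
```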

Service account tokens should not be mounted into pods that do not require
them. This can be configured by setting
[`automountServiceAccountToken`](/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)
to `false`, either within the service account, to apply throughout the
namespace, or specifically for a pod. For Kubernetes v1.22 and above, use
[Bound Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)
for time-bound service account credentials.
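
Disabling token automounting can be sketched at both levels; the Pod-level field takes precedence when both are set (names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                          # illustrative name
automountServiceAccountToken: false     # default for Pods using this service account
---
apiVersion: v1
kind: Pod
metadata:
  name: tokenless-pod                   # illustrative name
spec:
  serviceAccountName: app-sa
  automountServiceAccountToken: false   # explicit per-Pod override
  containers:
    - name: app
      image: registry.example/app:1.0   # illustrative image
```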

## Images {#images}

- [ ] Minimize unnecessary content in container images.
- [ ] Container images are configured to be run as an unprivileged user.
- [ ] References to container images are made by sha256 digests (rather than
  tags) or the provenance of the image is validated by verifying the image's
  digital signature at deploy time [via admission control](/docs/tasks/administer-cluster/verify-signed-images/#verifying-image-signatures-with-admission-controller).
- [ ] Container images are regularly scanned during creation and in deployment, and
  known vulnerable software is patched.

Container images should contain the bare minimum needed to run the programs
they package. Preferably, only the program and its dependencies, building the
image from the minimal possible base. In particular, images used in production
should not contain shells or debugging utilities, as an
[ephemeral debug container](/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container)
can be used for troubleshooting.

Build images to directly start with an unprivileged user by using the
[`USER` instruction in the Dockerfile](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user).
The [Security Context](/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod)
allows a container image to be started with a specific user and group via
`runAsUser` and `runAsGroup`, even if this is not specified in the image
manifest. However, the file permissions in the image layers might make it
impossible to just start the process with a new unprivileged user without
image modification.

Avoid using image tags to reference an image, especially the `latest` tag, as
the image behind a tag can be easily modified in a registry. Prefer using the
complete `sha256` digest, which is unique to the image manifest. This policy
can be enforced via an [ImagePolicyWebhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook).
Image signatures can also be automatically
[verified with an admission controller](/docs/tasks/administer-cluster/verify-signed-images/#verifying-image-signatures-with-admission-controller)
at deploy time to validate their authenticity and integrity.
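
Referencing an image by digest rather than by tag can be sketched as follows (the registry, image and digest value are illustrative; substitute the real digest reported by your registry):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-image-demo   # illustrative name
spec:
  containers:
    - name: app
      # digest value is illustrative; a digest uniquely identifies one image manifest
      image: registry.example/app@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```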

Scanning a container image can prevent critical vulnerabilities from being
deployed to the cluster alongside the container image. Image scanning should
be completed before deploying a container image to a cluster and is usually
done as part of the deployment process in a CI/CD pipeline. The purpose of an
image scan is to obtain information about possible vulnerabilities and their
prevention in the container image, such as a
[Common Vulnerability Scoring System (CVSS)](https://www.first.org/cvss/)
score. If the result of the image scans is combined with the pipeline
compliance rules, only properly patched container images will end up in
production.

## Admission controllers {#admission-controllers}

- [ ] An appropriate selection of admission controllers is enabled.
- [ ] A pod security policy is enforced by the Pod Security Admission and/or a webhook admission controller.
- [ ] The admission chain plugins and webhooks are securely configured.

Admission controllers can help to improve the security of the cluster.
However, they can present risks themselves, as they extend the API server and
[should be properly secured](/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/).

The following lists present a number of admission controllers that could be
considered to enhance the security posture of your cluster and application.
They include controllers that may be referenced in other parts of this
document.

This first group of admission controllers includes plugins
[enabled by default](/docs/reference/access-authn-authz/admission-controllers/#which-plugins-are-enabled-by-default);
consider leaving them enabled unless you know what you are doing:

[`CertificateApproval`](/docs/reference/access-authn-authz/admission-controllers/#certificateapproval)
: Performs additional authorization checks to ensure the approving user has
  permission to approve certificate requests.

[`CertificateSigning`](/docs/reference/access-authn-authz/admission-controllers/#certificatesigning)
: Performs additional authorization checks to ensure the signing user has
  permission to sign certificate requests.

[`CertificateSubjectRestriction`](/docs/reference/access-authn-authz/admission-controllers/#certificatesubjectrestriction)
: Rejects any certificate request that specifies a 'group' (or 'organization
  attribute') of `system:masters`.

[`LimitRanger`](/docs/reference/access-authn-authz/admission-controllers/#limitranger)
: Enforces the LimitRange API constraints.

[`MutatingAdmissionWebhook`](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)
: Allows the use of custom controllers through webhooks; these controllers may
  mutate requests that they review.

[`PodSecurity`](/docs/reference/access-authn-authz/admission-controllers/#podsecurity)
: Replacement for Pod Security Policy; restricts the security contexts of
  deployed Pods.

[`ResourceQuota`](/docs/reference/access-authn-authz/admission-controllers/#resourcequota)
: Enforces resource quotas to prevent over-usage of resources.

[`ValidatingAdmissionWebhook`](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook)
: Allows the use of custom controllers through webhooks; these controllers do
  not mutate requests that they review.

The second group includes plugins that are not enabled by default but are in
general availability state and recommended to improve your security posture:

[`DenyServiceExternalIPs`](/docs/reference/access-authn-authz/admission-controllers/#denyserviceexternalips)
: Rejects all net-new usage of the `Service.spec.externalIPs` field; existing
  Services are unaffected. This is a mitigation for
  [CVE-2020-8554: Man in the middle using LoadBalancer or ExternalIPs](https://github.com/kubernetes/kubernetes/issues/97076).

[`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
: Restricts a kubelet's permissions to only modify the Pod API resources it
  owns or the Node API resource that represents itself. It also prevents the
  kubelet from using the `node-restriction.kubernetes.io/` annotation, which
  can be used by an attacker with access to the kubelet's credentials to
  influence pod placement to the controlled node.

The third group includes plugins that are not enabled by default but could be
considered for certain use cases:

[`AlwaysPullImages`](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)
: Enforces the usage of the latest version of a tagged image and ensures that
  the deployer has permissions to use the image.

[`ImagePolicyWebhook`](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook)
: Allows enforcing additional controls for images through webhooks.

## What's next

- [RBAC Good Practices](/docs/concepts/security/rbac-good-practices/) for
  further information on authorization.
- [Cluster Multi-tenancy guide](/docs/concepts/security/multi-tenancy/) for
  configuration option recommendations and best practices on multi-tenancy.
- [Blog post "A Closer Look at NSA/CISA Kubernetes Hardening Guidance"](/blog/2021/10/05/nsa-cisa-kubernetes-hardening-guidance/#building-secure-container-images)
  for a complementary resource on hardening Kubernetes clusters.

@ -3,6 +3,15 @@ title: Connecting Applications with Services
content_type: concept
weight: 30
---
<!--
reviewers:
- caesarxuchao
- lavalamp
- thockin
title: Connecting Applications with Services
content_type: concept
weight: 30
-->

<!-- overview -->

@ -117,7 +126,6 @@ service/my-nginx exposed

This is equivalent to `kubectl apply -f` the following yaml:

{{< codenew file="service/networking/nginx-svc.yaml" >}}

@ -149,17 +157,24 @@ my-nginx ClusterIP 10.0.162.149 <none> 80/TCP 21s

As mentioned previously, a Service is backed by a group of Pods. These Pods are
exposed through `endpoints`. The Service's selector will be evaluated continuously
and the results will be POSTed to an Endpoints object also named `my-nginx`.
When a Pod dies, it is automatically removed from the endpoints, and new Pods
matching the Service's selector will automatically get added to the endpoints.
exposed through
{{<glossary_tooltip term_id="endpoint-slice" text="EndpointSlices">}}.
The Service's selector will be evaluated continuously and the results will be POSTed
to an EndpointSlice that is connected to the Service using a
{{< glossary_tooltip text="label" term_id="label" >}}.
When a Pod dies, it is automatically removed from the EndpointSlices that contain it
as an endpoint. New Pods that match the Service's selector will automatically get added
to an EndpointSlice for that Service.
Check the endpoints, and note that the IPs are the same as the Pods created in
the first step:

```shell
kubectl describe svc my-nginx

@ -178,11 +193,11 @@ Session Affinity: None
Events: <none>
```
```shell
kubectl get ep my-nginx
kubectl get endpointslices -l kubernetes.io/service-name=my-nginx
```
```
NAME           ENDPOINTS                     AGE
my-nginx       10.244.2.5:80,10.244.3.4:80   1m
NAME             ADDRESSTYPE   PORTS   ENDPOINTS               AGE
my-nginx-7vzhx   IPv4          80      10.244.2.5,10.244.3.4   21s
```

@ -191,11 +206,9 @@ any node in your cluster. Note that the Service IP is completely virtual, it

You should now be able to curl the nginx Service on `<CLUSTER-IP>:<PORT>` from
any node in your cluster. Note that the Service IP is completely virtual; it
never hits the wire. If you're curious about how this works, you can read more
about the [service proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies).

## Accessing the Service {#accessing-the-service}

@ -207,14 +220,14 @@ and DNS. The former works out of the box while the latter requires the

Kubernetes supports two primary modes of finding a Service: environment
variables and DNS. The former works out of the box, while the latter requires
the [CoreDNS cluster addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns).

{{< note >}}
If the service environment variables are not desired (because of possible
clashes with expected program variables, too many variables to process, use of
DNS only, etc.), you can disable this mode by setting the `enableServiceLinks`
flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
{{< /note >}}

@ -227,7 +240,7 @@ When a Pod runs on a Node, the kubelet adds a set of environment variables for

### Environment variables {#environment-variables}

When a Pod runs on a Node, the kubelet adds a set of environment variables for
each active Service. This introduces an ordering problem. To see why, inspect
the environment of your running nginx Pods (your Pod name will be different):

@ -252,7 +265,6 @@ replicas. This will give you scheduler-level Service spreading of your Pods

Note that there is no mention of the Service you created in the environment
variables. This is because the replicas were created before the Service.
Another disadvantage of doing this is that the scheduler might place all the
Pods on the same machine, which will take the entire Service offline if that
machine goes down. To fix this, we can kill the 2 Pods and wait for the
Deployment to recreate them.

@ -273,7 +285,6 @@ my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8

You may notice that the pods have different names, since they are killed and recreated.

```shell

@ -292,7 +303,6 @@ KUBERNETES_SERVICE_PORT_HTTPS=443

Kubernetes offers a DNS cluster addon Service that automatically assigns DNS
names to other Services. You can check if it's running on your cluster:

@ -327,7 +337,6 @@ Hit enter for command prompt

Then, hit enter and run `nslookup my-nginx`:

```shell

@ -350,7 +359,6 @@ Till now we have only accessed the nginx server from within the cluster. Before

You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:

## Securing the Service {#securing-the-service}

Till now we have only accessed the nginx server from within the cluster. Before
exposing the Service to the Internet, we want to make sure the communication
channel is secure.

@ -574,7 +582,6 @@ $ curl https://<EXTERNAL-IP>:<NODE-PORT> -k

Let's now recreate the Service to use a cloud load balancer. Change the `Type`
of the `my-nginx` Service from `NodePort` to `LoadBalancer`:

@ -601,7 +608,6 @@ hostname, not an IP. It's too long to fit in the standard `kubectl get svc`
output, in fact, so you'll need to do `kubectl describe service my-nginx` to
see it. You'll see something like this:

The IP address in the `EXTERNAL-IP` column is the one that is available on the
public internet. The `CLUSTER-IP` is only available inside your cluster/private
cloud network.

Note that on AWS, a Service of type `LoadBalancer` creates an ELB, which uses a
(long) hostname, not an IP.

@ -1,7 +1,7 @@
---
title: DNS for Services and Pods
content_type: concept
weight: 60
weight: 80
description: >-
  Your workload can discover Services within your cluster using DNS;
  this page explains how that works.
---

@ -11,7 +11,7 @@ reviewers:
- thockin
title: DNS for Services and Pods
content_type: concept
weight: 60
weight: 80
description: >-
  Your workload can discover Services within your cluster using DNS;
  this page explains how that works.

@ -29,14 +29,10 @@ Kubernetes creates DNS records for Services and Pods.

<!-- body -->

## Introduction {#introduction}

Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures
the kubelets to tell individual containers to use the DNS Service's IP to
resolve DNS names.

@ -55,7 +51,7 @@ A DNS query may return different results based on the namespace of the Pod making
it. DNS queries that don't specify a namespace are limited to the Pod's
namespace. Access Services in other namespaces by specifying it in the DNS query.

For example, consider a Pod in a `test` namespace. A `data` service is in
For example, consider a Pod in a `test` namespace. A `data` Service is in
the `prod` namespace.

A query for `data` returns no results, because it uses the Pod's `test` namespace.

@ -81,7 +77,7 @@ DNS queries may be expanded using the Pod's `/etc/resolv.conf`. Kubelet
sets this file for each Pod. For example, a query for just `data` may be
expanded to `data.test.svc.cluster.local`. The values of the `search` option
are used to expand queries. To learn more about DNS queries, see
[the `resolv.conf` manual page.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html)

DNS queries may be expanded using the Pod's `/etc/resolv.conf`. The kubelet
generates this file for each Pod. For example, a query for just `data` may be
expanded to `data.test.svc.cluster.local`.

@ -143,7 +139,7 @@ Services, this resolves to the set of IPs of the Pods selected by the Service.
Clients are expected to consume the set or else use standard round-robin
selection from the set.

### Services
### Service

#### A/AAAA records {#a-aaaa-records}

@ -181,7 +177,7 @@ SRV records take the form `_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster-domain.example`

which contains the Pod port number and a domain name of the form
`auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example`.

## Pods
## Pod

<!--
### A/AAAA records

@ -208,11 +204,11 @@ Any Pods exposed by a Service have the following DNS resolution available:

For example, a Pod in the `default` namespace with the IP address 172.17.0.3,
in a cluster with the domain name `cluster.local`, has the DNS name:

`172-17-0-3.default.pod.cluster.local`.
`172-17-0-3.default.pod.cluster.local`

Any Pods exposed by a Service have the following DNS resolution available:

`pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`.
`pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`

### Pod's hostname and subdomain fields

@ -315,10 +311,11 @@ DNS provides an A or AAAA record for this name, pointing to the Pod's IP

The Pods "`busybox1`" and "`busybox2`" each have their own A or AAAA records.

The Endpoints object can specify the `hostname` for any endpoint addresses,
along with its IP.
An {{<glossary_tooltip term_id="endpoint-slice" text="EndpointSlice">}} can specify
the DNS hostname for any endpoint addresses, along with its IP.

Because A or AAAA records are not created for Pod names, `hostname` is required
for the Pod's A or AAAA

@ -338,8 +335,6 @@ record unless `publishNotReadyAddresses=True` is set on the Service.

### Pod's setHostnameAsFQDN field {#pod-sethostnameasfqdn-field}

{{< feature-state for_k8s_version="v1.22" state="stable" >}}

@ -351,8 +346,8 @@ When a Pod is configured to have fully qualified domain name (FQDN), its hostnam

When a Pod is configured to have a fully qualified domain name (FQDN), its
hostname is the short hostname. For example, if you have a Pod with the fully
qualified domain name `busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`,
then by default the `hostname` command inside that Pod returns `busybox-1`,
while the `hostname --fqdn` command returns the FQDN.

When you set `setHostnameAsFQDN: true` in the Pod spec, the kubelet writes the
Pod's FQDN into the hostname for that Pod's namespace. In this case, both
`hostname` and `hostname --fqdn` return the Pod's FQDN.
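
A minimal sketch of a Pod opting in to the FQDN hostname (the names are illustrative; `subdomain` must match a headless Service in the same namespace for the FQDN to resolve):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-1                 # illustrative name
spec:
  hostname: busybox-1             # illustrative short hostname
  subdomain: default-subdomain    # illustrative; should match a headless Service name
  setHostnameAsFQDN: true         # the hostname inside the Pod becomes the FQDN
  containers:
    - name: busybox
      image: busybox:1.28         # illustrative image
      command: ["sleep", "3600"]
```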

@ -364,16 +359,14 @@ In Linux, the hostname field of the kernel (the `nodename` field of `struct utsn

In Linux, the hostname field of the kernel (the `nodename` field of
`struct utsname`) is limited to 64 characters.

If a Pod enables this feature and its FQDN is longer than 64 characters, it
will fail to start. The Pod will remain in `Pending` status
(`ContainerCreating` as seen by `kubectl`), generating error events such as
"Failed to construct FQDN from Pod hostname and cluster domain, FQDN
`long-FQDN` is too long (64 characters is the max, 70 characters requested)".
One way of improving the user experience for this scenario is to create an
[admission webhook controller](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
to control FQDN size when users create top-level objects, for example, Deployment.

@ -409,9 +402,8 @@ following Pod-specific DNS policies. These policies are specified in the

DNS policies can be set on a per-Pod basis. Currently Kubernetes supports the
following Pod-specific DNS policies. These policies are specified in the
`dnsPolicy` field of a Pod spec:

- "`Default`": The Pod inherits the name resolution configuration from the
  node that the Pod runs on. See
  [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers)
  for more details.
- "`ClusterFirst`": Any DNS query that does not match the configured cluster
  domain suffix (such as "www.kubernetes.io") is forwarded to an upstream
  nameserver inherited from the node. Cluster administrators may have extra
  stub-domain and upstream DNS servers configured. See the
  [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers)

@ -419,15 +411,15 @@ DNS policies can be set on a per-Pod basis. Currently Kubernetes supports the

- "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should
  explicitly set their DNS policy to "`ClusterFirstWithHostNet`".
  - Note: This is not supported on Windows. See [below](#dns-windows) for details.
- "`None`": This setting allows a Pod to ignore DNS settings from the
  Kubernetes environment. The Pod uses the DNS settings provided via its
  `dnsConfig` field. See the [Pod's DNS config](#pod-dns-config) section below.

{{< note >}}
"Default" is not the default DNS policy. If `dnsPolicy` is not
explicitly specified, then "ClusterFirst" is used.
{{< /note >}}

@ -459,9 +451,12 @@ spec:

### Pod's DNS config {#pod-dns-config}

{{< feature-state for_k8s_version="v1.14" state="stable" >}}

Pod's DNS config allows users more control over the DNS settings for a Pod.

The `dnsConfig` field is optional and it can work with any `dnsPolicy` settings.

@ -470,10 +465,6 @@ to be specified.

Below are the properties a user can specify in the `dnsConfig` field:
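
As a sketch of how these properties fit together with `dnsPolicy: "None"` (the resolver address and search domain are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo                # illustrative name
spec:
  dnsPolicy: "None"                    # ignore cluster DNS defaults entirely
  dnsConfig:
    nameservers:
      - 192.0.2.1                      # illustrative resolver (TEST-NET-1 address)
    searches:
      - my-namespace.svc.cluster-domain.example   # illustrative search domain
    options:
      - name: ndots
        value: "2"
  containers:
    - name: app
      image: registry.example/app:1.0  # illustrative image
```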

@ -544,6 +535,7 @@ kubectl exec -it dns-example -- cat /etc/resolv.conf

The output is similar to this:

```
nameserver 2001:db8:30::a
search default.svc.cluster-domain.example svc.cluster-domain.example cluster-domain.example

@ -551,26 +543,36 @@ options ndots:5
```

#### Expanded DNS Configuration

{{< feature-state for_k8s_version="1.22" state="alpha" >}}

By default, for Pod's DNS config, Kubernetes allows at most 6 search domains
and a list of search domains of up to 256 characters.

If the feature gate `ExpandedDNSConfig` is enabled for the kube-apiserver and
the kubelet, Kubernetes allows at most 32 search domains and a list of search
domains of up to 2048 characters.

## DNS search domain list limits {#dns-search-domain-list-limits}

{{< feature-state for_k8s_version="1.26" state="beta" >}}

Kubernetes itself does not limit the DNS config until the length of the search
domain list exceeds 32 or the total length of all search domains exceeds 2048.
This limit applies to the node's resolver configuration file, the Pod's DNS
config, and the merged DNS config respectively.

{{< note >}}
Some container runtimes of earlier versions may have their own restrictions on
the number of DNS search domains. Depending on the container runtime
environment, pods with a large number of DNS search domains may get stuck in
the pending state.

It is known that containerd v1.5.5 or earlier and CRI-O v1.21 or earlier have
this problem.
{{< /note >}}

## DNS resolution on Windows nodes {#dns-windows}

@ -613,6 +615,6 @@ a list of search domains of up to 2048 characters.

For guidance on administering DNS configurations, check
[Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/)

@ -8,7 +8,7 @@ feature:
  description: >
    Allocates IPv4 and IPv6 addresses to Pods and Services
content_type: concept
weight: 70
weight: 90
---

<!--
@ -27,7 +27,7 @@ reviewers:
- khenidak
- aramase
- bridgetkromhout
weight: 70
weight: 90
-->

<!-- overview -->

@ -247,11 +247,11 @@ These examples demonstrate the behavior of various dual-stack Service configurat

1. This Service specification does not explicitly define `.spec.ipFamilyPolicy`. When you create
   this Service, Kubernetes assigns a cluster IP for the Service from the first configured
   `service-cluster-ip-range` and sets the `.spec.ipFamilyPolicy` to `SingleStack`. ([Services
   without selectors](/docs/concepts/services-networking/service/#services-without-selectors) and
   [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors
   will behave in this same way.)
@ -263,14 +263,14 @@ These examples demonstrate the behavior of various dual-stack Service configurat
|
|||
1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When
|
||||
you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6
|
||||
addresses for the service. The control plane updates the `.spec` for the Service to record the IP
|
||||
address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned
|
||||
address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned
|
||||
IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from
|
||||
`.spec.ClusterIPs`.
|
||||
|
||||
|
||||
* For the `.spec.ClusterIP` field, the control plane records the IP address that is from the
|
||||
same address family as the first service cluster IP range.
|
||||
same address family as the first service cluster IP range.
|
||||
* On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list
|
||||
one address.
|
||||
one address.
|
||||
* On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy`
|
||||
behaves the same as `PreferDualStack`.
|
||||
-->
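
For reference, a minimal Service of the kind this example describes — `ipFamilyPolicy: PreferDualStack` set explicitly; the name and selector are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service      # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app: MyApp          # illustrative selector
  ports:
  - protocol: TCP
    port: 80
```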

@ -364,8 +364,8 @@ dual-stack.)

<!--
1. When dual-stack is enabled on a cluster, existing
[headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors
are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set
[headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are
configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set
`.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the
`--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to
`None`.
@ -473,8 +473,8 @@ Services can be changed from single-stack to dual-stack and from dual-stack to s

<!--
For [Headless Services without selectors](/docs/concepts/services-networking/service/#without-selectors)
and without `.spec.ipFamilyPolicy` explicitly set, the `.spec.ipFamilyPolicy` field defaults
to `RequireDualStack`.
and without `.spec.ipFamilyPolicy` explicitly set, the `.spec.ipFamilyPolicy` field defaults to
`RequireDualStack`.
-->
对于[不带选择算符的无头服务](/zh-cn/docs/concepts/services-networking/service/#without-selectors),
若没有显式设置 `.spec.ipFamilyPolicy`,则 `.spec.ipFamilyPolicy`

@ -34,38 +34,7 @@ Endpoints.
<!-- body -->

<!--
## Motivation

The Endpoints API has provided a simple and straightforward way of
tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle and
send more traffic to more backend Pods, limitations of that original API became
more visible.
Most notably, those included challenges with scaling to larger numbers of
network endpoints.
-->
## 动机 {#motivation}

Endpoints API 提供了在 Kubernetes 中跟踪网络端点的一种简单而直接的方法。遗憾的是,随着 Kubernetes
集群和{{< glossary_tooltip text="服务" term_id="service" >}}逐渐开始为更多的后端 Pod 处理和发送请求,
原来的 API 的局限性变得越来越明显。最重要的是那些因为要处理大量网络端点而带来的挑战。

<!--
Since all network endpoints for a Service were stored in a single Endpoints
resource, those resources could get quite large. That affected the performance
of Kubernetes components (notably the master control plane) and resulted in
significant amounts of network traffic and processing when Endpoints changed.
EndpointSlices help you mitigate those issues as well as provide an extensible
platform for additional features such as topological routing.
-->
由于任一 Service 的所有网络端点都保存在同一个 Endpoints 资源中,
这类资源可能变得非常巨大,而这一变化会影响到 Kubernetes
组件(比如主控组件)的性能,并在 Endpoints 变化时产生大量的网络流量和额外的处理。
EndpointSlice 能够帮助你缓解这一问题,
还能为一些诸如拓扑路由这类的额外功能提供一个可扩展的平台。

<!--
## EndpointSlice resources {#endpointslice-resource}
## EndpointSlice API {#endpointslice-resource}

In Kubernetes, an EndpointSlice contains references to a set of network
endpoints. The control plane automatically creates EndpointSlices
@ -77,10 +46,10 @@ Service name.
The name of a EndpointSlice object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

As an example, here's a sample EndpointSlice resource for the `example`
As an example, here's a sample EndpointSlice object, that's owned by the `example`
Kubernetes Service.
-->
## EndpointSlice 资源 {#endpointslice-resource}
## EndpointSlice API {#endpointslice-resource}

在 Kubernetes 中,`EndpointSlice` 包含对一组网络端点的引用。
控制面会自动为设置了{{< glossary_tooltip text="选择算符" term_id="selector" >}}的
@ -90,7 +59,7 @@ EndpointSlice 通过唯一的协议、端口号和 Service 名称将网络端点
EndpointSlice 的名称必须是合法的
[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。

例如,下面是 Kubernetes Service `example` 的 EndpointSlice 资源示例。
例如,下面是 Kubernetes Service `example` 所拥有的 EndpointSlice 对象示例。

```yaml
apiVersion: discovery.k8s.io/v1
@ -123,8 +92,7 @@ flag, up to a maximum of 1000.

EndpointSlices can act as the source of truth for
{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} when it comes to
how to route internal traffic. When enabled, they should provide a performance
improvement for services with large numbers of endpoints.
how to route internal traffic.
-->
默认情况下,控制面创建和管理的 EndpointSlice 将包含不超过 100 个端点。
你可以使用 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
@ -133,7 +101,6 @@ improvement for services with large numbers of endpoints.
当涉及如何路由内部流量时,EndpointSlice 可以充当
{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
的决策依据。
启用该功能后,在服务的端点数量庞大时会有可观的性能提升。

<!--
### Address types
@ -143,6 +110,10 @@ EndpointSlices support three address types:
* IPv4
* IPv6
* FQDN (Fully Qualified Domain Name)

Each `EndpointSlice` object represents a specific IP address type. If you have
a Service that is available via IPv4 and IPv6, there will be at least two
`EndpointSlice` objects (one for IPv4, and one for IPv6).
-->
### 地址类型

@ -152,6 +123,9 @@ EndpointSlice 支持三种地址类型:
* IPv6
* FQDN (完全合格的域名)

每个 `EndpointSlice` 对象代表一个特定的 IP 地址类型。如果你有一个支持 IPv4 和 IPv6 的 Service,
那么将至少有两个 `EndpointSlice` 对象(一个用于 IPv4,一个用于 IPv6)。
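
A minimal sketch of one such slice with `addressType: IPv4` — the name and address are illustrative; a dual-stack Service would carry a second slice with `addressType: IPv6`:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc                   # illustrative name
  labels:
    kubernetes.io/service-name: example
addressType: IPv4                     # one address type per EndpointSlice object
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"                        # illustrative address
  conditions:
    ready: true
```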

<!--
### Conditions

@ -434,7 +408,7 @@ getting replaced.
-->
在实践中,上面这种并非最理想的分布是很少出现的。大多数被 EndpointSlice
控制器处理的变更都是足够小的,可以添加到某已有 EndpointSlice 中去的。
并且,假使无法添加到已有的切片中,不管怎样都会快就会需要一个新的
并且,假使无法添加到已有的切片中,不管怎样都很快就会创建一个新的
EndpointSlice 对象。Deployment 的滚动更新为重新为 EndpointSlice
打包提供了一个自然的机会,所有 Pod 及其对应的端点在这一期间都会被替换掉。

@ -443,20 +417,82 @@ EndpointSlice 对象。Deployment 的滚动更新为重新为 EndpointSlice

Due to the nature of EndpointSlice changes, endpoints may be represented in more
than one EndpointSlice at the same time. This naturally occurs as changes to
different EndpointSlice objects can arrive at the Kubernetes client watch/cache
at different times. Implementations using EndpointSlice must be able to have the
endpoint appear in more than one slice. A reference implementation of how to
perform endpoint deduplication can be found in the `EndpointSliceCache`
implementation in `kube-proxy`.
different EndpointSlice objects can arrive at the Kubernetes client watch / cache
at different times.
-->
### 重复的端点 {#duplicate-endpoints}

由于 EndpointSlice 变化的自身特点,端点可能会同时出现在不止一个 EndpointSlice
中。鉴于不同的 EndpointSlice 对象在不同时刻到达 Kubernetes 的监视/缓存中,
这种情况的出现是很自然的。
使用 EndpointSlice 的实现必须能够处理端点出现在多个切片中的状况。
关于如何执行端点去重(deduplication)的参考实现,你可以在 `kube-proxy` 的
`EndpointSlice` 实现中找到。

{{< note >}}

<!--
Clients of the EndpointSlice API must be able to handle the situation where
a particular endpoint address appears in more than one slice.

You can find a reference implementation for how to perform this endpoint deduplication
as part of the `EndpointSliceCache` code within `kube-proxy`.
-->
EndpointSlice API 的客户端必须能够处理特定端点地址出现在多个 EndpointSlice 中的情况。

你可以在 `kube-proxy` 中的 `EndpointSliceCache` 代码中找到有关如何执行这个端点去重的参考实现。

{{< /note >}}

<!--
## Comparison with Endpoints {#motivation}

The original Endpoints API provided a simple and straightforward way of
tracking network endpoints in Kubernetes. As Kubernetes clusters
and {{< glossary_tooltip text="Services" term_id="service" >}} grew to handle
more traffic and to send more traffic to more backend Pods, the
limitations of that original API became more visible.
Most notably, those included challenges with scaling to larger numbers of
network endpoints.
-->
## 与 Endpoints 的比较 {#motivation}
原来的 Endpoints API 提供了在 Kubernetes 中跟踪网络端点的一种简单而直接的方法。随着 Kubernetes
集群和{{< glossary_tooltip text="服务" term_id="service" >}}逐渐开始为更多的后端 Pod 处理和发送请求,
原来的 API 的局限性变得越来越明显。最明显的是那些因为要处理大量网络端点而带来的挑战。

<!--
Since all network endpoints for a Service were stored in a single Endpoints
object, those Endpoints objects could get quite large. For Services that stayed
stable (the same set of endpoints over a long period of time) the impact was
less noticeable; even then, some use cases of Kubernetes weren't well served.
-->
由于任一 Service 的所有网络端点都保存在同一个 Endpoints 对象中,这些 Endpoints
对象可能变得非常巨大。对于保持稳定的服务(长时间使用同一组端点),影响不太明显;
即便如此,Kubernetes 的一些使用场景也没有得到很好的服务。

<!--
When a Service had a lot of backend endpoints and the workload was either
scaling frequently, or rolling out new changes frequently, each update to
the single Endpoints object for that Service meant a lot of traffic between
Kubernetes cluster components (within the control plane, and also between
nodes and the API server). This extra traffic also had a cost in terms of
CPU use.
-->
当某 Service 存在很多后端端点并且该工作负载频繁扩缩或上线新更改时,对该 Service 的单个 Endpoints
对象的每次更新都意味着(在控制平面内以及在节点和 API 服务器之间)Kubernetes 集群组件之间会出现大量流量。
这种额外的流量在 CPU 使用方面也有开销。

<!--
With EndpointSlices, adding or removing a single Pod triggers the same _number_
of updates to clients that are watching for changes, but the size of those
update message is much smaller at large scale.
-->
使用 EndpointSlices 时,添加或移除单个 Pod 对于正监视变更的客户端会触发相同数量的更新,
但这些更新消息的大小在大规模场景下要小得多。

<!--
EndpointSlices also enabled innovation around new features such dual-stack
networking and topology-aware routing.
-->
EndpointSlices 还支持围绕双栈网络和拓扑感知路由等新功能的创新。

## {{% heading "whatsnext" %}}

File diff suppressed because it is too large
@ -1,12 +1,17 @@
---
title: 动态卷制备
content_type: concept
weight: 40
weight: 50
---
<!--
reviewers:
- saad-ali
- jsafrane
- thockin
- msau42
title: Dynamic Volume Provisioning
content_type: concept
weight: 40
weight: 50
-->

<!-- overview -->
@ -31,7 +36,7 @@ automatically provisions storage when it is requested by users.
<!--
## Background
-->
## 背景
## 背景 {#background}

<!--
The implementation of dynamic volume provisioning is based on the API object `StorageClass`
@ -78,7 +83,8 @@ disk-like persistent disks.
-->
要启用动态制备功能,集群管理员需要为用户预先创建一个或多个 `StorageClass` 对象。
`StorageClass` 对象定义当动态制备被调用时,哪一个驱动将被使用和哪些参数将被传递给驱动。
StorageClass 对象的名字必须是一个合法的 [DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
StorageClass 对象的名字必须是一个合法的
[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
以下清单创建了一个 `StorageClass` 存储类 "slow",它提供类似标准磁盘的永久磁盘。

```yaml
@ -122,7 +128,8 @@ this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
-->
用户通过在 `PersistentVolumeClaim` 中包含存储类来请求动态制备的存储。
在 Kubernetes v1.9 之前,这通过 `volume.beta.kubernetes.io/storage-class` 注解实现。然而,这个注解自 v1.6 起就不被推荐使用了。
在 Kubernetes v1.9 之前,这通过 `volume.beta.kubernetes.io/storage-class` 注解实现。
然而,这个注解自 v1.6 起就不被推荐使用了。
用户现在能够而且应该使用 `PersistentVolumeClaim` 对象的 `storageClassName` 字段。
这个字段的值必须能够匹配到集群管理员配置的 `StorageClass` 名称(见[下面](#enabling-dynamic-provisioning))。
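
A minimal PVC sketch that requests dynamically provisioned storage from the `slow` class created earlier on this page; the claim name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1              # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: slow    # must match a StorageClass configured by the administrator
  resources:
    requests:
      storage: 30Gi         # illustrative size
```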

@ -156,7 +163,7 @@ provisioned. When the claim is deleted, the volume is destroyed.
<!--
## Defaulting Behavior
-->
## 设置默认值的行为
## 设置默认值的行为 {#defaulting-behavior}

<!--
Dynamic provisioning can be enabled on a cluster such that all claims are
@ -172,18 +179,19 @@ can enable this behavior by:
is enabled on the API server.
-->
- 标记一个 `StorageClass` 为 **默认**;
- 确保 [`DefaultStorageClass` 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)在 API 服务端被启用。
- 确保 [`DefaultStorageClass` 准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)在
API 服务器端被启用。

<!--
An administrator can mark a specific `StorageClass` as default by adding the
`storageclass.kubernetes.io/is-default-class` [annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it.
[`storageclass.kubernetes.io/is-default-class` annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it.
When a default `StorageClass` exists in a cluster and a user creates a
`PersistentVolumeClaim` with `storageClassName` unspecified, the
`DefaultStorageClass` admission controller automatically adds the
`storageClassName` field pointing to the default storage class.
-->
管理员可以通过向其添加 `storageclass.kubernetes.io/is-default-class`
[annotation](/zh-cn/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class)
管理员可以通过向其添加
[`storageclass.kubernetes.io/is-default-class` 注解](/zh-cn/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class)
来将特定的 `StorageClass` 标记为默认。
当集群中存在默认的 `StorageClass` 并且用户创建了一个未指定 `storageClassName` 的 `PersistentVolumeClaim` 时,
`DefaultStorageClass` 准入控制器会自动向其中添加指向默认存储类的 `storageClassName` 字段。
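
A sketch of a StorageClass carrying the default-class annotation described above; the class name and provisioner are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                        # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd       # illustrative provisioner
```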

@ -202,12 +210,13 @@ be created.
## 拓扑感知 {#topology-awareness}

<!--
In [Multi-Zone](/docs/setup/multiple-zones) clusters, Pods can be spread across
In [Multi-Zone](/docs/setup/best-practices/multiple-zones/) clusters, Pods can be spread across
Zones in a Region. Single-Zone storage backends should be provisioned in the Zones where
Pods are scheduled. This can be accomplished by setting the [Volume Binding
Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).
Pods are scheduled. This can be accomplished by setting the
[Volume Binding Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).
-->
在[多可用区](/zh-cn/docs/setup/best-practices/multiple-zones/)集群中,Pod 可以被分散到某个区域的多个可用区。
单可用区存储后端应该被制备到 Pod 被调度到的可用区。
这可以通过设置[卷绑定模式](/zh-cn/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现。
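
A sketch of the binding-mode setting that paragraph refers to — `WaitForFirstConsumer` delays provisioning until a Pod is scheduled, so the volume can be created in that Pod's zone (class name and provisioner are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware                  # illustrative name
provisioner: kubernetes.io/gce-pd       # illustrative provisioner
volumeBindingMode: WaitForFirstConsumer # provision in the zone where the Pod lands
```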

@ -1,7 +1,7 @@
---
title: 存储类
content_type: concept
weight: 30
weight: 40
---
<!--
reviewers:
@ -11,7 +11,7 @@ reviewers:
- msau42
title: Storage Classes
content_type: concept
weight: 30
weight: 40
-->

<!-- overview -->
@ -52,7 +52,7 @@ class needs to be dynamically provisioned.
## StorageClass 资源 {#the-storageclass-resource}

每个 StorageClass 都包含 `provisioner`、`parameters` 和 `reclaimPolicy` 字段,
这些字段会在 StorageClass 需要动态分配 PersistentVolume 时会使用到。
这些字段会在 StorageClass 需要动态制备 PersistentVolume 时会使用到。

<!--
The name of a StorageClass object is significant, and is how users can
@ -208,10 +208,10 @@ Volume type | Required Kubernetes version

{{< /table >}}

{{< note >}}
<!--
You can only use the volume expansion feature to grow a Volume, not to shrink it.
-->
{{< note >}}
此功能仅可用于扩容卷,不能用于缩小卷。
{{< /note >}}

@ -467,7 +467,7 @@ parameters:
`zone` and `zones` parameters are deprecated and replaced with
[allowedTopologies](#allowed-topologies)
-->
`zone` 和 `zones` 已被弃用并被 [允许的拓扑结构](#allowed-topologies) 取代。
`zone` 和 `zones` 已被弃用并被[允许的拓扑结构](#allowed-topologies)取代。
{{< /note >}}

### GCE PD {#gce-pd}
@ -503,7 +503,7 @@ parameters:
* `zones`(弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone` 和 `zones`,
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度(round-robin)分配。
`zone` 和 `zones` 参数不能同时使用。
* `fstype`: `ext4` 或 `xfs`。 默认: `ext4`。宿主机操作系统必须支持所定义的文件系统类型。
* `fstype`:`ext4` 或 `xfs`。 默认:`ext4`。宿主机操作系统必须支持所定义的文件系统类型。
* `replication-type`:`none` 或者 `regional-pd`。默认值:`none`。

<!--
@ -530,15 +530,18 @@ using `allowedTopologies`.
区域性持久化磁盘会在两个区域里制备。 其中一个区域是 Pod 所在区域。
另一个区域是会在集群管理的区域中任意选择。磁盘区域可以通过 `allowedTopologies` 加以限制。

{{< note >}}
<!--
`zone` and `zones` parameters are deprecated and replaced with
[allowedTopologies](#allowed-topologies)
-->
{{< note >}}
`zone` 和 `zones` 已被弃用并被 [allowedTopologies](#allowed-topologies) 取代。
{{< /note >}}

### Glusterfs
<!--
### Glusterfs (deprecated) {#glusterfs}
-->
### Glusterfs(已弃用) {#glusterfs}

```yaml
apiVersion: storage.k8s.io/v1
@ -659,11 +662,11 @@ parameters:
deleted when the persistent volume claim is deleted.
-->
* `volumetype`:卷的类型及其参数可以用这个可选值进行配置。如果未声明卷类型,则由制备器决定卷的类型。
例如:

* 'Replica volume': `volumetype: replicate:3` 其中 '3' 是 replica 数量。
* 'Disperse/EC volume': `volumetype: disperse:4:2` 其中 '4' 是数据,'2' 是冗余数量。
* 'Distribute volume': `volumetype: none`
例如:
* 'Replica volume':`volumetype: replicate:3` 其中 '3' 是 replica 数量。
* 'Disperse/EC volume':`volumetype: disperse:4:2` 其中 '4' 是数据,'2' 是冗余数量。
* 'Distribute volume':`volumetype: none`

有关可用的卷类型和管理选项,
请参阅[管理指南](https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/)。
@ -726,10 +729,10 @@ parameters:
-->
* `availability`:可用区域。如果没有指定,通常卷会在 Kubernetes 集群节点所在的活动区域中轮转调度。

{{< note >}}
<!--
This internal provisioner of OpenStack is deprecated. Please use [the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
-->
{{< note >}}
{{< feature-state state="deprecated" for_k8s_version="1.11" >}}
OpenStack 的内部驱动已经被弃用。请使用
[OpenStack 的外部云驱动](https://github.com/kubernetes/cloud-provider-openstack)。
@ -745,7 +748,7 @@ There are two types of provisioners for vSphere storage classes:

In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi). For more information on the CSI provisioner, see [Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and [vSphereVolume CSI migration](/docs/concepts/storage/volumes/#vsphere-csi-migration).
-->
vSphere 存储类有两种制备器
vSphere 存储类有两种制备器:

- [CSI 制备器](#vsphere-provisioner-csi):`csi.vsphere.vmware.com`
- [vCP 制备器](#vcp-provisioner):`kubernetes.io/vsphere-volume`
@ -773,7 +776,7 @@ The following examples use the VMware Cloud Provider (vCP) StorageClass provisio
-->
#### vCP 制备器 {#vcp-provisioner}

以下示例使用 VMware Cloud Provider (vCP) StorageClass 调度器。
以下示例使用 VMware Cloud Provider (vCP) StorageClass 制备器。

<!--
1. Create a StorageClass with a user specified disk format.
@ -793,7 +796,7 @@ The following examples use the VMware Cloud Provider (vCP) StorageClass provisio
<!--
`diskformat`: `thin`, `zeroedthick` and `eagerzeroedthick`. Default: `"thin"`.
-->
`diskformat`: `thin`, `zeroedthick` 和 `eagerzeroedthick`。默认值: `"thin"`。
`diskformat`:`thin`、`zeroedthick` 和 `eagerzeroedthick`。默认值:`"thin"`。

<!--
2. Create a StorageClass with a disk format on a user specified datastore.
@ -927,7 +930,7 @@ parameters:
* `adminSecret`:`adminId` 的 Secret 名称。该参数是必需的。
提供的 secret 必须有值为 "kubernetes.io/rbd" 的 type 参数。
* `adminSecretNamespace`:`adminSecret` 的命名空间。默认是 "default"。
* `pool`: Ceph RBD 池. 默认是 "rbd"。
* `pool`:Ceph RBD 池。默认是 "rbd"。
* `userId`:Ceph 客户端 ID,用于映射 RBD 镜像。默认与 `adminId` 相同。

<!--
@ -1029,7 +1032,7 @@ parameters:
* `kind`:可能的值是 `shared`、`dedicated` 和 `managed`(默认)。
当 `kind` 的值是 `shared` 时,所有非托管磁盘都在集群的同一个资源组中的几个共享存储帐户中创建。
当 `kind` 的值是 `dedicated` 时,将为在集群的同一个资源组中新的非托管磁盘创建新的专用存储帐户。
* `resourceGroup`: 指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。
* `resourceGroup`:指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。
若未指定资源组,磁盘会默认放入与当前 Kubernetes 集群相同的资源组中。
<!--
- Premium VM can attach both Standard_LRS and Premium_LRS disks, while Standard

@ -1,6 +1,7 @@
---
title: Windows 存储
content_type: concept
weight: 110
---
<!--
reviewers:
@ -101,11 +102,11 @@ The following broad classes of Kubernetes volume plugins are supported on Window
Windows 支持以下类型的 Kubernetes 卷插件:

<!--
* [`FlexVolume plugins`](/docs/concepts/storage/volumes/#flexvolume-deprecated)
* [`FlexVolume plugins`](/docs/concepts/storage/volumes/#flexvolume)
* Please note that FlexVolumes have been deprecated as of 1.23
* [`CSI Plugins`](/docs/concepts/storage/volumes/#csi)
-->
* [`FlexVolume plugins`](/zh-cn/docs/concepts/storage/volumes/#flexvolume-deprecated)
* [`FlexVolume plugins`](/zh-cn/docs/concepts/storage/volumes/#flexvolume)
* 请注意自 1.23 版本起,FlexVolume 已被弃用
* [`CSI Plugins`](/zh-cn/docs/concepts/storage/volumes/#csi)

@ -67,14 +67,14 @@ The following are typical use cases for Deployments:
<!--
* [Rollback to an earlier Deployment revision](#rolling-back-a-deployment) if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
* [Scale up the Deployment to facilitate more load](#scaling-a-deployment).
* [Pause the Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
* [Pause the rollout of a Deployment](#pausing-and-resuming-a-deployment) to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
* [Use the status of the Deployment](#deployment-status) as an indicator that a rollout has stuck.
* [Clean up older ReplicaSets](#clean-up-policy) that you don't need anymore.
-->
* 如果 Deployment 的当前状态不稳定,[回滚到较早的 Deployment 版本](#rolling-back-a-deployment)。
每次回滚都会更新 Deployment 的修订版本。
* [扩大 Deployment 规模以承担更多负载](#scaling-a-deployment)。
* [暂停 Deployment ](#pausing-and-resuming-a-deployment) 以应用对 PodTemplateSpec 所作的多项修改,
* [暂停 Deployment 的上线](#pausing-and-resuming-a-deployment) 以应用对 PodTemplateSpec 所作的多项修改,
然后恢复其执行以启动新的上线版本。
* [使用 Deployment 状态](#deployment-status)来判定上线过程是否出现停滞。
* [清理较旧的不再需要的 ReplicaSet](#clean-up-policy) 。
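
Most of the use cases above map directly onto `kubectl` subcommands. A sketch, assuming the `nginx-deployment` name used in the example that follows:

```shell
kubectl rollout pause deployment/nginx-deployment    # pause the rollout
kubectl rollout resume deployment/nginx-deployment   # resume it to start a new rollout
kubectl rollout undo deployment/nginx-deployment     # roll back to the previous revision
kubectl scale deployment/nginx-deployment --replicas=5
kubectl rollout status deployment/nginx-deployment   # watch whether the rollout has stuck
```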
@ -98,11 +98,10 @@ In this example:
<!--
* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the Deployment finds which Pods to manage.

-->
* 创建名为 `nginx-deployment`(由 `.metadata.name` 字段标明)的 Deployment。
* 该 Deployment 创建三个(由 `.spec.replicas` 字段标明)Pod 副本。
* `.spec.selector` 字段定义了 Deployment 如何查找要管理的 Pod。

<!--
* The `selector` field defines how the Deployment finds which Pods to manage.
@ -208,7 +207,8 @@ Follow the steps given below to create the above Deployment:
```

<!--
4. Run the `kubectl get deployments` again a few seconds later. The output is similar to this:
4. Run the `kubectl get deployments` again a few seconds later.
The output is similar to this:
-->
4. 几秒钟后再次运行 `kubectl get deployments`。输出类似于:

@ -1626,10 +1626,10 @@ Deployment progress has stalled.

<!--
The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report
lack of progress for a Deployment after 10 minutes:
lack of progress of a rollout for a Deployment after 10 minutes:
-->
以下 `kubectl` 命令设置规约中的 `progressDeadlineSeconds`,从而告知控制器
在 10 分钟后报告 Deployment 没有进展:
在 10 分钟后报告 Deployment 的上线没有进展:

```shell
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
@ -1683,11 +1683,11 @@ Deployment 不执行任何操作。更高级别的编排器可以利用这一设

{{< note >}}
<!--
If you pause a Deployment rollout, Kubernetes does not check progress against your specified deadline. You can
safely pause a Deployment in the middle of a rollout and resume without triggering the condition for
exceeding the deadline.
If you pause a Deployment rollout, Kubernetes does not check progress against your specified deadline.
You can safely pause a Deployment rollout in the middle of a rollout and resume without triggering
the condition for exceeding the deadline.
-->
如果你暂停了某个 Deployment 上线,Kubernetes 不再根据指定的截止时间检查 Deployment 进展。
如果你暂停了某个 Deployment 上线,Kubernetes 不再根据指定的截止时间检查 Deployment 上线的进展。
你可以在上线过程中间安全地暂停 Deployment 再恢复其执行,这样做不会导致超出最后时限的问题。
{{< /note >}}

@ -1837,8 +1837,7 @@ $ echo $?
### Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back
to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment
Pod template.
to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.
-->
### 对失败 Deployment 的操作 {#operating-on-a-failed-deployment}

@ -622,12 +622,12 @@ StatefulSet 才会开始使用被还原的模板来重新创建 Pod。
The optional `.spec.persistentVolumeClaimRetentionPolicy` field controls if
and how PVCs are deleted during the lifecycle of a StatefulSet. You must enable the
`StatefulSetAutoDeletePVC` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
to use this field. Once enabled, there are two policies you can configure for each
StatefulSet:
on the API server and the controller manager to use this field.
Once enabled, there are two policies you can configure for each StatefulSet:
-->
在 StatefulSet 的生命周期中,可选字段
`.spec.persistentVolumeClaimRetentionPolicy` 控制是否删除以及如何删除 PVC。
使用该字段,你必须启用 `StatefulSetAutoDeletePVC`
使用该字段,你必须在 API 服务器和控制器管理器启用 `StatefulSetAutoDeletePVC`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。
启用后,你可以为每个 StatefulSet 配置两个策略:
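
A sketch of where the field sits in a StatefulSet spec, assuming the `StatefulSetAutoDeletePVC` feature gate is on; the two policies are `whenDeleted` and `whenScaled`, each accepting `Retain` or `Delete` (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                   # illustrative name
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain       # keep PVCs when the StatefulSet is deleted
    whenScaled: Delete        # remove PVCs that belonged to scaled-down replicas
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.k8s.io/nginx-slim:0.8   # illustrative image
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```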

@ -43,7 +43,7 @@ or is [terminated](#pod-termination).
-->
在 Kubernetes API 中,Pod 包含规约部分和实际状态部分。
Pod 对象的状态包含了一组 [Pod 状况(Conditions)](#pod-conditions)。
如果应用需要的话,你也可以向其中注入[自定义的就绪性信息](#pod-readiness-gate)。
如果应用需要的话,你也可以向其中注入[自定义的就绪态信息](#pod-readiness-gate)。

Pod 在其生命周期中只会被[调度](/zh-cn/docs/concepts/scheduling-eviction/)一次。
一旦 Pod 被调度(分派)到某个节点,Pod 会一直在该节点运行,直到 Pod
@ -58,7 +58,7 @@ Like individual application containers, Pods are considered to be relatively
ephemeral (rather than durable) entities. Pods are created, assigned a unique
ID ([UID](/docs/concepts/overview/working-with-objects/names/#uids)), and scheduled
to nodes where they remain until termination (according to restart policy) or
deletion.
deletion.
If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that node
are [scheduled for deletion](#pod-garbage-collection) after a timeout period.
-->
@ -159,13 +159,13 @@ Value | Description
`Failed`(失败) | Pod 中的所有容器都已终止,并且至少有一个容器是因为失败终止。也就是说,容器以非 0 状态退出或者被系统终止。
`Unknown`(未知) | 因为某些原因无法取得 Pod 的状态。这种情况通常是因为与 Pod 所在主机通信失败。

{{< note >}}
<!--
When a Pod is being deleted, it is shown as `Terminating` by some kubectl commands.
This `Terminating` status is not one of the Pod phases.
A Pod is granted a term to terminate gracefully, which defaults to 30 seconds.
You can use the flag `--force` to [terminate a Pod by force](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced).
-->
{{< note >}}
当一个 Pod 被删除时,执行一些 kubectl 命令会展示这个 Pod 的状态为 `Terminating`(终止)。
这个 `Terminating` 状态并不是 Pod 阶段之一。
Pod 被赋予一个可以体面终止的期限,默认为 30 秒。
@ -338,7 +338,7 @@ Field name | Description
字段名称 | 描述
:--------------------|:-----------
`type` | Pod 状况的名称
`status` | 表明该状况是否适用,可能的取值有 "`True`", "`False`" 或 "`Unknown`"
`status` | 表明该状况是否适用,可能的取值有 "`True`"、"`False`" 或 "`Unknown`"
`lastProbeTime` | 上次探测 Pod 状况时的时间戳
`lastTransitionTime` | Pod 上次从一种状态转换到另一种状态时的时间戳
`reason` | 机器可读的、驼峰编码(UpperCamelCase)的文字,表述上次状况变化的原因
@ -845,33 +845,41 @@ An example flow:
runs that hook inside of the container. If the `preStop` hook is still running after the
grace period expires, the kubelet requests a small, one-off grace period extension of 2
seconds.
If the `preStop` hook needs longer to complete than the default grace period allows,
you must modify `terminationGracePeriodSeconds` to suit this.
1. The kubelet triggers the container runtime to send a TERM signal to process 1 inside each
container.
The containers in the Pod receive the TERM signal at different times and in an arbitrary
order. If the order of shutdowns matters, consider using a `preStop` hook to synchronize.
-->

1. 如果 Pod 中的容器之一定义了 `preStop`
[回调](/zh-cn/docs/concepts/containers/container-lifecycle-hooks),
`kubelet` 开始在容器内运行该回调逻辑。如果超出体面终止限期时,
`preStop` 回调逻辑仍在运行,`kubelet` 会请求给予该 Pod 的宽限期一次性增加 2 秒钟。

{{< note >}}
<!--
If the `preStop` hook needs longer to complete than the default grace period allows,
you must modify `terminationGracePeriodSeconds` to suit this.
-->
如果 `preStop` 回调所需要的时间长于默认的体面终止限期,你必须修改
`terminationGracePeriodSeconds` 属性值来使其正常工作。
{{< /note >}}
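
A sketch of a Pod that pairs a `preStop` hook with a longer `terminationGracePeriodSeconds`, as the note advises; the name, image, and sleep duration are illustrative stand-ins for real drain logic:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-example      # illustrative name
spec:
  terminationGracePeriodSeconds: 60    # must exceed the time the preStop hook needs
  containers:
  - name: app
    image: nginx:1.25                  # illustrative image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 30"]   # stand-in for connection draining
```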

<!--
1. The kubelet triggers the container runtime to send a TERM signal to process 1 inside each
container.
-->

1. `kubelet` 接下来触发容器运行时发送 TERM 信号给每个容器中的进程 1。
2. `kubelet` 接下来触发容器运行时发送 TERM 信号给每个容器中的进程 1。

{{< note >}}
<!--
The containers in the Pod receive the TERM signal at different times and in an arbitrary
order. If the order of shutdowns matters, consider using a `preStop` hook to synchronize.
-->
Pod 中的容器会在不同时刻收到 TERM 信号,接收顺序也是不确定的。
如果关闭的顺序很重要,可以考虑使用 `preStop` 回调逻辑来协调。
{{< /note >}}

<!--
1. At the same time as the kubelet is starting graceful shutdown, the control plane removes that
shutting-down Pod from Endpoints (and, if enabled, EndpointSlice) objects where these represent
shutting-down Pod from EndpointSlice (and Endpoints) objects where these represent
a {{< glossary_tooltip term_id="service" text="Service" >}} with a configured
{{< glossary_tooltip text="selector" term_id="selector" >}}.
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and other workload resources
@ -879,11 +887,11 @@ An example flow:
cannot continue to serve traffic as load balancers (like the service proxy) remove the Pod from
the list of endpoints as soon as the termination grace period _begins_.
-->
3. 与此同时,`kubelet` 启动体面关闭逻辑,控制面会将 Pod 从对应的端点列表(以及端点切片列表,
如果启用了的话)中移除,过滤条件是 Pod 被对应的
{{< glossary_tooltip term_id="service" text="服务" >}}以某
3. 在 `kubelet` 启动体面关闭逻辑的同时,控制面会将关闭的 Pod 从对应的
EndpointSlice(和 Endpoints)对象中移除,过滤条件是 Pod
被对应的{{< glossary_tooltip term_id="service" text="服务" >}}以某
{{< glossary_tooltip text="选择算符" term_id="selector" >}}选定。
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}}
{{< glossary_tooltip text="ReplicaSet" term_id="replica-set" >}}
和其他工作负载资源不再将关闭进程中的 Pod 视为合法的、能够提供服务的副本。
关闭动作很慢的 Pod 也无法继续处理请求数据,
因为负载均衡器(例如服务代理)已经在终止宽限期开始的时候将其从端点列表中移除。
@ -907,19 +915,21 @@ An example flow:

<!--
### Forced Pod termination {#pod-termination-forced}

Forced deletions can be potentially disruptive for some workloads and their Pods.

By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports
the `--grace-period=<seconds>` option which allows you to override the default and specify your
own value.
-->
### 强制终止 Pod {#pod-termination-forced}

{{< caution >}}
<!--
Forced deletions can be potentially disruptive for some workloads and their Pods.
-->
对于某些工作负载及其 Pod 而言,强制删除很可能会带来某种破坏。
{{< /caution >}}

<!--
By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports
the `--grace-period=<seconds>` option which allows you to override the default and specify your
own value.
-->
默认情况下,所有的删除操作都会附有 30 秒钟的宽限期限。
`kubectl delete` 命令支持 `--grace-period=<seconds>` 选项,允许你重载默认值,
设定自己希望的期限值。
@ -932,12 +942,11 @@ begin immediate cleanup.
将宽限期限强制设置为 `0` 意味着立即从 API 服务器删除 Pod。
如果 Pod 仍然运行于某节点上,强制删除操作会触发 `kubelet` 立即执行清理操作。

{{< note >}}
<!--
You must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions.
-->
{{< note >}}
你必须在设置 `--grace-period=0` 的同时额外设置 `--force`
参数才能发起强制删除请求。
你必须在设置 `--grace-period=0` 的同时额外设置 `--force` 参数才能发起强制删除请求。
{{< /note >}}
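
Putting the two flags together, the force deletion described above looks like this (the pod name is a placeholder):

```shell
kubectl delete pod <pod-name> --grace-period=0 --force
```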

<!--

@ -8,12 +8,7 @@ card:
title: 翻译文档
---
<!--
title: Localizing Kubernetes Documentation
title: Localizing Kubernetes documentation
content_type: concept
approvers:
- remyleone
- rlenferink
- zacharysarah
weight: 50
card:
name: contribute
@ -24,7 +23,9 @@ card:
<!-- overview -->

<!--
This page shows you how to [localize](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/) the docs for a different language.
This page shows you how to
[localize](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)
the docs for a different language.
-->
此页面描述如何为其他语言的文档提供
[本地化](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)版本。
@ -34,7 +35,11 @@ This page shows you how to [localize](https://blog.mozilla.org/l10n/2011/12/14/i
<!--
## Contribute to an existing localization

You can help add or improve content to an existing localization. In [Kubernetes Slack](https://slack.k8s.io/) you'll find a channel for each localization. There is also a general [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations) where you can say hello.
You can help add or improve the content of an existing localization. In
[Kubernetes Slack](https://slack.k8s.io/), you can find a channel for each
localization. There is also a general [SIG Docs Localizations Slack
channel](https://kubernetes.slack.com/messages/sig-docs-localizations) where you
can say hello.
-->
## 为现有的本地化做出贡献 {#contribute-to-an-existing-localization}

@ -45,30 +50,31 @@ You can help add or improve content to an existing localization. In [Kubernetes

{{< note >}}
<!--
If you want to work on a localization that already exists, check
this page in that localization (if it exists), rather than the
English original. You might see extra details there.
For extra details on how to contribute to a specific localization,
look for a localized version of this page.
-->
如果你想处理已经存在的本地化,请在该本地化(如果存在)中检查此页面,而不是英文原版。
你可能会在那里看到额外的详细信息。
有关如何为特定本地化做贡献的更多信息,请参阅本页面的各个本地化版本。
{{< /note >}}

<!--
### Find your two-letter language code

First, consult the [ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) to find your localization's two-letter language code. For example, the two-letter code for Korean is `ko`.
First, consult the [ISO 639-1
standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) to find your
localization's two-letter language code. For example, the two-letter code for
Korean is `ko`.

### Fork and clone the repo

First, [create your own fork](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the [kubernetes/website](https://github.com/kubernetes/website) repository.

The website content directory includes sub-directories for each language. The localization you want to help out with is inside `content/<two-letter-code>`.
First, [create your own
fork](/docs/contribute/new-content/open-a-pr/#fork-the-repo) of the
[kubernetes/website](https://github.com/kubernetes/website) repository.
-->
### 找到两个字母的语言代码 {#find-your-two-letter-language-code}

首先,有关本地化的两个字母的语言代码,请参考
[ISO 639-1 标准](https://www.loc.gov/standards/iso639-2/php/code_list.php)。
例如,韩国的两个字母代码是 `ko`。
例如,韩语的两个字母代码是 `ko`。

### 派生(fork)并且克隆仓库 {#fork-and-clone-the-repo}

@ -86,7 +92,8 @@ cd website
```

<!--
The website content directory includes sub-directories for each language. The localization you want to help out with is inside `content/<two-letter-code>`.
The website content directory includes subdirectories for each language. The
localization you want to help out with is inside `content/<two-letter-code>`.
-->
网站内容目录包括每种语言的子目录。你想要助力的本地化位于 `content/<two-letter-code>` 中。

@ -96,15 +103,16 @@ The website content directory includes sub-directories for each language. The lo
Create or update your chosen localized page based on the English original. See
[translating content](#translating-content) for more details.

If you notice a technical inaccuracy or other problem with the upstream (English)
documentation, you should fix the upstream documentation first and then repeat the
equivalent fix by updating the localization you're working on.
If you notice a technical inaccuracy or other problem with the upstream
(English) documentation, you should fix the upstream documentation first and
then repeat the equivalent fix by updating the localization you're working on.

Please limit pull requests to a single localization, since pull requests that change
content in multiple localizations could be difficult to review.
Limit changes in a pull requests to a single localization. Reviewing pull
requests that change content in multiple localizations is problematic.

Follow [Suggesting Content Improvements](/docs/contribute/suggest-improvements/) to propose changes to
that localization. The process is very similar to proposing changes to the upstream (English) content.
Follow [Suggesting Content Improvements](/docs/contribute/suggesting-improvements/)
to propose changes to that localization. The process is similar to proposing
changes to the upstream (English) content.
-->
### 建议更改 {#suggest-changes}

@ -114,7 +122,7 @@ that localization. The process is very similar to proposing changes to the upstr
如果你发现上游(英文)文档存在技术错误或其他问题,
你应该先修复上游文档,然后通过更新你正在处理的本地化来重复等效的修复。

请将拉取请求限制为单个本地化,因为在多个本地化中更改内容的拉取请求可能难以审查。
请将 PR 限制为单个语言版本,因为多语言的 PR 内容修改可能难以审查。

按照[内容改进建议](/zh-cn/docs/contribute/suggest-improvements/)提出对该本地化的更改。
该过程与提议更改上游(英文)内容非常相似。

@ -122,29 +130,30 @@ that localization. The process is very similar to proposing changes to the upstr
<!--
## Start a new localization

If you want the Kubernetes documentation localized into a new language, here's what
you need to do.
If you want the Kubernetes documentation localized into a new language, here's
what you need to do.

Because contributors can't approve their own pull requests, you need _at least two contributors_
to begin a localization.
Because contributors can't approve their own pull requests, you need _at least
two contributors_ to begin a localization.

All localization teams must be self-sustaining. The Kubernetes website is happy to host your work, but
it's up to you to translate it and keep existing localized content current.
All localization teams must be self-sufficient. The Kubernetes website is happy
to host your work, but it's up to you to translate it and keep existing
localized content current.
-->
## 开始新的本地化 {#start-a-new-localization}

如果你希望将 Kubernetes 文档本地化为一种新语言,你需要执行以下操作。

因为贡献者不能批准他们自己的拉取请求,你需要 _至少两个贡献者_ 来开始本地化。
因为贡献者不能批准他们自己的拉取请求,你需要 **至少两个贡献者** 来开始本地化。

所有本地化团队都必须能够自我维持。
Kubernetes 网站很乐意托管你的作品,但要由你来翻译它并使现有的本地化内容保持最新。

<!--
You'll need to know the two-letter language code for your language. Consult the
[ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php) to find your
localization's two-letter language code. For example, the two-letter code for Korean is
`ko`.
[ISO 639-1 standard](https://www.loc.gov/standards/iso639-2/php/code_list.php)
to find your localization's two-letter language code. For example, the
two-letter code for Korean is `ko`.

When you start a new localization, you must localize all the
[minimum required content](#minimum-required-content) before

@ -166,79 +175,97 @@ SIG Docs 可以帮助你在单独的分支上工作,以便你可以逐步实
<!--
### Find community

Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) and the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations). Other localization teams are happy to help you get started and answer any questions you have.
Let Kubernetes SIG Docs know you're interested in creating a localization! Join
the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) and
the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations).
Other localization teams are happy to help you get started and answer your
questions.
-->
### 找到社区 {#find-community}

让 Kubernetes SIG Docs 知道你有兴趣创建本地化!
加入 [SIG Docs Slack 频道](https://kubernetes.slack.com/messages/sig-docs)
和 [SIG Docs Localizations Slack 频道](https://kubernetes.slack.com/messages/sig-docs-localizations)。
其他本地化团队很乐意帮助你入门并回答你的任何问题。
其他本地化团队很乐意帮助你入门并回答你的问题。

<!--
Please also consider participating in the [SIG Docs Localization Subgroup meeting](https://github.com/kubernetes/community/tree/master/sig-docs). The mission of the SIG Docs localization subgroup is to work across the SIG Docs localization teams to collaborate on defining and documenting the processes for creating localized contribution guides. In addition, the SIG Docs localization subgroup will look for opportunities for the creation and sharing of common tools across localization teams and also serve to identify new requirements to the SIG Docs Leadership team. If you have questions about this meeting, please inquire on the [SIG Docs Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations).
Please also consider participating in the
[SIG Docs Localization Subgroup meeting](https://github.com/kubernetes/community/tree/master/sig-docs).
The mission of the SIG Docs localization subgroup is to work across the SIG Docs
localization teams to collaborate on defining and documenting the processes for
creating localized contribution guides. In addition, the SIG Docs localization
subgroup looks for opportunities to create and share common tools across
localization teams and identify new requirements for the SIG Docs Leadership
team. If you have questions about this meeting, please inquire on the [SIG Docs
Localizations Slack channel](https://kubernetes.slack.com/messages/sig-docs-localizations).

You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding a channel for Persian](https://github.com/kubernetes/community/pull/4980).
You can also create a Slack channel for your localization in the
`kubernetes/community` repository. For an example of adding a Slack channel, see
the PR for [adding a channel for Persian](https://github.com/kubernetes/community/pull/4980).
-->
也请考虑参加
[SIG Docs 本地化小组的会议](https://github.com/kubernetes/community/tree/master/sig-docs)。
SIG Docs 本地化小组的任务是与 SIG Docs 本地化团队合作,
共同定义和记录创建本地化贡献指南的流程。
此外,SIG Docs 本地化小组将寻找机会在本地化团队中创建和共享通用工具,
并为 SIG Docs 领导团队确定新要求。如果你对本次会议有任何疑问,
请在 [SIG Docs Localizations Slack 频道](https://kubernetes.slack.com/messages/sig-docs-localizations)
中提问。
并为 SIG Docs 领导团队确定新要求。如果你对本次会议有任何疑问,请在
[SIG Docs Localizations Slack 频道](https://kubernetes.slack.com/messages/sig-docs-localizations)中提问。

你还可以在 `kubernetes/community` 仓库中为你的本地化创建一个 Slack 频道。
有关添加 Slack 频道的示例,请参阅
[为波斯语添加频道](https://github.com/kubernetes/community/pull/4980)的 PR。
有关添加 Slack 频道的示例,
请参阅[为波斯语添加频道](https://github.com/kubernetes/community/pull/4980)的 PR。

<!--
### Join the Kubernetes GitHub organization

Once you've opened a localization PR, you can become members of the Kubernetes GitHub organization. Each person on the team needs to create their own [Organization Membership Request](https://github.com/kubernetes/org/issues/new/choose) in the `kubernetes/org` repository.
When you've opened a localization PR, you can become members of the Kubernetes
GitHub organization. Each person on the team needs to create their own
[Organization Membership Request](https://github.com/kubernetes/org/issues/new/choose)
in the `kubernetes/org` repository.
-->
### 加入到 Kubernetes GitHub 组织 {#join-the-kubernetes-github-organization}

提交本地化 PR 后,你可以成为 Kubernetes GitHub 组织的成员。
团队中的每个人都需要在 `kubernetes/org` 仓库中创建自己的
[组织成员申请](https://github.com/kubernetes/org/issues/new/choose)。
团队中的每个人都需要在 `kubernetes/org`
仓库中创建自己的[组织成员申请](https://github.com/kubernetes/org/issues/new/choose)。

<!--
### Add your localization team in GitHub

Next, add your Kubernetes localization team to [`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml). For an example of adding a localization team, see the PR to add the [Spanish localization team](https://github.com/kubernetes/org/pull/685).
Next, add your Kubernetes localization team to
[`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml).
For an example of adding a localization team, see the PR to add the
[Spanish localization team](https://github.com/kubernetes/org/pull/685).

Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content within (and only within) your localization directory: `/content/**/`.

For each localization, The `@kubernetes/sig-docs-**-reviews` team automates review assignment for new PRs.
Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content
within (and only within) your localization directory: `/content/**/`. For each
localization, The `@kubernetes/sig-docs-**-reviews` team automates review
assignments for new PRs. Members of `@kubernetes/website-maintainers` can create
new localization branches to coordinate translation efforts. Members of
`@kubernetes/website-milestone-maintainers` can use the `/milestone`
[Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs.
-->
### 在 GitHub 中添加你的本地化团队 {#add-your-localization-team-in-github}

接下来,将你的 Kubernetes 本地化团队添加到
[`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml)。
有关添加本地化团队的示例,请参见添加[西班牙本地化团队](https://github.com/kubernetes/org/pull/685) 的 PR。
有关添加本地化团队的示例,请参见添加[西班牙本地化团队](https://github.com/kubernetes/org/pull/685)的 PR。

`@kubernetes/sig-docs-**-owners` 成员可以批准更改对应本地化目录 `/content/**/` 中内容的 PR,并仅限这类 PR。

对于每个本地化,`@kubernetes/sig-docs-**-reviews` 团队被自动分派新 PR 的审阅任务。

<!--
Members of `@kubernetes/website-maintainers` can create new localization branches to coordinate translation efforts.

Members of `@kubernetes/website-milestone-maintainers` can use the `/milestone` [Prow command](https://prow.k8s.io/command-help) to assign a milestone to issues or PRs.
-->
`@kubernetes/website-maintainers` 成员可以创建新的本地化分支来协调翻译工作。

`@kubernetes/website-milestone-maintainers` 成员可以使用 `/milestone`
[Prow 命令](https://prow.k8s.io/command-help)为 issues 或 PR 设定里程碑。

<!--
### Configure the workflow

Next, add a GitHub label for your localization in the `kubernetes/test-infra` repository. A label lets you filter issues and pull requests for your specific language.
Next, add a GitHub label for your localization in the `kubernetes/test-infra`
repository. A label lets you filter issues and pull requests for your specific
language.

For an example of adding a label, see the PR for adding the [Italian language label](https://github.com/kubernetes/test-infra/pull/11316).
For an example of adding a label, see the PR for adding the
[Italian language label](https://github.com/kubernetes/test-infra/pull/11316).
-->
### 配置工作流程 {#configure-the-workflow}

@ -247,15 +274,16 @@ For an example of adding a label, see the PR for adding the [Italian language la

有关添加标签的示例,请参见添加[意大利语标签](https://github.com/kubernetes/test-infra/pull/11316)的 PR。

你还可以在 `kubernetes/community` 仓库中为你的本地化创建一个 Slack 频道。
有关添加 Slack 频道的示例,请参见[为印尼语和葡萄牙语添加频道](https://github.com/kubernetes/community/pull/3605)的 PR。

<!--
### Modify the site configuration

The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in the [`config.toml`](https://github.com/kubernetes/website/tree/main/config.toml) file. To support a new localization, you'll need to modify `config.toml`.
The Kubernetes website uses Hugo as its web framework. The website's Hugo
configuration resides in the
[`config.toml`](https://github.com/kubernetes/website/tree/main/config.toml)
file. You'll need to modify `config.toml` to support a new localization.

Add a configuration block for the new language to `config.toml`, under the existing `[languages]` block. The German block, for example, looks like:
Add a configuration block for the new language to `config.toml` under the
existing `[languages]` block. The German block, for example, looks like:
-->
### 修改站点配置 {#configure-the-workflow}

@ -271,25 +299,36 @@ Kubernetes 网站使用 Hugo 作为其 Web 框架。网站的 Hugo 配置位于
title = "Kubernetes"
description = "Produktionsreife Container-Verwaltung"
languageName = "Deutsch (German)"
languageNameLatinScript = "German"
languageNameLatinScript = "Deutsch"
contentDir = "content/de"
weight = 8
```

<!--
The value for `languageName` will be listed in language selection bar. Assign "language name in native script (language name in latin script)" to `languageName`, for example, `languageName = "한국어 (Korean)"`. `languageNameLatinScript` can be used to access the language name in latin script and use it in the theme. Assign "language name in latin script" to `languageNameLatinScript`, for example, `languageNameLatinScript ="Korean"`.
The language selection bar lists the value for `languageName`. Assign "language
name in native script and language (English language name in Latin script)" to
`languageName`. For example, `languageName = "한국어 (Korean)"` or `languageName =
"Deutsch (German)"`.

`languageNameLatinScript` can be used to access the language name in Latin
script and use it in the theme. Assign "language name in latin script" to
`languageNameLatinScript`. For example, `languageNameLatinScript ="Korean"` or
`languageNameLatinScript = "Deutsch"`.
-->
`languageName` 的值将列在语言选择栏中。
语言选择栏列出了 `languageName` 的值。
将 `languageName` 赋值为“本地脚本中的语言名称(拉丁脚本中的语言名称)”。
例如,`languageName = "한국어 (Korean)"`。
例如,`languageName = "한국어 (Korean)"` 或 `languageName = "Deutsch (German)"`。

`languageNameLatinScript` 可用于访问拉丁脚本中的语言名称并在主题中使用。
将 `languageNameLatinScript` 赋值为“拉丁脚本中的语言名称”。
例如,`languageNameLatinScript ="Korean"`。
例如,`languageNameLatinScript ="Korean"` 或 `languageNameLatinScript = "Deutsch"`。

<!--
When assigning a `weight` parameter for your block, find the language block with the highest weight and add 1 to that value.
When assigning a `weight` parameter for your block, find the language block with
the highest weight and add 1 to that value.

For more information about Hugo's multilingual support, see "[Multilingual Mode](https://gohugo.io/content-management/multilingual/)".
For more information about Hugo's multilingual support, see
"[Multilingual Mode](https://gohugo.io/content-management/multilingual/)".
-->
为你的语言块分配一个 `weight` 参数时,找到权重最高的语言块并将其加 1。

@ -298,7 +337,9 @@ For more information about Hugo's multilingual support, see "[Multilingual Mode]
<!--
### Add a new localization directory

Add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/main/content) folder in the repository. For example, the two-letter code for German is `de`:
Add a language-specific subdirectory to the
[`content`](https://github.com/kubernetes/website/tree/main/content) folder in
the repository. For example, the two-letter code for German is `de`:
-->
### 添加一个新的本地化目录 {#add-a-new-localization-directory}

@ -333,46 +374,52 @@ For example, for German the strings live in `data/i18n/de/de.toml`, and
<!--
### Localize the community code of conduct

Open a PR against the [`cncf/foundation`](https://github.com/cncf/foundation/tree/master/code-of-conduct-languages) repository to add the code of conduct in your language.

Open a PR against the
[`cncf/foundation`](https://github.com/cncf/foundation/tree/main/code-of-conduct-languages)
repository to add the code of conduct in your language.
-->
### 本地化社区行为准则 {#localize-the-community-code-of-conduct}

在 [`cncf/foundation`](https://github.com/cncf/foundation/tree/master/code-of-conduct-languages)
在 [`cncf/foundation`](https://github.com/cncf/foundation/tree/main/code-of-conduct-languages)
仓库提交 PR,添加你所用语言版本的行为准则。
|
||||
|
||||
<!--
|
||||
### Setting up the OWNERS files
|
||||
|
||||
To set the roles of each user contributing to the localization, create an `OWNERS` file inside the language-specific subdirectory with:
|
||||
To set the roles of each user contributing to the localization, create an
|
||||
`OWNERS` file inside the language-specific subdirectory with:
|
||||
|
||||
- **reviewers**: A list of kubernetes teams with reviewer roles, in this case, the `sig-docs-**-reviews` team created in [Add your localization team in GitHub](#add-your-localization-team-in-github).
|
||||
- **approvers**: A list of kubernetes teams with approvers roles, in this case, the `sig-docs-**-owners` team created in [Add your localization team in GitHub](#add-your-localization-team-in-github).
|
||||
- **labels**: A list of GitHub labels to automatically apply to a PR, in this case, the language label created in [Configure the workflow](#configure-the-workflow).
|
||||
- **reviewers**: A list of kubernetes teams with reviewer roles, in this case,
|
||||
- the `sig-docs-**-reviews` team created in [Add your localization team in GitHub](#add-your-localization-team-in-github).
|
||||
- **approvers**: A list of kubernetes teams with approvers roles, in this case,
|
||||
- the `sig-docs-**-owners` team created in [Add your localization team in GitHub](#add-your-localization-team-in-github).
|
||||
- **labels**: A list of GitHub labels to automatically apply to a PR, in this
|
||||
case, the language label created in [Configure the workflow](#configure-the-workflow).
|
||||
-->
|
||||
### 设置 OWNERS 文件 {#setting-up-the-owners-files}
|
||||
|
||||
要设置每个对本地化做出贡献用户的角色,请在特定于语言的子目录内创建一个 `OWNERS` 文件,其中:
|
||||
|
||||
- **reviewers**: 具有评审人角色的 kubernetes 团队的列表,在本例中为在
|
||||
[在 GitHub 中添加你的本地化团队](#add-your-localization-team-in-github)
|
||||
中创建的 `sig-docs-**-reviews` 团队。
|
||||
- **approvers**: 具有批准人角色的 kubernetes 团队的列表,在本例中为在
|
||||
[在 GitHub 中添加你的本地化团队](#add-your-localization-team-in-github)
|
||||
中创建的 `sig-docs-**-owners` 团队。
|
||||
- **labels**: 可以自动应用于 PR 的 GitHub 标签列表,在本例中为
|
||||
[配置工作流程](#configure-the-workflow)中创建的语言标签。
|
||||
- **reviewers**: 具有评审人角色的 kubernetes 团队的列表,
|
||||
在本例中为在[在 GitHub 中添加你的本地化团队](#add-your-localization-team-in-github)中创建的
|
||||
`sig-docs-**-reviews` 团队。
|
||||
- **approvers**: 具有批准人角色的 kubernetes 团队的列表,
|
||||
在本例中为在[在 GitHub 中添加你的本地化团队](#add-your-localization-team-in-github)中创建的
|
||||
`sig-docs-**-owners` 团队。
|
||||
- **labels**: 可以自动应用于 PR 的 GitHub 标签列表,
|
||||
在本例中为[配置工作流程](#configure-the-workflow)中创建的语言标签。
|
||||
|
||||
<!--
|
||||
More information about the `OWNERS` file can be found at [go.k8s.io/owners](https://go.k8s.io/owners).
|
||||
More information about the `OWNERS` file can be found at
|
||||
[go.k8s.io/owners](https://go.k8s.io/owners).
|
||||
|
||||
The [Spanish OWNERS file](https://git.k8s.io/website/content/es/OWNERS), with language code `es`, looks like:
|
||||
The [Spanish OWNERS file](https://git.k8s.io/website/content/es/OWNERS), with
|
||||
language code `es`, looks like this:
|
||||
-->
|
||||
有关 `OWNERS` 文件的更多信息,请访问[go.k8s.io/owners](https://go.k8s.io/owners)。
|
||||
有关 `OWNERS` 文件的更多信息,请访问 [go.k8s.io/owners](https://go.k8s.io/owners)。
|
||||
|
||||
语言代码为 `es` 的[西班牙语 OWNERS 文件](https://git.k8s.io/website/content/es/OWNERS)看起来像:
|
||||
|
||||
|
||||
```yaml
|
||||
# See the OWNERS docs at https://go.k8s.io/owners
|
||||
|
||||
|
@ -390,9 +437,14 @@ labels:
|
|||
```
|
||||
|
||||
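By analogy with the (partially elided) Spanish example above, a hypothetical German `content/de/OWNERS` file could look like the sketch below; the team and label names are assumptions patterned on the `sig-docs-**-*` scheme:

```yaml
# Hypothetical content/de/OWNERS sketch, not taken from the repository.
# See the OWNERS docs at https://go.k8s.io/owners
reviewers:
- sig-docs-de-reviews
approvers:
- sig-docs-de-owners
labels:
- language/de
```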
<!--
After adding the language-specific `OWNERS` file, update the [root `OWNERS_ALIASES`](https://git.k8s.io/website/OWNERS_ALIASES) file with the new Kubernetes teams for the localization, `sig-docs-**-owners` and `sig-docs-**-reviews`.
After adding the language-specific `OWNERS` file, update the [root
`OWNERS_ALIASES`](https://git.k8s.io/website/OWNERS_ALIASES) file with the new
Kubernetes teams for the localization, `sig-docs-**-owners` and
`sig-docs-**-reviews`.

For each team, add the list of GitHub users requested in [Add your localization team in GitHub](#add-your-localization-team-in-github), in alphabetical order.
For each team, add the list of GitHub users requested in
[Add your localization team in GitHub](#add-your-localization-team-in-github),
in alphabetical order.
-->
添加了特定语言的 OWNERS 文件之后,使用新的 Kubernetes 本地化团队、
`sig-docs-**-owners` 和 `sig-docs-**-reviews` 列表更新
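A hedged sketch of the matching `OWNERS_ALIASES` additions, with hypothetical GitHub users listed in alphabetical order:

```yaml
# Hypothetical aliases for a German localization team; user names are placeholders.
aliases:
  sig-docs-de-owners:
  - alice
  - bob
  sig-docs-de-reviews:
  - alice
  - bob
  - carol
```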
@ -425,28 +477,34 @@ For each team, add the list of GitHub users requested in [Add your localization

<!--
### Open a pull request

Next, [open a pull request](/docs/contribute/new-content/open-a-pr/#open-a-pr) (PR) to add a localization to the `kubernetes/website` repository.
Next, [open a pull request](/docs/contribute/new-content/open-a-pr/#open-a-pr)
(PR) to add a localization to the `kubernetes/website` repository. The PR must
include all the [minimum required content](#minimum-required-content) before it
can be approved.

The PR must include all of the [minimum required content](#minimum-required-content) before it can be approved.

For an example of adding a new localization, see the PR to enable [docs in French](https://github.com/kubernetes/website/pull/12548).
For an example of adding a new localization, see the PR to enable
[docs in French](https://github.com/kubernetes/website/pull/12548).
-->
### 打开拉取请求 {#open-a-pull-request}
### 发起拉取请求 {#open-a-pull-request}

接下来,[打开拉取请求](/zh-cn/docs/contribute/new-content/open-a-pr/#open-a-pr)(PR)
接下来,[发起拉取请求](/zh-cn/docs/contribute/new-content/open-a-pr/#open-a-pr)(PR)
将本地化添加到 `kubernetes/website` 存储库。

PR 必须包含所有[最低要求内容](#minimum-required-content)才能获得批准。

有关添加新本地化的示例,
请参阅 PR 以启用[法语文档](https://github.com/kubernetes/website/pull/12548)。
请参阅启用[法语文档](https://github.com/kubernetes/website/pull/12548)的 PR。

<!--
### Add a localized README file

To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of [kubernetes/website](https://github.com/kubernetes/website/), where `**` is the two-letter language code. For example, a German README file would be `README-de.md`.
To guide other localization contributors, add a new
[`README-**.md`](https://help.github.com/articles/about-readmes/) to the top
level of [kubernetes/website](https://github.com/kubernetes/website/), where
`**` is the two-letter language code. For example, a German README file would be
`README-de.md`.

Provide guidance to localization contributors in the localized `README-**.md` file. Include the same information contained in `README.md` as well as:
Guide localization contributors in the localized `README-**.md` file.
Include the same information contained in `README.md` as well as:

- A point of contact for the localization project
- Any information specific to the localization

@ -463,7 +521,11 @@ Provide guidance to localization contributors in the localized `README-**.md` fi

- 任何特定于本地化的信息

<!--
After you create the localized README, add a link to the file from the main English `README.md`, and include contact information in English. You can provide a GitHub ID, email address, [Slack channel](https://slack.com/), or other method of contact. You must also provide a link to your localized Community Code of Conduct.
After you create the localized README, add a link to the file from the main
English `README.md`, and include contact information in English. You can provide
a GitHub ID, email address, [Slack channel](https://slack.com/), or another
method of contact. You must also provide a link to your localized Community Code
of Conduct.
-->
创建本地化的 README 文件后,请在英语版文件 `README.md` 中添加指向该文件的链接,
并给出英文形式的联系信息。你可以提供 GitHub ID、电子邮件地址、

@ -472,27 +534,31 @@ After you create the localized README, add a link to the file from the main Engl

<!--
### Launching your new localization

Once a localization meets requirements for workflow and minimum output, SIG Docs will:
When a localization meets the requirements for workflow and minimum output, SIG
Docs does the following:

- Enable language selection on the website
- Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/).
- Enables language selection on the website.
- Publicizes the localization's availability through
[Cloud Native Computing Foundation](https://www.cncf.io/about/)(CNCF)
channels, including the [Kubernetes blog](/blog/).
-->
### 启动你的新本地化 {#add-a-localized-readme-file}

一旦本地化满足工作流程和最小输出的要求,SIG Docs 将:

- 在网站上启用语言选择
- 通过[云原生计算基金会](https://www.cncf.io/about/)(CNCF)渠道,
包括 [Kubernetes 博客](https://kubernetes.io/blog/),来宣传本地化的可用性。
- 通过[云原生计算基金会](https://www.cncf.io/about/)(CNCF)渠道以及
[Kubernetes 博客](https://kubernetes.io/zh-cn/blog/)来宣传本地化的可用性。

<!--
## Translating content

Localizing *all* of the Kubernetes documentation is an enormous task. It's okay to start small and expand over time.
Localizing *all* the Kubernetes documentation is an enormous task. It's okay to
start small and expand over time.
-->
## 翻译文档 {#translating-content}

本地化*所有* Kubernetes 文档是一项艰巨的任务。从小做起,循序渐进。
本地化**所有** Kubernetes 文档是一项艰巨的任务。从小做起,循序渐进。

<!--
### Minimum required content

@ -509,7 +575,7 @@ Description | URLs

Home | [All heading and subheading URLs](/docs/home/)
Setup | [All heading and subheading URLs](/docs/setup/)
Tutorials | [Kubernetes Basics](/docs/tutorials/kubernetes-basics/), [Hello Minikube](/docs/tutorials/hello-minikube/)
Site strings | [All site strings](#Site-strings-in-i18n) in a new localized TOML file
Site strings | [All site strings](#site-strings-in-i18n) in a new localized TOML file
Releases | [All heading and subheading URLs](/releases)
-->
描述 | 网址

@ -517,14 +583,17 @@ Releases | [All heading and subheading URLs](/releases)

主页 | [所有标题和副标题网址](/zh-cn/docs/home/)
安装 | [所有标题和副标题网址](/zh-cn/docs/setup/)
教程 | [Kubernetes 基础](/zh-cn/docs/tutorials/kubernetes-basics/), [Hello Minikube](/zh-cn/docs/tutorials/hello-minikube/)
网站字符串 | [所有网站字符串](#Site-strings-in-i18n)
发行版本 | [所有标题和副标题 URL](/releases)
网站字符串 | [所有网站字符串](#site-strings-in-i18n)
发行版本 | [所有标题和副标题 URL](/zh-cn/releases)

<!--
Translated documents must reside in their own `content/**/` subdirectory, but otherwise follow the same URL path as the English source. For example, to prepare the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) tutorial for translation into German, create a subfolder under the `content/de/` folder and copy the English source:
Translated documents must reside in their own `content/**/` subdirectory, but otherwise, follow the
same URL path as the English source. For example, to prepare the
[Kubernetes Basics](/docs/tutorials/kubernetes-basics/) tutorial for translation into German,
create a subfolder under the `content/de/` folder and copy the English source:
-->
翻译后的文档必须保存在自己的 `content/**/` 子目录中,否则将遵循与英文源相同的 URL 路径。
例如,要准备将 [Kubernetes 基础](/zh-cn/docs/tutorials/kubernetes-basics/) 教程翻译为德语,
例如,要准备将 [Kubernetes 基础](/zh-cn/docs/tutorials/kubernetes-basics/)教程翻译为德语,
请在 `content/de/` 文件夹下创建一个子文件夹并复制英文源:

```shell

@ -533,44 +602,51 @@ cp content/en/docs/tutorials/kubernetes-basics.md content/de/docs/tutorials/kube
```

<!--
Translation tools can speed up the translation process. For example, some editors offers plugins to quickly translate text.
Translation tools can speed up the translation process. For example, some
editors offer plugins to quickly translate text.
-->
翻译工具可以加快翻译过程。例如,某些编辑器提供了用于快速翻译文本的插件。

<!--
Machine-generated translation is insufficient on its own. Localization requires extensive human review to meet minimum standards of quality.
-->
{{< caution >}}
<!--
Machine-generated translation is insufficient on its own. Localization requires
extensive human review to meet minimum standards of quality.
-->
机器生成的翻译本身是不够的,本地化需要广泛的人工审核才能满足最低质量标准。
{{< /caution >}}

<!--
To ensure accuracy in grammar and meaning, members of your localization team should carefully review all machine-generated translations before publishing.
To ensure accuracy in grammar and meaning, members of your localization team
should carefully review all machine-generated translations before publishing.
-->
为了确保语法和含义的准确性,本地化团队的成员应在发布之前仔细检查所有由机器生成的翻译。

<!--
### Source files

Localizations must be based on the English files from a specific release targeted by the localization team.
Each localization team can decide which release to target which is referred to as the _target version_ below.
Localizations must be based on the English files from a specific release
targeted by the localization team. Each localization team can decide which
release to target, referred to as the _target version_ below.

To find source files for your target version:

1. Navigate to the Kubernetes website repository at https://github.com/kubernetes/website.
2. Select a branch for your target version from the following table:
Target version | Branch
-----|-----
Latest version | [`main`](https://github.com/kubernetes/website/tree/main)
Previous version | [`release-{{< skew prevMinorVersion >}}`](https://github.com/kubernetes/website/tree/release-{{< skew prevMinorVersion >}})
Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}})

The `main` branch holds content for the current release `{{< latest-version >}}`. The release team will create a `{{< release-branch >}}` branch before the next release: v{{< skew nextMinorVersion >}}.
Target version | Branch
-----|-----
Latest version | [`main`](https://github.com/kubernetes/website/tree/main)
Previous version | [`release-{{< skew prevMinorVersion >}}`](https://github.com/kubernetes/website/tree/release-{{< skew prevMinorVersion >}})
Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}})

The `main` branch holds content for the current release `{{< latest-version >}}`.
The release team creates a `{{< release-branch >}}` branch before the next
release: v{{< skew nextMinorVersion >}}.
-->
### 源文件 {#source-files}

本地化必须基于本地化团队所针对的特定发行版本中的英文文件。
每个本地化团队可以决定要针对哪个发行版本,在下文中称作目标版本(target version)。
每个本地化团队可以决定要针对哪个发行版本,在下文中称作 **目标版本(target version)**。

要查找你的目标版本的源文件:

@ -590,9 +666,13 @@ The `main` branch holds content for the current release `{{< latest-version >}}`

<!--
### Site strings in i18n

Localizations must include the contents of [`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/main/data/i18n/en/en.toml) in a new language-specific file. Using German as an example: `data/i18n/de/de.toml`.
Localizations must include the contents of
[`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/main/data/i18n/en/en.toml)
in a new language-specific file. Using German as an example:
`data/i18n/de/de.toml`.

Add a new localization directory and file to `data/i18n/`. For example, with German (`de`):
Add a new localization directory and file to `data/i18n/`. For example, with
German (`de`):
-->
### i18n/ 中的网站字符串 {#site-strings-in-i18n}

@ -608,12 +688,12 @@ cp data/i18n/en/en.toml data/i18n/de/de.toml
```

<!--
Revise the comments at the top of the file to suit your localization,
then translate the value of each string. For example, this is the German-language
Revise the comments at the top of the file to suit your localization, then
translate the value of each string. For example, this is the German-language
placeholder text for the search form:
-->
修改文件顶部的注释以适合你的本地化,
然后翻译每个字符串的值。例如,这是搜索表单的德语占位符文本:
修改文件顶部的注释以适合你的本地化,然后翻译每个字符串的值。
例如,这是搜索表单的德语占位符文本:

```toml
[ui_search_placeholder]

@ -621,27 +701,33 @@ other = "Suchen"
```

<!--
Localizing site strings lets you customize site-wide text and features: for example, the legal copyright text in the footer on each page.
Localizing site strings lets you customize site-wide text and features: for
example, the legal copyright text in the footer on each page.
-->
本地化网站字符串允许你自定义网站范围的文本和特性:例如,每个页面页脚中的合法版权文本。
本地化网站字符串允许你自定义网站范围的文本和特性:例如每个页面页脚中的版权声明文本。

<!--
### Language specific style guide and glossary
### Language-specific localization guide

Some language teams have their own language-specific style guide and glossary. For example, see the [Korean Localization Guide](/ko/docs/contribute/localization_ko/).
As a localization team, you can formalize the best practices your team follows
by creating a language-specific localization guide.
-->
### 特定语言的样式指南和词汇表 {#language-specific-style-guide-and-glossary}
### 特定语言的本地化指南 {#language-specific-localization-guide}

一些语言团队有自己的特定语言样式指南和词汇表。
例如,请参见[中文本地化指南](/zh-cn/docs/contribute/localization_zh/)。
作为本地化团队,你可以通过创建特定语言的本地化指南来正式确定团队需遵循的最佳实践。
请参见[中文本地化指南](/zh-cn/docs/contribute/localization_zh/)。

<!--
### Language specific Zoom meetings
### Language-specific Zoom meetings

If the localization project needs a separate meeting time, contact a SIG Docs Co-Chair or Tech Lead to create a new reoccurring Zoom meeting and calendar invite. This is only needed when the the team is large enough to sustain and require a separate meeting.

Per CNCF policy, the localization teams must upload their meetings to the SIG Docs YouTube playlist. A SIG Docs Co-Chair or Tech Lead can help with the process until SIG Docs automates it.
If the localization project needs a separate meeting time, contact a SIG Docs
Co-Chair or Tech Lead to create a new reoccurring Zoom meeting and calendar
invite. This is only needed when the team is large enough to sustain and require
a separate meeting.

Per CNCF policy, the localization teams must upload their meetings to the SIG
Docs YouTube playlist. A SIG Docs Co-Chair or Tech Lead can help with the
process until SIG Docs automates it.
-->

### 特定语言的 Zoom 会议 {#language-specific-zoom-meetings}

@ -664,24 +750,32 @@ To collaborate on a localization branch:

-->
### 分支策略 {#branching-strategy}

因为本地化项目是高度协同的工作,所以我们鼓励团队基于共享的本地化分支工作。
- 特别是在开始并且本地化尚未生效时。
因为本地化项目是高度协同的工作,
特别是在刚开始本地化并且本地化尚未生效时,我们鼓励团队基于共享的本地化分支工作。

在本地化分支上协作需要:

<!--
1. A team member of [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers) opens a localization branch from a source branch on https://github.com/kubernetes/website.
1. A team member of
[@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers)
opens a localization branch from a source branch on
https://github.com/kubernetes/website.

Your team approvers joined the `@kubernetes/website-maintainers` team when you [added your localization team](#add-your-localization-team-in-github) to the [`kubernetes/org`](https://github.com/kubernetes/org) repository.
Your team approvers joined the `@kubernetes/website-maintainers` team when
you [added your localization team](#add-your-localization-team-in-github) to
the [`kubernetes/org`](https://github.com/kubernetes/org) repository.

We recommend the following branch naming scheme:
We recommend the following branch naming scheme:

`dev-<source version>-<language code>.<team milestone>`
`dev-<source version>-<language code>.<team milestone>`

For example, an approver on a German localization team opens the localization branch `dev-1.12-de.1` directly against the kubernetes/website repository, based on the source branch for Kubernetes v1.12.
For example, an approver on a German localization team opens the localization
branch `dev-1.12-de.1` directly against the `kubernetes/website` repository,
based on the source branch for Kubernetes v1.12.
-->
1. [@kubernetes/website-maintainers](https://github.com/orgs/kubernetes/teams/website-maintainers)
中的团队成员从 https://github.com/kubernetes/website 原有分支新建一个本地化分支。

当你给 `kubernetes/org` 仓库[添加你的本地化团队](#add-your-localization-team-in-github)时,
你的团队批准人便加入了 `@kubernetes/website-maintainers` 团队。
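Step 1 as a hedged shell sketch. The branch name `dev-1.12-de.1` comes from the example above; the SSH clone URL and the assumption that the approver can push to `kubernetes/website` directly are mine:

```shell
# An approver creates the shared localization branch directly on
# kubernetes/website (cloned here as the remote named origin).
git clone git@github.com:kubernetes/website.git
cd website
git switch -c dev-1.12-de.1 origin/main   # dev-<source version>-<language code>.<team milestone>
git push origin dev-1.12-de.1
```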
@ -693,17 +787,22 @@ To collaborate on a localization branch:

直接新建了 kubernetes/website 仓库的本地化分支 `dev-1.12-de.1`。

<!--
2. Individual contributors open feature branches based on the localization branch.
2. Individual contributors open feature branches based on the localization
branch.

For example, a German contributor opens a pull request with changes to `kubernetes:dev-1.12-de.1` from `username:local-branch-name`.
For example, a German contributor opens a pull request with changes to
`kubernetes:dev-1.12-de.1` from `username:local-branch-name`.

3. Approvers review and merge feature branches into the localization branch.

4. Periodically, an approver merges the localization branch to its source branch by opening and approving a new pull request. Be sure to squash the commits before approving the pull request.
4. Periodically, an approver merges the localization branch with its source
branch by opening and approving a new pull request. Be sure to squash the
commits before approving the pull request.
-->
2. 个人贡献者基于本地化分支创建新的特性分支
2. 个人贡献者基于本地化分支创建新的特性分支。

例如,一个德语贡献者新建了一个拉取请求,并将 `username:local-branch-name` 更改为 `kubernetes:dev-1.12-de.1`。
例如,一个德语贡献者新建了一个拉取请求,
并将 `username:local-branch-name` 更改为 `kubernetes:dev-1.12-de.1`。

3. 批准人审查功能分支并将其合并到本地化分支中。
|
|||
在批准 PR 之前,请确保先 squash commits。
|
||||
|
||||
<!--
|
||||
Repeat steps 1-4 as needed until the localization is complete. For example, subsequent German localization branches would be: `dev-1.12-de.2`, `dev-1.12-de.3`, etc.
|
||||
Repeat steps 1-4 as needed until the localization is complete. For example,
|
||||
subsequent German localization branches would be: `dev-1.12-de.2`,
|
||||
`dev-1.12-de.3`, etc.
|
||||
-->
|
||||
根据需要重复步骤 1-4,直到完成本地化工作。例如,随后的德语本地化分支将是:
|
||||
`dev-1.12-de.2`、`dev-1.12-de.3`,等等。
|
||||
`dev-1.12-de.2`、`dev-1.12-de.3` 等等。
|
||||
|
||||
<!--
|
||||
Teams must merge localized content into the same branch from which the content was sourced.
|
||||
Teams must merge localized content into the same branch from which the content
|
||||
was sourced. For example:
|
||||
|
||||
For example:
|
||||
|
||||
- a localization branch sourced from `main` must be merged into `main`.
|
||||
- a localization branch sourced from `release-{{% skew "prevMinorVersion" %}}` must be merged into `release-{{% skew "prevMinorVersion" %}}`.
|
||||
|
||||
{{< note >}}
|
||||
If your localization branch was created from `main` branch but it is not merged into `main` before new release branch `{{< release-branch >}}` created, merge it into both `main` and new release branch `{{< release-branch >}}`. To merge your localization branch into new release branch `{{< release-branch >}}`, you need to switch upstream branch of your localization branch to `{{< release-branch >}}`.
|
||||
{{< /note >}}
|
||||
- A localization branch sourced from `main` must be merged into `main`.
|
||||
- A localization branch sourced from `release-{{% skew "prevMinorVersion" %}}`
|
||||
must be merged into `release-{{% skew "prevMinorVersion" %}}`.
|
||||
-->
|
||||
团队必须将本地化内容合入到发布分支中,该发布分支是内容的来源。
|
||||
|
||||
例如:
|
||||
团队必须将本地化内容合入到发布分支中,该发布分支是内容的来源。例如:
|
||||
|
||||
- 源于 `main` 分支的本地化分支必须被合并到 `main`。
|
||||
- 源于 `release-{{ skew "prevMinorVersion" }}`
|
||||
的本地化分支必须被合并到 `release-{{ skew "prevMinorVersion" }}`。
|
||||
|
||||
如果你的本地化分支是基于 `main` 分支创建的,但最终没有在新的发行
|
||||
分支 `{{< release-branch >}}` 被创建之前合并到 `main` 中,需要将其
|
||||
同时将其合并到 `main` 和新的发行分支 `{{< release-branch >}}` 中。
|
||||
要将本地化分支合并到新的发行分支 `{{< release-branch >}}` 中,你需要
|
||||
将你本地化分支的上游分支切换到 `{{< release-branch >}}`。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous localization branch and the current localization branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/main/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/main/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch.
|
||||
|
||||
While only approvers can open a new localization branch and merge pull requests, anyone can open a pull request for a new localization branch. No special permissions are required.
|
||||
If your localization branch was created from `main` branch, but it is not merged
|
||||
into `main` before the new release branch `{{< release-branch >}}` created,
|
||||
merge it into both `main` and new release branch `{{< release-branch >}}`. To
|
||||
merge your localization branch into the new release branch
|
||||
`{{< release-branch >}}`, you need to switch the upstream branch of your
|
||||
localization branch to `{{< release-branch >}}`.
|
||||
-->
|
||||
在团队每个里程碑的开始时段,创建一个 issue 来比较先前的本地化分支
|
||||
和当前的本地化分支之间的上游变化很有帮助。
|
||||
现在有两个脚本用来比较上游的变化。
|
||||
[`upstream_changes.py`](https://github.com/kubernetes/website/tree/main/scripts#upstream_changespy)
|
||||
对于检查对某个文件的变更很有用。
|
||||
[`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/main/scripts#diff_l10n_branchespy)
|
||||
可以用来为某个特定本地化分支创建过时文件的列表。
|
||||
|
||||
虽然只有批准人才能创建新的本地化分支并合并 PR,任何人都可以
|
||||
为新的本地化分支提交一个拉取请求(PR)。不需要特殊权限。
|
||||
如果你的本地化分支是基于 `main` 分支创建的,但最终没有在新的发行分支
|
||||
`{{< release-branch >}}` 被创建之前合并到 `main` 中,需要将其同时将其合并到
|
||||
`main` 和新的发行分支 `{{< release-branch >}}` 中。
|
||||
要将本地化分支合并到新的发行分支 `{{< release-branch >}}` 中,
|
||||
你需要将你本地化分支的上游分支切换到 `{{< release-branch >}}`。
|
||||
{{< /note >}}
|
||||
|
||||
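One hypothetical way to perform the base-branch switch the note describes, assuming the localization branch has an open pull request and the GitHub CLI is available; `<pr-number>` and `<release-branch>` are placeholders:

```shell
# Retarget the localization branch's open PR at the new release branch.
gh pr edit <pr-number> --base <release-branch>
```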
<!--
For more information about working from forks or directly from the repository, see ["fork and clone the repo"](#fork-and-clone-the-repo).
At the beginning of every team milestone, it's helpful to open an issue
comparing upstream changes between the previous localization branch and the
current localization branch. There are two scripts for comparing upstream
changes.

- [`upstream_changes.py`](https://github.com/kubernetes/website/tree/main/scripts#upstream_changespy)
is useful for checking the changes made to a specific file. And
- [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/main/scripts#diff_l10n_branchespy)
is useful for creating a list of outdated files for a specific localization
branch.

While only approvers can open a new localization branch and merge pull requests,
anyone can open a pull request for a new localization branch. No special
permissions are required.
-->
在团队每个里程碑的开始时段,创建一个 issue
来比较先前的本地化分支和当前的本地化分支之间的上游变化很有帮助。
现在有两个脚本用来比较上游的变化。

- [`upstream_changes.py`](https://github.com/kubernetes/website/tree/main/scripts#upstream_changespy)
对于检查对某个文件的变更很有用。
- [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/main/scripts#diff_l10n_branchespy)
可以用来为某个特定本地化分支创建过时文件的列表。

虽然只有批准人才能创建新的本地化分支并合并 PR,
任何人都可以为新的本地化分支提交一个拉取请求(PR)。不需要特殊权限。
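The exact command-line interfaces of the two scripts are not documented here, so a safe first step is to ask each one for its usage (assuming they expose a standard help flag):

```shell
# Hypothetical invocations; check the usage output for the real arguments.
./scripts/upstream_changes.py --help
./scripts/diff_l10n_branches.py --help
```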
<!--
For more information about working from forks or directly from the repository,
see ["fork and clone the repo"](#fork-and-clone-the-repo).
-->
有关基于派生或直接从仓库开展工作的更多信息,请参见 ["派生和克隆"](#fork-and-clone-the-repo)。

@ -26,9 +26,10 @@ Case studies require extensive review before they're approved.

<!--
## The Kubernetes Blog

The Kubernetes blog is used by the project to communicate new features, community reports, and any news that might be relevant to the Kubernetes community.
This includes end users and developers.
Most of the blog's content is about things happening in the core project, but we encourage you to submit about things happening elsewhere in the ecosystem too!
The Kubernetes blog is used by the project to communicate new features, community reports, and any
news that might be relevant to the Kubernetes community. This includes end users and developers.
Most of the blog's content is about things happening in the core project, but we encourage you to
submit about things happening elsewhere in the ecosystem too!

Anyone can write a blog post and submit it for review.
-->

@ -45,8 +46,8 @@ Kubernetes 博客用于项目发布新功能特性、

<!--
### Submit a Post

Blog posts should not be commercial in nature and should consist of original content that applies broadly to the Kubernetes community.
Appropriate blog content includes:
Blog posts should not be commercial in nature and should consist of original content that applies
broadly to the Kubernetes community. Appropriate blog content includes:

- New Kubernetes capabilities
- Kubernetes projects updates

@ -76,7 +77,6 @@ Unsuitable content includes:

- Partner updates without an integration and customer story
- Syndicated posts (language translations ok)
-->

不合适的博客内容包括:

- 供应商产品推介

@ -86,12 +86,17 @@ Unsuitable content includes:

<!--
To submit a blog post, follow these steps:

1. [Sign the CLA](https://kubernetes.io/docs/contribute/start/#sign-the-cla) if you have not yet done so.
1. Have a look at the Markdown format for existing blog posts in the [website repository](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts).
1. [Sign the CLA](https://github.com/kubernetes/community/blob/master/CLA.md)
if you have not yet done so.
1. Have a look at the Markdown format for existing blog posts in the
[website repository](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts).
1. Write out your blog post in a text editor of your choice.
1. On the same link from step 2, click the Create new file button. Paste your content into the editor. Name the file to match the proposed title of the blog post, but don’t put the date in the file name. The blog reviewers will work with you on the final file name and the date the blog will be published.
1. On the same link from step 2, click the Create new file button. Paste your content into the editor.
Name the file to match the proposed title of the blog post, but don’t put the date in the file name.
The blog reviewers will work with you on the final file name and the date the blog will be published.
1. When you save the file, GitHub will walk you through the pull request process.
1. A blog post reviewer will review your submission and work with you on feedback and final details. When the blog post is approved, the blog will be scheduled for publication.
1. A blog post reviewer will review your submission and work with you on feedback and final details.
When the blog post is approved, the blog will be scheduled for publication.
-->
要提交博文,你可以遵从以下步骤:

@ -110,9 +115,16 @@ To submit a blog post, follow these steps:

### Guidelines and expectations

- Blog posts should not be vendor pitches.
- Articles must contain content that applies broadly to the Kubernetes community. For example, a submission should focus on upstream Kubernetes as opposed to vendor-specific configurations. Check the [Documentation style guide](/docs/contribute/style/content-guide/#what-s-allowed) for what is typically allowed on Kubernetes properties.
- Links should primarily be to the official Kubernetes documentation. When using external references, links should be diverse - For example a submission shouldn't contain only links back to a single company's blog.
- Sometimes this is a delicate balance. The [blog team](https://kubernetes.slack.com/messages/sig-docs-blog/) is there to give guidance on whether a post is appropriate for the Kubernetes blog, so don't hesitate to reach out.
- Articles must contain content that applies broadly to the Kubernetes community. For example, a
submission should focus on upstream Kubernetes as opposed to vendor-specific configurations.
Check the [Documentation style guide](/docs/contribute/style/content-guide/#what-s-allowed) for
what is typically allowed on Kubernetes properties.
- Links should primarily be to the official Kubernetes documentation. When using external
references, links should be diverse - For example a submission shouldn't contain only links
back to a single company's blog.
- Sometimes this is a delicate balance. The [blog team](https://kubernetes.slack.com/messages/sig-docs-blog/)
is there to give guidance on whether a post is appropriate for the Kubernetes blog, so don't
hesitate to reach out.
-->
### 指导原则和期望 {#guidelines-and-expectations}

@ -130,9 +142,12 @@ To submit a blog post, follow these steps:

所以,需要帮助的时候不要犹豫。
<!--
- Blog posts are not published on specific dates.
- Articles are reviewed by community volunteers. We'll try our best to accommodate specific timing, but we make no guarantees.
- Many core parts of the Kubernetes projects submit blog posts during release windows, delaying publication times. Consider submitting during a quieter period of the release cycle.
- If you are looking for greater coordination on post release dates, coordinating with [CNCF marketing](https://www.cncf.io/about/contact/) is a more appropriate choice than submitting a blog post.
- Articles are reviewed by community volunteers. We'll try our best to accommodate specific
timing, but we make no guarantees.
- Many core parts of the Kubernetes projects submit blog posts during release windows, delaying
publication times. Consider submitting during a quieter period of the release cycle.
- If you are looking for greater coordination on post release dates, coordinating with
[CNCF marketing](https://www.cncf.io/about/contact/) is a more appropriate choice than submitting a blog post.
- Sometimes reviews can get backed up. If you feel your review isn't getting the attention it needs,
you can reach out to the blog team on the [`#sig-docs-blog` Slack channel](https://kubernetes.slack.com/messages/sig-docs-blog/)
to ask in real time.

@ -150,19 +165,24 @@ To submit a blog post, follow these steps:

<!--
- Blog posts should be relevant to Kubernetes users.

- Topics related to participation in or results of Kubernetes SIGs activities are always on
topic (see the work in the [Upstream Marketing Team](https://github.com/kubernetes/community/blob/master/communication/marketing-team/storytelling-resources/blog-guidelines.md#upstream-marketing-blog-guidelines)
topic (see the work in the [Contributor Comms Team](https://github.com/kubernetes/community/blob/master/communication/contributor-comms/storytelling-resources/blog-guidelines.md#upstream-marketing-blog-guidelines)
for support on these posts).
- The components of Kubernetes are purposely modular, so tools that use existing integration points like CNI and CSI are on topic.
- Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team before submitting a draft.
- Many CNCF projects have their own blog. These are often a better choice for posts. There are times of major feature or milestone for a CNCF project that users would be interested in reading on the Kubernetes blog.
- Blog posts about contributing to the Kubernetes project should be in the [Kubernetes Contributors site](https://kubernetes.dev)
- The components of Kubernetes are purposely modular, so tools that use existing integration
points like CNI and CSI are on topic.
- Posts about other CNCF projects may or may not be on topic. We recommend asking the blog team
before submitting a draft.
- Many CNCF projects have their own blog. These are often a better choice for posts. There are
times of major feature or milestone for a CNCF project that users would be interested in
reading on the Kubernetes blog.
- Blog posts about contributing to the Kubernetes project should be in the
[Kubernetes Contributors site](https://kubernetes.dev)
-->
- 博客内容应该对 Kubernetes 用户有用。
- 与参与 Kubernetes SIG 活动相关,或者与这类活动的结果相关的主题通常是切题的。
请参考[上游推广团队](https://github.com/kubernetes/community/blob/master/communication/marketing-team/storytelling-resources/blog-guidelines.md#upstream-marketing-blog-guidelines)的工作以获得对此类博文的支持。
- Kubernetes 的组件都有意设计得模块化,因此使用类似 CNI、CSI 等集成点的工具
通常都是切题的。
请参考 [贡献者沟通(Contributor Comms)团队](https://github.com/kubernetes/community/blob/master/communication/contributor-comms/storytelling-resources/blog-guidelines.md#upstream-marketing-blog-guidelines)的工作以获得对此类博文的支持。
- Kubernetes 的组件都有意设计得模块化,因此使用类似 CNI、CSI 等集成点的工具通常都是切题的。
- 关于其他 CNCF 项目的博客可能切题也可能不切题。
我们建议你在提交草稿之前与博客团队联系。
- 很多 CNCF 项目有自己的博客。这些博客通常是更好的选择。

@ -172,9 +192,11 @@ To submit a blog post, follow these steps:

<!--
- Blog posts should be original content
- The official blog is not for repurposing existing content from a third party as new content.
- The [license](https://github.com/kubernetes/website/blob/main/LICENSE) for the blog allows commercial use of the content for commercial purposes, just not the other way around.
- The [license](https://github.com/kubernetes/website/blob/main/LICENSE) for the blog allows
commercial use of the content for commercial purposes, but not the other way around.
- Blog posts should aim to be future proof
- Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader.
- Given the development velocity of the project, we want evergreen content that won't require
updates to stay accurate for the reader.
- It can be a better choice to add a tutorial or update official documentation than to write a
high level overview as a blog post.
- Consider concentrating the long technical content as a call to action of the blog post, and

@ -187,15 +209,19 @@ To submit a blog post, follow these steps:

- 博客文章的内容应该在一段时间内不过期。
- 考虑到项目的开发速度,我们希望读者看到的是不必更新就能保持长期准确的内容。
- 有时候,在官方文档中添加一个教程或者进行内容更新都是比博客更好的选择。
- 可以考虑在博客文章中将较长技术内容的重点放在鼓励读者自行尝试上,或者
放在问题域本身或者为什么读者应该关注某个话题上。
- 可以考虑在博客文章中将较长技术内容的重点放在鼓励读者自行尝试上,
或者放在问题域本身或者为什么读者应该关注某个话题上。

<!--
### Technical Considerations for submitting a blog post

Submissions need to be in Markdown format to be used by the [Hugo](https://gohugo.io/) generator for the blog. There are [many resources available](https://gohugo.io/documentation/) on how to use this technology stack.
Submissions need to be in Markdown format to be used by the [Hugo](https://gohugo.io/) generator
for the blog. There are [many resources available](https://gohugo.io/documentation/) on how to use
this technology stack.

We recognize that this requirement makes the process more difficult for less-familiar folks to submit, and we're constantly looking at solutions to lower this bar. If you have ideas on how to lower the barrier, please volunteer to help out.
We recognize that this requirement makes the process more difficult for less-familiar folks to
submit, and we're constantly looking at solutions to lower this bar. If you have ideas on how to
lower the barrier, please volunteer to help out.
-->
### 提交博客的技术考虑 {#technical-consideration-for-submitting-a-blog-post}

@ -207,22 +233,30 @@ We recognize that this requirement makes the process more difficult for less-fam

如果你有降低难度的好主意,请自荐帮忙。

<!--
The SIG Docs [blog subproject](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject) manages the review process for blog posts. For more information, see [Submit a post](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post).
The SIG Docs [blog subproject](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject)
manages the review process for blog posts. For more information, see
[Submit a post](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post).

To submit a blog post follow these directions:
-->
SIG Docs [博客子项目](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject) 负责管理博客的评阅过程。
SIG Docs
[博客子项目](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject)负责管博客的评阅过程。
更多信息可参考[提交博文](https://github.com/kubernetes/community/tree/master/sig-docs/blog-subproject#submit-a-post)。

要提交博文,你可以遵从以下指南:

<!--
- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post. New blog posts go under the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/main/content/en/blog/_posts) directory.
- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post.
New blog posts go under the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/main/content/en/blog/_posts)
directory.

- Ensure that your blog post follows the correct naming conventions and the following frontmatter (metadata) information:
- Ensure that your blog post follows the correct naming conventions and the following frontmatter
(metadata) information:

- The Markdown file name must follow the format `YYYY-MM-DD-Your-Title-Here.md`. For example, `2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md`.
- Do **not** include dots in the filename. A name like `2020-01-01-whats-new-in-1.19.md` causes failures during a build.
- The Markdown file name must follow the format `YYYY-MM-DD-Your-Title-Here.md`. For example,
`2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md`.
- Do **not** include dots in the filename. A name like `2020-01-01-whats-new-in-1.19.md` causes
failures during a build.
- The front matter must include the following:
-->
- [发起一个包含新博文的 PR](/zh-cn/docs/contribute/new-content/open-a-pr/#fork-the-repo)。

@ -244,18 +278,25 @@ SIG Docs [博客子项目](https://github.com/kubernetes/community/tree/master/s

slug: text-for-URL-link-here-no-spaces
---
```
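Putting the naming and front matter rules together, a hedged sketch of a complete front matter block; the `layout` and `date` fields are assumptions, and `slug` repeats the fragment shown above:

```yaml
---
# Hypothetical front matter for YYYY-MM-DD-Your-Title-Here.md; layout and
# date are assumed fields, not confirmed by the elided example above.
layout: blog
title: "Your Title Here"
date: 2020-02-07
slug: text-for-URL-link-here-no-spaces
---
```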
<!--
- The first or initial commit message should be a short summary of the work being done and should stand alone as a description of the blog post. Please note that subsequent edits to your blog will be squashed into this main commit, so it should be as useful as possible.

<!--
- The first or initial commit message should be a short summary of the work being done and
should stand alone as a description of the blog post. Please note that subsequent edits to
your blog will be squashed into this main commit, so it should be as useful as possible.

- Examples of a good commit message:
- _Add blog post on the foo kubernetes feature_
- _blog: foobar announcement_
- _Add blog post on the foo kubernetes feature_
- _blog: foobar announcement_
- Examples of bad commit message:
- _Add blog post_
- _._
- _initial commit_
- _draft post_
- The blog team will then review your PR and give you comments on things you might need to fix. After that the bot will merge your PR and your blog post will be published.
-->

- The blog team will then review your PR and give you comments on things you might need to fix.
After that the bot will merge your PR and your blog post will be published.
-->

- 第一个或者最初的提交的描述信息中应该包含一个所作工作的简单摘要,
并作为整个博文的一个独立描述。
请注意,对博文的后续修改编辑都会最终合并到此主提交中,所以此提交的描述信息

@ -271,8 +312,14 @@ SIG Docs [博客子项目](https://github.com/kubernetes/community/tree/master/s

- 博客团队会对 PR 内容进行评阅,为你提供一些评语以便修订。
之后,机器人会将你的博文合并并发表。

<!--
- If the content of the blog post contains only content that is not expected to require updates to stay accurate for the reader, it can be marked as evergreen and exempted from the automatic warning about outdated content added to blog posts older than one year.
<!--
- The blog team will then review your PR and give you comments on things you might need to fix.
After that the bot will merge your PR and your blog post will be published.

- If the content of the blog post contains only content that is not expected to require updates
to stay accurate for the reader, it can be marked as evergreen and exempted from the automatic
warning about outdated content added to blog posts older than one year.

- To mark a blog post as evergreen, add this to the front matter:

```yaml
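# The body of this example falls outside the hunk shown here; the marker is
# assumed (not confirmed by this diff) to be the single field below.
evergreen: true
```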
@ -281,7 +328,7 @@ SIG Docs [博客子项目](https://github.com/kubernetes/community/tree/master/s

- Examples of content that should not be marked evergreen:
- **Tutorials** that only apply to specific releases or versions and not all future versions
- References to pre-GA APIs or features
-->
-->

- 如果博文的内容仅包含预期无需更新就能对读者保持精准的内容,
则可以将这篇博文标记为长期有效(evergreen),

@ -299,13 +346,15 @@ SIG Docs [博客子项目](https://github.com/kubernetes/community/tree/master/s

<!--
## Submit a case study

Case studies highlight how organizations are using Kubernetes to solve
real-world problems. The Kubernetes marketing team and members of the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} collaborate with you on all case studies.
Case studies highlight how organizations are using Kubernetes to solve real-world problems. The
Kubernetes marketing team and members of the {{< glossary_tooltip text="CNCF" term_id="cncf" >}}
collaborate with you on all case studies.

Have a look at the source for the
[existing case studies](https://github.com/kubernetes/website/tree/main/content/en/case-studies).

Refer to the [case study guidelines](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md) and submit your request as outlined in the guidelines.
Refer to the [case study guidelines](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md)
and submit your request as outlined in the guidelines.
-->
## 提交案例分析 {#submit-a-case-study}

@ -1,5 +1,6 @@

---
title: 使用准入控制器
title: 准入控制器参考
linkTitle: 准入控制器
content_type: concept
weight: 30
---

@ -12,7 +13,8 @@ reviewers:

- erictune
- janetkuo
- thockin
title: Using Admission Controllers
title: Admission Controllers Reference
linkTitle: Admission Controllers
content_type: concept
weight: 30
-->

@ -31,9 +33,30 @@ This page provides an overview of Admission Controllers.

## 什么是准入控制插件? {#what-are-they}

<!--
An admission controller is a piece of code that intercepts requests to the
An _admission controller_ is a piece of code that intercepts requests to the
Kubernetes API server prior to persistence of the object, but after the request
is authenticated and authorized. The controllers consist of the
is authenticated and authorized.

Admission controllers may be _validating_, _mutating_, or both. Mutating
controllers may modify related objects to the requests they admit; validating controllers may not.

Admission controllers limit requests to create, delete, modify objects. Admission
controllers can also block custom verbs, such as a request connect to a Pod via
an API server proxy. Admission controllers do _not_ (and cannot) block requests
to read (**get**, **watch** or **list**) objects.
-->
**准入控制器** 是一段代码,它会在请求通过认证和鉴权之后、对象被持久化之前拦截到达 API
服务器的请求。

准入控制器可以执行**验证(Validating)** 和/或**变更(Mutating)** 操作。
变更(mutating)控制器可以根据被其接受的请求更改相关对象;验证(validating)控制器则不行。

准入控制器限制创建、删除、修改对象的请求。
准入控制器也可以阻止自定义动作,例如通过 API 服务器代理连接到 Pod 的请求。
准入控制器**不会** (也不能)阻止读取(**get**、**watch** 或 **list**)对象的请求。

<!--
The admission controllers in Kubernetes {{< skew currentVersion >}} consist of the
[list](#what-does-each-admission-controller-do) below, are compiled into the
`kube-apiserver` binary, and may only be configured by the cluster
administrator. In that list, there are two special controllers:

@ -42,18 +65,15 @@ mutating and validating (respectively)

[admission control webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
which are configured in the API.
-->
准入控制器是一段代码,它会在请求通过认证和授权之后、对象被持久化之前拦截到达 API
服务器的请求。控制器由下面的[列表](#what-does-each-admission-controller-do)组成,
Kubernetes {{< skew currentVersion >}}
中的准入控制器由下面的[列表](#what-does-each-admission-controller-do)组成,
并编译进 `kube-apiserver` 可执行文件,并且只能由集群管理员配置。
在该列表中,有两个特殊的控制器:MutatingAdmissionWebhook 和 ValidatingAdmissionWebhook。
它们根据 API 中的配置,分别执行变更和验证
[准入控制 webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)。
它们根据 API 中的配置,
分别执行变更和验证[准入控制 webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)。

<!--
Admission controllers may be "validating", "mutating", or both. Mutating
controllers may modify related objects to the requests they admit; validating controllers may not.

Admission controllers limit requests to create, delete, modify objects or connect to proxy. They do not limit requests to read objects.
## Admission control phases

The admission control process proceeds in two phases. In the first phase,
mutating admission controllers are run. In the second phase, validating

@ -63,10 +83,7 @@ both.

If any of the controllers in either phase reject the request, the entire
request is rejected immediately and an error is returned to the end-user.
-->
准入控制器可以执行 “验证(Validating)” 和/或 “变更(Mutating)” 操作。
变更(mutating)控制器可以根据被其接受的请求更改相关对象;验证(validating)控制器则不行。

准入控制器限制创建、删除、修改对象或连接到代理的请求,不限制读取对象的请求。
## 准入控制阶段 {#admission-control-phases}

准入控制过程分为两个阶段。第一阶段,运行变更准入控制器。第二阶段,运行验证准入控制器。
再次提醒,某些控制器既是变更准入控制器又是验证准入控制器。

@ -89,20 +106,19 @@ other admission controllers.

<!--
## Why do I need them?

Many advanced features in Kubernetes require an admission controller to be enabled in order
Several important features of Kubernetes require an admission controller to be enabled in order
to properly support the feature. As a result, a Kubernetes API server that is not properly
configured with the right set of admission controllers is an incomplete server and will not
support all the features you expect.
-->
## 为什么需要准入控制器? {#why-do-i-need-them}

Kubernetes 的许多高级功能都要求启用一个准入控制器,以便正确地支持该特性。
Kubernetes 的若干重要功能都要求启用一个准入控制器,以便正确地支持该特性。
因此,没有正确配置准入控制器的 Kubernetes API 服务器是不完整的,它无法支持你所期望的所有特性。

<!--
## How do I turn on an admission controller?

The Kubernetes API server flag `enable-admission-plugins` takes a comma-delimited list of admission control plugins to invoke prior to modifying objects in the cluster.
For example, the following command line enables the `NamespaceLifecycle` and the `LimitRanger`
admission control plugins:
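The example command line referred to above falls outside this hunk; based on the flag just described, it presumably resembles the following, with the trailing ellipsis standing for whatever other flags the server needs:

```shell
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...
```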
@ -159,9 +175,9 @@ kube-apiserver -h | grep enable-admission-plugins
|
|||
```
|
||||
|
||||
<!--
|
||||
In the current version, the default ones are:
|
||||
In Kubernetes {{< skew currentVersion >}}, the default ones are:
|
||||
-->
|
||||
在目前版本中,默认启用的插件有:
|
||||
在 Kubernetes {{< skew currentVersion >}} 中,默认启用的插件有:
|
||||
|
||||
```shell
|
||||
CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook
|
||||
|
@ -177,24 +193,24 @@ CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultI
|
|||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
<!--
|
||||
This admission controller allows all pods into the cluster. It is deprecated because
|
||||
This admission controller allows all pods into the cluster. It is **deprecated** because
|
||||
its behavior is the same as if there were no admission controller at all.
|
||||
-->
|
||||
该准入控制器允许所有的 Pod 进入集群。此插件已被弃用,因其行为与没有准入控制器一样。
|
||||
该准入控制器允许所有的 Pod 进入集群。此插件**已被弃用**,因其行为与没有准入控制器一样。
|
||||
|
||||
### AlwaysDeny {#alwaysdeny}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
<!--
|
||||
Rejects all requests. AlwaysDeny is DEPRECATED as it has no real meaning.
|
||||
Rejects all requests. AlwaysDeny is **deprecated** as it has no real meaning.
|
||||
-->
|
||||
拒绝所有的请求。由于它没有实际意义,已被弃用。
|
||||
拒绝所有的请求。由于它没有实际意义,**已被弃用**。
|
||||
|
||||
### AlwaysPullImages {#alwayspullimages}
|
||||
|
||||
<!--
|
||||
This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
|
||||
This admission controller modifies every new Pod to force the image pull policy to `Always`. This is useful in a
|
||||
multitenant cluster so that users can be assured that their private images can only be used by those
|
||||
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
|
||||
node, any pod from any user can use it by knowing the image's name (assuming the Pod is
|
||||
|
@ -202,7 +218,7 @@ scheduled onto the right node), without any authorization check against the imag
|
|||
is enabled, images are always pulled prior to starting containers, which means valid credentials are
|
||||
required.
|
||||
-->
|
||||
该准入控制器会修改每个新创建的 Pod,将其镜像拉取策略设置为 Always。
|
||||
该准入控制器会修改每个新创建的 Pod,将其镜像拉取策略设置为 `Always`。
|
||||
这在多租户集群中是有用的,这样用户就可以放心,他们的私有镜像只能被那些有凭证的人使用。
|
||||
如果没有这个准入控制器,一旦镜像被拉取到节点上,任何用户的 Pod 都可以通过已了解到的镜像的名称
|
||||
(假设 Pod 被调度到正确的节点上)来使用它,而不需要对镜像进行任何鉴权检查。
|
||||
|
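A sketch of the effect on a minimal Pod (the names and registry below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-app            # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # a private image
    # With AlwaysPullImages enabled, the admission controller rewrites the
    # effective policy to Always, so valid pull credentials are required
    # every time a container starts from this image.
    imagePullPolicy: Always
```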
@ -211,13 +227,13 @@ required.
|
|||
### CertificateApproval {#certificateapproval}
|
||||
|
||||
<!--
|
||||
This admission controller observes requests to 'approve' CertificateSigningRequest resources and performs additional
|
||||
authorization checks to ensure the approving user has permission to `approve` certificate requests with the
|
||||
This admission controller observes requests to approve CertificateSigningRequest resources and performs additional
|
||||
authorization checks to ensure the approving user has permission to **approve** certificate requests with the
|
||||
`spec.signerName` requested on the CertificateSigningRequest resource.
|
||||
-->
|
||||
此准入控制器获取“审批” CertificateSigningRequest 资源的请求并执行额外的鉴权检查,
|
||||
此准入控制器获取审批 CertificateSigningRequest 资源的请求并执行额外的鉴权检查,
|
||||
以确保针对设置了 `spec.signerName` 的 CertificateSigningRequest 资源而言,
|
||||
审批请求的用户有权限对证书请求执行 `approve` 操作。
|
||||
审批请求的用户有权限对证书请求执行 **审批** 操作。
|
||||
|
||||
<!--
|
||||
See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
|
||||
|
@ -230,12 +246,12 @@ information on the permissions required to perform different actions on Certific
|
|||
|
||||
<!--
|
||||
This admission controller observes updates to the `status.certificate` field of CertificateSigningRequest resources
|
||||
and performs an additional authorization checks to ensure the signing user has permission to `sign` certificate
|
||||
and performs an additional authorization checks to ensure the signing user has permission to **sign** certificate
|
||||
requests with the `spec.signerName` requested on the CertificateSigningRequest resource.
|
||||
-->
|
||||
此准入控制器监视对 CertificateSigningRequest 资源的 `status.certificate` 字段的更新请求,
|
||||
并执行额外的鉴权检查,以确保针对设置了 `spec.signerName` 的 CertificateSigningRequest 资源而言,
|
||||
签发证书的用户有权限对证书请求执行 `sign` 操作。
|
||||
签发证书的用户有权限对证书请求执行 **签发** 操作。
|
||||
|
||||
<!--
|
||||
See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
|
||||
|
@ -280,7 +296,7 @@ updates; it acts only on creation.
|
|||
此准入控制器会忽略所有 `Ingress` 更新操作,仅处理创建操作。
|
||||
|
||||
<!--
|
||||
See the [ingress](/docs/concepts/services-networking/ingress/) documentation for more about ingress
|
||||
See the [Ingress](/docs/concepts/services-networking/ingress/) documentation for more about ingress
|
||||
classes and how to mark one as default.
|
||||
-->
|
||||
关于 Ingress 类以及如何将 Ingress 类标记为默认的更多信息,请参见
|
||||
|
@ -320,7 +336,7 @@ storage classes and how to mark a storage class as default.
|
|||
<!--
|
||||
This admission controller sets the default forgiveness toleration for pods to tolerate
|
||||
the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters
|
||||
`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
|
||||
`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
|
||||
have toleration for taints `node.kubernetes.io/not-ready:NoExecute` or
|
||||
`node.kubernetes.io/unreachable:NoExecute`.
|
||||
The default value for `default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` is 5 minutes.
|
||||
|
@ -366,9 +382,9 @@ This admission controller is disabled by default.
|
|||
|
||||
<!--
|
||||
This admission controller mitigates the problem where the API server gets flooded by
|
||||
event requests. The cluster admin can specify event rate limits by:
|
||||
requests to store new Events. The cluster admin can specify event rate limits by:
|
||||
-->
|
||||
此准入控制器缓解了事件请求淹没 API 服务器的问题。集群管理员可以通过以下方式指定事件速率限制:
|
||||
此准入控制器缓解了请求存储新事件时淹没 API 服务器的问题。集群管理员可以通过以下方式指定事件速率限制:
|
||||
|
||||
<!--
|
||||
* Enabling the `EventRateLimit` admission controller;
|
||||
|
@ -394,16 +410,16 @@ There are four types of limits that can be specified in the configuration:
|
|||
可以在配置中指定的限制有四种类型:
|
||||
|
||||
<!--
|
||||
* `Server`: All event requests received by the API server share a single bucket.
|
||||
* `Server`: All Event requests (creation or modifications) received by the API server share a single bucket.
|
||||
* `Namespace`: Each namespace has a dedicated bucket.
|
||||
* `User`: Each user is allocated a bucket.
|
||||
* `SourceAndObject`: A bucket is assigned by each combination of source and
|
||||
involved object of the event.
|
||||
-->
|
||||
* `Server`: API 服务器收到的所有事件请求共享一个桶。
|
||||
* `Namespace`: 每个名字空间都对应一个专用的桶。
|
||||
* `User`: 为每个用户分配一个桶。
|
||||
* `SourceAndObject`: 根据事件的来源和涉及对象的各种组合分配桶。
|
||||
* `Server`:API 服务器收到的所有(创建或修改)Event 请求共享一个桶。
|
||||
* `Namespace`:每个名字空间都对应一个专用的桶。
|
||||
* `User`:为每个用户分配一个桶。
|
||||
* `SourceAndObject`:根据事件的来源和涉及对象的各种组合分配桶。
|
||||
|
||||
<!--
|
||||
Below is a sample `eventconfig.yaml` for such a configuration:
|
||||
|
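A sketch of the shape that file takes (the qps/burst values are illustrative):

```yaml
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  # one shared bucket per namespace
  - type: Namespace
    qps: 50
    burst: 100
    cacheSize: 2000
  # one bucket per user
  - type: User
    qps: 10
    burst: 50
```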
@ -466,12 +482,12 @@ ImagePolicyWebhook 准入控制器允许使用后端 Webhook 做出准入决策
|
|||
此准入控制器默认被禁用。
|
||||
|
||||
<!--
|
||||
#### Configuration File Format
|
||||
#### Configuration file format {#imagereview-config-file-format}
|
||||
|
||||
ImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.
|
||||
This file may be json or yaml and has the following format:
|
||||
-->
|
||||
#### 配置文件格式 {#configuration-file-format}
|
||||
#### 配置文件格式 {#imagereview-config-file-format}
|
||||
|
||||
ImagePolicyWebhook 使用配置文件来为后端行为设置选项。该文件可以是 JSON 或 YAML,
|
||||
并具有以下格式:
|
||||
|
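A sketch of that file in its YAML form (field values are illustrative):

```yaml
imagePolicy:
  kubeConfigFile: /path/to/kubeconfig/for/backend  # how to reach the webhook backend
  allowTTL: 50          # seconds to cache an allow decision
  denyTTL: 50           # seconds to cache a deny decision
  retryBackoff: 500     # milliseconds to wait between retries
  defaultAllow: true    # behavior if the backend cannot be reached
```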
@ -638,11 +654,11 @@ An example request body:
|
|||
```
|
||||
|
||||
<!--
|
||||
The remote service is expected to fill the `ImageReviewStatus` field of the request and
|
||||
respond to either allow or disallow access. The response body's `spec` field is ignored and
|
||||
The remote service is expected to fill the `status` field of the request and
|
||||
respond to either allow or disallow access. The response body's `spec` field is ignored, and
|
||||
may be omitted. A permissive response would return:
|
||||
-->
|
||||
远程服务将填充请求的 `ImageReviewStatus` 字段,并返回允许或不允许访问的响应。
|
||||
远程服务将填充请求的 `status` 字段,并返回允许或不允许访问的响应。
|
||||
响应体的 `spec` 字段会被忽略,并且可以被省略。一个允许访问应答会返回:
|
||||
|
||||
```json
|
||||
|
@ -904,15 +920,15 @@ permissions required to operate correctly.
|
|||
|
||||
<!--
|
||||
This admission controller protects the access to the `metadata.ownerReferences` of an object
|
||||
so that only users with "delete" permission to the object can change it.
|
||||
so that only users with **delete** permission to the object can change it.
|
||||
This admission controller also protects the access to `metadata.ownerReferences[x].blockOwnerDeletion`
|
||||
of an object, so that only users with "update" permission to the `finalizers`
|
||||
of an object, so that only users with **update** permission to the `finalizers`
|
||||
subresource of the referenced *owner* can change it.
|
||||
-->
|
||||
此准入控制器保护对对象的 `metadata.ownerReferences` 的访问,以便只有对该对象具有
|
||||
“delete” 权限的用户才能对其进行更改。
|
||||
**delete** 权限的用户才能对其进行更改。
|
||||
该准入控制器还保护对 `metadata.ownerReferences[x].blockOwnerDeletion` 对象的访问,
|
||||
以便只有对所引用的 **属主(owner)** 的 `finalizers` 子资源具有 “update”
|
||||
以便只有对所引用的 **属主(owner)** 的 `finalizers` 子资源具有 **update**
|
||||
权限的用户才能对其进行更改。
|
||||
|
||||
### PersistentVolumeClaimResize {#persistentvolumeclaimresize}
|
||||
|
@ -964,21 +980,21 @@ For more information about persistent volume claims, see [PersistentVolumeClaims
|
|||
|
||||
<!--
|
||||
This admission controller automatically attaches region or zone labels to PersistentVolumes
|
||||
as defined by the cloud provider (for example, GCE or AWS).
|
||||
as defined by the cloud provider (for example, Azure or GCP).
|
||||
It helps ensure the Pods and the PersistentVolumes mounted are in the same
|
||||
region and/or zone.
|
||||
If the admission controller doesn't support automatic labelling of your PersistentVolumes, you
|
||||
may need to add the labels manually to prevent pods from mounting volumes from
|
||||
a different zone. PersistentVolumeLabel is DEPRECATED and labeling persistent volumes has been taken over by
|
||||
a different zone. PersistentVolumeLabel is **deprecated** as labeling for persistent volumes has been taken over by
|
||||
the {{< glossary_tooltip text="cloud-controller-manager" term_id="cloud-controller-manager" >}}.
|
||||
|
||||
This admission controller is disabled by default.
|
||||
-->
|
||||
此准入控制器会自动将由云提供商(如 GCE、AWS)定义的区(region)或区域(zone)
|
||||
此准入控制器会自动将由云提供商(如 Azure 或 GCP)定义的区(region)或区域(zone)
|
||||
标签附加到 PersistentVolume 上。这有助于确保 Pod 和 PersistentVolume 位于相同的区或区域。
|
||||
如果准入控制器不支持为 PersistentVolumes 自动添加标签,那你可能需要手动添加标签,
|
||||
以防止 Pod 挂载其他区域的卷。
|
||||
PersistentVolumeLabel 已被弃用,
|
||||
PersistentVolumeLabel **已被弃用**,
|
||||
为持久卷添加标签的操作已由{{< glossary_tooltip text="云管理控制器" term_id="cloud-controller-manager" >}}接管。
|
||||
|
||||
此准入控制器默认被禁用。
|
||||
|
@ -1265,13 +1281,14 @@ pod privileges.
|
|||
<!--
|
||||
This admission controller implements automation for
|
||||
[serviceAccounts](/docs/tasks/configure-pod-container/configure-service-account/).
|
||||
We strongly recommend using this admission controller if you intend to make use of Kubernetes
|
||||
The Kubernetes project strongly recommends enabling this admission controller.
|
||||
You should enable this admission controller if you intend to make any use of Kubernetes
|
||||
`ServiceAccount` objects.
|
||||
-->
|
||||
此准入控制器实现了
|
||||
[ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)
|
||||
的自动化。
|
||||
如果你打算使用 Kubernetes 的 ServiceAccount 对象,我们强烈建议你使用这个准入控制器。
|
||||
的自动化。Kubernetes 项目强烈建议启用此准入控制器。
|
||||
如果你打算使用 Kubernetes 的 `ServiceAccount` 对象,你应启用这个准入控制器。
|
||||
|
||||
### StorageObjectInUseProtection {#storageobjectinuseprotection}
|
||||
|
||||
|
@ -1298,7 +1315,8 @@ Nodes as `NotReady` and `NoSchedule`. That tainting avoids a race condition that
|
|||
to be scheduled on new Nodes before their taints were updated to accurately reflect their reported
|
||||
conditions.
|
||||
-->
|
||||
该准入控制器为新创建的节点添加 `NotReady` 和 `NoSchedule` {{< glossary_tooltip text="污点" term_id="taint" >}}。
|
||||
该准入控制器为新创建的节点添加 `NotReady` 和 `NoSchedule`
|
||||
{{< glossary_tooltip text="污点" term_id="taint" >}}。
|
||||
这些污点能够避免一些竞态条件的发生,而这类竞态条件可能导致 Pod
|
||||
在更新节点污点以准确反映其所报告状况之前,就被调度到新节点上。
|
||||
|
||||
|
@ -1340,7 +1358,7 @@ so you do not need to explicitly specify them.
|
|||
You can enable additional admission controllers beyond the default set using the
|
||||
`--enable-admission-plugins` flag (**order doesn't matter**).
|
||||
-->
|
||||
## 有推荐的准入控制器吗?
|
||||
## 有推荐的准入控制器吗? {#is-there-a-recommended-set-of-admission-controllers-to-use}
|
||||
|
||||
有。推荐使用的准入控制器默认情况下都处于启用状态
|
||||
(请查看[这里](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/#options))。
|
||||
|
|
|
@ -187,7 +187,7 @@ This would create a CSR for the username "jbeda", belonging to two groups, "app1
|
|||
|
||||
See [Managing Certificates](/docs/tasks/administer-cluster/certificates/) for how to generate a client cert.
|
||||
-->
|
||||
此命令将使用用户名 `jbeda` 生成一个证书签名请求(CSR),且该用户属于 "app" 和
|
||||
此命令将使用用户名 `jbeda` 生成一个证书签名请求(CSR),且该用户属于 "app1" 和
|
||||
"app2" 两个用户组。
|
||||
|
||||
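The command being described is, in sketch form (the key file name is illustrative):

```shell
openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app2"
```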
参阅[管理证书](/zh-cn/docs/tasks/administer-cluster/certificates/)了解如何生成客户端证书。
|
||||
|
|
|
@ -307,7 +307,7 @@ For a reference to old feature gates that are removed, please refer to
|
|||
| `DefaultPodTopologySpread` | `true` | GA | 1.24 | - |
|
||||
| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
|
||||
| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 |
|
||||
| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.25 |- |
|
||||
| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 |- |
|
||||
| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
|
||||
| `DryRun` | `true` | Beta | 1.13 | 1.18 |
|
||||
| `DryRun` | `true` | GA | 1.19 | - |
|
||||
|
@ -478,8 +478,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
|
||||
- `APIServerIdentity`: Assign each API server an ID in a cluster.
|
||||
- `APIServerTracing`: Add support for distributed tracing in the API server.
|
||||
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces)
|
||||
for more details.
|
||||
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details.
|
||||
-->
|
||||
- `APIListChunking`:启用 API 客户端以块的形式从 API 服务器检索(“LIST” 或 “GET”)资源。
|
||||
- `APIPriorityAndFairness`:在每个服务器上启用优先级和公平性来管理请求并发(由 `RequestManagement` 重命名而来)。
|
||||
|
@ -947,8 +946,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
- `KubeletTracing`: Add support for distributed tracing in the kubelet.
|
||||
When enabled, kubelet CRI interface and authenticated http servers are instrumented to generate
|
||||
OpenTelemetry trace spans.
|
||||
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces)
|
||||
for more details.
|
||||
See [Traces for Kubernetes System Components](/docs/concepts/cluster-administration/system-traces) for more details.
|
||||
- `LegacyServiceAccountTokenNoAutoGeneration`: Stop auto-generation of Secret-based
|
||||
[service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens).
|
||||
-->
|
||||
|
|
|
@ -1,14 +1,14 @@
|
|||
---
|
||||
title: PKI 证书和要求
|
||||
content_type: concept
|
||||
weight: 40
|
||||
weight: 50
|
||||
---
|
||||
<!--
|
||||
title: PKI certificates and requirements
|
||||
reviewers:
|
||||
- sig-cluster-lifecycle
|
||||
content_type: concept
|
||||
weight: 40
|
||||
weight: 50
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -54,17 +54,17 @@ Kubernetes 需要 PKI 才能执行以下操作:
|
|||
* 集群管理员的客户端证书,用于 API 服务器身份认证
|
||||
* API 服务器的客户端证书,用于和 Kubelet 的会话
|
||||
* API 服务器的客户端证书,用于和 etcd 的会话
|
||||
* 控制器管理器的客户端证书/kubeconfig,用于和 API 服务器的会话
|
||||
* 调度器的客户端证书/kubeconfig,用于和 API 服务器的会话
|
||||
* [前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/) 的客户端及服务端证书
|
||||
* 控制器管理器的客户端证书或 kubeconfig,用于和 API 服务器的会话
|
||||
* 调度器的客户端证书或 kubeconfig,用于和 API 服务器的会话
|
||||
* [前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/)的客户端及服务端证书
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
`front-proxy` certificates are required only if you run kube-proxy to support [an extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/).
|
||||
-->
|
||||
{{< note >}}
|
||||
只有当你运行 kube-proxy 并要支持
|
||||
[扩展 API 服务器](/zh-cn/docs/tasks/extend-kubernetes/setup-extension-api-server/)
|
||||
时,才需要 `front-proxy` 证书
|
||||
只有当你运行 kube-proxy
|
||||
并要支持[扩展 API 服务器](/zh-cn/docs/tasks/extend-kubernetes/setup-extension-api-server/)时,
|
||||
才需要 `front-proxy` 证书。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -121,7 +121,7 @@ On top of the above CAs, it is also necessary to get a public/private key pair f
|
|||
|------------------------|---------------------------|----------------------------------|
|
||||
| ca.crt,key | kubernetes-ca | Kubernetes 通用 CA |
|
||||
| etcd/ca.crt,key | etcd-ca | 与 etcd 相关的所有功能 |
|
||||
| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | 用于 [前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/) |
|
||||
| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | 用于[前端代理](/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/) |
|
||||
|
||||
上面的 CA 之外,还需要获取用于服务账户管理的密钥对,也就是 `sa.key` 和 `sa.pub`。
|
||||
|
||||
|
@ -182,7 +182,7 @@ where `kind` maps to one or more of the [x509 key usage](https://pkg.go.dev/k8s.
|
|||
-->
|
||||
[1]: 用来连接到集群的不同 IP 或 DNS 名
|
||||
(就像 [kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/) 为负载均衡所使用的固定
|
||||
IP 或 DNS 名,`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、
|
||||
IP 或 DNS 名:`kubernetes`、`kubernetes.default`、`kubernetes.default.svc`、
|
||||
`kubernetes.default.svc.cluster`、`kubernetes.default.svc.cluster.local`)。
|
||||
|
||||
其中,`kind` 对应一种或多种类型的 [x509 密钥用途](https://pkg.go.dev/k8s.io/api/certificates/v1beta1#KeyUsage):
|
||||
|
@ -216,7 +216,8 @@ For kubeadm users only:
|
|||
对于 kubeadm 用户:
|
||||
|
||||
* 不使用私钥,将证书复制到集群 CA 的方案,在 kubeadm 文档中将这种方案称为外部 CA。
|
||||
* 如果将以上列表与 kubeadm 生成的 PKI 进行比较,你会注意到,如果使用外部 etcd,则不会生成 `kube-etcd`、`kube-etcd-peer` 和 `kube-etcd-healthcheck-client` 证书。
|
||||
* 如果将以上列表与 kubeadm 生成的 PKI 进行比较,你会注意到,如果使用外部 etcd,则不会生成
|
||||
`kube-etcd`、`kube-etcd-peer` 和 `kube-etcd-healthcheck-client` 证书。
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -337,10 +338,10 @@ You must manually configure these administrator account and service accounts:
|
|||
| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
|
||||
| scheduler.conf | default-scheduler | system:kube-scheduler | |
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The value of `<nodeName>` for `kubelet.conf` **must** match precisely the value of the node name provided by the kubelet as it registers with the apiserver. For further details, read the [Node Authorization](/docs/reference/access-authn-authz/node/).
|
||||
-->
|
||||
{{< note >}}
|
||||
`kubelet.conf` 中 `<nodeName>` 的值 **必须** 与 kubelet 向 apiserver 注册时提供的节点名称的值完全匹配。
|
||||
有关更多详细信息,请阅读[节点授权](/zh-cn/docs/reference/access-authn-authz/node/)。
|
||||
{{< /note >}}
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
title: 配置对多集群的访问
|
||||
content_type: task
|
||||
weight: 30
|
||||
card:
|
||||
name: tasks
|
||||
weight: 40
|
||||
|
@ -32,24 +33,20 @@ A file that is used to configure access to a cluster is sometimes called
|
|||
a *kubeconfig file*. This is a generic way of referring to configuration files.
|
||||
It does not mean that there is a file named `kubeconfig`.
|
||||
-->
|
||||
用于配置集群访问的文件有时被称为 *kubeconfig 文件*。
|
||||
用于配置集群访问的文件有时被称为 **kubeconfig 文件**。
|
||||
这是一种引用配置文件的通用方式,并不意味着存在一个名为 `kubeconfig` 的文件。
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
<!--
|
||||
{{< warning >}}
|
||||
<!--
|
||||
Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig
|
||||
file could result in malicious code execution or file exposure.
|
||||
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.
|
||||
{{< /warning>}}
|
||||
-->
|
||||
{{< warning >}}
|
||||
只使用来源可靠的 kubeconfig 文件。使用特制的 kubeconfig 文件可能会导致恶意代码执行或文件暴露。
|
||||
如果必须使用不受信任的 kubeconfig 文件,请首先像检查 shell 脚本一样仔细检查它。
|
||||
{{< /warning>}}
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}}
|
||||
|
@ -62,8 +59,8 @@ cluster's API server.
|
|||
-->
|
||||
要检查 {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} 是否安装,
|
||||
执行 `kubectl version --client` 命令。
|
||||
kubectl 的版本应该与集群的 API 服务器
|
||||
[使用同一次版本号](/zh-cn/releases/version-skew-policy/#kubectl)。
|
||||
kubectl 的版本应该与集群的 API
|
||||
服务器[使用同一次版本号](/zh-cn/releases/version-skew-policy/#kubectl)。
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
|
@ -138,6 +135,15 @@ Add user details to your configuration file:
|
|||
-->
|
||||
将用户详细信息添加到配置文件中:
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Storing passwords in Kubernetes client config is risky. A better alternative would be to use a credential plugin and store them separately. See: [client-go credential plugins](/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins)
|
||||
-->
|
||||
将密码保存到 Kubernetes 客户端配置中有风险。
|
||||
一个较好的替代方式是使用凭据插件并单独保存这些凭据。
|
||||
参阅 [client-go 凭据插件](/zh-cn/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins)。
|
||||
{{< /caution >}}
|
||||
|
||||
```shell
|
||||
kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
|
||||
kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password
|
||||
|
@ -218,6 +224,10 @@ users:
|
|||
client-key: fake-key-file
|
||||
- name: experimenter
|
||||
user:
|
||||
# 文档说明(本注释不属于命令输出)。
|
||||
# 将密码保存到 Kubernetes 客户端配置有风险。
|
||||
# 一个较好的替代方式是使用凭据插件并单独保存这些凭据。
|
||||
# 参阅 https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
|
||||
password: some-password
|
||||
username: exp
|
||||
```
|
||||
|
@ -320,7 +330,6 @@ listed in the `exp-scratch` context.
|
|||
|
||||
View configuration associated with the new current context, `exp-scratch`.
|
||||
-->
|
||||
|
||||
现在你发出的所有 `kubectl` 命令都将应用于 `scratch` 集群的默认名字空间。
|
||||
同时,命令会使用 `exp-scratch` 上下文中所列用户的凭证。
|
||||
|
||||
|
@ -358,7 +367,6 @@ kubectl config --kubeconfig=config-demo view --minify
|
|||
|
||||
In your `config-exercise` directory, create a file named `config-demo-2` with this content:
|
||||
-->
|
||||
|
||||
## 创建第二个配置文件 {#create-a-second-configuration-file}
|
||||
|
||||
在 `config-exercise` 目录中,创建名为 `config-demo-2` 的文件,其中包含以下内容:
|
||||
|
@ -479,8 +487,8 @@ contexts:
|
|||
For more information about how kubeconfig files are merged, see
|
||||
[Organizing Cluster Access Using kubeconfig Files](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
-->
|
||||
关于 kubeconfig 文件如何合并的更多信息,请参考
|
||||
[使用 kubeconfig 文件组织集群访问](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
关于 kubeconfig 文件如何合并的更多信息,
|
||||
请参考[使用 kubeconfig 文件组织集群访问](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
|
||||
<!--
|
||||
## Explore the $HOME/.kube directory
|
||||
|
@ -563,7 +571,6 @@ $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED
|
|||
* [Organizing Cluster Access Using kubeconfig Files](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config)
|
||||
-->
|
||||
|
||||
* [使用 kubeconfig 文件组织集群访问](/zh-cn/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config)
|
||||
|
||||
|
|
|
@ -248,16 +248,16 @@ resources:
|
|||
```
|
||||
|
||||
{{< note >}}
|
||||
|
||||
<!--
|
||||
A `LimitRange` does **not** check the consistency of the default values it applies. This means that a default value for the _limit_ that is set by `LimitRange` may be less than the _request_ value specified for the container in the spec that a client submits to the API server. If that happens, the final Pod will not be schedulable.
|
||||
See [Constraints on resource limits and requests](/docs/concepts/policy/limit-range/#constraints-on-resource-limits-and-requests) for more details.
|
||||
-->
|
||||
`LimitRange` **不会**去检查它所应用的默认值的一致性。这意味着由 `LimitRange`
|
||||
设置的 limit 的默认值可能小于客户端提交给 API 服务器的规约中为容器指定的请求值。
|
||||
如果发生这种情况,最终的 Pod 将无法被调度。
|
||||
更多的相关信息细节,请参考[对资源限制和请求的约束](/zh-cn/docs/concepts/policy/limit-range/)。
|
||||
{{< /note >}}
|
||||
`LimitRange` **不会**检查它应用的默认值的一致性。 这意味着 `LimitRange` 设置的 _limit_ 的默认值可能小于客户端提交给
|
||||
API 服务器的声明中为容器指定的 _request_ 值。如果发生这种情况,最终会导致 Pod 无法调度。更多信息,
|
||||
请参阅[资源限制的 limit 和 request](/zh-cn/docs/concepts/policy/limit-range/#constraints-on-resource-limits-and-requests)。
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
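A minimal sketch of the failure mode described above (values are illustrative): the LimitRange below defaults the memory limit to 256Mi, so a container that explicitly requests 512Mi while relying on the defaulted limit ends up with a limit smaller than its request and cannot be scheduled.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range     # illustrative name
spec:
  limits:
  - type: Container
    default:
      memory: 256Mi         # default limit applied when a container omits one
```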
<!--
|
||||
## Motivation for default memory limits and requests
|
||||
|
|
|
@ -492,7 +492,7 @@ your backups using a well reviewed backup and encryption solution, and consider
|
|||
encryption where possible.
|
||||
|
||||
Kubernetes supports [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/), a feature
|
||||
introduced in 1.7, and beta since 1.13. This will encrypt `Secret` resources in etcd, preventing
|
||||
introduced in 1.7, v1 beta since 1.13, and v2 alpha since 1.25. This will encrypt resources like `Secret` and `ConfigMap` in etcd, preventing
|
||||
parties that gain access to your etcd backups from viewing the content of those secrets. While
|
||||
this feature is currently beta, it offers an additional level of defense when backups
|
||||
are not encrypted or an attacker gains read access to etcd.
|
||||
|
@ -505,8 +505,8 @@ are not encrypted or an attacker gains read access to etcd.
|
|||
并考虑在可能的情况下使用全盘加密。
|
||||
|
||||
Kubernetes 支持[静态数据加密](/zh-cn/docs/tasks/administer-cluster/encrypt-data/)。
|
||||
该功能在 1.7 版本引入,并在 1.13 版本成为 Beta。
|
||||
它会加密 etcd 里面的 `Secret` 资源,以防止某一方通过查看 etcd 的备份文件查看到这些
|
||||
该功能在 1.7 版引入,在 1.13 版成为 v1 Beta,在 1.25 版成为 v2 Alpha。
|
||||
它会加密 etcd 里面的 `Secret` 和 `ConfigMap` 资源,以防止某一方通过查看 etcd 的备份文件查看到这些
|
||||
Secret 的内容。虽然目前该功能还只是 Beta 阶段,
|
||||
在备份未被加密或者攻击者获取到 etcd 的读访问权限时,它仍能提供额外的防御层级。
|
||||
|
||||
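A sketch of the EncryptionConfiguration that drives this feature (the key below is a placeholder; generate a real one, for example with `head -c 32 /dev/urandom | base64`):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
      - configmaps
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: PLACEHOLDER_BASE64_32_BYTE_KEY   # placeholder, not a real key
      - identity: {}   # fallback so previously stored plaintext stays readable
```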
|
|
|
@ -207,7 +207,7 @@ type:
|
|||
```
|
||||
|
||||
<!--
|
||||
### Specifying both `data` and `stringData`
|
||||
### Specify both `data` and `stringData`
|
||||
|
||||
If you specify a field in both `data` and `stringData`, the value from `stringData` is used.
|
||||
|
||||
|
@ -255,6 +255,90 @@ type: Opaque
|
|||
-->
|
||||
`YWRtaW5pc3RyYXRvcg==` 解码成 `administrator`。
|
||||
|
||||
<!--
|
||||
## Edit a Secret {#edit-secret}
|
||||
|
||||
To edit the data in the Secret you created using a manifest, modify the `data`
|
||||
or `stringData` field in your manifest and apply the file to your
|
||||
cluster. You can edit an existing `Secret` object unless it is
|
||||
[immutable](/docs/concepts/configuration/secret/#secret-immutable).
|
||||
|
||||
For example, if you want to change the password from the previous example to
|
||||
`birdsarentreal`, do the following:
|
||||
|
||||
1. Encode the new password string:
|
||||
-->
|
||||
## 编辑 Secret {#edit-secret}
|
||||
|
||||
要编辑使用清单创建的 Secret 中的数据,请修改清单中的 `data` 或 `stringData` 字段并将此清单文件应用到集群。
|
||||
你可以编辑现有的 `Secret` 对象,除非它是[不可变的](/zh-cn/docs/concepts/configuration/secret/#secret-immutable)。
|
||||
|
||||
例如,如果你想将上一个示例中的密码更改为 `birdsarentreal`,请执行以下操作:
|
||||
|
||||
1. 编码新密码字符串:
|
||||
|
||||
```shell
|
||||
echo -n 'birdsarentreal' | base64
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
YmlyZHNhcmVudHJlYWw=
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Update the `data` field with your new password string:
|
||||
-->
|
||||
2. 使用你的新密码字符串更新 `data` 字段:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: mysecret
|
||||
type: Opaque
|
||||
data:
|
||||
username: YWRtaW4=
|
||||
password: YmlyZHNhcmVudHJlYWw=
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Apply the manifest to your cluster:
|
||||
-->
|
||||
3. 将清单应用到你的集群:
|
||||
|
||||
```shell
|
||||
kubectl apply -f ./secret.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
secret/mysecret configured
|
||||
```
|
||||
|
||||
<!--
|
||||
Kubernetes updates the existing `Secret` object. In detail, the `kubectl` tool
|
||||
notices that there is an existing `Secret` object with the same name. `kubectl`
|
||||
fetches the existing object, plans changes to it, and submits the changed
|
||||
`Secret` object to your cluster control plane.
|
||||
|
||||
If you specified `kubectl apply --server-side` instead, `kubectl` uses
|
||||
[Server Side Apply](/docs/reference/using-api/server-side-apply/) instead.
|
||||
-->
|
||||
Kubernetes 更新现有的 `Secret` 对象。具体而言,`kubectl` 工具发现存在一个同名的现有 `Secret` 对象。
|
||||
`kubectl` 获取现有对象,计划对其进行更改,并将更改后的 `Secret` 对象提交到你的集群控制平面。
|
||||
|
||||
如果你指定了 `kubectl apply --server-side`,则 `kubectl`
|
||||
使用[服务器端应用(Server-Side Apply)](/zh-cn/docs/reference/using-api/server-side-apply/)。
|
||||
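For example, applying the same manifest server-side:

```shell
kubectl apply --server-side -f ./secret.yaml
```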
|
||||
<!--
|
||||
## Clean Up
|
||||
-->
|
||||
|
@ -273,10 +357,10 @@ kubectl delete secret mysecret
|
|||
|
||||
<!--
|
||||
- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
|
||||
- Learn how to [manage Secrets with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
|
||||
- Learn how to [manage Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
|
||||
- Learn how to [manage Secrets using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
|
||||
-->
|
||||
- 进一步阅读 [Secret 概念](/zh-cn/docs/concepts/configuration/secret/)
|
||||
- 了解如何[使用 `kubectl` 命令管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
|
||||
- 了解如何[使用 `kubectl` 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
|
||||
- 了解如何[使用 kustomize 管理 Secret](/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
|
||||
|
||||
|
|
|
@ -85,7 +85,7 @@ When developing an application on Kubernetes, you typically program or debug a s
|
|||
一种选择是使用连续部署流水线,但即使最快的部署流水线也会在程序或调试周期中引入延迟。
|
||||
|
||||
<!--
|
||||
Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:REMOTE_PORT` command to create an "intercept" for rerouting remote service traffic.
|
||||
Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT` command to create an "intercept" for rerouting remote service traffic.
|
||||
|
||||
Where:
|
||||
|
||||
|
@ -94,7 +94,7 @@ Where:
|
|||
- And `$REMOTE_PORT` is the port your service listens to in the cluster
|
||||
-->
|
||||
|
||||
使用 `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:REMOTE_PORT` 命令创建一个 "拦截器" 用于重新路由远程服务流量。
|
||||
使用 `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT` 命令创建一个 "拦截器" 用于重新路由远程服务流量。
|
||||
|
||||
环境变量:
|
||||
|
||||
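A concrete invocation might look like this (the service name and ports are illustrative):

```shell
telepresence intercept dataprocessingservice --port 3000:8080
```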
|
|
|
@ -18,27 +18,26 @@ weight: 10
|
|||
<!--
|
||||
Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes APIs.
|
||||
-->
|
||||
配置[聚合层](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
|
||||
可以允许 Kubernetes apiserver 使用其它 API 扩展,这些 API 不是核心
|
||||
Kubernetes API 的一部分。
|
||||
配置[聚合层](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)可以允许
|
||||
Kubernetes apiserver 使用其它 API 扩展,这些 API 不是核心 Kubernetes API 的一部分。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
There are a few setup requirements for getting the aggregation layer working in your environment to support mutual TLS auth between the proxy and extension apiservers. Kubernetes and the kube-apiserver have multiple CAs, so make sure that the proxy is signed by the aggregation layer CA and not by something else, like the Kubernetes general CA.
|
||||
-->
|
||||
{{< note >}}
|
||||
要使聚合层在你的环境中正常工作以支持代理服务器和扩展 apiserver 之间的相互 TLS 身份验证,
|
||||
需要满足一些设置要求。Kubernetes 和 kube-apiserver 具有多个 CA,
|
||||
因此请确保代理是由聚合层 CA 签名的,而不是由 Kubernetes 通用 CA 签名的。
|
||||
{{< /note >}}
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Reusing the same CA for different client types can negatively impact the cluster's ability to function. For more information, see [CA Reusage and Conflicts](#ca-reusage-and-conflicts).
|
||||
-->
|
||||
{{< caution >}}
|
||||
对不同的客户端类型重复使用相同的 CA 会对集群的功能产生负面影响。
|
||||
有关更多信息,请参见 [CA 重用和冲突](#ca-reusage-and-conflicts)。
|
||||
{{< /caution >}}
|
||||
|
@ -52,11 +51,11 @@ Unlike Custom Resource Definitions (CRDs), the Aggregation API involves another
|
|||
|
||||
This section describes how the authentication and authorization flows work, and how to configure them.
|
||||
-->
|
||||
## 身份认证流程
|
||||
## 身份认证流程 {#authentication-flow}
|
||||
|
||||
与自定义资源定义(CRD)不同,除标准的 Kubernetes apiserver 外,Aggregation API
|
||||
还涉及另一个服务器:扩展 apiserver。
|
||||
Kubernetes apiserver 将需要与你的扩展 apiserver 通信,并且你的扩展 apiserver
|
||||
Kubernetes apiserver 将需要与你的扩展 apiserver 通信,并且你的扩展 apiserver
|
||||
也需要与 Kubernetes apiserver 通信。
|
||||
为了确保此通信的安全,Kubernetes apiserver 使用 x509 证书向扩展 apiserver 认证。
|
||||
|
||||
|
@ -84,13 +83,15 @@ The rest of this section describes these steps in detail.
|
|||
|
||||
The flow can be seen in the following diagram.
|
||||
|
||||
.
|
||||
|
||||
The source for the above swimlanes can be found in the source of this document.
|
||||
-->
|
||||
本节的其余部分详细描述了这些步骤。
|
||||
|
||||
该流程可以在下图中看到。
|
||||
|
||||
.
|
||||

|
||||
|
||||
以上泳道的来源可以在本文档的源码中找到。
|
||||
|
||||
|
@ -222,7 +223,7 @@ Everything to this point has been standard Kubernetes API requests, authenticati
|
|||
|
||||
The Kubernetes apiserver now is prepared to send the request to the extension apiserver.
|
||||
-->
|
||||
### Kubernetes Apiserver 认证和授权
|
||||
### Kubernetes Apiserver 认证和授权 {#kubernetes-apiserver-authentication-and-authorization}
|
||||
|
||||
由扩展 apiserver 服务的对 API 路径的请求以与所有 API 请求相同的方式开始:
|
||||
与 Kubernetes apiserver 的通信。该路径已通过扩展 apiserver 在
|
||||
|
@ -231,10 +232,10 @@ Kubernetes apiserver 中注册。
|
|||
用户与 Kubernetes apiserver 通信,请求访问路径。
|
||||
Kubernetes apiserver 使用它的标准认证和授权配置来对用户认证,以及对特定路径的鉴权。
|
||||
|
||||
有关对 Kubernetes 集群认证的概述,请参见
|
||||
[对集群认证](/zh-cn/docs/reference/access-authn-authz/authentication/)。
|
||||
有关对Kubernetes集群资源的访问鉴权的概述,请参见
|
||||
[鉴权概述](/zh-cn/docs/reference/access-authn-authz/authorization/)。
|
||||
有关对 Kubernetes 集群认证的概述,
|
||||
请参见[对集群认证](/zh-cn/docs/reference/access-authn-authz/authentication/)。
|
||||
有关对 Kubernetes 集群资源的访问鉴权的概述,
|
||||
请参见[鉴权概述](/zh-cn/docs/reference/access-authn-authz/authorization/)。
|
||||
|
||||
到目前为止,所有内容都是标准的 Kubernetes API 请求,认证与鉴权。
|
||||
|
||||
|
@ -250,7 +251,7 @@ The Kubernetes apiserver now will send, or proxy, the request to the extension a
|
|||
|
||||
In order to provide for these two, you must configure the Kubernetes apiserver using several flags.
|
||||
-->
|
||||
### Kubernetes Apiserver 代理请求
|
||||
### Kubernetes Apiserver 代理请求 {#kubernetes-apiserver-proxies-the-request}
|
||||
|
||||
Kubernetes apiserver 现在将请求发送或代理到注册以处理该请求的扩展 apiserver。
|
||||
为此,它需要了解几件事:
|
||||
|
@ -258,8 +259,8 @@ Kubernetes apiserver 现在将请求发送或代理到注册以处理该请求
|
|||
1. Kubernetes apiserver 应该如何向扩展 apiserver 认证,以通知扩展
|
||||
apiserver 通过网络发出的请求来自有效的 Kubernetes apiserver?
|
||||
|
||||
2. Kubernetes apiserver 应该如何通知扩展 apiserver 原始请求
|
||||
已通过认证的用户名和组?
|
||||
2. Kubernetes apiserver 应该如何通知扩展 apiserver
|
||||
原始请求已通过认证的用户名和组?
|
||||
|
||||
为提供这两条信息,你必须使用若干标志来配置 Kubernetes apiserver。
|
||||
|
||||
|
@ -295,12 +296,12 @@ Kubernetes apiserver 将使用由 `--proxy-client-*-file` 指示的文件来向
|
|||
1. 连接必须使用由 CA 签署的客户端证书,该证书的证书位于 `--requestheader-client-ca-file` 中。
|
||||
2. 连接必须使用客户端证书,该客户端证书的 CN 是 `--requestheader-allowed-names` 中列出的证书之一。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
You can set this option to blank as `--requestheader-allowed-names=""`. This will indicate to an extension apiserver that _any_ CN is acceptable.
|
||||
-->
|
||||
{{< note >}}
|
||||
你可以将此选项设置为空白,即为 `--requestheader-allowed-names=""`。
|
||||
这将向扩展 apiserver 指示**任何** CN 是可接受的。
|
||||
这将向扩展 apiserver 指示**任何** CN 都是可接受的。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -314,8 +315,8 @@ Note that the same client certificate is used by the Kubernetes apiserver to aut
|
|||
使用这些选项启动时,Kubernetes apiserver 将:
|
||||
|
||||
1. 使用它们向扩展 apiserver 认证。
|
||||
2. 在 `kube-system` 命名空间中
|
||||
创建一个名为 `extension-apiserver-authentication` 的 ConfigMap,
|
||||
2. 在 `kube-system` 命名空间中创建一个名为
|
||||
`extension-apiserver-authentication` 的 ConfigMap,
|
||||
它将在其中放置 CA 证书和允许的 CN。
|
||||
反过来,扩展 apiserver 可以检索这些内容以验证请求。
|
||||
|
||||
|
@ -341,9 +342,9 @@ These header names are also placed in the `extension-apiserver-authentication` c
|
|||
它在其代理请求的 HTTP 头部中提供这些。你必须将要使用的标头名称告知
|
||||
Kubernetes apiserver。
|
||||
|
||||
* 通过`--requestheader-username-headers` 标明用来保存用户名的头部
|
||||
* 通过`--requestheader-group-headers` 标明用来保存 group 的头部
|
||||
* 通过`--requestheader-extra-headers-prefix` 标明用来保存拓展信息前缀的头部
|
||||
* 通过 `--requestheader-username-headers` 标明用来保存用户名的头部
|
||||
* 通过 `--requestheader-group-headers` 标明用来保存 group 的头部
|
||||
* 通过 `--requestheader-extra-headers-prefix` 标明用来保存拓展信息前缀的头部
|
||||
|
||||
这些头部名称也放置在 `extension-apiserver-authentication` ConfigMap 中,
|
||||
因此扩展 apiserver 可以检索和使用它们。
|
||||
|
@ -363,13 +364,13 @@ The extension apiserver, upon receiving a proxied request from the Kubernetes ap
|
|||
* Has a CN in the list of allowed CNs, unless the list is blank, in which case all CNs are allowed.
|
||||
* Extract the username and group from the appropriate headers
|
||||
-->
|
||||
### 扩展 Apiserver 认证
|
||||
### 扩展 Apiserver 认证请求 {#extension-apiserver-authenticates-the-request}
|
||||
|
||||
扩展 apiserver 在收到来自 Kubernetes apiserver 的代理请求后,
|
||||
必须验证该请求确实来自有效的身份验证代理,
|
||||
该认证代理由 Kubernetes apiserver 履行。扩展 apiserver 通过以下方式对其认证:
|
||||
|
||||
1. 如上所述,从`kube-system`中的 configmap 中检索以下内容:
|
||||
1. 如上所述,从 `kube-system` 中的 ConfigMap 中检索以下内容:
|
||||
|
||||
* 客户端 CA 证书
|
||||
* 允许名称(CN)列表
|
||||
|
@ -379,7 +380,7 @@ The extension apiserver, upon receiving a proxied request from the Kubernetes ap
|
|||
|
||||
* 由其证书与检索到的 CA 证书匹配的 CA 签名。
|
||||
* 在允许的 CN 列表中有一个 CN,除非列表为空,在这种情况下允许所有 CN。
|
||||
* 从适当的头部中提取用户名和组
|
||||
* 从适当的头部中提取用户名和组。
|
||||
|
||||
<!--
|
||||
If the above passes, then the request is a valid proxied request from a legitimate authenticating proxy, in this case the Kubernetes apiserver.
|
||||
|
@ -406,7 +407,7 @@ The extension apiserver now can validate that the user/group retrieved from the
|
|||
|
||||
In order for the extension apiserver to be authorized itself to submit the `SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions. Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that has the appropriate permissions. It can be granted to the extension apiserver's service account.
|
||||
-->
|
||||
### 扩展 Apiserver 对请求鉴权
|
||||
### 扩展 Apiserver 对请求鉴权 {#extensions-apiserver-authorizes-the-request}
|
||||
|
||||
扩展 apiserver 现在可以验证从标头检索的 `user/group` 是否有权执行给定请求。
|
||||
通过向 Kubernetes apiserver 发送标准
|
||||
|
@ -426,7 +427,7 @@ If the `SubjectAccessReview` passes, the extension apiserver executes the reques
|
|||
|
||||
Enable the aggregation layer via the following `kube-apiserver` flags. They may have already been taken care of by your provider.
|
||||
-->
|
||||
### 扩展 Apiserver 执行
|
||||
### 扩展 Apiserver 执行 {#enable-kubernetes-apiserver-flags}
|
||||
|
||||
如果 `SubjectAccessReview` 通过,则扩展 apiserver 执行请求。
|
||||
|
||||
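The flags referred to above look like this in sketch form (the file paths are illustrative, and the header names follow the common kubeadm convention rather than being mandatory):

```shell
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
```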
|
@ -450,7 +451,7 @@ Enable the aggregation layer via the following `kube-apiserver` flags. They may
|
|||
|
||||
The Kubernetes apiserver has two client CA options:
|
||||
-->
|
||||
### CA-重用和冲突 {#ca-reusage-and-conflicts}
|
||||
### CA 重用和冲突 {#ca-reusage-and-conflicts}
|
||||
|
||||
Kubernetes apiserver 有两个客户端 CA 选项:
|
||||
|
||||
|
@ -468,7 +469,7 @@ Each of these functions independently and can conflict with each other, if not u
|
|||
* `--client-ca-file`:当请求到达 Kubernetes apiserver 时,如果启用了此选项,
|
||||
则 Kubernetes apiserver 会检查请求的证书。
|
||||
如果它是由 `--client-ca-file` 引用的文件中的 CA 证书之一签名的,
|
||||
并且用户是公用名`CN=`的值,而组是组织`O=` 的取值,则该请求被视为合法请求。
|
||||
并且用户是公用名 `CN=` 的值,而组是组织 `O=` 的取值,则该请求被视为合法请求。
|
||||
请参阅[关于 TLS 身份验证的文档](/zh-cn/docs/reference/access-authn-authz/authentication/#x509-client-certs)。
|
||||
|
||||
* `--requestheader-client-ca-file`:当请求到达 Kubernetes apiserver 时,
|
||||
|
@ -485,7 +486,7 @@ If _both_ `--client-ca-file` and `--requestheader-client-ca-file` are provided,
|
|||
For this reason, use different CA certs for the `--client-ca-file` option - to authorize control plane components and end-users - and the `--requestheader-client-ca-file` option - to authorize aggregation apiserver requests.
|
||||
-->
|
||||
如果同时提供了 `--client-ca-file` 和 `--requestheader-client-ca-file`,
|
||||
则首先检查 `--requestheader-client-ca-file` CA,然后再检查`--client-ca-file`。
|
||||
则首先检查 `--requestheader-client-ca-file` CA,然后再检查 `--client-ca-file`。
|
||||
通常,这些选项中的每一个都使用不同的 CA(根 CA 或中间 CA)。
|
||||
常规客户端请求与 `--client-ca-file` 相匹配,而聚合请求要与
|
||||
`--requestheader-client-ca-file` 相匹配。
|
||||
|
@ -500,11 +501,11 @@ apiserver 认证。
|
|||
用于聚合 apiserver 鉴权的 `--requestheader-client-ca-file` 选项使用
|
||||
不同的 CA 证书。
|
||||
|
||||
{{< warning >}}
|
||||
<!--
|
||||
Do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage.
|
||||
-->
|
||||
{{< warning >}}
|
||||
除非你了解风险和保护 CA 用法的机制,否则 *不要* 重用在不同上下文中使用的 CA。
|
||||
除非你了解风险和保护 CA 用法的机制,否则 **不要** 重用在不同上下文中使用的 CA。
|
||||
{{< /warning >}}
|
||||
|
||||
<!--
|
||||
|
@ -523,7 +524,7 @@ If you are not running kube-proxy on a host running the API server, then you mus
|
|||
You can dynamically configure what client requests are proxied to extension
|
||||
apiserver. The following is an example registration:
|
||||
-->
|
||||
### 注册 APIService 对象
|
||||
### 注册 APIService 对象 {#register-apiservice-objects}
|
||||
|
||||
你可以动态配置将哪些客户端请求代理到扩展 apiserver。以下是注册示例:
|
||||
|
||||
|
@ -547,8 +548,8 @@ spec:
|
|||
The name of an APIService object must be a valid
|
||||
[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
|
||||
-->
|
||||
APIService 对象的名称必须是合法的
|
||||
[路径片段名称](/zh-cn/docs/concepts/overview/working-with-objects/names#path-segment-names)。
|
||||
APIService
|
||||
对象的名称必须是合法的[路径片段名称](/zh-cn/docs/concepts/overview/working-with-objects/names#path-segment-names)。
|
||||
|
||||
<!--
|
||||
#### Contacting the extension apiserver
|
||||
|
@ -572,10 +573,9 @@ and to verify the TLS connection against the ServerName
|
|||
服务的名字空间和名字是必需的。端口是可选的,默认为 443。
|
||||
|
||||
下面是一个扩展 apiserver 的配置示例,它被配置为在端口 `1234` 上调用。
|
||||
并针对 ServerName
|
||||
`my-service-name.my-service-namespace.svc`
|
||||
使用自定义的 CA 包来验证 TLS 连接
|
||||
使用自定义 CA 捆绑包的 `my-service-name.my-service-namespace.svc`。
|
||||
并针对 ServerName `my-service-name.my-service-namespace.svc` 使用自定义的 CA 证书包来验证 TLS 连接。
|
||||
|
||||
```yaml
|
||||
apiVersion: apiregistration.k8s.io/v1
|
||||
|
|
|
@ -590,7 +590,7 @@ The example server is organized in a way to be reused for other conversions.
|
|||
Most of the common code are located in the
|
||||
[framework file](https://github.com/kubernetes/kubernetes/tree/v1.25.3/test/images/agnhost/crd-conversion-webhook/converter/framework.go)
|
||||
that leaves only
|
||||
[one function](https://github.com/kubernetes/kubernetes/blob/v1.25.3/test/images/crd-conversion-webhook/converter/example_converter.go#L29-L80)
|
||||
[one function](https://github.com/kubernetes/kubernetes/tree/v1.25.3/test/images/agnhost/crd-conversion-webhook/converter/example_converter.go#L29-L80)
|
||||
to be implemented for different conversions.
|
||||
-->
|
||||
### 编写一个转换 Webhook 服务器 {#write-a-conversion-webhook-server}
|
||||
|
|
|
@ -31,7 +31,7 @@ plane hosts. If you do not already have a cluster, you can create one by using
|
|||
|
||||
The following steps require an egress configuration, for example:
|
||||
-->
|
||||
## 配置 Konnectivity 服务
|
||||
## 配置 Konnectivity 服务 {#configure-the-konnectivity-service}
|
||||
|
||||
接下来的步骤需要出口配置,比如:
|
||||
|
||||
|
@ -48,29 +48,16 @@ feature enabled in your cluster. It is enabled by default since Kubernetes v1.20
|
|||
1. Set the `--egress-selector-config-file` flag of the API Server to the path of
|
||||
your API Server egress configuration file.
|
||||
1. If you use UDS connection, add volumes config to the kube-apiserver:
|
||||
```yaml
|
||||
spec:
|
||||
containers:
|
||||
volumeMounts:
|
||||
- name: konnectivity-uds
|
||||
mountPath: /etc/kubernetes/konnectivity-server
|
||||
readOnly: false
|
||||
volumes:
|
||||
- name: konnectivity-uds
|
||||
hostPath:
|
||||
path: /etc/kubernetes/konnectivity-server
|
||||
type: DirectoryOrCreate
|
||||
```
|
||||
-->
|
||||
你需要配置 API 服务器来使用 Konnectivity 服务,并将网络流量定向到集群节点:
|
||||
|
||||
确保[服务账号令牌卷投射](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection)
|
||||
特性被启用。该特性自 Kubernetes v1.20 起默认已被启用。
|
||||
|
||||
1. 确保[服务账号令牌卷投射](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection)特性被启用。
|
||||
该特性自 Kubernetes v1.20 起默认已被启用。
|
||||
1. 创建一个出站流量配置文件,比如 `admin/konnectivity/egress-selector-configuration.yaml`。
|
||||
1. 将 API 服务器的 `--egress-selector-config-file` 参数设置为你的 API 服务器的
|
||||
离站流量配置文件路径。
|
||||
1. 将 API 服务器的 `--egress-selector-config-file` 参数设置为你的 API
|
||||
服务器的离站流量配置文件路径。
|
||||
1. 如果你在使用 UDS 连接,须将卷配置添加到 kube-apiserver:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
containers:
|
||||
|
@ -92,11 +79,10 @@ using the cluster CA certificate `/etc/kubernetes/pki/ca.crt` from a control-pla
|
|||
-->
|
||||
为 konnectivity-server 生成或者取得证书和 kubeconfig 文件。
|
||||
例如,你可以使用 OpenSSL 命令行工具,基于存放在某控制面主机上
|
||||
`/etc/kubernetes/pki/ca.crt` 文件中的集群 CA 证书来
|
||||
发放一个 X.509 证书,
|
||||
`/etc/kubernetes/pki/ca.crt` 文件中的集群 CA 证书来发放一个 X.509 证书。
|
||||
|
||||
```bash
|
||||
openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key -out konnectivity.csr
|
||||
openssl req -subj "/CN=system:konnectivity-server" -new -newkey rsa:2048 -nodes -out konnectivity.csr -keyout konnectivity.key
|
||||
openssl x509 -req -in konnectivity.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out konnectivity.crt -days 375 -sha256
|
||||
SERVER=$(kubectl config view -o jsonpath='{.clusters..server}')
|
||||
kubectl --kubeconfig /etc/kubernetes/konnectivity-server.conf config set-credentials system:konnectivity-server --client-certificate konnectivity.crt --client-key konnectivity.key --embed-certs=true
|
||||
|
|
|
@ -15,8 +15,8 @@ This page shows how a Pod can use environment variables to expose information
|
|||
about itself to containers running in the Pod, using the _downward API_.
|
||||
You can use environment variables to expose Pod fields, container fields, or both.
|
||||
-->
|
||||
此页面展示 Pod 如何使用环境变量把自身的信息呈现给 Pod 中运行的容器。
|
||||
使用 **downward API** 你可以使用环境变量来呈现 Pod 的字段、容器字段或两者。
|
||||
此页面展示 Pod 如何使用 **downward API** 通过环境变量把自身的信息呈现给 Pod 中运行的容器。
|
||||
你可以使用环境变量来呈现 Pod 的字段、容器字段或两者。
|
||||
|
||||
<!--
|
||||
In Kubernetes, there are two ways to expose Pod and container fields to a running container:
|
||||
|
@ -27,13 +27,12 @@ In Kubernetes, there are two ways to expose Pod and container fields to a runnin
|
|||
Together, these two ways of exposing Pod and container fields are called the
|
||||
downward API.
|
||||
-->
|
||||
在 Kubernetes 中有两种方式可以将 Pod 和 容器字段呈现给运行中的容器:
|
||||
在 Kubernetes 中有两种方式可以将 Pod 和容器字段呈现给运行中的容器:
|
||||
|
||||
* 如本任务所述的**环境变量**。
|
||||
* 如本任务所述的**环境变量**
|
||||
* [卷文件](/zh-cn/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)
|
||||
|
||||
这两种呈现 Pod 和 容器字段的方式统称为 downward API。
|
||||
|
||||
这两种呈现 Pod 和容器字段的方式统称为 downward API。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
@ -47,9 +46,9 @@ downward API.
|
|||
In this part of exercise, you create a Pod that has one container, and you
|
||||
project Pod-level fields into the running container as environment variables.
|
||||
-->
|
||||
## 用 Pod 字段作为环境变量的值
|
||||
## 用 Pod 字段作为环境变量的值 {#use-pod-fields-as-values-for-env-var}
|
||||
|
||||
在这部分练习中,你将创建一个包含一个容器的 Pod。并将 Pod 级别的字段作为环境变量映射到正在运行的容器中。
|
||||
在这部分练习中,你将创建一个包含一个容器的 Pod,并将 Pod 级别的字段作为环境变量投射到正在运行的容器中。
|
||||
|
||||
{{< codenew file="pods/inject/dapi-envars-pod.yaml" >}}
|
||||
|
||||
|
@ -62,22 +61,21 @@ variable gets its value from the Pod's `spec.nodeName` field. Similarly, the
|
|||
other environment variables get their names from Pod fields.
|
||||
-->
|
||||
这个清单中,你可以看到五个环境变量。`env` 字段定义了一组环境变量。
|
||||
|
||||
数组中第一个元素指定 `MY_NODE_NAME` 这个环境变量从 Pod 的 `spec.nodeName` 字段获取变量值。
|
||||
同样,其它环境变量也是从 Pod 的字段获取它们的变量值。
|
||||
|
||||
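The pattern described above looks like this (a sketch; the remaining variables follow the same `fieldRef` shape):

```yaml
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName   # the Pod-level field named in the text
```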
{{< note >}}
|
||||
<!--
|
||||
The fields in this example are Pod fields. They are not fields of the
|
||||
container in the Pod.
|
||||
-->
|
||||
{{< note >}}
|
||||
本示例中的字段是 Pod 字段,不是 Pod 中 Container 的字段。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
Create the Pod:
|
||||
-->
|
||||
创建Pod:
|
||||
创建 Pod:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-pod.yaml
|
||||
|
@ -125,7 +123,7 @@ Next, get a shell into the container that is running in your Pod:
|
|||
要了解为什么这些值出现在日志中,请查看配置文件中的 `command` 和 `args` 字段。
|
||||
当容器启动时,它将五个环境变量的值写入标准输出。每十秒重复执行一次。
|
||||
|
||||
接下来,通过打开一个 Shell 进入 Pod 中运行的容器:
|
||||
接下来,进入 Pod 中运行的容器,打开一个 Shell:
|
||||
|
||||
```shell
|
||||
kubectl exec -it dapi-envars-fieldref -- sh
|
||||
|
@ -169,9 +167,8 @@ definition, but taken from the specific
|
|||
rather than from the Pod overall.
|
||||
|
||||
Here is a manifest for another Pod that again has just one container:
|
||||
|
||||
-->
|
||||
## 使用容器字段作为环境变量的值
|
||||
## 使用容器字段作为环境变量的值 {#use-container-fields-as-value-for-env-var}
|
||||
|
||||
前面的练习中,你将 Pod 级别的字段作为环境变量的值。
|
||||
接下来这个练习中,你将传递属于 Pod 定义的字段,但这些字段取自特定容器而不是整个 Pod。
|
||||
|
@ -195,7 +192,7 @@ Create the Pod:
|
|||
数组中第一个元素指定 `MY_CPU_REQUEST` 这个环境变量从容器的 `requests.cpu`
|
||||
字段获取变量值。同样,其它的环境变量也是从特定于这个容器的字段中获取它们的变量值。
|
||||
|
||||
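In sketch form (the container name is illustrative and must match a container defined in the same Pod):

```yaml
env:
  - name: MY_CPU_REQUEST
    valueFrom:
      resourceFieldRef:
        containerName: test-container   # illustrative name
        resource: requests.cpu          # the container-level field named in the text
```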
创建Pod:
|
||||
创建 Pod:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/pods/inject/dapi-envars-container.yaml
|
||||
|
@ -244,7 +241,7 @@ The output shows the values of selected environment variables:
|
|||
* 阅读[给容器定义环境变量](/zh-cn/docs/tasks/inject-data-application/define-environment-variable-container/)
|
||||
* 阅读 Pod 的 [`spec`](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
|
||||
API 包括容器(Pod 的一部分)的定义。
|
||||
* 阅读可以使用 downward API 公开的[可用字段](/zh-cn/docs/concepts/workloads/pods/downward-api/#available-fields)列表。
|
||||
* 阅读可以使用 downward API 呈现的[可用字段](/zh-cn/docs/concepts/workloads/pods/downward-api/#available-fields)列表。
|
||||
|
||||
<!--
|
||||
Read about Pods, containers and environment variables in the legacy API reference:
|
||||
|
|
|
@ -1798,6 +1798,55 @@ Service:
|
|||
```shell
|
||||
kubectl delete svc nginx
|
||||
```
|
||||
<!--
|
||||
+Delete the persistent storage media for the PersistentVolumes used in this tutorial.
|
||||
-->
|
||||
|
||||
删除本教程中用到的 PersistentVolume 卷的持久化存储介质。
|
||||
|
||||
+```shell
|
||||
+kubectl get pvc
|
||||
+```
|
||||
+```
|
||||
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
|
||||
+www-web-0 Bound pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO standard 25m
|
||||
+www-web-1 Bound pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO standard 24m
|
||||
+www-web-2 Bound pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO standard 15m
|
||||
+www-web-3 Bound pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO standard 15m
|
||||
+www-web-4 Bound pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO standard 14m
|
||||
+```
|
||||
+
|
||||
+```shell
|
||||
+kubectl get pv
|
||||
+```
|
||||
+```
|
||||
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
|
||||
+pvc-0c04d7f0-787a-4977-8da3-d9d3a6d8d752 1Gi RWO Delete Bound default/www-web-3 standard 15m
|
||||
+pvc-2bf00408-d366-4a12-bad0-1869c65d0bee 1Gi RWO Delete Bound default/www-web-0 standard 25m
|
||||
+pvc-b2c73489-e70b-4a4e-9ec1-9eab439aa43e 1Gi RWO Delete Bound default/www-web-4 standard 14m
|
||||
+pvc-ba3bfe9c-413e-4b95-a2c0-3ea8a54dbab4 1Gi RWO Delete Bound default/www-web-1 standard 24m
|
||||
+pvc-cba6cfa6-3a47-486b-a138-db5930207eaf 1Gi RWO Delete Bound default/www-web-2 standard 15m
|
||||
+```
|
||||
+
|
||||
+```shell
|
||||
+kubectl delete pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4
|
||||
+```
|
||||
+
|
||||
+```
|
||||
+persistentvolumeclaim "www-web-0" deleted
|
||||
+persistentvolumeclaim "www-web-1" deleted
|
||||
+persistentvolumeclaim "www-web-2" deleted
|
||||
+persistentvolumeclaim "www-web-3" deleted
|
||||
+persistentvolumeclaim "www-web-4" deleted
|
||||
+```
|
||||
+
|
||||
+```shell
|
||||
+kubectl get pvc
|
||||
+```
|
||||
+
|
||||
+```
|
||||
+No resources found in default namespace.
|
||||
+```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
|
|
|
@ -0,0 +1,38 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: configmap-demo-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: demo
|
||||
image: alpine
|
||||
command: ["sleep", "3600"]
|
||||
env:
|
||||
# 定义环境变量
|
||||
- name: PLAYER_INITIAL_LIVES # 请注意这里和 ConfigMap 中的键名是不一样的
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: game-demo # 这个值来自 ConfigMap
|
||||
key: player_initial_lives # 需要取值的键
|
||||
- name: UI_PROPERTIES_FILE_NAME
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: game-demo
|
||||
key: ui_properties_file_name
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: "/config"
|
||||
readOnly: true
|
||||
volumes:
|
||||
# 你可以在 Pod 级别设置卷,然后将其挂载到 Pod 内的容器中
|
||||
- name: config
|
||||
configMap:
|
||||
# 提供你想要挂载的 ConfigMap 的名字
|
||||
name: game-demo
|
||||
# 来自 ConfigMap 的一组键,将被创建为文件
|
||||
items:
|
||||
- key: "game.properties"
|
||||
path: "game.properties"
|
||||
- key: "user-interface.properties"
|
||||
path: "user-interface.properties"
|
||||
|