Merge remote-tracking branch 'upstream/main' into dev-1.24

pull/32694/head
Nate W 2022-03-31 15:18:13 -07:00
commit f85be125b9
68 changed files with 2463 additions and 1549 deletions

View File

@@ -18,7 +18,7 @@ The paper attempts to _not_ focus on any specific [cloud native project](https:/
 ## Kubernetes native security controls
 When using Kubernetes as a workload orchestrator, some of the security controls this version of the whitepaper recommends are:
-* [Pod Security Policies](/docs/concepts/policy/pod-security-policy/): Implement a single source of truth for “least privilege” workloads across the entire cluster
+* [Pod Security Policies](/docs/concepts/security/pod-security-policy/): Implement a single source of truth for “least privilege” workloads across the entire cluster
 * [Resource requests and limits](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits): Apply requests (soft constraint) and limits (hard constraint) for shared resources such as memory and CPU
 * [Audit log analysis](/docs/tasks/debug-application-cluster/audit/): Enable Kubernetes API auditing and filtering for security relevant events
 * [Control plane authentication and certificate root of trust](/docs/concepts/architecture/control-plane-node-communication/): Enable mutual TLS authentication with a trusted CA for communication within the cluster
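As an editorial aside (not part of the commit's diff), the audit-log control in the list above is typically wired up through `kube-apiserver` flags; a hedged sketch, with placeholder paths:

```shell
# Sketch: enabling Kubernetes API auditing. The policy file decides which
# events are recorded and at what detail level; paths are examples only.
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=30
```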

View File

@@ -21,7 +21,7 @@ Until then, PSP is still PSP. There will be at least a year during which the new
 ## What is PodSecurityPolicy?
-[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) is a built-in [admission controller](/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/) that allows a cluster administrator to control security-sensitive aspects of the Pod specification.
+[PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) is a built-in [admission controller](/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/) that allows a cluster administrator to control security-sensitive aspects of the Pod specification.
 First, one or more PodSecurityPolicy resources are created in a cluster to define the requirements Pods must meet. Then, RBAC rules are created to control which PodSecurityPolicy applies to a given pod. If a pod meets the requirements of its PSP, it will be admitted to the cluster as usual. In some cases, PSP can also modify Pod fields, effectively creating new defaults for those fields. If a Pod does not meet the PSP requirements, it is rejected, and cannot run.
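As an aside (not part of the diff), the two-step flow just described can be sketched with a minimal policy plus the RBAC rule that grants `use` of it; all names below are illustrative:

```shell
# Sketch, assuming the policy/v1beta1 PSP API (removed in v1.25) and
# illustrative resource names.
kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir"]
---
# RBAC decides which Pods this PSP applies to, via the "use" verb.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-example-restricted
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["example-restricted"]
  verbs: ["use"]
EOF
```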

View File

@@ -175,7 +175,7 @@ require internet access to update the vulnerability database.
 ### Pod Security Policies
-Since Kubernetes v1.21, the [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/)
+Since Kubernetes v1.21, the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/)
 API and related features are [deprecated](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),
 but some of the guidance in this section will still apply for the next few years, until cluster operators
 upgrade their clusters to newer Kubernetes versions.

View File

@@ -18,7 +18,7 @@ One of the key security principles for running containers in Kubernetes is the
 principle of least privilege. The Pod/container `securityContext` specifies the config
 options to set, e.g., Linux capabilities, MAC policies, and user/group ID values to achieve this.
-Furthermore, the cluster admins are supported with tools like [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) (deprecated) or
+Furthermore, the cluster admins are supported with tools like [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) (deprecated) or
 [Pod Security Admission](/docs/concepts/security/pod-security-admission/) (alpha) to enforce the desired security settings for pods that are being deployed in
 the cluster. These settings could, for instance, require that containers must be `runAsNonRoot` or
 that they are forbidden from running with root's group ID in `runAsGroup` or `supplementalGroups`.
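An aside, not part of the diff: a minimal Pod spec satisfying such constraints might look like the following (the image, IDs, and names are assumptions):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-demo   # illustrative name
spec:
  securityContext:
    runAsNonRoot: true   # reject containers that would run as UID 0
    runAsUser: 1000
    runAsGroup: 3000     # primary group ID; not root's GID 0
  containers:
  - name: app
    image: busybox:1.35
    command: ["sleep", "3600"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]    # drop all Linux capabilities
EOF
```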

View File

@@ -9,7 +9,7 @@ slug: pod-security-admission-beta
 With the release of Kubernetes v1.23, [Pod Security admission](/docs/concepts/security/pod-security-admission/) has now entered beta. Pod Security is a [built-in](/docs/reference/access-authn-authz/admission-controllers/) admission controller that evaluates pod specifications against a predefined set of [Pod Security Standards](/docs/concepts/security/pod-security-standards/) and determines whether to `admit` or `deny` the pod from running.
-Pod Security is the successor to [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) which was deprecated in the v1.21 release, and will be removed in Kubernetes v1.25. In this article, we cover the key concepts of Pod Security along with how to use it. We hope that cluster administrators and developers alike will use this new mechanism to enforce secure defaults for their workloads.
+Pod Security is the successor to [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) which was deprecated in the v1.21 release, and will be removed in Kubernetes v1.25. In this article, we cover the key concepts of Pod Security along with how to use it. We hope that cluster administrators and developers alike will use this new mechanism to enforce secure defaults for their workloads.
 ## Why Pod Security

View File

@@ -14,7 +14,7 @@ Way back in December of 2020, Kubernetes announced the [deprecation of Dockershi
 If you are rolling your own cluster or are otherwise unsure whether or not this removal affects you, stay on the safe side and [check to see if you have any dependencies on Docker Engine](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/). Please note that using Docker Desktop to build your application containers is not a Docker dependency for your cluster. Container images created by Docker are compliant with the [Open Container Initiative (OCI)](https://opencontainers.org/), a Linux Foundation governance structure that defines industry standards around container formats and runtimes. They will work just fine on any container runtime supported by Kubernetes.
 If you are using a managed Kubernetes service from a cloud provider, and you haven't explicitly changed the container runtime, there may be nothing else for you to do. Amazon EKS, Azure AKS, and Google GKE all default to containerd now, though you should make sure they do not need updating if you have any node customizations. To check the runtime of your nodes, follow [Find Out What Container Runtime is Used on a Node](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/).
 Regardless of whether you are rolling your own cluster or using a managed Kubernetes service from a cloud provider, you may need to [migrate telemetry or security agents that rely on Docker Engine](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/).

View File

@@ -13,7 +13,7 @@ allows the clean up of resources like the following:
 * [Objects without owner references](#owners-dependents)
 * [Unused containers and container images](#containers-images)
 * [Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete](/docs/concepts/storage/persistent-volumes/#delete)
-* [Stale or expired CertificateSigningRequests (CSRs)](/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
+* [Stale or expired CertificateSigningRequests (CSRs)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
 * {{<glossary_tooltip text="Nodes" term_id="node">}} deleted in the following scenarios:
   * On a cloud when the cluster uses a [cloud controller manager](/docs/concepts/architecture/cloud-controller/)
   * On-premises when the cluster uses an addon similar to a cloud controller

View File

@@ -134,7 +134,7 @@ Mi, Ki. For example, the following represent roughly the same value:
 128974848, 129e6, 129M, 128974848000m, 123Mi
 ```
-Take care about case for suffixes. If you request `400m` of memory, this is a request
+Pay attention to the case of the suffixes. If you request `400m` of memory, this is a request
 for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`)
 or 400 megabytes (`400M`).
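The difference is easy to check with ordinary shell arithmetic (this aside is not part of the diff):

```shell
# "Mi" is a power-of-two suffix (2^20), "M" is decimal (10^6),
# and a lowercase "m" means milli, i.e. thousandths of a byte.
echo "400Mi = $(( 400 * 1024 * 1024 )) bytes"
echo "400M  = $(( 400 * 1000 * 1000 )) bytes"
echo "400m  = $(awk 'BEGIN { print 400 / 1000 }') bytes"
```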

View File

@@ -46,7 +46,7 @@ Customization approaches can be broadly divided into *configuration*, which only
 Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.
-*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
+*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
 ## Extensions

View File

@@ -348,6 +348,7 @@ Here are some examples of device plugin implementations:
 * The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for hardware-assisted virtualization
 * The [NVIDIA GPU device plugin for Container-Optimized OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
 * The [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin)
+* The [SocketCAN device plugin](https://github.com/collabora/k8s-socketcan)
 * The [Solarflare device plugin](https://github.com/vikaschoudhary16/sfc-device-plugin)
 * The [SR-IOV Network device plugin](https://github.com/intel/sriov-network-device-plugin)
 * The [Xilinx FPGA device plugins](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin) for Xilinx FPGA devices

View File

@@ -129,6 +129,12 @@ The available Admission Control modules are described in [Admission Controllers]
 Once a request passes all admission controllers, it is validated using the validation routines
 for the corresponding API object, and then written to the object store (shown as step **4**).
+## Auditing
+Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster.
+The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.
+For more information, see [Auditing](/docs/tasks/debug-application-cluster/audit/).
## API server ports and IPs

View File

@@ -21,7 +21,7 @@ behavior of pods in a clear, consistent fashion.
 As a Beta feature, Kubernetes offers a built-in _Pod Security_ {{< glossary_tooltip
 text="admission controller" term_id="admission-controller" >}}, the successor
-to [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/). Pod security restrictions
+to [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/). Pod security restrictions
 are applied at the {{< glossary_tooltip text="namespace" term_id="namespace" >}} level when pods
 are created.
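Because enforcement is per namespace, configuration happens through namespace labels; a hedged sketch (the namespace name and chosen levels are examples, not from the commit):

```shell
# Enforce the "baseline" Pod Security Standard in one namespace and
# warn when a pod would fail the stricter "restricted" level.
kubectl label namespace my-namespace \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted
```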
@@ -55,7 +55,7 @@ are available at [https://git.k8s.io/pod-security-admission/webhook](https://git
 To install:
 ```shell
-git clone git@github.com:kubernetes/pod-security-admission.git
+git clone https://github.com/kubernetes/pod-security-admission.git
 cd pod-security-admission/webhook
 make certs
 kubectl apply -k .
 ```

View File

@@ -452,7 +452,7 @@ of individual policies are not defined here.
 - {{< example file="security/podsecurity-baseline.yaml" >}}Baseline namespace{{< /example >}}
 - {{< example file="security/podsecurity-restricted.yaml" >}}Restricted namespace{{< /example >}}
-[**PodSecurityPolicy**](/docs/concepts/policy/pod-security-policy/) (Deprecated)
+[**PodSecurityPolicy**](/docs/concepts/security/pod-security-policy/) (Deprecated)
 - {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}}
 - {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}}

View File

@@ -281,7 +281,7 @@ the policy will be applied only for the single `port` field.
 ## Targeting a Namespace by its name
-{{< feature-state state="beta" for_k8s_version="1.21" >}}
+{{< feature-state for_k8s_version="1.22" state="stable" >}}
 The Kubernetes control plane sets an immutable label `kubernetes.io/metadata.name` on all
 namespaces, provided that the `NamespaceDefaultLabelName`

View File

@@ -126,10 +126,10 @@ standardized. See the documentation of each CSI driver for further
 instructions.
 ### CSI driver restrictions
 {{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
-As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) to control which CSI drivers can be used in a Pod, specified with the
+As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) to control which CSI drivers can be used in a Pod, specified with the
 [`allowedCSIDrivers` field](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicyspec-v1beta1-policy).
 {{< note >}}
@@ -250,13 +250,12 @@ PVCs indirectly if they can create Pods, even if they do not have
 permission to create PVCs directly. Cluster administrators must be
 aware of this. If this does not fit their security model, they have
 two choices:
-- Use a [Pod Security
-  Policy](/docs/concepts/policy/pod-security-policy/) where the
-  `volumes` list does not contain the `ephemeral` volume type
-  (deprecated in Kubernetes 1.21).
 - Use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
-  which rejects objects like Pods that have a generic ephemeral
+  that rejects objects like Pods that have a generic ephemeral
   volume.
+- Use a [Pod Security Policy](/docs/concepts/security/pod-security-policy/)
+  where the `volumes` list does not contain the `ephemeral` volume type
+  (deprecated since Kubernetes 1.21).
 The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota) still applies, so
 even if users are allowed to use this new mechanism, they cannot use

View File

@@ -54,8 +54,7 @@ into a Pod at a specified path. For example:
 The example Pod has a projected volume containing the injected service account
 token. Containers in this Pod can use that token to access the Kubernetes API
-server, authenticating with the identity of [the pod's ServiceAccount]
-(/docs/tasks/configure-pod-container/configure-service-account/).
+server, authenticating with the identity of [the pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/).
 The `audience` field contains the intended audience of the
 token. A recipient of the token must identify itself with an identifier specified
 in the audience of the token, and otherwise should reject the token. This field

View File

@@ -261,8 +261,7 @@ Within a Pod, containers share an IP address and port space, and
 can find each other via `localhost`. The containers in a Pod can also communicate
 with each other using standard inter-process communications like SystemV semaphores
 or POSIX shared memory. Containers in different Pods have distinct IP addresses
-and can not communicate by IPC without
-[special configuration](/docs/concepts/policy/pod-security-policy/).
+and can not communicate by OS-level IPC without special configuration.
 Containers that want to interact with a container running in a different Pod can
 use IP networking to communicate.

View File

@@ -712,7 +712,7 @@ for more information.
 This admission controller acts on creation and modification of the pod and determines if it should be admitted
 based on the requested security context and the available Pod Security Policies.
-See also [Pod Security Policy documentation](/docs/concepts/policy/pod-security-policy/)
+See also the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) documentation
 for more information.
 ### PodTolerationRestriction {#podtolerationrestriction}
@@ -773,14 +773,17 @@ for more information.
 ### SecurityContextDeny {#securitycontextdeny}
-This admission controller will deny any pod that attempts to set certain escalating
+This admission controller will deny any Pod that attempts to set certain escalating
 [SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
 fields, as shown in the
 [Configure a Security Context for a Pod or Container](/docs/tasks/configure-pod-container/security-context/)
 task.
-This should be enabled if a cluster doesn't utilize
-[pod security policies](/docs/concepts/policy/pod-security-policy/)
-to restrict the set of values a security context can take.
+If you don't use [Pod Security admission](/docs/concepts/security/pod-security-admission/),
+[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), nor any external enforcement mechanism,
+then you could use this admission controller to restrict the set of values a security context can take.
+See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for more context on restricting
+pod privileges.
 ### ServiceAccount {#serviceaccount}
### ServiceAccount {#serviceaccount}

View File

@@ -121,7 +121,7 @@ token,user,uid,"group1,group2,group3"
 When using bearer token authentication from an http client, the API
 server expects an `Authorization` header with a value of `Bearer
-THETOKEN`. The bearer token must be a character sequence that can be
+<token>`. The bearer token must be a character sequence that can be
 put in an HTTP header value using no more than the encoding and
 quoting facilities of HTTP. For example: if the bearer token is
 `31ada4fd-adec-460c-809a-9e56ceb75269` then it would appear in an HTTP
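As a small aside (not part of the diff), the header can be assembled and sent from a shell; the API server URL below is a placeholder:

```shell
# Build the Authorization header for the example token above.
TOKEN="31ada4fd-adec-460c-809a-9e56ceb75269"
AUTH_HEADER="Authorization: Bearer ${TOKEN}"
echo "${AUTH_HEADER}"
# It would then be sent with a request, e.g.:
#   curl -H "${AUTH_HEADER}" https://<api-server>/api
```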

View File

@@ -76,7 +76,7 @@ DELETE | delete (for individual resources), deletecollection (for collections
 Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example:
-* [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/)
+* [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/)
   * `use` verb on `podsecuritypolicies` resources in the `policy` API group.
 * [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping)
   * `bind` and `escalate` verbs on `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group.
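For instance (an illustrative aside; the policy name is a placeholder), a specialized permission such as `use` can be probed from the command line:

```shell
# Check whether the current identity may "use" a given PodSecurityPolicy.
kubectl auth can-i use podsecuritypolicy/example-psp
```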

View File

@@ -74,7 +74,7 @@ different Kubernetes components.
 | `CSIInlineVolume` | `true` | Beta | 1.16 | - |
 | `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
 | `CSIMigration` | `true` | Beta | 1.17 | |
-| `CSIMigrationAWS` | `false` | Alpha | 1.14 | |
+| `CSIMigrationAWS` | `false` | Alpha | 1.14 | 1.16 |
 | `CSIMigrationAWS` | `false` | Beta | 1.17 | 1.22 |
 | `CSIMigrationAWS` | `true` | Beta | 1.23 | |
 | `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | 1.19 |
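Feature gates like those in the table are toggled per component with the `--feature-gates` flag; a hedged sketch, with example gate values only (not part of the commit):

```shell
# Enable or disable specific feature gates on a component at startup.
kubelet --feature-gates=CSIMigrationAWS=true,GracefulNodeShutdown=true
```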
@@ -606,39 +606,47 @@ Each feature gate is designed for enabling/disabling a specific feature:
   See [AppArmor Tutorial](/docs/tutorials/security/apparmor/) for more details.
 - `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes
   that can be attached to a node.
-  See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits) for more details.
-- `BalanceAttachedNodeVolumes`: Include volume count on node to be considered for balanced resource allocation
-  while scheduling. A node which has closer CPU, memory utilization, and volume count is favored by the scheduler
-  while making decisions.
+  See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits)
+  for more details.
+- `BalanceAttachedNodeVolumes`: Include volume count on node to be considered for
+  balanced resource allocation while scheduling. A node which has closer CPU,
+  memory utilization, and volume count is favored by the scheduler while making decisions.
 - `BlockVolume`: Enable the definition and consumption of raw block devices in Pods.
   See [Raw Block Volume Support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)
   for more details.
-- `BoundServiceAccountTokenVolume`: Migrate ServiceAccount volumes to use a projected volume consisting of a
-  ServiceAccountTokenVolumeProjection. Cluster admins can use metric `serviceaccount_stale_tokens_total` to
-  monitor workloads that are depending on the extended tokens. If there are no such workloads, turn off
-  extended tokens by starting `kube-apiserver` with flag `--service-account-extend-token-expiration=false`.
+- `BoundServiceAccountTokenVolume`: Migrate ServiceAccount volumes to use a projected volume
+  consisting of a ServiceAccountTokenVolumeProjection. Cluster admins can use metric
+  `serviceaccount_stale_tokens_total` to monitor workloads that are depending on the extended
+  tokens. If there are no such workloads, turn off extended tokens by starting `kube-apiserver` with
+  flag `--service-account-extend-token-expiration=false`.
   Check [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
-for more details.
+  for more details.
 - `ControllerManagerLeaderMigration`: Enables Leader Migration for
   [kube-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) and
-  [cloud-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager) which allows a cluster operator to live migrate
+  [cloud-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager)
+  which allows a cluster operator to live migrate
   controllers from the kube-controller-manager into an external controller-manager
   (e.g. the cloud-controller-manager) in an HA cluster without downtime.
 - `CPUManager`: Enable container level CPU affinity support, see
   [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
-- `CPUManagerPolicyAlphaOptions`: This allows fine-tuning of CPUManager policies, experimental, Alpha-quality options
+- `CPUManagerPolicyAlphaOptions`: This allows fine-tuning of CPUManager policies,
+  experimental, Alpha-quality options
   This feature gate guards *a group* of CPUManager options whose quality level is alpha.
   This feature gate will never graduate to beta or stable.
-- `CPUManagerPolicyBetaOptions`: This allows fine-tuning of CPUManager policies, experimental, Beta-quality options
+- `CPUManagerPolicyBetaOptions`: This allows fine-tuning of CPUManager policies,
+  experimental, Beta-quality options
   This feature gate guards *a group* of CPUManager options whose quality level is beta.
   This feature gate will never graduate to stable.
 - `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies.
-- `CRIContainerLogRotation`: Enable container log rotation for CRI container runtime. The default max size of a log file is 10MB and the
-  default max number of log files allowed for a container is 5. These values can be configured in the kubelet config.
-  See the [logging at node level](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level) documentation for more details.
+- `CRIContainerLogRotation`: Enable container log rotation for CRI container runtime.
+  The default max size of a log file is 10MB and the default max number of
+  log files allowed for a container is 5.
+  These values can be configured in the kubelet config.
+  See [logging at node level](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
+  for more details.
 - `CSIBlockVolume`: Enable external CSI volume drivers to support block storage.
-  See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support)
-  documentation for more details.
+  See [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support)
+  for more details.
 - `CSIDriverRegistry`: Enable all logic related to the CSIDriver API object in
   csi.storage.k8s.io.
 - `CSIInlineVolume`: Enable CSI Inline volumes support for pods.
@@ -670,7 +678,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
   AzureDisk CSI plugin. Requires CSIMigration and CSIMigrationAzureDisk feature
   flags enabled and AzureDisk CSI plugin installed and configured on all nodes
   in the cluster. This flag has been deprecated in favor of the
-  `InTreePluginAzureDiskUnregister` feature flag which prevents the registration of in-tree AzureDisk plugin.
+  `InTreePluginAzureDiskUnregister` feature flag which prevents the registration
+  of in-tree AzureDisk plugin.
 - `CSIMigrationAzureFile`: Enables shims and translation logic to route volume
   operations from the Azure-File in-tree plugin to AzureFile CSI plugin.
   Supports falling back to in-tree AzureFile plugin for mount operations to
@@ -693,19 +702,13 @@ Each feature gate is designed for enabling/disabling a specific feature:
   Does not support falling back for provision operations, for those the CSI
   plugin must be installed and configured. Requires CSIMigration feature flag
   enabled.
-- `csiMigrationRBD`: Enables shims and translation logic to route volume
-  operations from the RBD in-tree plugin to Ceph RBD CSI plugin. Requires
-  CSIMigration and csiMigrationRBD feature flags enabled and Ceph CSI plugin
-  installed and configured in the cluster. This flag has been deprecated in
-  favor of the
-  `InTreePluginRBDUnregister` feature flag which prevents the registration of
-  in-tree RBD plugin.
 - `CSIMigrationGCEComplete`: Stops registering the GCE-PD in-tree plugin in
   kubelet and volume controllers and enables shims and translation logic to
   route volume operations from the GCE-PD in-tree plugin to PD CSI plugin.
   Requires CSIMigration and CSIMigrationGCE feature flags enabled and PD CSI
   plugin installed and configured on all nodes in the cluster. This flag has
-  been deprecated in favor of the `InTreePluginGCEUnregister` feature flag which prevents the registration of in-tree GCE PD plugin.
+  been deprecated in favor of the `InTreePluginGCEUnregister` feature flag which
+  prevents the registration of in-tree GCE PD plugin.
 - `CSIMigrationOpenStack`: Enables shims and translation logic to route volume
   operations from the Cinder in-tree plugin to Cinder CSI plugin. Supports
   falling back to in-tree Cinder plugin for mount operations to nodes that have
@@ -718,7 +721,14 @@ Each feature gate is designed for enabling/disabling a specific feature:
   volume operations from the Cinder in-tree plugin to Cinder CSI plugin.
   Requires CSIMigration and CSIMigrationOpenStack feature flags enabled and Cinder
   CSI plugin installed and configured on all nodes in the cluster. This flag has
-  been deprecated in favor of the `InTreePluginOpenStackUnregister` feature flag which prevents the registration of in-tree openstack cinder plugin.
+  been deprecated in favor of the `InTreePluginOpenStackUnregister` feature flag
+  which prevents the registration of in-tree openstack cinder plugin.
+- `csiMigrationRBD`: Enables shims and translation logic to route volume
+  operations from the RBD in-tree plugin to Ceph RBD CSI plugin. Requires
+  CSIMigration and csiMigrationRBD feature flags enabled and Ceph CSI plugin
+  installed and configured in the cluster. This flag has been deprecated in
+  favor of the `InTreePluginRBDUnregister` feature flag which prevents the registration of
+  in-tree RBD plugin.
 - `CSIMigrationvSphere`: Enables shims and translation logic to route volume operations
   from the vSphere in-tree plugin to vSphere CSI plugin. Supports falling back
   to in-tree vSphere plugin for mount operations to nodes that have the feature
@@ -731,11 +741,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
   from the vSphere in-tree plugin to vSphere CSI plugin. Requires CSIMigration and
   CSIMigrationvSphere feature flags enabled and vSphere CSI plugin installed and
   configured on all nodes in the cluster. This flag has been deprecated in favor
-  of the `InTreePluginvSphereUnregister` feature flag which prevents the registration of in-tree vsphere plugin.
+  of the `InTreePluginvSphereUnregister` feature flag which prevents the
+  registration of in-tree vsphere plugin.
 - `CSIMigrationPortworx`: Enables shims and translation logic to route volume operations
   from the Portworx in-tree plugin to Portworx CSI plugin.
-  Requires Portworx CSI driver to be installed and configured in the cluster, and feature gate set `CSIMigrationPortworx=true` in kube-controller-manager and kubelet configs.
-- `CSINodeInfo`: Enable all logic related to the CSINodeInfo API object in csi.storage.k8s.io.
+  Requires Portworx CSI driver to be installed and configured in the cluster.
+- `CSINodeInfo`: Enable all logic related to the CSINodeInfo API object in `csi.storage.k8s.io`.
 - `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a
   [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
   compatible volume plugin.
@@ -763,7 +774,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
   version 1 of the same controller is selected.
 - `CustomCPUCFSQuotaPeriod`: Enable nodes to change `cpuCFSQuotaPeriod` in
   [kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/).
-- `CustomResourceValidationExpressions`: Enable expression language validation in CRD which will validate customer resource based on validation rules written in `x-kubernetes-validations` extension.
+- `CustomResourceValidationExpressions`: Enable expression language validation in CRD
+  which will validate custom resources based on validation rules written in
+  the `x-kubernetes-validations` extension.
 - `CustomPodDNS`: Enable customizing the DNS settings for a Pod using its `dnsConfig` property.
   Check [Pod's DNS Config](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config)
   for more details.
@ -847,7 +860,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
host mounts, or containers that are privileged or using specific non-namespaced
capabilities (e.g. `MKNOD`, `SYS_MODULE`, etc.). This should only be enabled
if user namespace remapping is enabled in the Docker daemon.
- `ExternalPolicyForExternalIP`: Fix a bug where `ExternalTrafficPolicy` is not
  applied to Service ExternalIPs.
- `GCERegionalPersistentDisk`: Enable the regional PD feature on GCE.
- `GenericEphemeralVolume`: Enables ephemeral, inline volumes that support all features
of normal volumes (can be provided by third-party storage vendors, storage capacity tracking,
@ -860,8 +874,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
for more details.
- `GracefulNodeShutdownBasedOnPodPriority`: Enables the kubelet to check Pod priorities
when shutting down a node gracefully.
- `GRPCContainerProbe`: Enables the gRPC probe method for {Liveness,Readiness,Startup}Probe.
  See [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
- `HonorPVReclaimPolicy`: Honor the persistent volume reclaim policy when it is `Delete`,
  irrespective of PV-PVC deletion ordering.
- `HPAContainerMetrics`: Enable the `HorizontalPodAutoscaler` to scale based on
metrics from individual containers in target pods.
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler`
@ -873,10 +889,19 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `HyperVContainer`: Enable
[Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
for Windows containers.
- `IdentifyPodOS`: Allows the Pod OS field to be specified. This helps in identifying
  the OS of the pod authoritatively during API server admission time.
  In Kubernetes {{< skew currentVersion >}}, the allowed values for `pod.spec.os.name`
  are `windows` and `linux`.
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as
immutable for better safety and performance.
- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
controller to manage Pod completions per completion index.
- `IngressClassNamespacedParams`: Allow namespace-scoped parameters reference in
`IngressClass` resource. This feature adds two fields - `Scope` and `Namespace`
to `IngressClass.spec.parameters`.
- `Initializers`: Allow asynchronous coordination of object creation using the
Initializers admission plugin.
- `InTreePluginAWSUnregister`: Stops registering the aws-ebs in-tree plugin in kubelet
and volume controllers.
- `InTreePluginAzureDiskUnregister`: Stops registering the azuredisk in-tree plugin in kubelet
@ -893,13 +918,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
and volume controllers.
- `InTreePluginvSphereUnregister`: Stops registering the vSphere in-tree plugin in kubelet
and volume controllers.
- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
support for IPv6.
- `JobMutableNodeSchedulingDirectives`: Allows updating node scheduling directives in
@ -917,17 +935,21 @@ Each feature gate is designed for enabling/disabling a specific feature:
a file specified using a config file.
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)
for more details.
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for
  image pull credentials.
- `KubeletInUserNamespace`: Enables support for running kubelet in a
  {{<glossary_tooltip text="user namespace" term_id="userns">}}.
See [Running Kubernetes Node Components as a Non-root User](/docs/tasks/administer-cluster/kubelet-in-userns/).
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
[Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details.
- `KubeletPodResourcesGetAllocatable`: Enable the kubelet's pod resources
  `GetAllocatableResources` functionality. This API augments the
  [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
  with information about the allocatable resources, enabling clients to properly
  track the free compute resources on a node.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and
node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`.
@ -948,15 +970,18 @@ Each feature gate is designed for enabling/disabling a specific feature:
based on logarithmic bucketing of pod timestamps.
- `MemoryManager`: Allows setting memory affinity for a container based on
NUMA topology.
- `MemoryQoS`: Enable memory protection and usage throttling on pods and containers
  using the cgroup v2 memory controller.
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
Service instance.
- `MountContainers`: Enable using utility containers on host as the volume mounter.
- `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods.
For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation).
- `NamespaceDefaultLabelName`: Configure the API Server to set an immutable
  {{< glossary_tooltip text="label" term_id="label" >}} `kubernetes.io/metadata.name`
  on all namespaces, containing the namespace name.
- `NetworkPolicyEndPort`: Enable use of the field `endPort` in NetworkPolicy objects,
  allowing the selection of a port range instead of a single port.
- `NodeDisruptionExclusion`: Enable use of the Node label `node.kubernetes.io/exclude-disruption`
which prevents nodes from being evacuated during zone failures.
- `NodeLease`: Enable the new Lease API to report node heartbeats, which could be used as a node health signal.
@ -971,23 +996,23 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `OpenAPIEnums`: Enables populating "enum" fields of OpenAPI schemas in the
spec returned from the API server.
- `OpenAPIV3`: Enables the API server to publish OpenAPI v3.
- `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
feature which allows users to influence ReplicaSet downscaling order.
- `PersistentLocalVolumes`: Enable the usage of `local` volume type in Pods.
Pod affinity has to be specified if requesting a `local` volume.
- `PodAndContainerStatsFromCRI`: Configure the kubelet to gather container and
  pod stats from the CRI container runtime rather than gathering them from cAdvisor.
- `PodDisruptionBudget`: Enable the [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) feature.
- `PodAffinityNamespaceSelector`: Enable the
  [Pod Affinity Namespace Selector](/docs/concepts/scheduling-eviction/assign-pod-node/#namespace-selector)
  and [CrossNamespacePodAffinity](/docs/concepts/policy/resource-quotas/#cross-namespace-pod-affinity-quota)
  quota scope features.
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/)
feature to account for pod overheads.
- `PodPriority`: Enable the descheduling and preemption of Pods based on their
[priorities](/docs/concepts/scheduling-eviction/pod-priority-preemption/).
- `PodReadinessGates`: Enable the setting of `PodReadinessGate` field for extending
Pod readiness evaluation. See [Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)
for more details.
- `PodSecurity`: Enables the `PodSecurity` admission plugin.
- `PodShareProcessNamespace`: Enable the setting of `shareProcessNamespace` in a Pod for sharing
@ -998,19 +1023,22 @@ Each feature gate is designed for enabling/disabling a specific feature:
the cluster.
- `ProbeTerminationGracePeriod`: Enable [setting probe-level
`terminationGracePeriodSeconds`](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#probe-level-terminationgraceperiodseconds)
on pods. See the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2238-liveness-probe-grace-period)
for more details.
- `ProcMountType`: Enables control over the type of proc mount for containers
  by setting the `procMount` field of a SecurityContext.
- `ProxyTerminatingEndpoints`: Enable the kube-proxy to handle terminating
endpoints when `ExternalTrafficPolicy=Local`.
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
being deleted when it is still used by any Pod.
- `QOSReserved`: Allows resource reservations at the QoS level preventing pods
at lower QoS levels from bursting into resources requested at higher QoS levels
(memory only for now).
- `ReadWriteOncePod`: Enables the usage of `ReadWriteOncePod` PersistentVolume
access mode.
- `RecoverVolumeExpansionFailure`: Enables users to edit their PVCs to smaller
  sizes so that they can recover from previously issued volume expansion failures.
  See [Recovering from Failure when Expanding Volumes](/docs/concepts/storage/persistent-volumes/#recovering-from-failure-when-expanding-volumes)
for more details.
- `RemainingItemCount`: Allow the API servers to show a count of remaining
items in the response to a
@ -1033,7 +1061,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
[Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)
for more details.
- `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet.
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)
for more details.
- `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet.
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)
for more details.
@ -1045,7 +1074,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
instead of the DaemonSet controller.
- `SCTPSupport`: Enables the _SCTP_ `protocol` value in Pod, Service,
Endpoints, EndpointSlice, and NetworkPolicy definitions.
- `SeccompDefault`: Enables the use of `RuntimeDefault` as the default seccomp profile
  for all workloads.
The seccomp profile is specified in the `securityContext` of a Pod and/or a Container.
- `SelectorIndex`: Allows label and field based indexes in API server watch
cache to accelerate list operations.
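As a sketch of how gates like the ones in this list are switched on for a specific component, a kubelet configuration file can enable them under `featureGates` (the gates shown here are illustrative examples from this page; check availability in your Kubernetes version before enabling):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Feature gates are a map of gate name to boolean.
featureGates:
  SeccompDefault: true
  GracefulNodeShutdown: true
```

For components configured via command-line flags rather than a config file, the equivalent form is the `--feature-gates=SeccompDefault=true,GracefulNodeShutdown=true` flag.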
@ -1059,7 +1089,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `ServiceInternalTrafficPolicy`: Enables the `internalTrafficPolicy` field on Services.
- `ServiceLBNodePortControl`: Enables the `allocateLoadBalancerNodePorts` field on Services.
- `ServiceLoadBalancerClass`: Enables the `loadBalancerClass` field on Services. See
[Specifying class of load balancer implementation](/docs/concepts/services-networking/service/#load-balancer-class)
for more details.
- `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers
created by a cloud provider. A node is eligible for exclusion if labelled with

View File

@ -126,7 +126,6 @@ of provisioning.
2. [Token authentication file](#token-authentication-file)
Bootstrap tokens are a simpler and more easily managed method to authenticate kubelets, and do not require any additional flags when starting kube-apiserver.
Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to:

View File

@ -2,7 +2,7 @@
title: Pod Security Policy
id: pod-security-policy
date: 2018-04-12
full_link: /docs/concepts/security/pod-security-policy/
short_description: >
Enables fine-grained authorization of pod creation and updates.

View File

@ -527,15 +527,56 @@ options.
## Version skew policy {#version-skew-policy}
The `kubeadm` tool of version v{{< skew latestVersion >}} may deploy clusters with a control plane of version v{{< skew latestVersion >}} or v{{< skew prevMinorVersion >}}.
`kubeadm` v{{< skew latestVersion >}} can also upgrade an existing kubeadm-created cluster of version v{{< skew prevMinorVersion >}}.
While kubeadm allows version skew against some components that it manages, it is recommended that you
match the kubeadm version with the versions of the control plane components, kube-proxy and kubelet.
Because we cannot see into the future, the kubeadm CLI v{{< skew latestVersion >}} may or may not be able to deploy v{{< skew nextMinorVersion >}} clusters.
### kubeadm's skew against the Kubernetes version
kubeadm can be used with Kubernetes components that are the same version as kubeadm
or one version older. The Kubernetes version can be specified to kubeadm by using the
`--kubernetes-version` flag of `kubeadm init` or the
[`ClusterConfiguration.kubernetesVersion`](/docs/reference/config-api/kubeadm-config.v1beta3/)
field when using `--config`. This option will control the versions
of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy.
Example:
* kubeadm is at {{< skew latestVersion >}}
* `kubernetesVersion` must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}}
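As a sketch, pinning the control plane version through `--config` might look like the following (the version value here is only illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Controls the versions of kube-apiserver, kube-controller-manager,
# kube-scheduler and kube-proxy that kubeadm deploys.
kubernetesVersion: v1.23.0
```

This file would be passed as `kubeadm init --config <file>`; the equivalent flag form is `kubeadm init --kubernetes-version v1.23.0`.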
### kubeadm's skew against the kubelet
Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is the same
version as kubeadm or one version older.
Example:
* kubeadm is at {{< skew latestVersion >}}
* kubelet on the host must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}}
### kubeadm's skew against kubeadm
There are certain limitations on how kubeadm commands can operate on existing nodes or whole clusters
managed by kubeadm.
If new nodes are joined to the cluster, the kubeadm binary used for `kubeadm join` must match
the last version of kubeadm used to either create the cluster with `kubeadm init` or to upgrade
the same node with `kubeadm upgrade`. Similar rules apply to the rest of the kubeadm commands
with the exception of `kubeadm upgrade`.
Example for `kubeadm join`:
* kubeadm version {{< skew latestVersion >}} was used to create a cluster with `kubeadm init`
* Joining nodes must use a kubeadm binary that is at version {{< skew latestVersion >}}
Nodes that are being upgraded must use a version of kubeadm that is the same MINOR
version or one MINOR version newer than the version of kubeadm used for managing the
node.
Example for `kubeadm upgrade`:
* kubeadm version {{< skew prevMinorVersion >}} was used to create or upgrade the node
* The version of kubeadm used for upgrading the node must be at {{< skew prevMinorVersion >}}
or {{< skew latestVersion >}}
To learn more about the version skew between the different Kubernetes components, see
the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).
## Limitations {#limitations}

View File

@ -182,8 +182,9 @@ respectively.
## General Guidelines
System daemons are expected to be treated similar to
[Guaranteed pods](/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed).
System daemons can burst within their bounding control groups and this behavior needs
to be managed as part of Kubernetes deployments. For example, `kubelet` should
have its own control group and share `kube-reserved` resources with the
container runtime. However, Kubelet cannot burst and use up all available Node

View File

@ -32,7 +32,7 @@ configure a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/).
If availability is important for any applications that run or could run on the node(s)
that you are draining, [configure a PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/)
first and then continue following this guide.
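As a minimal sketch (the name and label below are hypothetical), a PodDisruptionBudget that keeps at least two replicas of an application running during a drain could look like:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # hypothetical name
spec:
  minAvailable: 2         # keep at least 2 matching Pods running during voluntary disruptions
  selector:
    matchLabels:
      app: my-app         # hypothetical label on the application's Pods
```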
## Use `kubectl drain` to remove a node from service

View File

@ -63,7 +63,7 @@ actions a client might want to perform. It is recommended that you use the
As with authentication, simple and broad roles may be appropriate for smaller clusters, but as
more users interact with the cluster, it may become necessary to separate teams into separate
{{< glossary_tooltip text="namespaces" term_id="namespace" >}} with more limited roles.
With authorization, it is important to understand how updates on one object may cause actions in
other places. For instance, a user may not be able to create pods directly, but allowing them to
@ -106,15 +106,18 @@ reserved resources like memory, or to provide default limits when none are speci
A pod definition contains a [security context](/docs/tasks/configure-pod-container/security-context/)
that allows it to request access to run as a specific Linux user on a node (like root),
access to run privileged or access the host network, and other controls that would otherwise
allow it to run unfettered on a hosting node.
You can configure [Pod security admission](/docs/concepts/security/pod-security-admission/)
to enforce use of a particular [Pod Security Standard](/docs/concepts/security/pod-security-standards/)
in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}, or to detect breaches.
Generally, most application workloads need limited access to host resources so they can
successfully run as a root process (uid 0) without access to host information. However,
considering the privileges associated with the root user, you should write application
containers to run as a non-root user. Similarly, administrators who wish to prevent
client applications from escaping their containers should apply the **Baseline**
or **Restricted** Pod Security Standard.
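For example, Pod Security admission is configured per namespace through labels. A sketch for a hypothetical namespace that enforces the Baseline standard and additionally warns about violations of the Restricted standard:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app            # hypothetical namespace name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```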
### Preventing containers from loading unwanted kernel modules
@ -238,7 +241,15 @@ restrict the integration to functioning in a single namespace if possible.
Components that create pods may also be unexpectedly powerful if they can do so inside namespaces
like the `kube-system` namespace, because those pods can gain access to service account secrets
or run with elevated permissions if those service accounts are granted access to permissive
[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/).
If you use [Pod Security admission](/docs/concepts/security/pod-security-admission/) and allow
any component to create Pods within a namespace that permits privileged Pods, those Pods may
be able to escape their containers and use this widened access to elevate their privileges.
You should not allow untrusted components to create Pods in any system namespace (those with
names that start with `kube-`) nor in any namespace where that access grant allows the possibility
of privilege escalation.
### Encrypt secrets at rest

View File

@ -218,6 +218,7 @@ Use the option `--from-env-file` to create a ConfigMap from an env-file, for exa
# Download the sample files into `configure-pod-container/configmap/` directory
wget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties
wget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties
# The env-file `game-env-file.properties` looks like below
cat configure-pod-container/configmap/game-env-file.properties
@ -255,17 +256,10 @@ data:
lives: "3"
```
Starting with Kubernetes v1.23, `kubectl` supports specifying the `--from-env-file`
argument multiple times to create a ConfigMap from multiple data sources.
```shell
# Create the configmap
kubectl create configmap config-multi-env-files \
--from-env-file=configure-pod-container/configmap/game-env-file.properties \
--from-env-file=configure-pod-container/configmap/ui-env-file.properties
@ -288,8 +282,11 @@ metadata:
resourceVersion: "810136"
uid: 252c4572-eb35-11e7-887b-42010a8002b8
data:
allowed: '"true"'
color: purple
enemies: aliens
how: fairlyNice
lives: "3"
textmode: "true"
```

View File

@ -44,9 +44,6 @@ The above bullets are not a complete set of security context settings -- please
[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
for a comprehensive list.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
@ -487,6 +484,8 @@ kubectl delete pod security-context-demo-4
* [Tuning Docker with the newest security enhancements](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)
* [Security Contexts design document](https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md)
* [Ownership Management design document](https://git.k8s.io/community/contributors/design-proposals/storage/volume-ownership-management.md)
* [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/)
* [AllowPrivilegeEscalation design
document](https://git.k8s.io/community/contributors/design-proposals/auth/no-new-privs.md)
* For more information about security mechanisms in Linux, see
[Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features) (Note: Some information is out of date)

View File

@ -182,8 +182,7 @@ static-web 1/1 Running 0 2m
```
{{< note >}}
Make sure the kubelet has permission to create the mirror Pod in the API server. If not, the creation request is rejected by the API server. See [Pod Security admission](/docs/concepts/security/pod-security-admission) and [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/).
{{< /note >}}
{{< glossary_tooltip term_id="label" text="Labels" >}} from the static Pod are

View File

@ -337,9 +337,9 @@ APIs, cluster administrators must ensure that:
* For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters provided above.
For more information on these different metrics paths and how they differ, please see the relevant design proposals for
[the HPA V2](https://github.com/kubernetes/design-proposals-archive/blob/main/autoscaling/hpa-v2.md),
[custom.metrics.k8s.io](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/custom-metrics-api.md)
and [external.metrics.k8s.io](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/external-metrics-api.md).
For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)
and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects).
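As a sketch, an `autoscaling/v2` HorizontalPodAutoscaler consuming a per-pod custom metric might look like the following (the metric name and workload are hypothetical, and assume a metrics adapter is serving the `custom.metrics.k8s.io` API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app      # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: requests_per_second   # hypothetical custom metric
      target:
        type: AverageValue
        averageValue: "100"
```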

View File

@ -350,7 +350,7 @@ node with the required profile.
{{< note >}}
PodSecurityPolicy is deprecated in Kubernetes v1.21, and will be removed in v1.25.
See [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) documentation for more information.
{{< /note >}}
If the PodSecurityPolicy extension is enabled, cluster-wide AppArmor restrictions can be applied. To

View File

@ -0,0 +1,63 @@
---
title: Límites de Volumen específicos del Nodo
content_type: concept
---
<!-- overview -->
Esta página describe la cantidad máxima de Volúmenes que se pueden adjuntar a un Nodo para varios proveedores de nube.
Los proveedores de la nube como Google, Amazon y Microsoft suelen tener un límite en la cantidad de Volúmenes que se pueden adjuntar a un Nodo. Es importante que Kubernetes respete esos límites. De lo contrario, los Pods planificados en un Nodo podrían quedarse atascados esperando que los Volúmenes se conecten.
<!-- body -->
## Límites predeterminados de Kubernetes
El Planificador de Kubernetes tiene límites predeterminados en la cantidad de Volúmenes que se pueden adjuntar a un Nodo:
<table>
<tr><th>Servicio de almacenamiento en la nube</th><th>Volúmenes máximos por Nodo</th></tr>
<tr><td><a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (EBS)</a></td><td>39</td></tr>
<tr><td><a href="https://cloud.google.com/persistent-disk/">Google Persistent Disk</a></td><td>16</td></tr>
<tr><td><a href="https://azure.microsoft.com/en-us/services/storage/main-disks/">Microsoft Azure Disk Storage</a></td><td>16</td></tr>
</table>
## Límites personalizados
Puede cambiar estos límites configurando el valor de la variable de entorno KUBE_MAX_PD_VOLS y luego iniciando el Planificador. Los controladores CSI pueden tener un procedimiento diferente, consulte su documentación sobre cómo personalizar sus límites.
Tenga cuidado si establece un límite superior al límite predeterminado. Consulte la documentación del proveedor de la nube para asegurarse de que los Nodos realmente puedan admitir el límite que establezca.
El límite se aplica a todo el clúster, por lo que afecta a todos los Nodos.
## Dynamic volume limits
{{< feature-state state="stable" for_k8s_version="v1.17" >}}
Dynamic volume limits are supported for the following volume types:
- Amazon EBS
- Google Persistent Disk
- Azure Disk
- CSI
For Volumes managed by in-tree volume plugins, Kubernetes automatically determines the Node type and enforces the appropriate maximum number of Volumes for the Node. For example:
* On <a href="https://cloud.google.com/compute/">Google Compute Engine</a>, up to 127 Volumes can be attached to a Node, [depending on the node type](https://cloud.google.com/compute/docs/disks/#pdnumberlimits).
* For Amazon EBS disks on M5, C5, R5, T3, and Z1D instance types, Kubernetes allows only 25 Volumes to be attached to a Node. For other instance types on <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (EC2)</a>, Kubernetes allows 39 Volumes to be attached to a Node.
* On Azure, up to 64 disks can be attached to a Node, depending on the node type. For more details, refer to [Sizes for virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes).
* If a CSI storage driver advertises a maximum number of Volumes for a Node (using `NodeGetInfo`), the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} honors that limit.
Refer to the [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo) for details.
* For Volumes managed by in-tree plugins that have been migrated to a CSI driver, the maximum number of Volumes will be the one reported by the CSI driver.
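For reference, a CSI driver's reported limit is surfaced on the `CSINode` object for each node; a sketch (the driver name and count are hypothetical) looks like:

```yaml
# Sketch of a CSINode object for a hypothetical driver "csi.example.com";
# the scheduler reads drivers[].allocatable.count as the attach limit.
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: node-1
spec:
  drivers:
  - name: csi.example.com
    nodeID: node-1
    allocatable:
      count: 24   # maximum Volumes this driver reported via NodeGetInfo
```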


@ -27,12 +27,7 @@ weight: 10
<div class="col-md-8">
<h3>Kubernetes Deployments</h3>
<p>Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes <b>Deployment</b> configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you have created a Deployment, the Kubernetes scheduler (kube-scheduler) schedules the application instances onto Nodes in the cluster.</p>
<p>Once the application instances are created, a Kubernetes Deployment controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. <b>This provides a self-healing mechanism to address machine failure or maintenance.</b></p>


@ -0,0 +1,134 @@
---
title: Pod Security Admission
content_type: concept
weight: 20
min-kubernetes-server-version: v1.22
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
The Kubernetes [Pod Security Standards](/ja/docs/concepts/security/pod-security-standards/) define different isolation levels for Pods.
These standards let you define how you want to restrict the behavior of Pods in a clear, consistent fashion.
As a beta feature, Kubernetes offers a built-in _Pod Security_ {{< glossary_tooltip text="admission controller" term_id="admission-controller" >}}, the successor to [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/).
Pod security restrictions are applied at the {{< glossary_tooltip text="namespace" term_id="namespace" >}} level when Pods are created.
{{< note >}}
The PodSecurityPolicy API is deprecated and will be [removed](/docs/reference/using-api/deprecation-guide/#v1-25) from Kubernetes in v1.25.
{{< /note >}}
<!-- body -->
## Enabling the `PodSecurity` admission plugin {#enabling-the-podsecurity-admission-plugin}
In v1.23, the `PodSecurity` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) is a beta feature and is enabled by default.
In v1.22, the `PodSecurity` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) is an alpha feature and must be enabled in `kube-apiserver` in order to use the built-in admission plugin:
```shell
--feature-gates="...,PodSecurity=true"
```
## Alternative: installing the `PodSecurity` admission webhook {#webhook}
For environments where the built-in `PodSecurity` admission plugin cannot be used, either because the cluster is older than v1.22 or because the `PodSecurity` feature cannot be enabled, the `PodSecurity` admission logic is also available as a beta [validating admission webhook](https://git.k8s.io/pod-security-admission/webhook).
A pre-built container image, certificate generation scripts, and example manifests are available at [https://git.k8s.io/pod-security-admission/webhook](https://git.k8s.io/pod-security-admission/webhook).
To install:
```shell
git clone git@github.com:kubernetes/pod-security-admission.git
cd pod-security-admission/webhook
make certs
kubectl apply -k .
```
{{< note >}}
The generated certificate is valid for 2 years. Before it expires, either regenerate the certificate or remove the webhook in favor of the built-in admission plugin.
{{< /note >}}
## Pod Security levels {#pod-security-levels}
Pod Security admission places requirements on a Pod's [Security Context](/docs/tasks/configure-pod-container/security-context/) and other related fields according to the three levels defined by the [Pod Security Standards](/ja/docs/concepts/security/pod-security-standards): `privileged`, `baseline`, and `restricted`.
Refer to the [Pod Security Standards](/ja/docs/concepts/security/pod-security-standards) page for an in-depth look at those requirements.
## Pod Security admission labels for namespaces {#pod-security-admission-labels-for-namespaces}
Once the feature is enabled or the webhook is installed, you can configure namespaces to define the admission control mode you want to use for pod security in each namespace.
Kubernetes defines a set of {{< glossary_tooltip term_id="label" text="labels" >}} that you can set to define which of the predefined Pod Security Standard levels you want to use for a namespace.
The label you select defines what action the {{< glossary_tooltip text="control plane" term_id="control-plane" >}} takes if a potential violation is detected:
{{< table caption="Pod Security Admission modes" >}}
Mode | Description
:---------|:------------
**enforce** | Policy violations will cause the Pod to be rejected.
**audit** | Policy violations will trigger the addition of an audit annotation to the event recorded in the [audit log](/ja/docs/tasks/debug-application-cluster/audit/), but are otherwise allowed.
**warn** | Policy violations will trigger a user-facing warning, but are otherwise allowed.
{{< /table >}}
A namespace can configure any or all modes, and can even set a different level for different modes.
For each mode, there are two labels that determine the policy used:
```yaml
# The per-mode level label indicates which policy level to apply for the mode.
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# LEVEL must be one of `privileged`, `baseline`, or `restricted`.
pod-security.kubernetes.io/<MODE>: <LEVEL>
# Optional: per-mode version label that can be used to pin the policy to the
# version that shipped with a given Kubernetes minor version (for example, v{{< skew latestVersion >}}).
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# VERSION must be a valid Kubernetes minor version, or `latest`.
pod-security.kubernetes.io/<MODE>-version: <VERSION>
```
See [Enforce Pod Security Standards with namespace labels](/docs/tasks/configure-pod-container/enforce-standards-namespace-labels) for example usage.
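As a concrete sketch (the namespace name is illustrative), a namespace that enforces `baseline` while auditing and warning at the stricter `restricted` level could be labeled like this:

```yaml
# Hypothetical namespace: reject baseline violations, but only warn and audit
# when Pods fall short of the stricter restricted level.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace        # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```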
## Workload resources and Pod templates {#workload-resources-and-pod-templates}
Pods are often created indirectly, by creating a [workload object](/ja/docs/concepts/workloads/controllers/) such as a {{< glossary_tooltip term_id="deployment" >}} or {{< glossary_tooltip term_id="job">}}.
The workload object defines a _Pod template_, and a {{< glossary_tooltip term_id="controller" text="controller" >}} for the workload resource creates Pods based on that template.
To help catch violations early, both the audit and warning modes are applied to workload resources.
However, enforce mode is **not** applied to workload resources; it is only applied to the resulting Pod objects.
## Exemptions {#exemptions}
You can define _exemptions_ from pod security enforcement in order to allow the creation of Pods that would otherwise be prohibited due to the policy associated with a given namespace.
Exemptions can be statically configured in the [Admission Controller configuration](/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller).
Exemptions must be explicitly enumerated.
Requests meeting exemption criteria are _ignored_ by the Admission Controller (all `enforce`, `audit`, and `warn` behaviors are skipped). Exemption dimensions include:
- **Usernames:** requests from users with an exempt authenticated (or impersonated) username are ignored.
- **RuntimeClassNames:** Pods and [workload resources](#workload-resources-and-pod-templates) specifying an exempt runtime class name are ignored.
- **Namespaces:** Pods and [workload resources](#workload-resources-and-pod-templates) in an exempt namespace are ignored.
{{< caution >}}
Most Pods are created by a controller in response to a [workload resource](#workload-resources-and-pod-templates). This means that exempting an end user only exempts them when they create Pods directly, not when they create a workload resource.
Controller service accounts (such as `system:serviceaccount:kube-system:replicaset-controller`) should generally not be exempted, as doing so would implicitly exempt any user who can create the corresponding workload resource.
{{< /caution >}}
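A sketch of a static exemption in the admission controller configuration (the exempted names are illustrative; `v1beta1` matches the v1.23 beta API):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
    exemptions:
      usernames: ["my-break-glass-user"]   # illustrative
      runtimeClasses: []
      namespaces: ["kube-system"]          # illustrative
```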
Updates to the following Pod fields are exempt from policy checks, meaning that if a Pod update request only changes these fields, the Pod will not be denied even if it violates the current policy level:
- Any metadata updates **except** changes to the seccomp or AppArmor annotations:
  - `seccomp.security.alpha.kubernetes.io/pod` (deprecated)
  - `container.seccomp.security.alpha.kubernetes.io/*` (deprecated)
  - `container.apparmor.security.beta.kubernetes.io/*`
- Valid updates to `.spec.activeDeadlineSeconds`
- Valid updates to `.spec.tolerations`
## {{% heading "whatsnext" %}}
- [Pod Security Standards](/ja/docs/concepts/security/pod-security-standards)
- [Enforcing Pod Security Standards](/docs/setup/best-practices/enforcing-pod-security-standards)
- [Enforce Pod Security Standards by configuring the built-in admission controller](/docs/tasks/configure-pod-container/enforce-standards-admission-controller)
- [Enforce Pod Security Standards with namespace labels](/docs/tasks/configure-pod-container/enforce-standards-namespace-labels)
- [Migrate from PodSecurityPolicy to the built-in PodSecurity admission controller](/docs/tasks/configure-pod-container/migrate-from-psp)

View File

@ -68,8 +68,8 @@ Kubernetesプロジェクトは、GitHubのissueとPull Requestに関連する
- approve
These two plugins use the
[OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) and
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES)
files at the top level of the `kubernetes/website` GitHub repository to control
how prow works within the repository.
The OWNERS file contains a list of people who are SIG Docs reviewers and approvers.


@ -0,0 +1,4 @@
---
title: Managing Memory, CPU, and API Resources
weight: 20
---


@ -3,7 +3,6 @@ layout: blog
title: "Updated: Frequently Asked Questions (FAQ) about the Dockershim Removal"
date: 2022-02-17
slug: dockershim-faq
aliases: [ '/dockershim' ]
---
**This is an update to the original article [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/),


@ -52,7 +52,7 @@ kubectl [command] [TYPE] [NAME] [flags]
* Select resources from one or more files: `-f file1 -f file2 -f file<#>`
* [Use YAML rather than JSON](/docs/concepts/configuration/overview/#general-configuration-tips), since YAML tends to be more user-friendly, especially for configuration files.<br/>
Example: `kubectl get pod -f ./pod.yaml`
* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.<br/>
@ -162,7 +162,7 @@ kubectl [command] [TYPE] [NAME] [flags]
### Output formatting
The default output format for all `kubectl` commands is human-readable plain text. To output details in a specific format, you can add either the `-o` or `--output` flag to the `kubectl` command.
#### Syntax
@ -229,7 +229,7 @@ submit-queue 610995
`kubectl` can receive object information from the server.
This means that for any specified resource, the server will return columns and rows for that resource, which the client displays.
Because the server encapsulates the implementation of printing, consistent human-readable output is guaranteed across all clients using the same cluster.
This feature is enabled by default in `kubectl` 1.11 and higher. To disable it, add the `--server-print=false` flag to the `kubectl get` command.
@ -340,7 +340,7 @@ kubectl delete pods,services -l name=<label-name>
kubectl delete pods --all
```
`kubectl exec` - Execute a command against a container in a pod.
```shell
# Get output from running the 'date' command in pod <pod-name>. By default, output is from the first container.


@ -0,0 +1,156 @@
---
layout: blog
title: "确保准入控制器的安全"
date: 2022-01-19
slug: secure-your-admission-controllers-and-webhooks
---
<!--
layout: blog
title: "Securing Admission Controllers"
date: 2022-01-19
slug: secure-your-admission-controllers-and-webhooks
-->
<!--
**Author:** Rory McCune (Aqua Security)
-->
**作者:** Rory McCune (Aqua Security)
<!--
[Admission control](/docs/reference/access-authn-authz/admission-controllers/) is a key part of Kubernetes security, alongside authentication and authorization.
Webhook admission controllers are extensively used to help improve the security of Kubernetes clusters in a variety of ways including restricting the privileges of workloads and ensuring that images deployed to the cluster meet organizations security requirements.
-->
[准入控制](/zh/docs/reference/access-authn-authz/admission-controllers/)和认证、授权都是 Kubernetes 安全性的关键部分。
Webhook 准入控制器被广泛用于以多种方式帮助提高 Kubernetes 集群的安全性,
包括限制工作负载权限和确保部署到集群的镜像满足组织安全要求。
<!--
However, as with any additional component added to a cluster, security risks can present themselves.
A security risk example is if the deployment and management of the admission controller are not handled correctly. To help admission controller users and designers manage these risks appropriately,
the [security documentation](https://github.com/kubernetes/community/tree/master/sig-security#security-docs) subgroup of SIG Security has spent some time developing a [threat model for admission controllers](https://github.com/kubernetes/sig-security/tree/main/sig-security-docs/papers/admission-control).
This threat model looks at likely risks which may arise from the incorrect use of admission controllers, which could allow security policies to be bypassed, or even allow an attacker to get unauthorised access to the cluster.
-->
然而,与添加到集群中的任何其他组件一样,安全风险也会随之出现。
一个安全风险示例是没有正确处理准入控制器的部署和管理。
为了帮助准入控制器用户和设计人员适当地管理这些风险,
SIG Security 的[安全文档](https://github.com/kubernetes/community/tree/master/sig-security#security-docs)小组
花费了一些时间来开发一个[准入控制器威胁模型](https://github.com/kubernetes/sig-security/tree/main/sig-security-docs/papers/admission-control)。
这种威胁模型着眼于由于不正确使用准入控制器而产生的可能的风险,可能允许绕过安全策略,甚至允许攻击者未经授权访问集群。
<!--
From the threat model, we developed a set of security best practices that should be adopted to ensure that cluster operators can get the security benefits of admission controllers whilst avoiding any risks from using them.
-->
基于这个威胁模型,我们开发了一套安全最佳实践。
你应该采用这些实践来确保集群操作员可以获得准入控制器带来的安全优势,同时避免使用它们带来的任何风险。
<!--
## Admission controllers and good practices for security
-->
## 准入控制器和安全的良好做法
<!--
From the threat model, a couple of themes emerged around how to ensure the security of admission controllers.
-->
基于这个威胁模型,围绕着如何确保准入控制器的安全性出现了几个主题。
<!--
### Secure webhook configuration
-->
### 安全的 webhook 配置
<!--
Its important to ensure that any security component in a cluster is well configured and admission controllers are no different here. There are a couple of security best practices to consider when using admission controllers
-->
确保集群中的任何安全组件都配置良好是很重要的,在这里准入控制器也并不例外。
使用准入控制器时需要考虑几个安全最佳实践:
<!--
* **Correctly configured TLS for all webhook traffic**. Communications between the API server and the admission controller webhook should be authenticated and encrypted to ensure that attackers who may be in a network position to view or modify this traffic cannot do so. To achieve this access the API server and webhook must be using certificates from a trusted certificate authority so that they can validate their mutual identities
-->
* **为所有 webhook 流量正确配置 TLS**。
  API 服务器和准入控制器 webhook 之间的通信应该经过身份验证和加密,以确保处于网络路径中的攻击者无法查看或修改此流量。
  为了实现这一点API 服务器和 webhook 必须使用来自受信任的证书颁发机构的证书,以便它们可以验证彼此的身份。
<!--
* **Only authenticated access allowed**. If an attacker can send an admission controller large numbers of requests, they may be able to overwhelm the service causing it to fail. Ensuring all access requires strong authentication should mitigate that risk.
-->
* **只允许经过身份验证的访问**。
如果攻击者可以向准入控制器发送大量请求,他们可能会压垮服务导致其失败。
确保所有访问都需要强身份验证可以降低这种风险。
<!--
* **Admission controller fails closed**. This is a security practice that has a tradeoff, so whether a cluster operator wants to configure it will depend on the clusters threat model. If an admission controller fails closed, when the API server cant get a response from it, all deployments will fail. This stops attackers bypassing the admission controller by disabling it, but, can disrupt the clusters operation. As clusters can have multiple webhooks, one approach to hit a middle ground might be to have critical controls on a fail closed setups and less critical controls allowed to fail open.
-->
* **准入控制器故障时关闭fail closed**。
  这是一种需要权衡的安全实践,集群操作员是否要这样配置取决于集群的威胁模型。
  如果准入控制器配置为故障时关闭,当 API 服务器无法从它得到响应时,所有的部署都会失败。
  这可以阻止攻击者通过禁用准入控制器来绕过它,但可能会破坏集群的运行。
  由于集群可以有多个 webhook一种折中的方法是对关键控制配置故障关闭
  而对不太关键的控制允许故障时打开fail open
<!--
* **Regular reviews of webhook configuration**. Configuration mistakes can lead to security issues, so its important that the admission controller webhook configuration is checked to make sure the settings are correct. This kind of review could be done automatically by an Infrastructure As Code scanner or manually by an administrator.
-->
* **定期审查 webhook 配置**。
配置错误可能导致安全问题,因此检查准入控制器 webhook 配置以确保设置正确非常重要。
这种审查可以由基础设施即代码扫描程序自动完成,也可以由管理员手动完成。
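One way to start such a review (sketch only; it needs access to a running cluster, and the configuration names are cluster-specific) is to list the registered webhook configurations and inspect their settings:

```shell
# List all admission webhook configurations registered in the cluster,
# then dump one of them for manual or automated review.
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations
kubectl get validatingwebhookconfiguration <name> -o yaml
```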
<!--
### Secure cluster configuration for admission control
-->
### 为准入控制保护集群配置
<!--
In most cases, the admission controller webhook used by a cluster will be installed as a workload in the cluster. As a result, its important to ensure that Kubernetes' security features that could impact its operation are well configured.
-->
在大多数情况下,集群使用的准入控制器 webhook 将作为工作负载安装在集群中。
因此,确保正确配置了可能影响其操作的 Kubernetes 安全特性非常重要。
<!--
* **Restrict [RBAC](/docs/reference/access-authn-authz/rbac/) rights**. Any user who has rights which would allow them to modify the configuration of the webhook objects or the workload that the admission controller uses could disrupt its operation. So its important to make sure that only cluster administrators have those rights.
-->
* **限制 [RBAC](/zh/docs/reference/access-authn-authz/rbac/) 权限**。
任何有权修改 webhook 对象的配置或准入控制器使用的工作负载的用户都可以破坏其运行。
因此,确保只有集群管理员拥有这些权限非常重要。
<!--
* **Prevent privileged workloads**. One of the realities of container systems is that if a workload is given certain privileges, it will be possible to break out to the underlying cluster node and impact other containers on that node. Where admission controller services run in the cluster theyre protecting, its important to ensure that any requirement for privileged workloads is carefully reviewed and restricted as much as possible.
-->
* **防止特权工作负载**。
容器系统的一个现实是,如果工作负载被赋予某些特权,
则有可能逃逸到下层的集群节点并影响该节点上的其他容器。
如果准入控制器服务在它们所保护的集群上运行,
一定要确保对特权工作负载的所有请求都要经过仔细审查并尽可能地加以限制。
<!--
* **Strictly control external system access**. As a security service in a cluster admission controller systems will have access to sensitive information like credentials. To reduce the risk of this information being sent outside the cluster, [network policies](/docs/concepts/services-networking/network-policies/) should be used to restrict the admission controller services access to external networks.
-->
* **严格控制外部系统访问**。
作为集群中的安全服务,准入控制器系统将有权访问敏感信息,如凭证。
为了降低此信息被发送到集群外的风险,
应使用[网络策略](/zh/docs/concepts/services-networking/network-policies/)
来限制准入控制器服务对外部网络的访问。
<!--
* **Each cluster has a dedicated webhook**. Whilst it may be possible to have admission controller webhooks that serve multiple clusters, there is a risk when using that model that an attack on the webhook service would have a larger impact where its shared. Also where multiple clusters use an admission controller there will be increased complexity and access requirements, making it harder to secure.
-->
* **每个集群都有一个专用的 webhook**。
  虽然可以让一个准入控制器 webhook 服务于多个集群,但在这种模型下,
  针对该 webhook 服务的攻击会对所有共享它的集群产生更大的影响。
  此外,当多个集群使用同一个准入控制器时,复杂性和访问要求也会增加,从而更难保证其安全。
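The network-policy point above can be sketched as follows (namespace and label names are hypothetical): a NetworkPolicy that blocks all egress from the webhook's pods except DNS lookups.

```yaml
# Hypothetical policy: block all egress from pods labeled app=admission-webhook
# in namespace "webhook-system", allowing only DNS lookups.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-webhook-egress
  namespace: webhook-system
spec:
  podSelector:
    matchLabels:
      app: admission-webhook
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
```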
<!--
### Admission controller rules
-->
### 准入控制器规则
<!--
A key element of any admission controller used for Kubernetes security is the rulebase it uses. The rules need to be able to accurately meet their goals avoiding false positive and false negative results.
-->
对于用于 Kubernetes 安全的所有准入控制器而言,一个关键元素是它使用的规则库。
规则需要能够准确地满足其目标,避免假阳性和假阴性结果。
<!--
* **Regularly test and review rules**. Admission controller rules need to be tested to ensure their accuracy. They also need to be regularly reviewed as the Kubernetes API will change with each new version, and rules need to be assessed with each Kubernetes release to understand any changes that may be required to keep them up to date.
-->
* **定期测试和审查规则**。
需要测试准入控制器规则以确保其准确性。
还需要定期审查,因为 Kubernetes API 会随着每个新版本而改变,
并且需要在每个 Kubernetes 版本中评估规则,以了解使他们保持最新版本所需要做的任何改变。


@ -0,0 +1,299 @@
---
layout: blog
title: 'SIG Node CI 子项目庆祝测试改进两周年'
date: 2022-02-16
slug: sig-node-ci-subproject-celebrates
canonicalUrl: https://www.kubernetes.dev/blog/2022/02/16/sig-node-ci-subproject-celebrates-two-years-of-test-improvements/
---
<!--
---
layout: blog
title: 'SIG Node CI Subproject Celebrates Two Years of Test Improvements'
date: 2022-02-16
slug: sig-node-ci-subproject-celebrates
canonicalUrl: https://www.kubernetes.dev/blog/2022/02/16/sig-node-ci-subproject-celebrates-two-years-of-test-improvements/
url: /zh/blog/2022/02/sig-node-ci-subproject-celebrates
---
-->
**作者:** Sergey Kanzhelev (Google), Elana Hashman (Red Hat)
<!--**Authors:** Sergey Kanzhelev (Google), Elana Hashman (Red Hat)-->
<!--Ensuring the reliability of SIG Node upstream code is a continuous effort
that takes a lot of behind-the-scenes effort from many contributors.
There are frequent releases of Kubernetes, base operating systems,
container runtimes, and test infrastructure that result in a complex matrix that
requires attention and steady investment to "keep the lights on."
In May 2020, the Kubernetes node special interest group ("SIG Node") organized a new
subproject for continuous integration (CI) for node-related code and tests. Since its
inauguration, the SIG Node CI subproject has run a weekly meeting, and even the full hour
is often not enough to complete triage of all bugs, test-related PRs and issues, and discuss all
related ongoing work within the subgroup.-->
保证 SIG Node 上游代码的可靠性是一项持续的工作,需要许多贡献者在幕后付出大量努力。
Kubernetes、基础操作系统、容器运行时和测试基础架构的频繁发布构成了一个复杂的矩阵
需要持续的关注和稳定的投入来“保持灯火通明”。2020 年 5 月Kubernetes Node 特别兴趣小组
“SIG Node”为节点相关代码和测试组织了一个新的持续集成CI子项目。自成立以来SIG Node CI
子项目每周举行一次会议,即使一整个小时通常也不足以完成对所有缺陷、测试相关的 PR 和问题的分类,
并讨论组内所有相关的正在进行的工作。
<!--Over the past two years, we've fixed merge-blocking and release-blocking tests, reducing time to merge Kubernetes contributors' pull requests thanks to reduced test flakes. When we started, Node test jobs only passed 42% of the time, and through our efforts, we now ensure a consistent >90% job pass rate. We've closed 144 test failure issues and merged 176 pull requests just in kubernetes/kubernetes. And we've helped subproject participants ascend the Kubernetes contributor ladder, with 3 new org members, 6 new reviewers, and 2 new approvers.-->
在过去两年中我们修复了阻塞合并和阻塞发布的测试由于减少了不稳定flaky的测试
缩短了合并 Kubernetes 贡献者拉取请求所需的时间。我们刚开始时Node 测试任务的通过率只有 42%
通过我们的努力,现在任务通过率稳定在 90% 以上。我们已经关闭了 144 个测试失败问题,
并仅在 kubernetes/kubernetes 中就合并了 176 个拉取请求。
我们还帮助子项目参与者提升了 Kubernetes 贡献者的等级,新增了 3 名组织成员、6 名评审员和 2 名审批员。
<!--The Node CI subproject is an approachable first stop to help new contributors
get started with SIG Node. There is a low barrier to entry for new contributors
to address high-impact bugs and test fixes, although there is a long
road before contributors can climb the entire contributor ladder:
it took over a year to establish two new approvers for the group.
The complexity of all the different components that power Kubernetes nodes
and its test infrastructure requires a sustained investment over a long period
for developers to deeply understand the entire system,
both at high and low levels of detail.-->
Node CI 子项目是一个可入门的第一站,帮助新参与者开始使用 SIG Node。对于新贡献者来说
解决影响较大的缺陷和测试修复的门槛很低,尽管贡献者要攀登整个贡献者阶梯还有很长的路要走:
为该团队培养了两个新的审批人花了一年多的时间。为 Kubernetes 节点及其测试基础设施提供动力的所有
不同组件的复杂性要求开发人员在很长一段时间内进行持续投资,
以深入了解整个系统,从宏观到微观。
<!--We have several regular contributors at our meetings, however; our reviewers
and approvers pool is still small. It is our goal to continue to grow
contributors to ensure a sustainable distribution of work
that does not just fall to a few key approvers.-->
虽然在我们的会议上有几个比较固定的贡献者;但是我们的评审员和审批员仍然很少。
我们的目标是继续增加贡献者,以确保工作的可持续分配,而不仅仅是少数关键批准者。
<!--It's not always obvious how subprojects within SIGs are formed, operate,
and work. Each is unique to its sponsoring SIG and tailored to the projects
that the group is intended to support. As a group that has welcomed many
first-time SIG Node contributors, we'd like to share some of the details and
accomplishments over the past two years,
helping to demystify our inner workings and celebrate the hard work
of all our dedicated contributors!-->
SIG 中的子项目如何形成、运行和工作并不总是显而易见的。每一个都是其背后的 SIG 所独有的,
并根据该小组打算支持的项目量身定制。作为一个欢迎了许多第一次 SIG Node 贡献者的团队,
我们想分享过去两年的一些细节和成就,帮助揭开我们内部工作的神秘面纱,并庆祝我们所有专注贡献者的辛勤工作!
<!--## Timeline-->
## 时间线
<!--***May 2020.*** SIG Node CI group was formed on May 11, 2020, with more than
[30 volunteers](https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U/edit#bookmark=id.vsb8pqnf4gib)
signed up, to improve SIG Node CI signal and overall observability.
Victor Pickard focused on getting
[testgrid jobs](https://testgrid.k8s.io/sig-node) passing
when Ning Liao suggested forming a group around this effort and came up with
the [original group charter document](https://docs.google.com/document/d/1yS-XoUl6GjZdjrwxInEZVHhxxLXlTIX2CeWOARmD8tY/edit#heading=h.te6sgum6s8uf).
The SIG Node chairs sponsored group creation with Victor as a subproject lead.
Sergey Kanzhelev joined Victor shortly after as a co-lead.-->
***2020 年 5 月*** SIG Node CI 组于 2020 年 5 月 11 日成立,超过
[30 名志愿者](https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U/edit#bookmark=id.vsb8pqnf4gib)
注册,以改进 SIG Node CI 信号和整体可观测性。
Victor Pickard 专注于让 [testgrid 任务](https://testgrid.k8s.io/sig-node)通过,
当时 Ning Liao 建议围绕这项工作组建一个小组,并提出了
[最初的小组章程文件](https://docs.google.com/document/d/1yS-XoUl6GjZdjrwxInEZVHhxxLXlTIX2CeWOARmD8tY/edit#heading=h.te6sgum6s8uf)。
SIG Node 主席们赞助成立了该小组,由 Victor 担任子项目负责人。Sergey Kanzhelev 不久后加入,与 Victor 共同担任负责人。
<!--At the kick-off meeting, we discussed which tests to concentrate on fixing first
and discussed merge-blocking and release-blocking tests, many of which were failing due
to infrastructure issues or buggy test code.-->
在启动会议上,我们讨论了应该首先集中精力修复哪些测试,并讨论了阻塞合并和阻塞发布的测试,
其中许多测试由于基础设施问题或错误的测试代码而失败。
<!--The subproject launched weekly hour-long meetings to discuss ongoing work
discussion and triage.-->
该子项目每周召开一小时的会议,讨论正在进行的工作会谈和分类。
<!--***June 2020.*** Morgan Bauer, Karan Goel, and Jorge Alarcon Ochoa were
recognized as reviewers for the SIG Node CI group for their contributions,
helping significantly with the early stages of the subproject.
David Porter and Roy Yang also joined the SIG test failures GitHub team.-->
***2020 年 6 月*** Morgan Bauer 、 Karan Goel 和 Jorge Alarcon Ochoa
因其贡献而被公认为 SIG Node CI 小组的评审员,为该子项目的早期阶段提供了重要帮助。
David Porter 和 Roy Yang 也加入了负责 SIG 测试失败问题的 GitHub 团队。
<!--***August 2020.*** All merge-blocking and release-blocking tests were passing,
with some flakes. However, only 42% of all SIG Node test jobs were green, as there
were many flakes and failing tests.-->
***2020 年 8 月*** 所有阻塞合并和阻塞发布的测试都通过了但仍存在一些不稳定flaky的测试。
然而,只有 42% 的 SIG Node 测试任务是绿色的,
因为还有许多不稳定的测试和失败的测试。
<!--***October 2020.*** Amim Knabben becomes a Kubernetes org member for his
contributions to the subproject.-->
***2020 年 10 月*** Amim Knabben 因对子项目的贡献成为 Kubernetes 组织成员。
<!--***January 2021.*** With healthy presubmit and critical periodic jobs passing,
the subproject discussed its goal for cleaning up the rest of periodic tests
and ensuring they passed without flakes.-->
***2021 年 1 月*** 随着健全的预提交和关键定期工作的通过,子项目讨论了清理其余定期测试并确保其顺利通过的目标。
<!--Elana Hashman joined the subproject, stepping up to help lead it after
Victor's departure.-->
Elana Hashman 加入了这个子项目,在 Victor 离开后帮助领导该项目。
<!--***February 2021.*** Artyom Lukianov becomes a Kubernetes org member for his
contributions to the subproject.-->
***2021 年 2 月*** Artyom Lukianov 因其对子项目的贡献成为 Kubernetes 组织成员。
<!--***August 2021.*** After SIG Node successfully ran a [bug scrub](https://groups.google.com/g/kubernetes-dev/c/w2ghO4ihje0/m/VeEql1LJBAAJ)
to clean up its bug backlog, the scope of the meeting was extended to
include bug triage to increase overall reliability, anticipating issues
before they affect the CI signal.-->
***2021 年 8 月*** 在 SIG Node 成功运行 [bug scrub](https://groups.google.com/g/kubernetes-dev/c/w2ghO4ihje0/m/VeEql1LJBAAJ)
以清理其累积的缺陷之后,会议的范围扩大到包括缺陷分类以提高整体可靠性,
在问题影响 CI 信号之前预测问题。
<!--Subproject leads Elana Hashman and Sergey Kanzhelev are both recognized as
approvers on all node test code, supported by SIG Node and SIG Testing.-->
子项目负责人 Elana Hashman 和 Sergey Kanzhelev 都被认为是所有节点测试代码的审批人,由 SIG node 和 SIG Testing 支持。
<!--***September 2021.*** After significant deflaking progress with serial tests in
the 1.22 release spearheaded by Francesco Romani, the subproject set a goal
for getting the serial job fully passing by the 1.23 release date.-->
***2021 年 9 月*** 在 Francesco Romani 牵头的 1.22 版本中串行serial测试的去不稳定化工作取得重大进展后
该子项目设定了一个目标,即在 1.23 发布日期之前让串行测试任务完全通过。
<!--Mike Miranda becomes a Kubernetes org member for his contributions
to the subproject.-->
Mike Miranda 因其对子项目的贡献成为 Kubernetes 组织成员。
<!--***November 2021.*** Throughout 2021, SIG Node had no merge or
release-blocking test failures. Many flaky tests from past releases are removed
from release-blocking dashboards as they had been fully cleaned up.-->
***2021 年 11 月*** 在整个 2021 年SIG Node 没有出现阻塞合并或阻塞发布的测试失败。
过去版本中的许多不稳定测试都已从阻塞发布的仪表板中移除,因为它们已被完全清理。
<!--Danielle Lancashire was recognized as a reviewer for SIG Node's subgroup, test code.-->
Danielle Lancashire 被公认为 SIG Node 子组测试代码的评审员。
<!--The final node serial tests were completely fixed. The serial tests consist of
many disruptive and slow tests which tend to be flakey and are hard
to troubleshoot. By the 1.23 release freeze, the last serial tests were
fixed and the job was passing without flakes.-->
节点串行测试最终被完全修复。串行测试由许多具有干扰性且缓慢的测试组成,这些测试往往不稳定,很难排除故障。
到 1.23 版本冻结时,最后一批串行测试已被修复,该任务稳定通过,不再出现不稳定情况。
<!--[![Slack announcement that Serial tests are green](serial-tests-green.png)](https://kubernetes.slack.com/archives/C0BP8PW9G/p1638211041322900)-->
[![Slack 上宣布串行测试变为绿色的消息](serial-tests-green.png)](https://kubernetes.slack.com/archives/C0BP8PW9G/p1638211041322900)
<!--The 1.23 release got a special shout out for the tests quality and CI signal.
The SIG Node CI subproject was proud to have helped contribute to such
a high-quality release, in part due to our efforts in identifying
and fixing flakes in Node and beyond.-->
1.23 版本在测试质量和 CI 信号方面得到了特别的称赞。SIG Node CI 子项目很自豪能为这样一个高质量的版本做出贡献,
这在一定程度上归功于我们在识别和修复 Node 内外不稳定测试方面所做的努力。
<!--[![Slack shoutout that release was mostly green](release-mostly-green.png)](https://kubernetes.slack.com/archives/C92G08FGD/p1637175755023200)-->
[![Slack 大声宣布发布的版本大多是绿色的](release-mostly-green.png)](https://kubernetes.slack.com/archives/C92G08FGD/p1637175755023200)
<!--***December 2021.*** An estimated 90% of test jobs were passing at the time of
the 1.23 release (up from 42% in August 2020).-->
***2021 年 12 月*** 在 1.23 版本发布时,估计有 90% 的测试任务能够通过(高于 2020 年 8 月的 42%)。
<!--Dockershim code was removed from Kubernetes. This affected nearly half of SIG Node's
test jobs and the SIG Node CI subproject reacted quickly and retargeted all the
tests. SIG Node was the first SIG to complete test migrations off dockershim,
providing examples for other affected SIGs. The vast majority of new jobs passed
at the time of introduction without further fixes required. The [effort of
removing dockershim](https://k8s.io/dockershim)) from Kubernetes is ongoing.
There are still some wrinkles from the dockershim removal as we uncover more
dependencies on dockershim, but we plan to stabilize all test jobs
by the 1.24 release.-->
Dockershim 代码已从 Kubernetes 中移除。这影响了 SIG Node 近一半的测试作业,
SIG Node CI 子项目迅速做出反应,重新调整了所有测试的目标。
SIG Node 是第一个完成从 dockershim 迁移测试的 SIG为其他受影响的 SIG 提供了示例。
绝大多数新作业在引入时即已通过,无需进一步修复。
从 Kubernetes 中[移除 dockershim 的工作](https://k8s.io/dockershim)仍在进行中。
随着我们发现更多对 dockershim 的依赖dockershim 的移除仍有一些遗留问题,
但我们计划在 1.24 版本之前使所有测试作业稳定下来。
<!--## Statistics-->
## 统计数据
<!--Our regular meeting attendees and subproject participants for the past few months:-->
我们过去几个月的定期会议与会者和子项目参与者:
- Aditi Sharma
- Artyom Lukianov
- Arnaud Meukam
- Danielle Lancashire
- David Porter
- Davanum Srinivas
- Elana Hashman
- Francesco Romani
- Matthias Bertschy
- Mike Miranda
- Paco Xu
- Peter Hunt
- Ruiwen Zhao
- Ryan Phillips
- Sergey Kanzhelev
- Skyler Clark
- Swati Sehgal
- Wenjun Wu
<!--The [kubernetes/test-infra](https://github.com/kubernetes/test-infra/) source code repository contains test definitions. The number of
Node PRs just in that repository:
- 2020 PRs (since May): [183](https://github.com/kubernetes/test-infra/pulls?q=is%3Apr+is%3Aclosed+label%3Asig%2Fnode+created%3A2020-05-01..2020-12-31+-author%3Ak8s-infra-ci-robot+)
- 2021 PRs: [264](https://github.com/kubernetes/test-infra/pulls?q=is%3Apr+is%3Aclosed+label%3Asig%2Fnode+created%3A2021-01-01..2021-12-31+-author%3Ak8s-infra-ci-robot+)-->
[kubernetes/test-infra](https://github.com/kubernetes/test-infra/) 源代码仓库包含测试定义。仅在该仓库中的 Node PR 数量:
- 2020 年 PR自 5 月起):[183](https://github.com/kubernetes/test-infra/pulls?q=is%3Apr+is%3Aclosed+label%3Asig%2Fnode+created%3A2020-05-01..2020-12-31+-author%3Ak8s-infra-ci-robot+)
- 2021 年 PR[264](https://github.com/kubernetes/test-infra/pulls?q=is%3Apr+is%3Aclosed+label%3Asig%2Fnode+created%3A2021-01-01..2021-12-31+-author%3Ak8s-infra-ci-robot+)
<!--Triaged issues and PRs on CI board (including triaging away from the subgroup scope):
- 2020 (since May): [132](https://github.com/issues?q=project%3Akubernetes%2F43+created%3A2020-05-01..2020-12-31)
- 2021: [532](https://github.com/issues?q=project%3Akubernetes%2F43+created%3A2021-01-01..2021-12-31+)-->
CI 委员会上的问题和 PRs 分类(包括子组范围之外的分类):
- 2020 年(自 5 月起):[132](https://github.com/issues?q=project%3Akubernetes%2F43+created%3A2020-05-01..2020-12-31)
- 2021 年:[532](https://github.com/issues?q=project%3Akubernetes%2F43+created%3A2021-01-01..2021-12-31+)
<!--## Future-->
## 未来
<!--Just "keeping the lights on" is a bold task and we are committed to improving this experience.
We are working to simplify the triage and review processes for SIG Node.
Specifically, we are working on better test organization, naming,
and tracking:-->
仅仅“维持正常运转”就已是一项艰巨的任务,我们致力于改善这一体验。
我们正在努力简化 SIG Node 的问题分类triage和评审流程。
具体来说,我们正在改进测试的组织、命名和跟踪:
<!-- - https://github.com/kubernetes/enhancements/pull/3042
- https://github.com/kubernetes/test-infra/issues/24641
- [Kubernetes SIG-Node CI Testgrid Tracker](https://docs.google.com/spreadsheets/d/1IwONkeXSc2SG_EQMYGRSkfiSWNk8yWLpVhPm-LOTbGM/edit#gid=0)-->
- https://github.com/kubernetes/enhancements/pull/3042
- https://github.com/kubernetes/test-infra/issues/24641
- [Kubernetes SIG Node CI 测试网格跟踪器](https://docs.google.com/spreadsheets/d/1IwONkeXSc2SG_EQMYGRSkfiSWNk8yWLpVhPm-LOTbGM/edit#gid=0)
<!--We are also constantly making progress on improved tests debuggability and de-flaking.
If any of this interests you, we'd love for you to join us!
There's plenty to learn in debugging test failures, and it will help you gain
familiarity with the code that SIG Node maintains.-->
我们还在改进测试可调试性和消除测试抖动de-flaking方面不断取得进展。
如果你对其中任何内容感兴趣,我们非常欢迎你加入!
在调试测试失败的过程中可以学到很多东西,这也会帮助你熟悉 SIG Node 所维护的代码。
<!--You can always find information about the group on the
[SIG Node](https://github.com/kubernetes/community/tree/master/sig-node) page.
We give group updates at our maintainer track sessions, such as
[KubeCon + CloudNativeCon Europe 2021](https://kccnceu2021.sched.com/event/iE8E/kubernetes-sig-node-intro-and-deep-dive-elana-hashman-red-hat-sergey-kanzhelev-google) and
[KubeCon + CloudNative North America 2021](https://kccncna2021.sched.com/event/lV9D/kubenetes-sig-node-intro-and-deep-dive-elana-hashman-derek-carr-red-hat-sergey-kanzhelev-dawn-chen-google?iframe=no&w=100%&sidebar=yes&bg=no)。
Join us in our mission to keep the kubelet and other SIG Node components reliable and ensure smooth and uneventful releases!-->
你随时可以在 [SIG Node](https://github.com/kubernetes/community/tree/master/sig-node) 页面上找到该小组的相关信息。
我们会在维护者专题会议上分享小组动态,例如:
[KubeCon + CloudNativeCon Europe 2021](https://kccnceu2021.sched.com/event/iE8E/kubernetes-sig-node-intro-and-deep-dive-elana-hashman-red-hat-sergey-kanzhelev-google) 和
[KubeCon + CloudNative North America 2021](https://kccncna2021.sched.com/event/lV9D/kubenetes-sig-node-intro-and-deep-dive-elana-hashman-derek-carr-red-hat-sergey-kanzhelev-dawn-chen-google?iframe=no&w=100%&sidebar=yes&bg=no)。
欢迎加入我们,共同保持 kubelet 和其他 SIG Node 组件的可靠性,确保每个版本都顺利、平稳地发布!

Binary file not shown.

After

Width:  |  Height:  |  Size: 99 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 58 KiB

View File

@ -3,7 +3,6 @@ layout: blog
title: "更新:弃用 Dockershim 的常见问题"
date: 2022-02-17
slug: dockershim-faq
aliases: [ '/dockershim' ]
---
<!--
layout: blog

View File

@ -25,7 +25,7 @@ allows the clean up of resources like the following:
* [Objects without owner references](#owners-dependents)
* [Unused containers and container images](#containers-images)
* [Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete](/docs/concepts/storage/persistent-volumes/#delete)
* [Stale or expired CertificateSigningRequests (CSRs)](/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
* [Stale or expired CertificateSigningRequests (CSRs)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
* {{<glossary_tooltip text="Nodes" term_id="node">}} deleted in the following scenarios:
* On a cloud when the cluster uses a [cloud controller manager](/docs/concepts/architecture/cloud-controller/)
* On-premises when the cluster uses an addon similar to a cloud controller

View File

@ -97,22 +97,22 @@ The output is:
You can use `kubectl logs --previous` to retrieve logs from a previous instantiation of a container.
If your pod has multiple containers, specify which container's logs you want to access by
appending a container name to the command, with a `-c` flag, like so:
```console
kubectl logs counter -c count
```
See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
-->
你可以使用命令 `kubectl logs --previous` 检索之前容器实例的日志。
如果 Pod 中有多个容器,你应该为该命令附加容器名以访问对应容器的日志。
详见 [`kubectl logs` 文档](/docs/reference/generated/kubectl/kubectl-commands#logs)。
如果 Pod 有多个容器,你应该为该命令附加容器名以访问对应容器的日志,
使用 `-c` 标志来指定要访问的容器的日志,如下所示:
```console
kubectl logs counter -c count
```
详见 [kubectl logs 文档](/zh/docs/reference/generated/kubectl/kubectl-commands#logs)。
<!--
See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
-->
详见 [`kubectl logs` 文档](/docs/reference/generated/kubectl/kubectl-commands#logs)。
<!--
## Logging at the node level

View File

@ -254,7 +254,7 @@ Mi, Ki. For example, the following represent roughly the same value:
```
<!--
Take care about case for suffixes. If you request `400m` of memory, this is a request
Pay attention to the case of the suffixes. If you request `400m` of memory, this is a request
for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`)
or 400 megabytes (`400M`).
-->

File diff suppressed because it is too large

View File

@ -95,10 +95,10 @@ Flags and configuration files may not always be changeable in a hosted Kubernete
有鉴于此,通常应该在没有其他替代方案时才应考虑更改参数标志和配置文件。
<!--
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
-->
*内置的策略 API*,例如[ResourceQuota](/zh/docs/concepts/policy/resource-quotas/)、
[PodSecurityPolicies](/zh/docs/concepts/policy/pod-security-policy/)、
[PodSecurityPolicies](/zh/docs/concepts/security/pod-security-policy/)、
[NetworkPolicy](/zh/docs/concepts/services-networking/network-policies/)
和基于角色的访问控制([RBAC](/zh/docs/reference/access-authn-authz/rbac/)
等等都是内置的 Kubernetes API。

View File

@ -51,7 +51,7 @@ many core Kubernetes functions are now built using custom resources, making Kube
Custom resources can appear and disappear in a running cluster through dynamic registration,
and cluster admins can update custom resources independently of the cluster itself.
Once a custom resource is installed, users can create and access its objects using
[kubectl](/docs/reference/kubectl/overview/), just as they do for built-in resources like
[kubectl](/docs/reference/kubectl/), just as they do for built-in resources like
*Pods*.
-->
*定制资源Custom Resource* 是对 Kubernetes API 的扩展,不一定在默认的
@ -61,7 +61,7 @@ Kubernetes 安装中就可用。定制资源所代表的是对特定 Kubernetes
定制资源可以通过动态注册的方式在运行中的集群内或出现或消失,集群管理员可以独立于集群
更新定制资源。一旦某定制资源被安装,用户可以使用
[kubectl](/zh/docs/reference/kubectl/overview/)
[kubectl](/docs/reference/kubectl/)
来创建和访问其中的对象,就像他们为 *pods* 这种内置资源所做的一样。
<!--
@ -323,7 +323,7 @@ making them available to all of its clients.
-->
## API 服务器聚合 {#api-server-aggregation}
通常Kubernetes API 中的每个都需要处理 REST 请求和管理对象持久性存储的代码。
通常Kubernetes API 中的每个资源都需要处理 REST 请求和管理对象持久性存储的代码。
Kubernetes API 主服务器能够处理诸如 *pods**services* 这些内置资源,也可以
按通用的方式通过 [CRD](#customresourcedefinitions) 来处理定制资源。

View File

@ -34,7 +34,7 @@ Currently, the following types of volume sources can be projected:
* [`secret`](/docs/concepts/storage/volumes/#secret)
* [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi)
* [`configMap`](/docs/concepts/storage/volumes/#configmap)
* `serviceAccountToken`
* [`serviceAccountToken`](#serviceaccounttoken)
-->
## 介绍 {#introduction}
@ -45,7 +45,7 @@ Currently, the following types of volume sources can be projected:
* [`secret`](/zh/docs/concepts/storage/volumes/#secret)
* [`downwardAPI`](/zh/docs/concepts/storage/volumes/#downwardapi)
* [`configMap`](/zh/docs/concepts/storage/volumes/#configmap)
* `serviceAccountToken`
* [`serviceAccountToken`](#serviceaccounttoken)
<!--
All sources are required to be in the same namespace as the Pod. For more details,
@ -85,10 +85,12 @@ parameters are nearly the same with two exceptions:
你可以显式地为每个投射单独设置 `mode` 属性。
<!--
## serviceAccountToken projected volumes {#serviceaccounttoken}
When the `TokenRequestProjection` feature is enabled, you can inject the token
for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
into a Pod at a specified path. For example:
-->
## serviceAccountToken 投射卷 {#serviceaccounttoken}
`TokenRequestProjection` 特性被启用时,你可以将当前
[服务账号](/zh/docs/reference/access-authn-authz/authentication/#service-account-tokens)
的令牌注入到 Pod 中特定路径下。例如:
@ -97,14 +99,17 @@ into a Pod at a specified path. For example:
<!--
The example Pod has a projected volume containing the injected service account
token. This token can be used by a Pod's containers to access the Kubernetes API
server. The `audience` field contains the intended audience of the
token. Containers in this Pod can use that token to access the Kubernetes API
server, authenticating with the identity of [the pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/).
The `audience` field contains the intended audience of the
token. A recipient of the token must identify itself with an identifier specified
in the audience of the token, and otherwise should reject the token. This field
is optional and it defaults to the identifier of the API server.
-->
示例 Pod 中包含一个投射卷,其中包含注入的服务账号令牌。该令牌可以被 Pod
中的容器用来访问 Kubernetes API 服务器。`audience` 字段包含令牌所针对的受众。
示例 Pod 中包含一个投射卷,其中包含注入的服务账号令牌。
此 Pod 中的容器可以使用该令牌访问 Kubernetes API 服务器, 使用
[pod 的 ServiceAccount](/zh/docs/tasks/configure-pod-container/configure-service-account/)
进行身份验证。`audience` 字段包含令牌所针对的受众。
收到令牌的主体必须使用令牌受众中所指定的某个标识符来标识自身,否则应该拒绝该令牌。
此字段是可选的,默认值为 API 服务器的标识。
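上文描述的令牌投射可以用如下清单来示意这只是一个最小示意Pod 名称、挂载路径与 `audience` 取值均为假设,实际取值请以你的环境和教程正文为准):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-token-demo          # 假设的 Pod 名称
spec:
  containers:
  - name: app
    image: k8s.gcr.io/busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: token-vol
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: token-vol
    projected:
      sources:
      - serviceAccountToken:
          path: my-token            # 令牌在卷内的相对路径
          expirationSeconds: 7200   # 令牌有效期(秒)
          audience: my-audience     # 假设的受众标识
```

容器随后可以从 `/var/run/secrets/tokens/my-token` 读取令牌,并在访问 API 服务器时将其放入 `Authorization: Bearer <token>` 头部。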

View File

@ -25,23 +25,6 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
之间的分布例如区域Region、可用区Zone、节点和其他用户自定义拓扑域。
这样做有助于实现高可用并提升资源利用率。
<!--
{{< note >}}
In versions of Kubernetes before v1.18, you must enable the `EvenPodsSpread`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on
the [API server](/docs/concepts/overview/components/#kube-apiserver) and the
[scheduler](/docs/reference/generated/kube-scheduler/) in order to use Pod
topology spread constraints.
{{< /note >}}
-->
{{< note >}}
在 v1.18 之前的 Kubernetes 版本中,如果要使用 Pod 拓扑扩展约束,你必须在
[API 服务器](/zh/docs/concepts/overview/components/#kube-apiserver)
和[调度器](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)
中启用 `EvenPodsSpread` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
{{< /note >}}
<!-- body -->
<!--
@ -534,7 +517,7 @@ It is recommended that you disable this plugin in the scheduling profile when
using default constraints for `PodTopologySpread`.
-->
默认调度约束所生成的评分可能与
[`SelectorSpread` 插件](/zh/docs/reference/scheduling/config/#scheduling-plugins).
[`SelectorSpread` 插件](/zh/docs/reference/scheduling/config/#scheduling-plugins)
所生成的评分有冲突。
建议你在为 `PodTopologySpread` 设置默认约束是禁用调度方案中的该插件。
{{< /note >}}
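作为示意,下面是一个使用拓扑分布约束的最小 Pod 清单其中标签、Pod 名称与镜像均为假设值,仅用于说明字段结构:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo            # 假设的 Pod 名称
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                # 各拓扑域之间 Pod 数量的最大偏差
    topologyKey: topology.kubernetes.io/zone  # 按可用区Zone分布
    whenUnsatisfiable: DoNotSchedule          # 约束无法满足时不调度
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.5
```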

View File

@ -233,7 +233,7 @@ token,user,uid,"group1,group2,group3"
When using bearer token authentication from an http client, the API
server expects an `Authorization` header with a value of `Bearer
THETOKEN`. The bearer token must be a character sequence that can be
<token>`. The bearer token must be a character sequence that can be
put in an HTTP header value using no more than the encoding and
quoting facilities of HTTP. For example: if the bearer token is
`31ada4fd-adec-460c-809a-9e56ceb75269` then it would appear in an HTTP
@ -242,7 +242,7 @@ header as shown below.
#### 在请求中放入持有者令牌 {#putting-a-bearer-token-in-a-request}
当使用持有者令牌来对某 HTTP 客户端执行身份认证时API 服务器希望看到
一个名为 `Authorization` 的 HTTP 头,其值格式为 `Bearer THETOKEN`。
一个名为 `Authorization` 的 HTTP 头,其值格式为 `Bearer <token>`。
持有者令牌必须是一个可以放入 HTTP 头部值字段的字符序列,至多可使用
HTTP 的编码和引用机制。
例如:如果持有者令牌为 `31ada4fd-adec-460c-809a-9e56ceb75269`,则其

View File

@ -273,11 +273,9 @@ of provisioning.
<!--
Bootstrap tokens are a simpler and more easily managed method to authenticate kubelets, and do not require any additional flags when starting kube-apiserver.
Using bootstrap tokens is currently __beta__ as of Kubernetes version 1.12.
-->
启动引导令牌是一种对 kubelet 进行身份认证的方法,相对简单且容易管理,
且不需要在启动 kube-apiserver 时设置额外的标志。
启动引导令牌从 Kubernetes 1.12 开始是一种 __Beta__ 功能特性。
<!--
Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to:

View File

@ -613,10 +613,10 @@ The following commands are equivalent:
以下命令是等价的:
```shell
kubectl patch deployment patch-demo --patch-file patch-file.yaml"
kubectl patch deployment patch-demo --patch-file patch-file.yaml
kubectl patch deployment patch-demo --patch 'spec:\n template:\n spec:\n containers:\n - name: patch-demo-ctr-2\n image: redis'
kubectl patch deployment patch-demo --patch-file patch-file.json"
kubectl patch deployment patch-demo --patch-file patch-file.json
kubectl patch deployment patch-demo --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo-ctr-2","image": "redis"}]}}}}'
```

View File

@ -15,10 +15,10 @@ headless: true
-->
<!--
A plugin for Kubernetes command-line tool `kubectl`, which allows you to convert manifests between different API
A plugin for Kubernetes command-line tool `kubectl`, which allows you to convert manifests between different API
versions. This can be particularly helpful to migrate manifests to a non-deprecated api version with newer Kubernetes release.
For more info, visit [migrate to non deprecated apis](/docs/reference/using-api/deprecation-guide/#migrate-to-non-deprecated-apis)
-->
一个 Kubernetes 命令行工具 `kubectl` 的插件,允许你将清单在不同 API 版本间转换。
在将清单迁移到具有较新 Kubernetes 版本的未弃用 API 版本时,这个插件特别有用
这对于将清单迁移到新的 Kubernetes 发行版上未被废弃的 API 版本时尤其有帮助
更多信息请访问 [迁移到非弃用 API](/zh/docs/reference/using-api/deprecation-guide/#migrate-to-non-deprecated-apis)

View File

@ -26,24 +26,19 @@ source <(kubectl completion zsh)
```
<!--
If you have an alias for kubectl, you can extend shell completion to work with that alias:
If you have an alias for kubectl, kubectl autocompletion will automatically work with it.
-->
如果你为 kubectl 定义了别名,可以扩展脚本补全,以兼容该别名。
```zsh
echo 'alias k=kubectl' >>~/.zshrc
echo 'compdef __start_kubectl k' >>~/.zshrc
```
如果你为 kubectl 定义了别名kubectl 自动补全将自动使用它。
<!--
After reloading your shell, kubectl autocompletion should be working.
If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file:
If you get an error like `2: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file:
-->
重新加载 shell 后kubectl 自动补全功能将立即生效。
如果你收到 `complete:13: command not found: compdef` 这样的错误提示,那请将下面内容添加到 `~/.zshrc` 文件的开头:
如果你收到 `2: command not found: compdef` 这样的错误提示,那请将下面内容添加到 `~/.zshrc` 文件的开头:
```zsh
autoload -Uz compinit
compinit

View File

@ -2,3 +2,9 @@
title: 创建集群
weight: 10
---
<!--
Learn about Kubernetes {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}} and create a simple cluster using Minikube.
-->
了解 Kubernetes {{<glossary_tooltip text="集群" term_id="cluster" length="all" >}}并使用 Minikube
创建一个简单的集群。

View File

@ -72,7 +72,7 @@ weight: 10
<div class="row">
<div class="col-md-8">
<p><b> Master 负责管理整个集群。</b> Master 协调集群中的所有活动,例如调度应用、维护应用的所需状态、应用扩容以及推出新的更新。</p>
<p><b> Node 是一个虚拟机或者物理机,它在 Kubernetes 集群中充当工作机器的角色</b> 每个Node都有 Kubelet , 它管理 Node 而且是 Node 与 Master 通信的代理。 Node 还应该具有用于​​处理容器操作的工具,例如 Docker 或 rkt 。处理生产级流量的 Kubernetes 集群至少应具有三个 Node 。</p>
<p><b> Node 是一个虚拟机或者物理机,它在 Kubernetes 集群中充当工作机器的角色</b> 每个Node都有 Kubelet , 它管理 Node 而且是 Node 与 Master 通信的代理。 Node 还应该具有用于​​处理容器操作的工具,例如 Docker 或 rkt 。处理生产级流量的 Kubernetes 集群至少应具有三个 Node,因为如果一个 Node 出现故障其对应的 etcd 成员和控制平面实例都会丢失,并且冗余会受到影响。 你可以通过添加更多控制平面节点来降低这种风险</p>
</div>
<div class="col-md-4">

View File

@ -19,7 +19,7 @@ Pod Security admission (PSA) is enabled by default in v1.23 and later, as it has
[graduated to beta](/blog/2021/12/09/pod-security-admission-beta/).
Pod Security
is an admission controller that carries out checks against the Kubernetes
[Pod Security Standards](docs/concepts/security/pod-security-standards/) when new pods are
[Pod Security Standards](/docs/concepts/security/pod-security-standards/) when new pods are
created. This tutorial shows you how to enforce the `baseline` Pod Security
Standard at the cluster level which applies a standard configuration
to all namespaces in a cluster.
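在集群级别强制执行 `baseline` Pod 安全标准时,大体上依赖类似下面这样的 PodSecurity 准入配置(字段取值仅为示意,豁免的名字空间为假设,实际配置请参考教程正文):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"        # 对所有名字空间强制执行 baseline 标准
      enforce-version: "latest"
    exemptions:
      namespaces: [kube-system]  # 示意:豁免系统名字空间
```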
@ -406,14 +406,14 @@ following:
<!--
## Clean up
Run `kind delete cluster -name psa-with-cluster-pss` and
`kind delete cluster -name psa-wo-cluster-pss` to delete the clusters you
Run `kind delete cluster --name psa-with-cluster-pss` and
`kind delete cluster --name psa-wo-cluster-pss` to delete the clusters you
created.
-->
## 清理 {#clean-up}
运行 `kind delete cluster -name psa-with-cluster-pss` 和
`kind delete cluster -name psa-wo-cluster-pss` 来删除你创建的集群。
运行 `kind delete cluster --name psa-with-cluster-pss` 和
`kind delete cluster --name psa-wo-cluster-pss` 来删除你创建的集群。
## {{% heading "whatsnext" %}}

View File

@ -6,7 +6,7 @@ spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
command: [ "/bin/echo", "$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:

View File

@ -8,11 +8,10 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/e2e-az-name
- key: kubernetes.io/os
operator: In
values:
- e2e-az1
- e2e-az2
- linux
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:

View File

@ -6,20 +6,16 @@ metadata:
# Optional: Allow the default AppArmor profile, requires setting the default.
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
# Optional: Allow the default seccomp profile, requires setting the default.
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default,unconfined'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'unconfined'
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: false
# The moby default capability set, defined here:
# https://github.com/moby/moby/blob/0a5cec2833f82a6ad797d70acbf9cbbaf8956017/oci/caps/defaults.go#L6-L19
# The moby default capability set, minus NET_RAW
allowedCapabilities:
- 'CHOWN'
- 'DAC_OVERRIDE'
- 'FSETID'
- 'FOWNER'
- 'MKNOD'
- 'NET_RAW'
- 'SETGID'
- 'SETUID'
- 'SETFCAP'
@ -36,15 +32,16 @@ spec:
- 'projected'
- 'secret'
- 'downwardAPI'
# Assume that persistentVolumes set up by the cluster admin are safe to use.
# Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
- 'csi'
- 'persistentVolumeClaim'
- 'ephemeral'
# Allow all other non-hostpath volume types.
- 'awsElasticBlockStore'
- 'azureDisk'
- 'azureFile'
- 'cephFS'
- 'cinder'
- 'csi'
- 'fc'
- 'flexVolume'
- 'flocker'
@ -67,6 +64,9 @@ spec:
runAsUser:
rule: 'RunAsAny'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
# The PSP SELinux API cannot express the SELinux Pod Security Standards,
# so if using SELinux, you must choose a more restrictive default.
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'

View File

@ -5,44 +5,43 @@ metadata:
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
# Required to prevent escalations to root.
# 防止权限升级到 root
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
# 允许的核心卷类型.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
# Assume that persistentVolumes set up by the cluster admin are safe to use.
# 假设集群管理员设置的临时 CSI 驱动程序和持久卷可以安全使用
- 'csi'
- 'persistentVolumeClaim'
- 'ephemeral'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
# 要求容器在没有 root 权限的情况下运行
rule: 'MustRunAsNonRoot'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
# 此策略假定节点使用 AppArmor 而不是 SELinux
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
# 禁止添加 root 组
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
# 禁止添加 root 组
- min: 1
max: 65535
readOnlyRootFilesystem: false

View File

@ -1,4 +1,4 @@
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: zk-pdb

View File

@ -1,4 +1,4 @@
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: zk-pdb

View File

@ -159,7 +159,8 @@
/docs/concepts/workloads/pods/pod-overview/ /docs/concepts/workloads/pods/ 301
/docs/concepts/workloads/pods/init-containers/Kubernetes/ /docs/concepts/workloads/pods/init-containers/ 301
/docs/consumer-guideline/pod-security-coverage/ /docs/concepts/policy/pod-security-policy/ 301
/docs/concepts/policy/pod-security-policy/ /docs/concepts/security/pod-security-policy/ 301
/docs/consumer-guideline/pod-security-coverage/ /docs/concepts/security/pod-security-policy/ 301
/docs/contribute/create-pull-request/ /docs/home/contribute/create-pull-request/ 301
/docs/contribute/page-templates/ /docs/home/contribute/page-templates/ 301
@ -424,7 +425,7 @@
/docs/user-guide/persistent-volumes/index /docs/concepts/storage/persistent-volumes/ 301
/docs/user-guide/persistent-volumes/index.md /docs/concepts/storage/persistent-volumes/ 301
/docs/user-guide/persistent-volumes/walkthrough/ /docs/tasks/configure-pod-container/configure-persistent-volume-storage/ 301
/docs/user-guide/pod-security-policy/ /docs/concepts/policy/pod-security-policy/ 301
/docs/user-guide/pod-security-policy/ /docs/concepts/security/pod-security-policy/ 301
/docs/user-guide/pod-states/ /docs/concepts/workloads/pods/pod-lifecycle/ 301
/docs/user-guide/pod-templates/ /docs/concepts/workloads/pods/#pod-templates 301
/docs/user-guide/pods/ /docs/concepts/workloads/pods/pod/ 301