[zh] update concept policy

pull/30015/head
Mayo 2021-10-10 18:25:27 +08:00
parent 383dbc251c
commit 93e202cd5f
3 changed files with 137 additions and 39 deletions


@@ -17,7 +17,8 @@ weight: 40
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
<!--
Kubernetes allows you to limit the number of process IDs (PIDs) that a
{{< glossary_tooltip term_id="Pod" text="Pod" >}} can use.
You can also reserve a number of allocatable PIDs for each {{< glossary_tooltip term_id="node" text="node" >}}
for use by the operating system and daemons (rather than by Pods).
-->
@@ -155,8 +156,8 @@ gate](/docs/reference/command-line-tools-reference/feature-gates/)
Kubernetes allows you to limit the number of processes running in a Pod. You
specify this limit at the node level, rather than configuring it as a resource
limit for a particular Pod. Each Node can have a different PID limit.
To configure the limit, you can specify the command line parameter `--pod-max-pids`
to the kubelet, or set `PodPidsLimit` in the kubelet
[configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
-->
## Pod 级别 PID 限制 {#pod-pid-limits}
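下面是一个通过 kubelet [配置文件](/docs/tasks/administer-cluster/kubelet-config-file/)设置此限制的最小示例(仅作示意,取值请按需调整;配置文件中对应的字段名为 `podPidsLimit`):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 每个 Pod 最多可使用的 PID 数量(示例取值)
podPidsLimit: 1024
```

修改配置文件后需要重启 kubelet 才能生效。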
@@ -183,9 +184,12 @@ the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
## PID based eviction
You can configure the kubelet to start terminating a Pod when it is misbehaving and consuming an abnormal amount of resources.
This feature is called eviction. You can
[Configure Out of Resource Handling](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
for various eviction signals.
Use the `pid.available` eviction signal to configure the threshold for the number of PIDs used by a Pod.
You can set soft and hard eviction policies.
However, even with the hard eviction policy, if the number of PIDs is growing very fast,
the node can still reach an unstable state by hitting the node PIDs limit.
The eviction signal value is calculated periodically and does NOT enforce the limit.
-->
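作为示意,下面的 kubelet 配置片段按上述说明同时为 `pid.available` 设置了软性和硬性驱逐阈值(阈值取值仅作演示):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionSoft:
  pid.available: "10%"
evictionSoftGracePeriod:
  # 软性阈值必须配合宽限期使用
  pid.available: "2m"
evictionHard:
  pid.available: "5%"
```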
@@ -219,9 +223,10 @@ Pod 行为不正常而没有 PID 可用。
<!--
- Refer to the [PID Limiting enhancement document](https://github.com/kubernetes/enhancements/blob/097b4d8276bc9564e56adf72505d43ce9bc5e9e8/keps/sig-node/20190129-pid-limiting.md) for more information.
- For historical context, read
[Process ID Limiting for Stability Improvements in Kubernetes 1.14](/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/).
- Read [Managing Resources for Containers](/docs/concepts/configuration/manage-resources-containers/).
- Learn how to [Configure Out of Resource Handling](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
-->
- 参阅 [PID 约束改进文档](https://github.com/kubernetes/enhancements/blob/097b4d8276bc9564e56adf72505d43ce9bc5e9e8/keps/sig-node/20190129-pid-limiting.md)
以了解更多信息。
@@ -229,5 +234,5 @@ Pod 行为不正常而没有 PID 可用。
[Kubernetes 1.14 中限制进程 ID 以提升稳定性](/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/)
的博文。
- 请阅读[为容器管理资源](/zh/docs/concepts/configuration/manage-resources-containers/)。
- 学习如何[配置资源不足情况的处理](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)。


@@ -15,9 +15,11 @@ weight: 30
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
<!--
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. For more information on the deprecation,
see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
-->
PodSecurityPolicy 在 Kubernetes v1.21 版本中被弃用,将在 v1.25 中删除。
关于弃用的更多信息,请查阅 [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。
<!--
Pod Security Policies enable fine-grained authorization of pod creation and
@@ -92,17 +94,16 @@ _Pod 安全策略_ 由设置和策略组成,它们能够控制 Pod 访问的
<!--
## Enabling Pod Security Policies
Pod security policy control is implemented as an optional [admission
controller](/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy).
PodSecurityPolicies are enforced by [enabling the admission
controller](/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in),
but doing so without authorizing any policies **will prevent any pods from being created** in the
cluster.
-->
## 启用 Pod 安全策略
Pod 安全策略实现为一种可选的
[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy)。
[启用了准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in)
即可强制实施 Pod 安全策略,不过如果没有授权认可策略之前即启用
@@ -206,7 +207,11 @@
  name: <role name>
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize all service accounts in a namespace (recommended):
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:<authorized namespace>
# Authorize specific service accounts (not recommended):
- kind: ServiceAccount
  name: <authorized service account name>
  namespace: <authorized pod namespace>
@@ -222,20 +227,24 @@ subjects:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <binding name>
roleRef:
  kind: ClusterRole
  name: <role name>
  apiGroup: rbac.authorization.k8s.io
subjects:
# 授权命名空间下的所有服务账号(推荐):
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:<authorized namespace>
# 授权特定的服务账号(不建议这样操作):
- kind: ServiceAccount
  name: <authorized service account name>
  namespace: <authorized pod namespace>
# 授权特定的用户(不建议这样操作):
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: <authorized user name>
```
<!--
@@ -279,6 +288,77 @@ For a complete example of authorizing a PodSecurityPolicy, see
参阅[下文](#example),查看对 PodSecurityPolicy 进行授权的完整示例。
<!--
### Recommended Practice
PodSecurityPolicy is being replaced by a new, simplified `PodSecurity` {{< glossary_tooltip
text="admission controller" term_id="admission-controller" >}}. For more details on this change, see
[PodSecurityPolicy Deprecation: Past, Present, and
Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/). Follow these
guidelines to simplify migration from PodSecurityPolicy to the new admission controller:
-->
### 推荐实践 {#recommended-practice}
PodSecurityPolicy 正在被一个新的、简化的 `PodSecurity` {{< glossary_tooltip
text="准入控制器" term_id="admission-controller" >}}替代。
有关此变更的更多详细信息,请参阅 [PodSecurityPolicy Deprecation: Past, Present, and
Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。
参照下述指导,简化从 PodSecurityPolicy 迁移到新的准入控制器步骤:
<!--
1. Limit your PodSecurityPolicies to the policies defined by the [Pod Security Standards](/docs/concepts/security/pod-security-standards):
- {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}}
- {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}}
- {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}}
2. Only bind PSPs to entire namespaces, by using the `system:serviceaccounts:<namespace>` group
(where `<namespace>` is the target namespace). For example:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows all pods in the "development" namespace to use the baseline PSP.
kind: ClusterRoleBinding
metadata:
  name: psp-baseline-namespaces
roleRef:
  kind: ClusterRole
  name: psp-baseline
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts:development
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:canary
  apiGroup: rbac.authorization.k8s.io
```
-->
1. 将 PodSecurityPolicies 限制为 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)所定义的策略:
- {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}}
- {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}}
- {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}}
2. 通过配置 `system:serviceaccounts:<namespace>` 组(`<namespace>` 是目标命名空间),仅将 PSP 绑定到整个命名空间。示例:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows all pods in the "development" namespace to use the baseline PSP.
kind: ClusterRoleBinding
metadata:
  name: psp-baseline-namespaces
roleRef:
  kind: ClusterRole
  name: psp-baseline
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts:development
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:canary
  apiGroup: rbac.authorization.k8s.io
```
<!--
### Troubleshooting
- The [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) must be run
@@ -1230,10 +1310,17 @@ By default, all safe sysctls are allowed.
## {{% heading "whatsnext" %}}
<!--
- See [PodSecurityPolicy Deprecation: Past, Present, and
Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) to learn about
the future of pod security policy.
- See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.
- Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the API details.
-->
- 参阅 [PodSecurityPolicy Deprecation: Past, Present, and
Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),了解 Pod 安全策略的未来。
- 参阅[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)
了解策略建议。
- 阅读 [Pod 安全策略参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy)了解 API 细节。


@@ -102,8 +102,9 @@ Neither contention nor changes to quota will affect already created resources.
<!--
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
`--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
-->
## 启用资源配额
@@ -122,7 +123,9 @@ ResourceQuota in that namespace.
<!--
## Compute Resource Quota
You can limit the total sum of
[compute resources](/docs/concepts/configuration/manage-resources-containers/)
that can be requested in a given namespace.
-->
## 计算资源配额
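例如,下面的 ResourceQuota 清单(名称与取值仅作演示)限制了某命名空间中所有 Pod 的 CPU 与内存的请求及约束总量:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: myspace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```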
@@ -249,7 +252,9 @@ In release 1.8, quota support for local ephemeral storage is added as an alpha f
{{< note >}}
<!--
When using a CRI container runtime, container logs will count against the ephemeral storage quota.
This can result in the unexpected eviction of pods that have exhausted their storage quotas.
Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/) for details.
-->
如果所使用的是 CRI 容器运行时,容器日志会被计入临时存储配额。
这可能会导致存储配额耗尽的 Pods 被意外地驱逐出节点。
@@ -382,7 +387,7 @@ Resources specified on the quota outside of the allowed set results in a validat
| `NotTerminating` | Match pods where `.spec.activeDeadlineSeconds` is nil. |
| `BestEffort` | Match pods that have best effort quality of service. |
| `NotBestEffort` | Match pods that do not have best effort quality of service. |
| `PriorityClass` | Match pods that reference the specified [priority class](/docs/concepts/scheduling-eviction/pod-priority-preemption). |
| `CrossNamespacePodAffinity` | Match pods that have cross-namespace pod [(anti)affinity terms](/docs/concepts/scheduling-eviction/assign-pod-node). |
-->
| 作用域 | 描述 |
@@ -391,7 +396,7 @@ Resources specified on the quota outside of the allowed set results in a validat
| `NotTerminating` | 匹配所有 `spec.activeDeadlineSeconds` 是 nil 的 Pod。 |
| `BestEffort` | 匹配所有 Qos 是 BestEffort 的 Pod。 |
| `NotBestEffort` | 匹配所有 Qos 不是 BestEffort 的 Pod。 |
| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pods。 |
| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间 [(反)亲和性条件](/zh/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 |
<!--
@@ -476,11 +481,11 @@ specified.
{{< feature-state for_k8s_version="v1.17" state="stable" >}}
<!--
Pods can be created at a specific [priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority).
You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
field in the quota spec.
-->
Pod 可以创建为特定的[优先级](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)。
通过使用配额规约中的 `scopeSelector` 字段,用户可以根据 Pod 的优先级控制其系统资源消耗。
<!--
@@ -489,7 +494,8 @@ A quota is matched and consumed only if `scopeSelector` in the quota spec select
仅当配额规范中的 `scopeSelector` 字段选择到某 Pod 时,配额机制才会匹配和计量 Pod 的资源消耗。
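作为示意,下面的配额对象(名称与取值仅作演示)通过 `scopeSelector` 仅匹配优先级类为 `high` 的 Pod:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high
spec:
  hard:
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
```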
<!--
When quota is scoped for priority class using `scopeSelector` field, quota object
is restricted to track only following resources:
-->
如果配额对象通过 `scopeSelector` 字段设置其作用域为优先级类,则配额对象只能
跟踪以下资源:
@@ -702,7 +708,7 @@ pods 0 10
-->
### 跨名字空间的 Pod 亲和性配额 {#cross-namespace-pod-affinity-quota}
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
<!--
Operators can use `CrossNamespacePodAffinity` quota scope to limit which namespaces are allowed to
@@ -781,11 +787,11 @@ if the namespace where they are created have a resource quota object with
`namespaceSelector` 的新 Pod。
<!--
This feature is beta and enabled by default. You can disable it using the
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`PodAffinityNamespaceSelector` in both kube-apiserver and kube-scheduler.
-->
此功能特性处于 Beta 阶段,默认被启用。你可以通过为 kube-apiserver 和
kube-scheduler 设置
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
`PodAffinityNamespaceSelector` 来禁用此特性。
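作为示意,下面的配额对象(命名空间名称仅作演示)将该命名空间中可以使用跨名字空间亲和性条件的 Pod 数量限制为 0:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: disable-cross-namespace-affinity
  namespace: foo-ns
spec:
  hard:
    pods: "0"
  scopeSelector:
    matchExpressions:
    - scopeName: CrossNamespacePodAffinity
      operator: Exists
```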
@@ -868,7 +874,7 @@ kubectl create -f ./object-counts.yaml --namespace=myspace
kubectl get quota --namespace=myspace
```
```none
NAME AGE
compute-resources 30s
object-counts 32s
@@ -878,7 +884,7 @@ object-counts 32s
kubectl describe quota compute-resources --namespace=myspace
```
```none
Name: compute-resources
Namespace: myspace
Resource Used Hard
@@ -894,7 +900,7 @@ requests.nvidia.com/gpu 0 4
kubectl describe quota object-counts --namespace=myspace
```
```none
Name: object-counts
Namespace: myspace
Resource Used Hard
@@ -1034,7 +1040,7 @@ Then, create a resource quota object in the `kube-system` namespace:
kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```
```none
resourcequota/pods-cluster-services created
```