Merge pull request #46373 from my-git9/pt-22895
[zh-cn] sync api-extension/_index.md resource-quotas.md

commit 59c3eb3bad

@@ -2,3 +2,34 @@
title: 扩展 Kubernetes API
weight: 30
---

<!--
Custom resources are extensions of the Kubernetes API. Kubernetes provides two ways to add custom resources to your cluster:
-->
自定义资源是 Kubernetes API 的扩展。
Kubernetes 提供了两种将自定义资源添加到集群的方法:

<!--
- The [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
  (CRD) mechanism allows you to declaratively define a new custom API with an API group, kind, and
  schema that you specify.
  The Kubernetes control plane serves and handles the storage of your custom resource. CRDs allow you to
  create new types of resources for your cluster without writing and running a custom API server.
-->
- [CustomResourceDefinition](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/)(CRD)
  机制允许你通过指定自己的 API 组、种类和模式以声明方式定义新的自定义 API。
  Kubernetes 控制平面为自定义资源提供服务并为其提供存储。
  CRD 允许你为集群创建新的资源类别,而无需编写和运行自定义 API 服务器。
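上述 CRD 机制可以用一个极简的清单来示意(下面的 `example.com` 组名、`CronTab` 种类等均为假设名称,仅作说明):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # 名称必须与 spec 中的 <plural>.<group> 一致;此处为假设的示例名称
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```

应用该清单后,API 服务器即会提供 `/apis/example.com/v1/namespaces/*/crontabs` 端点,无需运行任何额外的服务器。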
<!--
- The [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
  sits behind the primary API server, which acts as a proxy.
  This arrangement is called API Aggregation (AA), which allows you to provide
  specialized implementations for your custom resources by writing and
  deploying your own API server.
  The main API server delegates requests to your API server for the custom APIs that you specify,
  making them available to all of its clients.
-->
- [聚合层(Aggregation Layer)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)位于主
  API 服务器后面,主 API 服务器以代理方式工作。
  这种安排称为 API 聚合(API Aggregation,AA),
  允许你通过编写和部署自己的 API 服务器来为自定义资源提供专门的实现。
  主 API 服务器将你指定的自定义 API 的请求委托给你的 API 服务器,使其可供所有客户端使用。

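注册聚合 API 时使用的是 APIService 对象,可以粗略示意如下(`service` 的名称、命名空间以及 API 组均为假设值,TLS 相关配置从略):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.example.com
spec:
  group: example.com
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  # 主 API 服务器会将 /apis/example.com/v1alpha1 的请求代理到此 Service
  # (实际使用时还需配置 caBundle 等 TLS 字段)
  service:
    name: my-extension-apiserver
    namespace: kube-system
```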
@@ -60,14 +60,14 @@ Resource quotas work like this:
See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
for an example of how to avoid this problem.
-->
- 不同的团队可以在不同的命名空间下工作,这可以通过
  [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。
- 集群管理员可以为每个命名空间创建一个或多个 ResourceQuota 对象。
- 当用户在命名空间下创建资源(如 Pod、Service 等)时,Kubernetes 的配额系统会跟踪集群的资源使用情况,
  以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。
- 如果资源创建或者更新请求违反了配额约束,那么该请求会报错(HTTP 403 FORBIDDEN),
  并在消息中给出有可能违反的约束。
- 如果命名空间下的计算资源(如 `cpu` 和 `memory`)的配额被启用,
  则用户必须为这些资源设定请求值(request)和约束值(limit),否则配额系统将拒绝 Pod 的创建。
  提示:可使用 `LimitRanger` 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。
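上述规则可以用一个最小化的 ResourceQuota 清单来示意(名称与数值均为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: myspace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```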
@@ -195,8 +195,8 @@ In addition to the resources mentioned above, in release 1.10, quota support for
-->
### 扩展资源的资源配额 {#resource-quota-for-extended-resources}

除上述资源外,在 Kubernetes 1.10 版本中,
还添加了对[扩展资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/#extended-resources)
的支持。

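对扩展资源而言,配额机制只支持带 `requests.` 前缀的配额项。例如,假设集群通过设备插件暴露了 `nvidia.com/gpu` 资源,可以这样限制其用量(数值为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
spec:
  hard:
    requests.nvidia.com/gpu: "4"
```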
<!--
@@ -248,7 +248,7 @@ In addition, you can limit consumption of storage resources based on associated
| `requests.storage` | 所有 PVC,存储资源的需求总量不能超过该值。 |
| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | 在所有与 `<storage-class-name>` 相关的持久卷申领中,存储请求的总和不能超过该值。 |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 `<storage-class-name>` 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 |

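上表中的配额项可以组合在同一个 ResourceQuota 中。例如(假设集群中存在名为 `gold` 的 StorageClass,数值仅为示意):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 100Gi
    persistentvolumeclaims: "10"
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
```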
<!--
For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
@@ -284,24 +284,23 @@ When using a CRI container runtime, container logs will count against the ephemeral storage quota.
This can result in the unexpected eviction of pods that have exhausted their storage quotas.
Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/) for details.
-->
如果所使用的是 CRI 容器运行时,容器日志会被计入临时存储配额,
这可能会导致存储配额耗尽的 Pod 被意外地驱逐出节点。
参考[日志架构](/zh-cn/docs/concepts/cluster-administration/logging/)了解详细信息。
{{< /note >}}

<!--
## Object Count Quota

You can set quota for *the total number of one particular resource kind* in the Kubernetes API,
using the following syntax:

* `count/<resource>.<group>` for resources from non-core groups
* `count/<resource>` for resources from the core group
-->
## 对象数量配额 {#object-count-quota}

你可以使用以下语法为 Kubernetes API 中“一种特定资源类型的总数”设置配额:

* `count/<resource>.<group>`:用于非核心(core)组的资源
* `count/<resource>`:用于核心组的资源

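按照上述语法,对象数量配额可以示意如下(数值为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    count/deployments.apps: "4"  # 非核心组资源
    count/secrets: "10"          # 核心组资源
```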
@@ -309,7 +308,7 @@ namespaced resource types using the following syntax:
<!--
Here is an example set of resources users may want to put under object count quota:
-->
这是用户可能希望利用对象计数配额来管理的一组资源示例:

* `count/persistentvolumeclaims`
* `count/services`
@@ -323,21 +322,31 @@ Here is an example set of resources users may want to put under object count quota:
* `count/cronjobs.batch`

<!--
The same syntax can be used for custom resources.
If you define a quota this way, it applies to Kubernetes' APIs that are part of the API server, and
to any custom resources backed by a CustomResourceDefinition. If you use [API aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) to
add additional, custom APIs that are not defined as CustomResourceDefinitions, the core Kubernetes
control plane does not enforce quota for the aggregated API. The extension API server is expected to
provide quota enforcement if that's appropriate for the custom API.
For example, to create a quota on a `widgets` custom resource in the `example.com` API group, use `count/widgets.example.com`.
-->
相同语法也可用于自定义资源。
如果你以这种方式定义配额,它将应用于属于 API 服务器一部分的 Kubernetes API,
以及 CustomResourceDefinition 支持的任何自定义资源。
如果你使用[聚合 API](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
添加未定义为 CustomResourceDefinitions 的其他自定义 API,则核心 Kubernetes 控制平面不会对聚合 API 实施配额管理。
如果合适,扩展 API 服务器需要为自定义 API 提供配额管理。
例如,要对 `example.com` API 组中的自定义资源 `widgets` 设置配额,请使用
`count/widgets.example.com`。

<!--
When using such a resource quota (nearly for all object kinds), an object is charged
against the quota if the object kind exists (is defined) in the control plane.
These types of quotas are useful to protect against exhaustion of storage resources. For example, you may
want to limit the number of Secrets in a server given their large size. Too many Secrets in a cluster can
actually prevent servers and controllers from starting. You can set a quota for Jobs to protect against
a poorly configured CronJob. CronJobs that create too many Jobs in a namespace can lead to a denial of service.
-->
当使用这样的资源配额(几乎涵盖所有对象类别)时,如果对象类别在控制平面中已存在(已定义),
则该对象会被计入配额。
这些类型的配额有助于防止存储资源耗尽。例如,用户可能想根据服务器的存储能力来对服务器中
Secret 的数量进行配额限制。
集群中存在过多的 Secret 实际上会导致服务器和控制器无法启动。
@@ -345,10 +354,10 @@ Secret 的数量进行配额限制。
Job 而导致集群拒绝服务。

<!--
There is another syntax only to set the same type of quota for certain resources.
The following types are supported:
-->
还有另一种语法仅用于为某些资源设置相同类型的配额。

支持以下类型:

@@ -383,10 +392,15 @@ created in a single namespace that are not terminal. You might want to set a `pods`
quota on a namespace to avoid the case where a user creates many small pods and
exhausts the cluster's supply of Pod IPs.
-->
例如,`pods` 配额统计某个命名空间中所创建的、非终止状态的 `pods` 个数并确保其不超过某上限值。
用户可能希望在某命名空间中设置 `pods` 配额,以避免有用户创建很多小的 Pod,
从而耗尽集群所能提供的 Pod IP 地址。

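例如,可以这样为命名空间设置 `pods` 配额(上限值为假设):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-count
spec:
  hard:
    pods: "10"
```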
<!--
您可以在[查看和设置配额](#viewing-and-setting-quotas)上找到更多示例。
-->
你可以在[查看和设置配额](#viewing-and-setting-quotas)节查看更多示例。

<!--
## Quota Scopes
@@ -421,8 +435,8 @@ Resources specified on the quota outside of the allowed set results in a validation error.
| `NotTerminating` | 匹配所有 `spec.activeDeadlineSeconds` 是 nil 的 Pod。 |
| `BestEffort` | 匹配所有 QoS 是 BestEffort 的 Pod。 |
| `NotBestEffort` | 匹配所有 QoS 不是 BestEffort 的 Pod。 |
| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pod。 |
| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间[(反)亲和性条件](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 |

<!--
The `BestEffort` scope restricts a quota to tracking the following resource:
@@ -540,8 +554,7 @@ is restricted to track only following resources:
This example creates a quota object and matches it with pods at specific priorities. The example
works as follows:
-->
本示例创建一个配额对象,并将其与具有特定优先级的 Pod 进行匹配,其工作方式如下:

<!--
- Pods in the cluster have one of the three priority classes, "low", "medium", "high".
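这样的配额对象通过 `scopeSelector` 与特定优先级的 Pod 匹配,可以示意如下(数值为假设,此处仅给出匹配 "high" 优先级的一段):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high
spec:
  hard:
    cpu: "1000"
    memory: 200Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
```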
@@ -786,7 +799,7 @@ as a limited resource by setting the kube-apiserver flag --admission-control-config-file
to the path of the following configuration file:
-->
如果集群运维人员希望默认禁止使用 `namespaces` 和 `namespaceSelector`,
而仅仅允许在特定命名空间中这样做,他们可以将 `CrossNamespacePodAffinity`
作为一个被约束的资源。方法是为 `kube-apiserver` 设置标志
`--admission-control-config-file`,使之指向如下的配置文件:

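这样的准入配置文件大致形如(示意,具体匹配的资源与作用域按需调整):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: "ResourceQuota"
    configuration:
      apiVersion: apiserver.config.k8s.io/v1
      kind: ResourceQuotaConfiguration
      limitedResources:
        - resource: pods
          matchScopes:
            - scopeName: CrossNamespacePodAffinity
              operator: Exists
```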
@@ -839,9 +852,9 @@ then it requires that every incoming container specifies an explicit limit for those resources.
Kubectl supports creating, updating, and viewing quotas:
-->
## 查看和设置配额 {#viewing-and-setting-quotas}

kubectl 支持创建、更新和查看配额:

```shell
kubectl create namespace myspace

@@ -976,7 +989,7 @@ automatically give each namespace the ability to consume more resources.
## 配额和集群容量 {#quota-and-cluster-capacity}

ResourceQuota 与集群资源总量是完全独立的。它们通过绝对的单位来配置。
所以,为集群添加节点时,资源配额**不会**自动赋予每个命名空间消耗更多资源的能力。

<!--
Sometimes more complex policies may be desired, such as: