sync cloud-controller-manager limit-storage-consumption horizontal-pod-autoscale-walkthrough

parent 43a323fcd4
commit 70b55bafef

Changed files:
- content/zh-cn/docs/reference/glossary/cloud-controller-manager.md
- content/zh-cn/docs/tasks/administer-cluster/limit-storage-consumption.md
- content/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -29,14 +29,13 @@ tags:
 <!--
 A Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}} component
-that embeds cloud-specific control logic. The [cloud controller manager](/docs/concepts/architecture/cloud-controller/) lets you link your
+that embeds cloud-specific control logic. The cloud controller manager lets you link your
 cluster into your cloud provider's API, and separates out the components that interact
 with that cloud platform from components that only interact with your cluster.
 -->
 一个 Kubernetes {{<glossary_tooltip text="控制平面" term_id="control-plane" >}}组件,
 嵌入了特定于云平台的控制逻辑。
-[云控制器管理器(Cloud Controller Manager)](/zh-cn/docs/concepts/architecture/cloud-controller/)
-允许将你的集群连接到云提供商的 API 之上,
+云控制器管理器(Cloud Controller Manager)允许将你的集群连接到云提供商的 API 之上,
 并将与该云平台交互的组件同与你的集群交互的组件分离开来。
 
 <!--more-->
 
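Note: the linkage this definition describes is wired up by running the cloud-controller-manager binary with a provider-specific flag. A minimal container-spec sketch, not taken from this commit (the image tag and the provider name `mycloud` are hypothetical; `--cloud-provider` and `--leader-elect` are the component's standard flags):

```
containers:
- name: cloud-controller-manager
  image: example.com/ccm:v1.0          # hypothetical image
  command:
  - /usr/local/bin/cloud-controller-manager
  - --cloud-provider=mycloud           # hypothetical provider; selects the cloud-specific control logic
  - --leader-elect=true                # standard flag for replicated control-plane components
```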
@@ -95,9 +95,9 @@ AWS EBS volumes have a 1Gi minimum requirement.
 例如,AWS EBS volumes 的最低要求为 1Gi。
 
 <!--
-## StorageQuota to limit PVC count and cumulative storage capacity
+## ResourceQuota to limit PVC count and cumulative storage capacity
 -->
-## 使用 StorageQuota 限制 PVC 数目和累计存储容量
+## 使用 ResourceQuota 限制 PVC 数目和累计存储容量
 
 <!--
 Admins can limit the number of PVCs in a namespace as well as the cumulative capacity of those PVCs. New PVCs that exceed
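Note: the corrected heading names the API kind the page actually demonstrates. A quota of this shape caps both the PVC count and the total requested storage in a namespace (the object name and limit values below are illustrative):

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storagequota              # illustrative name
spec:
  hard:
    persistentvolumeclaims: "5"   # at most 5 PVCs in the namespace
    requests.storage: "5Gi"       # cap on their combined requested capacity
```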
@@ -132,9 +132,9 @@ service/php-apache created
 <!--
 ## Create the HorizontalPodAutoscaler {#create-horizontal-pod-autoscaler}
 
-Now that the server is running, create the autoscaler using `kubectl`. There is
+Now that the server is running, create the autoscaler using `kubectl`. The
 [`kubectl autoscale`](/docs/reference/generated/kubectl/kubectl-commands#autoscale) subcommand,
-part of `kubectl`, that helps you do this.
+part of `kubectl`, helps you do this.
 
 You will shortly run a command that creates a HorizontalPodAutoscaler that maintains
 between 1 and 10 replicas of the Pods controlled by the php-apache Deployment that
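Note: the object that `kubectl autoscale` creates in this step is equivalent to a manifest along these lines. A sketch only; the 1 to 10 replica range comes from the hunk text, while the 50% CPU target is an illustrative value not shown here:

```
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:                  # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1                   # range stated in the walkthrough text
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50     # illustrative target, not given in this hunk
```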
@@ -420,12 +420,12 @@ status:
 Notice that the `targetCPUUtilizationPercentage` field has been replaced with an array called `metrics`.
 The CPU utilization metric is a *resource metric*, since it is represented as a percentage of a resource
 specified on pod containers. Notice that you can specify other resource metrics besides CPU. By default,
-the only other supported resource metric is memory. These resources do not change names from cluster
+the only other supported resource metric is `memory`. These resources do not change names from cluster
 to cluster, and should always be available, as long as the `metrics.k8s.io` API is available.
 -->
 需要注意的是,`targetCPUUtilizationPercentage` 字段已经被名为 `metrics` 的数组所取代。
 CPU 利用率这个度量指标是一个 **resource metric**(资源度量指标),因为它表示容器上指定资源的百分比。
-除 CPU 外,你还可以指定其他资源度量指标。默认情况下,目前唯一支持的其他资源度量指标为内存。
+除 CPU 外,你还可以指定其他资源度量指标。默认情况下,目前唯一支持的其他资源度量指标为 `memory`。
 只要 `metrics.k8s.io` API 存在,这些资源度量指标就是可用的,并且他们不会在不同的 Kubernetes 集群中改变名称。
 
 <!--
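Note: "represented as a percentage of a resource specified on pod containers" corresponds to the `Utilization` target type. A `memory` entry in that percentage form might look like this sketch (the 60% figure is illustrative):

```
metrics:
- type: Resource
  resource:
    name: memory
    target:
      type: Utilization          # percentage of the containers' memory request
      averageUtilization: 60     # illustrative value
```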
@@ -437,6 +437,16 @@ setting the corresponding `target.averageValue` field instead of the `target.averageUtilization`.
 `Utilization` 替换成 `AverageValue`,同时设置 `target.averageValue`
 而非 `target.averageUtilization` 的值。
 
+```
+metrics:
+- type: Resource
+  resource:
+    name: memory
+    target:
+      type: AverageValue
+      averageValue: 500Mi
+```
+
 <!--
 There are two other types of metrics, both of which are considered *custom metrics*: pod metrics and
 object metrics. These metrics may have names which are cluster specific, and require a more
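Note: the text here introduces *custom metrics*; a `Pods`-type entry has the following shape. The metric name and target value are hypothetical, cluster-specific examples, not taken from this hunk:

```
metrics:
- type: Pods
  pods:
    metric:
      name: packets-per-second   # hypothetical custom metric name
    target:
      type: AverageValue         # pod metrics target an average value across Pods
      averageValue: 1k
```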