commit 1da4203c47
@@ -2,6 +2,7 @@
title: 为应用程序设置干扰预算(Disruption Budget)
content_type: task
weight: 110
min-kubernetes-server-version: v1.21
---

<!--
@@ -12,7 +13,7 @@ weight: 110
<!-- overview -->

{{< feature-state for_k8s_version="v1.5" state="beta" >}}
{{< feature-state for_k8s_version="v1.21" state="stable" >}}

<!--
This page shows how to limit the number of concurrent disruptions
@@ -24,6 +25,8 @@ nodes.

## {{% heading "prerequisites" %}}

{{< version-check >}}

<!--
* You are the owner of an application running on a Kubernetes cluster that requires
  high availability.
@@ -205,15 +208,15 @@ It can be either an absolute number or a percentage.
* `.spec.maxUnavailable` (Kubernetes 1.7 及更高的版本中可用)表示驱逐后允许不可用的
  Pod 的最大数量。其值可以是绝对值或是百分比。

<!--
For versions 1.8 and earlier: When creating a `PodDisruptionBudget`
object using the `kubectl` command line tool, the `minAvailable` field has a
default value of 1 if neither `minAvailable` nor `maxUnavailable` is specified.
-->
{{< note >}}
对于1.8及更早的版本:当你用 `kubectl` 命令行工具创建 `PodDisruptionBudget` 对象时,
如果既未指定 `minAvailable` 也未指定 `maxUnavailable`,
则 `minAvailable` 字段有一个默认值 1。
<!--
The behavior for an empty selector differs between the policy/v1beta1 and policy/v1 APIs for
PodDisruptionBudgets. For policy/v1beta1 an empty selector matches zero pods, while
for policy/v1 an empty selector matches every pod in the namespace.
-->
`policy/v1beta1` 和 `policy/v1` API 中 PodDisruptionBudget 的空选择算符的行为
略有不同。在 `policy/v1beta1` 中,空的选择算符不会匹配任何 Pods,而
`policy/v1` 中,空的选择算符会匹配名字空间中所有 Pods。
{{< /note >}}

<!--
@@ -296,9 +299,9 @@ Example PDB Using minAvailable:
{{< codenew file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" >}}

<!--
Example PDB Using maxUnavailable (Kubernetes 1.7 or higher):
Example PDB Using maxUnavailable:
-->
使用 maxUnavailable 的 PDB 示例(Kubernetes 1.7 或更高的版本):
使用 maxUnavailable 的 PDB 示例:

{{< codenew file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" >}}
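For orientation, the manifest referenced just above presumably resembles the following `policy/v1` sketch (an assumption based on the upstream zookeeper example, not text taken from this diff):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: zookeeper
```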
@@ -378,7 +381,7 @@ kubectl get poddisruptionbudgets zk-pdb -o yaml
```

```yaml
apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  annotations: {}
@@ -20,14 +20,14 @@ weight: 60

<!--
This task shows you how to delete a StatefulSet.
--->
-->
本任务展示如何删除 StatefulSet。

## {{% heading "prerequisites" %}}

<!--
* This task assumes you have an application running on your cluster represented by a StatefulSet.
--->
-->
* 本任务假设在你的集群上已经运行了由 StatefulSet 创建的应用。

<!-- steps -->
@@ -36,7 +36,7 @@ This task shows you how to delete a StatefulSet.

<!--
You can delete a StatefulSet in the same way you delete other resources in Kubernetes: use the `kubectl delete` command, and specify the StatefulSet either by file or by name.
--->
-->
你可以像删除 Kubernetes 中的其他资源一样删除 StatefulSet:使用 `kubectl delete` 命令,并按文件或者名字指定 StatefulSet。

```shell
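# Illustrative continuation, not part of this diff: either form works; the placeholders
# stand for your own manifest file or StatefulSet name.
kubectl delete -f <file.yaml>
kubectl delete statefulsets <statefulset-name>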
@@ -66,10 +66,11 @@ kubectl delete service <服务名称>
```

<!--
Deleting a StatefulSet through kubectl will scale it down to 0, thereby deleting all pods that are a part of it.
If you want to delete just the StatefulSet and not the pods, use `--cascade=false`.
When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=false`.
For example:
--->
通过 `kubectl` 删除 StatefulSet 会将其缩容为 0,因此删除属于它的所有 Pod。
当通过 `kubectl` 删除 StatefulSet 时,StatefulSet 会被缩容为 0。
属于该 StatefulSet 的所有 Pod 也被删除。
如果你只想删除 StatefulSet 而不删除 Pod,使用 `--cascade=false`。

```shell
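# Illustrative continuation, not part of this diff: delete the StatefulSet but orphan its Pods;
# substitute your own manifest file or StatefulSet name.
kubectl delete -f <file.yaml> --cascade=false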
@@ -114,7 +115,8 @@ To simply delete everything in a StatefulSet, including the associated pods, you
-->
### 完全删除 StatefulSet {#complete-deletion-of-a-statefulset}

要简单地删除 StatefulSet 中的所有内容,包括关联的 pods,你可能需要运行一系列类似于以下内容的命令:
要删除 StatefulSet 中的所有内容,包括关联的 pods,你可以运行
一系列如下所示的命令:

```shell
grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')
@@ -144,7 +146,7 @@ If you find that some pods in your StatefulSet are stuck in the 'Terminating' or

<!--
Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
--->
-->
进一步了解[强制删除 StatefulSet 的 Pods](/zh/docs/tasks/run-application/force-delete-stateful-set-pod/)。
@@ -1,5 +1,5 @@
---
title: 强制删除 StatefulSet 类型的 Pods
title: 强制删除 StatefulSet 中的 Pods
content_type: task
weight: 70
---
@@ -19,8 +19,8 @@ weight: 70
<!--
This page shows how to delete Pods which are part of a {{< glossary_tooltip text="stateful set" term_id="StatefulSet" >}}, and explains the considerations to keep in mind when doing so.
-->
本文介绍了如何删除 {{< glossary_tooltip text="StatefulSet" term_id="StatefulSet" >}}
管理的 Pods,并且解释了这样操作时需要记住的注意事项。
本文介绍如何删除 {{< glossary_tooltip text="StatefulSet" term_id="StatefulSet" >}}
管理的 Pods,并解释这样操作时需要记住的注意事项。

## {{% heading "prerequisites" %}}
@@ -76,11 +76,16 @@ Pod 不要使用。体面删除是安全的,并且会在 kubelet 从 API 服
[体面地结束 pod](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。

<!--
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a [timeout](/docs/admin/node/#node-condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
A Pod is not deleted automatically when a Node is unreachable.
The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a
[timeout](/docs/concepts/architecture/nodes/#condition).
Pods may also enter these states when the user attempts graceful deletion of a Pod
on an unreachable Node.
The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
-->
Kubernetes(1.5 版本或者更新版本)不会因为一个节点无法访问而删除 Pod。
当某个节点不可达时,不会引发自动删除 Pod。
在无法访问的节点上运行的 Pod 在
[超时](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-condition)
[超时](/zh/docs/concepts/architecture/nodes/#condition)
后会进入 'Terminating' 或者 'Unknown' 状态。
当用户尝试体面地删除无法访问的节点上的 Pod 时 Pod 也可能会进入这些状态。
从 API 服务器上删除处于这些状态 Pod 的仅有可行方法如下:
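The hunk ends just before the list of removal methods. For reference, the force-deletion command this page goes on to document is, roughly (substitute your own Pod name for the placeholder):

```shell
kubectl delete pods <pod> --grace-period=0 --force
```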
@@ -571,7 +571,7 @@ with *external metrics*.
<!--
Using external metrics requires knowledge of your monitoring system; the setup is
similar to that required when using custom metrics. External metrics allow you to autoscale your cluster
based on any metric available in your monitoring system. Just provide a `metric` block with a
based on any metric available in your monitoring system. Provide a `metric` block with a
`name` and `selector`, as above, and use the `External` metric type instead of `Object`.
If multiple time series are matched by the `metricSelector`,
the sum of their values is used by the HorizontalPodAutoscaler.
@@ -580,7 +580,7 @@ as when you use the `Object` type.
-->
使用外部度量指标时,需要了解你所使用的监控系统,相关的设置与使用自定义指标时类似。
外部度量指标使得你可以使用你的监控系统的任何指标来自动扩缩你的集群。
你只需要在 `metric` 块中提供 `name` 和 `selector`,同时将类型由 `Object` 改为 `External`。
你需要在 `metric` 块中提供 `name` 和 `selector`,同时将类型由 `Object` 改为 `External`。
如果 `metricSelector` 匹配到多个度量指标,HorizontalPodAutoscaler 将会把它们加和。
外部度量指标同时支持 `Value` 和 `AverageValue` 类型,这与 `Object` 类型的度量指标相同。
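As an illustration of the `External` metric type described above (a sketch, not taken from this diff; the metric name and label selector are assumed examples), an entry in a HorizontalPodAutoscaler `.spec.metrics` list looks roughly like:

```yaml
# Fragment of a HorizontalPodAutoscaler .spec (autoscaling/v2beta2); names are assumed examples
metrics:
- type: External
  external:
    metric:
      name: queue_messages_ready
      selector:
        matchLabels:
          queue: "worker_tasks"
    target:
      type: AverageValue
      averageValue: 30
```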
@@ -19,8 +19,10 @@ support, on some other application-provided metrics). Note that Horizontal
Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
-->
Pod 水平自动扩缩(Horizontal Pod Autoscaler)
可以基于 CPU 利用率自动扩缩 ReplicationController、Deployment、ReplicaSet 和 StatefulSet 中的 Pod 数量。
除了 CPU 利用率,也可以基于其他应用程序提供的[自定义度量指标](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)
可以基于 CPU 利用率自动扩缩 ReplicationController、Deployment、ReplicaSet 和
StatefulSet 中的 Pod 数量。
除了 CPU 利用率,也可以基于其他应用程序提供的
[自定义度量指标](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)
来执行自动扩缩。
Pod 自动扩缩不适用于无法扩缩的对象,比如 DaemonSet。
@@ -28,11 +30,11 @@ Pod 自动扩缩不适用于无法扩缩的对象,比如 DaemonSet。
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
The resource determines the behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment
to match the observed average CPU utilization to the target specified by user.
to match the observed metrics such as average CPU utilization, average memory utilization or any other custom metric to the target specified by the user.
-->
Pod 水平自动扩缩特性由 Kubernetes API 资源和控制器实现。资源决定了控制器的行为。
控制器会周期性的调整副本控制器或 Deployment 中的副本数量,以使得 Pod 的平均 CPU
利用率与用户所设定的目标值匹配。
控制器会周期性地调整副本控制器或 Deployment 中的副本数量,以使得类似 Pod 平均 CPU
利用率、平均内存利用率这类观测到的度量值与用户所设定的目标值匹配。

<!-- body -->
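For reference, the scaling rule this page spells out in its algorithm-details section is, roughly:

```
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
```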
@@ -57,7 +59,8 @@ obtains the metrics from either the resource metrics API (for per-pod resource m
or the custom metrics API (for all other metrics).
-->
每个周期内,控制器管理器根据每个 HorizontalPodAutoscaler 定义中指定的指标查询资源利用率。
控制器管理器可以从资源度量指标 API(按 Pod 统计的资源用量)和自定义度量指标 API(其他指标)获取度量值。
控制器管理器可以从资源度量指标 API(按 Pod 统计的资源用量)和自定义度量指标
API(其他指标)获取度量值。

<!--
* For per-pod resource metrics (like CPU), the controller fetches the metrics
@@ -288,7 +291,7 @@ the current value.
这表示,如果一个或多个指标给出的 `desiredReplicas` 值大于当前值,HPA 仍然能实现扩容。

<!--
Finally, just before HPA scales the target, the scale recommendation is recorded. The
Finally, right before HPA scales the target, the scale recommendation is recorded. The
controller considers all recommendations within a configurable window choosing the
highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
This means that scaledowns will occur gradually, smoothing out the impact of rapidly
@@ -296,7 +299,8 @@ fluctuating metric values.
-->
最后,在 HPA 控制器执行扩缩操作之前,会记录扩缩建议信息。
控制器会在操作时间窗口中考虑所有的建议信息,并从中选择得分最高的建议。
这个值可通过 `kube-controller-manager` 服务的启动参数 `--horizontal-pod-autoscaler-downscale-stabilization` 进行配置,
这个值可通过 `kube-controller-manager` 服务的启动参数
`--horizontal-pod-autoscaler-downscale-stabilization` 进行配置,
默认值为 5 分钟。
这个配置可以让系统更为平滑地进行缩容操作,从而消除短时间内指标值快速波动产生的影响。
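As an illustrative sketch only (how the controller manager is launched depends on your cluster setup, which this diff does not show), the window could be lengthened like this:

```shell
# Hypothetical flag setting: extend the downscale stabilization window to 10 minutes
kube-controller-manager --horizontal-pod-autoscaler-downscale-stabilization=10m0s
```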
@@ -349,7 +353,7 @@ Finally, we can delete an autoscaler using `kubectl delete hpa`.
最后,可以使用 `kubectl delete hpa` 命令删除对象。

<!--
In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
In addition, there is a special `kubectl autoscale` command for creating a HorizontalPodAutoscaler.
For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replication set *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
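The command quoted in the paragraph above, in runnable form (`foo` is the example's placeholder ReplicaSet name):

```shell
kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80
```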
@@ -412,14 +416,15 @@ upscale delay.
从 v1.12 开始,算法调整后,扩容操作时的延迟就不必设置了。

<!--
- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a
  duration that specifies how long the autoscaler has to wait before another
  downscale operation can be performed after the current one has completed.
- `--horizontal-pod-autoscaler-downscale-stabilization`: Specifies the duration of the
  downscale stabilization time window. Horizontal Pod Autoscaler remembers
  the historical recommended sizes and only acts on the largest size within this time window.
  The default value is 5 minutes (`5m0s`).
-->
- `--horizontal-pod-autoscaler-downscale-stabilization`:
  `kube-controller-manager` 的这个参数表示缩容冷却时间。
  即自从上次缩容执行结束后,多久可以再次执行缩容,默认时间是 5 分钟(`5m0s`)。
- `--horizontal-pod-autoscaler-downscale-stabilization`: 设置缩容冷却时间窗口长度。
  水平 Pod
  扩缩器能够记住过去建议的负载规模,并仅对此时间窗口内的最大规模执行操作。
  默认值是 5 分钟(`5m0s`)。

<!--
When tuning these parameter values, a cluster operator should be aware of the possible
@@ -669,7 +674,7 @@ and [the walkthrough for using external metrics](/docs/tasks/run-application/hor
## Support for configurable scaling behavior

Starting from
[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md)
[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md)
the `v2beta2` API allows scaling behavior to be configured through the HPA
`behavior` field. Behaviors are specified separately for scaling up and down in
`scaleUp` or `scaleDown` section under the `behavior` field. A stabilization
@@ -679,7 +684,7 @@ policies controls the rate of change of replicas while scaling.
-->
## 支持可配置的扩缩 {#support-for-configurable-scaling-behaviour}

从 [v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md)
从 [v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md)
开始,`v2beta2` API 允许通过 HPA 的 `behavior` 字段配置扩缩行为。
在 `behavior` 字段中的 `scaleUp` 和 `scaleDown` 分别指定扩容和缩容行为。
可以为两个方向指定一个稳定窗口,以防止扩缩目标中副本数量的波动。
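For orientation, the `behavior` block that the next hunk's prose discusses (two scale-down policies: at most 4 Pods or 10% of replicas per minute) presumably resembles the following sketch, which is not text from this diff:

```yaml
behavior:
  scaleDown:
    policies:
    - type: Pods
      value: 4
      periodSeconds: 60
    - type: Percent
      value: 10
      periodSeconds: 60
```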
@@ -711,7 +716,12 @@ behavior:
```

<!--
When the number of pods is more than 40 the second policy will be used for scaling down.
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
The first policy _(Pods)_ allows at most 4 replicas to be scaled down in one minute. The second policy
_(Percent)_ allows at most 10% of the current replicas to be scaled down in one minute.

Since by default the policy which allows the highest amount of change is selected, the second policy will
only be used when the number of pod replicas is more than 40. With 40 or less replicas, the first policy will be applied.
For instance if there are 80 replicas and the target has to be scaled down to 10 replicas
then during the first step 8 replicas will be reduced. In the next iteration when the number
of replicas is 72, 10% of the pods is 7.2 but the number is rounded up to 8. On each loop of
@@ -719,20 +729,16 @@ the autoscaler controller the number of pods to be change is re-calculated based
of current replicas. When the number of replicas falls below 40 the first policy _(Pods)_ is applied
and 4 replicas will be reduced at a time.
-->
当 Pod 数量超过 40 个时,第二个策略将用于缩容。
`periodSeconds` 表示在过去的多长时间内要求策略值为真。
第一个策略(Pods)允许在一分钟内最多缩容 4 个副本。第二个策略(Percent)
允许在一分钟内最多缩容当前副本个数的百分之十。

由于默认情况下会选择容许更大程度作出变更的策略,只有 Pod 副本数大于 40 时,
第二个策略才会被采用。如果副本数为 40 或者更少,则应用第一个策略。
例如,如果有 80 个副本,并且目标必须缩小到 10 个副本,那么在第一步中将减少 8 个副本。
在下一轮迭代中,当副本的数量为 72 时,10% 的 Pod 数为 7.2,但是这个数字向上取整为 8。
在 autoscaler 控制器的每个循环中,将根据当前副本的数量重新计算要更改的 Pod 数量。
当副本数量低于 40 时,应用第一个策略 _(Pods)_ ,一次减少 4 个副本。

<!--
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
The first policy allows at most 4 replicas to be scaled down in one minute. The second policy
allows at most 10% of the current replicas to be scaled down in one minute.
-->
`periodSeconds` 表示策略的时间长度必须保证有效。
第一个策略允许在一分钟内最多缩小 4 个副本。
第二个策略最多允许在一分钟内缩小当前副本的 10%。
当副本数量低于 40 时,应用第一个策略(Pods),一次减少 4 个副本。

<!--
The policy selection can be changed by specifying the `selectPolicy` field for a scaling
@@ -806,7 +812,7 @@ behavior:
```

<!--
For scaling down the stabilization window is _300_ seconds(or the value of the
For scaling down the stabilization window is _300_ seconds (or the value of the
`--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy
for scaling down which allows a 100% of the currently running replicas to be removed which
means the scaling target can be scaled down to the minimum allowed replicas.
@@ -814,7 +820,8 @@ For scaling up there is no stabilization window. When the metrics indicate that
scaled up the target is scaled up immediately. There are 2 policies where 4 pods or a 100% of the currently
running replicas will be added every 15 seconds till the HPA reaches its steady state.
-->
用于缩小稳定窗口的时间为 _300_ 秒(或是 `--horizontal-pod-autoscaler-downscale-stabilization` 参数设定值)。
用于缩小稳定窗口的时间为 _300_ 秒(或是 `--horizontal-pod-autoscaler-downscale-stabilization`
参数设定值)。
只有一种缩容的策略,允许 100% 删除当前运行的副本,这意味着扩缩目标可以缩小到允许的最小副本数。
对于扩容,没有稳定窗口。当指标显示目标应该扩容时,目标会立即扩容。
这里有两种策略,每 15 秒添加 4 个 Pod 或 100% 当前运行的副本数,直到 HPA 达到稳定状态。
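The defaults described above correspond roughly to a `behavior` block like this (a sketch of the documented defaults, not text from this diff):

```yaml
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15
  scaleUp:
    stabilizationWindowSeconds: 0
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15
    - type: Pods
      value: 4
      periodSeconds: 15
    selectPolicy: Max
```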
@@ -859,7 +866,8 @@ To ensure that no more than 5 Pods are removed per minute, you can add a second
policy with a fixed size of 5, and set `selectPolicy` to minimum. Setting `selectPolicy` to `Min` means
that the autoscaler chooses the policy that affects the smallest number of Pods:
-->
为了确保每分钟删除的 Pod 数不超过 5 个,可以添加第二个缩容策略,大小固定为 5,并将 `selectPolicy` 设置为最小值。
为了确保每分钟删除的 Pod 数不超过 5 个,可以添加第二个缩容策略,大小固定为 5,
并将 `selectPolicy` 设置为最小值。
将 `selectPolicy` 设置为 `Min` 意味着 autoscaler 会选择影响 Pod 数量最小的策略:

```yaml
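# Illustrative sketch (not from this diff) of the manifest fragment described above:
# two scale-down policies with selectPolicy set to Min.
behavior:
  scaleDown:
    policies:
    - type: Percent
      value: 10
      periodSeconds: 60
    - type: Pods
      value: 5
      periodSeconds: 60
    selectPolicy: Min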
@@ -290,7 +290,8 @@ Combined with the StatefulSet controller's
this ensures the primary MySQL server is Ready before creating replicas, so they can begin
replicating.
-->
通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的 `primary.cnf` 或 `replica.cnf`。
通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的
`primary.cnf` 或 `replica.cnf`。
由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成,
因此脚本仅将序数 `0` 指定为主节点,而将其他所有节点指定为副本节点。
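As an illustration of that ordinal-based selection (a sketch assuming the ConfigMap is mounted at /mnt/config-map and conf.d at /mnt/conf.d, as in the published example; not text from this diff), the init script does something like:

```shell
# Derive the ordinal index from the Pod hostname, e.g. mysql-0 -> 0
[[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# Ordinal 0 becomes the primary; every other ordinal becomes a replica
if [[ $ordinal -eq 0 ]]; then
  cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
  cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
```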
@@ -851,12 +852,12 @@ kubectl delete pvc data-mysql-4
* Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/).
* Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/).
* Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
* Look in the [Helm Charts repository](https://github.com/kubernetes/charts)
* Look in the [Helm Charts repository](https://artifacthub.io/)
  for other stateful application examples.
-->
* 进一步了解[为 StatefulSet 扩缩容](/zh/docs/tasks/run-application/scale-stateful-set/)。
* 进一步了解[调试 StatefulSet](/zh/docs/tasks/debug-application-cluster/debug-stateful-set/)。
* 进一步了解[删除 StatefulSet](/zh/docs/tasks/run-application/delete-stateful-set/)。
* 进一步了解[强制删除 StatefulSet Pods](/zh/docs/tasks/run-application/force-delete-stateful-set-pod/)。
* 在[Helm Charts 仓库](https://github.com/kubernetes/charts)中查找其他有状态的应用程序示例。
* 在 [Helm Charts 仓库](https://artifacthub.io/)中查找其他有状态的应用程序示例。
@@ -81,6 +81,11 @@ for a secure solution.
kubectl describe deployment mysql
```

<!--
The output is similar to this:
-->
输出类似于:

```
Name:                 mysql
Namespace:            default
@@ -126,6 +131,11 @@ for a secure solution.
kubectl get pods -l app=mysql
```

<!--
The output is similar to this:
-->
输出类似于:

```
NAME                 READY     STATUS    RESTARTS   AGE
mysql-63082529-2z3ki   1/1       Running   0          3m
@@ -138,6 +148,11 @@ for a secure solution.
kubectl describe pvc mysql-pv-claim
```

<!--
The output is similar to this:
-->
输出类似于:

```
Name:          mysql-pv-claim
Namespace:     default
@@ -1,6 +1,8 @@
---
title: 使用Deployment运行一个无状态应用
title: 使用 Deployment 运行一个无状态应用
min-kubernetes-server-version: v1.9
content_type: tutorial
weight: 10
---

<!-- overview -->
@@ -8,7 +10,7 @@ content_type: tutorial
<!--
This page shows how to run an application using a Kubernetes Deployment object.
-->
本文介绍通过Kubernetes Deployment对象如何去运行一个应用.
本文介绍如何通过 Kubernetes Deployment 对象去运行一个应用。

## {{% heading "objectives" %}}
@@ -211,7 +213,7 @@ Delete the deployment by name:
-->
## 删除 Deployment

通过名称删除 Deployment:
基于名称删除 Deployment:

```shell
kubectl delete deployment nginx-deployment