Merge remote-tracking branch 'upstream/master' into dev-1.20 to keep in sync - 11-25-2020

pull/25236/head
reylejano-rxm 2020-11-25 07:03:22 -08:00
commit d8ae37587e
139 changed files with 6562 additions and 5229 deletions

View File

@ -13,52 +13,94 @@ This repository contains the assets required to build the [Kubernetes website an
We are very glad that you want to contribute!
<!--
## Running the website locally using Hugo
# Using this repository
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
-->
## Running the website locally using Hugo
## Using this repository
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions.
Make sure to install the Hugo extended version specified by the `HUGO_VERSION`
environment variable in the [`netlify.toml`](netlify.toml#L10) file.
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
<!--
Before building the site, clone the Kubernetes website repository:
-->
Before building the site, clone the Kubernetes website repository:
<!--
## Prerequisites
To use this repository, you need the following installed locally:
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/).
-->
## Prerequisites
To use this repository, you need the following installed locally:
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/).
<!--
Before you start, install the dependencies. Clone the repository and navigate to the directory:
-->
Before you start, install the dependencies. Clone the repository and navigate to the directory:

```bash
git clone https://github.com/kubernetes/website.git
cd website
git submodule update --init --recursive
```
<!--
**Note:** The Kubernetes website deploys the [Docsy Hugo theme](https://github.com/google/docsy#readme).
If you have not updated your website repository, the `website/themes/docsy` directory is empty.
The site cannot build without a local copy of the theme.
Update the website theme:
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
-->
**Note:** The Kubernetes website deploys the [Docsy Hugo theme](https://github.com/google/docsy#readme).
If you have not updated your local website repository, the `website/themes/docsy`
directory is empty.
The site cannot build without a local copy of the theme.
Update the website theme with the following command:
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
```bash
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
<!--
## Running the website using a container
To build the site in a container, run the following to build the container image and run it:
-->
## Running the website using a container
To build the site in a container, run the following to build the container image and run it:
```
make container-image
make container-serve
```
<!--
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
-->
Open up your browser to http://localhost:1313 to view the website.
As you make changes to the source files, Hugo updates the website and forces a browser refresh.
<!--
## Running the website locally using Hugo
Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
To build and test the site locally, run:
-->
## Running the website locally using Hugo
Make sure to install the Hugo extended version specified by the `HUGO_VERSION`
environment variable in the [`netlify.toml`](netlify.toml#L10) file.
To build and test the site locally, run:
```bash
hugo server --buildFuture
# install dependencies
npm ci
make serve
```
<!--
@ -68,6 +110,63 @@ This will start the local Hugo server on port 1313. Open up your browser to http
Open up your browser to http://localhost:1313 to view the website.
As you make changes to the source files, Hugo updates the website and forces a browser refresh.
<!--
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
Hugo is shipped in two sets of binaries for technical reasons. The current website runs based on the **Hugo Extended** version only. In the [release page](https://github.com/gohugoio/hugo/releases) look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
-->
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
Hugo is shipped in two sets of binaries for technical reasons.
The current website runs based on the **Hugo Extended** version only.
In the [release page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and check for the word `extended`.
<!--
### Troubleshooting macOS for too many open files
If you run `make serve` on macOS and receive the following error:
-->
### Troubleshooting macOS for too many open files
If you run `make serve` on macOS and receive the following error:
```
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
Try checking the current limit on open files:
`launchctl limit maxfiles`
Then run the following commands (from https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
```
#!/bin/sh
# These are the original gist links, linking to my gists now.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
This works for macOS Catalina as well as Mojave.
<!--
## Get involved with SIG Docs
@ -78,7 +177,7 @@ You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
-->
## Get involved with SIG Docs
# Get involved with SIG Docs
Learn more about the SIG Docs Kubernetes community and its meetings on the
[community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
@ -95,7 +194,7 @@ You can click the **Fork** button in the upper-right area of the screen to creat
Once your pull request is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.**
-->
## Contributing to the docs
# Contributing to the docs
You can click the **Fork** button in the upper-right area of the screen to create
a copy of this repository in your GitHub account. This copy is called a *fork*.
@ -133,7 +232,7 @@ For more information about contributing to the Kubernetes documentation, see:
* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)
## Chinese localization
# Chinese localization
You can reach the maintainers of the Chinese localization via:
@ -146,15 +245,15 @@ For more information about contributing to the Kubernetes documentation, see:
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
-->
### Code of conduct
# Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
<!--
## Thank you!
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!
-->
## Thank you!
# Thank you!
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!

View File

@ -88,6 +88,20 @@ footer {
}
}
main {
.button {
display: inline-block;
border-radius: 6px;
padding: 6px 20px;
line-height: 1.3rem;
color: white;
background-color: $blue;
text-decoration: none;
font-size: 1rem;
border: 0px;
}
}
// HEADER
#hamburger {

View File

@ -157,7 +157,7 @@ github_repo = "https://github.com/kubernetes/website"
# param for displaying an announcement block on every page.
# See /i18n/en.toml for message text and title.
announcement = true
announcement_bg = "#3d4cb7" # choose a dark color text is white
announcement_bg = "#000000" # choose a dark color; text is white
#Searching
k8s_search = true

View File

@ -12,7 +12,7 @@ Kubernetes is well-known for running scalable workloads. It scales your workload
## Guaranteed scheduling with controlled cost
[Kubernetes Cluster Autoscaler](https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling) is an excellent tool in the ecosystem which adds more nodes to your cluster when your applications need them. However, cluster autoscaler has some limitations and may not work for all users:
[Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/) is an excellent tool in the ecosystem which adds more nodes to your cluster when your applications need them. However, cluster autoscaler has some limitations and may not work for all users:
- It does not work in physical clusters.
- Adding more nodes to the cluster costs more.

View File

@ -92,9 +92,8 @@ Controllers that interact with external state find their desired state from
the API server, then communicate directly with an external system to bring
the current state closer in line.
(There actually is a controller that horizontally scales the
nodes in your cluster. See
[Cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling)).
(There actually is a [controller](https://github.com/kubernetes/autoscaler/)
that horizontally scales the nodes in your cluster.)
The important point here is that the controller makes some change to bring about
your desired state, and then reports current state back to your cluster's API server.
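The observe-compare-act loop described above can be sketched in a few lines of Go. This is a minimal illustration of the reconcile pattern only; the `State`, `scaleNodes`, and `reconcile` names are hypothetical and do not correspond to the actual controller-runtime API:

```go
package main

import (
	"fmt"
	"time"
)

// State models what the API server knows: the desired and current node counts.
// These types are illustrative, not real Kubernetes API types.
type State struct {
	DesiredNodes int
	CurrentNodes int
}

// scaleNodes stands in for a call to an external system (for example a
// cloud provider API) that adds or removes a node.
func scaleNodes(s *State) {
	if s.CurrentNodes < s.DesiredNodes {
		s.CurrentNodes++ // bring current state closer to desired state
	} else if s.CurrentNodes > s.DesiredNodes {
		s.CurrentNodes--
	}
}

// reconcile runs one pass of the control loop: make some change toward the
// desired state, then report the observed state back.
func reconcile(s *State) {
	scaleNodes(s)
	fmt.Printf("reported state: %d/%d nodes\n", s.CurrentNodes, s.DesiredNodes)
}

func main() {
	s := &State{DesiredNodes: 3, CurrentNodes: 1}
	for s.CurrentNodes != s.DesiredNodes {
		reconcile(s)
		time.Sleep(10 * time.Millisecond) // real controllers re-queue or watch instead
	}
}
```

A real controller never "finishes": it keeps watching and re-reconciling, because the desired state or the world can change at any time.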

View File

@ -358,5 +358,4 @@ For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods=
* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
section of the architecture design document.
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
* Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).

View File

@ -39,8 +39,6 @@ Before choosing a guide, here are some considerations:
## Managing a cluster
* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster's master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.
* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.

View File

@ -440,7 +440,7 @@ poorly-behaved workloads that may be harming system health.
{{< /note >}}
* `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector
hoding the computed concurrency limit (based on the API server's
holding the computed concurrency limit (based on the API server's
total concurrency limit and PriorityLevelConfigurations' concurrency
shares), broken down by the label `priority_level`.

View File

@ -321,9 +321,7 @@ Pod may be created that fits on the same Node. In this case, the scheduler will
schedule the higher priority Pod instead of the preemptor.
This is expected behavior: the Pod with the higher priority should take the place
of a Pod with a lower priority. Other controller actions, such as
[cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling),
may eventually provide capacity to schedule the pending Pods.
of a Pod with a lower priority.
### Higher priority Pods are preempted before lower priority pods

View File

@ -12,125 +12,147 @@ weight: 50
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
The kube-scheduler can be configured to enable bin packing of resources along with extended resources using `RequestedToCapacityRatioResourceAllocation` priority function. Priority functions can be used to fine-tune the kube-scheduler as per custom needs.
The kube-scheduler can be configured to enable bin packing of resources along
with extended resources using `RequestedToCapacityRatioResourceAllocation`
priority function. Priority functions can be used to fine-tune the
kube-scheduler as per custom needs.
<!-- body -->
## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation
Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters and improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
consists of `name` which specifies the resource to be considered during scoring and `weight` specify the weight of each resource.
Kubernetes allows the users to specify the resources along with weights for
each resource to score nodes based on the request to capacity ratio. This
allows users to bin pack extended resources by using appropriate parameters
and improves the utilization of scarce resources in large clusters. The
behavior of the `RequestedToCapacityRatioResourceAllocation` priority function
can be controlled by a configuration option called
`requestedToCapacityRatioArguments`. This argument consists of two parameters
`shape` and `resources`. The `shape` parameter allows the user to tune the
function as least requested or most requested based on `utilization` and
`score` values. The `resources` parameter consists of `name` of the resource
to be considered during scoring and `weight` specify the weight of each
resource.
Below is an example configuration that sets `requestedToCapacityRatioArguments` to bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
Below is an example configuration that sets
`requestedToCapacityRatioArguments` to bin packing behavior for extended
resources `intel.com/foo` and `intel.com/bar`.
```json
{
"kind" : "Policy",
"apiVersion" : "v1",
...
"priorities" : [
...
{
"name": "RequestedToCapacityRatioPriority",
"weight": 2,
"argument": {
"requestedToCapacityRatioArguments": {
"shape": [
{"utilization": 0, "score": 0},
{"utilization": 100, "score": 10}
],
"resources": [
{"name": "intel.com/foo", "weight": 3},
{"name": "intel.com/bar", "weight": 5}
]
}
}
}
],
}
```

```yaml
apiVersion: v1
kind: Policy
# ...
priorities:
# ...
- name: RequestedToCapacityRatioPriority
weight: 2
argument:
requestedToCapacityRatioArguments:
shape:
- utilization: 0
score: 0
- utilization: 100
score: 10
resources:
- name: intel.com/foo
weight: 3
- name: intel.com/bar
weight: 5
```
**This feature is disabled by default**
### Tuning RequestedToCapacityRatioResourceAllocation Priority Function
### Tuning the Priority Function
`shape` is used to specify the behavior of the `RequestedToCapacityRatioPriority` function.
`shape` is used to specify the behavior of the
`RequestedToCapacityRatioPriority` function.
```yaml
{"utilization": 0, "score": 0},
{"utilization": 100, "score": 10}
shape:
- utilization: 0
score: 0
- utilization: 100
score: 10
```
The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follows.
The above arguments give the node a `score` of 0 if `utilization` is 0% and 10 for
`utilization` 100%, thus enabling bin packing behavior. To enable least
requested the score value must be reversed as follows.
```yaml
{"utilization": 0, "score": 100},
{"utilization": 100, "score": 0}
shape:
- utilization: 0
score: 100
- utilization: 100
score: 0
```
`resources` is an optional parameter which by defaults is set to:
`resources` is an optional parameter which defaults to:
```yaml
"resources": [
{"name": "CPU", "weight": 1},
{"name": "Memory", "weight": 1}
]
resources:
- name: CPU
weight: 1
- name: Memory
weight: 1
```
It can be used to add extended resources as follows:
```yaml
"resources": [
{"name": "intel.com/foo", "weight": 5},
{"name": "CPU", "weight": 3},
{"name": "Memory", "weight": 1}
]
resources:
- name: intel.com/foo
weight: 5
- name: CPU
weight: 3
- name: Memory
weight: 1
```
The weight parameter is optional and is set to 1 if not specified. Also, the weight cannot be set to a negative value.
The `weight` parameter is optional and is set to 1 if not specified. Also, the
`weight` cannot be set to a negative value.
### How the RequestedToCapacityRatioResourceAllocation Priority Function Scores Nodes
### Node scoring for capacity allocation
This section is intended for those who want to understand the internal details
of this feature.
Below is an example of how the node score is calculated for a given set of values.
Requested resources:

```
intel.com/foo : 2
Memory: 256MB
CPU: 2
```
Resource weights:

```
intel.com/foo : 5
Memory: 1
CPU: 3
```
FunctionShapePoint {{0, 0}, {100, 10}}
Node 1 spec:
```
Available:
intel.com/foo: 4
Memory: 1 GB
CPU: 8
Used:
intel.com/foo: 1
Memory: 256MB
CPU: 1
```
Node score:
```
intel.com/foo = resourceScoringFunction((2+1),4)
= (100 - ((4-3)*100/4))
= (100 - 25)
@ -152,24 +174,24 @@ CPU = resourceScoringFunction((2+1),8)
NodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)
= 5
```
Node 2 spec:
```
Available:
intel.com/foo: 8
Memory: 1GB
CPU: 8
Used:
intel.com/foo: 2
Memory: 512MB
CPU: 6
```

Node score:

```
intel.com/foo = resourceScoringFunction((2+2),8)
= (100 - ((8-4)*100/8))
= (100 - 50)
@ -194,4 +216,8 @@ NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3)
```
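The Node 1 calculation above can be reproduced with a short program. This is a minimal sketch of the documented formula using integer arithmetic; the function name and truncation behavior here are illustrative, not the scheduler's actual source:

```go
package main

import "fmt"

// resourceScoringFunction maps a (requested, capacity) pair to a 0-10 score
// using the bin-packing shape {utilization 0 -> score 0, utilization 100 -> score 10}.
func resourceScoringFunction(requested, capacity int64) int64 {
	if capacity == 0 || requested > capacity {
		return 0
	}
	unused := (capacity - requested) * 100 / capacity // unused capacity in percent
	return (100 - unused) / 10                        // map 0..100 onto 0..10
}

func main() {
	// Node 1: requested = already used + the incoming pod's request.
	foo := resourceScoringFunction(2+1, 4)        // intel.com/foo -> 7
	mem := resourceScoringFunction(256+256, 1024) // Memory (MB)   -> 5
	cpu := resourceScoringFunction(2+1, 8)        // CPU           -> 3

	// Weighted average with weights intel.com/foo=5, Memory=1, CPU=3.
	score := (foo*5 + mem*1 + cpu*3) / (5 + 1 + 3)
	fmt.Println(score) // 5, matching the Node 1 score worked out above
}
```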
## {{% heading "whatsnext" %}}
- Read more about the [scheduling framework](/docs/concepts/scheduling-eviction/scheduling-framework/)
- Read more about [scheduler configuration](/docs/reference/scheduling/config/)

View File

@ -40,7 +40,7 @@ This means that no pod will be able to schedule onto `node1` unless it has a mat
To remove the taint added by the command above, you can run:
```shell
kubectl taint nodes node1 key:NoSchedule-
kubectl taint nodes node1 key=value:NoSchedule-
```
You specify a toleration for a pod in the PodSpec. Both of the following tolerations "match" the

View File

@ -345,7 +345,7 @@ Or you can use [this similar script](https://raw.githubusercontent.com/TremoloSe
Setup instructions for specific systems:
- [UAA](https://docs.cloudfoundry.org/concepts/architecture/uaa.html)
- [Dex](https://github.com/dexidp/dex/blob/master/Documentation/kubernetes.md)
- [Dex](https://dexidp.io/docs/kubernetes/)
- [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/)
#### Using kubectl

View File

@ -208,6 +208,9 @@ different Kubernetes components.
| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 |
| `CustomPodDNS` | `true` | Beta| 1.10 | 1.13 |
| `CustomPodDNS` | `true` | GA | 1.14 | - |
| `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 |
| `CustomResourceDefaulting` | `true` | Beta | 1.16 | 1.16 |
| `CustomResourceDefaulting` | `true` | GA | 1.17 | - |
| `CustomResourcePublishOpenAPI` | `false` | Alpha| 1.14 | 1.14 |
| `CustomResourcePublishOpenAPI` | `true` | Beta| 1.15 | 1.15 |
| `CustomResourcePublishOpenAPI` | `true` | GA | 1.16 | - |

View File

@ -61,7 +61,7 @@ for example `create`, `get`, `describe`, `delete`.
* To specify resources with one or more files: `-f file1 -f file2 -f file<#>`
* [Use YAML rather than JSON](/docs/concepts/configuration/overview/#general-configuration-tips) since YAML tends to be more user-friendly, especially for configuration files.<br/>
Example: `kubectl get pod -f ./pod.yaml`
Example: `kubectl get -f ./pod.yaml`
* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.<br/>

View File

@ -230,7 +230,7 @@ in Go files or in the OpenAPI schema definition of the
| Golang marker | OpenAPI extension | Accepted values | Description | Introduced in |
|---|---|---|---|---|
| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | Applicable to lists. `atomic` and `set` apply to lists with scalar elements only. `map` applies to lists of nested types only. If configured as `atomic`, the entire list is replaced during merge; a single manager manages the list as a whole at any one time. If `granular`, different managers can manage entries separately. | 1.16 |
| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | Applicable to lists. `atomic` and `set` apply to lists with scalar elements only. `map` applies to lists of nested types only. If configured as `atomic`, the entire list is replaced during merge; a single manager manages the list as a whole at any one time. If `set` or `map`, different managers can manage entries separately. | 1.16 |
| `//+listMapKey` | `x-kubernetes-list-map-keys` | Slice of map keys that uniquely identify entries for example `["port", "protocol"]` | Only applicable when `+listType=map`. A slice of strings whose values in combination must uniquely identify list entries. While there can be multiple keys, `listMapKey` is singular because keys need to be specified individually in the Go type. | 1.16 |
| `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to maps. `atomic` means that the map can only be entirely replaced by a single manager. `granular` means that the map supports separate managers updating individual fields. | 1.17 |
| `//+structType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to structs; otherwise same usage and OpenAPI annotation as `//+mapType`.| 1.17 |

View File

@ -1,223 +0,0 @@
---
reviewers:
- lavalamp
- thockin
title: Cluster Management
content_type: concept
---
<!-- overview -->
This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
upgrading your cluster's
master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
running cluster.
<!-- body -->
## Creating and configuring a Cluster
To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](/docs/setup/) depending on your environment.
## Upgrading a cluster
The current state of cluster upgrades is provider dependent, and some releases may require special care when upgrading. It is recommended that administrators consult both the [release notes](https://git.k8s.io/kubernetes/CHANGELOG/README.md), as well as the version specific upgrade notes prior to upgrading their clusters.
### Upgrading an Azure Kubernetes Service (AKS) cluster
Azure Kubernetes Service enables easy self-service upgrades of the control plane and nodes in your cluster. The process is
currently user-initiated and is described in the [Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster).
### Upgrading Google Compute Engine clusters
Google Compute Engine Open Source (GCE-OSS) supports master upgrades by deleting and
recreating the master, while maintaining the same Persistent Disk (PD) to ensure that data is retained across the
upgrade.
Node upgrades for GCE use a [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/); each node
is sequentially destroyed and then recreated with new software. Any Pods that are running on that node need to be
controlled by a Replication Controller, or manually re-created after the roll out.
Upgrades on open source Google Compute Engine (GCE) clusters are controlled by the `cluster/gce/upgrade.sh` script.
Get its usage by running `cluster/gce/upgrade.sh -h`.
For example, to upgrade just your master to a specific version (v1.0.2):
```shell
cluster/gce/upgrade.sh -M v1.0.2
```
Alternatively, to upgrade your entire cluster to the latest stable release:
```shell
cluster/gce/upgrade.sh release/stable
```
### Upgrading Google Kubernetes Engine clusters
Google Kubernetes Engine automatically updates master components (e.g. `kube-apiserver`, `kube-scheduler`) to the latest version. It also handles upgrading the operating system and other components that the master runs on.
The node upgrade process is user-initiated and is described in the [Google Kubernetes Engine documentation](https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade).
### Upgrading an Amazon EKS Cluster
The master components of an Amazon EKS cluster can be upgraded using eksctl, the AWS Management Console, or the AWS CLI. The process is user-initiated and is described in the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html).
### Upgrading an Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster
Oracle creates and manages a set of master nodes in the Oracle control plane on your behalf (and associated Kubernetes infrastructure such as etcd nodes) to ensure you have a highly available managed Kubernetes control plane. You can also seamlessly upgrade these master nodes to new versions of Kubernetes with zero downtime. These actions are described in the [OKE documentation](https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengupgradingk8smasternode.htm).
### Upgrading clusters on other platforms
Different providers, and tools, will manage upgrades differently. It is recommended that you consult their main documentation regarding upgrades.
* [kops](https://github.com/kubernetes/kops)
* [kubespray](https://github.com/kubernetes-sigs/kubespray)
* [CoreOS Tectonic](https://coreos.com/tectonic/docs/latest/admin/upgrade.html)
* [Digital Rebar](https://provision.readthedocs.io/en/tip/doc/content-packages/krib.html)
* ...
To upgrade a cluster on a platform not mentioned in the above list, check the order of component upgrade on the
[Skewed versions](/docs/setup/release/version-skew-policy/#supported-component-upgrade-order) page.
## Resizing a cluster
If your cluster runs short on resources you can easily add more machines to it if your cluster
is running in [Node self-registration mode](/docs/concepts/architecture/nodes/#self-registration-of-nodes).
If you're using GCE or Google Kubernetes Engine it's done by resizing the Instance Group managing your Nodes.
It can be accomplished by modifying the number of instances on the
`Compute > Compute Engine > Instance groups > your group > Edit group`
[Google Cloud Console page](https://console.developers.google.com) or using the gcloud CLI:
```shell
gcloud compute instance-groups managed resize kubernetes-node-pool --size=42 --zone=$ZONE
```
The Instance Group will take care of putting appropriate image on new machines and starting them,
while the Kubelet will register its Node with the API server to make it available for scheduling.
If you scale the instance group down, the system will randomly choose Nodes to kill.
In other environments you may need to configure the machine yourself and tell the Kubelet on which machine the API server is running.
### Resizing an Azure Kubernetes Service (AKS) cluster
Azure Kubernetes Service enables user-initiated resizing of the cluster from either the CLI or
the Azure Portal and is described in the
[Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/scale-cluster).
### Cluster autoscaling
If you are using GCE or Google Kubernetes Engine, you can configure your cluster so that it is automatically rescaled based on
pod needs.
As described in [Compute Resource](/docs/concepts/configuration/manage-resources-containers/),
users can reserve how much CPU and memory is allocated to pods.
This information is used by the Kubernetes scheduler to find a place to run the pod. If there is
no node that has enough free capacity (or doesn't match other pod requirements) then the pod has
to wait until some pods are terminated or a new node is added.
Cluster autoscaler looks for the pods that cannot be scheduled and checks if adding a new node, similar
to the other in the cluster, would help. If yes, then it resizes the cluster to accommodate the waiting pods.
Cluster autoscaler also scales down the cluster if it notices that one or more nodes are not needed anymore for
an extended period of time (10min but it may change in the future).
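The scale-up check described above can be sketched roughly as follows. The types, names, and numbers here are purely illustrative; the real decision logic lives in the kubernetes/autoscaler project and considers far more (taints, affinity, pod priorities, and so on):

```go
package main

import "fmt"

// Resources is a deliberately simplified resource model (CPU in millicores,
// memory in MB). Illustrative only.
type Resources struct{ CPU, MemMB int64 }

// fits reports whether a pod's request fits in the free capacity of a node.
func fits(req, free Resources) bool {
	return req.CPU <= free.CPU && req.MemMB <= free.MemMB
}

// wouldHelp checks whether adding one node shaped like the existing ones
// (nodeTemplate) would let at least one pending pod schedule.
func wouldHelp(pending []Resources, nodeTemplate Resources) bool {
	for _, req := range pending {
		if fits(req, nodeTemplate) {
			return true
		}
	}
	return false
}

func main() {
	pending := []Resources{{CPU: 2000, MemMB: 4096}} // one unschedulable pod
	template := Resources{CPU: 4000, MemMB: 8192}    // a node similar to the others
	if wouldHelp(pending, template) {
		fmt.Println("scale up: adding a node would let a pending pod schedule")
	}
}
```

Note the key idea this sketch captures: the autoscaler does not add a node blindly; it first simulates whether a new node of the same shape would actually unblock a pending pod.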
Cluster autoscaler is configured per instance group (GCE) or node pool (Google Kubernetes Engine).
If you are using GCE, you can enable the cluster autoscaler while creating a cluster with the kube-up.sh script.
To configure the cluster autoscaler you have to set three environment variables:
* `KUBE_ENABLE_CLUSTER_AUTOSCALER` - it enables cluster autoscaler if set to true.
* `KUBE_AUTOSCALER_MIN_NODES` - minimum number of nodes in the cluster.
* `KUBE_AUTOSCALER_MAX_NODES` - maximum number of nodes in the cluster.
Example:
```shell
KUBE_ENABLE_CLUSTER_AUTOSCALER=true KUBE_AUTOSCALER_MIN_NODES=3 KUBE_AUTOSCALER_MAX_NODES=10 NUM_NODES=5 ./cluster/kube-up.sh
```
On Google Kubernetes Engine you configure the cluster autoscaler either on cluster creation or update, or when creating a particular node pool
(which you want to be autoscaled), by passing the flags `--enable-autoscaling`, `--min-nodes`, and `--max-nodes`
to the corresponding `gcloud` commands.
Examples:
```shell
gcloud container clusters create mytestcluster --zone=us-central1-b --enable-autoscaling --min-nodes=3 --max-nodes=10 --num-nodes=5
```
```shell
gcloud container clusters update mytestcluster --enable-autoscaling --min-nodes=1 --max-nodes=15
```
**Cluster autoscaler expects that nodes have not been manually modified (e.g. by adding labels via kubectl) as those properties would not be propagated to the new nodes within the same instance group.**
For more details about how the cluster autoscaler decides whether, when and how
to scale a cluster, please refer to the [FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
documentation from the autoscaler project.
## Maintenance on a Node
If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer
(the default time is 5 minutes, controlled by `--pod-eviction-timeout` on the controller-manager),
then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
replica set (or replication controller), then a new copy of the pod will be started on a different node. So, in the case where all
pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.
If you want more control over the upgrading process, you may use the following workflow:
Use `kubectl drain` to gracefully terminate all pods on the node while marking the node as unschedulable:
```shell
kubectl drain $NODENAME
```
This keeps new pods from landing on the node while you are trying to get them off.
For pods with a replica set, the pod will be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
For pods with no replica set, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
Perform maintenance work on the node.
Make the node schedulable again:
```shell
kubectl uncordon $NODENAME
```
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically (if you're using a cloud provider that supports
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register).
See [Node](/docs/concepts/architecture/nodes/) for more details.
## Advanced Topics
### Turn on or off an API version for your cluster
Specific API versions can be turned on or off by passing the `--runtime-config=api/<version>` flag while bringing up the API server. For example, to turn off the v1 API, pass `--runtime-config=api/v1=false`.
`--runtime-config` also supports two special keys, `api/all` and `api/legacy`, to control all and legacy APIs respectively.
For example, to turn off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`.
For the purposes of these flags, _legacy_ APIs are those APIs which have been explicitly deprecated (e.g. `v1beta3`).
### Switching your cluster's storage API version
The objects that are stored to disk for a cluster's internal representation of the Kubernetes resources active in the cluster are written using a particular version of the API.
When the supported API changes, these objects may need to be rewritten in the newer API. Failure to do this will eventually result in resources that are no longer decodable or usable
by the Kubernetes API server.
### Switching your config files to a new API version
You can use the `kubectl convert` command to convert config files between different API versions.
```shell
kubectl convert -f pod.yaml --output-version v1
```
For more options, please refer to the usage of the [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) command.
---
title: Upgrade A Cluster
content_type: task
---
<!-- overview -->
This page provides an overview of the steps you should follow to upgrade a
Kubernetes cluster.
The way that you upgrade a cluster depends on how you initially deployed it
and on any subsequent changes.
At a high level, the steps you perform are:
- Upgrade the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
- Upgrade the nodes in your cluster
- Upgrade clients such as {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}
- Adjust manifests and other resources based on the API changes that accompany the
new Kubernetes version
## {{% heading "prerequisites" %}}
You must have an existing cluster. This page is about upgrading from Kubernetes
{{< skew prevMinorVersion >}} to Kubernetes {{< skew latestVersion >}}. If your cluster
is not currently running Kubernetes {{< skew prevMinorVersion >}} then please check
the documentation for the version of Kubernetes that you plan to upgrade to.
## Upgrade approaches
### kubeadm {#upgrade-kubeadm}
If your cluster was deployed using the `kubeadm` tool, refer to
[Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for detailed information on how to upgrade the cluster.
Once you have upgraded the cluster, remember to
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
### Manual deployments
{{< caution >}}
These steps do not account for third-party extensions such as network and storage
plugins.
{{< /caution >}}
You should manually update the control plane following this sequence:
- etcd (all instances)
- kube-apiserver (all control plane hosts)
- kube-controller-manager
- kube-scheduler
- cloud controller manager, if you use one
At this point you should
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/)
that node and then either replace it with a new node that uses the {{< skew latestVersion >}}
kubelet, or upgrade the kubelet on that node and bring the node back into service.
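As a sketch, upgrading one node in place might look like the following. The package manager, package version, and node name are illustrative assumptions; use the method appropriate to your environment:

```shell
# Sketch only: run the kubectl commands from a machine with cluster access,
# and the upgrade/restart commands on the node itself.
kubectl drain <node-to-upgrade> --ignore-daemonsets

# On the node: upgrade the kubelet (hypothetical package and version shown).
apt-get update && apt-get install -y kubelet=1.20.0-00
systemctl restart kubelet

# Back on the admin machine: return the node to service.
kubectl uncordon <node-to-upgrade>
```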
### Other deployments {#upgrade-other}
Refer to the documentation for your cluster deployment tool to learn the recommended
steps for maintenance.
## Post-upgrade tasks
### Switch your cluster's storage API version
The objects that are serialized into etcd for a cluster's internal
representation of the Kubernetes resources active in the cluster are
written using a particular version of the API.
When the supported API changes, these objects may need to be rewritten
in the newer API. Failure to do this will eventually result in resources
that are no longer decodable or usable by the Kubernetes API server.
For each affected object, fetch it using the latest supported API and then
write it back also using the latest supported API.
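For example, a sketch of that round trip for one resource type using `kubectl` (the resource kind and file name here are illustrative, not a prescribed procedure):

```shell
# Illustrative sketch: re-serialize Deployment objects using the
# newest supported API version by reading and writing them back.
kubectl get deployment --all-namespaces -o yaml > deployments.yaml
kubectl replace -f deployments.yaml
```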
### Update manifests
Upgrading to a new Kubernetes version can provide new APIs.
You can use the `kubectl convert` command to convert manifests between different API versions.
For example:
```shell
kubectl convert -f pod.yaml --output-version v1
```
The `kubectl` tool replaces the contents of `pod.yaml` with a manifest that sets `kind` to
Pod (unchanged), but with a revised `apiVersion`.
---
title: Enable Or Disable A Kubernetes API
content_type: task
---
<!-- overview -->
This page shows how to enable or disable an API version from your cluster's
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
<!-- steps -->
Specific API versions can be turned on or off by passing `--runtime-config=api/<version>` as a
command line argument to the API server. The values for this argument are a comma-separated
list of API versions. Later values override earlier values.
The `runtime-config` command line argument also supports two special keys:
- `api/all`, representing all known APIs
- `api/legacy`, representing only legacy APIs. Legacy APIs are any APIs that have been
explicitly [deprecated](/docs/reference/using-api/deprecation-policy/).
For example, to turn off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`
to the `kube-apiserver`.
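As an illustrative fragment of a `kube-apiserver` invocation (all other required flags are omitted), serving only the `v1` core API would look like:

```shell
kube-apiserver --runtime-config=api/all=false,api/v1=true <other-required-flags>
```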
## {{% heading "whatsnext" %}}
Read the [full documentation](/docs/reference/command-line-tools-reference/kube-apiserver/)
for the `kube-apiserver` component.
---
reviewers:
- mml
- foxish
- kow3ns
title: Safely Drain a Node
content_type: task
min-kubernetes-server-version: 1.5
---
<!-- overview -->
This page shows how to safely drain a {{< glossary_tooltip text="node" term_id="node" >}},
optionally respecting the PodDisruptionBudget you have defined.
## {{% heading "prerequisites" %}}
This task also assumes that you have met the following prerequisites:
<!-- steps -->
## (Optional) Configure a disruption budget {#configure-poddisruptionbudget}
To ensure that your workloads remain available during maintenance, you can
configure a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/).
If availability is important for any applications that run or could run on the node(s)
that you are draining, [configure a PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/)
first and then continue following this guide.
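For example, a minimal PodDisruptionBudget might look like the following. The name, the `app` label, and the choice of keeping at least two pods available are assumptions for illustration:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # hypothetical name
spec:
  minAvailable: 2         # keep at least 2 matching pods running during a drain
  selector:
    matchLabels:
      app: my-app         # hypothetical label; match your own workload
```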
## Use `kubectl drain` to remove a node from service
You can use `kubectl drain` to safely evict all of your pods from a
node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.).
* Follow steps to protect your application by [configuring a Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/).
* Learn more about [maintenance on a node](/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node).
---
title: Auditing
---
<!-- overview -->
Kubernetes _auditing_ provides a security-relevant, chronological set of records documenting
the sequence of actions in a cluster. The cluster audits the activities generated by users,
by applications that use the Kubernetes API, and by the control plane itself.
Auditing allows cluster administrators to answer the following questions:
- what happened?
- when did it happen?
Audit records begin their lifecycle inside the kube-apiserver. Each request on each stage
of its execution generates an audit event, which is then pre-processed according to
a certain policy and written to a backend. The policy determines what's recorded
and the backends persist the records. The current backend implementations
include log files and webhooks.
Each request can be recorded with an associated _stage_. The defined stages are:
- `RequestReceived` - The stage for events generated as soon as the audit
handler receives the request, and before it is delegated down the handler
- `ResponseStarted` - Once the response headers are sent, but before the response body
  is sent. This stage is only generated for long-running requests (e.g. watch).
- `ResponseComplete` - The response body has been completed and no more bytes will be sent.
- `Panic` - Events generated when a panic occurred.
{{< note >}}
Audit events are different from the
[Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core)
API object.
{{< /note >}}
The audit logging feature increases the memory consumption of the API server
because some context required for auditing is stored for each request.
Memory consumption depends on the audit logging configuration.
## Audit policy
Audit policy defines rules about what events should be recorded and what data
they should include. The audit policy object structure is defined in the
[`audit.k8s.io` API group](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go).
When an event is processed, it's
compared against the list of rules in order. The first matching rule sets the
_audit level_ of the event. The defined audit levels are:
- `None` - don't log events that match this rule.
- `Metadata` - log request metadata (requesting user, timestamp, resource,
  verb, etc.) but not request or response body.
- `Request` - log event metadata and request body but not response body.
  This does not apply for non-resource requests.
- `RequestResponse` - log event metadata, request and response bodies.
  This does not apply for non-resource requests.

You can pass a file with the policy to kube-apiserver using the `--audit-policy-file` flag.
A minimal example audit policy file logs all requests at the `Metadata` level:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```
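As a slightly fuller sketch, a policy can combine levels and skip noisy stages. The resource choices below are illustrative, not recommendations:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Don't generate audit events for the RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level (request and response bodies).
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods"]
  # Don't log requests to the pods/status subresource.
  - level: None
    resources:
    - group: ""
      resources: ["pods/status"]
  # Log everything else at the Metadata level.
  - level: Metadata
```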
If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the
[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
script, which generates an audit policy file. You can see most of the audit policy file by looking directly at the script.
## Audit backends
Audit backends persist audit events to an external storage.
Out of the box, the kube-apiserver provides two backends:
- Log backend, which writes events into the filesystem
- Webhook backend, which sends events to an external HTTP API
In all cases, audit events follow a structure defined by the Kubernetes API in the
`audit.k8s.io` API group. For Kubernetes {{< param "fullversion" >}}, that
API is at version
[`v1`](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go).
{{< note >}}
In the case of patches, the request body is a JSON array of patch operations, not a JSON object
containing the appropriate Kubernetes API object. For example, the following request body is a valid patch
request to `/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`:
```json
[
  {
    "op": "replace",
    "path": "/spec/parallelism",
    "value": 2
  }
]
```
### Log backend
The log backend writes audit events to a file in [JSONlines](https://jsonlines.org/) format.
You can configure the log audit backend using the following `kube-apiserver` flags:
- `--audit-log-path` specifies the log file path that log backend uses to write
audit events. Not specifying this flag disables log backend. `-` means standard out
- `--audit-log-maxage` defines the maximum number of days to retain old audit log files
- `--audit-log-maxbackup` defines the maximum number of audit log files to retain
- `--audit-log-maxsize` defines the maximum size in megabytes of the audit log file before it gets rotated
If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount the `hostPath`
to the location of the policy file and log file, so that audit records are persisted. For example:
```shell
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--audit-log-path=/var/log/audit.log
```
then mount the volumes:
```yaml
...
volumeMounts:
  - mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit
    readOnly: true
  - mountPath: /var/log/audit.log
    name: audit-log
    readOnly: false
```
and finally configure the `hostPath`:

```yaml
...
- name: audit
  hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: File

- name: audit-log
  hostPath:
    path: /var/log/audit.log
    type: FileOrCreate
```
### Webhook backend
The webhook audit backend sends audit events to a remote web API, which is assumed to
be a form of the Kubernetes API, including means of authentication. You can configure
a webhook audit backend using the following kube-apiserver flags:
- `--audit-webhook-config-file` specifies the path to a file with a webhook
configuration. The webhook configuration is effectively a specialized
[kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters).
- `--audit-webhook-initial-backoff` specifies the amount of time to wait after the first failed
request before retrying. Subsequent requests are retried with exponential backoff.
The webhook config file uses the kubeconfig format to specify the remote address of
the service and credentials used to connect to it.
## Event batching {#batching}
Both log and webhook backends support batching. Using webhook as an example, here's the list of
available flags. To get the same flag for log backend, replace `webhook` with `log` in the flag
name. By default, batching is enabled in `webhook` and disabled in `log`. Similarly, by default
throttling is enabled in `webhook` and disabled in `log`.
- `--audit-webhook-mode` defines the buffering strategy. One of the following:
- `batch` - buffer events and asynchronously process them in batches. This is the default.
- `blocking` - block API server responses on processing each individual event.
- `blocking-strict` - Same as blocking, but when there is a failure during audit logging at the
RequestReceived stage, the whole request to the kube-apiserver fails.
The following flags are used only in the `batch` mode:
- `--audit-webhook-batch-buffer-size` defines the number of events to buffer before batching.
If the rate of incoming events overflows the buffer, events are dropped.
- `--audit-webhook-batch-max-size` defines the maximum number of events in one batch.
- `--audit-webhook-batch-throttle-enable` defines whether batching throttling is enabled.
- `--audit-webhook-batch-throttle-qps` defines the maximum average number of batches generated
  per second.
- `--audit-webhook-batch-throttle-burst` defines the maximum number of batches generated at the same
moment if the allowed QPS was underutilized previously.
## Parameter tuning
Parameters should be set to accommodate the load on the API server.
For example, if kube-apiserver receives 100 requests each second, and each request is audited only
on `ResponseStarted` and `ResponseComplete` stages, you should account for 200 audit
events being generated each second. Assuming that there are up to 100 events in a batch,
you should set throttling level at least 2 queries per second. Assuming that the backend can take up to
5 seconds to write events, you should set the buffer size to hold up to 5 seconds of events;
that is: 10 batches, or 1000 events.
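The sizing arithmetic in this example can be sketched as a quick calculation (the numbers come from the example above, not from any general recommendation):

```shell
# Example inputs: 100 requests/s, each audited at two stages
# (ResponseStarted and ResponseComplete), batches of up to 100 events,
# and a backend that may take up to 5 seconds per write.
requests_per_second=100
audited_stages=2
events_per_batch=100
backend_latency_seconds=5

events_per_second=$((requests_per_second * audited_stages))
throttle_qps=$((events_per_second / events_per_batch))
buffer_events=$((events_per_second * backend_latency_seconds))

echo "events/s=${events_per_second} throttle QPS=${throttle_qps} buffer=${buffer_events} events"
```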
In most cases however, the default parameters should be sufficient and you don't have to worry about
setting them manually. You can look at the following Prometheus metrics exposed by kube-apiserver
and in the logs to monitor the state of the auditing subsystem.
- `apiserver_audit_error_total` metric contains the total number of events dropped due to an error
during exporting.
### Log entry truncation {#truncate}
Both log and webhook backends support limiting the size of events that are logged.
As an example, the following is the list of flags available for the log backend:
- `audit-log-truncate-enabled` whether event and batch truncating is enabled.
- `audit-log-truncate-max-batch-size` maximum size in bytes of the batch sent to the underlying backend.
- `audit-log-truncate-max-event-size` maximum size in bytes of the audit event sent to the underlying backend.
By default, truncation is disabled in both `webhook` and `log`; a cluster administrator should set
`audit-log-truncate-enabled` or `audit-webhook-truncate-enabled` to enable the feature.
## Setup for multiple API servers
If you're extending the Kubernetes API with the [aggregation
layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/),
you can also set up audit logging for the aggregated apiserver. To do this,
pass the configuration options in the same format as described above to the
aggregated apiserver and set up the log ingesting pipeline to pick up audit
logs. Different apiservers can have different audit configurations and
different audit policies.
## Log Collector Examples
### Use fluentd to collect and distribute audit events from log file
[Fluentd](https://www.fluentd.org/) is an open source data collector for a unified logging layer.
In this example, we will use fluentd to split audit events by different namespaces.
{{< note >}}
The `fluent-plugin-forest` and `fluent-plugin-rewrite-tag-filter` are plugins for fluentd.
You can get details about plugin installation from
[fluentd plugin-management](https://docs.fluentd.org/v1.0/articles/plugin-management).
{{< /note >}}
1. Install [`fluentd`](https://docs.fluentd.org/v1.0/articles/quickstart#step-1:-installing-fluentd),
`fluent-plugin-forest` and `fluent-plugin-rewrite-tag-filter` in the kube-apiserver node
1. Create a config file for fluentd
```
cat <<'EOF' > /etc/fluentd/config
# fluentd conf runs in the same host with kube-apiserver
<source>
@type tail
# audit log path of kube-apiserver
path /var/log/kube-audit
pos_file /var/log/audit.pos
format json
time_key time
time_format %Y-%m-%dT%H:%M:%S.%N%z
tag audit
</source>
<filter audit>
#https://github.com/fluent/fluent-plugin-rewrite-tag-filter/issues/13
@type record_transformer
enable_ruby
<record>
namespace ${record["objectRef"].nil? ? "none":(record["objectRef"]["namespace"].nil? ? "none":record["objectRef"]["namespace"])}
</record>
</filter>
<match audit>
# route audit according to namespace element in context
@type rewrite_tag_filter
<rule>
key namespace
pattern /^(.+)/
tag ${tag}.$1
</rule>
</match>
<filter audit.**>
@type record_transformer
remove_keys namespace
</filter>
<match audit.**>
@type forest
subtype file
remove_prefix audit
<template>
time_slice_format %Y%m%d%H
compress gz
path /var/log/audit-${tag}.*.log
format json
include_time_key true
</template>
</match>
EOF
```
1. Start fluentd
```shell
fluentd -c /etc/fluentd/config -vv
```
1. Start kube-apiserver with the following options:
```shell
--audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/kube-audit --audit-log-format=json
```
1. Check audits for different namespaces in `/var/log/audit-*.log`
### Use logstash to collect and distribute audit events from webhook backend
[Logstash](https://www.elastic.co/products/logstash)
is an open source, server-side data processing tool. In this example,
we will use logstash to collect audit events from the webhook backend, and save events of
different users into different files.
1. Install [logstash](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html)
1. Create a config file for logstash
```
cat <<EOF > /etc/logstash/config
input{
http{
#TODO, figure out a way to use kubeconfig file to authenticate to logstash
#https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-ssl
port=>8888
}
}
filter{
split{
# Webhook audit backend sends several events together with EventList
# split each event here.
field=>[items]
# We only need event subelement, remove others.
remove_field=>[headers, metadata, apiVersion, "@timestamp", kind, "@version", host]
}
mutate{
rename => {items=>event}
}
}
output{
file{
# Audit events from different users will be saved into different files.
path=>"/var/log/kube-audit-%{[event][user][username]}/audit"
}
}
EOF
```
1. Start logstash
```shell
bin/logstash -f /etc/logstash/config --path.settings /etc/logstash/
```
1. Create a [kubeconfig file](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) for the kube-apiserver webhook audit backend
```
cat <<EOF > /etc/kubernetes/audit-webhook-kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://<ip_of_logstash>:8888
  name: logstash
contexts:
- context:
    cluster: logstash
    user: ""
  name: default-context
current-context: default-context
preferences: {}
users: []
EOF
```
1. Start kube-apiserver with the following options:
```shell
--audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-webhook-config-file=/etc/kubernetes/audit-webhook-kubeconfig
```
1. Check audits in the logstash node's directories `/var/log/kube-audit-*/audit`

Note that in addition to the file output plugin, logstash has a variety of outputs that
let users route data where they want. For example, users can emit audit events to an elasticsearch
plugin, which supports full-text search and analytics.
## {{% heading "whatsnext" %}}
* Learn about [Mutating webhook auditing annotations](/docs/reference/access-authn-authz/extensible-admission-controllers/#mutating-webhook-auditing-annotations).
You may have exhausted the supply of CPU or Memory in your cluster. In this
case you can try several things:
* Add more nodes to the cluster.
* [Terminate unneeded pods](/docs/concepts/workloads/pods/#pod-termination)
to make room for pending pods.
using [Krew](https://krew.dev/). Krew is a plugin manager maintained by
the Kubernetes SIG CLI community.
{{< caution >}}
Kubectl plugins available via the Krew [plugin index](https://krew.sigs.k8s.io/plugins/)
are not audited for security. You should install and run third-party plugins at your
own risk, since they are arbitrary programs running on your machine.
{{< /caution >}}
A warning will also be included for any valid plugin files that overlap each other.
You can use [Krew](https://krew.dev/) to discover and install `kubectl`
plugins from a community-curated
[plugin index](https://krew.sigs.k8s.io/plugins/).
#### Limitations
distribute your plugins. This way, you use a single packaging format for all
target platforms (Linux, Windows, macOS etc) and deliver updates to your users.
Krew also maintains a [plugin
index](https://krew.sigs.k8s.io/plugins/) so that other people can
discover your plugin and install it.
Configurations with a single API server will experience unavailability while the API server is restarted.
(e.g. `ca.crt`, `ca.key`, `front-proxy-ca.crt`, and `front-proxy-ca.key`)
to all your control plane nodes in the Kubernetes certificates directory.
1. Update {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}'s `--root-ca-file` to
include both old and new CA. Then restart the component.
Any service account created after this point will get secrets that include both old and new CAs.
{{< note >}}
The files specified by the kube-controller-manager flags `--client-ca-file` and `--cluster-signing-cert-file`
cannot be CA bundles. If these flags and `--root-ca-file` point to the same `ca.crt` file which is now a
bundle (includes both old and new CA) you will face an error. To workaround this problem you can copy the new CA to a separate
file and make the flags `--client-ca-file` and `--cluster-signing-cert-file` point to the copy. Once `ca.crt` is no longer
a bundle you can restore the problem flags to point to `ca.crt` and delete the copy.
{{< /note >}}
1. Update all service account tokens to include both old and new CA certificates.
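The workaround described in the note above can be sketched as follows. The file paths are illustrative assumptions; use your cluster's actual certificate directory:

```shell
# Hypothetical paths: copy only the new CA certificate to a separate file
# so that --client-ca-file and --cluster-signing-cert-file do not point
# at a CA bundle.
cp /etc/kubernetes/pki/ca-new.crt /etc/kubernetes/pki/ca-client.crt
# Then, on the kube-controller-manager, set:
#   --client-ca-file=/etc/kubernetes/pki/ca-client.crt
#   --cluster-signing-cert-file=/etc/kubernetes/pki/ca-client.crt
```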
## kind
[`kind`](https://kind.sigs.k8s.io/docs/) lets you run Kubernetes on
your local computer. This tool requires that you have
[Docker](https://docs.docker.com/get-docker/) installed and configured.
The kind [Quick Start](https://kind.sigs.k8s.io/docs/user/quick-start/) page
---
title: Learn Kubernetes Basics
linkTitle: Learn Kubernetes Basics
no_list: true
weight: 10
card:
name: tutorials
---

kubectl [flags]
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--add-dir-header</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Si vrai, ajoute le répertoire du fichier à l'entête</td>
</tr>
<tr>
<td colspan="2">--alsologtostderr</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">log sur l'erreur standard en plus d'un fichier</td>
</tr>
<tr>
<td colspan="2">--application-metrics-count-limit int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : 100</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nombre max de métriques d'applications à stocker (par conteneur)</td>
</tr>
<tr>
<td colspan="2">--as chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nom d'utilisateur à utiliser pour l'opération</td>
</tr>
<tr>
<td colspan="2">--as-group tableauDeChaînes</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Groupe à utiliser pour l'opération, ce flag peut être répété pour spécifier plusieurs groupes</td>
</tr>
<tr>
<td colspan="2">--azure-container-registry-config chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Chemin du fichier contenant les informations de configuration du registre de conteneurs Azure</td>
</tr>
<tr>
<td colspan="2">--boot-id-file string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "/proc/sys/kernel/random/boot_id"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Liste séparée par des virgules de fichiers dans lesquels rechercher le boot-id. Utilise le premier trouvé.</td>
</tr>
<tr>
<td colspan="2">--cache-dir chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: "/home/karen/.kube/http-cache"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Répertoire de cache HTTP par défaut</td>
</tr>
<tr>
<td colspan="2">--certificate-authority chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Chemin vers un fichier cert pour l'autorité de certification</td>
</tr>
<tr>
<td colspan="2">--client-certificate chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Chemin vers un fichier de certificat client pour TLS</td>
</tr>
<tr>
<td colspan="2">--client-key chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Chemin vers un fichier de clé client pour TLS</td>
</tr>
<tr>
<td colspan="2">--cloud-provider-gce-lb-src-cidrs cidrs&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">CIDRs ouverts dans le firewall GCE pour le proxy de trafic LB & health checks</td>
</tr>
<tr>
<td colspan="2">--cluster chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Le nom du cluster kubeconfig à utiliser</td>
</tr>
<tr>
<td colspan="2">--container-hints chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "/etc/cadvisor/container_hints.json"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Emplacement du fichier de hints du conteneur</td>
</tr>
<tr>
<td colspan="2">--containerd chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "/run/containerd/containerd.sock"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Point de terminaison de containerd</td>
</tr>
<tr>
<td colspan="2">--containerd-namespace chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "k8s.io"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">namespace de containerd</td>
</tr>
<tr>
<td colspan="2">--context chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Le nom du contexte kubeconfig à utiliser</td>
</tr>
<tr>
<td colspan="2">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Indique les tolerationSeconds de la tolérance pour notReady:NoExecute qui sont ajoutées par défaut à tous les pods qui n'ont pas défini une telle tolérance</td>
</tr>
<tr>
<td colspan="2">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: 300</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Indique les tolerationSeconds de la tolérance pour unreachable:NoExecute qui sont ajoutées par défaut à tous les pods qui n'ont pas défini une telle tolérance</td>
</tr>
<tr>
<td colspan="2">--disable-root-cgroup-stats</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Désactive la collecte des stats du Cgroup racine</td>
</tr>
<tr>
<td colspan="2">--docker chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "unix:///var/run/docker.sock"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Point de terminaison docker</td>
</tr>
<tr>
<td colspan="2">--docker-env-metadata-whitelist chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">une liste séparée par des virgules de variables d'environnement qui doivent être collectées pour les conteneurs docker</td>
</tr>
<tr>
<td colspan="2">--docker-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Remonte uniquement les stats Docker en plus des stats racine</td>
</tr>
<tr>
<td colspan="2">--docker-root chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "/var/lib/docker"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">DÉPRÉCIÉ : la racine de docker est lue depuis docker info (ceci est une solution de secours, défaut : /var/lib/docker)</td>
</tr>
<tr>
<td colspan="2">--docker-tls</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">utiliser TLS pour se connecter à docker</td>
</tr>
<tr>
<td colspan="2">--docker-tls-ca chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "ca.pem"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">chemin vers la CA de confiance</td>
</tr>
<tr>
<td colspan="2">--docker-tls-cert chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "cert.pem"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">chemin vers le certificat client</td>
</tr>
<tr>
<td colspan="2">--docker-tls-key chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "key.pem"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">chemin vers la clef privée</td>
</tr>
<tr>
<td colspan="2">--enable-load-reader</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Activer le lecteur de la charge CPU</td>
</tr>
<tr>
<td colspan="2">--event-storage-age-limit chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "default=0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Durée maximale pendant laquelle stocker les événements (par type). La valeur est une liste séparée par des virgules de clefs/valeurs, où les clefs sont des types d'événements (par ex: creation, oom) ou "default" et la valeur est la durée. La valeur par défaut est appliquée à tous les types d'événements non spécifiés</td>
</tr>
<tr>
<td colspan="2">--event-storage-event-limit chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "default=0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nombre max d'événements à stocker (par type). La valeur est une liste séparée par des virgules de clefs/valeurs, où les clefs sont les types d'événements (par ex: creation, oom) ou "default" et la valeur est un entier. La valeur par défaut est appliquée à tous les types d'événements non spécifiés</td>
</tr>
<tr>
<td colspan="2">--global-housekeeping-interval durée&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Intervalle entre ménages globaux</td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">aide pour kubectl</td>
</tr>
<tr>
<td colspan="2">--housekeeping-interval durée&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : 10s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Intervalle entre ménages des conteneurs</td>
</tr>
<tr>
<td colspan="2">--insecure-skip-tls-verify</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Si vrai, la validité du certificat du serveur ne sera pas vérifiée. Ceci rend vos connexions HTTPS non sécurisées</td>
</tr>
<tr>
<td colspan="2">--kubeconfig chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Chemin du fichier kubeconfig à utiliser pour les requêtes du CLI</td>
</tr>
<tr>
<td colspan="2">--log-backtrace-at traceLocation&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: :0</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Émet une stack trace lorsque les logs atteignent la ligne fichier:N</td>
</tr>
<tr>
<td colspan="2">--log-cadvisor-usage</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Activer les logs d'usage du conteneur cAdvisor</td>
</tr>
<tr>
<td colspan="2">--log-dir chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Si non vide, écrit les fichiers de log dans ce répertoire</td>
</tr>
<tr>
<td colspan="2">--log-file chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Si non vide, utilise ce fichier de log</td>
</tr>
<tr>
<td colspan="2">--log-file-max-size uint&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : 1800</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Définit la taille maximale d'un fichier de log. L'unité est le mega-octet. Si la valeur est 0, la taille de fichier maximale est illimitée.</td>
</tr>
<tr>
<td colspan="2">--log-flush-frequency durée&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: 5s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nombre maximum de secondes entre les flushs des logs</td>
</tr>
<tr>
<td colspan="2">--logtostderr&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">log sur l'erreur standard plutôt que dans un fichier</td>
</tr>
<tr>
<td colspan="2">--machine-id-file chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "/etc/machine-id,/var/lib/dbus/machine-id"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Liste séparée par des virgules de fichiers dans lesquels rechercher le machine-id. Utilise le premier trouvé.</td>
</tr>
<tr>
<td colspan="2">--match-server-version</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">La version du serveur doit correspondre à la version du client</td>
</tr>
<tr>
<td colspan="2">-n, --namespace chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Si présent, définit la portée de namespace pour la requête du CLI</td>
</tr>
<tr>
<td colspan="2">--password chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Mot de passe pour l'authentification de base au serveur d'API</td>
</tr>
<tr>
<td colspan="2">--profile chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: "none"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nom du profil à capturer. Parmi (none|cpu|heap|goroutine|threadcreate|block|mutex)</td>
</tr>
<tr>
<td colspan="2">--profile-output chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: "profile.pprof"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nom du fichier dans lequel écrire le profil</td>
</tr>
<tr>
<td colspan="2">--request-timeout chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: "0"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">La durée à attendre avant d'abandonner une requête au serveur. Les valeurs non égales à zéro doivent contenir une unité de temps correspondante (ex 1s, 2m, 3h). Une valeur à zéro indique de ne pas abandonner les requêtes</td>
</tr>
<tr>
<td colspan="2">-s, --server chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">L'adresse et le port de l'API server Kubernetes</td>
</tr>
<tr>
<td colspan="2">--skip-headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Si vrai, n'affiche pas les entêtes dans les messages de log</td>
</tr>
<tr>
<td colspan="2">--skip-log-headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Si vrai, évite les entêtes lors de l'ouverture des fichiers de log</td>
</tr>
<tr>
<td colspan="2">--stderrthreshold sévérité&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut: 2</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Les logs à cette sévérité et au-dessus de ce seuil vont dans stderr</td>
</tr>
<tr>
<td colspan="2">--storage-driver-buffer-duration durée&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : 1m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Les écritures dans le driver de stockage seront bufferisées pendant cette durée et envoyées aux backends non-mémoire en une seule transaction</td>
</tr>
<tr>
<td colspan="2">--storage-driver-db chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "cadvisor"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">nom de la base de données</td>
</tr>
<tr>
<td colspan="2">--storage-driver-host chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "localhost:8086"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">hôte:port de la base de données</td>
</tr>
<tr>
<td colspan="2">--storage-driver-password chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Mot de passe de la base de données</td>
</tr>
<tr>
<td colspan="2">--storage-driver-secure</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">utiliser une connexion sécurisée avec la base de données</td>
</tr>
<tr>
<td colspan="2">--storage-driver-table chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "stats"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nom de la table dans la base de données</td>
</tr>
<tr>
<td colspan="2">--storage-driver-user chaîne&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : "root"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">nom d'utilisateur de la base de données</td>
</tr>
<tr>
<td colspan="2">--token chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Bearer token pour l'authentification auprès de l'API server</td>
</tr>
<tr>
<td colspan="2">--update-machine-info-interval durée&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Défaut : 5m0s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Intervalle entre les mises à jour des infos machine.</td>
</tr>
<tr>
<td colspan="2">--user chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Le nom de l'utilisateur kubeconfig à utiliser</td>
</tr>
<tr>
<td colspan="2">--username chaîne</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Nom d'utilisateur pour l'authentification de base au serveur d'API</td>
</tr>
<tr>
<td colspan="2">-v, --v Niveau</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Niveau de verbosité des logs</td>
</tr>
<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Affiche les informations de version et quitte</td>
</tr>
<tr>
<td colspan="2">--vmodule moduleSpec</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Liste de settings pattern=N séparés par des virgules pour le logging filtré par fichiers</td>
</tr>
</tbody>
</table>
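Pour illustrer le tableau de flags globaux ci-dessus, voici un exemple hypothétique qui compose une commande kubectl combinant plusieurs de ces flags. Le contexte `demo-context` et le chemin du kubeconfig sont des suppositions à titre d'illustration, pas des valeurs issues de cette page ; la commande est seulement affichée, pas exécutée, pour rester indépendante d'un cluster réel.

```shell
# Exemple hypothétique : combinaison de quelques flags globaux de kubectl.
# "demo-context" et le chemin du kubeconfig sont fictifs.
KUBECONFIG_FILE="${HOME}/.kube/config"

CMD="kubectl get pods \
  --kubeconfig=${KUBECONFIG_FILE} \
  --context=demo-context \
  -n kube-system \
  --request-timeout=30s \
  -v=2"

# Affiche la commande composée sans l'exécuter
echo "${CMD}"
```

Comme indiqué dans le tableau, toute valeur non nulle de `--request-timeout` doit comporter une unité de temps (par ex. 1s, 2m, 3h), et `-v` contrôle le niveau de verbosité des logs.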

View File

@ -258,7 +258,7 @@ IDプロバイダーがKubernetesと連携するためには、以下のこと
特定のシステム用のセットアップ手順は、以下を参照してください。
- [UAA](https://docs.cloudfoundry.org/concepts/architecture/uaa.html)
- [Dex](https://github.com/dexidp/dex/blob/master/Documentation/kubernetes.md)
- [Dex](https://dexidp.io/docs/kubernetes/)
- [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/)
#### kubectlの使用

View File

@ -43,7 +43,7 @@ CLA에 서명하지 않은 기여자의 풀 리퀘스트(pull request)는 자동
시나리오 | 브랜치
:---------|:------------
현재 릴리스의 기존 또는 새로운 영어 콘텐츠 | `master`
기능 변경 릴리스의 콘텐츠 | `dev-release-<version>` 패턴을 사용하여 기능 변경이 있는 주 버전과 부 버전에 해당하는 브랜치. 예를 들어, `{{< latest-version >}}` 에서 기능이 변경된 경우, ``dev-{{< release-branch >}}`` 에 문서 변경을 추가한다.
기능 변경 릴리스의 콘텐츠 | `dev-<version>` 패턴을 사용하여 기능 변경이 있는 주 버전과 부 버전에 해당하는 브랜치. 예를 들어, `v{{< skew nextMinorVersion >}}` 에서 기능이 변경된 경우, ``dev-{{< skew nextMinorVersion >}}`` 에 문서 변경을 추가한다.
다른 언어로된 콘텐츠(현지화) | 현지화 규칙을 사용. 자세한 내용은 [현지화 브랜치 전략](/docs/contribute/localization/#branching-strategy)을 참고한다.

View File

@ -1,4 +1,6 @@
---
linktitle: Dokumentacja Kubernetesa
title: Dokumentacja
sitemap:
priority: 1.0
---

View File

@ -10,14 +10,12 @@ content_type: concept
Tutaj znajdziesz dokumentację źródłową Kubernetes.
<!-- body -->
## Dokumentacja API
* [Kubernetes API Overview](/docs/reference/using-api/api-overview/) - Ogólne informacje na temat Kubernetes API.
* [Dokumentacja źródłowa Kubernetes API {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
* [Dokumentacja źródłowa API Kubernetesa {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
* [Using The Kubernetes API](/docs/reference/using-api/) - ogólne informacje na temat API Kubernetesa.
## Biblioteki klientów API

View File

@ -9,7 +9,6 @@ short_description: >
aka:
tags:
- fundamental
- core-object
---
Agent, który działa na każdym {{< glossary_tooltip text="węźle" term_id="node" >}} klastra. Odpowiada za uruchamianie {{< glossary_tooltip text="kontenerów" term_id="container" >}} w ramach {{< glossary_tooltip text="poda" term_id="pod" >}}.

View File

@ -18,7 +18,7 @@ Kubernetes zawiera różne wbudowane narzędzia służące do pracy z systemem:
## Minikube
[`minikube`](/docs/tasks/tools/install-minikube/) to narzędzie do łatwego uruchamiania lokalnego klastra Kubernetes na twojej stacji roboczej na potrzeby rozwoju oprogramowania lub prowadzenia testów.
[`minikube`](https://minikube.sigs.k8s.io/docs/) to narzędzie do łatwego uruchamiania lokalnego klastra Kubernetes na twojej stacji roboczej na potrzeby rozwoju oprogramowania lub prowadzenia testów.
## Pulpit *(Dashboard)*
@ -38,7 +38,7 @@ Helm-a można używać do:
## Kompose
[`Kompose`](https://github.com/kubernetes-incubator/kompose) to narzędzie, które ma pomóc użytkownikom Docker Compose przenieść się na Kubernetes.
[`Kompose`](https://github.com/kubernetes/kompose) to narzędzie, które ma pomóc użytkownikom Docker Compose przenieść się na Kubernetes.
Kompose można używać do:

View File

@ -1,6 +1,7 @@
---
title: Naucz się podstaw
linkTitle: Podstawy Kubernetesa
no_list: true
weight: 10
card:
name: tutorials

View File

@ -72,7 +72,7 @@ weight: 10
<div class="row">
<div class="col-md-8">
<p><b>Master odpowiada za zarządzanie klastrem.</b> Master koordynuje wszystkie działania klastra, takie jak zlecanie uruchomienia aplikacji, utrzymywanie pożądanego stanu aplikacji, skalowanie aplikacji i instalowanie nowych wersji.</p>
<p><b>Węzeł to maszyna wirtualna (VM) lub fizyczny serwer, który jest maszyną roboczą w klastrze Kubernetes.</b> Na każdym węźle działa Kubelet, agent zarządzający tym węzłem i komunikujący się z masterem Kubernetes. Węzeł zawiera także narzędzia do obsługi kontenerów, takie jak Docker lub rkt. Klaster Kubernetes w środowisku produkcyjnym powinien składać się minimum z trzech węzłów.</p>
<p><b>Węzeł to maszyna wirtualna (VM) lub fizyczny serwer, który jest maszyną roboczą w klastrze Kubernetes.</b> Na każdym węźle działa Kubelet, agent zarządzający tym węzłem i komunikujący się z masterem Kubernetes. Węzeł zawiera także narzędzia do obsługi kontenerów, takie jak containerd lub Docker. Klaster Kubernetes w środowisku produkcyjnym powinien składać się minimum z trzech węzłów.</p>
</div>
<div class="col-md-4">

View File

@ -10,16 +10,19 @@ class: training
<section class="call-to-action">
<div class="main-section">
<div class="call-to-action" id="cta-certification">
<div class="logo-certification cta-image cta-image-before" id="logo-cka">
<img src="/images/training/kubernetes-cka-white.svg"/>
</div>
<div class="logo-certification cta-image cta-image-after" id="logo-ckad">
<img src="/images/training/kubernetes-ckad-white.svg"/>
</div>
<div class="cta-text">
<h2>Kariera <em>Cloud Native</em></h2>
<p>Kubernetes stanowi serce całego ruchu <em>cloud native</em>. Korzystając ze szkoleń i certyfikacji oferowanych przez Linux Foundation i naszych partnerów zainwestujesz w swoją karierę, nauczysz się korzystać z Kubernetesa i sprawisz, że Twoje projekty <em>cloud native</em> osiągną sukces.</p>
</div>
<div class="logo-certification cta-image" id="logo-cka">
<img src="/images/training/kubernetes-cka-white.svg"/>
</div>
<div class="logo-certification cta-image" id="logo-ckad">
<img src="/images/training/kubernetes-ckad-white.svg"/>
</div>
<div class="logo-certification cta-image" id="logo-cks">
<img src="/images/training/kubernetes-cks-white.svg"/>
</div>
</div>
</div>
</section>
@ -74,31 +77,36 @@ class: training
</div>
</div>
<section>
<section id="get-certified">
<div class="main-section padded">
<center>
<h2>Uzyskaj certyfikat Kubernetes</h2>
</center>
<h2>Uzyskaj certyfikat Kubernetes</h2>
<div class="col-container">
<div class="col-nav">
<center>
<h5>
<b>Certified Kubernetes Application Developer (CKAD)</b>
</h5>
<p>Egzamin na certyfikowanego dewelopera aplikacji (Certified Kubernetes Application Developer) potwierdza umiejętności projektowania, budowania, konfigurowania i udostępniania "rdzennych" aplikacji dla Kubernetesa.</p>
<p>CKAD potrafi określić zasoby wymagane przez aplikację oraz wykorzystać bazowe elementy do budowy, monitorowania i rozwiązywania problemów skalowalnych aplikacji oraz narzędzi w Kubernetesie.</p>
<br>
<a href="https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/" target="_blank" class="button">Przejdź do certyfikacji</a>
</center>
</div>
<div class="col-nav">
<center>
<h5>
<b>Certified Kubernetes Administrator (CKA)</b>
</h5>
<p>Program certyfikowanego administratora Kubernetes (Certified Kubernetes Administrator) potwierdza umiejętności, wiedzę i kompetencje do podejmowania się zadań administracji Kubernetesem.</p>
<p>Certyfikowany administrator Kubernetesa udowodnił swoje umiejętności prostej instalacji oraz konfiguracji i zarządzania klastrem Kubernetesa jakości produkcyjnej.</p>
<br>
<a href="https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/" target="_blank" class="button">Przejdź do certyfikacji</a>
</center>
</div>
<div class="col-nav">
<h5>
<b>Certified Kubernetes Security Specialist (CKS)</b>
</h5>
<p>Program certyfikowanego specjalisty bezpieczeństwa Kubernetesa zapewnia, że jego posiadacz jest kompetentny i biegły w stosowaniu w szerokim zakresie najlepszych praktyk. Certyfikacja CKS obejmuje umiejętności niezbędne do zapewnienia bezpieczeństwa aplikacji uruchamianych w kontenerach i platformy Kubernetes na etapie budowy, instalacji i działania.</p>
<p><em>Kandydaci na CKS muszą posiadać ważny certyfikat Certified Kubernetes Administrator (CKA), aby udowodnić, że posiadają wystarczające doświadczenie w pracy z Kubernetesem przed przystąpieniem do egzaminu CKS.</em></p>
<br>
<a href="https://training.linuxfoundation.org/certification/certified-kubernetes-security-specialist/" target="_blank" class="button">Przejdź do certyfikacji</a>
</div>
</div>
</div>

View File

@ -63,6 +63,12 @@ Kubernetes - проект з відкритим вихідним кодом. В
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna20" button id="desktopKCButton">Відвідайте KubeCon NA онлайн, 17-20 листопада 2020 року</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu21" button id="desktopKCButton">Відвідайте KubeCon EU онлайн, 17-20 травня 2021 року</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -38,10 +38,10 @@ apiserver 被配置为在一个安全的 HTTPS 端口443上监听远程连
或[服务账号令牌](/docs/reference/access-authn-authz/authentication/#service-account-tokens)的时候。
<!--
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. For example, on a default GKE deployment, the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is to provide the kubelet with client credentials in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
-->
应该使用集群的公共根证书开通节点,这样它们就能够基于有效的客户端凭据安全地连接 apiserver。
例如:在一个默认的 GCE 部署中,客户端凭据以客户端证书的形式提供给 kubelet。
一种好的方法是以客户端证书的形式将客户端凭据提供给 kubelet。
请查看 [kubelet TLS 启动引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
以了解如何自动提供 kubelet 客户端证书。
@ -103,16 +103,16 @@ To verify this connection, use the `--kubelet-certificate-authority` flag to pro
If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an
untrusted or public network.
Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/) should be enabled to secure the kubelet API.
Finally, [Kubelet authentication and/or authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) should be enabled to secure the kubelet API.
-->
为了对这个连接进行认证,使用 `--kubelet-certificate-authority` 标志给 apiserver
提供一个根证书包,用于 kubelet 的服务证书。
如果无法实现这点,又要求避免在非受信网络或公共网络上进行连接,可在 apiserver 和
kubelet 之间使用 [SSH 隧道](#ssh-tunnels)。
最后,应该启用 [Kubelet 用户认证和/或鉴权](/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
最后,应该启用
[kubelet 用户认证和/或鉴权](/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
来保护 kubelet API。
<!--

View File

@ -155,6 +155,27 @@ nodes in your cluster. See
(实际上有一个控制器可以水平地扩展集群中的节点。请参阅
[集群自动扩缩容](/zh/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling))。
<!--
The important point here is that the controller makes some change to bring about
your desired state, and then reports current state back to your cluster's API server.
Other control loops can observe that reported data and take their own actions.
-->
这里,很重要的一点是,控制器做出了一些变更以使得事物更接近你的期望状态,
之后将当前状态报告给集群的 API 服务器。
其他控制回路可以观测到所汇报的数据的这种变化并采取其各自的行动。
<!--
In the thermostat example, if the room is very cold then a different controller
might also turn on a frost protection heater. With Kubernetes clusters, the control
plane indirectly works with IP address management tools, storage services,
cloud provider APIs, and other services by
[extending Kubernetes](/docs/concepts/extend-kubernetes/) to implement that.
-->
在温度计的例子中,如果房间很冷,那么某个控制器可能还会启动一个防冻加热器。
就 Kubernetes 集群而言,控制面间接地与 IP 地址管理工具、存储服务、云驱动
APIs 以及其他服务协作,通过[扩展 Kubernetes](/zh/docs/concepts/extend-kubernetes/)
来实现这点。
<!--
## Desired versus current state {#desired-vs-current}

View File

@ -487,7 +487,7 @@ a Lease object.
<!--
#### Reliability
In most cases, node controller limits the eviction rate to
In most cases, the node controller limits the eviction rate to
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
-->

View File

@ -15,10 +15,11 @@ If the data you want to store are confidential, use a
{{< glossary_tooltip text="Secret" term_id="secret" >}} rather than a ConfigMap,
or use additional (third party) tools to keep your data private.
-->
ConfigMap 并不提供保密或者加密功能。如果你想存储的数据是机密的,请使用 {{< glossary_tooltip text="Secret" term_id="secret" >}} ,或者使用其他第三方工具来保证你的数据的私密性,而不是用 ConfigMap。
ConfigMap 并不提供保密或者加密功能。
如果你想存储的数据是机密的,请使用 {{< glossary_tooltip text="Secret" term_id="secret" >}}
或者使用其他第三方工具来保证你的数据的私密性,而不是用 ConfigMap。
{{< /caution >}}
<!-- body -->
<!--
## Motivation
@ -27,31 +28,45 @@ Use a ConfigMap for setting configuration data separately from application code.
For example, imagine that you are developing an application that you can run on your
own computer (for development) and in the cloud (to handle real traffic).
You write the code to
look in an environment variable named `DATABASE_HOST`. Locally, you set that variable
to `localhost`. In the cloud, you set it to refer to a Kubernetes
{{< glossary_tooltip text="Service" term_id="service" >}} that exposes the database
component to your cluster.
You write the code to look in an environment variable named `DATABASE_HOST`.
Locally, you set that variable to `localhost`. In the cloud, you set it to
refer to a Kubernetes {{< glossary_tooltip text="Service" term_id="service" >}}
that exposes the database component to your cluster.
This lets you fetch a container image running in the cloud and
debug the exact same code locally if needed.
-->
## 动机
## 动机 {#motivation}
使用 ConfigMap 来将你的配置数据和应用程序代码分开。
比如,假设你正在开发一个应用,它可以在你自己的电脑上(用于开发)和在云上(用于实际流量)运行。你的代码里有一段是用于查看环境变量 `DATABASE_HOST`,在本地运行时,你将这个变量设置为 `localhost`,在云上,你将其设置为引用 Kubernetes 集群中的公开数据库 {{< glossary_tooltip text="Service" term_id="service" >}} 中的组件。
比如,假设你正在开发一个应用,它可以在你自己的电脑上(用于开发)和在云上
(用于实际流量)运行。
你的代码里有一段是用于查看环境变量 `DATABASE_HOST`,在本地运行时,
你将这个变量设置为 `localhost`,在云上,你将其设置为引用 Kubernetes 集群中的
公开数据库组件的 {{< glossary_tooltip text="服务" term_id="service" >}}。
这让您可以获取在云中运行的容器镜像,并且如果有需要的话,在本地调试完全相同的代码。
这让你可以获取在云中运行的容器镜像,并且如果有需要的话,在本地调试完全相同的代码。
<!--
A ConfigMap is not designed to hold large chunks of data. The data stored in a
ConfigMap cannot exceed 1 MiB. If you need to store settings that are
larger than this limit, you may want to consider mounting a volume or use a
separate database or file service.
-->
ConfigMap 在设计上不是用来保存大量数据的。在 ConfigMap 中保存的数据不可超过
1 MiB。如果你需要保存超出此尺寸限制的数据你可能希望考虑挂载存储卷
或者使用独立的数据库或者文件服务。
<!--
## ConfigMap object
A ConfigMap is an API [object](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
that lets you store configuration for other objects to use. Unlike most
Kubernetes objects that have a `spec`, a ConfigMap has a `data` section to
store items (keys) and their values.
Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
fields. These fields accept key-value pairs as their values. Both the `data`
field and the `binaryData` field are optional. The `data` field is designed to
contain UTF-8 byte sequences while the `binaryData` field is designed to
contain binary data.
The name of a ConfigMap must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
@ -60,9 +75,28 @@ The name of a ConfigMap must be a valid
ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/)
让你可以存储其他对象所需要使用的配置。
和其他 Kubernetes 对象都有一个 `spec` 不同的是ConfigMap 使用 `data` 块来存储元素(键名)和它们的值。
和其他 Kubernetes 对象都有一个 `spec` 不同的是ConfigMap 使用 `data`
`binaryData` 字段。这些字段能够接收键-值对作为其取值。`data` 和 `binaryData`
字段都是可选的。`data` 字段设计用来保存 UTF-8 字节序列,而 `binaryData`
被设计用来保存二进制数据。
ConfigMap 的名字必须是一个合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
ConfigMap 的名字必须是一个合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
<!--
Each key under the `data` or the `binaryData` field must consist of
alphanumeric characters, `-`, `_` or `.`. The keys stored in `data` must not
overlap with the keys in the `binaryData` field.
Starting from v1.19, you can add an `immutable` field to a ConfigMap
definition to create an [immutable ConfigMap](#configmap-immutable).
-->
`data``binaryData` 字段下面的每个键的名称都必须由字母数字字符或者
`-`、`_` 或 `.` 组成。在 `data` 下保存的键名不可以与在 `binaryData`
出现的键名有重叠。
从 v1.19 开始,你可以添加一个 `immutable` 字段到 ConfigMap 定义中,创建
[不可变更的 ConfigMap](#configmap-immutable)。
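As a sketch of the field layout described above, a ConfigMap using both `data` and `binaryData` might look like this (all names and values here are illustrative; `binaryData` values must be base64-encoded):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-binary-config   # illustrative name
data:
  # plain UTF-8 text values
  app.mode: "production"
binaryData:
  # base64-encoded binary value; "aGVsbG8=" decodes to "hello"
  logo.bin: aGVsbG8=
```

Note that a key such as `app.mode` may appear under `data` or `binaryData`, but not under both at once.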
<!--
## ConfigMaps and Pods
@ -77,9 +111,12 @@ format.
-->
## ConfigMaps 和 Pods
您可以写一个引用 ConfigMap 的 Pod 的 `spec`,并根据 ConfigMap 中的数据在该 Pod 中配置容器。这个 Pod 和 ConfigMap 必须要在同一个 {{< glossary_tooltip text="命名空间" term_id="namespace" >}} 中。
你可以写一个引用 ConfigMap 的 Pod 的 `spec`,并根据 ConfigMap 中的数据
在该 Pod 中配置容器。这个 Pod 和 ConfigMap 必须要在同一个
{{< glossary_tooltip text="名字空间" term_id="namespace" >}} 中。
这是一个 ConfigMap 的示例,它的一些键只有一个值,其他键的值看起来像是配置的片段格式。
这是一个 ConfigMap 的示例,它的一些键只有一个值,其他键的值看起来像是
配置的片段格式。
```yaml
apiVersion: v1
@ -90,7 +127,7 @@ data:
# 类属性键;每一个键都映射到一个简单的值
player_initial_lives: "3"
ui_properties_file_name: "user-interface.properties"
#
# 类文件键
game.properties: |
enemy.types=aliens,monsters
@ -115,14 +152,16 @@ For the first three methods, the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} uses the data from
the ConfigMap when it launches container(s) for a Pod.
-->
可以使用四种方式来使用 ConfigMap 配置 Pod 中的容器:
可以使用四种方式来使用 ConfigMap 配置 Pod 中的容器:
1. 容器 entrypoint 的命令行参数
1. 容器的环境变量
1. 在只读卷里面添加一个文件,让应用来读取
1. 编写代码在 Pod 中运行,使用 Kubernetes API 来读取 ConfigMap
这些不同的方法适用于不同的数据使用方式。对前三个方法,{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 使用 ConfigMap 中的数据在 Pod 中启动容器。
这些不同的方法适用于不同的数据使用方式。
对前三个方法,{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
使用 ConfigMap 中的数据在 Pod 中启动容器。
<!--
The fourth method means you have to write code to read the ConfigMap and its data.
@ -133,9 +172,13 @@ technique also lets you access a ConfigMap in a different namespace.
Here's an example Pod that uses values from `game-demo` to configure a Pod:
-->
第四种方法意味着你必须编写代码才能读取 ConfigMap 和它的数据。然而,由于您是直接使用 Kubernetes API因此只要 ConfigMap 发生更改,您的应用就能够通过订阅来获取更新,并且在这样的情况发生的时候做出反应。通过直接进入 Kubernetes API这个技术也可以让你能够获取到不同的命名空间里的 ConfigMap。
第四种方法意味着你必须编写代码才能读取 ConfigMap 和它的数据。然而,
由于你是直接使用 Kubernetes API因此只要 ConfigMap 发生更改,你的
应用就能够通过订阅来获取更新,并且在这样的情况发生的时候做出反应。
通过直接进入 Kubernetes API这个技术也可以让你能够获取到不同的名字空间
里的 ConfigMap。
这是一个 Pod 的示例,它通过使用 `game-demo` 中的值来配置一个 Pod
下面是一个 Pod 的示例,它通过使用 `game-demo` 中的值来配置一个 Pod
```yaml
apiVersion: v1
@ -145,7 +188,8 @@ metadata:
spec:
containers:
- name: demo
image: game.example/demo-game
image: alpine
command: ["sleep", "3600"]
env:
# 定义环境变量
- name: PLAYER_INITIAL_LIVES # 请注意这里和 ConfigMap 中的键名是不一样的
@ -163,41 +207,56 @@ spec:
mountPath: "/config"
readOnly: true
volumes:
# 可以在 Pod 级别设置卷,然后将其挂载到 Pod 内的容器中
# 可以在 Pod 级别设置卷,然后将其挂载到 Pod 内的容器中
- name: config
configMap:
# 提供你想要挂载的 ConfigMap 的名字
name: game-demo
# 来自 ConfigMap 的一组键,将被创建为文件
items:
- key: "game.properties"
path: "game.properties"
- key: "user-interface.properties"
path: "user-interface.properties"
```
<!--
A ConfigMap doesn't differentiate between single line property values and
multi-line file-like values.
What matters is how Pods and other objects consume those values.
For this example, defining a volume and mounting it inside the `demo`
container as `/config` creates four files:
- `/config/player_initial_lives`
- `/config/ui_properties_file_name`
- `/config/game.properties`
- `/config/user-interface.properties`
If you want to make sure that `/config` only contains files with a
`.properties` extension, use two different ConfigMaps, and refer to both
ConfigMaps in the `spec` for a Pod. The first ConfigMap defines
`player_initial_lives` and `ui_properties_file_name`. The second
ConfigMap defines the files that the kubelet places into `/config`.
container as `/config` creates two files,
`/config/game.properties` and `/config/user-interface.properties`,
even though there are four keys in the ConfigMap. This is because the Pod
definition specifies an `items` array in the `volumes` section.
If you omit the `items` array entirely, every key in the ConfigMap becomes
a file with the same name as the key, and you get 4 files.
-->
ConfigMap 不会区分单行属性值和多行类似文件的值,重要的是 Pods 和其他对象如何使用这些值。比如,定义一个卷,并将它作为 `/config` 文件夹安装到 `demo` 容器内,并创建四个文件:
ConfigMap 不会区分单行属性值和多行类似文件的值,重要的是 Pods 和其他对象
如何使用这些值。
- `/config/player_initial_lives`
- `/config/ui_properties_file_name`
- `/config/game.properties`
- `/config/user-interface.properties`
上面的例子定义了一个卷并将它作为 `/config` 文件夹挂载到 `demo` 容器内,
创建两个文件,`/config/game.properties` 和
`/config/user-interface.properties`
尽管 ConfigMap 中包含了四个键。
这是因为 Pod 定义中在 `volumes` 节指定了一个 `items` 数组。
如果你完全忽略 `items` 数组,则 ConfigMap 中的每个键都会变成一个与
该键同名的文件,因此你会得到四个文件。
如果您要确保 `/config` 只包含带有 `.properties` 扩展名的文件,可以使用两个不同的 ConfigMaps并在 `spec` 中同时引用这两个 ConfigMaps 来创建 Pod。第一个 ConfigMap 定义了 `player_initial_lives``ui_properties_file_name`,第二个 ConfigMap 定义了 kubelet 放进 `/config` 的文件。
<!--
## Using ConfigMaps
ConfigMaps can be mounted as data volumes. ConfigMaps can also be used by other
parts of the system, without being directly exposed to the Pod. For example,
ConfigMaps can hold data that other parts of the system should use for configuration.
-->
## 使用 ConfigMap {#using-configmaps}
ConfigMap 可以作为数据卷挂载。ConfigMap 也可被系统的其他组件使用,而
不一定直接暴露给 Pod。例如ConfigMap 可以保存系统中其他组件要使用
的配置数据。
{{< note >}}
<!--
The most common way to use ConfigMaps is to configure settings for
containers running in a Pod in the same namespace. You can also use a
@ -208,12 +267,178 @@ might encounter {{< glossary_tooltip text="addons" term_id="addons" >}}
or {{< glossary_tooltip text="operators" term_id="operator-pattern" >}} that
adjust their behavior based on a ConfigMap.
-->
ConfigMap 最常见的用法是为同一命名空间里某 Pod 中运行的容器执行配置。您也可以单独使用 ConfigMap。
ConfigMap 最常见的用法是为同一命名空间里某 Pod 中运行的容器执行配置。
你也可以单独使用 ConfigMap。
比如,您可能会遇到基于 ConfigMap 来调整其行为的 {{< glossary_tooltip text="插件" term_id="addons" >}} 或者 {{< glossary_tooltip text="operator" term_id="operator-pattern" >}}。
{{< /note >}}
比如,你可能会遇到基于 ConfigMap 来调整其行为的
{{< glossary_tooltip text="插件" term_id="addons" >}} 或者
{{< glossary_tooltip text="operator" term_id="operator-pattern" >}}。
<!--
### Using ConfigMaps as files from a Pod
To consume a ConfigMap in a volume in a Pod:
-->
### 在 Pod 中将 ConfigMap 当做文件使用
<!--
1. Create a ConfigMap or use an existing one. Multiple Pods can reference the
same ConfigMap.
1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name
the volume anything, and have a `.spec.volumes[].configMap.name` field set
to reference your ConfigMap object.
1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the
ConfigMap. Specify `.spec.containers[].volumeMounts[].readOnly = true` and
`.spec.containers[].volumeMounts[].mountPath` to an unused directory name
where you would like the ConfigMap to appear.
1. Modify your image or command line so that the program looks for files in
that directory. Each key in the ConfigMap `data` map becomes the filename
under `mountPath`.
-->
1. 创建一个 ConfigMap 对象或者使用现有的 ConfigMap 对象。多个 Pod 可以引用同一个
ConfigMap。
1. 修改 Pod 定义,在 `spec.volumes[]` 下添加一个卷。
为该卷设置任意名称,之后将 `spec.volumes[].configMap.name` 字段设置为对
你的 ConfigMap 对象的引用。
1. 为每个需要该 ConfigMap 的容器添加一个 `.spec.containers[].volumeMounts[]`
设置 `.spec.containers[].volumeMounts[].readOnly=true` 并将
`.spec.containers[].volumeMounts[].mountPath` 设置为一个未使用的目录名,
ConfigMap 的内容将出现在该目录中。
1. 更改你的镜像或者命令行以便程序能够从该目录中查找文件。ConfigMap 中的每个
`data` 键会变成 `mountPath` 下面的一个文件名。
<!--
This is an example of a Pod that mounts a ConfigMap in a volume:
-->
下面是一个将 ConfigMap 以卷的形式进行挂载的 Pod 示例:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
configMap:
name: myconfigmap
```
<!--
Each ConfigMap you want to use needs to be referred to in `.spec.volumes`.
If there are multiple containers in the Pod, then each container needs its
own `volumeMounts` block, but only one `.spec.volumes` is needed per ConfigMap.
-->
你希望使用的每个 ConfigMap 都需要在 `spec.volumes` 中被引用到。
如果 Pod 中有多个容器,则每个容器都需要自己的 `volumeMounts` 块,但针对
每个 ConfigMap你只需要设置一个 `spec.volumes` 块。
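To illustrate the point above, a two-container Pod sharing one ConfigMap volume could be sketched as follows (container names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod     # illustrative name
spec:
  containers:
  - name: app
    image: redis
    volumeMounts:             # each container declares its own volumeMounts
    - name: config
      mountPath: "/etc/config"
      readOnly: true
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:             # same volume, separate volumeMounts block
    - name: config
      mountPath: "/etc/config"
      readOnly: true
  volumes:                    # one volumes entry serves both containers
  - name: config
    configMap:
      name: myconfigmap
```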
<!--
#### Mounted ConfigMaps are updated automatically
When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well.
The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-->
#### 被挂载的 ConfigMap 内容会被自动更新
当卷中使用的 ConfigMap 被更新时,所投射的键最终也会被更新。
kubelet 组件会在每次周期性同步时检查所挂载的 ConfigMap 是否为最新。
不过kubelet 使用的是其本地的高速缓存来获得 ConfigMap 的当前值。
高速缓存的类型可以通过
[KubeletConfiguration 结构](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)
`ConfigMapAndSecretChangeDetectionStrategy` 字段来配置。
<!--
A ConfigMap can be either propagated by watch (default), ttl-based, or simply redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
propagation delay, where the cache propagation delay depends on the chosen cache type
(it equals to watch propagation delay, ttl of cache, or zero correspondingly).
-->
ConfigMap 既可以通过 watch 操作实现内容传播(默认形式),也可实现基于 TTL
的缓存,还可以直接将所有请求重定向到 API 服务器。
因此,从 ConfigMap 被更新的那一刻算起,到新的主键被投射到 Pod 中去,这一
时间跨度可能与 kubelet 的同步周期加上高速缓存的传播延迟相等。
这里的传播延迟取决于所选的高速缓存类型
(分别对应 watch 操作的传播延迟、高速缓存的 TTL 时长或者 0
<!--
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
-->
以环境变量方式使用的 ConfigMap 数据不会被自动更新。
更新这些数据需要重新启动 Pod。
<!--
## Immutable ConfigMaps {#configmap-immutable}
-->
## 不可变更的 ConfigMap {#configmap-immutable}
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
<!--
The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
individual Secrets and ConfigMaps as immutable. For clusters that extensively use ConfigMaps
(at least tens of thousands of unique ConfigMap to Pod mounts), preventing changes to their
data has the following advantages:
-->
Kubernetes Beta 特性 _不可变更的 Secret 和 ConfigMap_ 提供了一种将各个
Secret 和 ConfigMap 设置为不可变更的选项。对于大量使用 ConfigMap 的
集群(至少有数万个各不相同的 ConfigMap 给 Pod 挂载)而言,禁止更改
ConfigMap 的数据有以下好处:
<!--
- protects you from accidental (or unwanted) updates that could cause applications outages
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
closing watches for ConfigMaps marked as immutable.
-->
- 保护应用,使之免受意外(不想要的)更新所带来的负面影响。
- 通过大幅降低对 kube-apiserver 的压力提升集群性能,这是因为系统会关闭
对已标记为不可变更的 ConfigMap 的监视操作。
<!--
This feature is controlled by the `ImmutableEphemeralVolumes`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
You can create an immutable ConfigMap by setting the `immutable` field to `true`.
For example:
-->
此功能特性由 `ImmutableEphemeralVolumes`
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
来控制。你可以通过将 `immutable` 字段设置为 `true` 创建不可变更的 ConfigMap。
例如:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
...
data:
...
immutable: true
```
<!--
Once a ConfigMap is marked as immutable, it is _not_ possible to revert this change
nor to mutate the contents of the `data` or the `binaryData` field. You can
only delete and recreate the ConfigMap. Because existing Pods maintain a mount point
to the deleted ConfigMap, it is recommended to recreate these pods.
-->
一旦某 ConfigMap 被标记为不可变更,则 _无法_ 逆转这一变化,也无法更改
`data``binaryData` 字段的内容。你只能删除并重建 ConfigMap。
因为现有的 Pod 会维护一个对已删除的 ConfigMap 的挂载点,建议重新创建
这些 Pods。
## {{% heading "whatsnext" %}}
@ -227,4 +452,3 @@ ConfigMap 最常见的用法是为同一命名空间里某 Pod 中运行的容
* 阅读 [配置 Pod 来使用 ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
* 阅读 [Twelve-Factor 应用](https://12factor.net/) 来了解将代码和配置分开的动机。

View File

@ -54,7 +54,6 @@ For example, if you set a `memory` request of 256 MiB for a container, and that
a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use
more RAM.
-->
## 请求和约束 {#requests-and-limits}
如果 Pod 运行所在的节点具有足够的可用资源,容器可能(且可以)使用超出对应资源
@ -77,7 +76,6 @@ Limits can be implemented either reactively (the system intervenes once it sees
or by enforcement (the system prevents the container from ever exceeding the limit). Different
runtimes can have different ways to implement the same restrictions.
-->
如果你将某容器的 `memory` 约束设置为 4 GiBkubelet (和
{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}
就会确保该约束生效。
@ -88,6 +86,19 @@ runtimes can have different ways to implement the same restrictions.
约束值可以以被动方式来实现(系统会在发现违例时进行干预),或者通过强制生效的方式实现
(系统会避免容器用量超出约束值)。不同的容器运行时采用不同方式来实现相同的限制。
{{< note >}}
<!--
If a Container specifies its own memory limit, but does not specify a memory request, Kubernetes
automatically assigns a memory request that matches the limit. Similarly, if a Container specifies its own
CPU limit, but does not specify a CPU request, Kubernetes automatically assigns a CPU request that matches
the limit.
-->
如果某 Container 设置了自己的内存限制但未设置内存请求Kubernetes
自动为其设置与内存限制相匹配的请求值。类似的,如果某 Container 设置了
CPU 限制值但未设置 CPU 请求值,则 Kubernetes 自动为其设置 CPU 请求
并使之与 CPU 限制值匹配。
{{< /note >}}
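The defaulting behavior in the note above can be sketched with a manifest that sets only limits (values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-only-demo       # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        memory: "256Mi"       # no requests set: Kubernetes defaults
        cpu: "500m"           # requests.memory / requests.cpu to these values
```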
<!--
## Resource types
@ -110,15 +121,19 @@ CPU 表达的是计算处理能力,其单位是 [Kubernetes CPUs](#meaning-of-
如果你使用的是 Kubernetes v1.14 或更高版本则可以指定巨页Huge Page资源。
巨页是 Linux 特有的功能,节点内核在其中分配的内存块比默认页大小大得多。
例如,在默认页面大小为 4KiB 的系统上,可以指定约束 `hugepages-2Mi: 80Mi`
例如,在默认页面大小为 4KiB 的系统上,可以指定约束 `hugepages-2Mi: 80Mi`
如果容器尝试分配 40 个 2MiB 大小的巨页(总共 80 MiB ),则分配请求会失败。
<!--
{{< note >}}
<!--
You cannot overcommit `hugepages-*` resources.
This is different from the `memory` and `cpu` resources.
-->
你不能过量使用 `hugepages-*` 资源。
这与 `memory``cpu` 资源不同。
{{< /note >}}
<!--
CPU and memory are collectively referred to as *compute resources*, or just
*resources*. Compute
resources are measurable quantities that can be requested, allocated, and
@ -127,13 +142,6 @@ consumed. They are distinct from
[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified
through the Kubernetes API server.
-->
{{< note >}}
您不能过量使用 `hugepages- * `资源。
这与 `memory``cpu` 资源不同。
{{< /note >}}
CPU 和内存统称为*计算资源*,或简称为*资源*。
计算资源的数量是可测量的,可以被请求、被分配、被消耗。
它们与 [API 资源](/zh/docs/concepts/overview/kubernetes-api/) 不同。
@ -191,8 +199,9 @@ be preferred.
CPU is always requested as an absolute quantity, never as a relative quantity;
0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
-->
## Kubernetes 中的资源单位 {#resource-units-in-kubernetes}
## CPU 的含义 {#meaning-of-cpu}
### CPU 的含义 {#meaning-of-cpu}
CPU 资源的约束和请求以 *cpu* 为单位。
@ -222,7 +231,7 @@ Mi, Ki. For example, the following represent roughly the same value:
E、P、T、G、M、K。你也可以使用对应的 2 的幂数Ei、Pi、Ti、Gi、Mi、Ki。
例如,以下表达式所代表的是大致相同的值:
```shell
```
128974848、129e6、129M、123Mi
```
@ -233,7 +242,6 @@ and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5
cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128
MiB of memory, and a limit of 1 cpu and 256MiB of memory.
-->
下面是个例子。
以下 Pod 有两个 Container。每个 Container 的请求为 0.25 cpu 和 64MiB2<sup>26</sup> 字节)内存,
@ -272,6 +280,7 @@ spec:
<!--
## How Pods with resource requests are scheduled
When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
run on. Each node has a maximum capacity for each of the resource types: the
amount of CPU and memory it can provide for Pods. The scheduler ensures that,
@ -282,7 +291,6 @@ a Pod on a node if the capacity check fails. This protects against a resource
shortage on a node when resource usage later increases, for example, during a
daily peak in request rate.
-->
## 带资源请求的 Pod 如何调度
当你创建一个 Pod 时Kubernetes 调度程序将为 Pod 选择一个节点。
@ -300,7 +308,6 @@ to the container runtime.
When using Docker:
-->
## 带资源约束的 Pod 如何运行
当 kubelet 启动 Pod 中的 Container 时,它会将 CPU 和内存约束信息传递给容器运行时。
@ -318,9 +325,7 @@ When using Docker:
multiplied by 100. The resulting value is the total amount of CPU time that a container can use
every 100ms. A container cannot use more than its share of CPU time during this interval.
{{< note >}}
The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
{{</ note >}}
- The `spec.containers[].resources.limits.memory` is converted to an integer, and
used as the value of the
@ -337,7 +342,7 @@ When using Docker:
时间不会超过它被分配的时间。
{{< note >}}
默认的配额(quota周期为 100 毫秒。 CPU配额的最小精度为 1 毫秒。
默认的配额(Quota周期为 100 毫秒。CPU 配额的最小精度为 1 毫秒。
{{</ note >}}
- `spec.containers[].resources.limits.memory` 被转换为整数值,作为 `docker run` 命令中的
@ -359,7 +364,6 @@ To determine whether a Container cannot be scheduled or is being killed due to
resource limits, see the
[Troubleshooting](#troubleshooting) section.
-->
如果 Container 超过其内存限制,则可能会被终止。如果容器可重新启动,则与所有其他类型的
运行时失效一样kubelet 将重新启动容器。
@ -380,7 +384,6 @@ are available in your cluster, then Pod resource usage can be retrieved either
from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
directly or from your monitoring tools.
-->
## 监控计算和内存资源用量
Pod 的资源使用情况是作为 Pod 状态的一部分来报告的。
@ -422,11 +425,9 @@ The kubelet also uses this kind of storage to hold
[node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level),
container images, and the writable layers of running containers.
{{< caution >}}
If a node fails, the data in its ephemeral storage can be lost.
Your applications cannot expect any performance SLAs (disk IOPS for example)
from local ephemeral storage.
{{< /caution >}}
As a beta feature, Kubernetes lets you track, reserve and limit the amount
of ephemeral local storage a Pod can consume.
@ -458,7 +459,6 @@ The kubelet also writes
[node-level container logs](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
and treats these similarly to ephemeral local storage.
-->
### 本地临时性存储的配置
Kubernetes 有两种方式支持节点上配置本地临时性存储:
@ -555,7 +555,9 @@ than as local ephemeral storage.
kubelet 能够度量其本地存储的用量。实现度量机制的前提是:
- `LocalStorageCapacityIsolation` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)被启用(默认状态),并且
- `LocalStorageCapacityIsolation`
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
被启用(默认状态),并且
- 你已经对节点进行了配置,使之使用所支持的本地临时性储存配置方式之一
如果你的节点配置不同于以上预期kubelet 就无法对临时性本地存储的资源约束实施限制。
@ -581,10 +583,9 @@ Mi, Ki. For example, the following represent roughly the same value:
128974848, 129e6, 129M, 123Mi
```
-->
### 为本地临时性存储设置请求和约束值
你可以使用_ephemeral-storage_来管理本地临时性存储。
你可以使用 _ephemeral-storage_ 来管理本地临时性存储。
Pod 中的每个 Container 可以设置以下属性:
* `spec.containers[].resources.limits.ephemeral-storage`
@ -595,7 +596,7 @@ Pod 中的每个 Container 可以设置以下属性:
你也可以使用对应的 2 的幂级数来表达Ei、Pi、Ti、Gi、Mi、Ki。
例如,下面的表达式所表达的大致是同一个值:
```shell
```
128974848, 129e6, 129M, 123Mi
```
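A sketch of the two `ephemeral-storage` properties listed above in a Pod manifest (names and quantities are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        ephemeral-storage: "2Gi"   # scheduler reserves this much local storage
      limits:
        ephemeral-storage: "4Gi"   # exceeding this can lead to Pod eviction
```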
@ -639,7 +640,7 @@ run on. Each node has a maximum amount of local ephemeral storage it can provide
The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node.
-->
### 带 ephemeral-storage 的 Pods 的调度行为
### 带临时性存储的 Pods 的调度行为
当你创建一个 Pod 时Kubernetes 调度器会为 Pod 选择一个节点来运行之。
每个节点都有一个本地临时性存储的上限,是其可提供给 Pods 使用的总量。
@ -670,9 +671,7 @@ summing the limits for the containers in that Pod. In this case, if the sum of
the local ephemeral storage usage from all containers and also the Pod's `emptyDir`
volumes exceeds the overall Pod storage limit, then the kubelet also marks the Pod
for eviction.
-->
### 临时性存储消耗的管理 {#resource-emphemeralstorage-consumption}
如果 kubelet 将本地临时性存储作为资源来管理,则 kubelet 会度量以下各处的存储用量:
@ -736,7 +735,6 @@ still open, then the inode for the deleted file stays until you close
that file but the kubelet does not categorize the space as in use.
{{< /note >}}
-->
kubelet 支持使用不同方式来度量 Pod 的存储用量:
{{< tabs name="resource-emphemeralstorage-measurement" >}}
@ -795,7 +793,6 @@ If a file is created and deleted, but has an open file descriptor,
it continues to consume space. Quota tracking records that space accurately
whereas directory scans overlook the storage used by deleted files.
-->
Kubernetes 所使用的项目 ID 始于 `1048576`
所使用的 ID 会注册在 `/etc/projects``/etc/projid` 文件中。
如果该范围中的项目 ID 已经在系统中被用于其他目的,则已占用的项目 ID
@ -881,14 +878,13 @@ See [Device
Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
for how to advertise device plugin managed resources on each node.
-->
### 管理扩展资源 {#managing-extended-resources}
### 管理扩展资源
#### 节点级扩展资源
#### 节点级扩展资源 {#node-level-extended-resources}
节点级扩展资源绑定到节点。
##### 设备插件管理的资源
##### 设备插件管理的资源 {#device-plugin-managed-resources}
有关如何颁布在各节点上由设备插件所管理的资源,请参阅
[设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)。
@ -905,8 +901,7 @@ asynchronously by the kubelet. Note that because the scheduler uses the node
delay between patching the node capacity with a new resource and the first Pod
that requests the resource to be scheduled on that node.
-->
##### 其他资源
##### 其他资源 {#other-resources}
为了颁布新的节点级扩展资源,集群操作员可以向 API 服务器提交 `PATCH` HTTP 请求,
以在集群中节点的 `status.capacity` 中为其配置可用数量。
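作为示意(其中节点名 `k8s-node-1`、资源名 `example.com/dongle` 以及通过 `kubectl proxy` 暴露的本地端点均为假设),这样的 `PATCH` 请求大致形如:

```shell
# 假设已经运行 kubectl proxy使 API 服务器在 localhost:8001 可访问
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://localhost:8001/api/v1/nodes/k8s-node-1/status
```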
@ -943,7 +938,6 @@ in the patch path. The operation path value in JSON-Patch is interpreted as a
JSON-Pointer. For more details, see
{{< /note >}}
-->
{{< note >}}
在前面的请求中,`~1` 是在 patch 路径中对字符 `/` 的编码。
JSON-Patch 中的操作路径的值被视为 JSON-Pointer 类型。
@ -961,13 +955,13 @@ You can specify the extended resources that are handled by scheduler extenders
in [scheduler policy
configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31).
-->
#### 集群层面的扩展资源
#### 集群层面的扩展资源 {#cluster-level-extended-resources}
集群层面的扩展资源并不绑定到具体节点。
它们通常由调度器扩展程序Scheduler Extenders管理这些程序处理资源消耗和资源配额。
您可以在[调度器策略配置](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31)中指定由调度器扩展程序处理的扩展资源。
你可以在[调度器策略配置](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31)
中指定由调度器扩展程序处理的扩展资源。
<!--
**Example:**
@ -981,7 +975,6 @@ extender.
- The `ignoredByScheduler` field specifies that the scheduler does not check
the "example.com/foo" resource in its `PodFitsResources` predicate.
-->
**示例:**
下面的调度器策略配置标明集群层扩展资源 "example.com/foo" 由调度器扩展程序处理。
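一个示意性的调度器策略配置(其中 `<extender-endpoint>` 为占位符)大致如下,它声明 "example.com/foo" 由调度器扩展程序处理,且调度器在其 `PodFitsResources` 断言中不检查该资源:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "<extender-endpoint>",
      "bindVerb": "bind",
      "managedResources": [
        {
          "name": "example.com/foo",
          "ignoredByScheduler": true
        }
      ]
    }
  ]
}
```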
@ -1020,8 +1013,7 @@ The API server restricts quantities of extended resources to whole numbers.
Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of
_invalid_ quantities are `0.5` and `1500m`.
-->
### 使用扩展资源
### 使用扩展资源 {#consuming-extended-resources}
就像 CPU 和内存一样,用户可以在 Pod 的规约中使用扩展资源。
调度器负责资源的核算,确保同时分配给 Pod 的资源总量不会超过可用数量。
@ -1032,7 +1024,6 @@ Extended resources replace Opaque Integer Resources.
Users can use any domain name prefix other than `kubernetes.io` which is reserved.
{{< /note >}}
-->
{{< note >}}
扩展资源取代了非透明整数资源Opaque Integer ResourcesOIR
用户可以使用 `kubernetes.io` (保留)以外的任何域名前缀。
@ -1047,7 +1038,6 @@ Extended resources cannot be overcommitted, so request and limit
must be equal if both are present in a container spec.
{{< /note >}}
-->
要在 Pod 中使用扩展资源,请在容器规范的 `spec.containers[].resources.limits`
映射中包含资源名称作为键。
@ -1064,7 +1054,6 @@ as long as the resource request cannot be satisfied.
The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource).
-->
仅当所有资源请求(包括 CPU、内存和任何扩展资源都被满足时Pod 才能被调度。
在资源请求无法满足时Pod 会保持在 `PENDING` 状态。
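下面是一个示意性的 Pod 清单其中的名称与镜像均为假设,它请求 2 个 CPU 和 1 个 "example.com/foo"(扩展资源)。由于扩展资源不可超量分配,请求值与约束值必须相等:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resources:
      requests:
        cpu: 2
        example.com/foo: 1
      limits:
        example.com/foo: 1
```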
@ -1098,7 +1087,6 @@ If the scheduler cannot find any node where a Pod can fit, the Pod remains
unscheduled until a place can be found. An event is produced each time the
scheduler fails to find a place for the Pod, like this:
-->
## 疑难解答
### 我的 Pod 处于悬决状态且事件信息显示 failedScheduling
@ -1139,7 +1127,7 @@ You can check node capacities and amounts allocated with the
- 检查 Pod 所需的资源是否超出所有节点的资源容量。例如,如果所有节点的容量都是 `cpu: 1`
那么一个请求为 `cpu: 1.1` 的 Pod 永远不会被调度。
可以使用 `kubectl describe nodes` 命令检查节点容量和已分配的资源数量。 例如:
可以使用 `kubectl describe nodes` 命令检查节点容量和已分配的资源数量。 例如:
```shell
kubectl describe nodes e2e-test-node-pool-4lw4
@ -1187,17 +1175,17 @@ The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be conf
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
-->
在上面的输出中,你可以看到如果 Pod 请求超过 1120m CPU 或者 6.23Gi 内存,节点将无法满足。
通过查看 `Pods` 部分,将看到哪些 Pod 占用了节点上的资源。
通过查看 `Pods` 部分,将看到哪些 Pod 占用了节点上的资源。
可供 Pod 使用的资源量小于节点容量,因为系统守护程序也会使用一部分可用资源。
[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core)
`allocatable` 字段给出了可用于 Pod 的资源量。
有关更多信息,请参阅 [节点可分配资源](https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md)。
可以配置 [资源配额](/zh/docs/concepts/policy/resource-quotas/) 功能特性以限制可以使用的资源总量。
可以配置 [资源配额](/zh/docs/concepts/policy/resource-quotas/) 功能特性
以限制可以使用的资源总量。
如果与名字空间配合一起使用,就可以防止一个团队占用所有资源。
<!--
@ -1260,7 +1248,6 @@ Container in the Pod was terminated and restarted five times.
You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status
of previously terminated Containers:
-->
在上面的例子中,`Restart Count: 5` 意味着 Pod 中的 `simmemleak` 容器被终止并重启了五次。
你可以使用 `kubectl get pod` 命令加上 `-o go-template=...` 选项来获取之前终止容器的状态。
@ -1296,10 +1283,10 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
* Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
-->
* 获取[分配内存资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
* 获取[分配 CPU 资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
* 获取[分配内存资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
* 获取[分配 CPU 资源给容器和 Pod ](/zh/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
* 关于请求和约束之间的区别,细节信息可参见[资源服务质量](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md)
* 阅读 API 参考文档中 [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) 部分。
* 阅读 API 参考文档中 [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) 部分。
* 阅读 XFS 中关于 [项目配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) 的文档。
* 阅读 XFS 中关于[项目配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) 的文档。


@ -199,7 +199,7 @@ Deployment 描述了对象的期望状态,并且如果对该规范的更改被
## 容器镜像
<!--
The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/admin/kubelet/) attempts to pull the specified image.
The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull the specified image.
-->
[imagePullPolicy](/zh/docs/concepts/containers/images/#updating-images)和镜像标签会影响
[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) 何时尝试拉取指定的镜像。


@ -48,11 +48,11 @@ There are two hooks that are exposed to Containers:
`PostStart`
<!--
This hook executes immediately after a container is created.
This hook is executed immediately after a container is created.
However, there is no guarantee that the hook will execute before the container ENTRYPOINT.
No parameters are passed to the handler.
-->
这个回调在创建容器之后立即执行。
这个回调在容器被创建之后立即执行。
但是不能保证回调会在容器入口点ENTRYPOINT之前执行。
没有参数传递给处理程序。
@ -61,13 +61,13 @@ No parameters are passed to the handler.
<!--
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
It is blocking, meaning it is synchronous,
so it must complete before the call to delete the container can be sent.
so it must complete before the signal to stop the container can be sent.
No parameters are passed to the handler.
-->
在容器因 API 请求或者管理事件(诸如存活态探针失败、资源抢占、资源竞争等)而被终止之前,
此回调会被调用。
如果容器已经处于终止或者完成状态,则对 preStop 回调的调用将失败。
此调用是阻塞的,也是同步调用,因此必须在删除容器的调用之前完成。
此调用是阻塞的,也是同步调用,因此必须在发出删除容器的信号之前完成。
没有参数传递给处理程序。
<!--
@ -102,11 +102,13 @@ Resources consumed by the command are counted against the Container.
### Hook handler execution
When a Container lifecycle management hook is called,
the Kubernetes management system executes the handler in the Container registered for that hook. 
the Kubernetes management system executes the handler according to the hook action:
`exec` and `tcpSocket` are executed in the container, and `httpGet` is executed by the kubelet process.
-->
### 回调处理程序执行
当调用容器生命周期管理回调时Kubernetes 管理系统在注册了回调的容器中执行处理程序。
当调用容器生命周期管理回调时Kubernetes 管理系统根据回调动作执行其处理程序,
`exec``tcpSocket` 在容器内执行,而 `httpGet` 则由 kubelet 进程执行。
<!--
Hook handler calls are synchronous within the context of the Pod containing the Container.
@ -120,15 +122,35 @@ the Container cannot reach a `running` state.
但是,如果回调运行或挂起的时间太长,则容器无法达到 `running` 状态。
<!--
The behavior is similar for a `PreStop` hook.
If the hook hangs during execution,
the Pod phase stays in a `Terminating` state and is killed after `terminationGracePeriodSeconds` of pod ends.
If a `PostStart` or `PreStop` hook fails,
`PreStop` hooks are not executed asynchronously from the signal
to stop the Container; the hook must complete its execution before
the signal can be sent.
If a `PreStop` hook hangs during execution,
the Pod's phase will be `Terminating` and remain there until the Pod is
killed after its `terminationGracePeriodSeconds` expires.
This grace period applies to the total time it takes for both
the `PreStop` hook to execute and for the Container to stop normally.
If, for example, `terminationGracePeriodSeconds` is 60, and the hook
takes 55 seconds to complete, and the Container takes 10 seconds to stop
normally after receiving the signal, then the Container will be killed
before it can stop normally, since `terminationGracePeriodSeconds` is
less than the total time (55+10) it takes for these two things to happen.
-->
`PreStop` 回调并不会与停止容器的信号处理程序异步执行;回调必须在
可以发送信号之前完成执行。
如果 `PreStop` 回调在执行期间停滞不前Pod 的阶段会变成 `Terminating`
并且一直处于该状态,直到其 `terminationGracePeriodSeconds` 耗尽为止,
这时 Pod 会被杀死。
这一宽限期是针对 `PreStop` 回调的执行时间及容器正常停止时间的总和而言的。
例如,如果 `terminationGracePeriodSeconds` 是 60回调函数花了 55 秒钟
完成执行,而容器在收到信号之后花了 10 秒钟来正常结束,那么容器会在其
能够正常结束之前即被杀死,因为 `terminationGracePeriodSeconds` 的值
小于后面两件事情所花费的总时间55 + 10
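作为示意(镜像与命令均为假设),下面的清单片段同时注册了 `PostStart` 与 `PreStop` 回调,并显式设置了 `terminationGracePeriodSeconds`

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  # 宽限期涵盖 preStop 回调执行时间与容器正常停止时间之和
  terminationGracePeriodSeconds: 60
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```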
<!--
If either a `PostStart` or `PreStop` hook fails,
it kills the Container.
-->
行为与 `PreStop` 回调的行为类似。
如果回调在执行过程中挂起Pod 阶段将保持在 `Terminating` 状态,
并在 Pod 结束的 `terminationGracePeriodSeconds` 之后被杀死。
如果 `PostStart``PreStop` 回调失败,它会杀死容器。
<!--
@ -147,10 +169,11 @@ which means that a hook may be called multiple times for any given event,
such as for `PostStart` or `PreStop`.
It is up to the hook implementation to handle this correctly.
-->
### 回调送保证
### 回调送保证
回调的寄送应该是 *至少一次*,这意味着对于任何给定的事件,例如 `PostStart``PreStop`,回调可以被调用多次。
如何正确处理,是回调实现所要考虑的问题。
回调的递送应该是 *至少一次*,这意味着对于任何给定的事件,
例如 `PostStart``PreStop`,回调可以被调用多次。
如何正确处理被多次调用的情况,是回调实现所要考虑的问题。
<!--
Generally, only single deliveries are made.
@ -160,9 +183,9 @@ In some rare cases, however, double delivery may occur.
For instance, if a kubelet restarts in the middle of sending a hook,
the hook might be resent after the kubelet comes back up.
-->
通常情况下,只会进行单次送。
通常情况下,只会进行单次送。
例如,如果 HTTP 回调接收器宕机,无法接收流量,则不会尝试重新发送。
然而,偶尔也会发生重复送的可能。
然而,偶尔也会发生重复送的可能。
例如,如果 kubelet 在发送回调的过程中重新启动,回调可能会在 kubelet 恢复后重新发送。
<!--


@ -87,7 +87,7 @@ Instead, specify a meaningful tag such as `v1.42.0`.
{{< /caution >}}
<!--
## Updating Images
## Updating images
The default pull policy is `IfNotPresent` which causes the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip
@ -116,17 +116,18 @@ When `imagePullPolicy` is defined without a specific value, it is also set to `A
如果 `imagePullPolicy` 未被定义为特定的值,也会被设置为 `Always`
<!--
## Multi-architecture Images with Manifests
## Multi-architecture images with image indexes
As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of an container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes.
-->
## 使用清单manifest构建多架构镜像
## 带镜像索引的多架构镜像 {#multi-architecture-images-with-image-indexes}
除了提供二进制的镜像之外,容器仓库也可以提供
[容器镜像清单](https://github.com/opencontainers/image-spec/blob/master/manifest.md)。
清单文件Manifest可以为特定于体系结构的镜像版本引用其镜像清单。
[容器镜像索引](https://github.com/opencontainers/image-spec/blob/master/image-index.md)。
镜像索引可以根据特定于体系结构版本的容器指向镜像的多个
[镜像清单](https://github.com/opencontainers/image-spec/blob/master/manifest.md)。
这背后的理念是让你可以为镜像命名(例如:`pause`、`example/mycontainer`、`kube-apiserver`
的同时,允许不同的系统基于它们所使用的机器体系结构取回正确的二进制镜像。
@ -137,7 +138,7 @@ Kubernetes 自身通常在命名容器镜像时添加后缀 `-$(ARCH)`。
YAML 文件也能兼容。
<!--
## Using a Private Registry
## Using a private registry
Private registries may require keys to read images from them.
Credentials can be provided in several ways:
@ -179,7 +180,7 @@ These options are explained in more detail below.
下面将详细描述每一项。
<!--
### Configuring Nodes to authenticate to a Private Registry
### Configuring nodes to authenticate to a private registry
If you run Docker on your nodes, you can configure the Docker container
runtime to authenticate to a private container registry.
@ -333,7 +334,7 @@ registry keys are added to the `.docker/config.json`.
`.docker/config.json` 中配置了私有仓库密钥后,所有 Pod 都将能读取私有仓库中的镜像。
<!--
### Pre-pulled Images
### Pre-pulled images
-->
### 提前拉取镜像 {#pre-pulled-images}
@ -371,7 +372,7 @@ All pods will have read access to any pre-pulled images.
所有的 Pod 都可以使用节点上提前拉取的镜像。
<!--
### Specifying ImagePullSecrets on a Pod
### Specifying imagePullSecrets on a Pod
-->
### 在 Pod 上指定 ImagePullSecrets {#specifying-imagepullsecrets-on-a-pod}
@ -389,7 +390,7 @@ Kubernetes supports specifying container image registry keys on a Pod.
Kubernetes 支持在 Pod 中设置容器镜像仓库的密钥。
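作为示意(名字空间、镜像与 Secret 名称均为假设),可以在 Pod 规约中通过 `imagePullSecrets` 引用事先创建好的仓库凭据 Secret

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  # 引用同一名字空间中已存在的 docker-registry 类型 Secret
  - name: myregistrykey
```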
<!--
#### Creating a Secret with a Docker Config
#### Creating a Secret with a Docker config
Run the following command, substituting the appropriate uppercase values:
-->
@ -491,12 +492,12 @@ will be merged.
来自不同来源的凭据会被合并。
<!--
### Use Cases
## Use cases
There are a number of solutions for configuring private registries. Here are some
common use cases and suggested solutions.
-->
### 使用案例 {#use-cases}
## 使用案例 {#use-cases}
配置私有仓库有多种方案,以下是一些常用场景和建议的解决方案。


@ -313,14 +313,14 @@ Pod 开销通过 RuntimeClass 的 `overhead` 字段定义。
## {{% heading "whatsnext" %}}
<!--
- [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md)
- [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md)
- Read about the [Pod Overhead](/docs/concepts/configuration/pod-overhead/) concept
- [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
- [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept
- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
-->
- [RuntimeClass 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md)
- [RuntimeClass 调度设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md)
- 阅读关于 [Pod 开销](/zh/docs/concepts/configuration/pod-overhead/) 的概念
- [RuntimeClass 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
- [RuntimeClass 调度设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- 阅读关于 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/) 的概念
- [PodOverhead 特性设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)


@ -97,7 +97,7 @@ API 通常用于托管的 Kubernetes 服务和受控的 Kubernetes 安装环境
这些 API 是声明式的,与 Pod 这类其他 Kubernetes 资源遵从相同的约定,所以
新的集群配置是可复用的,并且可以当作应用程序来管理。
此外,对于稳定版本的 API 而言,它们与其他 Kubernetes API 一样,采纳的是
一种[预定义的支持策略](/docs/reference/using-api/deprecation-policy/)。
一种[预定义的支持策略](/zh/docs/reference/using-api/deprecation-policy/)。
出于以上原因,在条件允许的情况下,基于 API 的方案应该优先于*配置文件*和*参数标志*。
<!--
@ -195,12 +195,12 @@ This diagram shows the extension points in a Kubernetes system.
<!--
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins).
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](#storage-plugins).
If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
-->
@ -259,7 +259,7 @@ For more about Custom Resources, see the [Custom Resources concept guide](/docs/
不要使用自定义资源来充当应用、用户或者监控数据的数据存储。
关于自定义资源的更多信息,可参见[自定义资源概念指南](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。
关于自定义资源的更多信息,可参见[自定义资源概念指南](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。
<!--
### Combining New APIs with Automation
@ -289,7 +289,7 @@ API 组中。你不可以替换或更改现有的 API 组。
<!--
### API Access Extensions
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) for more on this flow.
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/) for more on this flow.
Each of these steps offers extension points.
@ -299,7 +299,7 @@ Kubernetes has several built-in authentication methods that it supports. It can
当请求到达 Kubernetes API 服务器时,首先要经过身份认证,之后是鉴权操作,
再之后要经过若干类型的准入控制器的检查。
参见[控制 Kubernetes API 访问](/zh/docs/reference/access-authn-authz/controlling-access/)
参见[控制 Kubernetes API 访问](/zh/docs/concepts/security/controlling-access/)
以了解此流程的细节。
这些步骤中都存在扩展点。
@ -319,11 +319,11 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
-->
### 身份认证 {#authentication}
[身份认证](/docs/reference/access-authn-authz/authentication/)负责将所有请求中
[身份认证](/zh/docs/reference/access-authn-authz/authentication/)负责将所有请求中
的头部或证书映射到发出该请求的客户端的用户名。
Kubernetes 提供若干种内置的认证方法,以及
[认证 Webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
[认证 Webhook](/zh/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
方法以备内置方法无法满足你的要求。
<!--
@ -443,7 +443,7 @@ the nodes chosen for a pod.
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)
-->
* 进一步了解[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 进一步了解[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 了解[动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
* 进一步了解基础设施扩展
* [网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)


@ -28,16 +28,16 @@ methods for adding custom resources and how to choose between them.
<!--
## Custom resources
A *resource* is an endpoint in the [Kubernetes
API](/docs/reference/using-api/api-overview/) that stores a collection of [API
objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) of
A *resource* is an endpoint in the
[Kubernetes API](/docs/concepts/overview/kubernetes-api/) that stores a collection of
[API objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) of
a certain kind; for example, the built-in *pods* resource contains a
collection of Pod objects.
-->
## 定制资源
*资源Resource* 是
[Kubernetes API](/zh/docs/reference/using-api/api-overview/) 中的一个端点,
[Kubernetes API](/zh/docs/concepts/overview/kubernetes-api/) 中的一个端点,
其中存储的是某个类别的
[API 对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/)
的一个集合。
@ -177,16 +177,16 @@ Signs that your API might not be declarative include:
命令式 APIImperative API与声明式有所不同。
以下迹象表明你的 API 可能不是声明式的:
- 客户端发出“做这个操作”的指令,之后在该操作结束时获得同步响应。
- 客户端发出“做这个操作”的指令,并获得一个操作 ID之后需要检查一个 Operation操作
对象来判断请求是否成功完成。
- 你会将你的 API 类比为远程过程调用Remote Procedure CallRPCs
- 直接存储大量数据;例如每个对象几 kB或者存储上千个对象。
- 需要较高的访问带宽(长期保持每秒数十个请求)。
- 存储有应用来处理的最终用户数据如图片、个人标识信息PII或者其他大规模数据。
- 在对象上执行的常规操作并非 CRUD 风格。
- API 不太容易用对象来建模。
- 你决定使用操作 ID 或者操作对象来表现悬决的操作。
- 客户端发出“做这个操作”的指令,之后在该操作结束时获得同步响应。
- 客户端发出“做这个操作”的指令,并获得一个操作 ID之后需要检查一个 Operation操作
对象来判断请求是否成功完成。
- 你会将你的 API 类比为远程过程调用Remote Procedure CallRPCs
- 直接存储大量数据;例如每个对象几 kB或者存储上千个对象。
- 需要较高的访问带宽(长期保持每秒数十个请求)。
- 存储有应用来处理的最终用户数据如图片、个人标识信息PII或者其他大规模数据。
- 在对象上执行的常规操作并非 CRUD 风格。
- API 不太容易用对象来建模。
- 你决定使用操作 ID 或者操作对象来表现悬决的操作。
<!--
## Should I use a configMap or a custom resource?


@ -19,13 +19,19 @@ The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adap
and other similar computing resources that may require vendor specific initialization
and setup.
-->
Kubernetes 提供了一个[设备插件框架](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md),你可以用来将系统硬件资源发布到 {{< glossary_tooltip term_id="kubelet" >}}。
Kubernetes 提供了一个
[设备插件框架](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md),你可以用它来将系统硬件资源发布到 {{< glossary_tooltip term_id="kubelet" >}}。
供应商可以实现设备插件,由你手动部署或作为 {{< glossary_tooltip term_id="daemonset" >}} 来部署,而不必定制 Kubernetes 本身的代码。目标设备包括 GPU、高性能 NIC、FPGA、InfiniBand 适配器以及其他类似的、可能需要特定于供应商的初始化和设置的计算资源。
供应商可以实现设备插件,由你手动部署或作为 {{< glossary_tooltip term_id="daemonset" >}}
来部署,而不必定制 Kubernetes 本身的代码。目标设备包括 GPU、高性能 NIC、FPGA、
InfiniBand 适配器以及其他类似的、可能需要特定于供应商的初始化和设置的计算资源。
<!-- body -->
## 注册设备插件
<!--
## Device plugin registration
-->
## 注册设备插件 {#device-plugin-registration}
<!--
The kubelet exports a `Registration` gRPC service:
@ -45,7 +51,7 @@ During the registration, the device plugin needs to send:
* The name of its Unix socket.
* The Device Plugin API version against which it was built.
* The `ResourceName` it wants to advertise. Here `ResourceName` needs to follow the
[extended resource naming scheme](/docs/concepts/configuration/manage-resources-container/#extended-resources)
[extended resource naming scheme](/docs/concepts/configuration/manage-resources-containers/#extended-resources)
as `vendor-domain/resourcetype`.
(For example, an NVIDIA GPU is advertised as `nvidia.com/gpu`.)
@ -54,7 +60,7 @@ list of devices it manages, and the kubelet is then in charge of advertising tho
resources to the API server as part of the kubelet node status update.
For example, after a device plugin registers `hardware-vendor.example/foo` with the kubelet
and reports two healthy devices on a node, the node status is updated
to advertise that the node has 2 “Foo” devices installed and available.
to advertise that the node has 2 "Foo" devices installed and available.
-->
设备插件可以通过此 gRPC 服务在 kubelet 进行注册。在注册期间,设备插件需要发送下面几样内容:
@ -64,9 +70,12 @@ to advertise that the node has 2 “Foo” devices installed and available.
[扩展资源命名方案](/zh/docs/concepts/configuration/manage-resources-containers/#extended-resources)
类似于 `vendor-domain/resourcetype`。(比如 NVIDIA GPU 就被公布为 `nvidia.com/gpu`。)
成功注册后,设备插件就向 kubelet 发送他所管理的设备列表,然后 kubelet 负责将这些资源发布到 API 服务器,作为 kubelet 节点状态更新的一部分。
成功注册后,设备插件就向 kubelet 发送它所管理的设备列表,然后 kubelet
负责将这些资源发布到 API 服务器,作为 kubelet 节点状态更新的一部分。
比如,设备插件在 kubelet 中注册了 `hardware-vendor.example/foo` 并报告了节点上的两个运行状况良好的设备后节点状态将更新以通告该节点已安装2个 `Foo` 设备并且是可用的。
比如,设备插件在 kubelet 中注册了 `hardware-vendor.example/foo` 并报告了
节点上的两个运行状况良好的设备后,节点状态将更新以通告该节点已安装 2 个
"Foo" 设备并且是可用的。
<!--
Then, users can request devices in a
@ -105,9 +114,9 @@ spec:
hardware-vendor.example/foo: 2
#
# 这个 pod 需要两个 hardware-vendor.example/foo 设备
# 而且只能够调度到满足需求的 node
# 而且只能够调度到满足需求的节点
#
# 如果该节点中有2个以上的设备可用其余的可供其他 pod 使用
# 如果该节点中有 2 个以上的设备可用,其余的可供其他 Pod 使用
```
<!--
@ -121,14 +130,16 @@ The general workflow of a device plugin includes the following steps:
* The plugin starts a gRPC service, with a Unix socket under host path
`/var/lib/kubelet/device-plugins/`, that implements the following interfaces:
-->
## 设备插件的实现
## 设备插件的实现 {#device-plugin-implementation}
设备插件的常规工作流程包括以下几个步骤:
* 初始化。在这个阶段,设备插件将执行供应商特定的初始化和设置,以确保设备处于就绪状态。
* 插件使用主机路径 `/var/lib/kubelet/device-plugins/` 下的 Unix socket 启动一个 gRPC 服务,该服务实现以下接口:
* 初始化。在这个阶段,设备插件将执行供应商特定的初始化和设置,
以确保设备处于就绪状态。
* 插件使用主机路径 `/var/lib/kubelet/device-plugins/` 下的 Unix 套接字启动
一个 gRPC 服务,该服务实现以下接口:
<!--
```gRPC
service DevicePlugin {
// ListAndWatch returns a stream of List of Devices
@ -140,8 +151,58 @@ The general workflow of a device plugin includes the following steps:
// Plugin can run device specific operations and instruct Kubelet
// of the steps to make the Device available in the container
rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
// GetPreferredAllocation returns a preferred set of devices to allocate
// from a list of available ones. The resulting preferred allocation is not
// guaranteed to be the allocation ultimately performed by the
// devicemanager. It is only designed to help the devicemanager make a more
// informed allocation decision when possible.
rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}
// PreStartContainer is called, if indicated by Device Plugin during registeration phase,
// before each container start. Device plugin can run device specific operations
// such as resetting the device before making devices available to the container.
rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
}
```
-->
```gRPC
service DevicePlugin {
// ListAndWatch 返回 Device 列表构成的数据流。
// 当 Device 状态发生变化或者 Device 消失时ListAndWatch
// 会返回新的列表。
rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
// Allocate 在容器创建期间调用,这样设备插件可以运行一些特定于设备的操作,
  // 并指示 kubelet 为了让 Device 在容器中可用而需要执行的具体步骤
rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
// GetPreferredAllocation 从一组可用的设备中返回一些优选的设备用来分配,
// 所返回的优选分配结果不一定会是设备管理器的最终分配方案。
// 此接口的设计仅是为了让设备管理器能够在可能的情况下做出更有意义的决定。
rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}
// PreStartContainer 在设备插件注册阶段根据需要被调用,调用发生在容器启动之前。
// 在将设备提供给容器使用之前,设备插件可以运行一些诸如重置设备之类的特定于
  // 具体设备的操作。
rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
}
```
{{< note >}}
<!--
Plugins are not required to provide useful implementations for
`GetPreferredAllocation()` or `PreStartContainer()`. Flags indicating which
(if any) of these calls are available should be set in the `DevicePluginOptions`
message sent back by a call to `GetDevicePluginOptions()`. The `kubelet` will
always call `GetDevicePluginOptions()` to see which optional functions are
available, before calling any of them directly.
-->
插件并非必须为 `GetPreferredAllocation()``PreStartContainer()`
提供有用的实现逻辑。标明这些调用(如果有)是否可用的标志,应该通过
`GetDevicePluginOptions()` 调用所返回的 `DevicePluginOptions` 消息来设置。
`kubelet` 在直接调用这些函数之前,总会调用 `GetDevicePluginOptions()`
来查看哪些可选的函数可用。
{{< /note >}}
<!--
* The plugin registers itself with the kubelet through the Unix socket at host
@@ -155,7 +216,8 @@ If the operations succeed, the device plugin returns an `AllocateResponse` that
runtime configurations for accessing the allocated devices. The kubelet passes this information
to the container runtime.
-->
* 插件通过 Unix socket 在主机路径 `/var/lib/kubelet/device-plugins/kubelet.sock` 处向 kubelet 注册自身。
* 插件通过 Unix socket 在主机路径 `/var/lib/kubelet/device-plugins/kubelet.sock`
处向 kubelet 注册自身。
* 成功注册自身后,设备插件将以服务模式运行,在此期间,它将持续监控设备运行状况,
并在设备状态发生任何变化时向 kubelet 报告。它还负责响应 `Allocate` gRPC 请求。
`Allocate` 期间,设备插件可能还会做一些设备特定的准备;例如 GPU 清理或 QRNG 初始化。
@@ -174,8 +236,8 @@ of its Unix socket and re-register itself upon such an event.
设备插件应能监测到 kubelet 重启,并且向新的 kubelet 实例来重新注册自己。
在当前实现中,当 kubelet 重启的时候,新的 kubelet 实例会删除 `/var/lib/kubelet/device-plugins`
下所有已经存在的 Unix sockets
设备插件需要能够监控到它的 Unix socket 被删除,并且当发生此类事件时重新注册自己。
下所有已经存在的 Unix 套接字
设备插件需要能够监控到它的 Unix 套接字被删除,并且当发生此类事件时重新注册自己。
<!--
## Device plugin deployment
@@ -197,10 +259,11 @@ Pod onto Nodes, to restart the daemon Pod after failure, and to help automate up
你可以将你的设备插件作为节点操作系统的软件包来部署、作为 DaemonSet 来部署或者手动部署。
规范目录 `/var/lib/kubelet/device-plugins` 是需要特权访问的,所以设备插件必须要在被授权的安全的上下文中运行。
规范目录 `/var/lib/kubelet/device-plugins` 是需要特权访问的,所以设备插件
必须要在被授权的安全的上下文中运行。
如果你将设备插件部署为 DaemonSet`/var/lib/kubelet/device-plugins` 目录必须要在插件的
[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
中声明作为 {{< glossary_tooltip term_id="volume" >}} 被 mount 到插件中。
中声明作为 {{< glossary_tooltip term_id="volume" >}} 被挂载到插件中。
如果你选择 DaemonSet 方法,你可以借助 Kubernetes 完成以下操作:
将设备插件的 Pod 调度到节点上、在 Pod 失败后重启守护进程 Pod以及帮助自动完成升级。
@@ -296,11 +359,12 @@ gRPC 服务通过 `/var/lib/kubelet/pod-resources/kubelet.sock` 的 UNIX 套接
{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
The Topology Manager is a Kubelet component that allows resources to be co-ordintated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a `TopologyInfo` struct.
The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology aligned manner. In order to do this, the Device Plugin API was extended to include a `TopologyInfo` struct.
-->
## 设备插件与拓扑管理器的集成
{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
拓扑管理器是 Kubelet 的一个组件,它允许以拓扑对齐方式来调度资源。
为了做到这一点,设备插件 API 进行了扩展来包括一个 `TopologyInfo` 结构体。


@@ -12,31 +12,28 @@ weight: 10
<!-- overview -->
{{< feature-state state="alpha" >}}
<!--
{{< caution >}}Alpha features can change rapidly. {{< /caution >}}
-->
{{< caution >}}Alpha 特性可能很快会变化。{{< /caution >}}
<!--
Network plugins in Kubernetes come in a few flavors:
* CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
* CNI plugins: adhere to the [Container Network Interface](https://github.com/containernetworking/cni) (CNI) specification, designed for interoperability.
* Kubernetes follows the [v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md) release of the CNI specification.
* Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins
-->
Kubernetes 中的网络插件有几种类型:
* CNI 插件: 遵守 appc/CNI 规约,为互操作性设计。
* CNI 插件:遵守[容器网络接口Container Network InterfaceCNI](https://github.com/containernetworking/cni)
规范,其设计上偏重互操作性。
* Kubernetes 遵从 CNI 规范的
[v0.4.0](https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md)
版本。
* Kubenet 插件:使用 `bridge``host-local` CNI 插件实现了基本的 `cbr0`
<!-- body -->
<!--
## Installation
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as rkt manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
@@ -44,11 +41,14 @@ The kubelet has a single default network plugin, and a default network common to
## 安装
kubelet 有一个单独的默认网络插件,以及一个对整个集群通用的默认网络。
它在启动时探测插件,记住找到的内容,并在 pod 生命周期的适当时间执行所选插件(这仅适用于 Docker因为 rkt 管理自己的 CNI 插件)。
在使用插件时,需要记住两个 Kubelet 命令行参数:
它在启动时探测插件,记住找到的内容,并在 Pod 生命周期的适当时间执行
所选插件(这仅适用于 Docker因为 CRI 管理自己的 CNI 插件)。
在使用插件时,需要记住两个 kubelet 命令行参数:
* `cni-bin-dir` Kubelet 在启动时探测这个目录中的插件
* `network-plugin` 要使用的网络插件来自 `cni-bin-dir`。它必须与从插件目录探测到的插件报告的名称匹配。对于 CNI 插件,其值为 "cni"。
* `cni-bin-dir` kubelet 在启动时探测这个目录中的插件
* `network-plugin` 要使用的网络插件来自 `cni-bin-dir`
它必须与从插件目录探测到的插件报告的名称匹配。
对于 CNI 插件,其值为 "cni"。
<!--
## Network Plugin Requirements
@@ -59,12 +59,18 @@ By default if no kubelet network plugin is specified, the `noop` plugin is used,
-->
## 网络插件要求
除了提供[`NetworkPlugin` 接口](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go)来配置和清理 pod 网络之外,该插件还可能需要对 kube-proxy 的特定支持。
除了提供
[`NetworkPlugin` 接口](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go)
来配置和清理 Pod 网络之外,该插件还可能需要对 kube-proxy 的特定支持。
iptables 代理显然依赖于 iptables插件可能需要确保 iptables 能够监控容器的网络通信。
例如,如果插件将容器连接到 Linux 网桥,插件必须将 `net/bridge/bridge-nf-call-iptables` 系统参数设置为`1`,以确保 iptables 代理正常工作。
如果插件不使用 Linux 网桥(而是类似于 Open vSwitch 或者其它一些机制),它应该确保为代理对容器通信执行正确的路由。
例如,如果插件将容器连接到 Linux 网桥,插件必须将 `net/bridge/bridge-nf-call-iptables`
系统参数设置为 `1`,以确保 iptables 代理正常工作。
如果插件不使用 Linux 网桥(而是类似于 Open vSwitch 或者其它一些机制),
它应该确保为代理对容器通信执行正确的路由。
默认情况下,如果未指定 kubelet 网络插件,则使用 `noop` 插件,该插件设置 `net/bridge/bridge-nf-call-iptables=1`,以确保简单的配置(如带网桥的 Docker )与 iptables 代理正常工作。
默认情况下,如果未指定 kubelet 网络插件,则使用 `noop` 插件,
该插件设置 `net/bridge/bridge-nf-call-iptables=1`,以确保简单的配置
(如带网桥的 Docker与 iptables 代理正常工作。
<!--
### CNI
@@ -77,13 +83,20 @@ In addition to the CNI plugin specified by the configuration file, Kubernetes re
-->
### CNI
通过给 Kubelet 传递 `--network-plugin=cni` 命令行选项来选择 CNI 插件。
Kubelet 从 `--cni-conf-dir` (默认是 `/etc/cni/net.d` 读取文件并使用该文件中的 CNI 配置来设置每个 pod 的网络。
CNI 配置文件必须与 [CNI 规约](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration)匹配,并且配置引用的任何所需的 CNI 插件都必须存在于 `--cni-bin-dir`(默认是 `/opt/cni/bin`)。
通过给 Kubelet 传递 `--network-plugin=cni` 命令行选项可以选择 CNI 插件。
Kubelet 从 `--cni-conf-dir` (默认是 `/etc/cni/net.d` 读取文件并使用
该文件中的 CNI 配置来设置各个 Pod 的网络。
CNI 配置文件必须与
[CNI 规约](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration)
匹配,并且配置所引用的所有 CNI 插件都必须存在于
`--cni-bin-dir`(默认是 `/opt/cni/bin`)下。
如果这个目录中有多个 CNI 配置文件kubelet 将会使用按文件名的字典顺序排列的第一个作为配置文件。
如果这个目录中有多个 CNI 配置文件kubelet 将会使用按文件名的字典顺序排列
的第一个作为配置文件。
除了配置文件指定的 CNI 插件外Kubernetes 还需要标准的 CNI [`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) 插件最低版本是0.2.0。
除了配置文件指定的 CNI 插件外Kubernetes 还需要标准的 CNI
[`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go)
插件最低版本是0.2.0。
<!--
#### Support hostPort
@@ -96,8 +109,9 @@ For example:
-->
#### 支持 hostPort
CNI 网络插件支持 `hostPort`。 您可以使用官方 [portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap)
插件,它由 CNI 插件团队提供,或者使用您自己的带有 portMapping 功能的插件。
CNI 网络插件支持 `hostPort`。你可以使用官方
[portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap)
插件,它由 CNI 插件团队提供,或者使用你自己的带有 portMapping 功能的插件。
如果你想要启用 `hostPort` 支持,则必须在 `cni-conf-dir` 指定 `portMappings capability`
例如:
@@ -147,11 +161,13 @@ If you want to enable traffic shaping support, you must add the `bandwidth` plug
**实验功能**
CNI 网络插件还支持 pod 入口和出口流量整形。
您可以使用 CNI 插件团队提供的 [bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth) 插件,
也可以使用您自己的具有带宽控制功能的插件。
你可以使用 CNI 插件团队提供的
[bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth)
插件,也可以使用你自己的具有带宽控制功能的插件。
如果您想要启用流量整形支持,你必须将 `bandwidth` 插件添加到 CNI 配置文件
(默认是 `/etc/cni/net.d`)并保证该可执行文件包含在您的 CNI 的 bin 文件夹内 (默认为 `/opt/cni/bin`)。
如果你想要启用流量整形支持,你必须将 `bandwidth` 插件添加到 CNI 配置文件
(默认是 `/etc/cni/net.d`),并保证该可执行文件包含在你的 CNI 的 bin
文件夹内(默认为 `/opt/cni/bin`)。
```json
{
@@ -185,8 +201,8 @@ CNI 网络插件还支持 pod 入口和出口流量整形。
Now you can add the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations to your pod.
For example:
-->
现在,可以将 `kubernetes.io/ingress-bandwidth``kubernetes.io/egress-bandwidth` 注解添加到 pod 中。
例如:
现在,可以将 `kubernetes.io/ingress-bandwidth``kubernetes.io/egress-bandwidth`
注解添加到 pod 中。例如:
```yaml
apiVersion: v1
@@ -210,7 +226,7 @@ The plugin requires a few things:
* The standard CNI `bridge`, `lo` and `host-local` plugins are required, at minimum version 0.2.0. Kubenet will first search for them in `/opt/cni/bin`. Specify `cni-bin-dir` to supply additional search path. The first found match will take effect.
* Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
* Kubelet should also be run with the `--non-masquerade-cidr=<clusterCidr>` argument to ensure traffic to IPs outside this range will use IP masquerade.
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.
-->
### kubenet
@@ -218,16 +234,23 @@ Kubenet 是一个非常基本的、简单的网络插件,仅适用于 Linux。
它本身并不实现更高级的功能,如跨节点网络或网络策略。
它通常与为节点间通信设置路由规则的云驱动一起使用,或者用在单节点环境中。
Kubenet 创建名为 `cbr0` 的网桥,并为每个 pod 创建了一个 veth 对,每个 pod 的主机端都连接到 `cbr0`
这个 veth 对的 pod 端会被分配一个 IP 地址,该 IP 地址隶属于节点所被分配的 IP 地址范围内。节点的 IP 地址范围则通过配置或控制器管理器来设置。
Kubenet 创建名为 `cbr0` 的网桥,并为每个 Pod 创建一个 veth 对,
每个 veth 对的主机端都连接到 `cbr0`
veth 对的 Pod 端会被分配一个 IP 地址,该 IP 地址隶属于节点所被分配的
IP 地址范围。节点的 IP 地址范围则通过配置或控制器管理器来设置。
`cbr0` 被分配一个 MTU该 MTU 匹配主机上已启用的正常接口的最小 MTU。
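"`cbr0` 的 MTU 取主机上已启用的正常接口的最小 MTU"这一选取逻辑可以示意如下。这只是演示性的草图`minMTU` 为本示例虚构的函数名,真实实现还需要枚举并过滤主机的网络接口):

```go
package main

import "fmt"

// minMTU 返回一组已启用接口的 MTU 中的最小值;
// kubenet 会用类似的最小值作为 cbr0 网桥的 MTU示意
func minMTU(mtus []int) int {
	min := mtus[0]
	for _, m := range mtus[1:] {
		if m < min {
			min = m
		}
	}
	return min
}

func main() {
	// 假设主机上有三个已启用接口MTU 分别为 9001、1500、1460
	fmt.Println(minMTU([]int{9001, 1500, 1460})) // 1460
}
```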
使用此插件还需要一些其他条件:
* 需要标准的 CNI `bridge`、`lo` 以及 `host-local` 插件最低版本是0.2.0。Kubenet 首先在 `/opt/cni/bin` 中搜索它们。 指定 `cni-bin-dir` 以提供其它的搜索路径。首次找到的匹配将生效。
* 需要标准的 CNI `bridge`、`lo` 以及 `host-local` 插件,最低版本是 0.2.0。
kubenet 首先在 `/opt/cni/bin` 中搜索它们。指定 `cni-bin-dir` 以提供
其它搜索路径。首次找到的匹配将生效。
* Kubelet 必须和 `--network-plugin=kubenet` 参数一起运行,才能启用该插件。
* Kubelet 还应该和 `--non-masquerade-cidr=<clusterCidr>` 参数一起运行,以确保超出此范围的 IP 流量将使用 IP 伪装。
* 节点必须被分配一个 IP 子网通过kubelet 命令行的 `--pod-cidr` 选项或控制器管理器的命令行选项 `--allocate-node-cidrs=true --cluster-cidr=<cidr>` 来设置。
* Kubelet 还应该和 `--non-masquerade-cidr=<clusterCidr>` 参数一起运行,
以确保超出此范围的 IP 流量将使用 IP 伪装。
* 节点必须被分配一个 IP 子网,通过 kubelet 命令行的 `--pod-cidr` 选项或
控制器管理器的命令行选项 `--allocate-node-cidrs=true --cluster-cidr=<cidr>`
来设置。
<!--
### Customizing the MTU (with kubenet)
@@ -249,11 +272,11 @@ This option is provided to the network-plugin; currently **only kubenet supports
要获得最佳的网络性能,必须确保 MTU 的取值配置正确。
网络插件通常会尝试推断出一个合理的 MTU但有时候这个逻辑不会产生一个最优的 MTU。
例如,如果 Docker 网桥或其他接口有一个小的 MTU, kubenet 当前将选择该 MTU。
或者如果正在使用 IPSEC 封装,则必须减少 MTU并且这种计算超出了大多数网络插件的能力范围。
或者如果正在使用 IPSEC 封装,则必须减少 MTU并且这种计算超出了大多数网络插件的能力范围。
如果需要,可以使用 `network-plugin-mtu` kubelet 选项显式的指定 MTU。
例如:在 AWS 上 `eth0` MTU 通常是 9001因此可以指定 `--network-plugin-mtu=9001`
如果您正在使用 IPSEC ,您可以减少它以允许封装开销,例如 `--network-plugin-mtu=8873`
如果需要,可以使用 `network-plugin-mtu` kubelet 选项显式地指定 MTU。
例如:在 AWS 上 `eth0` MTU 通常是 9001因此可以指定 `--network-plugin-mtu=9001`
如果你正在使用 IPSEC可以减小此值以容纳封装开销例如 `--network-plugin-mtu=8873`
此选项会传递给网络插件;当前 **仅 kubenet 支持 `network-plugin-mtu`**。
@@ -264,14 +287,15 @@ This option is provided to the network-plugin; currently **only kubenet supports
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
* `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.
-->
## 使用总结
## 用总结
* `--network-plugin=cni` 用来表明我们要使用 `cni` 网络插件,实际的 CNI 插件可执行文件位于 `--cni-bin-dir`(默认是 `/opt/cni/bin`)下, CNI 插件配置位于 `--cni-conf-dir`(默认是 `/etc/cni/net.d`)下。
* `--network-plugin=kubenet` 用来表明我们要使用 `kubenet` 网络插件CNI `bridge``host-local` 插件位于 `/opt/cni/bin``cni-bin-dir` 中。
* `--network-plugin=cni` 用来表明我们要使用 `cni` 网络插件,实际的 CNI 插件
可执行文件位于 `--cni-bin-dir`(默认是 `/opt/cni/bin`下CNI 插件配置位于
`--cni-conf-dir`(默认是 `/etc/cni/net.d`)下。
* `--network-plugin=kubenet` 用来表明我们要使用 `kubenet` 网络插件CNI `bridge`
`host-local` 插件位于 `/opt/cni/bin``cni-bin-dir` 中。
* `--network-plugin-mtu=9001` 指定了我们使用的 MTU当前仅被 `kubenet` 网络插件使用。
## {{% heading "whatsnext" %}}


@@ -4,7 +4,6 @@ content_type: concept
weight: 10
---
<!--
---
title: Extending your Kubernetes Cluster
reviewers:
- erictune
@@ -13,7 +12,6 @@ reviewers:
- chenopis
content_type: concept
weight: 10
---
-->
<!-- overview -->
@@ -89,7 +87,7 @@ Flags and configuration files may not always be changeable in a hosted Kubernete
它们是声明性的,并使用与其他 Kubernetes 资源(如 Pod )相同的约定,所以新的集群配置可以重复使用,
并以与应用程序相同的方式进行管理。
而且,当它们变稳定后,也遵循和其他 Kubernetes API 一样的
[支持政策](/docs/reference/using-api/deprecation-policy/)。
[支持政策](/zh/docs/reference/using-api/deprecation-policy/)。
出于这些原因,在合适的情况下它们优先于 *配置文件**标志* 被使用。
<!--
@@ -238,12 +236,13 @@ For more about Custom Resources, see the [Custom Resources concept guide](/docs/
### 用户自定义类型 {#user-defined-types}
如果你想定义新的控制器、应用程序配置对象或其他声明式 API并使用 Kubernetes 工具(如 `kubectl`)管理它们,请考虑为 Kubernetes 添加一个自定义资源。
如果你想定义新的控制器、应用程序配置对象或其他声明式 API并使用 Kubernetes
工具(如 `kubectl`)管理它们,请考虑为 Kubernetes 添加一个自定义资源。
不要使用自定义资源作为应用、用户或者监控数据的数据存储。
有关自定义资源的更多信息,请查看
[自定义资源概念指南](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。
[自定义资源概念指南](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。
<!--
### Combining New APIs with Automation
@@ -272,7 +271,7 @@ Adding an API does not directly let you affect the behavior of existing APIs (e.
<!--
### API Access Extensions
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) for more on this flow.
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/) for more on this flow.
Each of these steps offers extension points.
@@ -282,13 +281,13 @@ Kubernetes has several built-in authentication methods that it supports. It can
当请求到达 Kubernetes API Server 时,它首先被要求进行用户认证,然后要进行授权检查,
接着受到各种类型的准入控制的检查。有关此流程的更多信息,请参阅
[Kubernetes API 访问控制](/zh/docs/reference/access-authn-authz/controlling-access/)。
[Kubernetes API 访问控制](/zh/docs/concepts/security/controlling-access/)。
上述每个步骤都提供了扩展点。
Kubernetes 支持几种内置的身份认证方法。它还可以位于身份认证代理之后,并将 Authorization 头部
中的令牌发送给远程服务webhook进行验证。所有这些方法都在
[身份验证文档](/docs/reference/access-authn-authz/authentication/)中介绍。
[身份验证文档](/zh/docs/reference/access-authn-authz/authentication/)中介绍。
<!--
### Authentication
@@ -299,11 +298,11 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
-->
### 身份认证 {#authentication}
[身份认证](/docs/reference/access-authn-authz/authentication/)
[身份认证](/zh/docs/reference/access-authn-authz/authentication/)
将所有请求中的头部字段或证书映射为发出请求的客户端的用户名。
Kubernetes 提供了几种内置的身份认证方法,如果这些方法不符合你的需求,可以使用
[身份认证 Webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) 方法。
[身份认证 Webhook](/zh/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) 方法。
<!--
### Authorization


@@ -16,12 +16,13 @@ weight: 30
Operators are software extensions to Kubernetes that make use of [custom
resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to manage applications and their components. Operators follow
Kubernetes principles, notably the [control loop](/docs/concepts/#kubernetes-control-plane).
Kubernetes principles, notably the [control loop](/docs/concepts/architecture/controller/).
-->
Operator 是 Kubernetes 的扩展软件,它利用
[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)管理应用及其组件。
Operator 遵循 Kubernetes 的理念,特别是在[控制回路](/zh/docs/concepts/#kubernetes-control-plane)方面。
Operator 遵循 Kubernetes 的理念,特别是在[控制回路](/zh/docs/concepts/architecture/controller/)
方面。
<!-- body -->
@@ -43,7 +44,8 @@ code to automate a task beyond what Kubernetes itself provides.
Operator 模式旨在捕获(正在管理一个或一组服务的)运维人员的关键目标。
负责特定应用和 service 的运维人员,在系统应该如何运行、如何部署以及出现问题时如何处理等方面有深入的了解。
在 Kubernetes 上运行工作负载的人们都喜欢通过自动化来处理重复的任务。Operator 模式会封装您编写的Kubernetes 本身提供功能以外的)任务自动化代码。
在 Kubernetes 上运行工作负载的人们都喜欢通过自动化来处理重复的任务。
Operator 模式会封装你编写的Kubernetes 本身提供功能以外的)任务自动化代码。
<!--
## Operators in Kubernetes
@@ -57,14 +59,15 @@ Kubernetes' {{< glossary_tooltip text="controllers" term_id="controller" >}}
concept lets you extend the cluster's behaviour without modifying the code
of Kubernetes itself.
Operators are clients of the Kubernetes API that act as controllers for
a [Custom Resource](/docs/concepts/api-extension/custom-resources/).
a [Custom Resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
-->
## Kubernetes 上的 Operator
Kubernetes 为自动化而生。无需任何修改,即可以从 Kubernetes 核心中获得许多内置的自动化功能。
可以使用 Kubernetes 自动化部署和运行工作负载, *甚至* 可以自动化 Kubernetes 自身。
Kubernetes 为自动化而生。无需任何修改,即可以从 Kubernetes 核心中获得许多内置的自动化功能。
可以使用 Kubernetes 自动化部署和运行工作负载, *甚至* 可以自动化 Kubernetes 自身。
Kubernetes {{< glossary_tooltip text="控制器" term_id="controller" >}} 使您无需修改 Kubernetes 自身的代码,即可以扩展集群的行为。
Kubernetes {{< glossary_tooltip text="控制器" term_id="controller" >}}
使你无需修改 Kubernetes 自身的代码,即可以扩展集群的行为。
Operator 是 Kubernetes API 的客户端,充当
[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)的控制器。
@@ -123,15 +126,21 @@ detail:
想要更详细的了解 Operator这儿有一个详细的示例
1. 有一个名为 SampleDB 的自定义资源,可以将其配置到集群中。
1. 有一个名为 SampleDB 的自定义资源,可以将其配置到集群中。
2. 一个包含 Operator 控制器部分的 Deployment用来确保 Pod 处于运行状态。
3. Operator 代码的容器镜像。
4. 控制器代码,负责查询控制平面以找出已配置的 SampleDB 资源。
5. Operator 的核心是告诉 API 服务器,如何使现实与代码里配置的资源匹配。
* 如果添加新的 SampleDBOperator 将设置 PersistentVolumeClaims 以提供持久化的数据库存储,设置 StatefulSet 以运行 SampleDB并设置 Job 来处理初始配置。
* 如果您删除它Operator 将建立快照,然后确保 StatefulSet 和 Volume 已被删除。
6. Operator 也可以管理常规数据库的备份。对于每个 SampleDB 资源Operator 会确定何时创建可以连接到数据库并进行备份的Pod。这些 Pod 将依赖于 ConfigMap 和/或 具有数据库连接详细信息和凭据的 Secret。
7. 由于 Operator 旨在为其管理的资源提供强大的自动化功能,因此它还需要一些额外的支持性代码。在这个示例中,代码将检查数据库是否正运行在旧版本上,如果是,则创建 Job 对象为您升级数据库。
* 如果添加新的 SampleDBOperator 将设置 PersistentVolumeClaims 以提供
持久化的数据库存储,设置 StatefulSet 以运行 SampleDB并设置 Job
来处理初始配置。
* 如果你删除它Operator 将建立快照,然后确保 StatefulSet 和 Volume 已被删除。
6. Operator 也可以管理常规数据库的备份。对于每个 SampleDB 资源Operator
会确定何时创建可以连接到数据库并进行备份的 Pod。这些 Pod 将依赖于
ConfigMap 和/或具有数据库连接详细信息和凭据的 Secret。
7. 由于 Operator 旨在为其管理的资源提供强大的自动化功能,因此它还需要一些
额外的支持性代码。在这个示例中,代码将检查数据库是否正运行在旧版本上,
如果是,则创建 Job 对象为你升级数据库。
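上面第 4、5 步所描述的"对比期望状态与实际状态并使二者收敛"的控制回路,可以用一段与 Kubernetes API 无关的 Go 代码勾勒如下。这只是一个示意性的最小草图,`SampleDB`、`reconcile` 及其字段和返回值都是本示例的假设,并非任何真实 Operator 框架的接口:

```go
package main

import "fmt"

// SampleDB 示意一个"自定义资源"Desired 是声明的期望副本数,
// Actual 是集群中观测到的实际副本数。
type SampleDB struct {
	Desired int
	Actual  int
}

// reconcile 每次被调用时让实际状态向期望状态收敛一步,
// 并返回所执行动作的描述;真实 Operator 中,这一步会
// 创建/删除 StatefulSet、PVC 等对象。
func reconcile(db *SampleDB) string {
	switch {
	case db.Actual < db.Desired:
		db.Actual++
		return "scale-up"
	case db.Actual > db.Desired:
		db.Actual--
		return "scale-down"
	default:
		return "in-sync"
	}
}

func main() {
	db := &SampleDB{Desired: 3, Actual: 0}
	// 控制回路:反复调和,直到实际状态与期望状态一致
	for reconcile(db) != "in-sync" {
	}
	fmt.Println("actual =", db.Actual) // actual = 3
}
```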
<!--
## Deploying Operators
@@ -145,7 +154,10 @@ For example, you can run the controller in your cluster as a Deployment.
-->
## 部署 Operator
部署 Operator 最常见的方法是将自定义资源及其关联的控制器添加到您的集群中。跟运行容器化应用一样Controller 通常会运行在 {{< glossary_tooltip text="控制平面" term_id="control-plane" >}} 之外。例如,您可以在集群中将控制器作为 Deployment 运行。
部署 Operator 最常见的方法是将自定义资源及其关联的控制器添加到你的集群中。
跟运行容器化应用一样,控制器通常会运行在
{{< glossary_tooltip text="控制平面" term_id="control-plane" >}} 之外。
例如,你可以在集群中将控制器作为 Deployment 运行。
<!--
## Using an Operator {#using-operators}
@@ -160,10 +172,10 @@ kubectl get SampleDB # find configured databases
kubectl edit SampleDB/example-database # manually change some settings
```
-->
## 使用 Operator {#using-operators}
部署 Operator 后,您可以对 Operator 所使用的资源执行添加、修改或删除操作。按照上面的示例,您将为 Operator 本身建立一个 Deployment然后
部署 Operator 后,你可以对 Operator 所使用的资源执行添加、修改或删除操作。
按照上面的示例,你将为 Operator 本身建立一个 Deployment然后
```shell
kubectl get SampleDB # 查找所配置的数据库
@@ -176,8 +188,7 @@ kubectl edit SampleDB/example-database # 手动修改某些配置
## Writing your own Operator {#writing-operator}
-->
可以了Operator 会负责应用所作的更改并保持现有服务处于良好的状态
可以了Operator 会负责应用所作的更改并保持现有服务处于良好的状态。
## 编写你自己的 Operator {#writing-operator}
@@ -191,9 +202,11 @@ You also implement an Operator (that is, a Controller) using any language / runt
that can act as a [client for the Kubernetes API](/docs/reference/using-api/client-libraries/).
-->
如果生态系统中没可以实现您目标的 Operator您可以自己编写代码。在[接下来](#what-s-next)一节中,您会找到编写自己的云原生 Operator 需要的库和工具的链接。
如果生态系统中没有可以实现你目标的 Operator你可以自己编写代码。
[接下来](#what-s-next)一节中,你会找到编写自己的云原生 Operator
所需要的库和工具的链接。
您还可以使用任何支持 [Kubernetes API 客户端](/zh/docs/reference/using-api/client-libraries/)
还可以使用任何支持 [Kubernetes API 客户端](/zh/docs/reference/using-api/client-libraries/)
的语言或运行时来实现 Operator即控制器
## {{% heading "whatsnext" %}}
@@ -206,20 +219,20 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie
* using [kubebuilder](https://book.kubebuilder.io/)
* using [Metacontroller](https://metacontroller.app/) along with WebHooks that
you implement yourself
* using the [Operator Framework](https://github.com/operator-framework/getting-started)
* using the [Operator Framework](https://operatorframework.io)
* [Publish](https://operatorhub.io/) your operator for other people to use
* Read [CoreOS' original article](https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators
-->
* 详细了解[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合的 Operator
* 借助已有的工具来编写自己的 Operator例如
* 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合的 Operator
* 借助已有的工具来编写自己的 Operator例如
* [KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator)
* [kubebuilder](https://book.kubebuilder.io/)
* [Metacontroller](https://metacontroller.app/),可与 Webhook 结合使用,以实现自己的功能。
* [Operator 框架](https://github.com/operator-framework/getting-started)
* [发布](https://operatorhub.io/)的 Operator让别人也可以使用
* [Operator Framework](https://operatorframework.io)
* [发布](https://operatorhub.io/)的 Operator让别人也可以使用
* 阅读 [CoreOS 原文](https://coreos.com/blog/introducing-operators.html),其中介绍了 Operator 模式
* 阅读这篇来自谷歌云的关于构建 Operator 最佳实践的
[文章](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)

View File

@@ -435,7 +435,7 @@ The following example describes how to map secret values into application enviro
<!--
* If you are familiar with {{< glossary_tooltip text="Helm Charts" term_id="helm-chart" >}}, [install Service Catalog using Helm](/docs/tasks/service-catalog/install-service-catalog-using-helm/) into your Kubernetes cluster. Alternatively, you can [install Service Catalog using the SC tool](/docs/tasks/service-catalog/install-service-catalog-using-sc/).
* View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers).
* Explore the [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) project.
* Explore the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) project.
* View [svc-cat.io](https://svc-cat.io/docs/).
-->
* 如果你熟悉 {{< glossary_tooltip text="Helm Charts" term_id="helm-chart" >}}
@@ -443,7 +443,7 @@ The following example describes how to map secret values into application enviro
到 Kubernetes 集群中。或者,你可以
[使用 SC 工具安装服务目录](/zh/docs/tasks/service-catalog/install-service-catalog-using-sc/)。
* 查看[服务代理示例](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers)
* 浏览 [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) 项目
* 浏览 [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) 项目
* 查看 [svc-cat.io](https://svc-cat.io/docs/)


@@ -1,11 +1,11 @@
---
title: 驱逐策略
content_template: templates/concept
content_type: concept
weight: 60
---
<!--
title: Eviction Policy
content_template: templates/concept
content_type: concept
weight: 60
-->
@@ -20,25 +20,28 @@ This page is an overview of Kubernetes' policy for eviction.
<!--
## Eviction Policy
The {{< glossary_tooltip text="Kubelet" term_id="kubelet" >}} can proactively monitor for and prevent total starvation of a
compute resource. In those cases, the `kubelet` can reclaim the starved
resource by proactively failing one or more Pods. When the `kubelet` fails
The {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} proactively monitors for
and prevents total starvation of a compute resource. In those cases, the `kubelet` can reclaim
the starved resource by failing one or more Pods. When the `kubelet` fails
a Pod, it terminates all of its containers and transitions its `PodPhase` to `Failed`.
If the evicted Pod is managed by a Deployment, the Deployment will create another Pod
If the evicted Pod is managed by a Deployment, the Deployment creates another Pod
to be scheduled by Kubernetes.
-->
## 驱逐策略 {#eviction-policy}
{{< glossary_tooltip text="Kubelet" term_id="kubelet" >}} 能够主动监测和防止计算资源的全面短缺。
在资源短缺的情况下,`kubelet` 可以主动地结束一个或多个 Pod 以回收短缺的资源。
`kubelet` 结束一个 Pod 时,它将终止 Pod 中的所有容器,而 Pod 的 `Phase` 将变为 `Failed`
如果被驱逐的 Pod 由 Deployment 管理,这个 Deployment 会创建另一个 Pod 给 Kubernetes 来调度。
{{< glossary_tooltip text="Kubelet" term_id="kubelet" >}} 主动监测和防止
计算资源的全面短缺。在资源短缺时,`kubelet` 可以主动地结束一个或多个 Pod
以回收短缺的资源。
`kubelet` 结束一个 Pod 时,它将终止 Pod 中的所有容器,而 Pod 的 `Phase`
将变为 `Failed`
如果被驱逐的 Pod 由 Deployment 管理,这个 Deployment 会创建另一个 Pod 给
Kubernetes 来调度。
## {{% heading "whatsnext" %}}
<!--
- Read [Configure out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) to learn more about eviction signals, thresholds, and handling.
- Learn how to [configure out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) with eviction signals and thresholds.
-->
- 阅读[配置资源不足的处理](/zh/docs/tasks/administer-cluster/out-of-resource/)
进一步了解驱逐信号、阈值以及处理方法
进一步了解驱逐信号和阈值


@@ -26,23 +26,25 @@ The kube-scheduler can be configured to enable bin packing of resources along wi
<!--
## Enabling Bin Packing using RequestedToCapacityRatioResourceAllocation
Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
Before Kubernetes 1.15, Kube-scheduler used to allow scoring nodes based on the request to capacity ratio of primary resources like CPU and Memory. Kubernetes 1.16 added a new parameter to the priority function that allows the users to specify the resources along with weights for each resource to score nodes based on the request to capacity ratio. This allows users to bin pack extended resources by using appropriate parameters and improves the utilization of scarce resources in large clusters. The behavior of the `RequestedToCapacityRatioResourceAllocation` priority function can be controlled by a configuration option called `requestedToCapacityRatioArguments`. This argument consists of two parameters `shape` and `resources`. Shape allows the user to tune the function as least requested or most requested based on `utilization` and `score` values. Resources
consists of `name` which specifies the resource to be considered during scoring and `weight` specify the weight of each resource.
-->
## 使用 RequestedToCapacityRatioResourceAllocation 启用装箱
在 Kubernetes 1.15 之前Kube-scheduler 通常允许根据对主要资源(如 CPU 和内存)的请求数量和可用容量
之比率对节点评分。
在 Kubernetes 1.15 之前kube-scheduler 就允许根据对主要资源(如 CPU 和内存)
的请求量与可用容量之比率对节点评分。
Kubernetes 1.16 在优先级函数中添加了一个新参数,该参数允许用户指定资源以及每类资源的权重,
以便根据请求数量与可用容量之比率为节点评分。
这就使得用户可以通过使用适当的参数来对扩展资源执行装箱操作,从而提高了大型集群中稀缺资源的利用率。
`RequestedToCapacityRatioResourceAllocation` 优先级函数的行为可以通过名为
`requestedToCapacityRatioArguments` 的配置选项进行控制。
该标志由两个参数 `shape``resources` 组成。
shape 允许用户根据 `utilization``score` 值将函数调整为最少请求least requested
`shape` 允许用户根据 `utilization``score` 值将函数调整为
最少请求least requested或最多请求most requested计算。
resources 由 `name``weight` 组成,`name` 指定评分时要考虑的资源,`weight` 指定每种资源的权重。
`resources``name``weight` 组成,`name` 指定评分时要考虑的资源,
`weight` 指定每种资源的权重。
<!--
Below is an example configuration that sets `requestedToCapacityRatioArguments` to bin packing behavior for extended resources `intel.com/foo` and `intel.com/bar`
@@ -53,29 +55,29 @@ Below is an example configuration that sets `requestedToCapacityRatioArguments`
```json
{
"kind" : "Policy",
"apiVersion" : "v1",
...
"priorities" : [
...
{
"name": "RequestedToCapacityRatioPriority",
"weight": 2,
"argument": {
"requestedToCapacityRatioArguments": {
"shape": [
{"utilization": 0, "score": 0},
{"utilization": 100, "score": 10}
],
"resources": [
{"name": "intel.com/foo", "weight": 3},
{"name": "intel.com/bar", "weight": 5}
]
}
"kind": "Policy",
"apiVersion": "v1",
...
"priorities": [
...
{
"name": "RequestedToCapacityRatioPriority",
"weight": 2,
"argument": {
"requestedToCapacityRatioArguments": {
"shape": [
{"utilization": 0, "score": 0},
{"utilization": 100, "score": 10}
],
"resources": [
{"name": "intel.com/foo", "weight": 3},
{"name": "intel.com/bar", "weight": 5}
]
}
}
],
}
}
],
}
```
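上面配置所描述的打分逻辑(对每种资源按 `shape` 的各点对利用率做线性插值得到分数,再按 `weight` 加权平均),可以用如下 Go 代码示意。这只是按文档描述写成的草图(线性插值与加权平均是本示例的假设,`Point`、`interpolate`、`score` 等名称为虚构),并非 kube-scheduler 的实际实现:

```go
package main

import "fmt"

// Point 对应配置中 shape 的一个点utilization 取 0-100score 取 0-10。
type Point struct{ Utilization, Score float64 }

// interpolate 在 shape 各点之间做线性插值,得到给定利用率下的得分。
func interpolate(shape []Point, utilization float64) float64 {
	if utilization <= shape[0].Utilization {
		return shape[0].Score
	}
	for i := 1; i < len(shape); i++ {
		p0, p1 := shape[i-1], shape[i]
		if utilization <= p1.Utilization {
			t := (utilization - p0.Utilization) / (p1.Utilization - p0.Utilization)
			return p0.Score + t*(p1.Score-p0.Score)
		}
	}
	return shape[len(shape)-1].Score
}

// score 对每种资源按 requested/capacity 比率插值打分,再按权重加权平均。
func score(shape []Point, weights, requested, capacity map[string]float64) float64 {
	var sum, weightSum float64
	for name, w := range weights {
		util := 100 * requested[name] / capacity[name]
		sum += w * interpolate(shape, util)
		weightSum += w
	}
	return sum / weightSum
}

func main() {
	// 利用率越高得分越高:即装箱bin packing行为
	shape := []Point{{0, 0}, {100, 10}}
	weights := map[string]float64{"intel.com/foo": 3, "intel.com/bar": 5}
	s := score(shape, weights,
		map[string]float64{"intel.com/foo": 2, "intel.com/bar": 4},
		map[string]float64{"intel.com/foo": 4, "intel.com/bar": 8})
	fmt.Println(s) // 两种资源利用率均为 50%,得分为 5
}
```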
<!--
@@ -89,7 +91,6 @@ Below is an example configuration that sets `requestedToCapacityRatioArguments`
`shape` is used to specify the behavior of the `RequestedToCapacityRatioPriority` function.
-->
### 调整 RequestedToCapacityRatioResourceAllocation 优先级函数
`shape` 用于指定 `RequestedToCapacityRatioPriority` 函数的行为。
@@ -103,8 +104,9 @@ Below is an example configuration that sets `requestedToCapacityRatioArguments`
The above arguments give the node a score of 0 if utilization is 0% and 10 for utilization 100%, thus enabling bin packing behavior. To enable least requested the score value must be reversed as follows.
-->
上面的参数在 utilization 为 0% 时给节点评分为 0在 utilization 为 100% 时给节点评分为 10
因此启用了装箱行为。要启用最少请求least requested模式必须按如下方式反转得分值。
上面的参数在 `utilization` 为 0% 时给节点评分为 0`utilization`
100% 时给节点评分为 10因此启用了装箱行为。
要启用最少请求least requested模式必须按如下方式反转得分值。
```yaml
{"utilization": 0, "score": 100},


@@ -54,7 +54,8 @@ You configure this tuning setting via kube-scheduler setting
`percentageOfNodesToScore`. This KubeSchedulerConfiguration setting determines
a threshold for scheduling nodes in your cluster.
-->
在大规模集群中,你可以调节调度器的表现来平衡调度的延迟(新 Pod 快速就位)和精度(调度器很少做出糟糕的放置决策)。
在大规模集群中,你可以调节调度器的表现来平衡调度的延迟(新 Pod 快速就位)
和精度(调度器很少做出糟糕的放置决策)。
你可以通过设置 kube-scheduler 的 `percentageOfNodesToScore` 来配置这个调优设置。
这个 KubeSchedulerConfiguration 设置决定了在集群中调度节点的阈值。
@@ -71,33 +72,32 @@ should use its compiled-in default.
If you set `percentageOfNodesToScore` above 100, kube-scheduler acts as if you
had set a value of 100.
-->
`percentageOfNodesToScore` 选项接受从 0 到 100 之间的整数值。
0 值比较特殊,表示 kube-scheduler 应该使用其编译后的默认值。
如果你设置 `percentageOfNodesToScore` 的值超过了 100
kube-scheduler 的表现等价于设置值为 100。
<!--
To change the value, edit the kube-scheduler configuration file (this is likely
to be `/etc/kubernetes/config/kube-scheduler.yaml`), then restart the scheduler.
-->
要修改这个值,编辑 kube-scheduler 的配置文件
(通常是 `/etc/kubernetes/config/kube-scheduler.yaml`
然后重启调度器。
<!--
After you have made this change, you can run
-->
修改完成后,你可以执行
```bash
kubectl get pods -n kube-system | grep kube-scheduler
```
<!--
to verify that the kube-scheduler component is healthy.
-->
来检查该 kube-scheduler 组件是否健康。
<!--
## Node scoring threshold {#percentage-of-nodes-to-score}
@ -109,7 +109,8 @@ To improve scheduling performance, the kube-scheduler can stop looking for
feasible nodes once it has found enough of them. In large clusters, this saves
time compared to a naive approach that would consider every node.
-->
要提升调度性能kube-scheduler 可以在找到足够的可调度节点之后停止查找。在大规模集群中,比起考虑每个节点的简单方法相比可以节省时间。
要提升调度性能kube-scheduler 可以在找到足够的可调度节点之后停止查找。
在大规模集群中,比起考虑每个节点的简单方法相比可以节省时间。
<!--
You specify a threshold for how many nodes are enough, as a whole number percentage
@ -141,8 +142,8 @@ If you don't specify a threshold, Kubernetes calculates a figure using a
linear formula that yields 50% for a 100-node cluster and yields 10%
for a 5000-node cluster. The lower bound for the automatic value is 5%.
-->
如果你不指定阈值Kubernetes 使用线性公式计算出一个比例,在 100-node 集群下取 50%,在 5000-node 的集群下取 10%。
这个自动设置的参数的最低值是 5%。
如果你不指定阈值Kubernetes 使用线性公式计算出一个比例,在 100-节点集群
下取 50%,在 5000-节点的集群下取 10%。这个自动设置的参数的最低值是 5%。
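满足上述描述(100 节点时取 50%、5000 节点时取 10%、下限为 5%)的一个线性公式可以用 Go 示意如下(这是本文为说明而写的推算,并非调度器源码):

```go
package main

import "fmt"

// defaultPercentage 按正文描述的线性规则估算默认的 percentageOfNodesToScore:
// 100 节点的集群取 50%,5000 节点的集群取 10%,下限为 5%(示意实现)。
func defaultPercentage(numNodes int) int {
	p := 50 - numNodes/125
	if p < 5 {
		p = 5
	}
	return p
}

func main() {
	for _, n := range []int{100, 5000, 10000} {
		fmt.Printf("%d 节点 -> %d%%\n", n, defaultPercentage(n))
	}
}
```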
<!--
This means that, the kube-scheduler always scores at least 5% of your cluster no
@ -205,12 +206,14 @@ scheduler's performance significantly.
{{< /note >}}
-->
{{< note >}}
当集群中的可调度节点少于 50 个时,调度器仍然会去检查所有的 Node因为可调度节点太少不足以停止调度器最初的过滤选择。
当集群中的可调度节点少于 50 个时,调度器仍然会去检查所有的 Node
因为可调度节点太少,不足以停止调度器最初的过滤选择。
同理,在小规模集群中,如果你将 `percentageOfNodesToScore` 设置为
一个较低的值,则没有或者只有很小的效果。
如果集群只有几百个节点或者更少,请保持这个配置的默认值。
改变基本不会对调度器的性能有明显的提升。
{{< /note >}}
<!--
@ -226,9 +229,15 @@ percentage to anything below 10%, unless the scheduler's throughput is critical
for your application and the score of nodes is not important. In other words, you
prefer to run the Pod on any Node as long as it is feasible.
-->
值得注意的是,该参数设置后可能会导致只有集群中少数节点被选为可调度节点,
很多节点都没有进入到打分阶段。这样就会造成一种后果,
一个本来可以在打分阶段得分很高的节点甚至都不能进入打分阶段。
由于这个原因,这个参数不应该被设置成一个很低的值。
通常的做法是不会将这个参数的值设置得低于 10。
很低的参数值一般在调度器的吞吐量很高且对节点的打分不重要的情况下才使用。
换句话说,只有当你更倾向于在可调度节点中任意选择一个节点来运行这个 Pod 时,
才使用很低的参数设置。
<!--
### How the scheduler iterates over Nodes
@ -250,14 +259,20 @@ Nodes as specified by `percentageOfNodesToScore`. For the next Pod, the
scheduler continues from the point in the Node array that it stopped at when
checking feasibility of Nodes for the previous Pod.
-->
在将 Pod 调度到节点上时,为了让集群中所有节点都有公平的机会去运行这些 Pod
调度器将会以轮询的方式覆盖全部的 Node。
你可以将 Node 列表想象成一个数组。调度器从数组的头部开始筛选可调度节点,
依次向后直到可调度节点的数量达到 `percentageOfNodesToScore` 参数的要求。
在对下一个 Pod 进行调度的时候,前一个 Pod 调度筛选停止的 Node 列表的位置,
将会来作为这次调度筛选 Node 开始的位置。
<!--
If Nodes are in multiple zones, the scheduler iterates over Nodes in various
zones to ensure that Nodes from different zones are considered in the
feasibility checks. As an example, consider six nodes in two zones:
-->
如果集群中的 Node 在多个区域,那么调度器将从不同的区域中轮询 Node来确保不同区域的 Node 接受可调度性检查。如下例,考虑两个区域中的六个节点:
如果集群中的 Node 在多个区域,那么调度器将从不同的区域中轮询 Node
来确保不同区域的 Node 接受可调度性检查。如下例,考虑两个区域中的六个节点:
```
Zone 1: Node 1, Node 2, Node 3, Node 4
Zone 2: Node 5, Node 6
```
@ -278,4 +293,3 @@ After going over all the Nodes, it goes back to Node 1.
-->
在评估完所有 Node 后,将会返回到 Node 1从头开始。
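上述跨区域轮询的遍历顺序可以用如下 Go 代码示意(这是本文假设的演示实现,并非调度器源码):

```go
package main

import "fmt"

// interleaveZones 以轮询方式在各区域之间交替选取节点,
// 模拟调度器遍历节点的顺序(示意实现)。
func interleaveZones(zones [][]string) []string {
	var order []string
	for i := 0; ; i++ {
		advanced := false
		for _, zone := range zones {
			if i < len(zone) {
				order = append(order, zone[i])
				advanced = true
			}
		}
		if !advanced {
			return order
		}
	}
}

func main() {
	zone1 := []string{"Node 1", "Node 2", "Node 3", "Node 4"}
	zone2 := []string{"Node 5", "Node 6"}
	fmt.Println(interleaveZones([][]string{zone1, zone2}))
}
```

对上面两个区域的例子,得到的遍历顺序为 Node 1、Node 5、Node 2、Node 6、Node 3、Node 4。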

View File

@ -7,13 +7,11 @@ weight: 70
---
<!--
---
reviewers:
- ahg-g
title: Scheduling Framework
content_type: concept
weight: 60
---
-->
<!-- overview -->
@ -29,19 +27,17 @@ scheduling "core" simple and maintainable. Refer to the [design proposal of the
scheduling framework][kep] for more technical information on the design of the
framework.
-->
调度框架是 Kubernetes Scheduler 的一种可插入架构,可以简化调度器的自定义。
它向现有的调度器增加了一组新的“插件” API。插件被编译到调度器程序中。
这些 API 允许大多数调度功能以插件的形式实现,同时使调度“核心”保持简单且可维护。
请参考[调度框架的设计提案](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/624-scheduling-framework/README.md)
获取框架设计的更多技术信息。
<!-- body -->
<!--
# Framework workflow
-->
# 框架工作流程
<!--
@ -49,20 +45,18 @@ The Scheduling Framework defines a few extension points. Scheduler plugins
register to be invoked at one or more extension points. Some of these plugins
can change the scheduling decisions and some are informational only.
-->
调度框架定义了一些扩展点。调度器插件注册后在一个或多个扩展点处被调用。
这些插件中的一些可以改变调度决策,而另一些仅用于提供信息。
<!--
Each attempt to schedule one Pod is split into two phases, the **scheduling
cycle** and the **binding cycle**.
-->
每次调度一个 Pod 的尝试都分为两个阶段,即 **调度周期** 和 **绑定周期**。
<!--
## Scheduling Cycle & Binding Cycle
-->
## 调度周期和绑定周期
<!--
@ -70,13 +64,12 @@ The scheduling cycle selects a node for the Pod, and the binding cycle applies
that decision to the cluster. Together, a scheduling cycle and binding cycle are
referred to as a "scheduling context".
-->
调度周期为 Pod 选择一个节点,绑定周期将该决策应用于集群。
调度周期和绑定周期一起被称为“调度上下文”。
<!--
Scheduling cycles are run serially, while binding cycles may run concurrently.
-->
调度周期是串行运行的,而绑定周期可能是同时运行的。
<!--
@ -84,13 +77,12 @@ A scheduling or binding cycle can be aborted if the Pod is determined to
be unschedulable or if there is an internal error. The Pod will be returned to
the queue and retried.
-->
如果确定 Pod 不可调度或者存在内部错误,则可以终止调度周期或绑定周期。
Pod 将返回队列并重试。
<!--
## Extension points
-->
## 扩展点
<!--
@ -98,14 +90,13 @@ The following picture shows the scheduling context of a Pod and the extension
points that the scheduling framework exposes. In this picture "Filter" is
equivalent to "Predicate" and "Scoring" is equivalent to "Priority function".
-->
下图显示了一个 Pod 的调度上下文以及调度框架公开的扩展点。
在此图片中,“过滤器”等同于“断言”,“评分”相当于“优先级函数”。
<!--
One plugin may register at multiple extension points to perform more complex or
stateful tasks.
-->
一个插件可以在多个扩展点处注册,以执行更复杂或有状态的任务。
<!--
@ -113,7 +104,6 @@ stateful tasks.
-->
{{< figure src="/images/docs/scheduling-framework-extensions.png" title="调度框架扩展点" >}}
<!--
### QueueSort {#queue-sort}
-->
@ -124,13 +114,13 @@ These plugins are used to sort Pods in the scheduling queue. A queue sort plugin
essentially provides a `less(Pod1, Pod2)` function. Only one queue sort
plugin may be enabled at a time.
-->
队列排序插件用于对调度队列中的 Pod 进行排序。
队列排序插件本质上提供 `less(Pod1, Pod2)` 函数。
一次只能启用一个队列排序插件。
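作为示意,下面用 Go 给出一个按 Pod 优先级从高到低排序的 `less` 函数(其中的结构体是本文假设的简化版本,并非调度框架的真实 API):

```go
package main

import (
	"fmt"
	"sort"
)

// podInfo 是排队 Pod 的极简示意结构(假设的简化版本)。
type podInfo struct {
	Name     string
	Priority int32
}

// less 即队列排序插件本质上要提供的比较函数:
// 这里按优先级从高到低排序(示意实现)。
func less(p1, p2 podInfo) bool {
	return p1.Priority > p2.Priority
}

func main() {
	queue := []podInfo{{"low", 0}, {"high", 100}, {"mid", 50}}
	sort.Slice(queue, func(i, j int) bool { return less(queue[i], queue[j]) })
	for _, p := range queue {
		fmt.Println(p.Name) // 依次输出 high、mid、low
	}
}
```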
<!--
### PreFilter {#pre-filter}
-->
### 前置过滤 {#pre-filter}
<!--
@ -138,13 +128,12 @@ These plugins are used to pre-process info about the Pod, or to check certain
conditions that the cluster or the Pod must meet. If a PreFilter plugin returns
an error, the scheduling cycle is aborted.
-->
前置过滤插件用于预处理 Pod 的相关信息,或者检查集群或 Pod 必须满足的某些条件。
如果 PreFilter 插件返回错误,则调度周期将终止。
<!--
### Filter
-->
### 过滤
<!--
@ -153,13 +142,13 @@ node, the scheduler will call filter plugins in their configured order. If any
filter plugin marks the node as infeasible, the remaining plugins will not be
called for that node. Nodes may be evaluated concurrently.
-->
过滤插件用于过滤出不能运行该 Pod 的节点。对于每个节点,
调度器将按照其配置顺序调用这些过滤插件。如果任何过滤插件将节点标记为不可行,
则不会为该节点调用剩下的过滤插件。节点可以被同时进行评估。
<!--
### PostFilter {#post-filter}
-->
### 后置过滤 {#post-filter}
<!--
@ -170,7 +159,10 @@ will not be called. A typical PostFilter implementation is preemption, which
tries to make the pod schedulable by preempting other Pods.
-->
这些插件在筛选阶段后调用,但仅在该 Pod 没有可行的节点时调用。
插件按其配置的顺序调用。如果任何后过滤器插件标记节点为“可调度”,
则其余的插件不会调用。典型的后筛选实现是抢占,试图通过抢占其他 Pod
的资源使该 Pod 可以调度。
<!--
### PreScore {#pre-score}
@ -182,7 +174,8 @@ These plugins are used to perform "pre-scoring" work, which generates a sharable
state for Score plugins to use. If a PreScore plugin returns an error, the
scheduling cycle is aborted.
-->
前置评分插件用于执行 “前置评分” 工作,即生成一个可共享状态供评分插件使用。
如果 PreScore 插件返回错误,则调度周期将终止。
<!--
### Score {#scoring}
@ -196,13 +189,14 @@ defined range of integers representing the minimum and maximum scores. After the
[NormalizeScore](#normalize-scoring) phase, the scheduler will combine node
scores from all plugins according to the configured plugin weights.
-->
评分插件用于对通过过滤阶段的节点进行排名。调度器将为每个节点调用每个评分插件。
将有一个定义明确的整数范围,代表最小和最大分数。
在[标准化评分](#normalize-scoring)阶段之后,调度器将根据配置的插件权重
合并所有插件的节点分数。
<!--
### NormalizeScore {#normalize-scoring}
-->
### 标准化评分 {#normalize-scoring}
<!--
@ -211,14 +205,14 @@ ranking of Nodes. A plugin that registers for this extension point will be
called with the [Score](#scoring) results from the same plugin. This is called
once per plugin per scheduling cycle.
-->
标准化评分插件用于在调度器计算节点的排名之前修改分数。
在此扩展点注册的插件将使用同一插件的[评分](#scoring) 结果被调用。
每个插件在每个调度周期调用一次。
<!--
For example, suppose a plugin `BlinkingLightScorer` ranks Nodes based on how
many blinking lights they have.
-->
例如,假设一个 `BlinkingLightScorer` 插件基于具有的闪烁指示灯数量来对节点进行排名。
```go
func ScoreNode(_ *v1.Pod, n *v1.Node) (int, error) {
	return getBlinkingLightCount(n)
}
```
@ -232,8 +226,8 @@ However, the maximum count of blinking lights may be small compared to
`NodeScoreMax`. To fix this, `BlinkingLightScorer` should also register for this
extension point.
-->
然而,最大的闪烁灯个数值可能比 `NodeScoreMax` 小。要解决这个问题,
`BlinkingLightScorer` 插件还应该注册该扩展点。
```go
func NormalizeScores(scores map[string]int) {
	highest := 0
	for _, score := range scores {
		if score > highest {
			highest = score
		}
	}
	for node, score := range scores {
		scores[node] = score * NodeScoreMax / highest
	}
}
```
@ -251,7 +245,6 @@ func NormalizeScores(scores map[string]int) {
If any NormalizeScore plugin returns an error, the scheduling cycle is
aborted.
-->
如果任何 NormalizeScore 插件返回错误,则调度阶段将终止。
<!--
@ -265,8 +258,7 @@ NormalizeScore extension point.
<!--
### Reserve
-->
### Reserve
<!--
This is an informational extension point. Plugins which maintain runtime state
@ -275,37 +267,38 @@ scheduler when resources on a node are being reserved for a given Pod. This
happens before the scheduler actually binds the Pod to the Node, and it exists
to prevent race conditions while the scheduler waits for the bind to succeed.
-->
Reserve 是一个信息性的扩展点。
管理运行时状态的插件(也称为“有状态插件”)应该使用此扩展点,以便
调度器在节点给指定 Pod 预留了资源时能够通知该插件。
这是在调度器真正将 Pod 绑定到节点之前发生的,并且它存在是为了防止
在调度器等待绑定成功时发生竞争情况。
<!--
This is the last step in a scheduling cycle. Once a Pod is in the reserved
state, it will either trigger [Unreserve](#unreserve) plugins (on failure) or
[PostBind](#post-bind) plugins (on success) at the end of the binding cycle.
-->
这个是调度周期的最后一步。
一旦 Pod 处于保留状态,它将在绑定周期结束时触发[不保留](#unreserve) 插件
(失败时)或 [绑定后](#post-bind) 插件(成功时)。
<!--
### Permit
-->
### Permit
<!--
_Permit_ plugins are invoked at the end of the scheduling cycle for each Pod, to
prevent or delay the binding to the candidate node. A permit plugin can do one of
the three things:
-->
_Permit_ 插件在每个 Pod 调度周期的最后调用,用于防止或延迟 Pod 的绑定。
一个 Permit 插件可以做以下三件事之一:
<!--
1. **approve** \
Once all Permit plugins approve a Pod, it is sent for binding.
-->
1. **批准** \
一旦所有 Permit 插件批准 Pod 后,该 Pod 将被发送以进行绑定。
@ -314,9 +307,9 @@ _Permit_ 插件在每个 Pod 调度周期的最后调用,用于防止或延迟
If any Permit plugin denies a Pod, it is returned to the scheduling queue.
This will trigger [Unreserve](#unreserve) plugins.
-->
1. **拒绝** \
如果任何 Permit 插件拒绝 Pod则该 Pod 将被返回到调度队列。这将触发[不保留](#不保留) 插件。
如果任何 Permit 插件拒绝 Pod则该 Pod 将被返回到调度队列。
这将触发[Unreserve](#unreserve) 插件。
<!--
1. **wait** (with a timeout) \
@ -326,9 +319,11 @@ _Permit_ 插件在每个 Pod 调度周期的最后调用,用于防止或延迟
and the Pod is returned to the scheduling queue, triggering [Unreserve](#unreserve)
plugins.
-->
1. **等待**(带有超时) \
如果一个 Permit 插件返回 “等待” 结果,则 Pod 将保持在一个内部的 “等待中”
的 Pod 列表,同时该 Pod 的绑定周期启动时即直接阻塞直到得到
[批准](#frameworkhandle)。如果超时发生,**等待** 变成 **拒绝**,并且 Pod
将返回调度队列,从而触发 [Unreserve](#unreserve) 插件。
<!--
@ -338,13 +333,15 @@ plugins to approve binding of reserved Pods that are in "waiting" state. Once a
is approved, it is sent to the [PreBind](#pre-bind) phase.
-->
{{< note >}}
尽管任何插件可以访问 “等待中” 状态的 Pod 列表并批准它们
(查看 [`FrameworkHandle`](#frameworkhandle))。
我们希望只有 Permit 插件可以批准处于 “等待中” 状态的预留 Pod 的绑定。
一旦 Pod 被批准了,它将发送到[预绑定](#pre-bind) 阶段。
{{< /note >}}
<!--
### Pre-bind {#pre-bind}
-->
### 预绑定 {#pre-bind}
<!--
@ -352,21 +349,21 @@ These plugins are used to perform any work required before a Pod is bound. For
example, a pre-bind plugin may provision a network volume and mount it on the
target node before allowing the Pod to run there.
-->
预绑定插件用于执行 Pod 绑定前所需的任何工作。
例如,一个预绑定插件可能需要提供网络卷并且在允许 Pod 运行在该节点之前
将其挂载到目标节点上。
<!--
If any PreBind plugin returns an error, the Pod is [rejected](#unreserve) and
returned to the scheduling queue.
-->
如果任何 PreBind 插件返回错误,则 Pod 将被[拒绝](#unreserve) 并且
退回到调度队列中。
<!--
### Bind
-->
### Bind
<!--
These plugins are used to bind a Pod to a Node. Bind plugins will not be called
@ -375,13 +372,13 @@ configured order. A bind plugin may choose whether or not to handle the given
Pod. If a bind plugin chooses to handle a Pod, **the remaining bind plugins are
skipped**.
-->
Bind 插件用于将 Pod 绑定到节点上。直到所有的 PreBind 插件都完成Bind 插件才会被调用。
各绑定插件按照配置顺序被调用。绑定插件可以选择是否处理指定的 Pod。
如果绑定插件选择处理 Pod**剩余的绑定插件将被跳过**。
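下面的 Go 片段示意这种“首个处理者生效、其余跳过”的调用逻辑(接口与类型均为本文假设的简化版本,并非调度框架的真实 API):

```go
package main

import "fmt"

// bindPlugin 是 Bind 扩展点的极简示意接口(假设的简化版本)。
type bindPlugin interface {
	Name() string
	// Bind 返回 handled=false 表示该插件选择不处理这个 Pod。
	Bind(pod, node string) (handled bool, err error)
}

type defaultBinder struct{}

func (defaultBinder) Name() string { return "DefaultBinder" }
func (defaultBinder) Bind(pod, node string) (bool, error) {
	fmt.Printf("bind %s -> %s\n", pod, node)
	return true, nil
}

// runBindPlugins 按配置顺序调用各 Bind 插件;
// 一旦某个插件处理了 Pod,剩余的绑定插件即被跳过。
func runBindPlugins(plugins []bindPlugin, pod, node string) error {
	for _, p := range plugins {
		handled, err := p.Bind(pod, node)
		if err != nil {
			return err
		}
		if handled {
			return nil // 剩余的绑定插件被跳过
		}
	}
	return fmt.Errorf("no bind plugin handled pod %s", pod)
}

func main() {
	_ = runBindPlugins([]bindPlugin{defaultBinder{}}, "nginx-1", "node-a")
}
```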
<!--
### PostBind {#post-bind}
-->
### 绑定后 {#post-bind}
<!--
@ -389,34 +386,32 @@ This is an informational extension point. Post-bind plugins are called after a
Pod is successfully bound. This is the end of a binding cycle, and can be used
to clean up associated resources.
-->
这是个信息性的扩展点。
绑定后插件在 Pod 成功绑定后被调用。这是绑定周期的结尾,可用于清理相关的资源。
<!--
### Unreserve
-->
### Unreserve
<!--
This is an informational extension point. If a Pod was reserved and then
rejected in a later phase, then unreserve plugins will be notified. Unreserve
plugins should clean up state associated with the reserved Pod.
-->
这是个信息性的扩展点。
如果 Pod 被保留,然后在后面的阶段中被拒绝,则 Unreserve 插件将被通知。
Unreserve 插件应该清除与所保留的 Pod 相关的状态。
<!--
Plugins that use this extension point usually should also use
[Reserve](#reserve).
-->
使用此扩展点的插件通常也使用 [Reserve](#reserve)。
<!--
## Plugin API
-->
## 插件 API
<!--
@ -424,8 +419,8 @@ There are two steps to the plugin API. First, plugins must register and get
configured, then they use the extension point interfaces. Extension point
interfaces have the following form.
-->
插件 API 分为两个步骤。首先,插件必须完成注册并配置,然后才能使用扩展点接口。
扩展点接口具有以下形式。
```go
type Plugin interface {
	Name() string
}
```
@ -448,7 +443,6 @@ type PreFilterPlugin interface {
<!--
# Plugin Configuration
-->
# 插件配置
<!--
@ -457,21 +451,26 @@ Kubernetes v1.18 or later, most scheduling
[plugins](/docs/reference/scheduling/profiles/#scheduling-plugins) are in use and
enabled by default.
-->
你可以在调度器配置中启用或禁用插件。
如果你在使用 Kubernetes v1.18 或更高版本,大部分调度
[插件](/zh/docs/reference/scheduling/profiles/#scheduling-plugins)
都在使用中且默认启用。
<!--
In addition to default plugins, you can also implement your own scheduling
plugins and get them configured along with default plugins. You can visit
[scheduler-plugins](https://github.com/kubernetes-sigs/scheduler-plugins) for more details.
-->
除了默认的插件,你还可以实现自己的调度插件并且将它们与默认插件一起配置。
你可以访问[scheduler-plugins](https://github.com/kubernetes-sigs/scheduler-plugins)
了解更多信息。
<!--
If you are using Kubernetes v1.18 or later, you can configure a set of plugins as
a scheduler profile and then define multiple profiles to fit various kinds of workload.
Learn more at [multiple profiles](/docs/reference/scheduling/profiles/#multiple-profiles).
-->
如果你正在使用 Kubernetes v1.18 或更高版本,你可以将一组插件设置为
一个调度器配置文件,然后定义不同的配置文件来满足各类工作负载
了解更多关于[多配置文件](/zh/docs/reference/scheduling/profiles/#multiple-profiles)。

View File

@ -204,7 +204,7 @@ Disallow privileged users | When constructing containers, consult your documenta
容器安全性不在本指南的探讨范围内。下面是一些探索此主题的建议和链接:
容器关注领域 | 建议 |
------------------------------ | -------------- |
容器漏洞扫描和操作系统依赖安全性 | 作为镜像构建的一部分,您应该扫描您的容器里的已知漏洞。
镜像签名和执行 | 对容器镜像进行签名,以维护对容器内容的信任。
@ -257,8 +257,8 @@ Learn about related Kubernetes security topics:
* [Pod security standards](/docs/concepts/security/pod-security-standards/)
* [Network policies for Pods](/docs/concepts/services-networking/network-policies/)
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
* [API access control](/docs/reference/access-authn-authz/controlling-access/)
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
@ -267,8 +267,9 @@ Learn about related Kubernetes security topics:
* [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)
* [Pod 的网络策略](/zh/docs/concepts/services-networking/network-policies/)
* [控制对 Kubernetes API 的访问](/zh/docs/concepts/security/controlling-access/)
* [API 访问控制](/zh/docs/reference/access-authn-authz/controlling-access/)
* 为控制面[加密通信中的数据](/zh/docs/tasks/tls/managing-tls-in-a-cluster/)
* [加密静止状态的数据](/zh/docs/tasks/administer-cluster/encrypt-data/)
* [Kubernetes 中的 Secret](/zh/docs/concepts/configuration/secret/)

View File

@ -278,7 +278,7 @@ Baseline/Default 策略的目标是便于常见的容器化应用采用,同时
net.ipv4.ip_local_port_range<br>
net.ipv4.tcp_syncookies<br>
net.ipv4.ping_group_range<br>
未定义/空值<br>
</td>
</tr>
</tbody>
@ -385,14 +385,15 @@ Restricted 策略旨在实施当前保护 Pod 的最佳实践,尽管这样作
<tr>
<td>Seccomp</td>
<td>
<!-- The RuntimeDefault seccomp profile must be required, or allow specific additional profiles. -->
必须要求使用 RuntimeDefault seccomp profile 或者允许使用特定的 profiles。<br>
<br><b>限制的字段:</b><br>
spec.securityContext.seccompProfile.type<br>
spec.containers[*].securityContext.seccompProfile<br>
spec.initContainers[*].securityContext.seccompProfile<br>
<br><b>允许的值:</b><br>
'runtime/default'<br>
未定义/nil<br>
</td>
</tr>
</tbody>
@ -462,7 +463,7 @@ in the Pod manifest, and represent parameters to the container runtime.
<!--
Security policies are control plane mechanisms to enforce specific settings in the Security Context,
as well as other parameters outside the Security Context. As of February 2020, the current native
solution for enforcing these security policies is [Pod Security
Policy](/docs/concepts/policy/pod-security-policy/) - a mechanism for centrally enforcing security
policy on Pods across a cluster. Other alternatives for enforcing security policy are being
@ -503,7 +504,7 @@ restrict privileged permissions is lessened when the workload is isolated from t
kernel. This allows for workloads requiring heightened permissions to still be isolated.
Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single recommended policy is recommended for all sandboxed workloads.
-->
### 沙箱Sandboxed Pod 怎么处理?
@ -515,5 +516,5 @@ sandboxing. As such, no single recommended policy is recommended for all s
限制特权化操作的许可就不那么重要。这使得那些需要更多许可权限的负载仍能被有效隔离。
此外,沙箱化负载的保护高度依赖于沙箱化的实现方法。
因此,现在还没有针对所有沙箱化负载的建议策略。

View File

@ -1,23 +1,13 @@
---
title: 端点切片Endpoint Slices
feature:
title: 端点切片
description: >
Kubernetes 集群中网络端点的可扩展跟踪。
content_type: concept
weight: 35
---
<!--
title: Endpoint Slices
feature:
title: Endpoint Slices
description: >
Scalable tracking of network endpoints in a Kubernetes cluster.
content_type: concept
weight: 35
-->
<!-- overview -->
@ -34,14 +24,50 @@ _端点切片Endpoint Slices_ 提供了一种简单的方法来跟踪 Kube
<!-- body -->
<!--
## Motivation
The Endpoints API has provided a simple and straightforward way of
tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle and
send more traffic to more backend Pods, limitations of that original API became
more visible.
Most notably, those included challenges with scaling to larger numbers of
network endpoints.
-->
## 动机 {#motivation}
Endpoints API 提供了在 Kubernetes 跟踪网络端点的一种简单而直接的方法。
不幸的是,随着 Kubernetes 集群和 {{< glossary_tooltip text="服务" term_id="service" >}}
逐渐开始为更多的后端 Pods 处理和发送请求,原来的 API 的局限性变得越来越明显。
最重要的是那些因为要处理大量网络端点而带来的挑战。
<!--
Since all network endpoints for a Service were stored in a single Endpoints
resource, those resources could get quite large. That affected the performance
of Kubernetes components (notably the master control plane) and resulted in
significant amounts of network traffic and processing when Endpoints changed.
EndpointSlices help you mitigate those issues as well as provide an extensible
platform for additional features such as topological routing.
-->
由于任一服务的所有网络端点都保存在同一个 Endpoints 资源中,这类资源可能变得
非常巨大,而这会影响到 Kubernetes 组件(比如主控组件)的性能,
并且在 Endpoints 发生变化时产生大量的网络流量和额外的处理开销。
EndpointSlice 能够帮助你缓解这一问题,还能为一些诸如拓扑路由这类的额外
功能提供一个可扩展的平台。
<!--
## Endpoint Slice resources {#endpointslice-resource}
In Kubernetes, an EndpointSlice contains references to a set of network
endpoints. The control plane automatically creates EndpointSlices
for any Kubernetes Service that has a {{< glossary_tooltip text="selector"
term_id="selector" >}} specified. These EndpointSlices include
references to any Pods that match the Service selector. EndpointSlices group
network endpoints together by unique combinations of protocol, port number, and
Service name.
The name of a EndpointSlice object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
As an example, here's a sample EndpointSlice resource for the `example`
Kubernetes Service.
@ -49,10 +75,14 @@ Kubernetes Service.
## Endpoint Slice 资源 {#endpointslice-resource}
在 Kubernetes 中,`EndpointSlice` 包含对一组网络端点的引用。
指定选择器后EndpointSlice 控制器会自动为 Kubernetes 服务创建 EndpointSlice。
这些 EndpointSlice 将包含对与服务选择器匹配的所有 Pod 的引用。EndpointSlice 通过唯一的服务和端口组合将网络端点组织在一起。
指定选择器后控制面会自动为设置了 {{< glossary_tooltip text="选择算符" term_id="selector" >}}
的 Kubernetes 服务创建 EndpointSlice。
这些 EndpointSlice 将包含对与服务选择算符匹配的所有 Pod 的引用。
EndpointSlice 通过唯一的协议、端口号和服务名称将网络端点组织在一起。
EndpointSlice 的名称必须是合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
例如,下面是 Kubernetes 服务 `example` 的 EndpointSlice 资源示例
```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    hostname: pod-1
    topology:
      kubernetes.io/hostname: node-1
      topology.kubernetes.io/zone: us-west2-a
```
<!--
By default, the control plane creates and manages EndpointSlices to have no
more than 100 endpoints each. You can configure this with the
`--max-endpoints-per-slice`
{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
flag, up to a maximum of 1000.
EndpointSlices can act as the source of truth for
{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} when it comes to
how to route internal traffic. When enabled, they should provide a performance
improvement for services with large numbers of endpoints.
-->
默认情况下,控制面创建和管理的 EndpointSlice 将包含不超过 100 个端点。
你可以使用 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
`--max-endpoints-per-slice` 标志设置此值,最大值为 1000。
当涉及如何路由内部流量时Endpoint Slices 可以充当 kube-proxy 的真实来源。
启用该功能后,在服务的 endpoints 规模庞大时会有可观的性能提升。
当涉及如何路由内部流量时EndpointSlice 可以充当
{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
的决策依据。
启用该功能后,在服务的端点数量庞大时会有可观的性能提升。
<!--
## Address Types
@ -110,33 +146,222 @@ EndpointSlice 支持三种地址类型:
* FQDN (完全合格的域名)
<!--
### Topology information {#topology}
Each endpoint within an EndpointSlice can contain relevant topology information.
This is used to indicate where an endpoint is, containing information about the
corresponding Node, zone, and region. When the values are available, the
control plane sets the following Topology labels for EndpointSlices:
-->
### 拓扑信息 {#topology}
EndpointSlice 中的每个端点都可以包含一定的拓扑信息。
这一信息用来标明端点的位置,包含对应节点、可用区、区域的信息。
当这些值可用时,控制面会为 EndpointSlice 设置如下拓扑标签:
<!--
* `kubernetes.io/hostname` - The name of the Node this endpoint is on.
* `topology.kubernetes.io/zone` - The zone this endpoint is in.
* `topology.kubernetes.io/region` - The region this endpoint is in.
-->
* `kubernetes.io/hostname` - 端点所在的节点名称
* `topology.kubernetes.io/zone` - 端点所处的可用区
* `topology.kubernetes.io/region` - 端点所处的区域
<!--
The values of these labels are derived from resources associated with each
endpoint in a slice. The hostname label represents the value of the NodeName
field on the corresponding Pod. The zone and region labels represent the value
of the labels with the same names on the corresponding Node.
-->
这些标签的值是根据与切片中各个端点相关联的资源来生成的。
标签 `hostname` 代表的是对应的 Pod 的 NodeName 字段的取值。
`zone``region` 标签则代表的是对应的节点所拥有的同名标签的值。
<!--
### Management
Most often, the control plane (specifically, the endpoint slice
{{< glossary_tooltip text="controller" term_id="controller" >}}) creates and
manages EndpointSlice objects. There are a variety of other use cases for
EndpointSlices, such as service mesh implementations, that could result in other
entities or controllers managing additional sets of EndpointSlices.
-->
### 管理 {#management}
通常,控制面(尤其是端点切片的 {{< glossary_tooltip text="controller" term_id="controller" >}}
会创建和管理 EndpointSlice 对象。EndpointSlice 对象还有一些其他使用场景,
例如作为服务网格Service Mesh的实现。这些场景都会导致有其他实体
或者控制器负责管理额外的 EndpointSlice 集合。
<!--
To ensure that multiple entities can manage EndpointSlices without interfering
with each other, Kubernetes defines the
{{< glossary_tooltip term_id="label" text="label" >}}
`endpointslice.kubernetes.io/managed-by`, which indicates the entity managing
an EndpointSlice.
The endpoint slice controller sets `endpointslice-controller.k8s.io` as the value
for this label on all EndpointSlices it manages. Other entities managing
EndpointSlices should also set a unique value for this label.
-->
为了确保多个实体可以管理 EndpointSlice 而且不会相互产生干扰Kubernetes 定义了
{{< glossary_tooltip term_id="label" text="标签" >}}
`endpointslice.kubernetes.io/managed-by`,用来标明哪个实体在管理某个
EndpointSlice。端点切片控制器会在自己所管理的所有 EndpointSlice 上将该标签值设置
`endpointslice-controller.k8s.io`
管理 EndpointSlice 的其他实体也应该为此标签设置一个唯一值。
<!--
### Ownership
In most use cases, EndpointSlices are owned by the Service that the endpoint
slice object tracks endpoints for. This ownership is indicated by an owner
reference on each EndpointSlice as well as a `kubernetes.io/service-name`
label that enables simple lookups of all EndpointSlices belonging to a Service.
-->
### 属主关系 {#ownership}
在大多数场合下EndpointSlice 都由某个 Service 所有,(因为)该端点切片正是
为该服务跟踪记录其端点。这一属主关系是通过为每个 EndpointSlice 设置一个
属主owner引用同时设置 `kubernetes.io/service-name` 标签来标明的,
目的是方便查找隶属于某服务的所有 EndpointSlice。
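作为示意,一个隶属于名为 `example` 的 Service 的 EndpointSlice其元数据大致如下UID 等字段为虚构的占位值):

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example    # 便于按 Service 查找
  ownerReferences:
    - apiVersion: v1
      kind: Service
      name: example
      uid: "00000000-0000-0000-0000-000000000000"   # 虚构的占位 UID
addressType: IPv4
ports: []
endpoints: []
```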
<!--
### EndpointSlice mirroring
In some cases, applications create custom Endpoints resources. To ensure that
these applications do not need to concurrently write to both Endpoints and
EndpointSlice resources, the cluster's control plane mirrors most Endpoints
resources to corresponding EndpointSlices.
-->
### EndpointSlice 镜像 {#endpointslice-mirroring}
在某些场合,应用会创建定制的 Endpoints 资源。为了保证这些应用不需要同时
更改 Endpoints 和 EndpointSlice 资源,集群的控制面会将大多数 Endpoints
资源镜像到对应的 EndpointSlice 之上。
<!--
The control plane mirrors Endpoints resources unless:
* the Endpoints resource has a `endpointslice.kubernetes.io/skip-mirror` label
set to `true`.
* the Endpoints resource has a `control-plane.alpha.kubernetes.io/leader`
annotation.
* the corresponding Service resource does not exist.
* the corresponding Service resource has a non-nil selector.
-->
控制面会对 Endpoints 资源执行镜像操作,但以下情况除外:
* Endpoints 资源上标签 `endpointslice.kubernetes.io/skip-mirror` 值为 `true`
* Endpoints 资源包含 `control-plane.alpha.kubernetes.io/leader` 注解。
* 对应的 Service 资源不存在。
* 对应的 Service 的选择算符不为空。
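例如,下面这个带有 `endpointslice.kubernetes.io/skip-mirror` 标签的 Endpoints 资源不会被镜像(名称与地址均为虚构的示例值):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: example-custom          # 虚构的示例名称
  labels:
    endpointslice.kubernetes.io/skip-mirror: "true"   # 控制面不会镜像此资源
subsets:
  - addresses:
      - ip: "10.1.2.3"          # 虚构的地址
    ports:
      - port: 80
```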
<!--
Individual Endpoints resources may translate into multiple EndpointSlices. This
will occur if an Endpoints resource has multiple subsets or includes endpoints
with multiple IP families (IPv4 and IPv6). A maximum of 1000 addresses per
subset will be mirrored to EndpointSlices.
-->
每个 Endpoints 资源可能会被转译到多个 EndpointSlice 中去。
当 Endpoints 资源中包含多个子集subset或者包含多个 IP 地址族
IPv4 和 IPv6的端点时就有可能发生这种状况。
每个子集subset最多有 1000 个地址会被镜像到 EndpointSlice 中。
<!--
### Distribution of EndpointSlices
Each EndpointSlice has a set of ports that applies to all endpoints within the
resource. When named ports are used for a Service, Pods may end up with
different target port numbers for the same named port, requiring different
EndpointSlices. This is similar to the logic behind how subsets are grouped
with Endpoints.
-->
### EndpointSlices 的分布问题 {#distribution-of-endpointslices}
每个 EndpointSlice 都有一组端口值,适用于资源内的所有端点。
当为服务使用命名端口时Pod 可能会就同一命名端口获得不同的目标端口号,因而需要
不同的 EndpointSlice。这有点像 Endpoints 用来对子集subset进行分组的逻辑。
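命名端口的情形可以用下面的 Service 来示意:如果带有 `app: example` 标签的各个 Pod 为命名端口 `http-web` 声明了不同的端口号,这些端点就需要放在不同的 EndpointSlice 中(清单内容为虚构的示例):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc     # 虚构的示例名称
spec:
  selector:
    app: example
  ports:
    - name: http
      port: 80
      targetPort: http-web   # 命名端口;各 Pod 可为其声明不同的端口号
```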
<!--
The control plane tries to fill EndpointSlices as full as possible, but does not
actively rebalance them. The logic is fairly straightforward:
1. Iterate through existing EndpointSlices, remove endpoints that are no longer
desired and update matching endpoints that have changed.
2. Iterate through EndpointSlices that have been modified in the first step and
fill them up with any new endpoints needed.
3. If there's still new endpoints left to add, try to fit them into a previously
unchanged slice and/or create new ones.
-->
控制面尝试尽量将 EndpointSlice 填满,不过不会主动地在若干 EndpointSlice 之间
执行再平衡操作。这里的逻辑也是相对直接的:
1. 遍历现有的所有 EndpointSlice移除其中不再需要的端点并更新那些已经
   发生变化的匹配端点。
2. 遍历所有在第一步中被更改过的 EndpointSlice用新增加的端点将其填满。
3. 如果还有新的端点未被添加进去,尝试将这些端点添加到之前未更改的切片中,
   或者创建新切片。
<!--
Importantly, the third step prioritizes limiting EndpointSlice updates over a
perfectly full distribution of EndpointSlices. As an example, if there are 10
new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each,
this approach will create a new EndpointSlice instead of filling up the 2
existing EndpointSlices. In other words, a single EndpointSlice creation is
preferrable to multiple EndpointSlice updates.
-->
这里比较重要的是,与在 EndpointSlice 之间完成最佳的分布相比,第三步中更看重
限制 EndpointSlice 更新的操作次数。例如,如果有 10 个端点待添加,有两个
EndpointSlice 中各有 5 个空位,上述方法会创建一个新的 EndpointSlice 而不是
将现有的两个 EndpointSlice 都填满。换言之,与执行多个 EndpointSlice 更新操作
相比较,该方法会优先考虑执行一个 EndpointSlice 创建操作。
<!--
With kube-proxy running on each Node and watching EndpointSlices, every change
to an EndpointSlice becomes relatively expensive since it will be transmitted to
every Node in the cluster. This approach is intended to limit the number of
changes that need to be sent to every Node, even if it may result with multiple
EndpointSlices that are not full.
-->
由于 kube-proxy 在每个节点上运行并监视 EndpointSlice 状态EndpointSlice 的
每次变更都变得相对代价较高,因为这些状态变化要传递到集群中每个节点上。
这一方法尝试限制要发送到所有节点上的变更消息个数,即使这样做可能会导致有
多个 EndpointSlice 没有被填满。
<!--
In practice, this less than ideal distribution should be rare. Most changes
processed by the EndpointSlice controller will be small enough to fit in an
existing EndpointSlice, and if not, a new EndpointSlice is likely going to be
necessary soon anyway. Rolling updates of Deployments also provide a natural
repacking of EndpointSlices with all Pods and their corresponding endpoints
getting replaced.
-->
在实践中,上面这种并非最理想的分布是很少出现的。大多数被 EndpointSlice 控制器
处理的变更都足够小,可以添加到某个已有的 EndpointSlice 中去;并且,假使无法
添加到已有的切片中,不管怎样很快也会需要一个新的 EndpointSlice 对象。
Deployment 的滚动更新也为 EndpointSlice 的重新打包提供了一个自然的机会:
所有 Pod 及其对应的端点在这一期间都会被替换掉。
<!--
### Duplicate endpoints
Due to the nature of EndpointSlice changes, endpoints may be represented in more
than one EndpointSlice at the same time. This naturally occurs as changes to
different EndpointSlice objects can arrive at the Kubernetes client watch/cache
at different times. Implementations using EndpointSlice must be able to have the
endpoint appear in more than one slice. A reference implementation of how to
perform endpoint deduplication can be found in the `EndpointSliceCache`
implementation in `kube-proxy`.
-->
### 重复的端点 {#duplicate-endpoints}
由于 EndpointSlice 变化的自身特点,端点可能会同时出现在不止一个 EndpointSlice
中。鉴于对不同 EndpointSlice 对象的变更可能在不同时刻到达 Kubernetes
客户端的监视/缓存中,这种情况的出现是很自然的。
使用 EndpointSlice 的实现必须能够处理端点出现在多个切片中的状况。
关于如何执行端点去重deduplication的参考实现你可以在 `kube-proxy`
`EndpointSliceCache` 实现中找到。
## {{% heading "whatsnext" %}}
@ -144,6 +369,6 @@ Endpoint Slices 可帮助您缓解这些问题并提供可扩展的
* [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpoint-slices)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
-->
* [启用端点切片](/zh/docs/tasks/administer-cluster/enabling-endpointslices)
* 阅读[使用服务接应用](/zh/docs/concepts/services-networking/connect-applications-service/)
* 了解[启用 EndpointSlice](/zh/docs/tasks/administer-cluster/enabling-endpointslices)
* 阅读[使用服务连接应用](/zh/docs/concepts/services-networking/connect-applications-service/)

View File

@ -22,7 +22,8 @@ automatically provisions storage when it is requested by users.
-->
动态卷供应允许按需创建存储卷。
如果没有动态供应,集群管理员必须手动地联系他们的云或存储提供商来创建新的存储卷,
然后在 Kubernetes 集群创建 [`PersistentVolume` 对象](/docs/concepts/storage/persistent-volumes/)来表示这些卷。
然后在 Kubernetes 集群创建
[`PersistentVolume` 对象](/zh/docs/concepts/storage/persistent-volumes/)来表示这些卷。
动态供应功能消除了集群管理员预先配置存储的需要。相反,它在用户请求时自动供应存储。
<!-- body -->
@ -46,7 +47,7 @@ that provisioner when provisioning.
<!--
A cluster administrator can define and expose multiple flavors of storage (from
the same or different storage systems) within a cluster, each with a custom set
of parameters. This design also ensures that end users dont have to worry
of parameters. This design also ensures that end users don't have to worry
about the complexity and nuances of how storage is provisioned, but still
have the ability to select from multiple storage options.
-->
@ -122,10 +123,10 @@ administrator (see [below](#enabling-dynamic-provisioning)).
这个字段的值必须能够匹配到集群管理员配置的 `StorageClass` 名称(见[下面](#enabling-dynamic-provisioning))。
<!--
To select the “fast” storage class, for example, a user would create the
following `PersistentVolumeClaim`:
To select the "fast" storage class, for example, a user would create the
following PersistentVolumeClaim:
-->
例如,要选择 "fast" 存储类,用户将创建如下的 `PersistentVolumeClaim`
例如,要选择 “fast” 存储类,用户将创建如下的 PersistentVolumeClaim
```yaml
apiVersion: v1
@ -151,7 +152,7 @@ provisioned. When the claim is deleted, the volume is destroyed.
<!--
## Defaulting Behavior
-->
## 默认行为
## 设置默认值的行为
<!--
Dynamic provisioning can be enabled on a cluster such that all claims are
@ -186,7 +187,8 @@ Note that there can be at most one *default* storage class on a cluster, or
a `PersistentVolumeClaim` without `storageClassName` explicitly specified cannot
be created.
-->
请注意,群集上最多只能有一个 *默认* 存储类,否则无法创建没有明确指定 `storageClassName``PersistentVolumeClaim`
请注意,集群上最多只能有一个 *默认* 存储类,否则无法创建没有明确指定
`storageClassName``PersistentVolumeClaim`
<!--
## Topology Awareness
@ -199,7 +201,7 @@ Zones in a Region. Single-Zone storage backends should be provisioned in the Zon
Pods are scheduled. This can be accomplished by setting the [Volume Binding
Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).
-->
在[多区域](/docs/setup/best-practices/multiple-zones/)集群中Pod 可以被分散到多个区域。
在[多区域](/zh/docs/setup/best-practices/multiple-zones/)集群中Pod 可以被分散到多个区域。
单区域存储后端应该被供应到 Pod 被调度到的区域。
这可以通过设置[卷绑定模式](/zh/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现。
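作为示意,一个使用延迟绑定模式的 StorageClass 大致如下(制备器仅作演示,需使用支持该模式的插件):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware              # 虚构的示例名称
provisioner: kubernetes.io/gce-pd   # 示例制备器
volumeBindingMode: WaitForFirstConsumer   # 延迟绑定与制备,直到使用该 PVC 的 Pod 被创建
```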

View File

@ -72,10 +72,9 @@ different purposes:
[downwardAPI](/docs/concepts/storage/volumes/#downwardapi),
[secret](/docs/concepts/storage/volumes/#secret): inject different
kinds of Kubernetes data into a Pod
- [CSI ephemeral
volumes](docs/concepts/storage/volumes/#csi-ephemeral-volumes):
similar to the previous volume kinds, but provided by special [CSI
drivers](https://github.com/container-storage-interface/spec/blob/master/spec.md)
- [CSI ephemeral volumes](#csi-ephemeral-volume):
similar to the previous volume kinds, but provided by special
[CSI drivers](https://github.com/container-storage-interface/spec/blob/master/spec.md)
which specifically [support this feature](https://kubernetes-csi.github.io/docs/drivers.html)
- [generic ephemeral volumes](#generic-ephemeral-volumes), which
can be provided by all storage drivers that also support persistent volumes
@ -287,8 +286,8 @@ spec:
### 生命周期和 PersistentVolumeClaim {#lifecycle-and-persistentvolumeclaim}
<!--
The key design idea is that the [parameters for a
volume claim](/docs/reference/generated/kubernetes-api/#ephemeralvolumesource-v1alpha1-core)
The key design idea is that the
[parameters for a volume claim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1alpha1-core)
are allowed inside a volume source of the Pod. Labels, annotations and
the whole set of fields for a PersistentVolumeClaim are supported. When such a Pod gets
created, the ephemeral volume controller then creates an actual PersistentVolumeClaim
@ -296,11 +295,11 @@ object in the same namespace as the Pod and ensures that the PersistentVolumeCla
gets deleted when the Pod gets deleted.
-->
关键的设计思想是在 Pod 的卷来源中允许使用
[卷申领的参数](/docs/reference/generated/kubernetes-api/#ephemeralvolumesource-v1alpha1-core)。
[卷申领的参数](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1alpha1-core)。
PersistentVolumeClaim 的标签、注解和整套字段集均被支持。
创建这样一个 Pod 后,
临时卷控制器在 Pod 所属的命名空间中创建一个实际的 PersistentVolumeClaim 对象,
并确保删除 Pod 时, 同步删除PersistentVolumeClaim
并确保删除 Pod 时,同步删除 PersistentVolumeClaim。
<!--
That triggers volume binding and/or provisioning, either immediately if
@ -417,7 +416,8 @@ two choices:
`volumes` list does not contain the `ephemeral` volume type.
-->
- 通过特性门控显式禁用该特性,可以避免将来的 Kubernetes 版本默认启用时带来混乱。
- 当`卷`列表不包含 `ephemeral` 卷类型时,使用 [Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)。
- 使用 `volumes` 列表中不包含 `ephemeral` 卷类型的
  [Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)。
<!--
The normal namespace quota for PVCs in a namespace still applies, so
@ -435,6 +435,7 @@ it to circumvent other policies.
See [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage).
-->
### kubelet 管理的临时卷 {#ephemeral-volumes-managed-by-kubelet}
参阅[本地临时存储](/zh/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage)。
<!--
@ -446,9 +447,10 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont
-->
### CSI 临时卷 {#csi-ephemeral-volumes}
- 有关设计的更多信息,参阅 [Ephemeral Inline CSI
volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md)。
- 本特性下一步开发的更多信息,参阅 [enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596)。
- 有关设计的更多信息,参阅
[Ephemeral Inline CSI volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md)。
- 本特性下一步开发的更多信息,参阅
[enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596)。
<!--
### Generic ephemeral volumes
@ -458,5 +460,8 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont
- For more information on further development of this feature, see the [enhancement tracking issue #1698](https://github.com/kubernetes/enhancements/issues/1698).
-->
### 通用临时卷 {#generic-ephemeral-volumes}
- 有关设计的更多信息,参阅 [Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md)。
- 本特性下一步开发的更多信息,参阅 [enhancement tracking issue #1698](https://github.com/kubernetes/enhancements/issues/1698).
- 有关设计的更多信息,参阅
[Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md)。
- 本特性下一步开发的更多信息,参阅
[enhancement tracking issue #1698](https://github.com/kubernetes/enhancements/issues/1698)。

View File

@ -370,6 +370,68 @@ However, the particular path specified in the custom recycler Pod template in th
定制回收器 Pod 模板中在 `volumes` 部分所指定的特定路径要替换为
正被回收的卷的路径。
<!--
### Reserving a PersistentVolume
The control plane can [bind PersistentVolumeClaims to matching PersistentVolumes](#binding) in the
cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
-->
### 预留 PersistentVolume {#reserving-a-persistentvolume}
控制面可以在集群中[将 PersistentVolumeClaim 绑定到匹配的 PersistentVolume](#binding)。
但是,如果你希望 PVC 绑定到特定 PV则需要预先绑定它们。

通过在 PersistentVolumeClaim 中指定 PersistentVolume你可以声明该特定
PV 与 PVC 之间的绑定关系。如果该 PersistentVolume 存在且未曾通过其
`claimRef` 字段预留给其他 PersistentVolumeClaim则该 PersistentVolume
会和该 PersistentVolumeClaim 绑定到一起。
<!--
The binding happens regardless of some volume matching criteria, including node affinity.
The control plane still checks that [storage class](/docs/concepts/storage/storage-classes/), access modes, and requested storage size are valid.
-->
绑定操作不会考虑某些卷匹配条件是否满足,包括节点亲和性等等。
控制面仍然会检查
[存储类](/zh/docs/concepts/storage/storage-classes/)、访问模式和所请求的
存储大小是否合法。
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: foo-pvc
namespace: foo
spec:
storageClassName: "" # 此处须显式设置空字符串,否则会被设置为默认的 StorageClass
volumeName: foo-pv
...
```
<!--
This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify the relevant PersistentVolumeClaim in the `claimRef` field of the PV so that other PVCs can not bind to it.
-->
此方法无法对 PersistentVolume 的绑定特权做出任何形式的保证。
如果有其他 PersistentVolumeClaim 可以使用你所指定的 PV则你应该首先预留
该存储卷。你可以将 PV 的 `claimRef` 字段设置为相关的 PersistentVolumeClaim
以确保其他 PVC 不会绑定到该 PV 卷。
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: foo-pv
spec:
storageClassName: ""
claimRef:
name: foo-pvc
namespace: foo
...
```
<!--
This is useful if you want to consume PersistentVolumes that have their `claimPolicy` set
to `Retain`, including cases where you are reusing an existing PV.
-->
当你希望使用 `claimPolicy` 属性设置为 `Retain` 的 PersistentVolume 卷时,
包括你希望复用现有 PV 卷的情形,这一点是很有用的。
<!--
### Expanding Persistent Volumes Claims
-->

View File

@ -18,7 +18,7 @@ with [volumes](/docs/concepts/storage/volumes/) and
[persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested.
-->
本文描述了 Kubernetes 中 StorageClass 的概念。建议先熟悉 [](/zh/docs/concepts/storage/volumes/) 和
[持久卷](/docs/concepts/storage/persistent-volumes) 的概念。
[持久卷](/zh/docs/concepts/storage/persistent-volumes) 的概念。
<!-- body -->
@ -67,7 +67,7 @@ for details.
-->
管理员可以为没有申请绑定到特定 StorageClass 的 PVC 指定一个默认的存储类
更多详情请参阅
[PersistentVolumeClaim 章节](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。
[PersistentVolumeClaim 章节](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)。
```yaml
apiVersion: storage.k8s.io/v1
@ -90,15 +90,16 @@ volumeBindingMode: Immediate
Each StorageClass has a provisioner that determines what volume plugin is used
for provisioning PVs. This field must be specified.
-->
### 存储分配器
### 存储制备器 {#provisioner}
每个 StorageClass 都有一个分配器,用来决定使用哪个卷插件分配 PV。该字段必须指定。
每个 StorageClass 都有一个制备器Provisioner用来决定使用哪个卷插件制备 PV。
该字段必须指定。
<!--
| Volume Plugin | Internal Provisioner| Config Example |
-->
| 卷插件 | 内置分配器 | 配置例子 |
| 卷插件 | 内置制备器 | 配置例子 |
|:---------------------|:----------:|:-------------------------------------:|
| AWSElasticBlockStore | &#x2713; | [AWS EBS](#aws-ebs) |
| AzureFile | &#x2713; | [Azure File](#azure-file) |
@ -131,24 +132,25 @@ run, what volume plugin it uses (including Flex), etc. The repository
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)
houses a library for writing external provisioners that implements the bulk of
the specification. Some external provisioners are listed under the repository
[kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage).
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner).
-->
您不限于指定此处列出的 "内置" 分配器(其名称前缀为 "kubernetes.io" 并打包在 Kubernetes 中)。
您还可以运行和指定外部分配器,这些独立的程序遵循由 Kubernetes 定义的
你不限于指定此处列出的 "内置" 制备器(其名称前缀为 "kubernetes.io" 并打包在 Kubernetes 中)。
你还可以运行和指定外部制备器,这些独立的程序遵循由 Kubernetes 定义的
[规范](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)。
外部供应商的作者完全可以自由决定他们的代码保存于何处、打包方式、运行方式、使用的插件(包括 Flex等。
代码仓库 [kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)
包含一个用于为外部分配器编写功能实现的类库。可以通过下面的代码仓库,查看外部分配器列表。
包含一个用于为外部制备器编写功能实现的类库。你可以访问代码仓库
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)
了解外部制备器列表。
[kubernetes-incubator/external-storage](https://github.com/kubernetes-incubator/external-storage).
<!--
For example, NFS doesn't provide an internal provisioner, but an external
provisioner can be used. There are also cases when 3rd party storage
vendors provide their own external provisioner.
-->
例如NFS 没有内部分配器,但可以使用外部分配器。
也有第三方存储供应商提供自己的外部分配器。
例如NFS 没有内部制备器,但可以使用外部制备器。
也有第三方存储供应商提供自己的外部制备器。
<!--
### Reclaim Policy
@ -228,7 +230,7 @@ the class or PV, so mount of the PV will simply fail if one is invalid.
由 StorageClass 动态创建的 PersistentVolume 将使用类中 `mountOptions` 字段指定的挂载选项。
如果卷插件不支持挂载选项,却指定了该选项,则分配操作会失败。
如果卷插件不支持挂载选项,却指定了该选项,则制备操作会失败。
挂载选项在 StorageClass 和 PV 上都不会做验证,如果其中一个挂载选项无效,
那么对这个 PV 的挂载操作就会失败。
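下面是一个设置了挂载选项的 StorageClass 示意(制备器名称为虚构的示例,挂载选项需按插件支持情况调整):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/external-nfs   # 虚构的外部制备器名称
mountOptions:
  - hard
  - nfsvers=4.1
```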
<!--
@ -240,7 +242,7 @@ the class or PV, so mount of the PV will simply fail if one is invalid.
The `volumeBindingMode` field controls when [volume binding and dynamic
provisioning](/docs/concepts/storage/persistent-volumes/#provisioning) should occur.
-->
`volumeBindingMode` 字段控制了[卷绑定和动态分配](/docs/concepts/storage/persistent-volumes/#provisioning)
`volumeBindingMode` 字段控制了[卷绑定和动态制备](/zh/docs/concepts/storage/persistent-volumes/#provisioning)
应该发生在什么时候。
<!--
@ -250,8 +252,9 @@ backends that are topology-constrained and not globally accessible from all Node
in the cluster, PersistentVolumes will be bound or provisioned without knowledge of the Pod's scheduling
requirements. This may result in unschedulable Pods.
-->
默认情况下,`Immediate` 模式表示一旦创建了 PersistentVolumeClaim 也就完成了卷绑定和动态分配。
对于由于拓扑限制而非集群所有节点可达的存储后端PersistentVolume 会在不知道 Pod 调度要求的情况下绑定或者分配。
默认情况下,`Immediate` 模式表示一旦创建了 PersistentVolumeClaim 也就完成了卷绑定和动态制备。
对于由于拓扑限制而非集群所有节点可达的存储后端PersistentVolume
会在不知道 Pod 调度要求的情况下绑定或者制备。
<!--
A cluster administrator can address this issue by specifying the `WaitForFirstConsumer` mode which
@ -265,8 +268,8 @@ anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-a
and [taints and tolerations](/docs/concepts/configuration/taint-and-toleration).
-->
集群管理员可以通过指定 `WaitForFirstConsumer` 模式来解决此问题。
该模式将延迟 PersistentVolume 的绑定和分配,直到使用该 PersistentVolumeClaim 的 Pod 被创建。
PersistentVolume 会根据 Pod 调度约束指定的拓扑来选择或分配。这些包括但不限于
该模式将延迟 PersistentVolume 的绑定和制备,直到使用该 PersistentVolumeClaim 的 Pod 被创建。
PersistentVolume 会根据 Pod 调度约束指定的拓扑来选择或制备。这些包括但不限于
[资源需求](/zh/docs/concepts/configuration/manage-resources-containers/)、
[节点筛选器](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)、
[pod 亲和性和互斥性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)、
@ -279,7 +282,7 @@ The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
* [GCEPersistentDisk](#gce-pd)
* [AzureDisk](#azure-disk)
-->
以下插件支持动态分配`WaitForFirstConsumer` 模式:
以下插件支持动态供应`WaitForFirstConsumer` 模式:
* [AWSElasticBlockStore](#aws-ebs)
* [GCEPersistentDisk](#gce-pd)
@ -304,7 +307,7 @@ and pre-created PVs, but you'll need to look at the documentation for a specific
to see its supported topology keys and examples.
-->
动态配置和预先创建的 PV 也支持 [CSI卷](/zh/docs/concepts/storage/volumes/#csi)
但是需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑键名和例子。
但是需要查看特定 CSI 驱动程序的文档以查看其支持的拓扑键名和例子。
<!--
### Allowed Topologies
@ -317,7 +320,8 @@ When a cluster operator specifies the `WaitForFirstConsumer` volume binding mode
to restrict provisioning to specific topologies in most situations. However,
if still required, `allowedTopologies` can be specified.
-->
当集群操作人员使用了 `WaitForFirstConsumer` 的卷绑定模式,在大部分情况下就没有必要将配置限制为特定的拓扑结构。
当集群操作人员使用了 `WaitForFirstConsumer` 的卷绑定模式,
在大部分情况下就没有必要将制备限制为特定的拓扑结构。
然而,如果还有需要的话,可以使用 `allowedTopologies`
<!--
@ -325,7 +329,8 @@ This example demonstrates how to restrict the topology of provisioned volumes to
zones and should be used as a replacement for the `zone` and `zones` parameters for the
supported plugins.
-->
这个例子描述了如何将分配卷的拓扑限制在特定的区域,在使用时应该根据插件支持情况替换 `zone``zones` 参数。
这个例子描述了如何将供应卷的拓扑限制在特定的区域,在使用时应该根据插件
支持情况替换 `zone``zones` 参数。
```yaml
apiVersion: storage.k8s.io/v1
@ -359,10 +364,12 @@ exceed 256 KiB.
-->
## 参数
Storage class 具有描述属于卷的参数。取决于分配器,可以接受不同的参数。
例如,参数 type 的值 io1 和参数 iopsPerGB 特定于 EBS PV。当参数被省略时会使用默认值。
Storage Class 具有一些参数,用来描述归属于该存储类的卷。取决于制备器,可以接受不同的参数。
例如,参数 `type` 的值 `io1` 和参数 `iopsPerGB` 特定于 EBS PV。
当参数被省略时,会使用默认值。
一个 StorageClass 最多可以定义 512 个参数。这些参数对象的总长度不能超过 256 KiB, 包括参数的键和值。
一个 StorageClass 最多可以定义 512 个参数。这些参数对象的总长度不能
超过 256 KiB包括参数的键和值。
### AWS EBS
@ -405,9 +412,11 @@ parameters:
* `type``io1``gp2``sc1``st1`。详细信息参见
[AWS 文档](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)。默认值:`gp2`。
* `zone`(弃用)AWS 区域。如果没有指定 `zone``zones`
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone` 和 `zones` 参数不能同时使用。
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。
`zone``zones` 参数不能同时使用。
* `zones`(弃用):以逗号分隔的 AWS 区域列表。
如果没有指定 `zone``zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。`zone`和`zones`参数不能同时使用。
如果没有指定 `zone``zones`,通常卷会在 Kubernetes 集群节点所在的
活动区域中轮询调度分配。`zone`和`zones`参数不能同时使用。
* `iopsPerGB`:只适用于 `io1` 卷。每 GiB 每秒 I/O 操作。
AWS 卷插件将其与请求卷的大小相乘以计算 IOPS 的容量,
并将其限制在 20000 IOPSAWS 支持的最高值,请参阅
@ -465,7 +474,7 @@ parameters:
<!--
If `replication-type` is set to `none`, a regular (zonal) PD will be provisioned.
-->
如果 `replication-type` 设置为 `none`,会分配一个常规(当前区域内的)持久化磁盘。
如果 `replication-type` 设置为 `none`,会制备一个常规(当前区域内的)持久化磁盘。
<!--
If `replication-type` is set to `regional-pd`, a
@ -477,10 +486,10 @@ specified, Kubernetes will arbitrarily choose among the specified zones. If the
`zones` parameter is omitted, Kubernetes will arbitrarily choose among zones
managed by the cluster.
-->
如果 `replication-type` 设置为 `regional-pd`,会分配一个
如果 `replication-type` 设置为 `regional-pd`,会制备一个
[区域性持久化磁盘Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds)。
在这种情况下,用户必须使用 `zones` 而非 `zone` 来指定期望的复制区域zone
如果指定来两个特定的区域,区域性持久化磁盘会在这两个区域里分配
如果指定来两个特定的区域,区域性持久化磁盘会在这两个区域里制备
如果指定了多于两个的区域Kubernetes 会选择其中任意两个区域。
如果省略了 `zones` 参数Kubernetes 会在集群管理的区域中任意选择。
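按照上面的描述,一个制备区域性持久化磁盘的 StorageClass 大致如下:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd   # 制备区域性持久化磁盘
volumeBindingMode: WaitForFirstConsumer
```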
@ -530,8 +539,8 @@ parameters:
for authentication to the REST server. This parameter is deprecated in favor
of `secretNamespace` + `secretName`.
-->
* `resturl`分配 gluster 卷的需求的 Gluster REST 服务/Heketi 服务 url。
通用格式应该是 `IPaddress:Port`,这是 GlusterFS 动态分配器的必需参数。
* `resturl`:制备 gluster 卷所需要的 Gluster REST 服务/Heketi 服务的 URL。
通用格式应该是 `IPaddress:Port`,这是 GlusterFS 动态制备器的必需参数。
如果 Heketi 服务在 OpenShift/kubernetes 中安装并暴露为可路由服务,则可以使用类似于
`http://heketi-storage-project.cloudapps.mystorage.com` 的格式,其中 fqdn 是可解析的 heketi 服务网址。
* `restauthenabled`Gluster REST 服务身份验证布尔值,用于启用对 REST 服务器的身份验证。
@ -557,7 +566,8 @@ parameters:
Example of a secret can be found in
[glusterfs-provisioning-secret.yaml](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/glusterfs/glusterfs-secret.yaml).
-->
* `secretNamespace``secretName`Secret 实例的标识,包含与 Gluster REST 服务交互时使用的用户密码。
* `secretNamespace``secretName`Secret 实例的标识,包含与 Gluster
REST 服务交互时使用的用户密码。
这些参数是可选的,`secretNamespace` 和 `secretName` 都省略时使用空密码。
所提供的 Secret 必须将类型设置为 "kubernetes.io/glusterfs",例如以这种方式创建:
@ -581,12 +591,13 @@ parameters:
specified, the volume will be provisioned with a value between 2000-2147483647
which are defaults for gidMin and gidMax respectively.
-->
* `clusterid``630372ccdc720a92c681fb928f27b53f` 是集群的 ID分配卷时,
* `clusterid``630372ccdc720a92c681fb928f27b53f` 是集群的 ID制备卷时,
Heketi 将会使用这个集群 ID。它也可以是一个 clusterid 列表,例如:
`"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397"`。这个是可选参数。
* `gidMin``gidMax`storage class GID 范围的最小值和最大值。
在此范围gidMin-gidMax内的唯一值GID将用于动态分配卷。这些是可选的值。
如果不指定,卷将被分配一个 2000-2147483647 之间的值,这是 gidMin 和 gidMax 的默认值。
在此范围gidMin-gidMax内的唯一值GID将用于动态制备的卷。这些是可选的值。
如果不指定,所制备的卷会被赋予 2000-2147483647 之间的一个值,这分别是 gidMin 和
gidMax 的默认值。
<!--
* `volumetype` : The volume type and its parameters can be configured with this
@ -609,18 +620,22 @@ parameters:
`gluster-dynamic-<claimname>`. The dynamic endpoint and service are automatically
deleted when the persistent volume claim is deleted.
-->
* `volumetype`:卷的类型及其参数可以用这个可选值进行配置。如果未声明卷类型,则由分配器决定卷的类型。
例如:
* `volumetype`:卷的类型及其参数可以用这个可选值进行配置。如果未声明卷类型,则
由制备器决定卷的类型。
例如:
* 'Replica volume': `volumetype: replicate:3` 其中 '3' 是 replica 数量.
* 'Disperse/EC volume': `volumetype: disperse:4:2` 其中 '4' 是数据,'2' 是冗余数量.
* 'Distribute volume': `volumetype: none`
* 'Replica volume': `volumetype: replicate:3` 其中 '3' 是 replica 数量.
* 'Disperse/EC volume': `volumetype: disperse:4:2` 其中 '4' 是数据,'2' 是冗余数量.
* 'Distribute volume': `volumetype: none`
有关可用的卷类型和管理选项,请参阅 [管理指南](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/part-Overview.html)。
有关可用的卷类型和管理选项,请参阅
[管理指南](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/part-Overview.html)。
更多相关的参考信息,请参阅 [如何配置 Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)。
更多相关的参考信息,请参阅
[如何配置 Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology)。
当动态分配持久卷时Gluster 插件自动创建名为 `gluster-dynamic-<claimname>` 的端点和 headless service。在 PVC 被删除时动态端点和 headless service 会自动被删除。
当动态制备持久卷时Gluster 插件自动创建名为 `gluster-dynamic-<claimname>`
的端点和无头服务。在 PVC 被删除时动态端点和无头服务会自动被删除。
### OpenStack Cinder
@ -638,7 +653,8 @@ parameters:
* `availability`: Availability Zone. If not specified, volumes are generally
round-robin-ed across all active zones where Kubernetes cluster has a node.
-->
* `availability`:可用区域。如果没有指定,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。
* `availability`:可用区域。如果没有指定,通常卷会在 Kubernetes 集群节点
所在的活动区域中轮转调度。
<!--
{{< note >}}
@ -648,7 +664,8 @@ This internal provisioner of OpenStack is deprecated. Please use [the external c
-->
{{< note >}}
{{< feature-state state="deprecated" for_k8s_version="1.11" >}}
OpenStack 的内部驱动程序已经被弃用。请使用 [OpenStack 的外部驱动程序](https://github.com/kubernetes/cloud-provider-openstack)。
OpenStack 的内部驱动已经被弃用。请使用
[OpenStack 的外部云驱动](https://github.com/kubernetes/cloud-provider-openstack)。
{{< /note >}}
### vSphere
@ -658,108 +675,107 @@ OpenStack 的内部驱动程序已经被弃用。请使用 [OpenStack 的外部
-->
1. 使用用户指定的磁盘格式创建一个 StorageClass。
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
```
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
```
<!--
`diskformat`: `thin`, `zeroedthick` and `eagerzeroedthick`. Default: `"thin"`.
-->
`diskformat`: `thin`, `zeroedthick``eagerzeroedthick`。默认值: `"thin"`
<!--
`diskformat`: `thin`, `zeroedthick` and `eagerzeroedthick`. Default: `"thin"`.
-->
`diskformat`: `thin`, `zeroedthick``eagerzeroedthick`。默认值: `"thin"`
<!--
2. Create a StorageClass with a disk format on a user specified datastore.
-->
2. 在用户指定的数据存储上创建磁盘格式的 StorageClass。
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
datastore: VSANDatastore
```
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
datastore: VSANDatastore
```
<!--
`datastore`: The user can also specify the datastore in the StorageClass.
The volume will be created on the datastore specified in the storage class,
which in this case is `VSANDatastore`. This field is optional. If the
datastore is not specified, then the volume will be created on the datastore
specified in the vSphere config file used to initialize the vSphere Cloud
Provider.
-->
<!--
`datastore`: The user can also specify the datastore in the StorageClass.
The volume will be created on the datastore specified in the storage class,
which in this case is `VSANDatastore`. This field is optional. If the
datastore is not specified, then the volume will be created on the datastore
specified in the vSphere config file used to initialize the vSphere Cloud
Provider.
-->
`datastore`:用户也可以在 StorageClass 中指定数据存储。
卷将在 storage class 中指定的数据存储上创建,在这种情况下是 `VSANDatastore`
该字段是可选的。
如果未指定数据存储,则将在用于初始化 vSphere Cloud Provider 的 vSphere
配置文件中指定的数据存储上创建该卷。
`datastore`:用户也可以在 StorageClass 中指定数据存储。
卷将在 storage class 中指定的数据存储上创建,在这种情况下是 `VSANDatastore`
该字段是可选的。
如果未指定数据存储,则将在用于初始化 vSphere Cloud Provider 的 vSphere
配置文件中指定的数据存储上创建该卷。
<!--
3. Storage Policy Management inside kubernetes
-->
3. Kubernetes 中的存储策略管理
<!--
* Using existing vCenter SPBM policy
<!--
* Using existing vCenter SPBM policy
One of the most important features of vSphere for Storage Management is
policy based Management. Storage Policy Based Management (SPBM) is a
storage policy framework that provides a single unified control plane
across a broad range of data services and storage solutions. SPBM enables
vSphere administrators to overcome upfront storage provisioning challenges,
such as capacity planning, differentiated service levels and managing
capacity headroom.
One of the most important features of vSphere for Storage Management is
policy based Management. Storage Policy Based Management (SPBM) is a
storage policy framework that provides a single unified control plane
across a broad range of data services and storage solutions. SPBM enables
vSphere administrators to overcome upfront storage provisioning challenges,
such as capacity planning, differentiated service levels and managing
capacity headroom.
The SPBM policies can be specified in the StorageClass using the
`storagePolicyName` parameter.
The SPBM policies can be specified in the StorageClass using the
`storagePolicyName` parameter.
-->
* 使用现有的 vCenter SPBM 策略
vSphere 用于存储管理的最重要特性之一是基于策略的管理。
基于存储策略的管理SPBM是一个存储策略框架提供单一的统一控制平面的
跨越广泛的数据服务和存储解决方案。
SPBM 使能 vSphere 管理员克服先期的存储配置挑战,如容量规划,差异化服务等级和管理容量空间。
vSphere 用于存储管理的最重要特性之一是基于策略的管理。
基于存储策略的管理SPBM是一个存储策略框架
为广泛的数据服务和存储解决方案提供了单一的统一控制平面。
SPBM 使 vSphere 管理员能够克服先期的存储制备挑战,如容量规划、差异化服务等级和管理容量空间。
SPBM 策略可以在 StorageClass 中使用 `storagePolicyName` 参数声明。
SPBM 策略可以在 StorageClass 中使用 `storagePolicyName` 参数声明。
<!--
* Virtual SAN policy support inside Kubernetes
Vsphere Infrastructure (VI) Admins will have the ability to specify custom
Virtual SAN Storage Capabilities during dynamic volume provisioning. You
can now define storage requirements, such as performance and availability,
in the form of storage capabilities during dynamic volume provisioning.
The storage capability requirements are converted into a Virtual SAN
policy which are then pushed down to the Virtual SAN layer when a
persistent volume (virtual disk) is being created. The virtual disk is
distributed across the Virtual SAN datastore to meet the requirements.
You can see [Storage Policy Based Management for dynamic provisioning of volumes](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html)
for more details on how to use storage policies for persistent volumes
management.
-->
* Kubernetes 内的 Virtual SAN 策略支持
Vsphere InfrastructureVI管理员将能够在动态卷配置期间指定自定义 Virtual SAN
存储功能。你现在可以在动态制备卷期间以存储能力的形式定义存储需求,例如性能和可用性
存储能力需求会转换为 Virtual SAN 策略,之后当持久卷(虚拟磁盘)被创建时,
会将其推送到 Virtual SAN 层。虚拟磁盘分布在 Virtual SAN 数据存储中以满足要求。
你可以参考[基于存储策略的动态制备卷管理](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html)
进一步了解有关持久卷管理的存储策略的详细信息
<!--
There are few
@ -767,7 +783,7 @@ There are few
which you try out for persistent volume management inside Kubernetes for vSphere.
-->
有几个 [vSphere 例子](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere)
供你在 Kubernetes for vSphere 中尝试进行持久卷管理。
### Ceph RBD
@ -882,7 +898,7 @@ parameters:
* `registry`:用于挂载卷的 Quobyte registry。你可以指定 registry 为 ``<host>:<port>``
或者如果你想指定多个 registry你只需要在他们之间添加逗号例如
``<host1>:<port>,<host2>:<port>,<host3>:<port>``。
主机可以是一个 IP 地址,或者如果你有正在运行的 DNS也可以提供 DNS 名称。
* `adminSecretNamespace``adminSecretName`的 namespace。
默认值是 "default"。
@ -920,7 +936,7 @@ parameters:
-->
* `user`:对这个用户映射的所有访问权限。默认是 "root"。
* `group`:对这个组映射的所有访问权限。默认是 "nfsnobody"。
* `quobyteConfig`:使用指定的配置来创建卷。可以创建一个新的配置,或者修改 Web console 或
  quobyte CLI 中现有的配置。默认是 "BASE"。
* `quobyteTenant`:使用指定的租户 ID 创建/删除卷。这个 Quobyte 租户必须已经存在于 Quobyte 中。
默认是 "DEFAULT"。
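把这些参数组合起来的一个 Quobyte StorageClass 示意配置如下
其中的地址、Secret 名称等均为假设值):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/quobyte
parameters:
  quobyteAPIServer: "http://138.68.74.142:7860"   # 假设的 API 服务器地址
  registry: "138.68.74.142:7861"                  # 假设的 registry 地址
  adminSecretName: "quobyte-admin-secret"         # 假设的 Secret 名称
  adminSecretNamespace: "kube-system"
  user: "root"
  group: "root"
  quobyteConfig: "BASE"
  quobyteTenant: "DEFAULT"
```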
@ -1053,17 +1069,19 @@ mounting credentials. If the cluster has enabled both
add the `create` permission of resource `secret` for clusterrole
`system:controller:persistent-volume-binder`.
-->
在存储制备期间,为挂载凭证创建一个名为 `secretName` 的 Secret。如果集群同时启用了
[RBAC](/zh/docs/reference/access-authn-authz/rbac/) 和
[控制器角色](/zh/docs/reference/access-authn-authz/rbac/#controller-roles)
`system:controller:persistent-volume-binder` 的 clusterrole 添加
`Secret` 资源的 `create` 权限。
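作为示意,补充该权限后的 ClusterRole 规则片段大致如下
(仅为草稿,实际操作时应在集群中现有规则的基础上追加,而不是整体替换):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:controller:persistent-volume-binder
rules:
  # 为持久卷绑定控制器追加对 secrets 的 create 权限
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
```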
<!--
In a multi-tenancy context, it is strongly recommended to set the value for
`secretNamespace` explicitly, otherwise the storage account credentials may
be read by other users.
-->
在多租户上下文中,强烈建议显式设置 `secretNamespace` 的值,否则
其他用户可能会读取存储帐户凭据。
<!--
### Portworx Volume
@ -1108,14 +1126,19 @@ parameters:
* `block_size`:以 Kbytes 为单位的块大小(默认值:`32`)。
* `repl`:同步副本数量,以复制因子 `1..3`(默认值:`1`)的形式提供。
这里需要填写字符串,即,`"1"` 而不是 `1`
* `io_priority`:决定是否从更高性能或者较低优先级存储创建卷
`high/medium/low`(默认值:`low`)。
* `snap_interval`:触发快照的时钟/时间间隔(分钟)。
快照是基于与先前快照的增量变化0 是禁用快照(默认:`0`)。
这里需要填写字符串,即,是 `"70"` 而不是 `70`
* `aggregation_level`指定卷分配到的块数量0 表示一个非聚合卷(默认:`0`)。
这里需要填写字符串,即,是 `"0"` 而不是 `0`
* `ephemeral`:指定卷在卸载后进行清理还是持久化。
`emptyDir` 的使用场景可以将这个值设置为 true
`persistent volumes` 的使用场景可以将这个值设置为 false
(例如 Cassandra 这样的数据库)
`true/false`(默认为 `false`)。这里需要填写字符串,即,
`"true"` 而不是 `true`
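将上述参数组合起来的一个 Portworx StorageClass 示意(参数取值仅为示例):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-io-priority-high
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"           # 注意:参数值都是字符串
  snap_interval: "70"
  io_priority: "high"
```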
### ScaleIO
@ -1171,8 +1194,8 @@ kubectl create secret generic sio-secret --type="kubernetes.io/scaleio" \
```
-->
ScaleIO Kubernetes 卷插件需要配置一个 Secret 对象。
Secret 必须用 `kubernetes.io/scaleio` 类型创建,并且要与引用它的
PVC 处于同一名称空间,如下面的命令所示:
```shell
kubectl create secret generic sio-secret --type="kubernetes.io/scaleio" \
@ -1210,13 +1233,17 @@ parameters:
* `adminSecretName`: The name of the secret to use for obtaining the StorageOS
API credentials. If not specified, default values will be attempted.
-->
* `pool`:制备卷的 StorageOS 分布式容量池的名称。如果未指定,则使用
通常存在的 `default` 池。
* `description`:指定给动态创建的卷的描述。所有卷描述对于存储类而言都是相同的,
但不同的 storage class 可以使用不同的描述,以区分不同的使用场景。
默认为 `Kubernetes volume`
* `fsType`:请求的默认文件系统类型。
请注意,在 StorageOS 中用户定义的规则可以覆盖此值。默认为 `ext4`
* `adminSecretNamespace`API 配置 secret 所在的命名空间。
如果设置了 adminSecretName则是必需的。
* `adminSecretName`:用于获取 StorageOS API 凭证的 secret 名称。
如果未指定,则将尝试默认值。
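一个使用上述参数的 StorageOS StorageClass 示意Secret 名称等均为假设值):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/storageos
parameters:
  pool: default
  description: Kubernetes volume
  fsType: ext4
  adminSecretNamespace: default       # 假设 Secret 位于 default 名称空间
  adminSecretName: storageos-secret   # 假设的 Secret 名称
```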
<!--
The StorageOS Kubernetes volume plugin can use a Secret object to specify an
@ -1253,7 +1280,8 @@ and referenced with the `adminSecretNamespace` parameter. Secrets used by
pre-provisioned volumes must be created in the same namespace as the PVC that
references it.
-->
用于动态制备卷的 Secret 可以在任何名称空间中创建,并通过
`adminSecretNamespace` 参数引用。
预先制备的卷使用的 Secret 必须与引用它的 PVC 创建在同一名称空间中。
<!--
@ -1277,13 +1305,14 @@ Local volumes do not currently support dynamic provisioning, however a StorageCl
should still be created to delay volume binding until pod scheduling. This is
specified by the `WaitForFirstConsumer` volume binding mode.
-->
本地卷还不支持动态制备,然而还是需要创建 StorageClass 以延迟卷绑定,
直到完成 Pod 的调度。这是由 `WaitForFirstConsumer` 卷绑定模式指定的。
<!--
Delaying volume binding allows the scheduler to consider all of a pod's
scheduling constraints when choosing an appropriate PersistentVolume for a
PersistentVolumeClaim.
-->
延迟卷绑定使得调度器在为 PersistentVolumeClaim 选择一个合适的
PersistentVolume 时能考虑到所有 Pod 的调度限制。
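按上面所述,使用 `WaitForFirstConsumer` 延迟绑定的本地存储 StorageClass
大致如下(由于本地卷不支持动态制备,制备器使用 `kubernetes.io/no-provisioner`

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer  # 延迟绑定,直到 Pod 被调度
```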


@ -7,13 +7,13 @@ weight: 30
<!-- overview -->
<!--
This document describes the concept of VolumeSnapshotClass in Kubernetes. Familiarity
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
[storage classes](/docs/concepts/storage/storage-classes) is suggested.
-->
本文档描述了 Kubernetes 中 VolumeSnapshotClass 的概念。建议熟悉
[卷快照Volume Snapshots](/zh/docs/concepts/storage/volume-snapshots/)和
[存储类Storage Class](/zh/docs/concepts/storage/storage-classes)。
<!-- body -->
@ -21,37 +21,35 @@ with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
<!--
## Introduction
Just like StorageClass provides a way for administrators to describe the "classes"
of storage they offer when provisioning a volume, VolumeSnapshotClass provides a
way to describe the "classes" of storage when provisioning a volume snapshot.
-->
## 介绍 {#introduction}
就像 StorageClass 为管理员提供了一种在配置卷时描述存储“类”的方法,
VolumeSnapshotClass 提供了一种在配置卷快照时描述存储“类”的方法。
<!--
## The VolumeSnapshotClass Resource
Each VolumeSnapshotClass contains the fields `driver`, `deletionPolicy`, and `parameters`,
which are used when a VolumeSnapshot belonging to the class needs to be
dynamically provisioned.
The name of a VolumeSnapshotClass object is significant, and is how users can
request a particular class. Administrators set the name and other parameters
of a class when first creating VolumeSnapshotClass objects, and the objects cannot
be updated once they are created.
-->
## VolumeSnapshotClass 资源 {#the-volumesnapshortclass-resource}
每个 VolumeSnapshotClass 都包含 `driver`、`deletionPolicy` 和 `parameters` 字段,
在需要动态配置属于该类的 VolumeSnapshot 时使用。
VolumeSnapshotClass 对象的名称很重要,是用户可以请求特定类的方式。
管理员在首次创建 VolumeSnapshotClass 对象时设置类的名称和其他参数,
对象一旦创建就无法更新。
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
@ -63,6 +61,26 @@ deletionPolicy: Delete
parameters:
```
<!--
Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots
that don't request any particular class to bind to by adding the
`snapshot.storage.kubernetes.io/is-default-class: "true"` annotation:
-->
管理员可以为未请求任何特定类绑定的 VolumeSnapshots 指定默认的 VolumeSnapshotClass
方法是设置注解 `snapshot.storage.kubernetes.io/is-default-class: "true"`
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
name: csi-hostpath-snapclass
annotations:
snapshot.storage.kubernetes.io/is-default-class: "true"
driver: hostpath.csi.k8s.io
deletionPolicy: Delete
parameters:
```
<!--
### Driver
@ -71,20 +89,25 @@ used for provisioning VolumeSnapshots. This field must be specified.
-->
### 驱动程序 {#driver}
卷快照类有一个驱动程序,用于确定配置 VolumeSnapshot 的 CSI 卷插件。
此字段必须指定。
<!--
### DeletionPolicy
Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot can either be `Retain` or `Delete`. This field must be specified.
If the deletionPolicy is `Delete`, then the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`, then both the underlying snapshot and VolumeSnapshotContent remain.
-->
### 删除策略 {#deletion-policy}
卷快照类具有 `deletionPolicy` 属性。用户可以配置当所绑定的 VolumeSnapshot
对象将被删除时,如何处理 VolumeSnapshotContent 对象。
卷快照的这个策略可以是 `Retain` 或者 `Delete`。这个策略字段必须指定。
如果删除策略是 `Delete`,那么底层的存储快照会和 VolumeSnapshotContent 对象
一起删除。如果删除策略是 `Retain`,那么底层快照和 VolumeSnapshotContent
对象都会被保留。
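例如,一个会保留底层存储快照的 VolumeSnapshotClass 可以写成
(名称与驱动仅为示例):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass-retain  # 假设的类名
driver: hostpath.csi.k8s.io            # 示例 CSI 驱动
deletionPolicy: Retain                 # 删除 VolumeSnapshot 时保留底层快照
```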
<!--
## Parameters
@ -95,6 +118,5 @@ the volume snapshot class. Different parameters may be accepted depending on the
-->
## 参数 {#parameters}
卷快照类具有描述属于该卷快照类的卷快照的参数,可根据 `driver` 接受不同的参数。


@ -18,7 +18,7 @@ weight: 20
In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/).
-->
在 Kubernetes 中,卷快照是一个存储系统上卷的快照,本文假设你已经熟悉了 Kubernetes
的 [持久卷](/zh/docs/concepts/storage/persistent-volumes/)。
<!-- body -->
@ -48,6 +48,13 @@ A `VolumeSnapshot` is a request for snapshot of a volume by a user. It is simila
-->
`VolumeSnapshotClass` 允许指定属于 `VolumeSnapshot` 的不同属性。在从存储系统的相同卷上获取的快照之间,这些属性可能有所不同,因此不能通过使用与 `PersistentVolumeClaim` 相同的 `StorageClass` 来表示。
<!--
Volume snapshots provide Kubernetes users with a standardized way to copy a volume's contents at a particular point in time without creating an entirely new volume. This functionality enables, for example, database administrators to backup databases before performing edit or delete modifications.
-->
卷快照能力为 Kubernetes 用户提供了一种标准的方式来在指定时间点
复制卷的内容,并且不需要创建全新的卷。例如,这一功能使得数据库管理员
能够在执行编辑或删除之类的修改之前对数据库执行备份。
<!--
Users need to be aware of the following when using this feature:
-->
@ -171,7 +178,8 @@ A volume snapshot can request a particular class by specifying the name of a
[VolumeSnapshotClass](/docs/concepts/storage/volume-snapshot-classes/)
using the attribute `volumeSnapshotClassName`. If nothing is set, then the default class is used if available.
-->
`persistentVolumeClaimName` 是快照数据源 `PersistentVolumeClaim` 的名称。
这个字段是动态制备快照时的必填字段。
卷快照可以通过指定 [VolumeSnapshotClass](/zh/docs/concepts/storage/volume-snapshot-classes/)
使用 `volumeSnapshotClassName` 属性来请求特定类。如果没有设置,那么使用默认类(如果有)。
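一个动态制备快照的 VolumeSnapshot 示意(类名与 PVC 名均为假设值):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-test
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass  # 假设已存在的快照类
  source:
    persistentVolumeClaimName: pvc-test            # 假设已存在的 PVC
```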
@ -182,14 +190,14 @@ For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName`
如下面例子所示,对于预配置的快照,需要给快照指定 `volumeSnapshotContentName` 来作为源。
对于预制备的快照,`source` 中的 `volumeSnapshotContentName` 字段是必填的。
```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
name: test-snapshot
spec:
source:
volumeSnapshotContentName: test-content
```
<!--
@ -260,5 +268,6 @@ the *dataSource* field in the `PersistentVolumeClaim` object.
For more details, see
[Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support).
-->
更多详细信息,请参阅
[卷快照和从快照还原卷](/zh/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)。
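作为示意,通过 `dataSource` 从快照创建新 PVC 的清单大致如下
(各名称均为假设值):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: csi-hostpath-sc   # 假设的存储类
  dataSource:
    name: new-snapshot-test           # 假设已存在的 VolumeSnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```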


@ -1,4 +1,4 @@
---
title: "工作负载资源"
weight: 20
---


@ -201,20 +201,20 @@ Follow the steps given below to create the above Deployment:
3. 要查看 Deployment 上线状态,运行 `kubectl rollout status deployment.v1.apps/nginx-deployment`
输出类似于:
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
```
<!--
4. Run the `kubectl get deployments` again a few seconds later. The output is similar to this:
-->
4. 几秒钟后再次运行 `kubectl get deployments`。输出类似于:
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 18s
```
<!--
Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.
@ -1504,7 +1504,7 @@ deployment.apps/nginx-deployment patched
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
attributes to the Deployment's `.status.conditions`:
-->
超过截止时间后Deployment 控制器将添加具有以下属性的 DeploymentCondition 到
Deployment 的 `.status.conditions` 中:
* Type=Progressing
@ -1514,7 +1514,9 @@ Deployment 的 `.status.conditions` 中:
<!--
See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions.
-->
参考
[Kubernetes API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties)
获取更多状态状况相关的信息。
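超过截止时间后Deployment 状态中的这一状况大致形如
(字段取值仅为示意):

```yaml
status:
  conditions:
    - type: Progressing
      status: "False"
      reason: ProgressDeadlineExceeded
      message: ReplicaSet "nginx-deployment-123456789" has timed out progressing.
      lastUpdateTime: "2020-11-25T00:00:00Z"
      lastTransitionTime: "2020-11-25T00:00:00Z"
```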
<!--
Kubernetes takes no action on a stalled Deployment other than to report a status condition with


@ -1,13 +1,13 @@
---
title: 垃圾收集
content_type: concept
weight: 60
---
<!--
title: Garbage Collection
content_type: concept
weight: 60
-->
<!-- overview -->


@ -5,7 +5,7 @@ feature:
title: 批量执行
description: >
除了服务之外Kubernetes 还可以管理你的批处理和 CI 工作负载,在期望时替换掉失效的容器。
weight: 50
---
<!--
reviewers:
@ -17,7 +17,7 @@ feature:
title: Batch execution
description: >
In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
weight: 50
-->
<!-- overview -->


@ -1,8 +1,17 @@
---
title: ReplicaSet
content_type: concept
weight: 20
---
<!--
reviewers:
- Kashomon
- bprashanth
- madhusudancs
title: ReplicaSet
content_type: concept
weight: 20
-->
<!-- overview -->
@ -18,30 +27,25 @@ ReplicaSet 的目的是维护一组在任何时候都处于运行状态的 Pod
<!--
## How a ReplicaSet works
A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number
of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods
it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating
and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
template.
-->
## ReplicaSet 的工作原理 {#how-a-replicaset-works}
ReplicaSet 是通过一组字段来定义的,包括一个用来识别可获得的 Pod
的集合的选择算符、一个用来标明应该维护的副本个数的数值、一个用来指定应该创建新 Pod
以满足副本个数条件时要使用的 Pod 模板等等。
每个 ReplicaSet 都通过根据需要创建和删除 Pod 以使得副本个数达到期望值,
进而实现其存在价值。当 ReplicaSet 需要创建新的 Pod 时,会使用所提供的 Pod 模板。
<!--
A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning
ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
knows of the state of the Pods it is maintaining and plans accordingly.
-->
ReplicaSet 通过 Pod 上的
[metadata.ownerReferences](/zh/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
@ -51,41 +55,14 @@ ReplicaSet 所获得的 Pod 都在其 ownerReferences 字段中包含了属主 R
并据此计划其操作行为。
<!--
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet.
-->
ReplicaSet 使用其选择算符来辨识要获得的 Pod 集合。如果某个 Pod 没有
OwnerReference 或者其 OwnerReference 不是一个
{{< glossary_tooltip text="控制器" term_id="controller" >}},且其匹配到
某 ReplicaSet 的选择算符,则该 Pod 立即被此 ReplicaSet 获得。
<!--
## How to use a ReplicaSet
Most [`kubectl`](/docs/user-guide/kubectl/) commands that support
Replication Controllers also support ReplicaSets. One exception is the
[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) command. If
you want the rolling update functionality please consider using Deployments
instead. Also, the
[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) command is
imperative whereas Deployments are declarative, so we recommend using Deployments
through the [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) command.
While ReplicaSets can be used independently, today it's mainly used by
[Deployments](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod
creation, deletion and updates. When you use Deployments you don't have to worry
about managing the ReplicaSets that they create. Deployments own and manage
their ReplicaSets.
-->
## 怎样使用 ReplicaSet {#how-to-use-a-replicaset}
大多数支持 Replication Controllers 的[`kubectl`](/zh/docs/reference/kubectl/kubectl/)命令也支持 ReplicaSets。但[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) 命令是个例外。如果您想要滚动更新功能请考虑使用 Deployment。[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) 命令是必需的,而 Deployment 是声明性的,因此我们建议通过 [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout)命令使用 Deployment。
虽然 ReplicaSets 可以独立使用,但今天它主要被[Deployments](/zh/docs/concepts/workloads/controllers/deployment/) 用作协调 Pod 创建、删除和更新的机制。
当您使用 Deployment 时,您不必担心还要管理它们创建的 ReplicaSet。Deployment 会拥有并管理它们的 ReplicaSet。
<!--
## When to use a ReplicaSet
@ -98,14 +75,16 @@ you require custom update orchestration or don't require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects:
use a Deployment instead, and define your application in the spec section.
-->
## 何时使用 ReplicaSet
ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行。
然而Deployment 是一个更高级的概念,它管理 ReplicaSet并向 Pod
提供声明式的更新以及许多其他有用的功能。
因此,我们建议使用 Deployment 而不是直接使用 ReplicaSet除非
你需要自定义更新业务流程或根本不需要更新。
这实际上意味着,你可能永远不需要操作 ReplicaSet 对象:而是使用
Deployment并在 spec 部分定义你的应用。
<!--
## Example
@ -118,151 +97,349 @@ ReplicaSet 确保任何时间都有指定数量的 Pod 副本在运行。
Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster should
create the defined ReplicaSet and the pods that it manages.
-->
将此清单保存到 `frontend.yaml` 中,并将其提交到 Kubernetes 集群,应该就能创建 yaml 文件所定义的 ReplicaSet 及其管理的 Pod。
```shell
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
```
<!--
You can then get the current ReplicaSets deployed:
-->
你可以看到当前被部署的 ReplicaSet
```shell
kubectl get rs
```
<!--
And see the frontend one you created:
-->
并看到你所创建的前端:
```
NAME DESIRED CURRENT READY AGE
frontend 3 3 3 6s
```
<!--
You can also check on the state of the ReplicaSet:
-->
你也可以查看 ReplicaSet 的状态:
```shell
kubectl describe rs/frontend
```
<!--
And you will see output similar to:
-->
你会看到类似如下的输出:
```
Name: frontend
Namespace: default
Selector: tier=frontend
Labels: app=guestbook
tier=frontend
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"annotations":{},"labels":{"app":"guestbook","tier":"frontend"},"name":"frontend",...
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: tier=frontend
Containers:
php-redis:
Image: gcr.io/google_samples/gb-frontend:v3
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 117s replicaset-controller Created pod: frontend-wtsmm
Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-b2zdv
Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-vcmts
```
<!--
And lastly you can check for the Pods brought up:
-->
最后可以查看启动了的 Pods
```shell
kubectl get pods
```
<!--
You should see Pod information similar to:
-->
你会看到类似如下的 Pod 信息:
```
NAME READY STATUS RESTARTS AGE
frontend-b2zdv 1/1 Running 0 6m36s
frontend-vcmts 1/1 Running 0 6m36s
frontend-wtsmm 1/1 Running 0 6m36s
```
<!--
You can also verify that the owner reference of these pods is set to the frontend ReplicaSet.
To do this, get the yaml of one of the Pods running:
-->
你也可以查看 Pods 的属主引用被设置为前端的 ReplicaSet。
要实现这点,可取回运行中的 Pods 之一的 YAML
```shell
kubectl get pods frontend-b2zdv -o yaml
```
<!--
The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's ownerReferences field:
-->
输出将类似这样frontend ReplicaSet 的信息被设置在 metadata 的
`ownerReferences` 字段中:
```yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-02-12T07:06:16Z"
generateName: frontend-
labels:
tier: frontend
name: frontend-b2zdv
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: frontend
uid: f391f6db-bb9b-4c09-ae74-6a1f77f3d5cf
...
```
<!--
## Non-Template Pod acquisitions
While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have
labels which match the selector of one of your ReplicaSets. The reason for this is because a ReplicaSet is not limited
to owning Pods specified by its template - it can acquire other Pods in the manner specified in the previous sections.
-->
## 非模板 Pod 的获得
<!--
Take the previous frontend ReplicaSet example, and the Pods specified in the
following manifest:
-->
尽管你完全可以直接创建裸的 Pods强烈建议你确保这些裸的 Pods 并不包含可能与你
的某个 ReplicaSet 的选择算符相匹配的标签。原因在于 ReplicaSet 并不仅限于拥有
在其模板中设置的 Pods它还可以像前面小节中所描述的那样获得其他 Pods。
{{< codenew file="pods/pod-rs.yaml" >}}
<!--
As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
ReplicaSet, they will immediately be acquired by it.
Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to
fulfill its replica count requirement:
-->
由于这些 Pod 没有控制器Controller或其他对象作为其属主引用并且
其标签与 frontend ReplicaSet 的选择算符匹配,它们会立即被该 ReplicaSet
获取。
假定你在 frontend ReplicaSet 已经被部署之后创建 Pods并且你已经在 ReplicaSet
中设置了其初始的 Pod 副本数以满足其副本计数需要:
```shell
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
```
<!--
The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over
its desired count.
Fetching the Pods:
-->
新的 Pods 会被该 ReplicaSet 获取,并立即被 ReplicaSet 终止,因为
它们的存在会使得 ReplicaSet 中 Pod 个数超出其期望值。
取回 Pods
```shell
kubectl get pods
```
<!--
The output shows that the new Pods are either already terminated, or in the process of being terminated:
-->
输出显示新的 Pods 或者已经被终止,或者处于终止过程中:
```shell
NAME READY STATUS RESTARTS AGE
frontend-b2zdv 1/1 Running 0 10m
frontend-vcmts 1/1 Running 0 10m
frontend-wtsmm 1/1 Running 0 10m
pod1 0/1 Terminating 0 1s
pod2 0/1 Terminating 0 1s
```
<!--
If you create the Pods first:
-->
如果你先行创建 Pods
```shell
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
```
<!--
And then create the ReplicaSet however:
-->
之后再创建 ReplicaSet
```shell
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
```
<!--
You shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
number of its new Pods and the original matches its desired count. As fetching the Pods:
-->
你会看到 ReplicaSet 已经获得了该 Pods并仅根据其规约创建新的 Pods直到
新的 Pods 和原来的 Pods 的总数达到其预期个数。
这时取回 Pods
```shell
kubectl get pods
```
<!--
Will reveal in its output:
-->
将会生成下面的输出:
```
NAME READY STATUS RESTARTS AGE
frontend-hmmj2 1/1 Running 0 9s
pod1 1/1 Running 0 36s
pod2 1/1 Running 0 36s
```
采用这种方式,一个 ReplicaSet 中可以包含异质的 Pods 集合。
<!--
## Writing a ReplicaSet Spec
As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. For
general information about working with manifests, see [object management using kubectl](/docs/concepts/overview/object-management-kubectl/overview/).
The name of a ReplicaSet object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).
-->
## 编写 ReplicaSet 的 Spec
与所有其他 Kubernetes API 对象一样ReplicaSet 也需要 `apiVersion`、`kind`、和 `metadata` 字段。
对于 ReplicaSets 而言,其 kind 始终是 ReplicaSet。
在 Kubernetes 1.9 中ReplicaSet 上的 API 版本 `apps/v1` 是其当前版本,且被
默认启用。API 版本 `apps/v1beta2` 已被废弃。
参考 `frontend.yaml` 示例的前几行作为指导。
与所有其他 Kubernetes API 对象一样ReplicaSet 也需要 `apiVersion`、`kind`、和 `metadata` 字段。有关使用清单的一般信息,请参见 [使用 kubectl 管理对象](/zh/docs/concepts/overview/working-with-objects/object-management/)。
ReplicaSet 对象的名称必须是合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
ReplicaSet 也需要 [`.spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) 部分。
<!--
### Pod Template
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates) which is also
required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.

For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field,
`.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default.
-->
### Pod 模板

`.spec.template` 是一个 [Pod 模板](/zh/docs/concepts/workloads/pods/#pod-templates),
要求设置标签。在 `frontend.yaml` 示例中,我们指定了标签 `tier: frontend`。
注意不要将标签与其他控制器的选择算符重叠,否则那些控制器会尝试收养此 Pod。

对于模板的[重启策略](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
字段 `.spec.template.spec.restartPolicy`,唯一允许的取值是 `Always`,这也是默认值。
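下面给出一个 Pod 模板片段的示意(仅作说明,标签和镜像沿用前文 `frontend.yaml` 示例中的假设):

```yaml
# 仅为示意:ReplicaSet 规约中的 Pod 模板片段
template:
  metadata:
    labels:
      tier: frontend            # 模板必须设置标签,且要与 .spec.selector 匹配
  spec:
    restartPolicy: Always       # 唯一允许的取值,也是默认值,因此可以省略
    containers:
    - name: php-redis
      image: gcr.io/google_samples/gb-frontend:v3
```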
<!--
### Pod Selector
The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). As discussed
[earlier](#how-a-replicaset-works) these are the labels used to identify potential Pods to acquire. In our
`frontend.yaml` example, the selector was:

```yaml
matchLabels:
  tier: frontend
```

In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or it will
be rejected by the API.
-->
### Pod 选择算符 {#pod-selector}

`.spec.selector` 字段是一个[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)。
如前文中[所讨论的](#how-a-replicaset-works),这些是用来标识要被获取的 Pods
的标签。在前面的 `frontend.yaml` 示例中,选择算符为:

```yaml
matchLabels:
  tier: frontend
```

在 ReplicaSet 中,`.spec.template.metadata.labels` 的值必须与 `spec.selector`
相匹配,否则该配置会被 API 拒绝。
{{< note >}}
<!--
For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
-->
对于设置了相同的 `.spec.selector`,但
`.spec.template.metadata.labels``.spec.template.spec` 字段不同的
两个 ReplicaSet 而言,每个 ReplicaSet 都会忽略被另一个 ReplicaSet 所
创建的 Pods。
{{< /note >}}
<!--
Also you should not normally create any pods whose labels match this selector, either directly, with
another ReplicaSet, or with another controller such as a Deployment. If you do so, the ReplicaSet thinks that it
created the other pods. Kubernetes does not stop you from doing this.
If you do end up with multiple controllers that have overlapping selectors, you
will have to manage the deletion yourself.
-->
另外,你通常不应该创建任何标签与此选择算符匹配的 Pod,无论是直接创建,
还是借助另一个 ReplicaSet 或另一个控制器(如 Deployment)创建。
如果你这样做,ReplicaSet 会认为那些 Pod 是它创建的。Kubernetes 并不会阻止你这样做。
如果你最终使用了多个具有重叠选择算符的控制器,则必须自行处理删除操作。
<!--
### Labels on a ReplicaSet
The ReplicaSet can itself have labels (`.metadata.labels`). Typically, you
would set these the same as the `.spec.template.metadata.labels`. However, they are allowed to be
different, and the `.metadata.labels` do not affect the behavior of the ReplicaSet.
### Replicas
You can specify how many Pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete
its Pods to match this number.
If you do not specify `.spec.replicas`, then it defaults to 1.
-->
### Replicas
你可以通过设置 `.spec.replicas` 来指定要同时运行的 Pod 个数。
ReplicaSet 会创建或删除 Pod,以与此值匹配。

如果你没有指定 `.spec.replicas`,那么默认值为 1。
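例如,下面的片段(仅为示意)将 ReplicaSet 的期望副本个数设置为 3:

```yaml
# 仅为示意:ReplicaSet 规约中的副本个数设置
spec:
  replicas: 3        # 期望同时运行 3 个 Pod;省略该字段时默认值为 1
```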
<!--
## Working with ReplicaSets
@ -273,58 +450,63 @@ To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/referen
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option. e.g. :
-->
## 使用 ReplicaSets
### 删除 ReplicaSet 和它的 Pod
要删除 ReplicaSet 和它的所有 Pod使用[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令。
默认情况下,[垃圾收集器](/zh/docs/concepts/workloads/controllers/garbage-collection/) 自动删除所有依赖的 Pod。
要删除 ReplicaSet 和它的所有 Pod使用
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) 命令。
默认情况下,[垃圾收集器](/zh/docs/concepts/workloads/controllers/garbage-collection/)
自动删除所有依赖的 Pod。
当使用 REST API 或 `client-go` 库时,你必须在删除选项中将 `propagationPolicy`
设置为 `Background``Foreground`。例如:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
<!--
### Deleting just a ReplicaSet
You can delete a ReplicaSet without affecting any of its pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
For example:
-->
### 只删除 ReplicaSet
你可以只删除 ReplicaSet 而不影响它的 Pod方法是使用
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete)
命令并设置 `--cascade=false` 选项。
当使用 REST API 或 `client-go` 库时,你必须将 `propagationPolicy` 设置为 `Orphan`
例如:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
<!--
Once the original is deleted, you can create a new ReplicaSet to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
However, it will not make any effort to make existing pods match a new, different pod template.
To update Pods to a new spec in a controlled way, use a
[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as ReplicaSets do not support a rolling update directly.
-->
一旦删除了原来的 ReplicaSet就可以创建一个新的来替换它。
由于新旧 ReplicaSet 的 `.spec.selector` 是相同的,新的 ReplicaSet 将接管老的 Pod。
但是,它不会努力使现有的 Pod 与新的、不同的 Pod 模板匹配。
若想要以可控的方式更新 Pod 的规约,可以使用
[Deployment](/zh/docs/concepts/workloads/controllers/deployment/#creating-a-deployment)
资源,因为 ReplicaSet 并不直接支持滚动更新。
<!--
### Isolating pods from a ReplicaSet
Pods may be removed from a ReplicaSet's target set by changing their labels. This technique may be used to remove pods
@ -332,10 +514,10 @@ from service for debugging, data recovery, etc. Pods that are removed in this wa
assuming that the number of replicas is not also changed).
-->
### 将 Pod 从 ReplicaSet 中隔离
可以通过改变标签来从 ReplicaSet 的目标集中移除 Pod。
这种技术可以用来从服务中去除 Pod以便进行排错、数据恢复等。
以这种方式移除的 Pod 将被自动替换(假设副本的数量没有改变)。
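例如,假设要隔离的 Pod 名为 `frontend-b2zdv`(名称仅为示意),而 ReplicaSet
的选择算符为 `tier=frontend`,则可以用类似下面的命令修改其标签,使之脱离该
ReplicaSet 的目标集:

```shell
# 修改 tier 标签后,该 Pod 不再与 ReplicaSet 的选择算符匹配,
# ReplicaSet 会自动创建一个新的 Pod 来替换它
kubectl label pod frontend-b2zdv tier=debug --overwrite
```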
<!--
@ -344,10 +526,10 @@ from service for debugging, data recovery, etc. Pods that are removed in this wa
A ReplicaSet can be easily scaled up or down by simply updating the `.spec.replicas` field. The ReplicaSet controller
ensures that a desired number of pods with a matching label selector are available and operational.
-->
### 缩放 ReplicaSet

通过更新 `.spec.replicas` 字段,ReplicaSet 可以被轻松地缩放。ReplicaSet
控制器能确保与标签选择算符匹配的 Pod 个数符合预期,并且这些 Pod 可用、可操作。
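除了直接编辑清单文件,也可以使用 `kubectl scale` 命令(此处沿用前文示例中的
ReplicaSet 名称,仅作说明)来更新 `.spec.replicas` 字段:

```shell
# 将名为 frontend 的 ReplicaSet 缩放到 5 个副本
kubectl scale rs frontend --replicas=5
```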
<!--
### ReplicaSet as an Horizontal Pod Autoscaler Target
@ -357,13 +539,13 @@ A ReplicaSet can also be a target for
a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
the ReplicaSet we created in the previous example.
-->
### ReplicaSet 作为水平的 Pod 自动缩放器目标
ReplicaSet 也可以作为
[水平的 Pod 缩放器 (HPA)](/zh/docs/tasks/run-application/horizontal-pod-autoscale/)
的目标。也就是说ReplicaSet 可以被 HPA 自动缩放。
以下是一个 HPA 示例,它以我们在前一个示例中创建的 ReplicaSet 为扩缩目标。
{{< codenew file="controllers/hpa-rs.yaml" >}}
<!--
@ -371,23 +553,21 @@ Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluste
create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
of the replicated pods.
-->
将这个清单保存到 `hpa-rs.yaml` 并提交到 Kubernetes 集群,就能创建它所定义的
HPA,进而就能根据副本 Pod 的 CPU 利用率对目标 ReplicaSet 进行自动缩放。
```shell
kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml
```
<!--
Alternatively, you can use the `kubectl autoscale` command to accomplish the same
(and it's easier!)
-->
或者,可以使用 `kubectl autoscale` 命令完成相同的操作(而且它更简单!)
```shell
kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50
```
<!--
@ -395,43 +575,49 @@ kubectl autoscale rs frontend
### Deployment (Recommended)
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update
them and their Pods via declarative, server-side rolling updates.
While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod
creation, deletion and updates. When you use Deployments you don't have to worry about managing the ReplicaSets that
they create. Deployments own and manage their ReplicaSets.
As such, it is recommended to use Deployments when you want ReplicaSets.
-->
## ReplicaSet 的替代方案
### Deployment (推荐)
[`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/) 是一个
可以拥有 ReplicaSet 并使用声明式方式在服务器端完成对 Pods 滚动更新的对象。
尽管 ReplicaSet 可以独立使用,目前它们的主要用途是提供给 Deployment 作为
编排 Pod 创建、删除和更新的一种机制。当使用 Deployment 时,你不必关心
如何管理它所创建的 ReplicaSetDeployment 拥有并管理其 ReplicaSet。
因此,建议你在需要 ReplicaSet 时使用 Deployment。
<!--
### Bare Pods
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
-->
### 裸 Pod
与用户直接创建 Pod 的情况不同ReplicaSet 会替换那些由于某些原因被删除或被终止的 Pod例如在节点故障或破坏性的节点维护如内核升级的情况下。
因为这个好处,我们建议您使用 ReplicaSet即使应用程序只需要一个 Pod。
想像一下ReplicaSet 类似于进程监视器,只不过它在多个节点上监视多个 Pod而不是在单个节点上监视单个进程。
与用户直接创建 Pod 的情况不同ReplicaSet 会替换那些由于某些原因被删除或被终止的
Pod例如在节点故障或破坏性的节点维护如内核升级的情况下。
因为这个原因,我们建议你使用 ReplicaSet即使应用程序只需要一个 Pod。
想像一下ReplicaSet 类似于进程监视器,只不过它在多个节点上监视多个 Pod
而不是在单个节点上监视单个进程。
ReplicaSet 将本地容器重启的任务委托给了节点上的某个代理例如Kubelet 或 Docker去完成。
<!--
### Job
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are expected to terminate on their own
(that is, batch jobs).
-->
### Job
使用[`Job`](/zh/docs/concepts/workloads/controllers/job/) 代替ReplicaSet可以用于那些期望自行终止的 Pod。
使用[`Job`](/zh/docs/concepts/workloads/controllers/job/) 代替ReplicaSet
可以用于那些期望自行终止的 Pod。
<!--
### DaemonSet
@ -441,11 +627,25 @@ machine-level function, such as machine monitoring or machine logging. These po
to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
-->
### DaemonSet
对于管理那些提供主机级别功能(如主机监控和主机日志)的容器,
就要用 [`DaemonSet`](/zh/docs/concepts/workloads/controllers/daemonset/)
而不用 ReplicaSet。
这些 Pod 的寿命与主机寿命有关:这些 Pod 需要先于主机上的其他 Pod 运行,
并且在机器准备重新启动/关闭时安全地终止。
### ReplicationController
<!--
ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/).
The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based
selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
As such, ReplicaSets are preferred over ReplicationControllers.
-->
ReplicaSet 是 [ReplicationController](/zh/docs/concepts/workloads/controllers/replicationcontroller/)
的后继者。二者目的相同且行为类似,只是 ReplicationController 不支持
[标签用户指南](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)
中讨论的基于集合的选择算符需求。
因此,相比于 ReplicationController应优先考虑 ReplicaSet。


@ -7,7 +7,7 @@ feature:
重新启动失败的容器,在节点死亡时替换并重新调度容器,杀死不响应用户定义的健康检查的容器,并且在它们准备好服务之前不会将它们公布给客户端。
content_type: concept
weight: 90
---
<!--
@ -22,7 +22,7 @@ feature:
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
content_type: concept
weight: 90
-->
<!-- overview -->
@ -499,7 +499,7 @@ API object can be found at:
### ReplicaSet
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
Its mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate Pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or dont require updates at all.
-->
## ReplicationController 的替代方案
@ -508,8 +508,10 @@ Note that we recommend using Deployments instead of directly using Replica Sets,
[`ReplicaSet`](/zh/docs/concepts/workloads/controllers/replicaset/) 是下一代 ReplicationController
支持新的[基于集合的标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#set-based-requirement)。
它主要被 [`Deployment`](/zh/docs/concepts/workloads/controllers/deployment/)
用来作为一种编排 Pod 创建、删除及更新的机制。
请注意,我们推荐使用 Deployment 而不是直接使用 ReplicaSet除非
你需要自定义更新编排或根本不需要更新。
<!--
### Deployment (Recommended)


@ -276,13 +276,13 @@ from a _pod template_ and manage those Pods on your behalf.
PodTemplates are specifications for creating Pods, and are included in workload resources such as
[Deployments](/docs/concepts/workloads/controllers/deployment/),
[Jobs](/docs/concepts/workloads/controllers/job/), and
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/).
-->
### Pod 模版 {#pod-templates}
{{< glossary_tooltip text="负载" term_id="workload" >}}资源的控制器通常使用
_Pod 模板Pod Template_ 来替你创建 Pod 并管理它们。
Pod 模板是包含在工作负载对象中的规范,用来创建 Pod。这类负载资源包括
[Deployment](/zh/docs/concepts/workloads/controllers/deployment/)、
@ -405,7 +405,7 @@ or POSIX shared memory. Containers in different Pods have distinct IP addresses
and can not communicate by IPC without
[special configuration](/docs/concepts/policy/pod-security-policy/).
Containers that want to interact with a container running in a different Pod can
use IP networking to communicate.
-->
在同一个 Pod 内,所有容器共享一个 IP 地址和端口空间,并且可以通过 `localhost` 发现对方。
它们也能通过如 SystemV 信号量或 POSIX 共享内存这类标准的进程间通信方式互相通信。
@ -487,7 +487,7 @@ but cannot be controlled from there.
<!--
* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
* Learn about [PodPresets](/docs/concepts/workloads/pods/podpreset/).
* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
configure different Pods with different container runtime configurations.
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
* Read about [PodDisruptionBudget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions.
@ -506,7 +506,8 @@ but cannot be controlled from there.
* Pod 在 Kubernetes REST API 中是一个顶层资源;
[Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
对象的定义中包含了更多的细节信息。
* 博客 [分布式系统工具箱:复合容器模式](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)
中解释了在同一 Pod 中包含多个容器时的几种常见布局。
<!--
To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}, you can read about the prior art, including:
@ -516,9 +517,9 @@ To understand the context for why Kubernetes wraps a common Pod API in other res
或 {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
封装通用的 Pod API相关的背景信息可以在前人的研究中找到。具体包括
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).


@ -3,20 +3,29 @@ approvers:
- erictune
title: Init 容器
content_type: concept
weight: 40
---
<!---
reviewers:
- erictune
title: Init Containers
content_type: concept
weight: 40
-->
<!-- overview -->
<!--
This page provides an overview of init containers: specialized containers that run
before app containers in a {{< glossary_tooltip text="Pod" term_id="pod" >}}.
Init containers can contain utilities or setup scripts not present in an app image.
-->
本页提供了 Init 容器的概览。Init 容器是一种特殊容器,在 {{< glossary_tooltip text="Pod" term_id="pod" >}}
内的应用容器启动之前运行。Init 容器可以包括一些应用镜像中不存在的实用工具和安装脚本。
<!--
You can specify init containers in the Pod specification alongside the `containers`
array (which describes app containers).
-->
你可以在 Pod 的规约中与用来描述应用容器的 `containers` 数组平行的位置指定
Init 容器。
@ -26,7 +35,9 @@ Init 容器。
<!--
## Understanding init containers
A {{< glossary_tooltip text="Pod" term_id="pod" >}} can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
A {{< glossary_tooltip text="Pod" term_id="pod" >}} can have multiple containers
running apps within it, but it can also have one or more init containers, which are run
before the app containers are started.
-->
## 理解 Init 容器
@ -35,6 +46,7 @@ A {{< glossary_tooltip text="Pod" term_id="pod" >}} can have multiple containers
<!--
Init containers are exactly like regular containers, except:
* Init containers always run to completion.
* Each init container must complete successfully before the next one starts.
-->
@ -44,15 +56,20 @@ Init 容器与普通的容器非常像,除了如下两点:
* 每个都必须在下一个启动之前成功完成。
<!--
If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.
-->
如果 Pod 的 Init 容器失败,kubelet 会不断地重启该 Init 容器直到该容器成功为止。
然而,如果 Pod 对应的 `restartPolicy` 值为 "Never",并且 Pod 启动时 Init 容器失败,
Kubernetes 会将整个 Pod 视为失败。
<!--
To specify an init container for a Pod, add the `initContainers` field into
the Pod specification, as an array of objects of type
[Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core),
alongside the app `containers` array.
The status of the init containers is returned in `.status.initContainerStatuses`
field as an array of the container statuses (similar to the `.status.containerStatuses`
field).
-->
为 Pod 设置 Init 容器需要在 Pod 的 `spec` 中添加 `initContainers` 字段,
该字段以 [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
@ -62,9 +79,19 @@ Init 容器的状态在 `status.initContainerStatuses` 字段中以容器状态
<!--
### Differences from regular containers
Init containers support all the fields and features of app containers,
including resource limits, volumes, and security settings. However, the
resource requests and limits for an init container are handled differently,
as documented in [Resources](#resources).
Also, init containers do not support `lifecycle`, `livenessProbe`, `readinessProbe`, or
`startupProbe` because they must run to completion before the Pod can be ready.
If you specify multiple init containers for a Pod, Kubelet runs each init
container sequentially. Each init container must succeed before the next can run.
When all of the init containers have run to completion, Kubelet initializes
the application containers for the Pod and runs them as usual.
-->
### 与普通容器的不同之处
@ -80,12 +107,24 @@ Kubernetes 才会为 Pod 初始化应用容器并像平常一样运行。
<!--
## Using init containers
Because init containers have separate images from app containers, they
have some advantages for start-up related code:
* Init containers can contain utilities or custom code for setup that are not present in an app
image. For example, there is no need to make an image `FROM` another image just to use a tool like
`sed`, `awk`, `python`, or `dig` during setup.
* The application image builder and deployer roles can work independently without
the need to jointly build a single app image.
* Init containers can run with a different view of the filesystem than app containers in the
same Pod. Consequently, they can be given access to
{{< glossary_tooltip text="Secrets" term_id="secret" >}} that app containers cannot access.
* Because init containers run to completion before any app containers start, init containers offer
a mechanism to block or delay app container startup until a set of preconditions are met. Once
preconditions are met, all of the app containers in a Pod can start in parallel.
* Init containers can securely run utilities or custom code that would otherwise make an app
container image less secure. By keeping unnecessary tools separate you can limit the attack
surface of your app container image.
-->
## 使用 Init 容器
@ -108,12 +147,15 @@ Because init containers have separate images from app containers, they have some
<!--
### Examples
Here are some ideas for how to use init containers:
* Wait for a {{< glossary_tooltip text="Service" term_id="service">}} to
be created, using a shell one-line command like:
```shell
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
```
* Register this Pod with a remote server from the downward API with a command like:
```shell
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
@ -124,7 +166,11 @@ Here are some ideas for how to use init containers:
```
* Clone a Git repository into a {{< glossary_tooltip text="Volume" term_id="volume" >}}
* Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app container. For example, place the `POD_IP` value in a configuration and generate the main app configuration file using Jinja.
* Place values into a configuration file and run a template tool to dynamically
generate a configuration file for the main app container. For example,
place the `POD_IP` value in a configuration and generate the main app
configuration file using Jinja.
-->
### 示例 {#examples}
@ -156,24 +202,10 @@ Here are some ideas for how to use init containers:
<!--
#### Init containers in use
This example defines a simple Pod that has two init containers. The first waits for `myservice`, and the second waits for `mydb`. Once both init containers complete, the Pod runs the app container from its `spec` section.
```yaml
```
The following YAML file outlines the `mydb` and `myservice` services:
```yaml
```
You can start this Pod by running:
```shell
```
And check on its status with:
```shell
```
This example defines a simple Pod that has two init containers.
The first waits for `myservice`, and the second waits for `mydb`. Once both
init containers complete, the Pod runs the app container from its `spec` section.
-->
### 使用 Init 容器的情况
@ -201,63 +233,40 @@ spec:
command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
```
下面的 YAML 文件展示了 `mydb` 和 `myservice` 两个 Service
```yaml
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9376
---
kind: Service
apiVersion: v1
metadata:
name: mydb
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9377
```
要启动这个 Pod可以执行如下命令
<!--
You can start this Pod by running:
-->
你通过运行下面的命令启动 Pod
```shell
kubectl apply -f myapp.yaml
```
输出为:
```
pod/myapp-pod created
```
要检查其状态:
<!--
And check on its status with:
-->
使用下面的命令检查其状态:
```shell
kubectl get -f myapp.yaml
```
输出类似于:
```
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 6m
```
如需更详细的信息:
<!--
or for more details:
-->
或者查看更多详细信息:
```shell
kubectl describe -f myapp.yaml
```
输出类似于:
```
Name: myapp-pod
Namespace: default
@ -293,6 +302,9 @@ Events:
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container with docker id 5ced34a04634
```
<!--
To see logs for the init containers in this Pod, run:
-->
如需查看 Pod 内 Init 容器的日志,请执行:
```shell
@ -301,7 +313,8 @@ kubectl logs myapp-pod -c init-mydb # 查看第二个 Init 容器
```
<!--
At this point, those init containers will be waiting to discover Services named `mydb` and `myservice`.
At this point, those init containers will be waiting to discover Services named
`mydb` and `myservice`.
Here's a configuration you can use to make those Services appear:
-->
@ -332,23 +345,27 @@ spec:
targetPort: 9377
```
<!--
To create the `mydb` and `myservice` services:
-->
创建 `mydb` 和 `myservice` 服务的命令:
```shell
kubectl create -f services.yaml
```
输出类似于:
```
service "myservice" created
service "mydb" created
```
<!--
You'll then see that those init containers complete, and that the `myapp-pod`
Pod moves into the Running state:
-->
这样你将能看到这些 Init 容器执行完毕,随后 `myapp-pod` Pod 进入 `Running` 状态:
```shell
$ kubectl get -f myapp.yaml
kubectl get -f myapp.yaml
```
```
@ -356,32 +373,43 @@ NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 9m
```
一旦我们启动了 `mydb` 和 `myservice` 这两个服务,我们能够看到 Init 容器完成,
并且 `myapp-pod` 被创建。
<!--
This simple example should provide some inspiration for you to create your own init containers. [What's next](#what-s-next) contains a link to a more detailed example.
This simple example should provide some inspiration for you to create your own
init containers. [What's next](#whats-next) contains a link to a more detailed example.
-->
这个简单例子应该能为你创建自己的 Init 容器提供一些启发。
[接下来](#what-s-next)节提供了更详细例子的链接。
[接下来](#whats-next)节提供了更详细例子的链接。
<!--
## Detailed behavior
During the startup of a Pod, each init container starts in order, after the network and volumes are initialized. Each container must exit successfully before the next container starts. If a container fails to start due to the runtime or exits with failure, it is retried according to the Pod `restartPolicy`. However, if the Pod `restartPolicy` is set to Always, the init containers use `restartPolicy` OnFailure.
During Pod startup, the kubelet delays running init containers until the networking
and storage are ready. Then the kubelet runs the Pod's init containers in the order
they appear in the Pod's spec.
A Pod cannot be `Ready` until all init containers have succeeded. The ports on an init container are not aggregated under a Service. A Pod that is initializing
is in the `Pending` state but should have a condition `Initializing` set to true.
Each init container must exit successfully before
the next container starts. If a container fails to start due to the runtime or
exits with failure, it is retried according to the Pod `restartPolicy`. However,
if the Pod `restartPolicy` is set to Always, the init containers use
`restartPolicy` OnFailure.
If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers must execute again.
A Pod cannot be `Ready` until all init containers have succeeded. The ports on an
init container are not aggregated under a Service. A Pod that is initializing
is in the `Pending` state but should have a condition `Initialized` set to true.
If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers
must execute again.
-->
## 具体行为 {#detailed-behavior}
在 Pod 启动过程中,每个 Init 容器在网络和数据卷初始化之后会按顺序启动。
在 Pod 启动过程中,每个 Init 容器会在网络和数据卷初始化之后按顺序启动。
kubelet 依据 Init 容器在 Pod 规约中出现的顺序依次运行它们。
每个 Init 容器成功退出后才会启动下一个 Init 容器。
如果它们因为容器运行时的原因无法启动,或以错误状态退出,它会根据 Pod 的 `restartPolicy` 策略进行重试。
然而,如果 Pod 的 `restartPolicy` 设置为 "Always"Init 容器失败时会使用 `restartPolicy`
的 "OnFailure" 策略。
如果某容器因为容器运行时的原因无法启动或以错误状态退出kubelet 会根据
Pod 的 `restartPolicy` 策略进行重试。
然而,如果 Pod 的 `restartPolicy` 设置为 "Always"Init 容器失败时会使用
`restartPolicy` 的 "OnFailure" 策略。
在所有的 Init 容器没有成功之前Pod 将不会变成 `Ready` 状态。
Init 容器的端口将不会在 Service 中进行聚集。正在初始化中的 Pod 处于 `Pending` 状态,
@ -390,11 +418,17 @@ Init 容器的端口将不会在 Service 中进行聚集。正在初始化中的
如果 Pod [重启](#pod-restart-reasons),所有 Init 容器必须重新执行。
<!--
Changes to the init container spec are limited to the container image field. Altering an init container image field is equivalent to restarting the Pod.
Changes to the init container spec are limited to the container image field.
Altering an init container image field is equivalent to restarting the Pod.
Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on `EmptyDirs` should be prepared for the possibility that an output file already exists.
Because init containers can be restarted, retried, or re-executed, init container
code should be idempotent. In particular, code that writes to files on `EmptyDirs`
should be prepared for the possibility that an output file already exists.
Init containers have all of the fields of an app container. However, Kubernetes
prohibits `readinessProbe` from being used because init containers cannot
define readiness distinct from completion. This is enforced during validation.
Init containers have all of the fields of an app container. However, Kubernetes prohibits `readinessProbe` from being used because init containers cannot define readiness distinct from completion. This is enforced during validation.
-->
对 Init 容器规约的修改仅限于容器的 `image` 字段。
更改 Init 容器的 `image` 字段,等同于重启该 Pod。
@ -407,9 +441,12 @@ Init 容器具有应用容器的所有字段。然而 Kubernetes 禁止使用 `r
Kubernetes 会在校验时强制执行此检查。
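<!--
Because init containers can be re-executed, code that writes to an `emptyDir`
volume should tolerate existing output. A minimal sketch (paths and variables
are illustrative, not part of the example above):
-->
由于 Init 容器可能被重新执行,向 `emptyDir` 卷写入数据的代码应能容忍输出文件已存在的情况。
下面是一个最小示意(路径与变量均为演示用途,并非上文示例的一部分):

```shell
# 假设 OUTPUT 位于与应用容器共享的 emptyDir 卷上(此处用 /tmp 演示)
OUTPUT="${OUTPUT:-/tmp/init-demo/config.rendered}"
mkdir -p "$(dirname "$OUTPUT")"
# 先写入临时文件,再原子地移动到目标位置;重复执行时只是简单覆盖
tmp="$(mktemp)"
printf 'pod_ip=%s\n' "${POD_IP:-unknown}" > "$tmp"
mv -f "$tmp" "$OUTPUT"
```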
<!--
Use `activeDeadlineSeconds` on the Pod and `livenessProbe` on the container to prevent init containers from failing forever. The active deadline includes init containers.
Use `activeDeadlineSeconds` on the Pod and `livenessProbe` on the container to
prevent init containers from failing forever. The active deadline includes init
containers.
The name of each app and init container in a Pod must be unique; avalidation error is thrown for any container sharing a name with another.
The name of each app and init container in a Pod must be unique; a
validation error is thrown for any container sharing a name with another.
-->
在 Pod 上使用 `activeDeadlineSeconds` 和在容器上使用 `livenessProbe` 可以避免
Init 容器一直重复失败。`activeDeadlineSeconds` 时间包含了 Init 容器启动的时间。
@ -418,17 +455,21 @@ Init 容器一直重复失败。`activeDeadlineSeconds` 时间包含了 Init 容
与任何其它容器共享同一个名称,会在校验时抛出错误。
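<!--
A minimal sketch (names and image are illustrative): `activeDeadlineSeconds`
bounds the whole Pod, including the time spent in init containers.
-->
下面是一个最小示意(名称与镜像均为示例值):`activeDeadlineSeconds`
约束的是整个 Pod包括 Init 容器所消耗的时间在内:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-deadline-demo
spec:
  activeDeadlineSeconds: 300   # 包括 Init 容器在内Pod 最多运行 300 秒
  initContainers:
  - name: wait-for-db
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
```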
<!--
### Resources
Given the ordering and execution for init containers, the following rules for resource usage apply:
* The highest of any particular resource request or limit defined on all init containers is the *effective init request/limit*
### Resources
Given the ordering and execution for init containers, the following rules
for resource usage apply:
* The highest of any particular resource request or limit defined on all init
containers is the *effective init request/limit*
* The Pod's *effective request/limit* for a resource is the higher of:
* the sum of all app containers request/limit for a resource
* the effective init request/limit for a resource
* Scheduling is done based on effective requests/limits, which means init containers can reserve resources for initialization that are not used during the life of the Pod.
* The QoS (quality of service) tier of the Pod's *effective QoS tier* is the QoS tier for init containers and app containers alike.
Quota and limits are applied based on the effective Pod request and limit.
Pod level control groups (cgroups) are based on the effective Pod request and limit, the same as the scheduler.
* Scheduling is done based on effective requests/limits, which means
init containers can reserve resources for initialization that are not used
during the life of the Pod.
* The QoS (quality of service) tier of the Pod's *effective QoS tier* is the
QoS tier for init containers and app containers alike.
-->
### 资源 {#resources}
@ -442,15 +483,27 @@ Pod level control groups (cgroups) are based on the effective Pod request and li
这些资源在 Pod 生命周期过程中并没有被使用。
* Pod 的 *有效 QoS 层* ,与 Init 容器和应用容器的一样。
配额和限制适用于有效 Pod的 limit/request。
Pod 级别的 cgroups 是基于有效 Pod 的 limit/request和调度器相同。
<!--
Quota and limits are applied based on the effective Pod request and limit.
Pod level control groups (cgroups) are based on the effective Pod request and limit, the same as the scheduler.
-->
配额和限制适用于有效 Pod 的请求和限制值。
Pod 级别的 cgroups 是基于有效 Pod 的请求和限制值,和调度器相同。
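<!--
The rules above can be illustrated with a short sketch (all resource values are
hypothetical):
-->
上述规则可以用下面的简短示例来核算(资源数值均为假设值):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  initContainers:
  - name: init-a
    image: busybox:1.28
    command: ['sh', '-c', 'true']
    resources:
      requests:
        cpu: "200m"   # 所有 Init 容器中的最大值,即有效 init request 为 200m
  - name: init-b
    image: busybox:1.28
    command: ['sh', '-c', 'true']
    resources:
      requests:
        cpu: "100m"
  containers:
  - name: app-a
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: "50m"
  - name: app-b
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: "100m"
# 应用容器 request 之和为 150m有效 init request 为 200m
# Pod 的有效 CPU request 取二者中较大者,即 200m
```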
<!--
### Pod restart reasons
A Pod can restart, causing re-execution of init containers, for the following reasons:
* A user updates the Pod specification, causing the init container image to change. Any changes to the init container image restarts the Pod. App container image changes only restart the app container.
* The Pod infrastructure container is restarted. This is uncommon and would have to be done by someone with root access to nodes.
* All containers in a Pod are terminated while `restartPolicy` is set to Always, forcing a restart, and the init container completion record has been lost due to garbage collection.
### Pod restart reasons
A Pod can restart, causing re-execution of init containers, for the following
reasons:
* A user updates the Pod specification, causing the init container image to change.
Any changes to the init container image restarts the Pod. App container image
changes only restart the app container.
* The Pod infrastructure container is restarted. This is uncommon and would
have to be done by someone with root access to nodes.
* All containers in a Pod are terminated while `restartPolicy` is set to Always,
forcing a restart, and the init container completion record has been lost due
to garbage collection.
-->
### Pod 重启的原因 {#pod-restart-reasons}
@ -471,7 +524,6 @@ Pod 重启会导致 Init 容器重新执行,主要有如下几个原因:
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/)
-->
* 阅读[创建包含 Init 容器的 Pod](/zh/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* 学习如何[调试 Init 容器](/zh/docs/tasks/debug-application-cluster/debug-init-containers/)

View File

@ -19,7 +19,8 @@ of its primary containers starts OK, and then through either the `Succeeded` or
Whilst a Pod is running, the kubelet is able to restart containers to handle some
kinds of faults. Within a Pod, Kubernetes tracks different container
[states](#container-states) and handles
[states](#container-states) and determines what action to take to make the Pod
healthy again.
-->
本页面讲述 Pod 的生命周期。
Pod 遵循一个预定义的生命周期,起始于 `Pending` [阶段](#pod-phase),如果至少
@ -28,7 +29,7 @@ Pod 遵循一个预定义的生命周期,起始于 `Pending` [阶段](#pod-pha
在 Pod 运行期间,`kubelet` 能够重启容器以处理一些失效场景。
在 Pod 内部Kubernetes 跟踪不同容器的[状态](#container-states)
处理可能出现的状况
确定使 Pod 重新变得健康所需要采取的动作
<!--
In the Kubernetes API, Pods have both a specification and an actual status. The
@ -88,7 +89,7 @@ Pod 自身不具有自愈能力。如果 Pod 被调度到某{{< glossary_tooltip
<!--
A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead,
that Pod can be replaced by a new, near-identical Pod, with even the same name i
that Pod can be replaced by a new, near-identical Pod, with even the same name if
desired, but with a different UID.
When something is said to have the same lifetime as a Pod, such as a
@ -193,7 +194,7 @@ Kubernetes 会跟踪 Pod 中每个容器的状态,就像它跟踪 Pod 总体
`Terminated`(已终止)。
<!--
To the check state of a Pod's containers, you can use
To check the state of a Pod's containers, you can use
`kubectl describe pod <name-of-pod>`. The output shows the state for each container
within that Pod.
@ -207,7 +208,7 @@ Each state has a specific meaning:
<!--
### `Waiting` {#container-state-waiting}
If a container is not in either the `Running` or `Terminated` state, it `Waiting`.
If a container is not in either the `Running` or `Terminated` state, it is `Waiting`.
A container in the `Waiting` state is still running the operations it requires in
order to complete start up: for example, pulling the container image from a container
image registry, or applying {{< glossary_tooltip text="Secret" term_id="secret" >}}
@ -228,23 +229,23 @@ Reason 字段,其中给出了容器处于等待状态的原因。
### `Running` {#container-state-running}
The `Running` status indicates that a container is executing without issues. If there
was a `postStart` hook configured, it has already executed and executed. When you use
was a `postStart` hook configured, it has already executed and finished. When you use
`kubectl` to query a Pod with a container that is `Running`, you also see information
about when the container entered the `Running` state.
-->
### `Running`(运行中) {#container-state-running}
`Running` 状态表明容器正在正常执行,且没有问题发生。
如果配置了 `postStart` 回调,那么该回调已经执行完成。
如果配置了 `postStart` 回调,那么该回调已经执行且已完成。
如果你使用 `kubectl` 来查询包含 `Running` 状态的容器的 Pod 时,你也会看到
关于容器进入 `Running` 状态的信息。
<!--
### `Terminated` {#container-state-terminated}
A container in the `Terminated` state has begin execution and has then either run to
completion or has failed for some reason. When you use `kubectl` to query a Pod with
a container that is `Terminated`, you see a reason, and exit code, and the start and
A container in the `Terminated` state began execution and then either ran to
completion or failed for some reason. When you use `kubectl` to query a Pod with
a container that is `Terminated`, you see a reason, an exit code, and the start and
finish time for that container's period of execution.
If a container has a `preStop` hook configured, that runs before the container enters
@ -268,8 +269,8 @@ and Never. The default value is Always.
The `restartPolicy` applies to all containers in the Pod. `restartPolicy` only
refers to restarts of the containers by the kubelet on the same node. After containers
in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s,
40s, …), that is capped at five minutes. Once a container has executed with no problems
for 10 minutes without any problems, the kubelet resets the restart backoff timer for
40s, …), that is capped at five minutes. Once a container has executed for 10 minutes
without any problems, the kubelet resets the restart backoff timer for
that container.
-->
## 容器重启策略 {#restart-policy}
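<!--
The back-off sequence described above can be sketched with plain shell (an
illustration only, not kubelet code):
-->
上述重启回退的时间序列可以用下面的 Shell 片段来示意(仅为演示,并非 kubelet 的实现):

```shell
# 每次重启后延迟翻倍,上限为 300 秒5 分钟)
delay=10
for restart in 1 2 3 4 5 6; do
  echo "restart #${restart}: back-off ${delay}s"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt 300 ]; then delay=300; fi
done
```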
@ -426,7 +427,8 @@ When a Pod's containers are Ready but at least one custom condition is missing o
## Container probes
A [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) is a diagnostic
performed periodically by the [kubelet](/docs/admin/kubelet/)
performed periodically by the
[kubelet](/docs/reference/command-line-tools-reference/kubelet/)
on a Container. To perform a diagnostic,
the kubelet calls a
[Handler](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core) implemented by
@ -434,10 +436,10 @@ the container. There are three types of handlers:
-->
## 容器探针 {#container-probes}
[探针](/zh/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
[Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
是由 [kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) 对容器执行的定期诊断。
要执行诊断kubelet 调用由容器实现的
[Handler](/zh/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core)
[Handler](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#handler-v1-core)
(处理程序)。有三种类型的处理程序:
<!--
@ -593,7 +595,7 @@ to stop.
-->
### 何时该使用启动探针? {#when-should-you-use-a-startup-probe}
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
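<!--
As a sketch (path and port are illustrative), a `startupProbe` gives a
slow-starting container up to `failureThreshold × periodSeconds`
(here 30 × 10 = 300s) to come up, after which the regular `livenessProbe`
takes over:
-->
作为示意(路径与端口均为示例值),`startupProbe` 允许启动较慢的容器最多有
`failureThreshold × periodSeconds`(此处为 30 × 10 = 300 秒)的时间完成启动,
之后再由常规的 `livenessProbe` 接管:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 1
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
```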
<!--
Startup probes are useful for Pods that have containers that take a long time to
@ -647,14 +649,17 @@ shutdown.
Pod。
<!--
Typically, the container runtime sends a a TERM signal is sent to the main process in each
container. Once the grace period has expired, the KILL signal is sent to any remainig
Typically, the container runtime sends a TERM signal to the main process in each
container. Many container runtimes respect the `STOPSIGNAL` value defined in the container
image and send this instead of TERM.
Once the grace period has expired, the KILL signal is sent to any remaining
processes, and the Pod is then deleted from the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}. If the kubelet or the
container runtime's management service is restarted while waiting for processes to terminate, the
cluster retries from the start including the full original grace period.
-->
通常情况下,容器运行时会发送一个 TERM 信号到每个容器中的主进程。
很多容器运行时都能够注意到容器镜像中 `STOPSIGNAL` 的值,并发送该信号而不是 TERM。
一旦超出了体面终止限期,容器运行时会向所有剩余进程发送 KILL 信号,之后
Pod 就会被从 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}
上移除。如果 `kubelet` 或者容器运行时的管理服务在等待进程终止期间被重启,
@ -666,9 +671,9 @@ An example flow:
1. You use the `kubectl` tool to manually delete a specific Pod, with the default grace period
(30 seconds).
1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead"
along with the grace period.
along with the grace period.
If you use `kubectl describe` to check on the Pod you're deleting, that Pod shows up as
"Terminating".
"Terminating".
On the node where the Pod is running: as soon as the kubelet sees that a Pod has been marked
as terminating (a graceful shutdown duration has been set), the kubelet begins the local Pod
shutdown process.
@ -737,7 +742,7 @@ An example flow:
`SIGKILL` to any processes still running in any container in the Pod.
The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
1. The kubelet triggers forcible removal of Pod object from the API server, by setting grace period
to 0 (immediate deletion).
to 0 (immediate deletion).
1. The API server deletes the Pod's API object, which is then no longer visible from any client.
-->
4. 超出终止宽限期线时,`kubelet` 会触发强制关闭过程。容器运行时会向 Pod 中所有容器内
@ -745,14 +750,14 @@ An example flow:
`kubelet` 也会清理隐藏的 `pause` 容器,如果容器运行时使用了这种容器的话。
5. `kubelet` 触发强制从 API 服务器上删除 Pod 对象的逻辑,并将体面终止限期设置为 0
(这意味着马上删除)。
(这意味着马上删除)。
6. API 服务器删除 Pod 的 API 对象,从任何客户端都无法再看到该对象。
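<!--
For example (the Pod name is illustrative), you can lengthen the grace period
when deleting:
-->
例如Pod 名称为示例值),可以在删除时指定更长的体面终止限期:

```shell
# 以 60 秒(而非默认的 30 秒)的体面终止限期删除 Pod
kubectl delete pod myapp-pod --grace-period=60
```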
<!--
### Forced Pod termination {#pod-termination-forced}
Forced deletions can be potentially disruptiove for some workloads and their Pods.
Forced deletions can be potentially disruptive for some workloads and their Pods.
By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports
the `--grace-period=<seconds>` option which allows you to override the default and specify your
@ -850,4 +855,3 @@ and
可参阅 [PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core)
和 [ContainerStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core)。

View File

@ -1,64 +1,66 @@
---
title: Pod 拓扑扩展约束
title: Pod 拓扑分布约束
content_type: concept
weight: 50
weight: 40
---
<!--
title: Pod Topology Spread Constraints
content_type: concept
weight: 50
weight: 40
-->
<!-- overview -->
{{< feature-state for_k8s_version="v1.19" state="stable" >}}
<!-- leave this shortcode in place until the note about EvenPodsSpread is
obsolete -->
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
<!-- overview -->
<!--
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
-->
你可以使用 _拓扑分布约束Topology Spread Constraints_ 来控制
{{< glossary_tooltip text="Pods" term_id="Pod" >}} 在集群内故障域
之间的分布例如区域Region、可用区Zone、节点和其他用户自定义拓扑域。
这样做有助于实现高可用并提升资源利用率。
可以使用*拓扑扩展约束*来控制 {{< glossary_tooltip text="Pods" term_id="Pod" >}} 在集群内故障域(例如地区,区域,节点和其他用户自定义拓扑域)之间的分布。这可以帮助实现高可用以及提升资源利用率。
<!--
{{< note >}}
In versions of Kubernetes before v1.19, you must enable the `EvenPodsSpread`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on
the [API server](/docs/concepts/overview/components/#kube-apiserver) and the
[scheduler](/docs/reference/generated/kube-scheduler/) in order to use Pod
topology spread constraints.
{{< /note >}}
-->
{{< note >}}
在 v1.19 之前的 Kubernetes 版本中,如果要使用 Pod 拓扑扩展约束,你必须在 [API 服务器](/zh/docs/concepts/overview/components/#kube-apiserver)
和[调度器](/zh/docs/reference/command-line-tools-reference/kube-scheduler/)
中启用 `EvenPodsSpread` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
{{< /note >}}
<!-- body -->
<!--
## Prerequisites
-->
## 先决条件
<!--
### Enable Feature Gate
-->
### 启用功能
<!--
Ensure the `EvenPodsSpread` feature gate is enabled (it is disabled by default
in 1.16). See [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/)
for an explanation of enabling feature gates. The `EvenPodsSpread` feature gate must be enabled for the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and**
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}.
-->
确保 `EvenPodsSpread` 功能已开启(在 1.16 版本中该功能默认关闭)。
阅读[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)了解如何开启该功能。
`EvenPodsSpread` 必须在 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}} **和**
{{< glossary_tooltip text="调度器" term_id="kube-scheduler" >}} 中都开启。
<!--
### Node Labels
-->
### 节点标签
## 先决条件 {#prerequisites}
### 节点标签 {#node-labels}
<!--
Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in. For example, a Node might have labels: `node=node1,zone=us-east-1a,region=us-east-1`
-->
拓扑扩展约束依赖于节点标签来标识每个节点所在的拓扑域。例如,一个节点可能具有标签:`node=node1,zone=us-east-1a,region=us-east-1`
拓扑分布约束依赖于节点标签来标识每个节点所在的拓扑域。
例如,某节点可能具有标签:`node=node1,zone=us-east-1a,region=us-east-1`
<!--
Suppose you have a 4-node cluster with the following labels:
-->
假设你拥有一个具有以下标签的 4 节点集群:
假设你拥有具有以下标签的一个 4 节点集群:
```
NAME STATUS ROLES AGE VERSION LABELS
@ -73,28 +75,40 @@ Then the cluster is logically viewed as below:
-->
然后从逻辑上看集群如下:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
<!--
Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/kubernetes-api/labels-annotations-taints/) that are created and populated automatically on most clusters.
-->
可以复用在大多数集群上自动创建和填充的[常用标签](/zh/docs/reference/kubernetes-api/labels-annotations-taints/),而不是手动添加标签。
你可以复用在大多数集群上自动创建和填充的
[常用标签](/zh/docs/reference/kubernetes-api/labels-annotations-taints/)
而不是手动添加标签。
<!--
## Spread Constraints for Pods
-->
## Pod 的拓扑约束
## Pod 的分布约束 {#spread-constraints-for-pods}
### API
<!--
The field `pod.spec.topologySpreadConstraints` is introduced in 1.16 as below:
The API field `pod.spec.topologySpreadConstraints` is defined as below:
-->
`pod.spec.topologySpreadConstraints` 字段定义如下所示:
@ -114,10 +128,19 @@ spec:
<!--
You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
-->
可以定义一个或多个 `topologySpreadConstraint` 来指示 kube-scheduler 如何将每个传入的 Pod 根据与现有的 Pod 的关联关系在集群中部署。字段包括:
你可以定义一个或多个 `topologySpreadConstraint` 来指示 kube-scheduler
如何根据与现有的 Pod 的关联关系将每个传入的 Pod 部署到集群中。字段包括:
<!--
- **maxSkew** describes the degree to which Pods may be unevenly distributed. It's the maximum permitted difference between the number of matching Pods in any two topology domains of a given topology type. It must be greater than zero.
- **maxSkew** describes the degree to which Pods may be unevenly distributed.
It's the maximum permitted difference between the number of matching Pods in
any two topology domains of a given topology type. It must be greater than
zero. Its semantics differs according to the value of `whenUnsatisfiable`:
- when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum
permitted difference between the number of matching pods in the target
topology and the global minimum.
- when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher
precedence to topologies that would help reduce the skew.
- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
- `DoNotSchedule` (default) tells the scheduler not to schedule it.
@ -125,147 +148,220 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
-->
- **maxSkew** 描述 pod 分布不均的程度。这是给定拓扑类型中任意两个拓扑域中匹配的 pod 之间的最大允许差值。它必须大于零。
- **topologyKey** 是节点标签的键。如果两个节点使用此键标记并且具有相同的标签值,则调度器会将这两个节点视为处于同一拓扑中。调度器试图在每个拓扑域中放置数量均衡的 pod。
- **whenUnsatisfiable** 指示如果 pod 不满足扩展约束时如何处理:
- `DoNotSchedule`(默认)告诉调度器不用进行调度。
- `ScheduleAnyway` 告诉调度器在对最小化倾斜的节点进行优先级排序时仍对其进行调度。
- **labelSelector** 用于查找匹配的 pod。匹配此标签的 pod 将被统计,以确定相应拓扑域中 pod 的数量。
- **maxSkew** 描述 Pod 分布不均的程度。这是给定拓扑类型中任意两个拓扑域中
匹配的 pod 之间的最大允许差值。它必须大于零。取决于 `whenUnsatisfiable`
取值,其语义会有不同。
- 当 `whenUnsatisfiable` 等于 "DoNotSchedule" 时,`maxSkew` 是目标拓扑域
中匹配的 Pod 数与全局最小值之间可存在的差异。
- 当 `whenUnsatisfiable` 等于 "ScheduleAnyway" 时,调度器会更为偏向能够降低
偏差值的拓扑域。
- **topologyKey** 是节点标签的键。如果两个节点使用此键标记并且具有相同的标签值,
则调度器会将这两个节点视为处于同一拓扑域中。调度器试图在每个拓扑域中放置数量
均衡的 Pod。
- **whenUnsatisfiable** 指示如果 Pod 不满足分布约束时如何处理:
- `DoNotSchedule`(默认)告诉调度器不要调度。
- `ScheduleAnyway` 告诉调度器仍然继续调度,只是根据如何能将偏差最小化来对
节点进行排序。
- **labelSelector** 用于查找匹配的 Pod。匹配此标签的 Pod 将被统计,以确定相应
拓扑域中 Pod 的数量。
有关详细信息,请参考[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)。
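<!--
As a sketch, a Pod that defines a single constraint using the fields above might
look like this (labels and image are illustrative):
-->
作为示意,使用上述字段定义单个约束的 Pod 大致如下(标签与镜像均为示例值):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```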
<!--
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
-->
执行 `kubectl explain Pod.spec.topologySpreadConstraints` 命令了解更多关于 topologySpreadConstraints 的信息。
你可以执行 `kubectl explain Pod.spec.topologySpreadConstraints` 命令以
了解关于 topologySpreadConstraints 的更多信息。
<!--
### Example: One TopologySpreadConstraint
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
-->
### 例子:单个拓扑扩展约束
### 例子:单个 TopologySpreadConstraint
假设你拥有一个 4 节点集群,其中标记为 `foo:bar` 的 3 个 pod 分别位于 node1node2 和 node3 中(`P` 表示 pod
假设你拥有一个 4 节点集群,其中标记为 `foo:bar` 的 3 个 Pod 分别位于
node1、node2 和 node3 中:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
| P | P | P | |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
<!--
If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:
-->
如果希望传入的 pod 均匀散布在现有的 pod 区域,则可以指定字段如下:
如果希望新来的 Pod 均匀分布在现有的可用区域,则可以按如下设置其规约
{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
<!--
`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:&lt;any value&gt;" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod cant satisfy the constraint.
-->
`topologyKey: zone` 意味着均匀分布将只应用于存在标签对为 "zone:&lt;any value&gt;" 的节点上。
`whenUnsatisfiable: DoNotSchedule` 告诉调度器,如果传入的 pod 不满足约束,则让它保持悬决状态。
`topologyKey: zone` 意味着均匀分布将只应用于存在标签键值对为
"zone:&lt;any value&gt;" 的节点。
`whenUnsatisfiable: DoNotSchedule` 告诉调度器如果新的 Pod 不满足约束,
则让它保持悬决状态。
<!--
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1],
hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
-->
如果调度器将新的 Pod 放入 "zoneA"Pods 分布将变为 [3, 1],因此实际的偏差
为 23 - 1。这违反了 `maxSkew: 1` 的约定。此示例中,新 Pod 只能放置在
"zoneB" 上:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
或者
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n3
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
<!--
You can tweak the Pod spec to meet various kinds of requirements:
-->
你可以调整 Pod 规约以满足各种要求:
<!--
- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed onto "zoneA" as well.
- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4".
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, its preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preferability is jointly normalized with other internal scheduling priorities like resource usage ratio, etc.)
-->
- 将 `maxSkew` 更改为更大的值,比如 "2",这样新的 Pod 也可以放在 "zoneA" 上。
- 将 `topologyKey` 更改为 "node",以便将 Pod 均匀分布在节点上而不是区域中。
在上面的例子中,如果 `maxSkew` 保持为 "1",那么传入的 Pod 只能放在 "node4" 上。
- 将 `whenUnsatisfiable: DoNotSchedule` 更改为 `whenUnsatisfiable: ScheduleAnyway`
以确保新的 Pod 始终可以被调度(假设满足其他的调度 API
但是,最好将其放置在匹配 Pod 数量较少的拓扑域中。
(请注意,这一优先判定会与其他内部调度优先级(如资源使用率等)排序准则一起进行标准化。)
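例如,下面这个示意性的片段(字段取值仅作演示,容器名称和镜像为假设值)同时应用了前两处调整:将 `maxSkew` 提高为 2并把约束改为软性的 `ScheduleAnyway`

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 2                         # 允许更大的偏差,新 Pod 也可以进入 "zoneA"
    topologyKey: zone
    whenUnsatisfiable: ScheduleAnyway  # 无法满足约束时仍可调度,约束退化为软性偏好
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```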
<!--
### Example: Multiple TopologySpreadConstraints
-->
### 例子:多个 TopologySpreadConstraints
<!--
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
-->
下面的例子建立在前面例子的基础上。假设你拥有一个 4 节点集群,其中 3 个标记为 `foo:bar`
Pod 分别位于 node1、node2 和 node3 上
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
<!--
You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node:
-->
可以使用 2 个 TopologySpreadConstraint 来控制 Pod 在区域和节点两个维度上的分布:
{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
<!--
In this case, to match the first constraint, the incoming Pod can only be placed onto "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". Then the results of 2 constraints are ANDed, so the only viable option is to place on "node4".
-->
在这种情况下,为了匹配第一个约束,新的 Pod 只能放置在 "zoneB" 中;而在第二个约束中,
新的 Pod 只能放置在 "node4" 上。最后两个约束的结果加在一起,唯一可行的选择是放置
在 "node4" 上。
<!--
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
-->
多个约束之间可能存在冲突。假设有一个跨越 2 个区域的 3 节点集群:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p4(Pod) --> n3(Node3)
p5(Pod) --> n3
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n1
p3(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
<!--
If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.
-->
如果对集群应用 "two-constraints.yaml",会发现 "mypod" 处于 `Pending` 状态。
这是因为:为了满足第一个约束,"mypod" 只能放在 "zoneB" 中,而第二个约束要求
"mypod" 只能放在 "node2" 上。Pod 调度无法满足两种约束。
<!--
To overcome this situation, you can either increase the `maxSkew` or modify one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`.
-->
为了克服这种情况,可以增加 `maxSkew` 或修改其中一个约束,让其使用
`whenUnsatisfiable: ScheduleAnyway`
<!--
There are some implicit conventions worth noting here:
-->
### 约定 {#conventions}
这里有一些值得注意的隐式约定:
<!--
- Only the Pods holding the same namespace as the incoming Pod can be matching candidates.
- Nodes without `topologySpreadConstraints[*].topologyKey` present will be bypassed. It implies that:
1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incomingPod will be scheduled into "zoneA".
2. the incoming Pod has no chances to be scheduled onto this kind of nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone".
-->
- 只有与新的 Pod 具有相同命名空间的 Pod 才能作为匹配候选者。
- 没有 `topologySpreadConstraints[*].topologyKey` 的节点将被忽略。这意味着:
1. 位于这些节点上的 Pod 不影响 `maxSkew` 的计算。
在上面的例子中,假设 "node1" 没有标签 "zone",那么 2 个 Pod 将被忽略,
因此传入的 Pod 将被调度到 "zoneA" 中。
2. 新的 Pod 没有机会被调度到这类节点上。
在上面的例子中,假设一个带有标签 `{zone-typo: zoneC}` 的 "node5" 加入到集群,
它将由于没有标签键 "zone" 而被忽略。
<!--
- Be aware of what will happen if the incomingPods `topologySpreadConstraints[*].labelSelector` doesnt match its own labels. In the above example, if we remove the incoming Pods labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - its still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workloads `topologySpreadConstraints[*].labelSelector` to match its own labels.
-->
- 注意,如果新 Pod 的 `topologySpreadConstraints[*].labelSelector` 与自身的
标签不匹配,将会发生什么。
在上面的例子中,如果移除新 Pod 上的标签Pod 仍然可以调度到 "zoneB",因为约束仍然满足。
然而在调度之后集群的不平衡程度保持不变。zoneA 仍然有 2 个带有 {foo:bar} 标签的 Pod
zoneB 有 1 个带有 {foo:bar} 标签的 Pod。
因此,如果这不是你所期望的,建议工作负载的 `topologySpreadConstraints[*].labelSelector`
与其自身的标签匹配。
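为说明这一建议,下面的示意片段(字段取值仅作演示,容器名称和镜像为假设值)中Pod 自身的标签与其 `topologySpreadConstraints[*].labelSelector` 保持一致,这样该 Pod 被调度之后也会被计入偏差计算:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar                 # Pod 自身携带 foo:bar 标签
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar             # 选择算符与自身标签匹配
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```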
<!--
- If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed.
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
-->
- 如果新 Pod 定义了 `spec.nodeSelector``spec.affinity.nodeAffinity`,则
不匹配的节点会被忽略。
假设你有一个跨越 zoneA 到 zoneC 的 5 节点集群:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
{{<mermaid>}}
graph BT
subgraph "zoneC"
n5(Node5)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n5 k8s;
class zoneC cluster;
{{< /mermaid >}}
<!--
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
-->
而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 yaml
以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector`
也要一样处理。
{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
<!--
### Cluster-level default constraints
It is possible to set default topology spread constraints for a cluster. Default
topology spread constraints are applied to a Pod if, and only if:
- It doesn't define any constraints in its `.spec.topologySpreadConstraints`.
- It belongs to a service, replication controller, replica set or stateful set.
-->
### 集群级别的默认约束 {#cluster-level-default-constraints}
为集群设置默认的拓扑分布约束也是可能的。默认拓扑分布约束在且仅在以下条件满足
时才会应用到 Pod 上:
- Pod 没有在其 `.spec.topologySpreadConstraints` 设置任何约束;
- Pod 隶属于某个服务、副本控制器、ReplicaSet 或 StatefulSet。
<!--
Default constraints can be set as part of the `PodTopologySpread` plugin args
in a [scheduling profile](/docs/reference/scheduling/config/#profiles).
The constraints are specified with the same [API above](#api), except that
`labelSelector` must be empty. The selectors are calculated from the services,
replication controllers, replica sets or stateful sets that the Pod belongs to.
An example configuration might look like follows:
-->
你可以在[调度方案Scheduling Profile](/zh/docs/reference/scheduling/config/#profiles)
中将默认约束作为 `PodTopologySpread` 插件参数的一部分来设置。
约束的设置采用[如前所述的 API](#api),只是 `labelSelector` 必须为空。
选择算符是根据 Pod 所属的服务、副本控制器、ReplicaSet 或 StatefulSet 来设置的。
配置的示例可能看起来像下面这个样子:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
- name: PodTopologySpread
args:
defaultConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
```
{{< note >}}
<!--
The score produced by default scheduling constraints might conflict with the
score produced by the
[`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins).
It is recommended that you disable this plugin in the scheduling profile when
using default constraints for `PodTopologySpread`.
-->
默认调度约束所生成的评分可能与
[`SelectorSpread` 插件](/zh/docs/reference/scheduling/config/#scheduling-plugins)
所生成的评分有冲突。
建议你在为 `PodTopologySpread` 设置默认约束时,禁用调度方案中的该插件。
{{< /note >}}
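作为示意,下面的配置片段(仅作演示)在为 `PodTopologySpread` 设置默认约束的同时,在同一调度方案中禁用了 `SelectorSpread` 插件的评分:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
    plugins:
      score:
        disabled:
          - name: SelectorSpread   # 避免两个插件的评分相互冲突
```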
<!--
#### Internal default constraints
-->
#### 内部默认约束 {#internal-default-constraints}
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
<!--
When you enable the `DefaultPodTopologySpread` feature gate, the
legacy `SelectorSpread` plugin is disabled.
kube-scheduler uses the following default topology constraints for the
`PodTopologySpread` plugin configuration:
-->
当你启用了 `DefaultPodTopologySpread` 特性门控时,原来的
`SelectorSpread` 插件会被禁用。
kube-scheduler 会使用下面的默认拓扑约束作为 `PodTopologySpread` 插件的
配置:
```yaml
defaultConstraints:
- maxSkew: 3
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
- maxSkew: 5
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
```
<!--
Also, the legacy `SelectorSpread` plugin, which provides an equivalent behavior,
is disabled.
-->
此外,原来用于提供等同行为的 `SelectorSpread` 插件也会被禁用。
<!--
If your nodes are not expected to have **both** `kubernetes.io/hostname` and
`topology.kubernetes.io/zone` labels set, define your own constraints
instead of using the Kubernetes defaults.
The `PodTopologySpread` plugin does not score the nodes that don't have
the topology keys specified in the spreading constraints.
-->
{{< note >}}
如果你的节点不会 **同时** 设置 `kubernetes.io/hostname`
`topology.kubernetes.io/zone` 标签,你应该定义自己的约束而不是使用
Kubernetes 的默认约束。
插件 `PodTopologySpread` 不会为未设置分布约束中所给拓扑键的节点评分。
{{< /note >}}
<!--
## Comparison with PodAffinity/PodAntiAffinity
In Kubernetes, directives related to "Affinity" control how Pods are
scheduled - more packed or more scattered.
-->
## 与 PodAffinity/PodAntiAffinity 相比较
在 Kubernetes 中,与“亲和性”相关的指令控制 Pod 的调度方式(更密集或更分散)。
<!--
- For `PodAffinity`, you can try to pack any number of Pods into qualifying
  topology domain(s)
- For `PodAntiAffinity`, only one Pod can be scheduled into a
single topology domain.
-->
- 对于 `PodAffinity`你可以尝试将任意数量的 Pod 集中到符合条件的拓扑域中。
- 对于 `PodAntiAffinity`,只能将一个 Pod 调度到某个拓扑域中。
<!--
For finer control, you can specify topology spread constraints to distribute
Pods across different topology domains - to achieve either high availability or
cost-saving. This can also help on rolling update workloads and scaling out
replicas smoothly. See
[Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation)
for more details.
-->
要实现更细粒度的控制,你可以设置拓扑分布约束来将 Pod 分布到不同的拓扑域下,
从而实现高可用性或节省成本。这也有助于工作负载的滚动更新和平稳地扩展副本规模。
有关详细信息,请参考
[动机](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation)文档。
<!--
## Known Limitations
-->
<!--
- Scaling down a Deployment may result in imbalanced Pods distribution.
- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)
-->
## 已知局限性
- Deployment 缩容操作可能导致 Pod 分布不平衡。
- 具有污点的节点上的 Pods 也会被统计。
参考 [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)。
## {{% heading "whatsnext" %}}
<!--
- [Blog: Introducing PodTopologySpread](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/)
explains `maxSkew` in details, as well as bringing up some advanced usage examples.
-->
- [博客PodTopologySpread 介绍](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/)
详细解释了 `maxSkew`,并给出了一些高级的使用示例。
@@ -226,6 +226,26 @@ Please check [installation caveats](https://acme.com/docs/v1/caveats) ...
英文排比句式中采用的逗号,在译文中要使用顿号代替,符合中文书写习惯。
## 更新译文
由于整个文档站点会随着 Kubernetes 项目的开发进展而演化,英文版本的网站内容
会不断更新。鉴于中文站点的基本翻译工作在 1.19 版本已完成,从 1.20 版本开始
本地化的工作会集中在追踪英文内容变化上。
为确保准确跟踪中文化版本与英文版本之间的差异,中文内容的 PR 所包含的每个页面
都必须是“最新的”。这里的“最新”指的是对应的英文页面中的更改已全部同步到中文页面。
如果某中文 PR 中包含对 `content/zh/docs/foo/bar.md` 的更改,且文件 `bar.md`
上次更改日期是 `2020-10-01 01:02:03 UTC`,对应 GIT 标签 `abcd1234`,则
`bar.md` 应包含自 `abcd1234` 以来 `content/en/docs/foo/bar.md` 的所有变更,
否则视此 PR 为不完整 PR会破坏我们对上游变更的跟踪。
这一要求适用于所有更改,包括拼写错误、格式更正、链接修订等等。要查看文件
`bar.md` 上次提交以来发生的所有变更,可使用:
```
./scripts/lsync.sh content/zh/docs/foo/bar.md
```
## 关于链接
### 链接锚点
@@ -700,7 +700,7 @@ Setup instructions for specific systems:
特定系统的安装指令:
- [UAA](https://docs.cloudfoundry.org/concepts/architecture/uaa.html)
- [Dex](https://dexidp.io/docs/kubernetes/)
- [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/)
<!--
@@ -99,23 +99,18 @@ different Kubernetes components.
| 特性 | 默认值 | 状态 | 开始(Since) | 结束(Until) |
|---------|---------|-------|-------|-------|
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
| `APIListChunking` | `false` | Alpha | 1.8 | 1.8 |
| `APIListChunking` | `true` | Beta | 1.9 | |
| `APIPriorityAndFairness` | `false` | Alpha | 1.17 | |
| `APIResponseCompression` | `false` | Alpha | 1.7 | |
| `AppArmor` | `true` | Beta | 1.4 | |
| `BalanceAttachedNodeVolumes` | `false` | Alpha | 1.11 | |
| `BoundServiceAccountTokenVolume` | `false` | Alpha | 1.13 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
| `CRIContainerLogRotation` | `true` | Beta| 1.11 | |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `CSIInlineVolume` | `true` | Beta | 1.16 | - |
| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
@@ -123,7 +118,8 @@ different Kubernetes components.
| `CSIMigrationAWS` | `false` | Alpha | 1.14 | |
| `CSIMigrationAWS` | `false` | Beta | 1.17 | |
| `CSIMigrationAWSComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 |
| `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | |
| `CSIMigrationAzureDiskComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | |
| `CSIMigrationAzureFileComplete` | `false` | Alpha | 1.17 | |
@@ -132,18 +128,27 @@ different Kubernetes components.
| `CSIMigrationGCEComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | |
| `CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationvSphere` | `false` | Beta | 1.19 | |
| `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | |
| `CSIStorageCapacity` | `false` | Alpha | 1.19 | |
| `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | |
| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | |
| `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | |
| `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 |
| `CustomResourceDefaulting` | `true` | Beta | 1.16 | |
| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | |
| `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 |
| `DevicePlugins` | `true` | Beta | 1.10 | |
| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.20 |
| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
| `DryRun` | `true` | Beta | 1.13 | |
| `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 |
| `DynamicKubeletConfig` | `true` | Beta | 1.11 | |
| `EndpointSlice` | `false` | Alpha | 1.16 | 1.16 |
| `EndpointSlice` | `false` | Beta | 1.17 | |
| `EndpointSlice` | `true` | Beta | 1.18 | |
| `EndpointSliceProxying` | `false` | Alpha | 1.18 | 1.18 |
| `EndpointSliceProxying` | `true` | Beta | 1.19 | |
| `EphemeralContainers` | `false` | Alpha | 1.16 | |
| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
| `ExpandCSIVolumes` | `true` | Beta | 1.16 | |
@@ -152,9 +157,14 @@ different Kubernetes components.
| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 |
| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | |
| `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
| `GenericEphemeralVolume` | `false` | Alpha | 1.19 | |
| `HPAScaleToZero` | `false` | Alpha | 1.16 | |
| `HugePageStorageMediumSize` | `false` | Alpha | 1.18 | 1.18 |
| `HugePageStorageMediumSize` | `true` | Beta | 1.19 | |
| `HyperVContainer` | `false` | Alpha | 1.10 | |
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
| `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | |
| `IPv6DualStack` | `false` | Alpha | 1.16 | |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `LegacyNodeRoleBehavior` | `true` | Alpha | 1.16 | |
@@ -162,36 +172,41 @@ different Kubernetes components.
| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | |
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | |
| `MountContainers` | `false` | Alpha | 1.9 | |
| `NodeDisruptionExclusion` | `false` | Alpha | 1.16 | 1.18 |
| `NodeDisruptionExclusion` | `true` | Beta | 1.19 | |
| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |
| `NonPreemptingPriority` | `true` | Beta | 1.19 | |
| `PodDisruptionBudget` | `false` | Alpha | 1.3 | 1.4 |
| `PodDisruptionBudget` | `true` | Beta | 1.5 | |
| `PodOverhead` | `false` | Alpha | 1.16 | - |
| `ProcMountType` | `false` | Alpha | 1.12 | |
| `QOSReserved` | `false` | Alpha | 1.11 | |
| `RemainingItemCount` | `false` | Alpha | 1.15 | |
| `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 |
| `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | |
| `RunAsGroup` | `true` | Beta | 1.14 | |
| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 |
| `RuntimeClass` | `true` | Beta | 1.14 | |
| `SCTPSupport` | `false` | Alpha | 1.12 | 1.18 |
| `SCTPSupport` | `true` | Beta | 1.19 | |
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `ServerSideApply` | `true` | Beta | 1.16 | |
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 |
| `ServiceAppProtocol` | `true` | Beta | 1.19 | |
| `ServiceNodeExclusion` | `false` | Alpha | 1.8 | 1.18 |
| `ServiceNodeExclusion` | `true` | Beta | 1.19 | |
| `ServiceTopology` | `false` | Alpha | 1.17 | |
| `SetHostnameAsFQDN` | `false` | Alpha | 1.19 | |
| `StartupProbe` | `false` | Alpha | 1.16 | 1.17 |
| `StartupProbe` | `true` | Beta | 1.18 | |
| `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 |
| `StorageVersionHash` | `true` | Beta | 1.15 | |
| `SupportNodePidsLimit` | `false` | Alpha | 1.14 | 1.14 |
| `SupportNodePidsLimit` | `true` | Beta | 1.15 | |
| `SupportPodPidsLimit` | `false` | Alpha | 1.10 | 1.13 |
| `SupportPodPidsLimit` | `true` | Beta | 1.14 | |
| `Sysctls` | `true` | Beta | 1.11 | |
| `TokenRequest` | `false` | Alpha | 1.10 | 1.11 |
| `TokenRequest` | `true` | Beta | 1.12 | |
| `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 |
@@ -200,10 +215,9 @@ different Kubernetes components.
| `TopologyManager` | `false` | Alpha | 1.16 | |
| `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |
| `ValidateProxyRedirects` | `true` | Beta | 1.14 | |
| `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | 1.16 |
| `VolumeSnapshotDataSource` | `true` | Beta | 1.17 | - |
| `WindowsEndpointSliceProxying` | `false` | Alpha | 1.19 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
@@ -236,6 +250,15 @@ different Kubernetes components.
| `AffinityInAnnotations` | - | Deprecated | 1.8 | - |
| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 |
| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - |
| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 |
| `BlockVolume` | `true` | Beta | 1.13 | 1.17 |
| `BlockVolume` | `true` | GA | 1.18 | - |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 |
| `CSIBlockVolume` | `true` | Beta | 1.14 | 1.17 |
| `CSIBlockVolume` | `true` | GA | 1.18 | - |
| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 |
| `CSIDriverRegistry` | `true` | Beta | 1.14 | 1.17 |
| `CSIDriverRegistry` | `true` | GA | 1.18 | |
| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 |
| `CSINodeInfo` | `true` | Beta | 1.14 | 1.16 |
| `CSINodeInfo` | `true` | GA | 1.17 | |
@@ -260,6 +283,8 @@ different Kubernetes components.
| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 |
| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 |
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - |
| `DynamicAuditing` | `false` | Alpha | 1.13 | 1.18 |
| `DynamicAuditing` | - | Deprecated | 1.19 | - |
| `DynamicProvisioningScheduling` | `false` | Alpha | 1.11 | 1.11 |
| `DynamicProvisioningScheduling` | - | Deprecated| 1.12 | - |
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 |
@@ -268,6 +293,9 @@ different Kubernetes components.
| `EnableEquivalenceClassCache` | - | Deprecated | 1.15 | - |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 |
| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated | 1.13 | - |
| `EvenPodsSpread` | `false` | Alpha | 1.16 | 1.17 |
| `EvenPodsSpread` | `true` | Beta | 1.18 | 1.18 |
| `EvenPodsSpread` | `true` | GA | 1.19 | - |
| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 |
| `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - |
| `HugePages` | `false` | Alpha | 1.8 | 1.9 |
@@ -301,9 +329,13 @@ different Kubernetes components.
| `PVCProtection` | `false` | Alpha | 1.9 | 1.9 |
| `PVCProtection` | - | Deprecated | 1.10 | - |
| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 |
| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | 1.18 |
| `ResourceLimitsPriorityFunction` | - | Deprecated | 1.19 | - |
| `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 |
| `ResourceQuotaScopeSelectors` | `true` | Beta | 1.12 | 1.16 |
| `ResourceQuotaScopeSelectors` | `true` | GA | 1.17 | - |
| `RotateKubeletClientCertificate` | `true` | Beta | 1.8 | 1.18 |
| `RotateKubeletClientCertificate` | `true` | GA | 1.19 | - |
| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 |
| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | 1.16 |
| `ScheduleDaemonSetPods` | `true` | GA | 1.17 | - |
@@ -312,13 +344,22 @@ different Kubernetes components.
| `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | - |
| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 |
| `StorageObjectInUseProtection` | `true` | GA | 1.11 | - |
| `StreamingProxyRedirects` | `false` | Beta | 1.5 | 1.5 |
| `StreamingProxyRedirects` | `true` | Beta | 1.6 | 1.18 |
| `StreamingProxyRedirects` | - | Deprecated| 1.19 | - |
| `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 |
| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 |
| `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 |
| `SupportIPVSProxyMode` | `true` | GA | 1.11 | - |
| `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 |
| `TaintBasedEvictions` | `true` | Beta | 1.13 | 1.17 |
| `TaintBasedEvictions` | `true` | GA | 1.18 | - |
| `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 |
| `TaintNodesByCondition` | `true` | Beta | 1.12 | 1.16 |
| `TaintNodesByCondition` | `true` | GA | 1.17 | - |
| `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 |
| `VolumePVCDataSource` | `true` | Beta | 1.16 | 1.17 |
| `VolumePVCDataSource` | `true` | GA | 1.18 | - |
| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 |
| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 |
| `VolumeScheduling` | `true` | GA | 1.13 | - |
@@ -329,6 +370,12 @@ different Kubernetes components.
| `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 |
| `WatchBookmark` | `true` | Beta | 1.16 | 1.16 |
| `WatchBookmark` | `true` | GA | 1.17 | - |
| `WindowsGMSA` | `false` | Alpha | 1.14 | 1.15 |
| `WindowsGMSA` | `true` | Beta | 1.16 | 1.17 |
| `WindowsGMSA` | `true` | GA | 1.18 | - |
| `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 |
| `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 |
| `WindowsRunAsUserName` | `true` | GA | 1.18 | - |
{{< /table >}}
@@ -430,8 +477,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
<!--
- `Accelerators`: Enable Nvidia GPU support when using Docker
- `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug-application-cluster/audit/#advanced-audit)
- `AffinityInAnnotations`(*deprecated*): Enable setting [Pod affinity or anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
- `AllowExtTrafficLocalEndpoints`: Enable a service to route external requests to node local endpoints.
- `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
{{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `APIListChunking`: Enable the API clients to retrieve (`LIST` or `GET`) resources from API server in chunks.
- `APIPriorityAndFairness`: Enable managing request concurrency with prioritization and fairness at each server. (Renamed from `RequestManagement`)
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
@@ -440,14 +489,15 @@ Each feature gate is designed for enabling/disabling a specific feature:
-->
- `Accelerators`:使用 Docker 时启用 Nvidia GPU 支持。
- `AdvancedAuditing`:启用[高级审查功能](/docs/tasks/debug-application-cluster/audit/#advanced-audit)。
- `AffinityInAnnotations` *已弃用* ):启用 [Pod 亲和力或反亲和力](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)。
- `AdvancedAuditing`:启用[高级审计功能](/zh/docs/tasks/debug-application-cluster/audit/#advanced-audit)。
- `AffinityInAnnotations` *已弃用* ):启用 [Pod 亲和或反亲和](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)。
- `AllowExtTrafficLocalEndpoints`:启用服务用于将外部请求路由到节点本地终端。
- `AnyVolumeDataSource`:允许使用任何自定义资源来作为 {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}} 中的 `DataSource`。
- `APIListChunking`:启用 API 客户端以分块形式从 API 服务器检索(`LIST` 或 `GET`)资源。
- `APIPriorityAndFairness`: Enable managing request concurrency with prioritization and fairness at each server. (Renamed from `RequestManagement`)
- `APIPriorityAndFairness`: 在每个服务器上启用优先级和公平性来管理请求并发。(由 `RequestManagement` 重命名而来)
- `APIResponseCompression`:压缩 `LIST` 或 `GET` 请求的 API 响应。
- `AppArmor`:使用 Docker 时,在 Linux 节点上启用基于 AppArmor 机制的强制访问控制。请参见 [AppArmor 教程](/docs/tutorials/clusters/apparmor/) 获取详细信息。
- `AppArmor`:使用 Docker 时,在 Linux 节点上启用基于 AppArmor 机制的强制访问控制。请参见 [AppArmor 教程](/zh/docs/tutorials/clusters/apparmor/) 获取详细信息。
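作为补充说明,特性门控一般通过各组件的 `--feature-gates` 命令行参数设置;对 kubelet 也可以在其配置文件中设置。下面是一个示意性片段(仅用于说明,所列门控及取值为假设,请以所用 Kubernetes 版本的文档为准):

```yaml
# 在 KubeletConfiguration 中设置特性门控的示意
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  APIPriorityAndFairness: true   # 示例:启用某个特性
  AppArmor: false                # 示例:关闭某个默认开启的特性
```

对于 kube-apiserver 等组件,等价的命令行写法是 `--feature-gates=APIPriorityAndFairness=true,AppArmor=false`。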
<!--
- `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes
@@ -463,14 +513,16 @@ Each feature gate is designed for enabling/disabling a specific feature:
ServiceAccountTokenVolumeProjection.
Check [Service Account Token Volumes](https://git.k8s.io/community/contributors/design-proposals/storage/svcacct-token-volume-source.md)
for more details.
- `ConfigurableFSGroupPolicy`: Allows user to configure volume permission change policy for fsGroups when mounting a volume in a Pod. See [Configure volume permission and ownership change policy for Pods](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods) for more details.
- `CPUManager`: Enable container level CPU affinity support, see [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
-->
- `AttachVolumeLimit`:启用卷插件用于报告可连接到节点的卷数限制。有关更多详细信息,请参见[动态卷限制](/docs/concepts/storage/storage-limits/#dynamic-volume-limits)。
- `AttachVolumeLimit`:启用卷插件用于报告可连接到节点的卷数限制。有关更多详细信息,请参见[动态卷限制](/zh/docs/concepts/storage/storage-limits/#dynamic-volume-limits)。
- `BalanceAttachedNodeVolumes`:在调度时考虑节点上已挂接的卷数,以实现资源的均衡分配。调度器在决策时会优先选择 CPU、内存利用率和卷数更为均衡的节点。
- `BlockVolume`:在 Pod 中启用原始块设备的定义和使用。有关更多详细信息,请参见[原始块卷支持](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)。
- `BlockVolume`:在 Pod 中启用原始块设备的定义和使用。有关更多详细信息,请参见[原始块卷支持](/zh/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)。
- `BoundServiceAccountTokenVolume`:迁移 ServiceAccount 卷,以使用由 ServiceAccountTokenVolumeProjection 组成的投射卷。有关更多详细信息,请参见 [Service Account Token 卷](https://git.k8s.io/community/contributors/design-proposals/storage/svcacct-token-volume-source.md)。
- `CPUManager`:启用容器级别的 CPU 亲和力支持,有关更多详细信息,请参见 [CPU 管理策略](/docs/tasks/administer-cluster/cpu-management-policies/)。
- `ConfigurableFSGroupPolicy`:在 Pod 中挂载卷时,允许用户为 fsGroup 配置卷访问权限和属主变更策略。请参见 [为 Pod 配置卷访问权限和属主变更策略](/zh/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)。
- `CPUManager`:启用容器级别的 CPU 亲和性支持,有关更多详细信息,请参见 [CPU 管理策略](/zh/docs/tasks/administer-cluster/cpu-management-policies/)。
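下面给出一个示意性的 kubelet 配置片段,展示 `CPUManager` 特性门控开启后如何启用静态 CPU 管理策略(取值仅为示例,请以官方文档为准):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static        # 由 CPUManager 特性门控控制的容器级 CPU 亲和性
reservedSystemCPUs: "0,1"       # 假设为系统保留的 CPU示例值
```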
<!--
- `CRIContainerLogRotation`: Enable container log rotation for cri container runtime.
@@ -487,35 +539,41 @@ Each feature gate is designed for enabling/disabling a specific feature:
-->
- `CRIContainerLogRotation`:为 cri 容器运行时启用容器日志轮换。
- `CSIBlockVolume`:启用外部 CSI 卷驱动程序用于支持块存储。有关更多详细信息,请参见 [`csi` 原始块卷支持](/docs/concepts/storage/volumes/#csi-raw-block-volume-support)。
- `CSIBlockVolume`:启用外部 CSI 卷驱动程序用于支持块存储。有关更多详细信息,请参见 [`csi` 原始块卷支持](/zh/docs/concepts/storage/volumes/#csi-raw-block-volume-support)。
- `CSIDriverRegistry`:在 csi.storage.k8s.io 中启用与 CSIDriver API 对象有关的所有逻辑。
- `CSIInlineVolume`:为 Pod 启用 CSI 内联卷支持。
- `CSIMigration`:确保填充和转换逻辑能够将卷操作从内嵌插件路由到相应的预安装 CSI 插件。
- `CSIMigrationAWS`:确保填充和转换逻辑能够将卷操作从 AWS-EBS 内嵌插件路由到 EBS CSI 插件。如果节点未安装和配置 EBS CSI 插件,则支持回退到内嵌 EBS 插件。这需要启用 CSIMigration 特性标志。
- `CSIMigrationAWSComplete`:停止在 kubelet 和卷控制器中注册 EBS 内嵌插件,并启用 shims 和转换逻辑将卷操作从 AWS-EBS 内嵌插件路由到 EBS CSI 插件。这需要启用 CSIMigration 和 CSIMigrationAWS 特性标志,并在集群中的所有节点上安装和配置 EBS CSI 插件。
- `CSIMigrationAzureDisk`:确保填充和转换逻辑能够将卷操作从 Azure 磁盘内嵌插件路由到 Azure 磁盘 CSI 插件。如果节点未安装和配置 AzureDisk CSI 插件,支持回退到内建 AzureDisk 插件。这需要启用 CSIMigration 特性标志。
- `CSIMigrationAzureDiskComplete`:停止在 kubelet 和卷控制器中注册 Azure 磁盘内嵌插件,并启用 shims 和转换逻辑以将卷操作从 Azure 磁盘内嵌插件路由到 AzureDisk CSI 插件。这需要启用 CSIMigration 和 CSIMigrationAzureDisk 特性标志,并在集群中的所有节点上安装和配置 AzureDisk CSI 插件。
- `CSIMigrationAzureFile`:确保填充和转换逻辑能够将卷操作从 Azure 文件内嵌插件路由到 Azure 文件 CSI 插件。如果节点未安装和配置 AzureFile CSI 插件,支持回退到内嵌 AzureFile 插件。这需要启用 CSIMigration 特性标志。
- `CSIMigrationAzureFileComplete`:停止在 kubelet 和卷控制器中注册 Azure-File 内嵌插件,并启用 shims 和转换逻辑以将卷操作从 Azure-File 内嵌插件路由到 AzureFile CSI 插件。这需要启用 CSIMigration 和 CSIMigrationAzureFile 特性标志,并在集群中的所有节点上安装和配置 AzureFile CSI 插件。
<!--
- `CSIMigrationGCE`: Enables shims and translation logic to route volume operations from the GCE-PD in-tree plugin to PD CSI plugin. Supports falling back to in-tree GCE plugin if a node does not have PD CSI plugin installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationGCEComplete`: Stops registering the GCE-PD in-tree plugin in kubelet and volume controllers and enables shims and translation logic to route volume operations from the GCE-PD in-tree plugin to PD CSI plugin. Requires CSIMigration and CSIMigrationGCE feature flags enabled and PD CSI plugin installed and configured on all nodes in the cluster.
- `CSIMigrationOpenStack`: Enables shims and translation logic to route volume operations from the Cinder in-tree plugin to Cinder CSI plugin. Supports falling back to in-tree Cinder plugin if a node does not have Cinder CSI plugin installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationOpenStackComplete`: Stops registering the Cinder in-tree plugin in kubelet and volume controllers and enables shims and translation logic to route volume operations from the Cinder in-tree plugin to Cinder CSI plugin. Requires CSIMigration and CSIMigrationOpenStack feature flags enabled and Cinder CSI plugin installed and configured on all nodes in the cluster.
- `CSIMigrationvSphere`: Enables shims and translation logic to route volume operations from the vSphere in-tree plugin to vSphere CSI plugin. Supports falling back to in-tree vSphere plugin if a node does not have vSphere CSI plugin installed and configured. Requires CSIMigration feature flag enabled.
- `CSIMigrationvSphereComplete`: Stops registering the vSphere in-tree plugin in kubelet and volume controllers and enables shims and translation logic to route volume operations from the vSphere in-tree plugin to vSphere CSI plugin. Requires CSIMigration and CSIMigrationvSphere feature flags enabled and vSphere CSI plugin installed and configured on all nodes in the cluster.
- `CSINodeInfo`: Enable all logic related to the CSINodeInfo API object in csi.storage.k8s.io.
- `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a
[CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
compatible volume plugin.
Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details.
-->
- `CSIMigrationGCE`使 shims 和转换逻辑能够将卷操作从 GCE-PD 内嵌插件路由到 PD CSI 插件。如果节点未安装和配置 PD CSI 插件,支持回退到内嵌 GCE 插件。这需要启用 CSIMigration 特性标志。
- `CSIMigrationGCEComplete`:停止在 kubelet 和卷控制器中注册 GCE-PD 内嵌插件,并启用 shims 和转换逻辑以将卷操作从 GCE-PD 内嵌插件路由到 PD CSI 插件。这需要启用 CSIMigration 和 CSIMigrationGCE 特性标志,并在集群中的所有节点上安装和配置 PD CSI 插件。
- `CSIMigrationGCE`启用 shims 和转换逻辑,将卷操作从 GCE-PD 内嵌插件路由到 PD CSI 插件。如果节点未安装和配置 PD CSI 插件,支持回退到内嵌 GCE 插件。这需要启用 CSIMigration 特性标志。
- `CSIMigrationGCEComplete`:停止在 kubelet 和卷控制器中注册 GCE-PD 内嵌插件,并启用 shims 和转换逻辑以将卷操作从 GCE-PD 内嵌插件路由到 PD CSI 插件。这需要启用 CSIMigration 和 CSIMigrationGCE 特性标志,并在集群中的所有节点上安装和配置 PD CSI 插件。
- `CSIMigrationOpenStack`:确保填充和转换逻辑能够将卷操作从 Cinder 内嵌插件路由到 Cinder CSI 插件。如果节点未安装和配置 Cinder CSI 插件,支持回退到内嵌 Cinder 插件。这需要启用 CSIMigration 特性标志。
- `CSIMigrationOpenStackComplete`:停止在 kubelet 和卷控制器中注册 Cinder 内嵌插件,并启用 shims 和转换逻辑将卷操作从 Cinder 内嵌插件路由到 Cinder CSI 插件。这需要启用 CSIMigration 和 CSIMigrationOpenStack 特性标志,并在群集中的所有节点上安装和配置 Cinder CSI 插件。
- `CSIMigrationOpenStackComplete`:停止在 kubelet 和卷控制器中注册 Cinder 内嵌插件,并启用 shims 和转换逻辑将卷操作从 Cinder 内嵌插件路由到 Cinder CSI 插件。这需要启用 CSIMigration 和 CSIMigrationOpenStack 特性标志,并在集群中的所有节点上安装和配置 Cinder CSI 插件。
- `CSIMigrationvSphere`: 启用 shims 和转换逻辑,将卷操作从 vSphere 内嵌插件路由到 vSphere CSI 插件。如果节点未安装和配置 vSphere CSI 插件,则支持回退到 vSphere 内嵌插件。这需要启用 CSIMigration 特性标志。
- `CSIMigrationvSphereComplete`: 停止在 kubelet 和卷控制器中注册 vSphere 内嵌插件,并启用 shims 和转换逻辑以将卷操作从 vSphere 内嵌插件路由到 vSphere CSI 插件。这需要启用 CSIMigration 和 CSIMigrationvSphere 特性标志,并在集群中的所有节点上安装和配置 vSphere CSI 插件。
- `CSINodeInfo`:在 csi.storage.k8s.io 中启用与 CSINodeInfo API 对象有关的所有逻辑。
- `CSIPersistentVolume`:启用发现并挂载通过 [CSI容器存储接口](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)兼容卷插件配置的卷。有关更多详细信息,请参见 [`csi` 卷类型](/docs/concepts/storage/volumes/#csi)。
- `CSIPersistentVolume`:启用发现和挂载通过 [CSI容器存储接口](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)兼容卷插件配置的卷。
  详情请参见 [`csi` 卷类型](/zh/docs/concepts/storage/volumes/#csi)。
- `CSIStorageCapacity`:使 CSI 驱动程序可以发布存储容量信息,并使 Kubernetes 调度程序在调度 Pod 时使用该信息。参见[存储容量](/zh/docs/concepts/storage/storage-capacity/)。
- `CSIVolumeFSGroupPolicy`:允许 CSIDriver 使用 `fsGroupPolicy` 字段。该字段控制由 CSIDriver 创建的卷在被挂载时是否支持修改卷的属主和访问权限。
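作为 `CSIVolumeFSGroupPolicy` 的一个示意(驱动名称为假设),`fsGroupPolicy` 字段在 CSIDriver 对象上设置:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io   # 假设的 CSI 驱动名称
spec:
  fsGroupPolicy: File           # 挂载时总是按 fsGroup 修改卷的属主与权限
```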
<!--
- `CustomCPUCFSQuotaPeriod`: Enable nodes to change CPUCFSQuotaPeriod.
@@ -525,41 +583,47 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CustomResourceDefaulting`: Enable CRD support for default values in OpenAPI v3 validation schemas.
- `CustomResourcePublishOpenAPI`: Enables publishing of CRD OpenAPI specs.
- `CustomResourceSubresources`: Enable `/status` and `/scale` subresources
on resources created from [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/).
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
- `CustomResourceValidation`: Enable schema based validation on resources created from
[CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/).
[CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
on resources created from [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/).
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
troubleshoot a running Pod.
-->
- `CustomCPUCFSQuotaPeriod`:使节点能够更改 CPUCFSQuotaPeriod。
- `CustomPodDNS`:使用其 `dnsConfig` 属性启用 Pod 的自定义 DNS 设置。有关更多详细信息,请参见 [Pod 的 DNS 配置](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config)。
- `CustomPodDNS`:使用其 `dnsConfig` 属性启用 Pod 的自定义 DNS 设置。有关更多详细信息,请参见 [Pod 的 DNS 配置](/zh/docs/concepts/services-networking/dns-pod-service/#pods-dns-config)。
- `CustomResourceDefaulting`:为 OpenAPI v3 验证架构中的默认值启用 CRD 支持。
- `CustomResourcePublishOpenAPI`:启用 CRD OpenAPI 规范的发布。
- `CustomResourceSubresources`:对于从 [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/) 中创建的资源启用 `/status``/scale` 子资源。
- `CustomResourceValidation`:对于从 [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/) 中创建的资源启用基于架构的验证。
- `CustomResourceWebhookConversion`:对于从 [CustomResourceDefinition](/docs/concepts/api-extension/custom-resources/) 中创建的资源启用基于 Webhook 的转换。
- `CustomResourceSubresources`:对于从 [CustomResourceDefinition](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 中创建的资源启用 `/status``/scale` 子资源。
- `CustomResourceValidation`:对于从 [CustomResourceDefinition](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 中创建的资源启用基于模式的验证。
- `CustomResourceWebhookConversion`:对于从 [CustomResourceDefinition](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 中创建的资源启用基于 Webhook 的转换。
对正在运行的 Pod 进行故障排除。
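上述 CustomResource 相关门控所启用的能力,可以用一个简化的 CRD 片段示意(资源名与模式均为假设):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # 假设的资源名,必须为 <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:               # 基于模式的验证CustomResourceValidation
        type: object
    subresources:                    # /status 与 /scale 子资源CustomResourceSubresources
      status: {}
      scale:
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas
```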
<!--
- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/).
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
based resource provisioning on nodes.
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
[default spreading](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints).
- `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests
so that validation, merging, and mutation can be tested without committing.
- `DynamicAuditing`: Enable [dynamic auditing](/docs/tasks/debug-application-cluster/audit/#dynamic-backend)
- `DynamicAuditing`(*deprecated*): Used to enable dynamic auditing before v1.19.
- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
- `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of volume topology and handle PV provisioning.
This feature is superseded by the `VolumeScheduling` feature completely in v1.12.
- `DynamicVolumeProvisioning`(*deprecated*): Enable the [dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/) of persistent volumes to Pods.
-->
- `DevicePlugins`:在节点上启用基于 [device-plugins](/docs/concepts/cluster-administration/device-plugins/) 的资源供应。
- `DryRun`:启用服务器端 [dry run](/docs/reference/using-api/api-concepts/#dry-run) 请求,以便无需提交即可测试验证、合并和差异化。
- `DynamicAuditing`:确保[动态审查](/docs/tasks/debug-application-cluster/audit/#dynamic-backend)。
- `DynamicKubeletConfig`:启用 kubelet 的动态配置。请参阅[重新配置 kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/)。
- `DisableAcceleratorUsageMetrics`[禁用 kubelet 收集的加速器指标](/zh/docs/concepts/cluster-administration/system-metrics/)。
- `DevicePlugins`:在节点上启用基于 [device-plugins](/zh/docs/concepts/cluster-administration/device-plugins/) 的资源供应。
- `DefaultPodTopologySpread`:启用 `PodTopologySpread` 调度插件来实现[默认的 Pod 分布](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints)。
- `DryRun`:启用服务器端[试运行dry run](/zh/docs/reference/using-api/api-concepts/#dry-run)请求,以便无需提交即可测试验证、合并和变更操作。
- `DynamicAuditing` *已弃用* ):在 v1.19 版本前用于启用动态审计。
- `DynamicKubeletConfig`:启用 kubelet 的动态配置。请参阅[重新配置 kubelet](/zh/docs/tasks/administer-cluster/reconfigure-kubelet/)。
- `DynamicProvisioningScheduling`:扩展默认 scheduler 以了解卷拓扑并处理 PV 配置。此特性已在 v1.12 中完全被 `VolumeScheduling` 特性取代。
- `DynamicVolumeProvisioning` *已弃用* ):启用持久化卷到 Pod 的[动态预配置](/docs/concepts/storage/dynamic-provisioning/)。
- `DynamicVolumeProvisioning` *已弃用* ):启用持久化卷到 Pod 的[动态预配置](/zh/docs/concepts/storage/dynamic-provisioning/)。
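`DynamicVolumeProvisioning` 所描述的动态制备通常通过 StorageClass 与 PVC 配合完成,下面是一个示意片段(名称与制备器均为假设):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-example              # 示例名称
provisioner: csi.example.com      # 假设的制备器
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-example
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-example  # 引用上面的 StorageClass触发动态制备
  resources:
    requests:
      storage: 1Gi
```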
<!--
- `EnableAggregatedDiscoveryTimeout` (*deprecated*): Enable the five second timeout on aggregated discovery calls.
@@ -577,9 +641,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `EnableEquivalenceClassCache`:调度 Pod 时,使调度器缓存节点的等价性判断。
- `EphemeralContainers`:启用添加 {{< glossary_tooltip text="临时容器" term_id="ephemeral-container" >}} 到正在运行的 Pod 的特性。
- `EvenPodsSpread`:使 Pod 能够在拓扑域之间平衡调度。请参阅 [Pod 拓扑扩展约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/)。
- `ExpandInUsePersistentVolumes`:启用扩展使用中的 PVC。请查阅 [调整使用中的 PersistentVolumeClaim 的大小](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim)。
- `ExpandInUsePersistentVolumes`:启用扩展使用中的 PVC。请查阅[调整使用中的 PersistentVolumeClaim 的大小](/zh/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim)。
- `ExpandPersistentVolumes`:启用持久卷的扩展。请查阅[扩展永久卷声明](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)。
- `ExperimentalCriticalPodAnnotation`:启用将特定 Pod 注解为 *critical* 的方式,用于[确保其调度](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)。从 v1.13 开始Pod 优先级和抢占功能已弃用此特性。
- `ExperimentalCriticalPodAnnotation`:启用将特定 Pod 注解为 *critical* 的方式,用于[确保其调度](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)。从 v1.13 开始Pod 优先级和抢占功能已弃用此特性。
<!--
- `ExperimentalHostUserNamespaceDefaultingGate`: Enabling the defaulting user
@@ -588,49 +652,66 @@ Each feature gate is designed for enabling/disabling a specific feature:
capabilities (e.g. `MKNODE`, `SYS_MODULE` etc.). This should only be enabled
if user namespace remapping is enabled in the Docker daemon.
- `EndpointSlice`: Enables Endpoint Slices for more scalable and extensible
network endpoints. Requires corresponding API and Controller to be enabled.
See [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
network endpoints. See [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `EndpointSliceProxying`: When this feature gate is enabled, kube-proxy running
on Linux will use EndpointSlices as the primary data source instead of
Endpoints, enabling scalability and performance improvements. See
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `WindowsEndpointSliceProxying`: When this feature gate is enabled, kube-proxy
running on Windows will use EndpointSlices as the primary data source instead
of Endpoints, enabling scalability and performance improvements. See
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `GCERegionalPersistentDisk`: Enable the regional PD feature on GCE.
- `GenericEphemeralVolume`: Enables ephemeral, inline volumes that support all features of normal volumes (can be provided by third-party storage vendors, storage capacity tracking, restore from snapshot, etc.). See [Ephemeral Volumes](/docs/concepts/storage/ephemeral-volumes/).
- `HugePages`: Enable the allocation and consumption of pre-allocated [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
- `HugePageStorageMediumSize`: Enable support for multiple sizes pre-allocated [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
-->
- `ExperimentalHostUserNamespaceDefaultingGate`:启用主机默认的用户名字空间。这适用于使用其他宿主名字空间或宿主挂载的容器,以及具有特权或使用特定非名字空间化能力(例如 `MKNODE`、`SYS_MODULE` 等)的容器。仅当 Docker 守护进程启用了用户名字空间重映射时才应启用此特性。
- `EndpointSlice`:启用端点切片以实现更多可扩展的网络端点。需要启用相应的 API 和控制器,请参阅[启用端点切片](/docs/tasks/administer-cluster/enabling-endpointslices/)。
- `EndpointSlice`:启用 EndpointSlice 以实现更多可扩展的网络端点。需要启用相应的 API 和控制器,请参阅[启用 EndpointSlice](/zh/docs/tasks/administer-cluster/enabling-endpointslices/)。
- `EndpointSliceProxying`启用此特性门控后Linux 上运行的 kube-proxy 将使用 EndpointSlices 取代 Endpoints 作为主要数据源,可以提高扩展性和性能。 请参见
[启用 EndpointSlice](/zh/docs/tasks/administer-cluster/enabling-endpointslices/)。
- `WindowsEndpointSliceProxying`启用此特性门控后Windows 上运行的 kube-proxy 将使用 EndpointSlices 取代 Endpoints 作为主要数据源,可以提高扩展性和性能。 请参见
[启用 EndpointSlice](/zh/docs/tasks/administer-cluster/enabling-endpointslices/)。
- `GCERegionalPersistentDisk`:在 GCE 上启用区域 PD 特性。
- `HugePages`: 启用分配和使用预分配的 [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/)。
- `GenericEphemeralVolume`:启用支持普通卷全部特性的临时内联卷(可以由第三方存储供应商提供、支持存储容量跟踪、支持从快照还原等等)。请参见[临时卷](/zh/docs/concepts/storage/ephemeral-volumes/)。
- `HugePages`:启用分配和使用预分配的[巨页资源](/zh/docs/tasks/manage-hugepages/scheduling-hugepages/)。
- `HugePageStorageMediumSize`:启用支持多种大小的预分配[巨页资源](/zh/docs/tasks/manage-hugepages/scheduling-hugepages/)。
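启用 `HugePages` 后Pod 可以像下面这样请求预分配的巨页资源(镜像与数量仅为示例,且要求节点上已预分配对应尺寸的巨页):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: nginx                 # 示例镜像
    resources:
      requests:
        memory: 100Mi
        hugepages-2Mi: 100Mi     # 请求 2Mi 尺寸的巨页;多尺寸支持由 HugePageStorageMediumSize 控制
      limits:
        memory: 100Mi
        hugepages-2Mi: 100Mi
```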
<!--
- `HyperVContainer`: Enable [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) for Windows containers.
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler` resources when using custom or external metrics.
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as immutable for better safety and performance.
- `KubeletConfigFile`: Enable loading kubelet configuration from a file specified using a config file.
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/) for more details.
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
- `KubeletPodResources`: Enable the kubelet's pod resources grpc endpoint.
See [Support Device Monitoring](https://git.k8s.io/community/keps/sig-node/compute-device-assignment.md) for more details.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the feature-specific labels.
See [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md) for more details.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`.
-->
- `HyperVContainer`:为 Windows 容器启用[Hyper-V 隔离](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)。
- `HPAScaleToZero`:使用自定义指标或外部指标时,可将 `HorizontalPodAutoscaler` 资源的 `minReplicas` 设置为 0。
- `KubeletConfigFile`:启用从使用配置文件指定的文件中加载 kubelet 配置。有关更多详细信息,请参见[通过配置文件设置 kubelet 参数](/docs/tasks/administer-cluster/kubelet-config-file/)。
- `KubeletPluginsWatcher`:启用基于探针的插件监视应用程序,使 kubelet 能够发现插件,例如 [CSI 卷驱动程序](/docs/concepts/storage/volumes/#csi)。
- `KubeletPodResources`:启用 kubelet 的 pod 资源 grpc 端点。有关更多详细信息,请参见[支持设备监控](https://git.k8s.io/community/keps/sig-node/compute-device-assignment.md)。
- `LegacyNodeRoleBehavior`:禁用此选项后,服务负载均衡中的旧版操作和节点中断将忽略 `node-role.kubernetes.io/master` 标签,而使用特性指定的标签。
- `ImmutableEphemeralVolumes`:允许将各个 Secret 和 ConfigMap 标记为不可变更的,以提高安全性和性能。
- `KubeletConfigFile`:启用从使用配置文件指定的文件中加载 kubelet 配置。有关更多详细信息,请参见[通过配置文件设置 kubelet 参数](/zh/docs/tasks/administer-cluster/kubelet-config-file/)。
- `KubeletPluginsWatcher`:启用基于探针的插件监视应用程序,使 kubelet 能够发现插件,例如 [CSI 卷驱动程序](/zh/docs/concepts/storage/volumes/#csi)。
- `KubeletPodResources`:启用 kubelet 的 pod 资源 grpc 端点。有关更多详细信息,请参见[支持设备监控](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)。
- `LegacyNodeRoleBehavior`:禁用此门控后,服务负载均衡器和节点干扰处理中的传统行为将忽略 `node-role.kubernetes.io/master` 标签,而使用 `NodeDisruptionExclusion``ServiceNodeExclusion` 所提供的特性专用标签。
<!--
- `LocalStorageCapacityIsolation`: Enable the consumption of [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and also the `sizeLimit` property of an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation` is enabled for [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy.
- `LocalStorageCapacityIsolation`: Enable the consumption of [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and also the `sizeLimit` property of an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation` is enabled for [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy.
- `MountContainers`: Enable using utility containers on host as the volume mounter.
- `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods.
For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation).
- `NodeDisruptionExclusion`: Enable use of the node label `node.kubernetes.io/exclude-disruption` which prevents nodes from being evacuated during zone failures.
-->
- `LocalStorageCapacityIsolation`启用[本地临时存储](/docs/concepts/configuration/manage-compute-resources-container/)的消耗,以及 [emptyDir 卷](/docs/concepts/storage/volumes/#emptydir) 的 `sizeLimit` 属性。
- `LocalStorageCapacityIsolationFSQuotaMonitoring`:如果[本地临时存储](/docs/concepts/configuration/manage-compute-resources-container/)启用了 `LocalStorageCapacityIsolation`,并且 [emptyDir 卷](/docs/concepts/storage/volumes/#emptydir) 的后备文件系统支持项目配额,并且启用了这些配额,请使用项目配额来监视 [emptyDir 卷](/docs/concepts/storage/volumes/#emptydir)的存储消耗而不是遍历文件系统,以此获得更好的性能和准确性。
- `LocalStorageCapacityIsolation`允许使用[本地临时存储](/zh/docs/concepts/configuration/manage-resources-containers/)以及 [emptyDir 卷](/zh/docs/concepts/storage/volumes/#emptydir) 的 `sizeLimit` 属性。
- `LocalStorageCapacityIsolationFSQuotaMonitoring`:当为[本地临时存储](/zh/docs/concepts/configuration/manage-resources-containers/)启用了 `LocalStorageCapacityIsolation`,且 [emptyDir 卷](/zh/docs/concepts/storage/volumes/#emptydir)的后备文件系统支持并启用了项目配额时,使用项目配额来监视 [emptyDir 卷](/zh/docs/concepts/storage/volumes/#emptydir)的存储消耗,而不是遍历文件系统,以获得更好的性能和准确性。
- `MountContainers`:允许使用宿主机上的工具容器作为卷挂载程序。
- `MountPropagation`:启用将一个容器安装的共享卷共享到其他容器或 Pod。有关更多详细信息请参见 [mount propagation](/docs/concepts/storage/volumes/#mount-propagation)。
- `MountPropagation`:允许将由某容器挂载的卷共享给其他容器或 Pod。有关更多详细信息请参见[挂载传播](/zh/docs/concepts/storage/volumes/#mount-propagation)。
- `NodeDisruptionExclusion`:启用节点标签 `node.kubernetes.io/exclude-disruption`,以防止在区域故障期间驱逐节点。
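`MountPropagation` 所启用的挂载传播可通过 volumeMounts 的 `mountPropagation` 字段示意如下(卷名与路径为假设):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-example
spec:
  containers:
  - name: app
    image: nginx                         # 示例镜像
    volumeMounts:
    - name: shared-data
      mountPath: /data
      mountPropagation: HostToContainer  # 将宿主机上后续的挂载传播进容器
  volumes:
  - name: shared-data
    hostPath:
      path: /mnt/shared                  # 假设的宿主机路径
```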
<!--
@@ -638,7 +719,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `NonPreemptingPriority`: Enable NonPreempting option for PriorityClass and Pod.
- `PersistentLocalVolumes`: Enable the usage of `local` volume type in Pods.
Pod affinity has to be specified if requesting a `local` volume.
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/configuration/pod-overhead/) feature to account for pod overheads.
- `PodDisruptionBudget`: Enable the [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) feature.
- `PodOverhead`: Enable the [PodOverhead](/docs/concepts/scheduling-eviction/pod-overhead/) feature to account for pod overheads.
- `PodPriority`: Enable the descheduling and preemption of Pods based on their [priorities](/docs/concepts/configuration/pod-priority-preemption/).
- `PodReadinessGates`: Enable the setting of `PodReadinessGate` field for extending
Pod readiness evaluation. See [Pod readiness gate](/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)
@@ -648,8 +730,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `NodeLease`:启用新的租赁 API 以报告节点心跳,可用作节点运行状况信号。
- `NonPreemptingPriority`:为 PriorityClass 和 Pod 启用 NonPreempting 选项。
- `PersistentLocalVolumes`:允许在 Pod 中使用 `local` 卷类型。如果请求 `local` 卷,则必须指定 Pod 亲和性。
- `PodOverhead`:启用 [PodOverhead](/docs/concepts/configuration/pod-overhead/) 特性以解决 Pod 开销。
- `PodPriority`:根据[优先级](/docs/concepts/configuration/pod-priority-preemption/)启用 Pod 的调度和抢占。
- `PodDisruptionBudget`:启用 [PodDisruptionBudget](/zh/docs/tasks/run-application/configure-pdb/) 特性。
- `PodOverhead`:启用 [PodOverhead](/zh/docs/concepts/scheduling-eviction/pod-overhead/) 特性以考虑 Pod 开销。
- `PodPriority`:根据[优先级](/zh/docs/concepts/configuration/pod-priority-preemption/)启用 Pod 的调度和抢占。
- `PodReadinessGates`:启用 `PodReadinessGate` 字段的设置以扩展 Pod 准备状态评估。有关更多详细信息,请参见 [Pod readiness 特性门控](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)。
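`PodReadinessGates` 的用法可以用如下片段示意(条件类型为假设,需由外部控制器写入对应的 Pod 状况):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-gate-example
spec:
  readinessGates:
  - conditionType: "example.com/load-balancer-ready"  # 假设的自定义状况类型
  containers:
  - name: app
    image: nginx                # 示例镜像
```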
<!--
@@ -661,20 +744,20 @@ Each feature gate is designed for enabling/disabling a specific feature:
being deleted when it is still used by any Pod.
- `QOSReserved`: Allows resource reservations at the QoS level preventing pods at lower QoS levels from
bursting into resources requested at higher QoS levels (memory only for now).
- `ResourceLimitsPriorityFunction`: Enable a scheduler priority function that
- `ResourceLimitsPriorityFunction` (*deprecated*): Enable a scheduler priority function that
assigns a lowest possible score of 1 to a node that satisfies at least one of
the input Pod's cpu and memory limits. The intent is to break ties between
nodes with same scores.
-->
- `PodShareProcessNamespace`:在 Pod 中启用 `shareProcessNamespace` 的设置,以便在 Pod 中运行的容器之间共享单个进程命名空间。更多详细信息,请参见[在 Pod 中的容器之间共享进程命名空间](/docs/tasks/configure-pod-container/share-process-namespace/)。
- `PodShareProcessNamespace`:在 Pod 中启用 `shareProcessNamespace` 的设置,
以便在 Pod 中运行的容器之间共享同一进程命名空间。更多详细信息,请参见[在 Pod 中的容器间共享同一进程名字空间](/zh/docs/tasks/configure-pod-container/share-process-namespace/)。
- `ProcMountType`:启用对容器的 ProcMountType 的控制。
- `PVCProtection`:启用防止任何 Pod 仍使用 PersistentVolumeClaim(PVC) 删除的特性。可以在[此处](/docs/tasks/administer-cluster/storage-object-in-use-protection/)中找到更多详细信息。
- `PVCProtection`:启用防止仍被某 Pod 使用的 PersistentVolumeClaimPVC被删除的特性。更多详细信息可以在[此处](/zh/docs/tasks/administer-cluster/storage-object-in-use-protection/)找到。
- `QOSReserved`:允许在 QoS 级别进行资源预留,以防止处于较低 QoS 级别的 Pod 突发使用较高 QoS 级别所请求的资源(目前仅支持内存)。
- `ResourceLimitsPriorityFunction`:启用 scheduler 优先级特性,该特性将最低可能得 1 分配给至少满足输入 Pod 的 cpu 和内存限制之一的节点,目的是打破得分相同的节点之间的联系。
- `ResourceLimitsPriorityFunction` *已弃用* ):启用某调度器优先级函数,该函数将最低得分 1 指派给至少满足输入 Pod 的 CPU 和内存限制之一的节点,目的是在得分相同的节点之间打破平局。
<!--
- `RequestManagement`: Enable managing request concurrency with prioritization and fairness at each server.
- `ResourceQuotaScopeSelectors`: Enable resource quota scope selectors.
- `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet.
See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
@@ -685,31 +768,37 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler instead of the DaemonSet controller.
-->
- `RequestManagement`:在每个服务器上启用基于优先级和公平性的请求并发管理。
- `ResourceQuotaScopeSelectors`:启用资源配额范围选择器。
- `RotateKubeletClientCertificate`:在 kubelet 上启用客户端 TLS 证书的轮换。有关更多详细信息,请参见 [kubelet 配置](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)。
- `RotateKubeletServerCertificate`:在 kubelet 上启用服务器 TLS 证书的轮换。有关更多详细信息,请参见 [kubelet 配置](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)。
- `RotateKubeletClientCertificate`:在 kubelet 上启用客户端 TLS 证书的轮换。有关更多详细信息,请参见 [kubelet 配置](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)。
- `RotateKubeletServerCertificate`:在 kubelet 上启用服务器 TLS 证书的轮换。有关更多详细信息,请参见 [kubelet 配置](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)。
- `RunAsGroup`:启用对容器初始化过程中设置的主要组 ID 的控制。
- `RuntimeClass`:启用 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 特性用于选择容器运行时配置。
- `ScheduleDaemonSetPods`:启用 DaemonSet Pods 由默认调度程序而不是 DaemonSet 控制器进行调度。
<!--
- `SCTPSupport`: Enables the _SCTP_ `protocol` value in Pod, Service, Endpoints, EndpointSlice, and NetworkPolicy definitions.
- `ServerSideApply`: Enables the [Server Side Apply (SSA)](/docs/reference/using-api/server-side-apply/) path at the API Server.
- `ServiceAccountIssuerDiscovery`: Enable OIDC discovery endpoints (issuer and JWKS URLs) for the service account issuer in the API server. See [Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery) for more details.
- `ServiceAppProtocol`: Enables the `AppProtocol` field on Services and Endpoints.
- `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider.
A node is eligible for exclusion if labelled with "`alpha.service-controller.kubernetes.io/exclude-balancer`" key or `node.kubernetes.io/exclude-from-external-load-balancers`.
- `ServiceTopology`: Enable service to route traffic based upon the Node topology of the cluster. See [ServiceTopology](/docs/concepts/services-networking/service-topology/) for more details.
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain Name(FQDN) as hostname of pod. See [Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
- `StartupProbe`: Enable the [startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe) probe in the kubelet.
- `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or
PersistentVolumeClaim objects if they are still being used.
-->
- `SCTPSupport`:在 Service、Endpoints、NetworkPolicy 和 Pod 定义中,允许将 _SCTP_ 用作 `protocol` 值。
- `ServerSideApply`:在 API 服务器上启用[服务器端应用SSA](/zh/docs/reference/using-api/server-side-apply/) 路径。
- `ServiceAccountIssuerDiscovery`:在 API 服务器中为服务帐户颁发者启用 OIDC 发现端点(颁发者和 JWKS URL
  详情请参见[为 Pod 配置服务账户](/zh/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)。
- `ServiceAppProtocol`:为 Service 和 Endpoints 启用 `AppProtocol` 字段。
- `ServiceLoadBalancerFinalizer`:为服务负载均衡启用终结器保护。
- `ServiceNodeExclusion`:启用从云提供商创建的负载均衡中排除节点。如果节点标记有 `alpha.service-controller.kubernetes.io/exclude-balancer` 键或 `node.kubernetes.io/exclude-from-external-load-balancers`,则可以排除节点。
- `ServiceTopology`:启用服务拓扑可以让一个服务基于集群的节点拓扑进行流量路由。有关更多详细信息,请参见 [Service 拓扑](/zh/docs/concepts/services-networking/service-topology/)。
- `SetHostnameAsFQDN`:启用将全限定域名 FQDN设置为 Pod 主机名的功能。请参见[给 Pod 设置 `setHostnameAsFQDN` 字段](/zh/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field)。
- `StartupProbe`:在 kubelet 中启用 [startup](/zh/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe) 探针。
- `StorageObjectInUseProtection`:如果仍在使用 PersistentVolume 或 PersistentVolumeClaim 对象,则推迟删除它们。
- `StorageVersionHash`:允许 apiserver 在发现中公开存储版本的哈希值。
- `StreamingProxyRedirects`:指示 API 服务器拦截并遵循从后端kubelet进行重定向以处理流请求。流请求的例子包括 `exec`、`attach` 和 `port-forward` 请求。
- `SupportIPVSProxyMode`:启用使用 IPVS 提供集群内服务的负载均衡。有关更多详细信息,请参见[服务代理](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)。
- `SupportPodPidsLimit`:启用支持限制 Pod 中的进程 PID。
- `Sysctls`:启用对可以为每个 Pod 设置的命名空间内核参数sysctls的支持。有关更多详细信息请参见 [sysctls](/zh/docs/tasks/administer-cluster/sysctl-cluster/)。
<!--
- `TaintBasedEvictions`: Enable evicting pods from nodes based on taints on nodes and tolerations on Pods.
See [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) for more details.
- `TaintNodesByCondition`: Enable automatic tainting nodes based on [node conditions](/docs/concepts/architecture/nodes/#condition).
- `TokenRequest`: Enable the `TokenRequest` endpoint on service account resources.
- `TokenRequestProjection`: Enable the injection of service account tokens into
- `TTLAfterFinished`: Allow a [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) to clean up resources after they finish execution.
-->
- `TaintBasedEvictions`:根据节点上的污点和 Pod 上的容忍度启用从节点驱逐 Pod 的特性。有关更多详细信息,请参见[污点和容忍度](/zh/docs/concepts/configuration/taint-and-toleration/)。
- `TaintNodesByCondition`:根据[节点状况](/zh/docs/concepts/architecture/nodes/#condition)启用自动为节点添加污点。
- `TokenRequest`:在服务帐户资源上启用 `TokenRequest` 端点。
- `TokenRequestProjection`:启用通过 [`projected` 卷](/zh/docs/concepts/storage/volumes/#projected) 将服务帐户令牌注入到 Pod 中的特性。
- `TopologyManager`:启用一种机制来协调 Kubernetes 不同组件的细粒度硬件资源分配。详见[控制节点上的拓扑管理策略](/zh/docs/tasks/administer-cluster/topology-manager/)。
- `TTLAfterFinished`:完成执行后,允许 [TTL 控制器](/zh/docs/concepts/workloads/controllers/ttlafterfinished/)清理资源。
<!--
- `VolumePVCDataSource`: Enable support for specifying an existing PVC as a DataSource.
- `VolumeSubpathEnvExpansion`: Enable `subPathExpr` field for expanding environment variables into a `subPath`.
- `WatchBookmark`: Enable support for watch bookmark events.
- `WindowsGMSA`: Enables passing of GMSA credential specs from pods to container runtimes.
- `WindowsRunAsUserName`: Enable support for running applications in Windows containers as a non-default user.
See [Configuring RunAsUserName](/docs/tasks/configure-pod-container/configure-runasusername) for more details.
- `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows.
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
-->
- `VolumeSubpathEnvExpansion`:启用 `subPathExpr` 字段用于将环境变量扩展为 `subPath`
- `WatchBookmark`:启用对监测 bookmark 事件的支持。
- `WindowsGMSA`:允许将 GMSA 凭据规范从 Pod 传递到容器运行时。
- `WindowsRunAsUserName`:提供使用非默认用户在 Windows 容器中运行应用程序的支持。
详情请参见[配置 RunAsUserName](/zh/docs/tasks/configure-pod-container/configure-runasusername)。
- `WinDSR`:允许 kube-proxy 为 Windows 创建 DSR 负载均衡。
- `WinOverlay`:允许 kube-proxy 在 Windows 的 overlay 模式下运行。
the project's approach to removing features and components.
-->
* Kubernetes 的[弃用策略](/zh/docs/reference/using-api/deprecation-policy/)介绍了项目处理已移除特性和组件的方法。

* 用一个或多个文件指定资源:`-f file1 -f file2 -f file<#>`
* [使用 YAML 而不是 JSON](/zh/docs/concepts/configuration/overview/#general-config-tips) 因为 YAML 更容易使用,特别是用于配置文件时。<br/>
例子:`kubectl get -f ./pod.yaml`
* `flags`: 指定可选的参数。例如,可以使用 `-s``-server` 参数指定 Kubernetes API 服务器的地址和端口。<br/>

---
title: 服务器端应用Server-Side Apply
content_type: concept
weight: 25
min-kubernetes-server-version: 1.16
---
<!--
---
title: Server-Side Apply
reviewers:
- smarterclayton
- apelisse
- lavalamp
- liggitt
content_type: concept
weight: 25
min-kubernetes-server-version: 1.16
---
-->
<!-- overview -->
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
<!--
## Introduction
Server Side Apply helps users and controllers manage their resources via
declarative configurations. It allows them to create and/or modify their
[objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
declaratively, simply by sending their fully specified intent.
-->
## 简介 {#introduction}
服务器端应用协助用户、控制器通过声明式配置的方式管理他们的资源。
它们只需发送完整描述的目标fully specified intent
就能声明式地创建和/或修改
[对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/)。
<!--
A fully specified intent is a partial object that only includes the fields and
values for which the user has an opinion. That intent either creates a new
object or is [combined](#merge-strategy), by the server, with the existing object.
The system supports multiple appliers collaborating on a single object.
-->
一个完整描述的目标并不是一个完整的对象,仅包括能体现用户意图的字段和值。
该目标intent可以用来创建一个新对象
也可以通过服务器来实现与现有对象的[合并](#merge-strategy)。
系统支持多个应用者appliers在同一个对象上开展协作。
<!--
Changes to an object's fields are tracked through a "[field management](#field-management)"
mechanism. When a field's value changes, ownership moves from its current
manager to the manager making the change. When trying to apply an object,
fields that have a different value and are owned by another manager will
result in a [conflict](#conflicts). This is done in order to signal that the
operation might undo another collaborator's changes. Conflicts can be forced,
in which case the value will be overridden, and the ownership will be
transferred.
-->
“[字段管理field management](#field-management)”机制追踪对象字段的变化。
当一个字段值改变时其所有权从当前管理器manager转移到施加变更的管理器。
当尝试将新配置应用到一个对象时,如果字段有不同的值,且由其他管理器管理,
将会引发[冲突](#conflicts)。
冲突引发警告信号:此操作可能抹掉其他协作者的修改。
冲突可以被强制解决,这种情况下,值将会被改写,所有权也会发生转移。
<!--
If you remove a field from a configuration and apply the configuration, server
side apply checks if there are any other field managers that also own the
field. If the field is not owned by any other field managers, it is either
deleted from the live object or reset to its default value, if it has one. The
same rule applies to associative list or map items.
-->
当你从配置文件中删除一个字段,然后应用这个配置文件,
服务器端应用会检查是否还有其他字段管理器也拥有此字段。
如果该字段不被任何其他字段管理器拥有,那就从活动对象中删除它,
或者重置为其默认值(如果有默认值的话)。
该规则同样适用于关联列表associative list和映射map的项。
<!--
Server side apply is meant both as a replacement for the original `kubectl
apply` and as a simpler mechanism for controllers to enact their changes.
If you have Server Side Apply enabled, the control plane tracks managed fields
for all newly created objects.
-->
服务器端应用既是原有 `kubectl apply` 的替代品,
也是控制器发布自身变化的一个简化机制。
如果你启用了服务器端应用,控制平面会跟踪所有新创建对象的被管理字段。
<!--
## Field Management
Compared to the `last-applied` annotation managed by `kubectl`, Server Side
Apply uses a more declarative approach, which tracks a user's field management,
rather than a user's last applied state. This means that as a side effect of
using Server Side Apply, information about which field manager manages each
field in an object also becomes available.
-->
## 字段管理 {#field-management}
相对于通过 `kubectl` 管理的注解 `last-applied`
服务器端应用使用了一种更具声明式特点的方法:
它持续地跟踪用户的字段管理,而不仅仅是最后一次的执行状态。
这就意味着,作为服务器端应用的一个副作用,
关于哪一个字段管理器负责管理对象中的哪个字段的信息,也变得可用了。
<!--
For a user to manage a field, in the Server Side Apply sense, means that the
user relies on and expects the value of the field not to change. The user who
last made an assertion about the value of a field will be recorded as the
current field manager. This can be done either by changing the value with
`POST`, `PUT`, or non-apply `PATCH`, or by including the field in a config sent
to the Server Side Apply endpoint. When using Server-Side Apply, trying to
change a field which is managed by someone else will result in a rejected
request (if not forced, see [Conflicts](#conflicts)).
-->
用户管理字段这件事,在服务器端应用的场景中,意味着用户依赖并期望字段的值不要改变。
最后一次对字段值做出断言的用户将被记录到当前字段管理器。
这可以通过发送 `POST``PUT`
或非应用non-apply方式的 `PATCH` 等命令来修改字段值的方式实现,
或通过把字段放在配置文件中,然后发送到服务器端应用的服务端点的方式实现。
当使用服务器端应用,尝试着去改变一个被其他人管理的字段,
会导致请求被拒绝(在没有设置强制执行时,参见[冲突](#conflicts))。
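作为示意,下面是一个带有显式字段管理器的服务器端应用请求草案(字段管理器名 `my-tool` 与文件名 `my-cm.yaml` 均为假设,需要能访问集群):

```shell
# 以 my-tool 作为字段管理器,向服务器端应用端点提交完整描述的目标(示意)
kubectl apply --server-side --field-manager=my-tool -f my-cm.yaml
```

如果 `my-cm.yaml` 中的某个字段已由其他管理器拥有且取值不同,此请求会被拒绝。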
<!--
When two or more appliers set a field to the same value, they share ownership of
that field. Any subsequent attempt to change the value of the shared field, by any of
the appliers, results in a conflict. Shared field owners may give up ownership
of a field by removing it from their configuration.
Field management is stored in a`managedFields` field that is part of an object's
[`metadata`](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#objectmeta-v1-meta).
A simple example of an object created by Server Side Apply could look like this:
-->
如果两个或以上的应用者均把同一个字段设置为相同值,他们将共享此字段的所有权。
后续任何改变共享字段值的尝试,不管由哪个应用者发起,都会导致冲突。
共享字段的所有者可以放弃字段的所有权,这只需从配置文件中删除该字段即可。
字段管理的信息存储在 `managedFields` 字段中,该字段是对象的
[`metadata`](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#objectmeta-v1-meta)中的一部分。
服务器端应用创建对象的简单示例如下:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
  namespace: default
  labels:
    test-label: test
  managedFields:
  - manager: kubectl
    operation: Apply
    apiVersion: v1
    time: "2010-10-10T00:00:00Z"
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:test-label: {}
      f:data:
        f:key: {}
data:
  key: some value
```
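要查看对象上记录的字段管理信息,可以直接读取其 `metadata.managedFields`。下面是一个示意命令(需要能访问集群,对象名取自上例):

```shell
# 输出 test-cm 的 managedFields示意
kubectl get configmap test-cm -n default -o jsonpath='{.metadata.managedFields}'
```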
<!--
The above object contains a single manager in `metadata.managedFields`. The
manager consists of basic information about the managing entity itself, like
operation type, API version, and the fields managed by it.
This field is managed by the API server and should not be changed by
the user.
-->
上述对象在 `metadata.managedFields` 中包含了唯一的管理器。
管理器由管理实体自身的基本信息组成比如操作类型、API 版本、以及它管理的字段。
{{< note >}}
该字段由 API 服务器管理,用户不应该改动它。
{{< /note >}}
<!--
Nevertheless it is possible to change `metadata.managedFields` through an
`Update` operation. Doing so is highly discouraged, but might be a reasonable
option to try if, for example, the `managedFields` get into an inconsistent
state (which clearly should not happen).
The format of the `managedFields` is described in the
[API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#fieldsv1-v1-meta).
-->
不过,通过 `Update` 操作修改 `metadata.managedFields` 也是可能的。
强烈不建议这么做,但当 `managedFields` 进入不一致的状态
(这显然不应该发生)时,这也许是一种合理的尝试。
`managedFields` 的格式在
[API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#fieldsv1-v1-meta)
文档中描述。
<!--
## Conflicts
A conflict is a special status error that occurs when an `Apply` operation tries
to change a field, which another user also claims to manage. This prevents an
applier from unintentionally overwriting the value set by another user. When
this occurs, the applier has 3 options to resolve the conflicts:
-->
## 冲突 {#conflicts}
冲突是一种特定的错误状态,
发生在执行 `Apply` 操作试图改变一个字段,而恰巧该字段被其他用户声明了所有权时。
这可以防止一个应用者不小心覆盖掉其他用户设置的值。
冲突发生时,应用者有三种办法来解决它:
<!--
* **Overwrite value, become sole manager:** If overwriting the value was
intentional (or if the applier is an automated process like a controller) the
applier should set the `force` query parameter to true and make the request
again. This forces the operation to succeed, changes the value of the field,
and removes the field from all other managers' entries in managedFields.
* **Don't overwrite value, give up management claim:** If the applier doesn't
care about the value of the field anymore, they can remove it from their
config and make the request again. This leaves the value unchanged, and causes
the field to be removed from the applier's entry in managedFields.
* **Don't overwrite value, become shared manager:** If the applier still cares
about the value of the field, but doesn't want to overwrite it, they can
change the value of the field in their config to match the value of the object
on the server, and make the request again. This leaves the value unchanged,
and causes the field's management to be shared by the applier and all other
field managers that already claimed to manage it.
-->
* **覆盖原值,成为唯一的管理器:** 如果打算覆盖该值(或应用者是一个自动化部件,比如控制器),
  应用者应该设置查询参数 `force` 为 true然后再发送一次请求。
  这将强制操作成功,改变字段的值,并从所有其他管理器的 managedFields 条目中删除该字段。
* **不覆盖原值,放弃管理权:** 如果应用者不再关注该字段的值,
  可以从配置文件中删掉它,再重新发送请求。
  这就保持了原值不变,并从 managedFields 的应用者条目中删除该字段。
* **不覆盖原值,成为共享的管理器:** 如果应用者仍然关注字段值,并不想覆盖它,
  他们可以在配置文件中把字段的值改为和服务器对象一样,再重新发送请求。
  这样在不改变字段值的前提下,
  就实现了字段管理被应用者和所有声明了管理权的其他字段管理器共享。
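以第一种办法为例kubectl 通过 `--force-conflicts` 标志来设置 `force` 查询参数。下面是一个示意命令(需要能访问集群,文件名 `my-cm.yaml` 只是假设的示例):

```shell
# 强制解决冲突:覆盖冲突字段的值,并成为这些字段的唯一管理器(示意)
kubectl apply --server-side --force-conflicts -f my-cm.yaml
```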
<!--
## Managers
Managers identify distinct workflows that are modifying the object (especially
useful on conflicts!), and can be specified through the `fieldManager` query
parameter as part of a modifying request. It is required for the apply endpoint,
though kubectl will default it to `kubectl`. For other updates, its default is
computed from the user-agent.
-->
## 管理器 {#managers}
管理器标识了正在修改对象的各个不同的工作流(在冲突时尤其有用),
它可以作为修改请求的一部分,通过 `fieldManager` 查询参数来指定。
应用apply端点要求提供此参数不过 kubectl 会把它默认设置为 `kubectl`
对于其他更新操作,其默认值是根据用户代理user-agent计算得来。
<!--
## Apply and Update
The two operation types considered by this feature are `Apply` (`PATCH` with
content type `application/apply-patch+yaml`) and `Update` (all other operations
which modify the object). Both operations update the `managedFields`, but behave
a little differently.
Whether you are submitting JSON data or YAML data, use
`application/apply-patch+yaml` as the `Content-Type` header value.
All JSON documents are valid YAML.
-->
## 应用和更新 {#apply-and-update}
此特性涉及两类操作,分别是 `Apply`
(内容类型为 `application/apply-patch+yaml``PATCH` 请求)
`Update` (所有修改对象的其他操作)。
这两类操作都会更新字段 `managedFields`,但行为表现有一点不同。
{{< note >}}
不管你提交的是 JSON 数据还是 YAML 数据,
都要使用 `application/apply-patch+yaml` 作为 `Content-Type` 的值。
所有的 JSON 文档 都是合法的 YAML。
{{< /note >}}
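上述 `Apply` 操作也可以用原始的 HTTP 请求来表达。下面是一个示意性草案,假设已通过 `kubectl proxy` 在本地 8001 端口开放了 API对象名沿用前文示例

```shell
# 使用 Apply 内容类型对 ConfigMap 执行服务器端应用
# 示意;假设 kubectl proxy 正运行在 localhost:8001
curl -X PATCH \
  'http://localhost:8001/api/v1/namespaces/default/configmaps/test-cm?fieldManager=my-manager' \
  -H 'Content-Type: application/apply-patch+yaml' \
  --data-binary @- <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
data:
  key: some value
EOF
```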
<!--
For instance, only the apply operation fails on conflicts while update does
not. Also, apply operations are required to identify themselves by providing a
`fieldManager` query parameter, while the query parameter is optional for update
operations. Finally, when using the apply operation you cannot have
`managedFields` in the object that is being applied.
An example object with multiple managers could look like this:
-->
例如,在冲突发生的时候,只有 `Apply` 操作失败,而 `Update` 则不会。
此外,`Apply` 操作必须通过提供一个 `fieldManager` 查询参数来标识自身,
而此查询参数对于 `Update` 操作则是可选的。
最后,当使用 `Apply` 操作时,你不能在所应用的对象中包含 `managedFields`
一个包含多个管理器的对象,示例如下:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
  namespace: default
  labels:
    test-label: test
  managedFields:
  - manager: kubectl
    operation: Apply
    apiVersion: v1
    fields:
      f:metadata:
        f:labels:
          f:test-label: {}
  - manager: kube-controller-manager
    operation: Update
    apiVersion: v1
    time: '2019-03-30T16:00:00.000Z'
    fields:
      f:data:
        f:key: {}
data:
  key: new value
```
<!--
In this example, a second operation was run as an `Update` by the manager called
`kube-controller-manager`. The update changed a value in the data field which
caused the field's management to change to the `kube-controller-manager`.
If this update would have been an `Apply` operation, the operation
would have failed due to conflicting ownership.
-->
在这个例子中,
第二个操作被管理器 `kube-controller-manager``Update` 的方式运行。
`update` 更改 data 字段的值,
并使得字段管理器被改为 `kube-controller-manager`
如果把 `update` 操作改为 `Apply`,那就会因为所有权冲突的原因,导致操作失败。
<!--
## Merge strategy
The merging strategy, implemented with Server Side Apply, provides a generally
more stable object lifecycle. Server Side Apply tries to merge fields based on
the fact who manages them instead of overruling just based on values. This way
it is intended to make it easier and more stable for multiple actors updating
the same object by causing less unexpected interference.
-->
## 合并策略 {#merge-strategy}
由服务器端应用实现的合并策略,提供了一个总体更稳定的对象生命周期。
服务器端应用试图依据谁在管理字段来合并它们,而不是单纯根据值来覆盖。
这样做的目的是让多个参与者可以更简单、更稳定地更新同一个对象,减少意外干扰。
<!--
When a user sends a "fully-specified intent" object to the Server Side Apply
endpoint, the server merges it with the live object favoring the value in the
applied config if it is specified in both places. If the set of items present in
the applied config is not a superset of the items applied by the same user last
time, each missing item not managed by any other appliers is removed. For
more information about how an object's schema is used to make decisions when
merging, see
[sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff).
-->
当用户发送一个“完整描述的目标”对象到服务器端应用的服务端点,
服务器会将它和活动对象做一次合并,如果两者中有重复定义的值,那就以配置文件中的为准。
如果配置文件中的项目集合不是此用户上一次应用的项目的超集,
每个缺失的、没有其他应用者管理的项目会被删除。
关于合并时用来做决策的对象模式schema的更多信息请参见
[sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff)。
<!--
A number of markers were added in Kubernetes 1.16 and 1.17, to allow API
developers to describe the merge strategy supported by lists, maps, and
structs. These markers can be applied to objects of the respective type,
in Go files or in the OpenAPI schema definition of the
[CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io):
-->
Kubernetes 1.16 和 1.17 中添加了一些标记,
允许 API 开发人员描述 list、map 和 struct 所支持的合并策略。
这些标记可应用到相应类型的对象,在 Go 文件中或在
[CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io)
的 OpenAPI 模式定义中指定:
<!--
| Golang marker | OpenAPI extension | Accepted values | Description | Introduced in |
|---|---|---|---|---|
| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | Applicable to lists. `atomic` and `set` apply to lists with scalar elements only. `map` applies to lists of nested types only. If configured as `atomic`, the entire list is replaced during merge; a single manager manages the list as a whole at any one time. If `granular`, different managers can manage entries separately. | 1.16 |
| `//+listMapKey` | `x-kubernetes-list-map-keys` | Slice of map keys that uniquely identify entries for example `["port", "protocol"]` | Only applicable when `+listType=map`. A slice of strings whose values in combination must uniquely identify list entries. While there can be multiple keys, `listMapKey` is singular because keys need to be specified individually in the Go type. | 1.16 |
| `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to maps. `atomic` means that the map can only be entirely replaced by a single manager. `granular` means that the map supports separate managers updating individual fields. | 1.17 |
| `//+structType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to structs; otherwise same usage and OpenAPI annotation as `//+mapType`.| 1.17 |
-->
| Golang 标记 | OpenAPI extension | 可接受的值 | 描述 | 引入版本 |
|---|---|---|---|---|
| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | 适用于 list。 `atomic``set` 适用于只包含标量元素的 list。 `map` 适用于只包含嵌套类型的 list。 如果配置为 `atomic`, 合并时整个列表会被替换掉; 任何时候,唯一的管理器都把列表作为一个整体来管理。如果是细粒度管理,不同的管理器也可以分开管理条目。 | 1.16 |
| `//+listMapKey` | `x-kubernetes-list-map-keys` | 用来唯一标识条目的 map keys 切片,例如 `["port", "protocol"]` | 仅当 `+listType=map` 时适用。组合值的字符串切片必须唯一标识列表中的条目。尽管有多个 key`listMapKey` 是单数的,这是因为 key 需要在 Go 类型中单独的指定。 | 1.16 |
| `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | 适用于 map。 `atomic` 指 map 只能被单个的管理器整个的替换。 `granular` 指 map 支持多个管理器各自更新自己的字段。 | 1.17 |
| `//+structType` | `x-kubernetes-map-type` | `atomic`/`granular` | 适用于 struct除此之外用法和 OpenAPI 注解与 `//+mapType` 相同。| 1.17 |
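下面是一个假想的 CRD 模式片段,示意上表中的 OpenAPI 扩展的用法(字段名 `ports`、`settings` 均为虚构):

```yaml
# 假想的 CRD openAPIV3Schema 片段(仅为示意)
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        ports:
          type: array
          x-kubernetes-list-type: map        # 按 key 合并的 list
          x-kubernetes-list-map-keys: ["port", "protocol"]
          items:
            type: object
            required: ["port", "protocol"]
            properties:
              port:
                type: integer
              protocol:
                type: string
        settings:
          type: object
          x-kubernetes-map-type: granular    # 不同管理器可分别更新各个字段
          additionalProperties:
            type: string
```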
<!--
### Custom Resources
By default, Server Side Apply treats custom resources as unstructured data. All
keys are treated the same as struct fields, and all lists are considered atomic.
If the Custom Resource Definition defines a
[schema](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io)
that contains annotations as defined in the previous "Merge Strategy"
section, these annotations will be used when merging objects of this
type.
-->
### 自定义资源 {#custom-resources}
默认情况下,服务器端应用把自定义资源看做非结构化数据。
所有的键key就像 struct 的字段一样被处理,
所有的 list 被认为是原子性的。
如果自定义资源定义CustomResourceDefinitionCRD定义了一个
[模式](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io)
其中包含前文“合并策略”章节中定义的注解,
在合并此类型的对象时,就会使用这些注解。
<!--
### Using Server-Side Apply in a controller
As a developer of a controller, you can use server-side apply as a way to
simplify the update logic of your controller. The main differences with a
read-modify-write and/or patch are the following:
* the applied object must contain all the fields that the controller cares about.
* there are no way to remove fields that haven't been applied by the controller
before (controller can still send a PATCH/UPDATE for these use-cases).
* the object doesn't have to be read beforehand, `resourceVersion` doesn't have
to be specified.
It is strongly recommended for controllers to always "force" conflicts, since they
might not be able to resolve or act on these conflicts.
-->
### 在控制器中使用服务器端应用 {#using-server-side-apply-in-controller}
控制器的开发人员可以把服务器端应用作为简化控制器更新逻辑的方式。
与读取-修改-写入read-modify-write和/或 patch 的主要区别如下:
* 所应用的对象必须包含控制器关注的所有字段。
* 无法删除控制器以前没有应用过的字段
(控制器在这类用例中,仍然可以发送 PATCH/UPDATE
* 不必事先读取对象,也不必指定 `resourceVersion`
强烈建议控制器总是在冲突时强制执行force这是因为它们可能无法解决或处理这些冲突。
<!--
### Transferring Ownership
In addition to the concurrency controls provided by [conflict resolution](#conflicts),
Server Side Apply provides ways to perform coordinated
field ownership transfers from users to controllers.
This is best explained by example. Let's look at how to safely transfer
ownership of the `replicas` field from a user to a controller while enabling
automatic horizontal scaling for a Deployment, using the HorizontalPodAutoscaler
resource and its accompanying controller.
Say a user has defined deployment with `replicas` set to the desired value:
-->
### 转移所有权 {#transferring-ownership}
除了通过[冲突解决方案](#conflicts)提供的并发控制,
服务器端应用提供了一些协作方式来将字段所有权从用户转移到控制器。
最好通过例子来说明这一点。
让我们来看看,在使用 HorizontalPodAutoscaler 资源和与之配套的控制器,
且开启了 Deployment 的自动水平扩展功能之后,
怎么安全的将 `replicas` 字段的所有权从用户转移到控制器。
假设用户定义了 Deployment`replicas` 字段已经设置为期望的值:
{{< codenew file="application/ssa/nginx-deployment.yaml" >}}
<!--
And the user has created the deployment using server side apply like so:
-->
并且,用户使用服务器端应用,像这样创建 Deployment
```shell
kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side
```
<!--
Then later, HPA is enabled for the deployment, e.g.:
-->
然后,为 Deployment 启用 HPA例如
```shell
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10
```
<!--
Now, the user would like to remove `replicas` from their configuration, so they
don't accidentally fight with the HPA controller. However, there is a race: it
might take some time before HPA feels the need to adjust `replicas`, and if
the user removes `replicas` before the HPA writes to the field and becomes
its owner, then apiserver will set `replicas` to 1, its default value. This
is not what the user wants to happen, even temporarily.
-->
现在,用户希望从他们的配置中删除 `replicas`,这样他们就不会不小心与 HPA 控制器发生冲突。
然而,这里存在一个竞态:
在 HPA 感到有需要调整 `replicas` 之前会有一个时间窗口,
如果在 HPA 写入该字段并成为其所有者之前,用户删除了 `replicas`
那 API 服务器就会把 `replicas` 的值设为 1也就是默认值。
这不是用户希望发生的事情,即使是暂时的。
<!--
There are two solutions:
- (easy) Leave `replicas` in the configuration; when HPA eventually writes to that
field, the system gives the user a conflict over it. At that point, it is safe
to remove from the configuration.
- (more advanced) If, however, the user doesn't want to wait, for example
because they want to keep the cluster legible to coworkers, then they can take
the following steps to make it safe to remove `replicas` from their
configuration:
First, the user defines a new configuration containing only the `replicas` field:
-->
这里有两个解决方案:
- (容易)把 `replicas` 留在配置文件中;当 HPA 最终写入那个字段时,
  系统会就此向用户报告一个冲突。在这个时间点,可以安全地从配置文件中删除该字段。
- (高级)然而,如果用户不想等待,比如他们想让集群对同事保持清晰易读,
  那他们就可以执行以下步骤,安全地从配置文件中删除 `replicas`
首先,用户定义一个只包含 `replicas` 字段的新配置文件:
{{< codenew file="application/ssa/nginx-deployment-replicas-only.yaml" >}}
<!--
The user applies that configuration using the field manager name `handover-to-hpa`:
-->
用户使用名为 `handover-to-hpa` 的字段管理器,应用此配置文件:
```shell
kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replicas-only.yaml \
--server-side --field-manager=handover-to-hpa \
--validate=false
```
<!--
If the apply results in a conflict with the HPA controller, then do nothing. The
conflict just indicates the controller has claimed the field earlier in the
process than it sometimes does.
At this point the user may remove the `replicas` field from their configuration.
-->
如果应用操作和 HPA 控制器产生冲突,那什么都不做。
冲突只是表明控制器在更早的流程中已经对字段声明过所有权。
在此时间点,用户可以从配置文件中删除 `replicas`
{{< codenew file="application/ssa/nginx-deployment-no-replicas.yaml" >}}
<!--
Note that whenever the HPA controller sets the `replicas` field to a new value,
the temporary field manager will no longer own any fields and will be
automatically deleted. No clean up is required.
-->
注意,只要 HPA 控制器为 `replicas` 设置了一个新值,
该临时字段管理器将不再拥有任何字段,会被自动删除。
这里不需要执行清理工作。
<!--
## Transferring Ownership Between Users
Users can transfer ownership of a field between each other by setting the field
to the same value in both of their applied configs, causing them to share
ownership of the field. Once the users share ownership of the field, one of them
can remove the field from their applied configuration to give up ownership and
complete the transfer to the other user.
-->
## 在用户之间转移所有权 {#transferring-ownership-between-users}
通过在配置文件中把一个字段设置为相同的值,用户可以在他们之间转移字段的所有权,
从而共享了字段的所有权。
当用户共享了字段的所有权,任何一个用户可以从他的配置文件中删除该字段,
并应用该变更,从而放弃所有权,并实现了所有权向其他用户的转移。
<!--
## Comparison with Client Side Apply
A consequence of the conflict detection and resolution implemented by Server
Side Apply is that an applier always has up to date field values in their local
state. If they don't, they get a conflict the next time they apply. Any of the
three options to resolve conflicts results in the applied configuration being an
up to date subset of the object on the server's fields.
This is different from Client Side Apply, where outdated values which have been
overwritten by other users are left in an applier's local config. These values
only become accurate when the user updates that specific field, if ever, and an
applier has no way of knowing whether their next apply will overwrite other
users' changes.
Another difference is that an applier using Client Side Apply is unable to
change the API version they are using, but Server Side Apply supports this use
case.
-->
## 与客户端应用的对比 {#comparison-with-client-side-apply}
由服务器端应用实现的冲突检测和解决方案的一个结果是,
应用者总是可以在本地状态中得到最新的字段值。
如果得不到最新值,下次执行应用操作时就会发生冲突。
解决冲突的三个选项中,任何一个都会保证:所应用的配置文件是服务器上对象字段的最新子集。
这和客户端应用Client-Side Apply不同后者将被其他用户覆盖的过期值留在应用者本地的配置文件中。
只有当用户更新该特定字段时(如果有这一天),这些值才会重新变得准确,
而且应用者没有途径知道他们的下一次应用操作是否会覆盖其他用户的修改。
另一个区别是,使用客户端应用的应用者不能改变他们正在使用的 API 版本,但服务器端应用支持这个场景。
<!--
## Upgrading from client-side apply to server-side apply
Client-side apply users who manage a resource with `kubectl apply` can start
using server-side apply with the following flag.
-->
## 从客户端应用升级到服务器端应用 {#upgrading-from-client-side-apply-to-server-side-apply}
客户端应用方式时,用户使用 `kubectl apply` 管理资源,
可以通过使用下面标记切换为使用服务器端应用。
```shell
kubectl apply --server-side [--dry-run=server]
```
<!--
By default, field management of the object transfers from client-side apply to
kubectl server-side apply without encountering conflicts.
Keep the `last-applied-configuration` annotation up to date.
The annotation infers client-side apply's managed fields.
Any fields not managed by client-side apply raise conflicts.
For example, if you used `kubectl scale` to update the replicas field after
client-side apply, then this field is not owned by client-side apply and
creates conflicts on `kubectl apply --server-side`.
This behavior applies to server-side apply with the `kubectl` field manager.
As an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example. The default field
manager for kubectl server-side apply is `kubectl`.
-->
默认情况下,对象的字段管理从客户端应用方式迁移到 kubectl 触发的服务器端应用时,不会发生冲突。
{{< caution >}}
保持注解 `last-applied-configuration` 是最新的。
从注解能推断出字段是由客户端应用管理的。
任何没有被客户端应用管理的字段将引发冲突。
举例说明,比如你在客户端应用之后,
使用 `kubectl scale` 去更新 `replicas` 字段,
可是该字段并没有被客户端应用所拥有,
在执行 `kubectl apply --server-side` 时就会产生冲突。
{{< /caution >}}
此操作以 `kubectl` 作为字段管理器来应用到服务器端应用。
作为例外,可以指定一个不同的、非默认字段管理器停止的这种行为,如下面的例子所示。
对于 kubectl 触发的服务器端应用,默认的字段管理器是 `kubectl`
```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```
<!--
## Downgrading from server-side apply to client-side apply
If you manage a resource with `kubectl apply --server-side`,
you can downgrade to client-side apply directly with `kubectl apply`.
Downgrading works because kubectl server-side apply keeps the
`last-applied-configuration` annotation up-to-date if you use
`kubectl apply`.
This behavior applies to server-side apply with the `kubectl` field manager.
As an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example. The default field
manager for kubectl server-side apply is `kubectl`.
-->
## 从服务器端应用降级到客户端应用 {#downgrading-from-server-side-apply-to-client-side-apply}
如果你用 `kubectl apply --server-side` 管理一个资源,
可以直接用 `kubectl apply` 命令将其降级为客户端应用。
降级之所以可行,这是因为 `kubectl server-side apply`
会保存最新的 `last-applied-configuration` 注解。
此操作以 `kubectl` 作为字段管理器应用到服务器端应用。
作为例外,可以指定一个不同的、非默认字段管理器停止这种行为,如下面的例子所示。
对于 kubectl 触发的服务器端应用,默认的字段管理器是 `kubectl`
```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```
<!--
## API Endpoint
With the Server Side Apply feature enabled, the `PATCH` endpoint accepts the
additional `application/apply-patch+yaml` content type. Users of Server Side
Apply can send partially specified objects as YAML to this endpoint. When
applying a configuration, one should always include all the fields that they
have an opinion about.
-->
## API 端点 {#api-endpoint}
启用了服务器端应用特性之后,
`PATCH` 服务端点接受额外的内容类型 `application/apply-patch+yaml`
服务器端应用的用户就可以把 YAMl 格式的
部分定义对象partially specified objects发送到此端点。
当一个配置文件被应用时,它应该包含所有体现你意图的字段。
<!--
## Clearing ManagedFields
It is possible to strip all managedFields from an object by overwriting them
using `MergePatch`, `StrategicMergePatch`, `JSONPatch` or `Update`, so every
non-apply operation. This can be done by overwriting the managedFields field
with an empty entry. Two examples are:
-->
## 清除 ManagedFields {#clearing-managedfields}
可以从对象中剥离所有 managedField
实现方法是通过使用 `MergePatch``StrategicMergePatch`
`JSONPatch``Update`、以及所有的非应用方式的操作来覆盖它。
这可以通过用空条目覆盖 managedFields 字段的方式实现。
```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Content-Type: application/merge-patch+json
Accept: application/json
Data: {"metadata":{"managedFields": [{}]}}
```
```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Content-Type: application/json-patch+json
Accept: application/json
Data: [{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}]
```
<!--
This will overwrite the managedFields with a list containing a single empty
entry that then results in the managedFields being stripped entirely from the
object. Note that just setting the managedFields to an empty list will not
reset the field. This is on purpose, so managedFields never get stripped by
clients not aware of the field.
In cases where the reset operation is combined with changes to other fields
than the managedFields, this will result in the managedFields being reset
first and the other changes being processed afterwards. As a result the
applier takes ownership of any fields updated in the same request.
-->
这一操作将用只包含一个空条目的 list 覆写 managedFields
来实现从对象中整个的去除 managedFields。
注意,只把 managedFields 设置为空 list 并不会重置字段。
这么做是有目的的,所以 managedFields 将永远不会被与该字段无关的客户删除。
在重置操作结合 managedFields 以外其他字段更改的场景中,
将导致 managedFields 首先被重置,其他改变被押后处理。
其结果是,应用者取得了同一个请求中所有字段的所有权。
<!--
Server Side Apply does not correctly track ownership on
sub-resources that don't receive the resource object type. If you are
using Server Side Apply with such a sub-resource, the changed fields
won't be tracked.
-->
{{< caution >}}
对于不接受资源对象类型的子资源sub-resources
服务器端应用不能正确地跟踪其所有权。
如果你对这样的子资源使用服务器端应用,变更的字段将不会被跟踪。
{{< /caution >}}
<!--
## Disabling the feature
Server Side Apply is a beta feature, so it is enabled by default. To turn this
[feature gate](/docs/reference/command-line-tools-reference/feature-gates) off,
you need to include the `--feature-gates ServerSideApply=false` flag when
starting `kube-apiserver`. If you have multiple `kube-apiserver` replicas, all
should have the same flag setting.
-->
## 禁用此功能 {#disabling-the-feature}
服务器端应用是一个 beta 版特性,默认启用。
要关闭此[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates)
你需要在启动 `kube-apiserver` 时包含参数 `--feature-gates ServerSideApply=false`
如果你有多个 `kube-apiserver` 副本,他们都应该有相同的标记设置。

View File

@ -1,916 +0,0 @@
---
title: 使用 Minikube 安装 Kubernetes
weight: 30
content_type: concept
---
<!--
reviewers:
- dlorenc
- balopat
- aaron-prindle
title: Installing Kubernetes with Minikube
content_type: concept
-->
<!-- overview -->
<!--
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
-->
Minikube 是一种可以让你在本地轻松运行 Kubernetes 的工具。
Minikube 在笔记本电脑上的虚拟机VM中运行单节点 Kubernetes 集群,
供那些希望尝试 Kubernetes 或进行日常开发的用户使用。
<!-- body -->
<!--
## Minikube Features
Minikube supports the following Kubernetes features:
-->
## Minikube 功能
Minikube 支持以下 Kubernetes 功能:
<!--
* DNS
* NodePorts
* ConfigMaps and Secrets
* Dashboards
* Container Runtime: Docker, [CRI-O](https://github.com/kubernetes-incubator/cri-o), and [containerd](https://github.com/containerd/containerd)
* Enabling CNI (Container Network Interface)
* Ingress
-->
* DNS
* NodePorts
* ConfigMaps 和 Secrets
* Dashboards
* 容器运行时: Docker、[CRI-O](https://github.com/kubernetes-incubator/cri-o) 以及
[containerd](https://github.com/containerd/containerd)
* 启用 CNI (容器网络接口)
* Ingress
<!--
## Installation
See [Installing Minikube](/docs/tasks/tools/install-minikube/).
-->
## 安装
请参阅[安装 Minikube](/zh/docs/tasks/tools/install-minikube/)。
<!--
## Quickstart
This brief demo guides you on how to start, use, and delete Minikube locally. Follow the steps given below to start and explore Minikube.
-->
## 快速开始
这个简短的演示将指导你如何在本地启动、使用和删除 Minikube。请按照以下步骤开始探索 Minikube。
<!--
1. Start Minikube and create a cluster:
-->
1. 启动 Minikube 并创建一个集群:
```shell
minikube start
```
<!--
The output is similar to this:
-->
输出类似于:
```
Starting local Kubernetes cluster...
Running pre-create checks...
Creating machine...
Starting local Kubernetes cluster...
```
<!--
For more information on starting your cluster on a specific Kubernetes version, VM, or container runtime, see [Starting a Cluster](#starting-a-cluster).
-->
有关使用特定 Kubernetes 版本、VM 或容器运行时启动集群的详细信息,请参阅[启动集群](#starting-a-cluster)。
<!--
2. Now, you can interact with your cluster using kubectl. For more information, see [Interacting with Your Cluster](#interacting-with-your-cluster).
-->
2. 现在,你可以使用 kubectl 与集群进行交互。有关详细信息,请参阅[与集群交互](#interacting-with-your-cluster)。
<!--
Lets create a Kubernetes Deployment using an existing image named `echoserver`, which is a simple HTTP server and expose it on port 8080 using `-port`.
-->
让我们使用名为 `echoserver` 的镜像创建一个 Kubernetes Deployment并使用 `--port` 在端口 8080 上暴露服务。`echoserver` 是一个简单的 HTTP 服务器。
```shell
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
```
<!--
The output is similar to this:
-->
输出类似于:
```
deployment.apps/hello-minikube created
```
<!--
3. To access the `hello-minikube` Deployment, expose it as a Service:
-->
3. 要访问 `hello-minikube` Deployment需要将其作为 Service 公开:
```shell
kubectl expose deployment hello-minikube --type=NodePort --port=8080
```
<!--
The option `-type=NodePort` specifies the type of the Service.
-->
选项 `--type = NodePort` 指定 Service 的类型。
<!--
The output is similar to this:
-->
输出类似于:
```
service/hello-minikube exposed
```
<!--
4. The `hello-minikube` Pod is now launched but you have to wait until the Pod is up before accessing it via the exposed Service.
-->
4. 现在 `hello-minikube` Pod 已经启动,但是你必须等到 Pod 启动完全才能通过暴露的 Service 访问它。
<!--
Check if the Pod is up and running:
-->
检查 Pod 是否启动并运行:
```shell
kubectl get pod
```
<!--
If the output shows the `STATUS` as `ContainerCreating`, the Pod is still being created:
-->
如果输出显示 `STATUS``ContainerCreating`,则表明 Pod 仍在创建中:
```
NAME READY STATUS RESTARTS AGE
hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s
```
<!--
If the output shows the `STATUS` as `Running`, the Pod is now up and running:
-->
如果输出显示 `STATUS``Running`,则 Pod 现在正在运行:
```
NAME READY STATUS RESTARTS AGE
hello-minikube-3383150820-vctvh 1/1 Running 0 13s
```
<!--
5. Get the URL of the exposed Service to view the Service details:
-->
5. 获取暴露 Service 的 URL 以查看 Service 的详细信息:
```shell
minikube service hello-minikube --url
```
<!--
6. To view the details of your local cluster, copy and paste the URL you got as the output, on your browser.
-->
6. 要查看本地集群的详细信息,请在浏览器中复制粘贴并访问上一步骤输出的 URL。
<!--
The output is similar to this:
-->
输出类似于:
```
Hostname: hello-minikube-7c77b68cff-8wdzq
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=172.17.0.1
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=http://192.168.99.100:8080/
Request Headers:
accept=*/*
host=192.168.99.100:30674
user-agent=curl/7.47.0
Request Body:
-no body in request-
```
<!--
If you no longer want the Service and cluster to run, you can delete them.
-->
如果你不再希望运行 Service 和集群,则可以删除它们。
<!--
7. Delete the `hello-minikube` Service:
-->
7. 删除 `hello-minikube` Service
```shell
kubectl delete services hello-minikube
```
<!--
The output is similar to this:
-->
输出类似于:
```
service "hello-minikube" deleted
```
<!--
8. Delete the `hello-minikube` Deployment:
-->
8. 删除 `hello-minikube` Deployment
```shell
kubectl delete deployment hello-minikube
```
<!--
The output is similar to this:
-->
输出类似于:
```
deployment.extensions "hello-minikube" deleted
```
<!--
9. Stop the local Minikube cluster:
-->
9. 停止本地 Minikube 集群:
```shell
minikube stop
```
<!--
The output is similar to this:
-->
输出类似于:
```
Stopping "minikube"...
"minikube" stopped.
```
<!--
For more information, see [Stopping a Cluster](#stopping-a-cluster).
-->
有关更多信息,请参阅[停止集群](#stopping-a-cluster)。
<!--
10. Delete the local Minikube cluster:
-->
10. 删除本地 Minikube 集群:
```shell
minikube delete
```
<!--
The output is similar to this:
-->
输出类似于:
```
Deleting "minikube" ...
The "minikube" cluster has been deleted.
```
<!--
For more information, see [Deleting a cluster](#deleting-a-cluster).
-->
有关更多信息,请参阅[删除集群](#deletion-a-cluster)。
<!--
## Managing your Cluster
### Starting a Cluster
The `minikube start` command can be used to start your cluster.
-->
## 管理你的集群
### 启动集群 {#starting-a-cluster}
`minikube start` 命令可用于启动集群。
<!--
This command creates and configures a Virtual Machine that runs a single-node Kubernetes cluster.
This command also configures your [kubectl](/docs/user-guide/kubectl-overview/) installation to communicate with this cluster.
-->
此命令将创建并配置一台虚拟机,使其运行单节点 Kubernetes 集群。
此命令还会配置你的 [kubectl](/zh/docs/reference/kubectl/overview/) 安装,以便使其能与你的 Kubernetes 集群正确通信。
<!--
If you are behind a web proxy, you need to pass this information to the `minikube start` command:
Unfortunately, setting the environment variables alone does not work.
Minikube also creates a "minikube" context, and sets it to default in kubectl.
To switch back to this context, run this command: `kubectl config use-context minikube`.
-->
{{< note >}}
如果你启用了 web 代理,则需要将此信息传递给 `minikube start` 命令:
```shell
minikube start --docker-env http_proxy=<my proxy> --docker-env https_proxy=<my proxy> --docker-env no_proxy=192.168.99.0/24
```
不幸的是,单独设置环境变量不起作用。
Minikube 还创建了一个 `minikube` 上下文,并将其设置为 kubectl 的默认上下文。
要切换回此上下文,请运行以下命令:`kubectl config use-context minikube`。
{{< /note >}}
<!--
#### Specifying the Kubernetes version
You can specify the version of Kubernetes for Minikube to use byadding the `--kubernetes-version` string to the `minikube start` command. Forexample, to run version {{< param "fullversion" >}}, you would run the following:
-->
#### 指定 Kubernetes 版本
你可以通过将 `--kubernetes-version` 字符串添加到 `minikube start` 命令来指定要用于
Minikube 的 Kubernetes 版本。例如,要运行版本 {{< param "fullversion" >}},你可以运行以下命令:
```shell
minikube start --kubernetes-version {{< param "fullversion" >}}
```
<!--
#### Specifying the VM driver
You can change the VM driver by adding the `-vm-driver=<enter_driver_name>` flag to `minikube start`.
-->
#### 指定 VM 驱动程序 {#specifying-the-vm-driver}
你可以通过将 `--vm-driver=<enter_driver_name>` 参数添加到 `minikube start` 来更改 VM 驱动程序。
<!--
For example the command would be.
-->
例如命令:
```shell
minikube start --vm-driver=<driver_name>
```
<!--
Minikube supports the following drivers:
-->
Minikube 支持以下驱动程序:
<!--
See [DRIVERS](https://minikube.sigs.k8s.io/docs/drivers/) for details on supported drivers and how to install plugins.
-->
{{< note >}}
有关支持的驱动程序以及如何安装插件的详细信息,请参阅[驱动程序](https://minikube.sigs.k8s.io/docs/drivers/)。
{{< /note >}}
<!--
* virtualbox
* vmwarefusion
* kvm2 ([driver installation](https://minikube.sigs.k8s.io/docs/drivers/#kvm2-driver))
* hyperkit ([driver installation](https://minikube.sigs.k8s.io/docs/drivers/#hyperkit-driver))
* hyperv ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver))
Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`.
* vmware ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#vmware-unified-driver)) (VMware unified driver)
* none (Runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker ([docker install](https://docs.docker.com/install/linux/docker-ce/ubuntu/)) and a Linux environment)
-->
* virtualbox
* vmwarefusion
* kvm2 ([驱动安装](https://minikube.sigs.k8s.io/docs/drivers/#kvm2-driver))
* hyperkit ([驱动安装](https://minikube.sigs.k8s.io/docs/drivers/#hyperkit-driver))
* hyperv ([驱动安装](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver))
<!--
Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`.
-->
请注意,下面的 IP 是动态的,可以更改。可以使用 `minikube ip` 检索。
* vmware ([驱动安装](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#vmware-unified-driver)) VMware 统一驱动)
* none (在主机上运行Kubernetes组件而不是在 VM 中。使用该驱动依赖 Docker
([安装 Docker](https://docs.docker.com/install/linux/docker-ce/ubuntu/)) 和 Linux 环境)
<!--
#### Starting a cluster on alternative container runtimes
You can start Minikube on the following container runtimes.
-->
#### 通过别的容器运行时启动集群
你可以通过以下容器运行时启动 Minikube。
{{< tabs name="container_runtimes" >}}
{{% tab name="containerd" %}}
<!--
To use [containerd](https://github.com/containerd/containerd) as the container runtime, run:
-->
要使用 [containerd](https://github.com/containerd/containerd) 作为容器运行时,请运行:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--container-runtime=containerd \
--bootstrapper=kubeadm
```
<!--
Or you can use the extended version:
-->
或者你可以使用扩展版本:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--extra-config=kubelet.container-runtime=remote \
--extra-config=kubelet.container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--extra-config=kubelet.image-service-endpoint=unix:///run/containerd/containerd.sock \
--bootstrapper=kubeadm
```
{{% /tab %}}
{{% tab name="CRI-O" %}}
<!--
To use [CRI-O](https://github.com/kubernetes-incubator/cri-o) as the container runtime, run:
-->
要使用 [CRI-O](https://github.com/kubernetes-incubator/cri-o) 作为容器运行时,请运行:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--container-runtime=cri-o \
--bootstrapper=kubeadm
```
<!--
Or you can use the extended version:
-->
或者你可以使用扩展版本:
```bash
minikube start \
--network-plugin=cni \
--enable-default-cni \
--extra-config=kubelet.container-runtime=remote \
--extra-config=kubelet.container-runtime-endpoint=/var/run/crio.sock \
--extra-config=kubelet.image-service-endpoint=/var/run/crio.sock \
--bootstrapper=kubeadm
```
{{% /tab %}}
{{< /tabs >}}
<!--
#### Use local images by re-using the Docker daemon
-->
#### 通过重用 Docker 守护进程使用本地镜像
<!--
When using a single VM for Kubernetes, it's useful to reuse Minikube's built-in Docker daemon. Reusing the built-in daemon means you don't have to build a Docker registry on your host machine and push the image into it. Instead, you can build inside the same Docker daemon as Minikube, which speeds up local experiments.
-->
当为 Kubernetes 使用单个 VM 时,重用 Minikube 的内置 Docker 守护程序非常有用。重用内置守护程序意味着你不必在主机上构建 Docker 镜像仓库并将镜像推入其中。相反,你可以在与 Minikube 相同的 Docker 守护进程内部构建,这可以加速本地实验。
<!--
Be sure to tag your Docker image with something other than latest and use that tag to pull the image. Because `:latest` is the default value, with a corresponding default image pull policy of `Always`, an image pull error (`ErrImagePull`) eventually results if you do not have the Docker image in the default Docker registry (usually DockerHub).
-->
{{< note >}}
一定要用非 `latest` 的标签来标记你的 Docker 镜像,并使用该标签来拉取镜像。因为 `:latest` 标记的镜像,其默认镜像拉取策略是 `Always`,如果在默认的 Docker 镜像仓库(通常是 DockerHub中没有找到你的 Docker 镜像,最终会导致一个镜像拉取错误(`ErrImagePull`)。
{{< /note >}}
<!--
To work with the Docker daemon on your Mac/Linux host, use the `docker-env command` in your shell:
-->
要在 Mac/Linux 主机上使用 Docker 守护程序,请在 shell 中运行 `docker-env command`
```shell
eval $(minikube docker-env)
```
<!--
You can now use Docker at the command line of your host Mac/Linux machine to communicate with the Docker daemon inside the Minikube VM:
-->
你现在可以在 Mac/Linux 机器的命令行中使用 Docker 与 Minikube VM 内的 Docker 守护程序进行通信:
```shell
docker ps
```
<!--
On Centos 7, Docker may report the following error:
-->
在 Centos 7 上Docker 可能会报如下错误:
```
Could not read CA certificate "/etc/docker/ca.pem": open /etc/docker/ca.pem: no such file or directory
```
<!--
You can fix this by updating /etc/sysconfig/docker to ensure that Minikube's environment changes are respected:
-->
你可以通过更新 /etc/sysconfig/docker 来解决此问题,以确保 Minikube 的环境更改得到遵守:
```shell
< DOCKER_CERT_PATH=/etc/docker
---
> if [ -z "${DOCKER_CERT_PATH}" ]; then
> DOCKER_CERT_PATH=/etc/docker
> fi
```
<!--
### Configuring Kubernetes
-->
### 配置 Kubernetes
<!--
Minikube has a "configurator" feature that allows users to configure the Kubernetes components with arbitrary values.
-->
Minikube 有一个 "configurator" 功能,允许用户使用任意值配置 Kubernetes 组件。
<!--
To use this feature, you can use the `--extra-config` flag on the `minikube start` command.
-->
要使用此功能,可以在 `minikube start` 命令中使用 `--extra-config` 参数。
<!--
This flag is repeated, so you can pass it several times with several different values to set multiple options.
-->
此参数允许重复,因此你可以使用多个不同的值多次传递它以设置多个选项。
<!--
This flag takes a string of the form `component.key=value`, where `component` is one of the strings from the below list, `key` is a value on the configuration struct and `value` is the value to set.
-->
此参数采用 `component.key=value` 形式的字符串,其中 `component` 是下面列表中的一个字符串,`key` 是配置项名称,`value` 是要设置的值。
<!--
Valid keys can be found by examining the documentation for the Kubernetes `componentconfigs` for each component.
-->
通过检查每个组件的 Kubernetes `componentconfigs` 的文档,可以找到有效的 key。
<!--
Here is the documentation for each supported configuration:
-->
下面是每个组件所支持的配置的介绍文档:
* [kubelet](https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/config#KubeletConfiguration)
* [apiserver](https://godoc.org/k8s.io/kubernetes/cmd/kube-apiserver/app/options#ServerRunOptions)
* [proxy](https://godoc.org/k8s.io/kubernetes/pkg/proxy/apis/config#KubeProxyConfiguration)
* [controller-manager](https://godoc.org/k8s.io/kubernetes/pkg/controller/apis/config#KubeControllerManagerConfiguration)
* [etcd](https://godoc.org/github.com/coreos/etcd/etcdserver#ServerConfig)
* [scheduler](https://godoc.org/k8s.io/kubernetes/pkg/scheduler/apis/config#KubeSchedulerConfiguration)
<!--
#### Examples
-->
#### 例子
<!--
To change the `MaxPods` setting to 5 on the Kubelet, pass this flag: `--extra-config=kubelet.MaxPods=5`.
-->
要在 Kubelet 上将 `MaxPods` 设置更改为 5请传递此参数`--extra-config=kubelet.MaxPods=5`。
<!--
This feature also supports nested structs. To change the `LeaderElection.LeaderElect` setting to `true` on the scheduler, pass this flag: `--extra-config=scheduler.LeaderElection.LeaderElect=true`.
-->
此功能还支持嵌套结构。要在调度程序上将 `LeaderElection.LeaderElect` 设置更改为 `true`,请传递此参数:`--extra-config=scheduler.LeaderElection.LeaderElect=true`。
<!--
To set the `AuthorizationMode` on the `apiserver` to `RBAC`, you can use: `--extra-config=apiserver.authorization-mode=RBAC`.
-->
要将 `apiserver``AuthorizationMode` 设置为 `RBAC`,你可以使用:`--extra-config=apiserver.authorization-mode=RBAC`。
<!--
### Stopping a ClusterThe
`minikube stop` command can be used to stop your cluster.
-->
### 停止集群 {#stopsing-a-cluster}
`minikube stop` 命令可用于停止集群。
<!--
This command shuts down the Minikube Virtual Machine, but preserves all cluster state and data.
-->
此命令关闭 Minikube 虚拟机,但保留所有集群状态和数据。
<!--
Starting the cluster again will restore it to its previous state.
-->
再次启动集群会将其恢复到以前的状态。
<!--
### Deleting a ClusterThe
`minikube delete` command can be used to delete your cluster.
-->
### 删除集群 {#deletion-a-cluster}
`minikube delete` 命令可用于删除集群。
<!--
This command shuts down and deletes the Minikube Virtual Machine. No data or state is preserved.
-->
此命令将关闭并删除 Minikube 虚拟机,不保留任何数据或状态。
<!--
## Interacting with Your Cluster
-->
## 与集群交互 {#interacting-with-your-cluster}
<!--
### Kubectl
-->
### Kubectl
<!--
The `minikube start` command creates a [kubectl context](/docs/reference/generated/kubectl/kubectl-commands#-em-set-context-em-) called "minikube".
-->
`minikube start` 命令创建一个名为 `minikube` 的 [kubectl 上下文](/docs/reference/generated/kubectl/kubectl-commands#-em-set-context-em-)。
<!--
This context contains the configuration to communicate with your Minikube cluster.
-->
此上下文包含与 Minikube 集群通信的配置。
<!--
Minikube sets this context to default automatically, but if you need to switch back to it in the future, run:
-->
Minikube 会自动将此上下文设置为默认值,但如果你以后需要切换回它,请运行:
<!--
`kubectl config use-context minikube`,
-->
`kubectl config use-context minikube`
<!--
Or pass the context on each command like this: `kubectl get pods --context=minikube`.
-->
或者像这样,每个命令都附带其执行的上下文:`kubectl get pods --context=minikube`。
<!--
### Dashboard
-->
### 仪表盘
<!--
To access the [Kubernetes Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/), run this command in a shell after starting Minikube to get the address:
-->
要访问 [Kubernetes Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/)
请在启动 Minikube 后在 shell 中运行此命令以获取地址:
```shell
minikube dashboard
```
<!--
### Services
-->
### Service
<!--
To access a Service exposed via a node port, run this command in a shell after starting Minikube to get the address:
-->
要访问通过节点Node端口公开的 Service请在启动 Minikube 后在 shell 中运行此命令以获取地址:
```shell
minikube service [-n NAMESPACE] [--url] NAME
```
<!--
## Networking
-->
## 网络
<!--
The Minikube VM is exposed to the host system via a host-only IP address, that can be obtained with the `minikube ip` command.
-->
Minikube VM 通过 host-only IP 暴露给主机系统,可以通过 `minikube ip` 命令获得该 IP。
<!--
Any services of type `NodePort` can be accessed over that IP address, on the NodePort.
-->
在 NodePort 上,可以通过该 IP 地址访问任何类型为 `NodePort` 的服务。
<!--
To determine the NodePort for your service, you can use a `kubectl` command like this:
-->
要确定服务的 NodePort可以像这样使用 `kubectl` 命令:
<!--
`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'`
-->
`kubectl get service $SERVICE --output='jsonpath="{.spec.ports[0].nodePort}"'`
<!--
## Persistent Volumes
Minikube supports [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) of type `hostPath`.
-->
## 持久卷PersistentVolume
Minikube 支持 `hostPath` 类型的 [持久卷](/docs/concepts/storage/persistent-volumes/)。
<!--
These PersistentVolumes are mapped to a directory inside the Minikube VM.
-->
这些持久卷会映射为 Minikube VM 内的目录。
<!--
The Minikube VM boots into a tmpfs, so most directories will not be persisted across reboots (`minikube stop`).
-->
Minikube VM 引导到 tmpfs因此大多数目录不会在重新启动`minikube stop`)之后保持不变。
<!--
However, Minikube is configured to persist files stored under the following host directories:
-->
但是Minikube 被配置为保存存储在以下主机目录下的文件:
* `/data`
* `/var/lib/minikube`
* `/var/lib/docker`
<!--
Here is an example PersistentVolume config to persist data in the `/data` directory:
-->
下面是一个持久卷配置示例,用于在 `/data` 目录中保存数据:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
hostPath:
path: /data/pv0001/
```
<!--
## Mounted Host Folders
Some drivers will mount a host folder within the VM so that you can easily share files between the VM and host. These are not configurable at the moment and different for the driver and OS you are using.
-->
## 挂载宿主机文件夹
一些驱动程序将在 VM 中挂载一个主机文件夹,以便你可以轻松地在 VM 和主机之间共享文件。目前这些都是不可配置的,并且根据你正在使用的驱动程序和操作系统的不同而不同。
<!--
Host folder sharing is not implemented in the KVM driver yet.
-->
{{< note >}}
KVM 驱动程序中尚未实现主机文件夹共享。
{{< /note >}}
| 驱动 | 操作系统 | 宿主机文件夹 | VM 文件夹 |
| --- | --- | --- | --- |
| VirtualBox | Linux | /home | /hosthome |
| VirtualBox | macOS | /Users | /Users |
| VirtualBox | Windows | C://Users | /c/Users |
| VMware Fusion | macOS | /Users | /Users |
| Xhyve | macOS | /Users | /Users |
<!--
## Private Container Registries
-->
## 私有容器镜像仓库
<!--
To access a private container registry, follow the steps on [this page](/docs/concepts/containers/images/).
-->
要访问私有容器镜像仓库,请按照[此页](/zh/docs/concepts/containers/images/)上的步骤操作。
<!--
We recommend you use `ImagePullSecrets`, but if you would like to configure access on the Minikube VM you can place the `.dockercfg` in the `/home/docker` directory or the `config.json` in the `/home/docker/.docker` directory.
-->
我们建议你使用 `ImagePullSecrets`,但是如果你想在 Minikube VM 上配置访问权限,可以将 `.dockercfg` 放在 `/home/docker` 目录中,或将`config.json` 放在 `/home/docker/.docker` 目录。
<!--
## Add-ons
-->
## 附加组件
<!--
In order to have Minikube properly start or restart custom addons,place the addons you wish to be launched with Minikube in the `~/.minikube/addons`directory. Addons in this folder will be moved to the Minikube VM and launched each time Minikube is started or restarted.
-->
为了让 Minikube 正确启动或重新启动自定义插件,请将你希望用 Minikube 启动的插件放在 `~/.minikube/addons` 目录中。此文件夹中的插件将被移动到 Minikube VM 并在每次 Minikube 启动或重新启动时被启动。
<!--
## Using Minikube with an HTTP Proxy
-->
## 基于 HTTP 代理使用 Minikube
<!--
Minikube creates a Virtual Machine that includes Kubernetes and a Docker daemon.
-->
Minikube 创建了一个包含 Kubernetes 和 Docker 守护进程的虚拟机。
<!--
When Kubernetes attempts to schedule containers using Docker, the Docker daemon may require external network access to pull containers.
-->
当 Kubernetes 尝试使用 Docker 调度容器时Docker 守护程序可能需要访问外部网络来拉取容器镜像。
<!--
If you are behind an HTTP proxy, you may need to supply Docker with the proxy settings.
-->
如果你配置了 HTTP 代理,则可能也需要为 Docker 进行代理设置。
<!--
To do this, pass the required environment variables as flags during `minikube start`.
-->
要实现这一点,可以在 `minikube start` 期间将所需的环境变量作为参数传递给启动命令。
<!--
For example:
-->
例如:
```shell
minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \
--docker-env https_proxy=https://$YOURPROXY:PORT
```
<!--
If your Virtual Machine address is 192.168.99.100, then chances are your proxy settings will prevent `kubectl` from directly reaching it.
-->
如果你的虚拟机地址是 192.168.99.100,那么你的代理设置可能会阻止 `kubectl` 直接访问它。
<!--
To by-pass proxy configuration for this IP address, you should modify your no_proxy settings. You can do so with:
-->
要绕过此 IP 地址的代理配置,你应该修改 no_proxy 设置。你可以这样做:
```shell
export no_proxy=$no_proxy,$(minikube ip)
```
<!--
## Known Issues
-->
## 已知的问题
<!--
Features that require multiple nodes will not work in Minikube.
-->
需要多个节点的功能无法在 Minikube 中使用。
<!--
## Design
-->
## 设计
<!--
Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [kubeadm](https://github.com/kubernetes/kubeadm) to provision a Kubernetes cluster.
-->
Minikube 使用 [libmachine](https://github.com/docker/machine/tree/master/libmachine) 配置虚拟机,[kubeadm](https://github.com/kubernetes/kubeadm) 配置 Kubernetes 集群。
<!--
For more information about Minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md).
-->
有关 Minikube 的更多信息,请参阅[提案](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md)。
<!--
## Additional Links
-->
## 其他链接
<!--
* **Goals and Non-Goals**: For the goals and non-goals of the Minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md).
* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests.
* **Building Minikube**: For instructions on how to build/test Minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md).
* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube, see the [adding dependencies guide](https://minikube.sigs.k8s.io/docs/contrib/building/iso/).
* **Adding a New Addon**: For instructions on how to add a new addon for Minikube, see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md).
* **MicroK8s**: Linux users wishing to avoid running a virtual machine may consider [MicroK8s](https://microk8s.io/) as an alternative.
-->
* **目标和非目标**: 有关 Minikube 项目的目标和非目标,请参阅我们的 [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md)。
* **开发指南**: 请查阅 [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) 获取有关如何提交 Pull Request 的概述。
* **构建 Minikube**: 有关如何从源代码构建/测试 Minikube 的说明,请参阅[构建指南](https://git.k8s.io/minikube/docs/contributors/build_guide.md)。
* **添加新依赖**: 有关如何向 Minikube 添加新依赖的说明,请参阅[添加依赖项指南](https://minikube.sigs.k8s.io/docs/contrib/building/iso/)。
* **添加新插件**: 有关如何为 Minikube 添加新插件的说明,请参阅[添加插件指南](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md)。
* **MicroK8s**: 希望避免运行虚拟机的 Linux 用户可以考虑使用 [MicroK8s](https://microk8s.io/) 作为替代品。
<!--
## Community
Contributions, questions, and comments are all welcomed and encouraged! Minikube developers hang out on [Slack](https://kubernetes.slack.com) in the #minikube channel (get an invitation [here](http://slack.kubernetes.io/)). We also have the [kubernetes-dev Google Groups mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). If you are posting to the list please prefix your subject with "minikube: ".
-->
## 社区
我们欢迎你向社区提交贡献、提出问题以及参与评论Minikube 开发人员可以在
[Slack](https://kubernetes.slack.com) 的 #minikube 频道上互动交流
(点击[这里](https://slack.kubernetes.io/)获得邀请)。
我们还有 [kubernetes-dev Google Groups 邮件列表](https://groups.google.com/forum/#!forum/kubernetes-dev)。
如果你要发信到列表中,请在主题前加上 "minikube: "。

View File

@ -21,3 +21,10 @@ single thing, typically by giving a short sequence of steps.
Kubernetes 文档这一部分包含的一些页面展示如何去完成单个任务。
每个任务页面通常通过给出一个简短的步骤序列,展示如何完成某项任务。
<!--
If you would like to write a task page, see
[Creating a Documentation Pull Request](/docs/contribute/new-content/open-a-pr/).
-->
如果你希望编写一个任务页面,参考
[创建一个文档拉取请求](/zh/docs/contribute/new-content/open-a-pr/)。


@ -203,7 +203,7 @@ users:
<!--
The `fake-ca-file`, `fake-cert-file` and `fake-key-file` above are the placeholders
for the pathnames of the certificate files. You need change these to the actual pathnames
for the pathnames of the certificate files. You need to change these to the actual pathnames
of certificate files in your environment.
Sometimes you may want to use Base64-encoded data embedded here instead of separate
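如果要改用嵌入式的 Base64 数据(例如 kubeconfig 中的 `certificate-authority-data` 字段),
可以先把证书文件编码为单行 Base64。下面是一个最小示意其中的文件路径和内容均为假设的占位数据并非真实证书

```shell
# 创建一个占位"证书"文件用于演示(内容为假设数据,并非真实证书)
printf 'dummy-cert' > /tmp/example-ca.crt
# 将其编码为单行 Base64以便嵌入 kubeconfig 中对应的 *-data 字段
base64 -w0 /tmp/example-ca.crt
```

实际使用时,请对真实证书文件执行同样的编码,并把输出粘贴到 kubeconfig 的相应字段中。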


@ -228,10 +228,10 @@ so that you can change the configuration more easily.
<!--
## Interact with the frontend Service
Once youve created a Service of type LoadBalancer, you can use this
Once you've created a Service of type LoadBalancer, you can use this
command to find the external IP:
-->
### 与前端 Service 交互
### 与前端 Service 交互 {#interact-with-the-frontend-service}
一旦你创建了 LoadBalancer 类型的 Service你可以使用这条命令查看外部 IP


@ -61,9 +61,9 @@ This page shows you how to set up a simple Ingress which routes requests to Serv
1. 为了启用 NGINX Ingress 控制器,可以运行下面的命令:
```shell
minikube addons enable ingress
```
```shell
minikube addons enable ingress
```
<!--
1. Verify that the NGINX Ingress controller is running
@ -75,11 +75,13 @@ This page shows you how to set up a simple Ingress which routes requests to Serv
```
<!-- This can take up to a minute. -->
{{< note >}}这一操作可供需要近一分钟时间。{{< /note >}}
{{< note >}}
这一操作可能需要近一分钟时间。
{{< /note >}}
输出:
```shell
```
NAME READY STATUS RESTARTS AGE
default-http-backend-59868b7dd6-xb8tq 1/1 Running 0 1m
kube-addon-manager-minikube 1/1 Running 0 3m
@ -197,7 +199,7 @@ The following file is an Ingress resource that sends traffic to your Service via
1. 根据下面的 YAML 创建文件 `example-ingress.yaml`
{{< codenew file="service/networking/example-ingress.yaml" >}}
{{< codenew file="service/networking/example-ingress.yaml" >}}
<!--
1. Create the Ingress resource by running the following command:
@ -211,9 +213,10 @@ The following file is an Ingress resource that sends traffic to your Service via
<!-- Output: -->
输出:
```shell
```
ingress.networking.k8s.io/example-ingress created
```
<!--
1. Verify the IP address is set:
-->
@ -224,9 +227,11 @@ The following file is an Ingress resource that sends traffic to your Service via
```
<!-- This can take a couple of minutes. -->
{{< note >}}此操作可能需要几分钟时间。{{< /note >}}
{{< note >}}
此操作可能需要几分钟时间。
{{< /note >}}
```shell
```
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> hello-world.info 172.17.0.15 80 38s
```
@ -262,7 +267,7 @@ The following file is an Ingress resource that sends traffic to your Service via
<!-- Output: -->
输出:
```shell
```
Hello, world!
Version: 1.0.0
Hostname: web-55b8c6998d-8k564
@ -290,7 +295,7 @@ The following file is an Ingress resource that sends traffic to your Service via
<!-- Output: -->
输出:
```shell
```
deployment.apps/web2 created
```
@ -306,7 +311,7 @@ The following file is an Ingress resource that sends traffic to your Service via
<!-- Output: -->
输出:
```shell
```
service/web2 exposed
```
@ -321,13 +326,13 @@ The following file is an Ingress resource that sends traffic to your Service via
```yaml
- path: /v2
pathType: Prefix
backend:
service:
name: web2
port:
number: 8080
- path: /v2
pathType: Prefix
backend:
service:
name: web2
port:
number: 8080
```
<!--
@ -342,7 +347,7 @@ The following file is an Ingress resource that sends traffic to your Service via
<!-- Output: -->
输出:
```shell
```
ingress.networking/example-ingress configured
```

View File

@ -5,27 +5,22 @@ weight: 40
---
<!--
---
title: Use Port Forwarding to Access Applications in a Cluster
content_type: task
weight: 40
---
-->
<!-- overview -->
<!--
This page shows how to use `kubectl port-forward` to connect to a Redis
server running in a Kubernetes cluster. This type of connection can be useful
for database debugging.
-->
本文展示如何使用 `kubectl port-forward` 连接到在 Kubernetes 集群中运行的 Redis 服务。这种类型的连接对数据库调试很有用。
本文展示如何使用 `kubectl port-forward` 连接到在 Kubernetes 集群中
运行的 Redis 服务。这种类型的连接对数据库调试很有用。
## {{% heading "prerequisites" %}}
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!--
@ -33,9 +28,6 @@ for database debugging.
-->
* 安装 [redis-cli](http://redis.io/topics/rediscli)。
<!-- steps -->
<!--
@ -47,172 +39,206 @@ for database debugging.
1. 创建一个 Redis deployment
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
```
<!--
The output of a successful command verifies that the deployment was created:
-->
查看输出是否成功,以验证是否成功创建 deployment
<!--
The output of a successful command verifies that the deployment was created:
-->
查看输出是否成功,以验证是否成功创建 deployment
deployment.apps/redis-master created
<!--
View the pod status to check that it is ready:
-->
查看 pod 状态,检查其是否准备就绪:
```
deployment.apps/redis-master created
```
kubectl get pods
<!--
The output displays the pod created:
-->
输出显示创建的 pod
<!--
View the pod status to check that it is ready:
-->
查看 pod 状态,检查其是否准备就绪:
NAME READY STATUS RESTARTS AGE
redis-master-765d459796-258hz 1/1 Running 0 50s
```shell
kubectl get pods
```
<!--
View the deployment status:
-->
查看 deployment 状态
<!--
The output displays the pod created:
-->
输出显示创建的 pod
kubectl get deployment
```
NAME READY STATUS RESTARTS AGE
redis-master-765d459796-258hz 1/1 Running 0 50s
```
<!--
The output displays that the deployment was created:
-->
输出显示创建的 deployment
<!--
View the deployment status:
-->
查看 deployment 状态
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
redis-master 1 1 1 1 55s
```shell
kubectl get deployment
```
<!--
View the replicaset status using:
-->
查看 replicaset 状态
<!--
The output displays that the deployment was created:
-->
输出显示创建的 deployment
kubectl get rs
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
redis-master 1 1 1 1 55s
```
<!--
The output displays that the replicaset was created:
-->
输出显示创建的 replicaset
<!--
View the replicaset status using:
-->
查看 replicaset 状态
NAME DESIRED CURRENT READY AGE
redis-master-765d459796 1 1 1 1m
```shell
kubectl get rs
```
<!--
The output displays that the replicaset was created:
-->
输出显示创建的 replicaset
```
NAME DESIRED CURRENT READY AGE
redis-master-765d459796 1 1 1 1m
```
<!--
2. Create a Redis service:
-->
2. 创建一个 Redis 服务:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
```shell
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
```
<!--
The output of a successful command verifies that the service was created:
-->
查看输出是否成功,以验证是否成功创建 service
<!--
The output of a successful command verifies that the service was created:
-->
查看输出是否成功,以验证是否成功创建 service
service/redis-master created
```
service/redis-master created
```
<!--
Check the service created:
-->
检查 service 是否创建:
<!--
Check the service created:
-->
检查 service 是否创建:
kubectl get svc | grep redis
```shell
kubectl get svc | grep redis
```
<!--
The output displays the service created:
-->
输出显示创建的 service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master ClusterIP 10.0.0.213 <none> 6379/TCP 27s
<!--
The output displays the service created:
-->
输出显示创建的 service
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master ClusterIP 10.0.0.213 <none> 6379/TCP 27s
```
<!--
3. Verify that the Redis server is running in the pod and listening on port 6379:
-->
3. 验证 Redis 服务是否运行在 pod 中并且监听 6379 端口:
```shell
kubectl get pods redis-master-765d459796-258hz \
--template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
```
<!--
The output displays the port:
-->
输出应该显示端口:
kubectl get pods redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
<!--
The output displays the port:
-->
输出应该显示端口:
6379
```
6379
```
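   上面的 Go 模板写法也可以换成 JSONPath 输出。下面是一个等价的示意
   需要一个可用的集群Pod 名称沿用前文示例):

   ```shell
   # 用 JSONPath 查询 Pod 中第一个容器的第一个端口
   kubectl get pods redis-master-765d459796-258hz \
     -o jsonpath='{.spec.containers[0].ports[0].containerPort}{"\n"}'
   ```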
<!--
## Forward a local port to a port on the pod
1. `kubectl port-forward` allows using resource name, such as a pod name, to select a matching pod to port forward to since Kubernetes v1.10.
1. `kubectl port-forward` allows using resource name, such as a pod name, to select a matching pod to port forward to since Kubernetes v1.10.
-->
## 转发一个本地端口到 pod 端口
1. 从 Kubernetes v1.10 开始,`kubectl port-forward` 允许使用资源名称(例如 pod 名称)来选择匹配的 pod 来进行端口转发。
1. 从 Kubernetes v1.10 开始,`kubectl port-forward` 允许使用资源名称
(例如 pod 名称)来选择匹配的 pod 来进行端口转发。
kubectl port-forward redis-master-765d459796-258hz 7000:6379
```shell
kubectl port-forward redis-master-765d459796-258hz 7000:6379
```
<!--
which is the same as
-->
这相当于
```shell
kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379
```
<!-- or -->
或者
```shell
kubectl port-forward deployment/redis-master 7000:6379
```
<!-- or -->
或者
```shell
kubectl port-forward rs/redis-master 7000:6379
```
<!-- or -->
或者
```
kubectl port-forward svc/redis-master 7000:redis
```
<!--
Any of the above commands works. The output is similar to this:
-->
以上所有命令都应该有效。输出应该类似于:
```
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:7000 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379
```
<!--
which is the same as
-->
这相当于
kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379
<!--
or
-->
或者
kubectl port-forward deployment/redis-master 7000:6379
<!--
or
-->
或者
kubectl port-forward rs/redis-master 7000:6379
<!--
or
-->
或者
kubectl port-forward svc/redis-master 7000:6379
<!--
Any of the above commands works. The output is similar to this:
-->
以上所有命令都应该有效。输出应该类似于:
I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:7000 -> 6379
I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379
<!--
2. Start the Redis command line interface:
2. Start the Redis command line interface:
-->
2. 启动 Redis 命令行接口:
redis-cli -p 7000
```shell
redis-cli -p 7000
```
<!--
3. At the Redis command line prompt, enter the `ping` command:
-->
3. 在 Redis 命令行提示符下,输入 `ping` 命令:
127.0.0.1:7000>ping
<!--
A successful ping request returns PONG.
-->
成功的 ping 请求应该返回 PONG。
```
127.0.0.1:7000>ping
```
<!--
A successful ping request returns PONG.
-->
成功的 ping 请求应该返回 PONG。
<!-- discussion -->
@ -223,9 +249,10 @@ Connections made to local port 7000 are forwarded to port 6379 of the pod that
is running the Redis server. With this connection in place you can use your
local workstation to debug the database that is running in the pod.
-->
## 讨论
## 讨论 {#discussion}
与本地 7000 端口建立的连接将转发到运行 Redis 服务器的 pod 的 6379 端口。通过此连接,您可以使用本地工作站来调试在 pod 中运行的数据库。
与本地 7000 端口建立的连接将转发到运行 Redis 服务器的 pod 的 6379 端口。
通过此连接,您可以使用本地工作站来调试在 pod 中运行的数据库。
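调试时也可以把端口转发放到后台运行,以便在同一个终端里继续执行命令。
下面是一个示意(假设集群可用,且本地已安装 redis-cli

```shell
# 在后台建立端口转发
kubectl port-forward svc/redis-master 7000:redis &
PF_PID=$!
# 等待转发建立
sleep 2
# 通过本地 7000 端口访问 Pod 中的 Redis
redis-cli -p 7000 ping
# 调试结束后停止端口转发
kill $PF_PID
```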
<!--
Due to known limitations, port forward today only works for TCP protocol.
@ -234,19 +261,14 @@ The support to UDP protocol is being tracked in
-->
{{< warning >}}
由于已知的限制,目前的端口转发仅适用于 TCP 协议。
在 [issue 47862](https://github.com/kubernetes/kubernetes/issues/47862) 中正在跟踪对 UDP 协议的支持。
在 [issue 47862](https://github.com/kubernetes/kubernetes/issues/47862)
中正在跟踪对 UDP 协议的支持。
{{< /warning >}}
## {{% heading "whatsnext" %}}
<!--
Learn more about [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward).
-->
学习更多关于 [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward)。
进一步了解 [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward)。


@ -225,14 +225,13 @@ certificate.
<!--
On some clusters, the API server does not require authentication; it may serve
on localhost, or be protected by a firewall. There is not a standard
for this. [Configuring Access to the API](/docs/reference/access-authn-authz/controlling-access/)
describes how a cluster admin can configure this. Such approaches may conflict
with future high-availability support.
for this. [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
describes how you can configure this as a cluster administrator.
-->
在一些集群中API 服务器不需要身份认证;它运行在本地,或由防火墙保护着。
对此并没有一个标准。
[配置对 API 的访问](/zh/docs/reference/access-authn-authz/controlling-access/)
阐述了一个集群管理员如何对此进行配置。这种方法可能与未来的高可用性支持发生冲突
[配置对 API 的访问](/zh/docs/concepts/security/controlling-access/)
讲解了作为集群管理员可如何对此进行配置
<!--
### Programmatic access to the API


@ -1,452 +0,0 @@
---
approvers:
- lavalamp
- thockin
title: 集群管理
content_type: task
---
<!--
This document describes several topics related to the lifecycle of a cluster: creating a new cluster,
upgrading your cluster's
master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a
running cluster.
-->
本文描述了和集群生命周期相关的几个主题:创建新集群、更新集群的主控节点和工作节点、
执行节点维护(例如升级内核)以及升级运行中集群的 Kubernetes API 版本。
<!-- body -->
<!--
## Creating and configuring a Cluster
To install Kubernetes on a set of machines, consult one of the existing [Getting Started guides](/docs/setup/) depending on your environment.
-->
## 创建和配置集群
要在一组机器上安装 Kubernetes请根据你的环境查阅现有的[入门指南](/zh/docs/setup/)
<!--
## Upgrading a cluster
The current state of cluster upgrades is provider dependent, and some releases may require special care when upgrading. It is recommended that administrators consult both the [release notes](https://git.k8s.io/kubernetes/CHANGELOG/README.md), as well as the version specific upgrade notes prior to upgrading their clusters.
-->
## 升级集群
集群升级的现状依赖于具体的供应商,某些发布版本在升级时可能需要特殊处理。
推荐管理员在升级集群之前,同时查阅
[发行说明](https://git.k8s.io/kubernetes/CHANGELOG/README.md)和特定版本的升级说明。
<!--
### Upgrading an Azure Kubernetes Service (AKS) cluster
Azure Kubernetes Service enables easy self-service upgrades of the control plane and nodes in your cluster. The process is
currently user-initiated and is described in the [Azure AKS documentation](https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster).
-->
### 升级 Azure Kubernetes ServiceAKS集群
Azure Kubernetes Service 支持自服务式的控制面升级和集群节点升级。
升级过程目前是由用户发起的,具体文档参见
[Azure AKS 文档](https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster)。
<!--
### Upgrading Google Compute Engine clusters
Google Compute Engine Open Source (GCE-OSS) support master upgrades by deleting and
recreating the master, while maintaining the same Persistent Disk (PD) to ensure that data is retained across the
upgrade.
-->
### 升级 Google Compute Engine 集群
Google Compute Engine Open SourceGCE-OSS通过删除并重建主控节点来支持主控节点升级
同时保留原有的持久盘Persistent Disk, PD以确保数据在升级过程中不会丢失。
<!--
Node upgrades for GCE use a [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/), each node
is sequentially destroyed and then recreated with new software. Any Pods that are running on that node need to be
controlled by a Replication Controller, or manually re-created after the roll out.
-->
GCE 的节点升级使用[托管实例组](https://cloud.google.com/compute/docs/instance-groups/)
每个节点将被顺序销毁,然后使用新软件重建。
任何运行在该节点上的 Pod 都需要由副本控制器管理,或者在滚动升级之后手动重建。
<!--
Upgrades on open source Google Compute Engine (GCE) clusters are controlled by the `cluster/gce/upgrade.sh` script.
Get its usage by running `cluster/gce/upgrade.sh -h`.
For example, to upgrade just your master to a specific version (v1.0.2):
-->
开源 Google Compute Engine (GCE) 集群上的升级过程由 `cluster/gce/upgrade.sh` 脚本控制。
运行 `cluster/gce/upgrade.sh -h` 获取使用说明。
例如只将主控节点升级到一个指定的版本v1.0.2
```shell
cluster/gce/upgrade.sh -M v1.0.2
```
<!--
Alternatively, to upgrade your entire cluster to the latest stable release:
-->
或者,将整个集群升级到最新的稳定版本:
```shell
cluster/gce/upgrade.sh release/stable
```
<!--
### Upgrading Google Kubernetes Engine clusters
Google Kubernetes Engine automatically updates master components (e.g. `kube-apiserver`, `kube-scheduler`) to the latest version. It also handles upgrading the operating system and other components that the master runs on.
-->
### 升级 Google Kubernetes Engine 集群
Google Kubernetes Engine 自动升级主控节点组件(例如 `kube-apiserver`、`kube-scheduler`)至最新版本。
它还负责主控节点运行的操作系统和其它组件的升级。
<!--
The node upgrade process is user-initiated and is described in the [Google Kubernetes Engine documentation](https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade).
-->
节点升级过程由用户发起,[Google Kubernetes Engine 文档](https://cloud.google.com/kubernetes-engine/docs/clusters/upgrade)中有相关描述。
<!--
### Upgrading an Amazon EKS Cluster
Amazon EKS cluster's master components can be upgraded by using eksctl, AWS Management Console, or AWS CLI. The process is user-initiated and is described in the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html).
-->
### 升级 Amazon EKS 集群
Amazon EKS 集群的主控组件可以使用 eksctl、AWS 管理控制台或者 AWS CLI 来升级。
升级过程由用户发起,具体参看
[Amazon EKS 文档](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html)。
<!--
### Upgrading an Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster
Oracle creates and manages a set of master nodes in the Oracle control plane on your behalf (and associated Kubernetes infrastructure such as etcd nodes) to ensure you have a highly available managed Kubernetes control plane. You can also seamlessly upgrade these master nodes to new versions of Kubernetes with zero downtime. These actions are described in the [OKE documentation](https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengupgradingk8smasternode.htm).
-->
### 升级 Oracle Cloud Infrastructure 上的 Container Engine for Kubernetes (OKE) 集群
Oracle 在 Oracle 控制面替你创建和管理一组主控节点(及相关的 Kubernetes 基础设施,
如 etcd 节点)。你可以在不停机的情况下无缝升级这些主控节点到新的 Kubernetes 版本。
相关的操作可参考
[OKE 文档](https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengupgradingk8smasternode.htm)。
<!--
### Upgrading clusters on other platforms
Different providers, and tools, will manage upgrades differently. It is recommended that you consult their main documentation regarding upgrades.
-->
### 在其他平台上升级集群
不同的供应商和工具管理升级的过程各不相同。建议你查阅它们有关升级的主要文档。
* [kops](https://github.com/kubernetes/kops)
* [kubespray](https://github.com/kubernetes-incubator/kubespray)
* [CoreOS Tectonic](https://coreos.com/tectonic/docs/latest/admin/upgrade.html)
* [Digital Rebar](https://provision.readthedocs.io/en/tip/doc/content-packages/krib.html)
* ...
<!--
To upgrade a cluster on a platform not mentioned in the above list, check the order of component upgrade on the
[Skewed versions](/docs/setup/release/version-skew-policy/#supported-component-upgrade-order) page.
-->
要在上面列表中没有提及的平台上升级集群时,请参阅
[版本偏差](/zh/docs/setup/release/version-skew-policy/#supported-component-upgrade-order)
页面所讨论的组件升级顺序。
<!--
## Resizing a cluster
If your cluster runs short on resources you can easily add more machines to it if your cluster
is running in [Node self-registration mode](/docs/concepts/architecture/nodes/#self-registration-of-nodes).
If you're using GCE or Google Kubernetes Engine it's done by resizing the Instance Group managing your Nodes.
It can be accomplished by modifying number of instances on
`Compute > Compute Engine > Instance groups > your group > Edit group`
[Google Cloud Console page](https://console.developers.google.com) or using gcloud CLI:
-->
## 调整集群大小
如果集群资源短缺,且集群正运行在
[节点自注册模式](/zh/docs/concepts/architecture/nodes/#self-registration-of-nodes)
你可以轻松地添加更多的机器。
如果正在使用的是 GCE 或者 Google Kubernetes Engine添加节点将通过调整管理节点的实例组的大小完成。
在 [Google Cloud 控制台](https://console.developers.google.com) 页面
`Compute > Compute Engine > Instance groups > your group > Edit group`
下修改实例数量或使用 gcloud CLI 都可以完成这个任务。
```shell
gcloud compute instance-groups managed resize kubernetes-minion-group --size 42 --zone $ZONE
```
<!--
The Instance Group will take care of putting appropriate image on new machines and starting them,
while the Kubelet will register its Node with the API server to make it available for scheduling.
If you scale the instance group down, system will randomly choose Nodes to kill.
-->
实例组将负责在新机器上放置恰当的镜像并启动它们。
kubelet 将向 API 服务器注册它的节点以使其可以用于调度。
如果你对实例组进行缩容,系统将会随机选取节点来终止。
<!--
In other environments you may need to configure the machine yourself and tell the Kubelet on which machine API server is running.
-->
在其他环境中,你可能需要手动配置机器并告诉 kubelet API 服务器在哪台机器上运行。
<!--
### Cluster autoscaling
If you are using GCE or Google Kubernetes Engine, you can configure your cluster so that it is automatically rescaled based on
pod needs.
-->
### 集群自动伸缩
如果正在使用 GCE 或者 Google Kubernetes Engine你可以配置你的集群
使其能够基于 Pod 需求自动重新调整大小。
<!--
As described in [Compute Resource](/docs/concepts/configuration/manage-resources-containers/),
users can reserve how much CPU and memory is allocated to pods.
This information is used by the Kubernetes scheduler to find a place to run the pod. If there is
no node that has enough free capacity (or doesn't match other pod requirements) then the pod has
to wait until some pods are terminated or a new node is added.
-->
如[计算资源](/zh/docs/concepts/configuration/manage-resources-containers/)所述,
用户可以控制预留多少 CPU 和内存来分配给 Pod。
这个信息被 Kubernetes 调度器用来寻找一个运行 Pod 的地方。
如果没有一个节点有足够的空闲容量(或者不能满足 Pod 的其他需求),
这个 Pod 就需要等待某些 Pod 结束,或者一个新的节点被添加。
<!--
Cluster autoscaler looks for the pods that cannot be scheduled and checks if adding a new node, similar
to the other in the cluster, would help. If yes, then it resizes the cluster to accommodate the waiting pods.
-->
集群 Autoscaler 查找不能被调度的 Pod 并检查添加一个新节点(和集群中其它节点类似的)
是否有帮助。如果是的话,它将调整集群的大小以容纳等待调度的 Pod。
<!--
Cluster autoscaler also scales down the cluster if it notices that one or more nodes are not needed anymore for
an extended period of time (10min but it may change in the future).
-->
如果集群 Autoscaler 注意到在较长一段时间内(默认 10 分钟,将来可能更改)
不再需要一个或多个节点,它也会缩小集群规模。
<!--
Cluster autoscaler is configured per instance group (GCE) or node pool (Google Kubernetes Engine).
-->
集群 Autoscaler 基于每个实例组GCE或节点池Google Kubernetes Engine来配置。
<!--
If you are using GCE then you can either enable it while creating a cluster with kube-up.sh script.
To configure cluster autoscaler you have to set three environment variables:
-->
如果你使用 GCE那么你可以在使用 kube-up.sh 脚本创建集群的时候启用集群自动扩缩。
要想配置集群 Autoscaler你需要设置三个环境变量
<!--
* `KUBE_ENABLE_CLUSTER_AUTOSCALER` - it enables cluster autoscaler if set to true.
* `KUBE_AUTOSCALER_MIN_NODES` - minimum number of nodes in the cluster.
* `KUBE_AUTOSCALER_MAX_NODES` - maximum number of nodes in the cluster.
Example:
-->
* `KUBE_ENABLE_CLUSTER_AUTOSCALER` - 如果设置为 true 将启用集群 Autoscaler。
* `KUBE_AUTOSCALER_MIN_NODES` - 集群的最小节点数量。
* `KUBE_AUTOSCALER_MAX_NODES` - 集群的最大节点数量。
示例:
```shell
KUBE_ENABLE_CLUSTER_AUTOSCALER=true KUBE_AUTOSCALER_MIN_NODES=3 KUBE_AUTOSCALER_MAX_NODES=10 NUM_NODES=5 ./cluster/kube-up.sh
```
<!--
On Google Kubernetes Engine you configure cluster autoscaler either on cluster creation or update or when creating a particular node pool
(which you want to be autoscaled) by passing flags `--enable-autoscaling` `--min-nodes` and `--max-nodes`
to the corresponding `gcloud` commands.
Examples:
-->
在 Google Kubernetes Engine 上,你可以在创建、更新集群或创建一个特别的(你希望自动伸缩的)
节点池时,通过给对应的 `gcloud` 命令传递 `--enable-autoscaling`、`--min-nodes` 和
`--max-nodes` 来配置集群 Autoscaler。
示例:
```shell
gcloud container clusters create mytestcluster --zone=us-central1-b --enable-autoscaling --min-nodes=3 --max-nodes=10 --num-nodes=5
```
```shell
gcloud container clusters update mytestcluster --enable-autoscaling --min-nodes=1 --max-nodes=15
```
<!--
**Cluster autoscaler expects that nodes have not been manually modified (e.g. by adding labels via kubectl) as those properties would not be propagated to the new nodes within the same instance group.**
For more details about how the cluster autoscaler decides whether, when and how
to scale a cluster, please refer to the [FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
documentation from the autoscaler project.
-->
**集群 Autoscaler 期望节点未被手动修改过(例如通过 kubectl 添加标签),因为这些属性
不会被传播到同一实例组中的新节点上。**
关于集群 Autoscaler 如何决定是否、何时以及怎样对集群进行扩缩的细节,请参考 autoscaler 项目的
[FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
文档。
<!--
## Maintenance on a Node
If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer
(the default time is 5 minutes, controlled by `--pod-eviction-timeout` on the controller-manager),
then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
replica set (or replication controller), then a new copy of the pod will be started on a different node. So, in the case where all
pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.
-->
## 维护节点
如果需要重启节点例如内核升级、libc 升级、硬件维修等),且停机时间很短时,
kubelet 重启后,将尝试重启调度到节点上的 Pod。如果重启花费较长时间默认时间为 5 分钟,由
控制器管理器的 `--pod-eviction-timeout` 控制),节点控制器将会结束绑定到这个不可用节点上的 Pod。
如果存在对应的 ReplicaSet或者 ReplicationController则将在另一个节点上启动 Pod 的新副本。
所以,如果所有的 Pod 都是多副本的,那么在不是所有节点都同时停机的前提下,升级可以在不需要特殊
调整情况下完成。
<!--
If you want more control over the upgrading process, you may use the following workflow:
Use `kubectl drain` to gracefully terminate all pods on the node while marking the node as unschedulable:
-->
如果你希望对升级过程有更多的控制,可以使用下面的工作流程:
使用 `kubectl drain` 体面地结束节点上的所有 Pod 并同时标记节点为不可调度:
```shell
kubectl drain $NODENAME
```
<!--
This keeps new pods from landing on the node while you are trying to get them off.
For pods with a replica set, the pod will be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
-->
在你尝试让节点下线时,这样做可以防止新的 Pod 调度到该节点上。
对于由 ReplicaSet 管理的 Pod来说它们会被新的 Pod 替换,且新 Pod 会被调度到其他节点。
此外,如果 Pod 属于某个 Service则客户端将被自动重定向到新的 Pod。
<!--
For pods with no replica set, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
Perform maintenance work on the node.
Make the node schedulable again:
-->
对于没有 ReplicaSet 的 Pod你需要手动启动 Pod 的新副本,并且
如果它不是 Service 的一部分,你需要手动将客户端重定向到这个 Pod。
在节点上执行维护工作。
重新使节点可调度:
```shell
kubectl uncordon $NODENAME
```
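把上述步骤串起来,一个完整的节点维护流程大致如下(仅为示意;
`NODENAME` 是假设的示例变量,且需要一个可用的集群):

```shell
NODENAME=node-1                      # 示例节点名
# 体面地终止节点上的 Pod并标记节点为不可调度
kubectl drain "$NODENAME" --ignore-daemonsets
# ……在节点上执行内核升级等维护工作……
# 维护完成后,重新使节点可调度
kubectl uncordon "$NODENAME"
```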
<!--
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically (if you're using a cloud provider that supports
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register).
See [Node](/docs/concepts/architecture/nodes/) for more details.
-->
如果删除了节点的虚拟机实例并重新创建,那么一个新的可调度节点资源将被自动创建
(只在你使用支持节点发现的云服务提供商时;当前只有 Google Compute Engine
不包括在 Google Compute Engine 上使用 kube-register 的 CoreOS
相关详细信息,请查阅[节点](/zh/docs/concepts/architecture/nodes/)。
<!--
## Advanced Topics
### Upgrading to a different API version
When a new API version is released, you may need to upgrade a cluster to support the new API version (e.g. switching from 'v1' to 'v2' when 'v2' is launched).
-->
## 高级主题
### 升级到不同的 API 版本
当新的 API 版本发布时,你可能需要升级集群支持新的 API 版本
(例如当 'v2' 发布时从 'v1' 切换到 'v2')。
<!--
This is an infrequent event, but it requires careful management. There is a sequence of steps to upgrade to a new API version.
1. Turn on the new API version.
1. Upgrade the cluster's storage to use the new version.
1. Upgrade all config files. Identify users of the old API version endpoints.
1. Update existing objects in the storage to new version by running `cluster/update-storage-objects.sh`.
1. Turn off the old API version.
-->
这种情况并不常见,但需要谨慎处理。下面是升级到新 API 版本的一系列步骤。
1. 开启新 API 版本
1. 升级集群存储来使用新版本
1. 升级所有配置文件;识别使用旧 API 版本端点的用户
1. 运行 `cluster/update-storage-objects.sh` 升级存储中的现有对象为新版本
1. 关闭旧 API 版本
<!--
### Turn on or off an API version for your cluster
Specific API versions can be turned on or off by passing `--runtime-config=api/<version>` flag while bringing up the API server. For example: to turn off v1 API, pass `--runtime-config=api/v1=false`.
runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively.
For example, for turning off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`.
For the purposes of these flags, _legacy_ APIs are those APIs which have been explicitly deprecated (e.g. `v1beta3`).
-->
### 打开或关闭集群的 API 版本
可以在启动 API 服务器时传递 `--runtime-config=api/<version>` 标志来打开或关闭特定的 API 版本。
例如:要关闭 v1 API请传递 `--runtime-config=api/v1=false`
`runtime-config` 还支持两个特殊键值:`api/all` 和 `api/legacy`,分别控制全部和遗留 API。
例如要关闭除 v1 外全部 API 版本,请传递 `--runtime-config=api/all=false,api/v1=true`
对于这些标志来说_遗留Legacy_ API 指的是那些被显式废弃的 API例如 `v1beta3`)。
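作为示意,下面给出 API 服务器启动参数的一个片段(假设性示例,其余必需的启动参数从略):

```shell
# 关闭除 v1 之外的全部 API 版本(配置片段,其余启动参数未列出)
kube-apiserver --runtime-config=api/all=false,api/v1=true
```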
<!--
### Switching your cluster's storage API version
The objects that are stored to disk for a cluster's internal representation of the Kubernetes resources active in the cluster are written using a particular version of the API.
When the supported API changes, these objects may need to be rewritten in the newer API. Failure to do this will eventually result in resources that are no longer decodable or usable
by the Kubernetes API server.
-->
### 切换集群存储的 API 版本
存储于磁盘中、用于在集群内部代表 Kubernetes 活跃资源的对象使用特定的 API 版本表达。
当所支持的 API 改变时,这些对象可能需要使用更新的 API 重写。
重写失败将最终导致资源不再能够被 Kubernetes API server 解析或使用。
<!--
### Switching your config files to a new API version
You can use `kubectl convert` command to convert config files between different API versions.
-->
### 切换配置文件到新 API 版本
你可以使用 `kubectl convert` 命令对不同 API 版本的配置文件进行转换。
```shell
kubectl convert -f pod.yaml --output-version v1
```
<!--
For more options, please refer to the usage of [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) command.
-->
更多选项请参考 [`kubectl convert`](/docs/reference/generated/kubectl/kubectl-commands/#convert) 命令用法。


@ -0,0 +1,182 @@
---
title: 升级集群
content_type: task
---
<!--
---
title: Upgrade A Cluster
content_type: task
---
-->
<!-- overview -->
<!--
This page provides an overview of the steps you should follow to upgrade a
Kubernetes cluster.
The way that you upgrade a cluster depends on how you initially deployed it
and on any subsequent changes.
At a high level, the steps you perform are:
-->
本页概述升级 Kubernetes 集群的步骤。
升级集群的方式取决于你最初部署它的方式、以及后续更改它的方式。
从高层规划的角度看,要执行的步骤是:
<!--
- Upgrade the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
- Upgrade the nodes in your cluster
- Upgrade clients such as {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}
- Adjust manifests and other resources based on the API changes that accompany the
new Kubernetes version
-->
- 升级{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}
- 升级集群中的节点
- 升级 {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} 之类的客户端
- 根据新 Kubernetes 版本带来的 API 变化,调整清单文件和其他资源
## {{% heading "prerequisites" %}}
<!--
You must have an existing cluster. This page is about upgrading from Kubernetes
{{< skew prevMinorVersion >}} to Kubernetes {{< skew latestVersion >}}. If your cluster
is not currently running Kubernetes {{< skew prevMinorVersion >}} then please check
the documentation for the version of Kubernetes that you plan to upgrade to.
-->
你必须有一个集群。
本页内容涉及从 Kubernetes {{< skew prevMinorVersion >}}
升级到 Kubernetes {{< skew latestVersion >}}。
如果你的集群未运行 Kubernetes {{< skew prevMinorVersion >}}
那请参考目标 Kubernetes 版本的文档。
<!-- ## Upgrade approaches -->
## 升级方法 {#upgrade-approaches}
### kubeadm {#upgrade-kubeadm}
<!--
If your cluster was deployed using the `kubeadm` tool, refer to
[Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for detailed information on how to upgrade the cluster.
Once you have upgraded the cluster, remember to
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
-->
如果你的集群是使用 `kubeadm` 安装工具部署而来,
那么关于如何升级集群的详细信息,请参阅
[升级 kubeadm 集群](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。
升级集群之后,要记得
[安装最新版本的 `kubectl`](/zh/docs/tasks/tools/install-kubectl/)。
<!-- ### Manual deployments -->
### 手动部署 {#manual-deployments}
<!--
These steps do not account for third-party extensions such as network and storage
plugins.
You should manually update the control plane following this sequence:
-->
{{< caution >}}
这些步骤不考虑第三方扩展,例如网络和存储插件。
{{< /caution >}}
你应该按照下面的顺序,手动更新控制平面组件:
<!--
- etcd (all instances)
- kube-apiserver (all control plane hosts)
- kube-controller-manager
- kube-scheduler
- cloud controller manager, if you use one
-->
- etcd所有实例
- kube-apiserver所有控制平面主机
- kube-controller-manager
- kube-scheduler
- cloud controller manager如果你在使用
<!--
At this point you should
[install the latest version of `kubectl`](/docs/tasks/tools/install-kubectl/).
For each node in your cluster, [drain](/docs/tasks/administer-cluster/safely-drain-node/)
that node and then either replace it with a new node that uses the {{< skew latestVersion >}}
kubelet, or upgrade the {{< skew latestVersion >}}
kubelet on that node and bring the node back into service.
-->
现在,你应该
[安装最新版本的 `kubectl`](/zh/docs/tasks/tools/install-kubectl/)。
对于集群中的每个节点,先
[排空](/zh/docs/tasks/administer-cluster/safely-drain-node/)该节点,
然后或者用一个运行 {{< skew latestVersion >}} 版本 kubelet 的新节点替换它,
或者将该节点上的 kubelet 升级到 {{< skew latestVersion >}} 版本,并使节点恢复服务。
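节点层面的操作可以概括为如下命令序列(示意;`<node-name>` 为占位符,升级 kubelet 的具体方式取决于它的安装方法):

```shell
# 排空节点,驱逐其上的工作负载
kubectl drain <node-name> --ignore-daemonsets

# ……在该节点上升级 kubelet 并重启其服务,或将节点替换为新节点……

# 恢复节点的可调度状态,使其重新接受工作负载
kubectl uncordon <node-name>
```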
<!--
### Other deployments {#upgrade-other}
Refer to the documentation for your cluster deployment tool to learn the recommended set
up steps for maintenance.
## Post-upgrade tasks
### Switch your cluster's storage API version
-->
### 其他部署方式 {#upgrade-other}
参阅你的集群部署工具对应的文档,了解维护集群时推荐的操作步骤。
## 升级后的任务 {#post-upgrade-tasks}
### 切换集群的存储 API 版本 {#switch-your-clusters-storage-api-version}
<!--
The objects that are serialized into etcd for a cluster's internal
representation of the Kubernetes resources active in the cluster are
written using a particular version of the API.
When the supported API changes, these objects may need to be rewritten
in the newer API. Failure to do this will eventually result in resources
that are no longer decodable or usable by the Kubernetes API server.
For each affected object, fetch it using the latest supported API and then
write it back also using the latest supported API.
-->
集群中活跃的 Kubernetes 资源是以其内部表示形式序列化后写入 etcd 的,
写入时使用的是某个特定版本的 API。
当所支持的 API 发生变化时,可能需要用较新的 API 重写这些对象。
如果不这样做,最终会导致某些资源再也无法被 Kubernetes API 服务器解码和使用。
对于每个受影响的对象,用所支持的最新 API 获取它,
再用所支持的最新 API 将其写回。
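这一"读回再写回"的操作可以借助 kubectl 完成,例如(示意;资源类型与名称仅为示例):

```shell
# 以当前支持的最新 API 版本读取对象并原样写回,
# 从而让 etcd 中存储的版本得到更新
kubectl get deployment my-deployment -o yaml | kubectl replace -f -
```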
<!--
### Update manifests
Upgrading to a new Kubernetes version can provide new APIs.
You can use `kubectl convert` command to convert manifests between different API versions.
For example:
-->
### 更新清单 {#update-manifests}
升级到新版本的 Kubernetes 可能会带来新的 API。
你可以使用 `kubectl convert` 命令在不同 API 版本之间转换清单。
例如:
```shell
kubectl convert -f pod.yaml --output-version v1
```
<!--
The `kubectl` tool replaces the contents of `pod.yaml` with a manifest that sets `kind` to
Pod (unchanged), but with a revised `apiVersion`.
-->
`kubectl` 工具会将 `pod.yaml` 的内容替换为新的清单,
其中 `kind` 仍被设置为 Pod未变
但 `apiVersion` 已被修订。
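类似地,也可以将转换结果保存到新文件中,而不是就地替换(示意;文件名仅为示例):

```shell
# 将清单转换为 apps/v1 版本并写入新文件
kubectl convert -f ./my-deployment.yaml --output-version apps/v1 > ./my-deployment-apps-v1.yaml
```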
@ -21,6 +21,8 @@ This document helps you get started using the Kubernetes [NetworkPolicy API](/do
[NetworkPolicy API](/zh/docs/concepts/services-networking/network-policies/)
声明网络策略去管理 Pod 之间的通信
{{% thirdparty-content %}}
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
@ -42,14 +44,6 @@ Make sure you've configured a network provider with network policy support. Ther
* [Romana](/zh/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/)
* [Weave 网络](/zh/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/)
<!--
The above list is sorted alphabetically by product name, not by recommendation or preference. This example is valid for a Kubernetes cluster using any of these providers.
-->
{{< note >}}
以上列表是根据产品名称按字母顺序排序,而不是按推荐或偏好排序。
下面示例对于使用了上面任何提供商的 Kubernetes 集群都是有效的。
{{< /note >}}
<!-- steps -->
<!--
@ -74,7 +68,7 @@ Expose the Deployment through a Service called `nginx`.
-->
将此 Deployment 以名为 `nginx` 的 Service 暴露出来:
```console
```shell
kubectl expose deployment nginx --port=80
```
@ -89,7 +83,7 @@ The above commands create a Deployment with an nginx Pod and expose the Deployme
Service 暴露出来。名为 `nginx` 的 Pod 和 Deployment 都位于 `default`
名字空间内。
```console
```shell
kubectl get svc,pod
```
```none
@ -111,7 +105,7 @@ You should be able to access the new `nginx` service from other Pods. To access
你应该可以从其它的 Pod 访问这个新的 `nginx` 服务。
要从 default 命名空间中的其它 Pod 来访问该服务,可以启动一个 busybox 容器:
```console
```shell
kubectl run busybox --rm -ti --image=busybox /bin/sh
```
@ -167,7 +161,7 @@ Use kubectl to create a NetworkPolicy from the above `nginx-policy.yaml` file:
使用 kubectl 根据上面的 `nginx-policy.yaml` 文件创建一个 NetworkPolicy
```console
```shell
kubectl apply -f https://k8s.io/examples/service/networking/nginx-policy.yaml
```
```none
@ -183,7 +177,7 @@ When you attempt to access the `nginx` Service from a Pod without the correct la
如果你尝试从没有设定正确标签的 Pod 中去访问 `nginx` 服务,请求将会超时:
```console
```shell
kubectl run busybox --rm -ti --image=busybox -- /bin/sh
```
@ -210,7 +204,7 @@ You can create a Pod with the correct labels to see that the request is allowed:
创建一个拥有正确标签的 Pod你将看到请求是被允许的
```console
```shell
kubectl run busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh
```
<!--
@ -129,7 +129,7 @@ add-on or with associated Services:
-->
下列错误表示 CoreDNS (或 kube-dns插件或者相关服务出现了问题
```
```shell
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
```
@ -156,7 +156,7 @@ nslookup: can't resolve 'kubernetes.default'
Use the `kubectl get pods` command to verify that the DNS pod is running.
-->
### 检查 DNS Pod 是否运行
### 检查 DNS Pod 是否运行 {#check-if-the-dns-pod-is-running}
使用 `kubectl get pods` 命令来验证 DNS Pod 是否运行。
@ -192,7 +192,7 @@ will have to deploy it manually.
Use `kubectl logs` command to see logs for the DNS containers.
-->
### 检查 DNS Pod 里的错误
### 检查 DNS Pod 里的错误 {#check-for-errors-in-the-dns-pod}
使用 `kubectl logs` 命令来查看 DNS 容器的日志信息。
@ -224,7 +224,7 @@ See if there are any suspicious or unexpected messages in the logs.
Verify that the DNS service is up by using the `kubectl get service` command.
-->
### 检查是否启用了 DNS 服务
### 检查是否启用了 DNS 服务 {#is-dns-service-up}
使用 `kubectl get service` 命令来检查 DNS 服务是否已经启用。
@ -263,13 +263,14 @@ more information.
You can verify that DNS endpoints are exposed by using the `kubectl get endpoints`
command.
-->
### DNS 的端点公开了吗?
### DNS 的端点公开了吗? {#are-dns-endpoints-exposed}
你可以使用 `kubectl get endpoints` 命令来验证 DNS 的端点是否公开了。
你可以使用 `kubectl get endpoints` 命令来验证 DNS 的端点是否公开了。
```shell
kubectl get ep kube-dns --namespace=kube-system
```
```
NAME ENDPOINTS AGE
kube-dns 10.180.3.17:53,10.180.3.17:53 1h
@ -283,8 +284,8 @@ For additional Kubernetes DNS examples, see the
[cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns)
in the Kubernetes GitHub repository.
-->
如果你没看到对应的端点,请阅读
[调试服务](/zh/docs/tasks/debug-application-cluster/debug-service/)的端点部分。
如果你没看到对应的端点,请阅读
[调试服务](/zh/docs/tasks/debug-application-cluster/debug-service/)的端点部分。
若需要了解更多的 Kubernetes DNS 例子,请在 Kubernetes GitHub 仓库里查看
[cluster-dns 示例](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns)。
@ -295,12 +296,12 @@ in the Kubernetes GitHub repository.
You can verify if queries are being received by CoreDNS by adding the `log` plugin to the CoreDNS configuration (aka Corefile).
The CoreDNS Corefile is held in a ConfigMap named `coredns`. To edit it, use the command ...
-->
### DNS 查询有被接收或者执行吗?
### DNS 查询有被接收或者执行吗? {#are-dns-queries-bing-received-processed}
你可以通过给 CoreDNS 的配置文件(也叫 Corefile添加 `log` 插件来检查查询是否被正确接收。
CoreDNS 的 Corefile 被保存在一个叫 `coredns` 的 ConfigMap 里,使用下列命令来编辑它:
```
```shell
kubectl -n kube-system edit configmap coredns
```
@ -309,7 +310,7 @@ Then add `log` in the Corefile section per the example below.
-->
然后按下面的例子给 Corefile 添加 `log`
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
@ -252,7 +252,7 @@ The idea is that when a cluster is using nodes that have many cores,
cores, `nodesPerReplica` dominates.
There are other supported scaling patterns. For details, see
[cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler).
[cluster-proportional-autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).
-->
注意 `coresPerReplica``nodesPerReplica` 的值都是整数。
@ -260,7 +260,7 @@ There are other supported scaling patterns. For details, see
当一个集群使用具有较少核心的节点时,由 `nodesPerReplica` 来控制。
其它的扩缩模式也是支持的,详情查看
[cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler)。
[cluster-proportional-autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler)。
<!--
## Disable DNS horizontal autoscaling
@ -409,9 +409,9 @@ patterns: *linear* and *ladder*.
<!--
* Read about [Guaranteed Scheduling For Critical Add-On Pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
* Learn more about the
[implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler).
[implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).
-->
* 阅读[为关键插件 Pod 提供的调度保障](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)
* 进一步了解 [cluster-proportional-autoscaler 实现](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler)
* 进一步了解 [cluster-proportional-autoscaler 实现](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler)
@ -0,0 +1,61 @@
---
title: 启用/禁用 Kubernetes API
content_type: task
---
<!--
---
title: Enable Or Disable A Kubernetes API
content_type: task
---
-->
<!-- overview -->
<!--
This page shows how to enable or disable an API version from your cluster's
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
-->
本页展示如何启用或禁用集群
{{< glossary_tooltip text="控制平面" term_id="control-plane" >}}
的某个 API 版本。
<!-- steps -->
<!--
Specific API versions can be turned on or off by passing `--runtime-config=api/<version>` as a
command line argument to the API server. The values for this argument are a comma-separated
list of API versions. Later values override earlier values.
The `runtime-config` command line argument also supports 2 special keys:
-->
通过 API 服务器的命令行参数 `--runtime-config=api/<version>`
可以开启/关闭某个指定的 API 版本。
此参数的值是一个逗号分隔的 API 版本列表。
此列表中,后面的值可以覆盖前面的值。
命令行参数 `runtime-config` 还支持两个特殊的键key
<!--
- `api/all`, representing all known APIs
- `api/legacy`, representing only legacy APIs. Legacy APIs are any APIs that have been
explicitly [deprecated](/zh/docs/reference/using-api/deprecation-policy/).
For example, to turning off all API versions except v1, pass `--runtime-config=api/all=false,api/v1=true`
to the `kube-apiserver`.
-->
- `api/all`:指所有已知的 API
- `api/legacy`:指过时的 API。过时的 API 就是明确地
[弃用](/zh/docs/reference/using-api/deprecation-policy/)
的 API。
例如,要停用除 v1 之外的全部其他 API 版本,
可以用参数 `--runtime-config=api/all=false,api/v1=true` 来启动 `kube-apiserver`。
## {{% heading "whatsnext" %}}
<!--
Read the [full documentation](/docs/reference/command-line-tools-reference/kube-apiserver/)
for the `kube-apiserver` component.
-->
阅读[完整的文档](/zh/docs/reference/command-line-tools-reference/kube-apiserver/),
以了解 `kube-apiserver` 组件。
@ -1,110 +1,130 @@
---
reviewers:
- bowei
- freehan
title: 启用端点切片
title: 启用 EndpointSlices
content_type: task
---
<!--
---
reviewers:
- bowei
- freehan
title: Enabling Endpoint Slices
title: Enabling EndpointSlices
content_type: task
---
-->
<!-- overview -->
<!--
This page provides an overview of enabling Endpoint Slices in Kubernetes.
This page provides an overview of enabling EndpointSlices in Kubernetes.
-->
本页提供启用 Kubernetes 端点切片的总览
本页提供启用 Kubernetes EndpointSlice 的总览。
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
<!--
## Introduction
Endpoint Slices provide a scalable and extensible alternative to Endpoints in
EndpointSlices provide a scalable and extensible alternative to Endpoints in
Kubernetes. They build on top of the base of functionality provided by Endpoints
and extend that in a scalable way. When Services have a large number (>100) of
network endpoints, they will be split into multiple smaller Endpoint Slice
network endpoints, they will be split into multiple smaller EndpointSlice
resources instead of a single large Endpoints resource.
-->
## 介绍
端点切片为 Kubernetes 端点提供了可伸缩和可扩展的替代方案。它们建立在端点提供的功能基础之上,并以可伸缩的方式进行扩展。当服务具有大量(>100网络端点
它们将被分成多个较小的端点切片资源,而不是单个大型端点资源。
EndpointSlice (端点切片)为 Kubernetes Endpoints 提供了可伸缩和可扩展的替代方案。
它们建立在 Endpoints 提供的功能基础之上,并以可伸缩的方式进行扩展。
当 Service 具有大量(>100网络端点时它们将被分成多个较小的 EndpointSlice 资源,
而不是单个大型 Endpoints 资源。
<!--
## Enabling Endpoint Slices
## Enabling EndpointSlices
-->
## 启用端点切片
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
## 启用 EndpointSlice
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
{{< note >}}
<!--
Although Endpoint Slices may eventually replace Endpoints, many Kubernetes
components still rely on Endpoints. For now, enabling Endpoint Slices should be
Although EndpointSlices may eventually replace Endpoints, many Kubernetes
components still rely on Endpoints. For now, enabling EndpointSlices should be
seen as an addition to Endpoints in a cluster, not a replacement for them.
-->
尽管端点切片最终可能会取代端点,但许多 Kubernetes 组件仍然依赖于端点。目前,启用端点切片应该被视为集群中端点的补充,而不是它们的替代。
尽管 EndpointSlice 最终可能会取代 Endpoints但许多 Kubernetes 组件仍然依赖于
Endpoints。目前启用 EndpointSlice 应该被视为集群中 Endpoints 的补充,而不是
替代它们。
{{< /note >}}
<!--
As an alpha feature, Endpoint Slices are not enabled by default in Kubernetes.
Enabling Endpoint Slices requires as many as 3 changes to Kubernetes cluster
configuration.
To enable the Discovery API group that includes Endpoint Slices, use the runtime
config flag (`--runtime-config=discovery.k8s.io/v1alpha1=true`).
The logic responsible for watching services, pods, and nodes and creating or
updating associated Endpoint Slices lives within the EndpointSlice controller.
This is disabled by default but can be enabled with the controllers flag on
kube-controller-manager (`--controllers=endpointslice`).
For Kubernetes components like kube-proxy to actually start using Endpoint
Slices, the EndpointSlice feature gate will need to be enabled
(`--feature-gates=EndpointSlice=true`).
EndpointSlice functionality in Kubernetes is made up of several different
components, most are enabled by default:
-->
Kubernetes 中的 EndpointSlice 功能包含若干不同组件。它们中的大部分都是
默认被启用的:
作为 Alpha 功能默认情况下Kubernetes 中未启用端点切片。启用端点切片需要对 Kubernetes 集群进行多达 3 项配置修改。
<!--
* _The EndpointSlice API_: EndpointSlices are part of the
`discovery.k8s.io/v1beta1` API. This is beta and enabled by default since
Kubernetes 1.17. All components listed below are dependent on this API being
enabled.
* _The EndpointSlice Controller_: This {{< glossary_tooltip text="controller"
term_id="controller" >}} maintains EndpointSlices for Services and the Pods
they reference. This is controlled by the `EndpointSlice` feature gate. It has
been enabled by default since Kubernetes 1.18.
-->
* _EndpointSlice API_EndpointSlice 隶属于 `discovery.k8s.io/v1beta1` API。
此 API 处于 Beta 阶段,从 Kubernetes 1.17 开始默认被启用。
下面列举的所有组件都依赖于此 API 被启用。
* _EndpointSlice 控制器_:此 {{< glossary_tooltip text="控制器" term_id="controller" >}}
为 Service 维护 EndpointSlice 及其引用的 Pods。
此控制器通过 `EndpointSlice` 特性门控控制。自从 Kubernetes 1.18 起,
该特性门控默认被启用。
要启用包括端点切片的 Discovery API 组,请使用运行时配置标志(`--runtime-config=discovery.k8s.io/v1alpha1=true`)。
该逻辑负责监视服务pod 和节点以及创建或更新与之关联,在端点切片控制器内的端点切片。
默认情况下,此功能处于禁用状态,但可以通过启用在 kube-controller-manager 控制器的标志(`--controllers=endpointslice`)来开启。
对于像 kube-proxy 这样的 Kubernetes 组件真正开始使用端点切片,需要开启端点切片功能标志(`--feature-gates=EndpointSlice=true`)。
<!--
* _The EndpointSliceMirroring Controller_: This {{< glossary_tooltip
text="controller" term_id="controller" >}} mirrors custom Endpoints to
EndpointSlices. This is controlled by the `EndpointSlice` feature gate. It has
been enabled by default since Kubernetes 1.19.
* _Kube-Proxy_: When {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy">}}
is configured to use EndpointSlices, it can support higher numbers of Service
endpoints. This is controlled by the `EndpointSliceProxying` feature gate on
Linux and `WindowsEndpointSliceProxying` on Windows. It has been enabled by
default on Linux since Kubernetes 1.19. It is not enabled by default for
Windows nodes. To configure kube-proxy to use EndpointSlices on Windows, you
can enable the `WindowsEndpointSliceProxying` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) on
kube-proxy.
-->
* _EndpointSliceMirroring 控制器_:此 {{< glossary_tooltip text="控制器" term_id="controller" >}}
将自定义的 Endpoints 映射为 EndpointSlice。
控制器受 `EndpointSlice` 特性门控控制。该特性门控自 1.19 开始被默认启用。
* _kube-proxy_:当 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy">}}
被配置为使用 EndpointSlice 时,它会支持更大数量的 Service 端点。
此功能在 Linux 上受 `EndpointSliceProxying` 特性门控控制;在 Windows 上受
`WindowsEndpointSliceProxying` 特性门控控制。
在 Linux 上,从 Kubernetes 1.19 版本起自动启用。目前尚未在 Windows 节点
上默认启用。
要在 Windows 节点上配置 kube-proxy 使用 EndpointSlice你需要为 kube-proxy 启用
`WindowsEndpointSliceProxying`
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
<!--
## Using Endpoint Slices
With Endpoint Slices fully enabled in your cluster, you should see corresponding
With EndpointSlices fully enabled in your cluster, you should see corresponding
EndpointSlice resources for each Endpoints resource. In addition to supporting
existing Endpoints functionality, Endpoint Slices should include new bits of
information such as topology. They will allow for greater scalability and
extensibility of network endpoints in your cluster.
existing Endpoints functionality, EndpointSlices include new bits of information
such as topology. They will allow for greater scalability and extensibility of
network endpoints in your cluster.
-->
## 使用 EndpointSlice
## 使用端点切片
在集群中完全启用 EndpointSlice 的情况下,你应该看到对应于每个
Endpoints 资源的 EndpointSlice 资源。除了支持现有的 Endpoints 功能外,
EndpointSlice 还引入了拓扑结构等新的信息。它们将使集群中网络端点具有更强的
可伸缩性和可扩展性。
在集群中完全启用端点切片的情况下,您应该看到对应的每个端点资源的端点切片资源。除了兼容现有的端点功能,端点切片应包括拓扑等新的信息。它们将使集群中网络端点具有更强的可伸缩性,可扩展性。
@ -26,7 +26,10 @@ This page shows how to configure and enable the ip-masq-agent.
<!--
The ip-masq-agent configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
-->
ip-masq-agent 配置 iptables 规则以隐藏位于集群节点 IP 地址后面的 pod 的 IP 地址。 这通常在将流量发送到集群的 pod [CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1) 范围之外的目的地时使用。
ip-masq-agent 配置 iptables 规则以隐藏位于集群节点 IP 地址后面的 Pod 的 IP 地址。
这通常在将流量发送到集群的 Pod
[CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)
范围之外的目的地时使用。
<!--
### **Key Terms**
@ -34,47 +37,56 @@ ip-masq-agent 配置 iptables 规则以隐藏位于集群节点 IP 地址后面
### **关键术语**
<!--
* **NAT (Network Address Translation)**
Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing.
* **NAT (Network Address Translation)**
Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing.
-->
* **NAT (网络地址解析)**
是一种通过修改 IP 地址头中的源和/或目标地址信息将一个 IP 地址重新映射到另一个 IP 地址的方法。通常由执行 IP 路由的设备执行。
* **NAT (网络地址转译)**
是一种通过修改 IP 地址头中的源和/或目标地址信息将一个 IP 地址重新映射
到另一个 IP 地址的方法。通常由执行 IP 路由的设备执行。
<!--
* **Masquerading**
A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address.
* **Masquerading**
A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address.
-->
* **伪装**
NAT 的一种形式,通常用于执行多对一地址转换,其中多个源 IP 地址被隐藏在单个地址后面,该地址通常是执行 IP 路由的设备。在 Kubernetes 中,这是节点的 IP 地址。
* **伪装**
NAT 的一种形式,通常用于执行多对一地址转换,其中多个源 IP 地址被隐藏在
单个地址后面,该地址通常是执行 IP 路由的设备。在 Kubernetes 中,
这是节点的 IP 地址。
<!--
* **CIDR (Classless Inter-Domain Routing)**
Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
* **CIDR (Classless Inter-Domain Routing)**
Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
-->
* **CIDR (无类别域间路由)**
基于可变长度子网掩码允许指定任意长度的前缀。CIDR 引入了一种新的 IP 地址表示方法,现在通常称为**CIDR表示法**,其中地址或路由前缀后添加一个后缀,用来表示前缀的位数,例如 192.168.2.0/24。
* **CIDR (无类别域间路由)**
基于可变长度子网掩码,允许指定任意长度的前缀。
CIDR 引入了一种新的 IP 地址表示方法,现在通常称为**CIDR表示法**
其中地址或路由前缀后添加一个后缀,用来表示前缀的位数,例如 192.168.2.0/24。
<!--
* **Link Local**
A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
* **Link Local**
A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
-->
* **本地链路**
本地链路是仅对网段或主机所连接的广播域内的通信有效的网络地址。IPv4的本地链路地址在 CIDR 表示法的地址块 169.254.0.0/16 中定义。
* **本地链路**
本地链路是仅对网段或主机所连接的广播域内的通信有效的网络地址。
IPv4 的本地链路地址在 CIDR 表示法的地址块 169.254.0.0/16 中定义。
<!--
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
-->
ip-masq-agent 配置 iptables 规则,以便在将流量发送到集群节点的 IP 和集群 IP 范围之外的目标时
处理伪装节点/Pod 的 IP 地址。这基本上隐藏了集群节点 IP 地址后面的 Pod IP 地址。
处理伪装节点或 Pod 的 IP 地址。这本质上隐藏了集群节点 IP 地址后面的 Pod IP 地址。
在某些环境中,去往“外部”地址的流量必须从已知的机器地址发出。
例如,在 Google Cloud 中,任何到互联网的流量都必须来自 VM 的 IP。
使用容器时,如 Google Kubernetes Engine从 Pod IP 发出的流量将被拒绝出站。
为了避免这种情况,我们必须将 Pod IP 隐藏在 VM 自己的 IP 地址后面 - 通常称为“伪装”。
默认情况下,代理配置为将[RFC 1918](https://tools.ietf.org/html/rfc1918)指定的三个私有
IP 范围视为非伪装 [CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)。
默认情况下,代理配置为将
[RFC 1918](https://tools.ietf.org/html/rfc1918)
指定的三个私有 IP 范围视为非伪装
[CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)。
这些范围是 10.0.0.0/8,172.16.0.0/12 和 192.168.0.0/16。
默认情况下代理还将链路本地地址169.254.0.0/16视为非伪装 CIDR。
代理程序配置为每隔 60 秒从 */etc/config/ip-masq-agent* 重新加载其配置,这也是可修改的。
代理程序配置为每隔 60 秒从 */etc/config/ip-masq-agent* 重新加载其配置,
这也是可修改的。
![masq/non-masq example](/images/docs/ip-masq.png)
@ -86,17 +98,21 @@ The agent configuration file must be written in YAML or JSON syntax, and may con
<!--
* **nonMasqueradeCIDRs:** A list of strings in [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
-->
* **nonMasqueradeCIDRs:** [CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1) 表示法中的字符串列表,用于指定不需伪装的地址范围。
* **nonMasqueradeCIDRs:**
[CIDR](https://zh.wikipedia.org/wiki/%E6%97%A0%E7%B1%BB%E5%88%AB%E5%9F%9F%E9%97%B4%E8%B7%AF%E7%94%B1)
表示法中的字符串列表,用于指定不需伪装的地址范围。
<!--
* **masqLinkLocal:** A Boolean (true / false) which indicates whether to masquerade traffic to the link local prefix 169.254.0.0/16. False by default.
-->
* **masqLinkLocal:** 布尔值 (true / false),表示是否将流量伪装到本地链路前缀 169.254.0.0/16。默认为 false。
* **masqLinkLocal:** 布尔值 (true / false),表示是否将流量伪装到
本地链路前缀 169.254.0.0/16。默认为 false。
<!--
* **resyncInterval:** An interval at which the agent attempts to reload config from disk. e.g. '30s' where 's' is seconds, 'ms' is milliseconds etc...
-->
* **resyncInterval:** 代理尝试从磁盘重新加载配置的时间间隔。 例如 '30s',其中 's' 是秒,'ms' 是毫秒等...
* **resyncInterval:** 代理尝试从磁盘重新加载配置的时间间隔。
例如 '30s',其中 's' 是秒,'ms' 是毫秒等...
<!--
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masqueraded. Any other traffic (assumed to be internet) will be masqueraded. An example of a local destination from a pod could be its Node's IP address as well as another node's address or one of the IP addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The below entries show the default set of rules that are applied by the ip-masq-agent:
@ -106,7 +122,6 @@ Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masq
Pod 访问本地目的地的例子,可以是其节点的 IP 地址、另一节点的地址或集群的 IP 地址范围内的一个 IP 地址。
默认情况下,任何其他流量都将伪装。以下条目展示了 ip-masq-agent 的默认使用的规则:
<!--
```
iptables -t nat -L IP-MASQ-AGENT
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
@ -115,16 +130,6 @@ RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent:
RETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
```
-->
```
iptables -t nat -L IP-MASQ-AGENT
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: 集群本地流量不被 MASQUERADE 控制 */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: 集群本地流量不被 MASQUERADE 控制 */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: 集群本地流量不被 MASQUERADE 控制 */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 192.168.0.0/16 /* ip-masq-agent: 集群本地流量不被 MASQUERADE 控制 */ ADDRTYPE match dst-type !LOCAL
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: 出站流量应受 MASQUERADE 控制 (此规则必须在集群本地 CIDR 规则之后) */ ADDRTYPE match dst-type !LOCAL
```
<!--
@ -143,25 +148,26 @@ By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7
To create an ip-masq-agent, run the following kubectl command:
-->
## 创建 ip-masq-agent
通过运行以下 kubectl 指令创建 ip-masq-agent:
`
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/ip-masq-agent/master/ip-masq-agent.yaml
`
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml
```
<!--
You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.
-->
你必须同时将适当的节点标签应用于集群中希望代理运行的任何节点。
`
```shell
kubectl label nodes my-node beta.kubernetes.io/masq-agent-ds-ready=true
`
```
<!--
More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-incubator/ip-masq-agent)
More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-sigs/ip-masq-agent)
-->
更多信息可以通过 ip-masq-agent 文档 [这里](https://github.com/kubernetes-incubator/ip-masq-agent) 找到
更多信息可以通过 ip-masq-agent 文档 [这里](https://github.com/kubernetes-sigs/ip-masq-agent) 找到。
<!--
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called "config".
@ -169,14 +175,15 @@ In most cases, the default set of rules should be sufficient; however, if this i
在大多数情况下,默认的规则集应该足够;但是,如果你的群集不是这种情况,则可以创建并应用
[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
来自定义受影响的 IP 范围。
例如,要允许 ip-masq-agent 仅作用于 10.0.0.0/8你可以在一个名为 “config” 的文件中创建以下
例如,要允许 ip-masq-agent 仅作用于 10.0.0.0/8你可以在一个名为 “config” 的文件中创建以下
[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/) 。
{{< note >}}
<!--
It is important that the file is called config since, by default, that will be used as the key for lookup by the ip-masq-agent:
-->
重要的是,该文件之所以被称为 config因为默认情况下该文件将被用作 ip-masq-agent 查找的关键:
重要的是,该文件必须名为 config
因为默认情况下它将被用作 ip-masq-agent 查找的键:
```
nonMasqueradeCIDRs:
@ -202,7 +209,6 @@ After the resync interval has expired, you should see the iptables rules reflect
为周期定期检查并应用于集群节点。
重新同步间隔到期后,你应该看到你的更改在 iptables 规则中体现:
<!--
```
iptables -t nat -L IP-MASQ-AGENT
Chain IP-MASQ-AGENT (1 references)
@ -211,20 +217,13 @@ RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent:
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
```
-->
```
iptables -t nat -L IP-MASQ-AGENT
Chain IP-MASQ-AGENT (1 references)
target prot opt source destination
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: 集群本地流量不被 MASQUERADE 控制 */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: 出站流量应受 MASQUERADE 控制 (此规则必须在集群本地 CIDR 规则之后) */ ADDRTYPE match dst-type !LOCAL
```
<!--
By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set *masqLinkLocal* to true in the config map.
-->
默认情况下,本地链路范围 (169.254.0.0/16) 也由 ip-masq agent 处理,该代理设置适当的 iptables 规则。 要使 ip-masq-agent 忽略本地链路,可以在配置映射中将 *masqLinkLocal* 设置为true。
默认情况下,本地链路范围 (169.254.0.0/16) 也由 ip-masq agent 处理,
该代理设置适当的 iptables 规则。 要使 ip-masq-agent 忽略本地链路,
可以在配置映射中将 *masqLinkLocal* 设置为 true。
```
nonMasqueradeCIDRs:
@ -15,9 +15,9 @@ content_type: task
{{< feature-state for_k8s_version="v1.15" state="stable" >}}
<!--
Client certificates generated by [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) expire after 1 year. This page explains how to manage certificate renewals with kubeadm.
Client certificates generated by [kubeadm](/docs/reference/setup-tools/kubeadm/) expire after 1 year. This page explains how to manage certificate renewals with kubeadm.
-->
由 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/kubeadm/) 生成的客户端证书在 1 年后到期。
由 [kubeadm](/zh/docs/reference/setup-tools/kubeadm/) 生成的客户端证书在 1 年后到期。
本页说明如何使用 kubeadm 管理证书续订。
## {{% heading "prerequisites" %}}
@ -89,7 +89,7 @@ You can use the `check-expiration` subcommand to check certificate expiration.
你可以使用 `check-expiration` 子命令来检查证书是否过期
```
```shell
kubeadm alpha certs check-expiration
```
@ -2,7 +2,7 @@
title: 升级 kubeadm 集群
content_type: task
weight: 20
min-kubernetes-server-version: 1.18
min-kubernetes-server-version: 1.19
---
<!--
reviewers:
@ -17,10 +17,10 @@ min-kubernetes-server-version: 1.18
<!--
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
1.17.x to version 1.18.x, and from version 1.18.x to 1.18.y (where `y > x`).
1.18.x to version 1.19.x, and from version 1.19.x to 1.19.y (where `y > x`).
-->
本页介绍如何将 `kubeadm` 创建的 Kubernetes 集群从 1.17.x 版本升级到 1.18.x 版本,
或者从版本 1.18.x 升级到 1.18.y ,其中 `y > x`
本页介绍如何将 `kubeadm` 创建的 Kubernetes 集群从 1.18.x 版本升级到 1.19.x 版本,
或者从版本 1.19.x 升级到 1.19.y ,其中 `y > x`
<!--
To see information about upgrading clusters created using older versions of kubeadm,
@ -29,15 +29,17 @@ please refer to following pages instead:
要查看 kubeadm 创建的有关旧版本集群升级的信息,请参考以下页面:
<!--
- [Upgrading kubeadm cluster from 1.17 to 1.18](https://v1-18.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading kubeadm cluster from 1.16 to 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading kubeadm cluster from 1.15 to 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading kubeadm cluster from 1.14 to 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
- [Upgrading kubeadm cluster from 1.13 to 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
-->
- [将 kubeadm 集群从 1.16 升级到 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [将 kubeadm 集群从 1.15 升级到 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [将 kubeadm 集群从 1.14 升级到 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
- [将 kubeadm 集群从 1.13 升级到 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
- [将 kubeadm 集群从 1.17 升级到 1.18](https://v1-18.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [将 kubeadm 集群从 1.16 升级到 1.17](https://v1-17.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [将 kubeadm 集群从 1.15 升级到 1.16](https://v1-16.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [将 kubeadm 集群从 1.14 升级到 1.15](https://v1-15.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
- [将 kubeadm 集群从 1.13 升级到 1.14](https://v1-15.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
<!--
The upgrade workflow at high level is the following:
@ -55,14 +57,14 @@ The upgrade workflow at high level is the following:
## {{% heading "prerequisites" %}}
<!--
- You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later.
- You need to have a kubeadm Kubernetes cluster running version 1.18.0 or later.
- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
- The cluster should use a static control plane and etcd pods or external etcd.
- Make sure you read the [release notes]({{< latest-release-notes >}}) carefully.
- Make sure to back up any important components, such as app-level state stored in a database.
`kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
-->
- 你需要有一个由 `kubeadm` 创建并运行着 1.17.0 或更高版本的 Kubernetes 集群。
- 你需要有一个由 `kubeadm` 创建并运行着 1.18.0 或更高版本的 Kubernetes 集群。
- [禁用交换分区](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux)。
- 集群应使用静态的控制平面和 etcd Pod 或者外部 etcd。
- 务必仔细认真阅读[发行说明]({{< latest-release-notes >}})。
@ -89,26 +91,26 @@ The upgrade workflow at high level is the following:
<!--
## Determine which version to upgrade to
Find the latest stable 1.18 version:
Find the latest stable 1.19 version:
-->
## 确定要升级到哪个版本
找到最新的稳定版 1.18
找到最新的稳定版 1.19
{{< tabs name="k8s_install_versions" >}}
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```
apt update
apt-cache policy kubeadm
# 在列表中查找最新的 1.18 版本
# 它看起来应该是 1.18.x-00 ,其中 x 是最新的补丁
# 在列表中查找最新的 1.19 版本
# 它看起来应该是 1.19.x-00 ,其中 x 是最新的补丁
```
{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
```
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# 在列表中查找最新的 1.18 版本
# 它看起来应该是 1.18.x-0 ,其中 x 是最新的补丁版本
# 在列表中查找最新的 1.19 版本
# 它看起来应该是 1.19.x-0 ,其中 x 是最新的补丁版本
```
{{% /tab %}}
{{< /tabs >}}
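如果想用脚本自动从候选列表中确定最新的 1.19 补丁版本,可以借助 `sort -V` 按版本号排序。下面是一个假设性示例,版本列表为虚构样例,实际应取自 `apt-cache policy kubeadm` 或 `yum list` 的输出:

```shell
# 虚构的候选版本列表,仅作演示
versions='1.18.10-00
1.19.3-00
1.19.4-00
1.19.2-00'

# 仅保留 1.19 系列,按版本号升序排序后取最后一行,即最新补丁版本
latest=$(printf '%s\n' "$versions" | grep '^1\.19\.' | sort -V | tail -n 1)
echo "$latest"
```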
@ -130,16 +132,20 @@ yum list --showduplicates kubeadm --disableexcludes=kubernetes
{{< tabs name="k8s_install_kubeadm_first_cp" >}}
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```shell
# 用最新的修补程序版本替换 1.18.x-00 中的 x
# 用最新的修补程序版本替换 1.19.x-00 中的 x
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-get update && apt-get install -y kubeadm=1.19.x-00 && \
apt-mark hold kubeadm
# 从 apt-get 1.1 版本起,你也可以使用下面的方法
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00
```
{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
```shell
# 用最新的修补程序版本替换 1.18.x-0 中的 x
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
# 用最新的修补程序版本替换 1.19.x-0 中的 x
yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
@ -192,36 +198,48 @@ yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.3
[upgrade/versions] kubeadm version: v1.18.0
[upgrade/versions] Latest stable version: v1.18.0
[upgrade/versions] Latest version in the v1.17 series: v1.18.0
[upgrade/versions] Cluster version: v1.18.4
[upgrade/versions] kubeadm version: v1.19.0
[upgrade/versions] Latest stable version: v1.19.0
[upgrade/versions] Latest version in the v1.18 series: v1.18.4
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 1 x v1.17.3 v1.18.0
Kubelet 1 x v1.18.4 v1.19.0
Upgrade to the latest version in the v1.17 series:
Upgrade to the latest version in the v1.18 series:
COMPONENT CURRENT AVAILABLE
API Server v1.17.3 v1.18.0
Controller Manager v1.17.3 v1.18.0
Scheduler v1.17.3 v1.18.0
Kube Proxy v1.17.3 v1.18.0
CoreDNS 1.6.5 1.6.7
Etcd 3.4.3 3.4.3-0
API Server v1.18.4 v1.19.0
Controller Manager v1.18.4 v1.19.0
Scheduler v1.18.4 v1.19.0
Kube Proxy v1.18.4 v1.19.0
CoreDNS 1.6.7 1.7.0
Etcd 3.4.3-0 3.4.7-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.18.0
kubeadm upgrade apply v1.19.0
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
```
<!--
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
It also shows a table with the component config version states.
-->
此命令检查你的集群是否可以升级,并获取可以升级到的目标版本。
命令输出中还包含一个记录组件配置版本状态的表格。
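如果想在自动化脚本中检测是否有组件配置需要手动升级,可以解析该表格的最后一列。下面是一个假设性示例(表格内容为虚构样例,这里故意将一行标记为 yes;上面的真实输出中两行均为 no):

```shell
# 虚构的组件配置状态表样例,仅作演示
table='API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             yes'

# 找出最后一列为 yes 的 API 组,这些配置在升级前需要手动处理
manual=$(printf '%s\n' "$table" | awk 'NR>1 && $NF=="yes" {print $1}')
echo "$manual"
```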
<!--
`kubeadm upgrade` also automatically renews the certificates that it manages on this node.
@ -234,19 +252,30 @@ For more information see the [certificate management guide](/docs/tasks/administ
关于更多细节信息,可参见[证书管理指南](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)。
{{</ note >}}
{{< note >}}
<!--
If `kubeadm upgrade plan` shows any component configs that require manual upgrade, users must provide
a config file with replacement configs to `kubeadm upgrade apply` via the `--config` command line flag.
Failing to do so will cause `kubeadm upgrade apply` to exit with an error and not perform an upgrade.
-->
如果 `kubeadm upgrade plan` 显示有任何组件配置需要手动升级,则用户必须
通过 `--config` 命令行参数向 `kubeadm upgrade apply` 提供带有替换配置的配置文件。
否则,`kubeadm upgrade apply` 将报错退出,并且不会执行升级。
{{</ note >}}
<!--
- Choose a version to upgrade to, and run the appropriate command. For example:
```shell
# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.18.x
sudo kubeadm upgrade apply v1.19.x
```
-->
- 选择要升级到的版本,然后运行相应的命令。例如:
```shell
# 将 x 替换为你为此次升级所选的补丁版本号
sudo kubeadm upgrade apply v1.18.x
sudo kubeadm upgrade apply v1.19.x
```
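在真正执行 `kubeadm upgrade apply` 之前,可以先在脚本里校验所选版本号的格式,避免把占位符 `x` 原样传入。下面是一个假设性的校验片段(`target` 为虚构取值):

```shell
# 虚构的目标版本,实际应替换为你选定的补丁版本
target="v1.19.3"

# 校验格式是否为 v1.19.<数字>;通过校验后再执行 sudo kubeadm upgrade apply "$target"
if printf '%s' "$target" | grep -Eq '^v1\.19\.[0-9]+$'; then
  ok=yes
else
  ok=no
fi
echo "$ok"
```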
<!--
@ -254,81 +283,84 @@ For more information see the [certificate management guide](/docs/tasks/administ
-->
你应该可以看见与下面类似的输出:
```none
```
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.0"
[upgrade/versions] Cluster version: v1.17.3
[upgrade/versions] kubeadm version: v1.18.0
[upgrade/version] You have chosen to change the cluster version to "v1.19.0"
[upgrade/versions] Cluster version: v1.18.4
[upgrade/versions] kubeadm version: v1.19.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.0"...
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.0" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012"
W0308 18:48:14.535122 3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
Static pod: etcd-kind-control-plane hash: 59e40b2aab1cd7055e64450b5ee438f0
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests999800980"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: f717874150ba572f020dcd89db8480fc
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156
Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
Static pod: kube-controller-manager-kind-control-plane hash: b155b63c70e798b806e64a866e297dd0
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d
Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
Static pod: kube-scheduler-kind-control-plane hash: 260018ac854dbf1c9fe82493e88aec31
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
W0713 16:26:14.074656 2986 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.0". Enjoy!
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
```
@ -396,25 +428,23 @@ sudo kubeadm upgrade apply
{{< tabs name="k8s_install_kubelet" >}}
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```shell
# 用最新的补丁版本替换 1.18.x-00 中的 x
# 用最新的补丁版本替换 1.19.x-00 中的 x
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \
apt-mark hold kubelet kubectl
```
从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
```shell
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00
```
{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
用最新的补丁版本替换 1.18.x-00 中的 x
用最新的补丁版本替换 1.19.x-0 中的 x
```shell
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
@ -437,7 +467,8 @@ without compromising the minimum required capacity for running your workloads.
-->
## 升级工作节点
工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点,以不影响运行工作负载所需的最小容量。
工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点,
以不影响运行工作负载所需的最小容量。
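上述"一次一个节点"的流程可以写成一个简单的脚本骨架。下面是一个假设性示例:为安全起见,这里只用 `echo` 打印将要执行的命令(节点名为虚构),实际使用时应去掉 `echo`,并在确认每个节点恢复 `Ready` 后再处理下一个:

```shell
# 虚构的工作节点列表,仅作演示
nodes="node-1 node-2"

# 为每个节点生成"腾空 -> 升级 -> 解除保护"的命令序列
plan=$(for n in $nodes; do
  # 腾空节点:驱逐工作负载并标记为不可调度
  echo "kubectl drain $n --ignore-daemonsets"
  # 在该节点上执行升级(具体的包升级命令见下文各发行版的步骤)
  echo "ssh $n 'kubeadm upgrade node'"
  # 解除保护,使节点重新可调度
  echo "kubectl uncordon $n"
done)
echo "$plan"
```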
<!--
### Upgrade kubeadm
@ -453,33 +484,31 @@ without compromising the minimum required capacity for running your workloads.
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```shell
# 将 1.18.x-00 中的 x 替换为最新的补丁版本
# 将 1.19.x-00 中的 x 替换为最新的补丁版本
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-get update && apt-get install -y kubeadm=1.19.x-00 && \
apt-mark hold kubeadm
```
从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
```shell
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00
```
{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
```shell
# 用最新的补丁版本替换 1.18.x-00 中的 x
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
# 用最新的补丁版本替换 1.19.x-0 中的 x
yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
<!--
### Cordon the node
### Drain the node
-->
### 保护节点
### 腾空节点
<!--
1. Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:
@ -546,17 +575,15 @@ yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```shell
# 将 1.18.x-00 中的 x 替换为最新的补丁版本
# 将 1.19.x-00 中的 x 替换为最新的补丁版本
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \
apt-mark hold kubelet kubectl
```
从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
```shell
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00
```
{{% /tab %}}
@ -564,7 +591,7 @@ apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x
```shell
# 将 1.19.x-0 中的 x 替换为最新的补丁版本
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
@ -680,6 +707,7 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
- The control plane is healthy
- Enforces the version skew policies.
- Makes sure the control plane images are available or available to pull to the machine.
- Generates replacements and/or uses user supplied overwrites if component configs require version upgrades.
- Upgrades the control plane components or rollbacks if any of them fails to come up.
- Applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created.
- Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
@ -692,8 +720,9 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
- API 服务器是可访问的
- 所有节点处于 `Ready` 状态
- 控制面是健康的
- 强制执行版本 skew 策略。
- 强制执行版本偏差策略。
- 确保控制面的镜像是可用的或可拉取到服务器上。
- 如果组件配置要求版本升级,则生成替代配置与/或使用用户提供的覆盖版本配置。
- 升级控制面组件或回滚(如果其中任何一个组件无法启动)。
- 应用新的 `kube-dns``kube-proxy` 清单,并强制创建所有必需的 RBAC 规则。
- 如果旧文件在 180 天后过期,将创建 API 服务器的新证书和密钥文件并备份旧文件。
@ -717,6 +746,8 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
- Fetches the kubeadm `ClusterConfiguration` from the cluster.
- Upgrades the kubelet configuration for this node.
- Upgrades the static Pod manifests for the control plane components.
-->
`kubeadm upgrade node` 在工作节点上完成以下工作:

View File

@ -397,9 +397,7 @@ Production likes to run cattle, so let's create some cattle pods.
生产环境需要以放牛的方式运维,让我们创建一些名为 `cattle` 的 Pod。
```shell
kubectl create deployment cattle --image=k8s.gcr.io/serve_hostname
kubectl scale deployment cattle --replicas=5
kubectl create deployment cattle --image=k8s.gcr.io/serve_hostname --replicas=5
kubectl get deployment
```

View File

@ -4,15 +4,24 @@ content_type: task
weight: 20
---
<!--
reviewers:
- danwent
- aanm
title: Use Cilium for NetworkPolicy
content_type: task
weight: 20
-->
<!-- overview -->
<!--
This page shows how to use Cilium for NetworkPolicy.
For background on Cilium, read the [Introduction to Cilium](https://cilium.readthedocs.io/en/latest/intro).
For background on Cilium, read the [Introduction to Cilium](https://docs.cilium.io/en/stable/intro).
-->
本页展示如何使用 Cilium 提供 NetworkPolicy。
关于 Cilium 的背景知识,请阅读 [Cilium 介绍](https://cilium.readthedocs.io/en/latest/intro)。
关于 Cilium 的背景知识,请阅读 [Cilium 介绍](https://docs.cilium.io/en/stable/intro)。
## {{% heading "prerequisites" %}}
@ -86,7 +95,7 @@ deployment.apps/cilium-operator created
The remainder of the Getting Started Guide explains how to enforce both L3/L4
(i.e., IP address + port) security policies, as well as L7 (e.g., HTTP) security
policies using an example application.
-->
-->
入门指南其余的部分用一个示例应用说明了如何强制执行 L3/L4即 IP 地址+端口)的安全策略
以及 L7(如 HTTP)的安全策略。
@ -94,14 +103,14 @@ policies using an example application.
## Deploying Cilium for Production Use
For detailed instructions around deploying Cilium for production, see:
[Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/gettingstarted/#installation)
[Cilium Kubernetes Installation Guide](https://docs.cilium.io/en/stable/concepts/kubernetes/intro/)
This documentation includes detailed requirements, instructions and example
production DaemonSet files.
-->
## 部署 Cilium 用于生产用途
关于部署 Cilium 用于生产的详细说明,请见
[Cilium Kubernetes 安装指南](https://cilium.readthedocs.io/en/latest/gettingstarted/#installation)
[Cilium Kubernetes 安装指南](https://docs.cilium.io/en/stable/concepts/kubernetes/intro/)
此文档包括详细的需求、说明和生产用途 DaemonSet 文件示例。
<!-- discussion -->

View File

@ -4,15 +4,27 @@ content_type: task
weight: 40
---
<!--
reviewers:
- chrismarino
title: Romana for NetworkPolicy
content_type: task
weight: 40
-->
<!-- overview -->
<!-- This page shows how to use Romana for NetworkPolicy. -->
<!--
This page shows how to use Romana for NetworkPolicy.
-->
本页展示如何使用 Romana 作为 NetworkPolicy。
## {{% heading "prerequisites" %}}
<!-- Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/). -->
完成 [kubeadm 入门指南](/zh/docs/reference/setup-tools/kubeadm/kubeadm/)中的 1、2、3 步。
<!--
Complete steps 1, 2, and 3 of the [kubeadm getting started guide](/docs/getting-started-guides/kubeadm/).
-->
完成 [kubeadm 入门指南](/zh/docs/reference/setup-tools/kubeadm/)中的 1、2、3 步。
<!-- steps -->
<!--
@ -30,14 +42,15 @@ To apply network policies use one of the following:
-->
## 使用 kubeadm 安装 Romana
按照[容器化安装指南](https://github.com/romana/romana/tree/master/containerize),使用 kubeadm 安装。
按照[容器化安装指南](https://github.com/romana/romana/tree/master/containerize)
使用 kubeadm 安装。
## 应用网络策略
使用以下的一种方式应用网络策略:
* [Romana 网络策略](https://github.com/romana/romana/wiki/Romana-policies)
* [Romana 网络策略例子](https://github.com/romana/core/blob/master/doc/policy.md)
* [Romana 网络策略例子](https://github.com/romana/core/blob/master/doc/policy.md)
* NetworkPolicy API
## {{% heading "whatsnext" %}}

Some files were not shown because too many files have changed in this diff