[zh] Fix links in zh localization (2)
parent 9c696a5883
commit 35b6327159
@@ -25,7 +25,7 @@ The cluster administration overview is for anyone creating or administering a Ku
It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
-->
集群管理概述面向任何创建和管理 Kubernetes 集群的读者人群。
我们假设你对一些核心的 Kubernetes [概念](/zh/docs/concepts/)大概了解。
我们假设你大概了解一些核心的 Kubernetes [概念](/zh/docs/concepts/)。

<!-- body -->
@@ -38,9 +38,10 @@ Not all distros are actively maintained. Choose distros which have been tested w

Before choosing a guide, here are some considerations:
-->
## 规划集群
## 规划集群 {#planning-a-cluster}

查阅[安装](/zh/docs/setup/)中的指导,获取如何规划、建立以及配置 Kubernetes 集群的示例。本文所列的文章称为*发行版* 。
查阅[安装](/zh/docs/setup/)中的指导,获取如何规划、建立以及配置 Kubernetes
集群的示例。本文所列的文章称为*发行版* 。

{{< note >}}
并非所有发行版都是被积极维护的。
@@ -60,27 +61,28 @@ Before choosing a guide, here are some considerations:
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.
-->
- 你是打算在你的计算机上尝试 Kubernetes,还是要构建一个高可用的多节点集群?请选择最适合你需求的发行版。
- 您正在使用类似 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) 这样的**被托管的 Kubernetes 集群**, 还是**管理您自己的集群**?
- 你的集群是在**本地**还是**云(IaaS)** 上?Kubernetes 不能直接支持混合集群。作为代替,你可以建立多个集群。
- **如果你在本地配置 Kubernetes**,需要考虑哪种[网络模型](/zh/docs/concepts/cluster-administration/networking/)最适合。
- 你是打算在你的计算机上尝试 Kubernetes,还是要构建一个高可用的多节点集群?
  请选择最适合你需求的发行版。
- 你正在使用类似 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)
  这样的**被托管的 Kubernetes 集群**, 还是**管理你自己的集群**?
- 你的集群是在**本地**还是**云(IaaS)** 上?Kubernetes 不能直接支持混合集群。
  作为代替,你可以建立多个集群。
- **如果你在本地配置 Kubernetes**,需要考虑哪种
  [网络模型](/zh/docs/concepts/cluster-administration/networking/)最适合。
- 你的 Kubernetes 在**裸金属硬件**上还是**虚拟机(VMs)** 上运行?
- 你**只想运行一个集群**,还是打算**参与开发 Kubernetes 项目代码**?如果是后者,请选择一个处于开发状态的发行版。某些发行版只提供二进制发布版,但提供更多的选择。
- 你**只想运行一个集群**,还是打算**参与开发 Kubernetes 项目代码**?
  如果是后者,请选择一个处于开发状态的发行版。
  某些发行版只提供二进制发布版,但提供更多的选择。
- 让你自己熟悉运行一个集群所需的[组件](/zh/docs/concepts/overview/components/)。

<!--
## Managing a cluster

* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.

* Learn how to [manage nodes](/docs/concepts/nodes/node/).

* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
-->
## 管理集群

* [管理集群](/zh/docs/tasks/administer-cluster/cluster-management/)叙述了和集群生命周期相关的几个主题:
  创建新集群、升级集群的控制节点和工作节点、执行节点维护(例如内核升级)以及升级运行中的集群的 Kubernetes API 版本。
## 管理集群 {#managing-a-cluster}

* 学习如何[管理节点](/zh/docs/concepts/architecture/nodes/)。
@@ -98,16 +100,24 @@ Before choosing a guide, here are some considerations:
* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters .
* [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs.
-->
## 保护集群
## 保护集群 {#securing-a-cluster}

* [证书](/zh/docs/concepts/cluster-administration/certificates/)节描述了使用不同的工具链生成证书的步骤。
* [Kubernetes 容器环境](/zh/docs/concepts/containers/container-environment/)描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。
* [控制到 Kubernetes API 的访问](/zh/docs/concepts/security/controlling-access/)描述了如何为用户和 service accounts 建立权限许可。
* [认证](/docs/reference/access-authn-authz/authentication/)节阐述了 Kubernetes 中的身份认证功能,包括许多认证选项。
* [鉴权](/zh/docs/reference/access-authn-authz/authorization/)从认证中分离出来,用于控制如何处理 HTTP 请求。
* [使用准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers) 阐述了在认证和授权之后拦截到 Kubernetes API 服务的请求的插件。
* [在 Kubernetes 集群中使用 Sysctls](/zh/docs/tasks/administer-cluster/sysctl-cluster/) 描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。
* [审计](/zh/docs/tasks/debug-application-cluster/audit/)描述了如何与 Kubernetes 的审计日志交互。
* [证书](/zh/docs/concepts/cluster-administration/certificates/)
  节描述了使用不同的工具链生成证书的步骤。
* [Kubernetes 容器环境](/zh/docs/concepts/containers/container-environment/)
  描述了 Kubernetes 节点上由 Kubelet 管理的容器的环境。
* [控制到 Kubernetes API 的访问](/zh/docs/concepts/security/controlling-access/)
  描述了如何为用户和 service accounts 建立权限许可。
* [身份认证](/zh/docs/reference/access-authn-authz/authentication/)
  节阐述了 Kubernetes 中的身份认证功能,包括许多认证选项。
* [鉴权](/zh/docs/reference/access-authn-authz/authorization/)
  与身份认证不同,用于控制如何处理 HTTP 请求。
* [使用准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers)
  阐述了在认证和授权之后拦截到 Kubernetes API 服务的请求的插件。
* [在 Kubernetes 集群中使用 Sysctls](/zh/docs/tasks/administer-cluster/sysctl-cluster/)
  描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。
* [审计](/zh/docs/tasks/debug-application-cluster/audit/)
  描述了如何与 Kubernetes 的审计日志交互。
<!--
### Securing the kubelet
@@ -116,7 +126,7 @@ Before choosing a guide, here are some considerations:
* [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
-->
### 保护 kubelet
### 保护 kubelet {#securing-the-kubelet}

* [主控节点通信](/zh/docs/concepts/architecture/control-plane-node-communication/)
* [TLS 引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
@@ -128,9 +138,10 @@ Before choosing a guide, here are some considerations:
* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.
-->
## 可选集群服务 {#optional-cluster-services}

## 可选集群服务

* [DNS 集成](/zh/docs/concepts/services-networking/dns-pod-service/)描述了如何将一个 DNS 名解析到一个 Kubernetes service。
* [记录和监控集群活动](/zh/docs/concepts/cluster-administration/logging/)阐述了 Kubernetes 的日志如何工作以及怎样实现。
* [DNS 集成](/zh/docs/concepts/services-networking/dns-pod-service/)
  描述了如何将一个 DNS 名解析到一个 Kubernetes service。
* [记录和监控集群活动](/zh/docs/concepts/cluster-administration/logging/)
  阐述了 Kubernetes 的日志如何工作以及怎样实现。
@@ -72,7 +72,7 @@ things by adding the following command-line flags to your
`kube-apiserver` invocation:
-->
APF 特性由特性门控控制,默认情况下不启用。有关如何启用和禁用特性门控的描述,
请参见[特性门控](/docs/reference/command-line-tools-reference/feature-gates/)。
请参见[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
APF 的特性门控叫做 `APIPriorityAndFairness` 。
此特性要求必须启用某个 {{< glossary_tooltip term_id="api-group" text="API Group" >}}。
你可以在启动 `kube-apiserver` 时,添加以下命令行标志来完成这些操作:
@@ -273,7 +273,7 @@ Using a node-level logging agent is the most common and encouraged approach for
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) for use with Google Cloud Platform, and [Elasticsearch](/docs/user-guide/logging/elasticsearch). You can find more information and instructions in the dedicated documents. Both use [fluentd](http://www.fluentd.org/) with custom configuration as an agent on the node.
-->
Kubernetes 并不指定日志代理,但是有两个可选的日志代理与 Kubernetes 发行版一起发布。
[Stackdriver 日志](/docs/tasks/debug-application-cluster/logging-stackdriver/)
[Stackdriver 日志](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/)
适用于 Google Cloud Platform,和
[Elasticsearch](/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/)。
你可以在专门的文档中找到更多的信息和说明。
@@ -446,7 +446,7 @@ which uses fluentd as a logging agent. Here are two configuration files that
you can use to implement this approach. The first file contains
a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
-->
例如,你可以使用 [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
例如,你可以使用 [Stackdriver](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/),
它使用 fluentd 作为日志记录代理。
以下是两个可用于实现此方法的配置文件。
第一个文件包含配置 fluentd 的
@@ -115,7 +115,8 @@ service "my-nginx-svc" deleted
<!--
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
-->
在仅有两种资源的情况下,可以使用"资源类型/资源名"的语法在命令行中同时指定这两个资源:
在仅有两种资源的情况下,可以使用"资源类型/资源名"的语法在命令行中
同时指定这两个资源:

```shell
kubectl delete deployments/my-nginx services/my-nginx-svc
@@ -184,7 +185,8 @@ project/k8s/development
<!--
By default, performing a bulk operation on `project/k8s/development` will stop at the first level of the directory, not processing any subdirectories. If we had tried to create the resources in this directory using the following command, we would have encountered an error:
-->
默认情况下,对 `project/k8s/development` 执行的批量操作将停止在目录的第一级,而不是处理所有子目录。
默认情况下,对 `project/k8s/development` 执行的批量操作将停止在目录的第一级,
而不是处理所有子目录。
如果我们试图使用以下命令在此目录中创建资源,则会遇到一个错误:

```shell
@@ -252,7 +254,8 @@ The examples we've used so far apply at most a single label to any resource. The
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
-->
例如,不同的应用可能会为 `app` 标签设置不同的值。
但是,类似 [guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) 这样的多层应用,还需要区分每一层。前端可以带以下标签:
但是,类似 [guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
这样的多层应用,还需要区分每一层。前端可以带以下标签:

```yaml
labels:
@@ -368,7 +371,8 @@ and then you can create a new release of the guestbook frontend that carries the
<!--
The frontend service would span both sets of replicas by selecting the common subset of their labels (i.e. omitting the `track` label), so that the traffic will be redirected to both applications:
-->
前端服务通过选择标签的公共子集(即忽略 `track` 标签)来覆盖两组副本,以便流量可以转发到两个应用:
前端服务通过选择标签的公共子集(即忽略 `track` 标签)来覆盖两组副本,
以便流量可以转发到两个应用:

```yaml
selector:
@@ -380,12 +384,16 @@ The frontend service would span both sets of replicas by selecting the common su
You can tweak the number of replicas of the stable and canary releases to determine the ratio of each release that will receive live production traffic (in this case, 3:1).
Once you're confident, you can update the stable track to the new application release and remove the canary one.
-->
你可以调整 `stable` 和 `canary` 版本的副本数量,以确定每个版本将接收实时生产流量的比例(在本例中为 3:1)。一旦有信心,你就可以将新版本应用的 `track` 标签的值从 `canary` 替换为 `stable`,并且将老版本应用删除。
你可以调整 `stable` 和 `canary` 版本的副本数量,以确定每个版本将接收
实时生产流量的比例(在本例中为 3:1)。
一旦有信心,你就可以将新版本应用的 `track` 标签的值从
`canary` 替换为 `stable`,并且将老版本应用删除。
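
上文的金丝雀模式可以用如下标签设置来示意(仅作草图,标签值沿用 guestbook 示例的约定):

```yaml
# 稳定版本 Deployment 的 Pod 模板标签(示意)
labels:
  app: guestbook
  tier: frontend
  track: stable
---
# 金丝雀版本 Deployment 的 Pod 模板标签(示意)
labels:
  app: guestbook
  tier: frontend
  track: canary
```

对应的 Service 选择算符只包含 `app` 与 `tier` 而不含 `track`,因此两组副本都会接收流量。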

<!--
For a more concrete example, check the [tutorial of deploying Ghost](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary).
-->
想要了解更具体的示例,请查看 [Ghost 部署教程](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary)。
想要了解更具体的示例,请查看
[Ghost 部署教程](https://github.com/kelseyhightower/talks/tree/master/kubecon-eu-2016/demo#deploy-a-canary)。

<!--
## Updating labels
@@ -447,7 +455,8 @@ Sometimes you would want to attach annotations to resources. Annotations are arb
-->
## 更新注解 {#updating-annotations}

有时,你可能希望将注解附加到资源中。注解是 API 客户端(如工具、库等)用于检索的任意非标识元数据。这可以通过 `kubectl annotate` 来完成。例如:
有时,你可能希望将注解附加到资源中。注解是 API 客户端(如工具、库等)
用于检索的任意非标识元数据。这可以通过 `kubectl annotate` 来完成。例如:

```shell
kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
@@ -545,12 +554,14 @@ Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-co
建议在源代码管理中维护一组配置文件
(参见[配置即代码](https://martinfowler.com/bliki/InfrastructureAsCode.html)),
这样,它们就可以和应用代码一样进行维护和版本管理。
然后,你可以用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) 将配置变更应用到集群中。
然后,你可以用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply)
将配置变更应用到集群中。

<!--
This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.
-->
这个命令将会把推送的版本与以前的版本进行比较,并应用你所做的更改,但是不会自动覆盖任何你没有指定更改的属性。
这个命令将会把推送的版本与以前的版本进行比较,并应用你所做的更改,
但是不会自动覆盖任何你没有指定更改的属性。

```shell
kubectl apply -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml
@@ -560,17 +571,24 @@ deployment.apps/my-nginx configured
<!--
Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a three-way diff between the previous configuration, the provided input and the current configuration of the resource, in order to determine how to modify the resource.
-->
注意,`kubectl apply` 将为资源增加一个额外的注解,以确定自上次调用以来对配置的更改。当调用它时,`kubectl apply` 会在以前的配置、提供的输入和资源的当前配置之间找出三方差异,以确定如何修改资源。
注意,`kubectl apply` 将为资源增加一个额外的注解,以确定自上次调用以来对配置的更改。
执行时,`kubectl apply` 会在以前的配置、提供的输入和资源的当前配置之间
找出三方差异,以确定如何修改资源。
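
作为示意,`kubectl apply` 记录在资源上的注解大致形如下面的片段(注解键是确定的,注解中的 JSON 内容仅作示意):

```yaml
# 仅作示意:kubectl apply 在资源元数据中记录上一次应用的配置
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"my-nginx"},"spec":{"replicas":3}}
```

三方比较所用的"以前的配置"就来自这个注解。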

<!--
Currently, resources are created without this annotation, so the first invocation of `kubectl apply` will fall back to a two-way diff between the provided input and the current configuration of the resource. During this first invocation, it cannot detect the deletion of properties set when the resource was created. For this reason, it will not remove them.
-->
目前,新创建的资源是没有这个注解的,所以,第一次调用 `kubectl apply` 将使用提供的输入和资源的当前配置双方之间差异进行比较。在第一次调用期间,它无法检测资源创建时属性集的删除情况。因此,不会删除它们。
目前,新创建的资源是没有这个注解的,所以,第一次调用 `kubectl apply` 时
将使用提供的输入和资源的当前配置双方之间差异进行比较。
在第一次调用期间,它无法检测资源创建时属性集的删除情况。
因此,kubectl 不会删除它们。

<!--
All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff.
-->
所有后续调用 `kubectl apply` 以及其它修改配置的命令,如 `kubectl replace` 和 `kubectl edit`,都将更新注解,并允许随后调用的 `kubectl apply` 使用三方差异进行检查和执行删除。
所有后续的 `kubectl apply` 操作以及其他修改配置的命令,如 `kubectl replace`
和 `kubectl edit`,都将更新注解,并允许随后调用的 `kubectl apply`
使用三方差异进行检查和执行删除。

<!--
To use apply, always create resource initially with either `kubectl apply` or `kubectl create --save-config`.
@@ -611,21 +629,24 @@ This allows you to do more significant changes more easily. Note that you can sp

For more information, please see [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) document.
-->
这使你可以更加容易地进行更重大的更改。请注意,可以使用 `EDITOR` 或 `KUBE_EDITOR` 环境变量来指定编辑器。
这使你可以更加容易地进行更重大的更改。
请注意,可以使用 `EDITOR` 或 `KUBE_EDITOR` 环境变量来指定编辑器。

想要了解更多信息,请参考 [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) 文档。
想要了解更多信息,请参考
[kubectl edit](/docs/reference/generated/kubectl/kubectl-commands/#edit) 文档。

### kubectl patch

<!--
You can use `kubectl patch` to update API objects in place. This command supports JSON patch,
JSON merge patch, and strategic merge patch. See
[Update API Objects in Place Using kubectl patch](/docs/tasks/run-application/update-api-object-kubectl-patch/)
[Update API Objects in Place Using kubectl patch](/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)
and
[kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch).
-->
你可以使用 `kubectl patch` 来更新 API 对象。此命令支持 JSON patch,JSON merge patch,以及 strategic merge patch。 请参考
[使用 kubectl patch 更新 API 对象](/zh/docs/tasks/run-application/update-api-object-kubectl-patch/)
你可以使用 `kubectl patch` 来更新 API 对象。此命令支持 JSON patch、
JSON merge patch、以及 strategic merge patch。 请参考
[使用 kubectl patch 更新 API 对象](/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)
和
[kubectl patch](/docs/reference/generated/kubectl/kubectl-commands/#patch).
@@ -636,7 +657,9 @@ In some cases, you may need to update resource fields that cannot be updated onc
-->
## 破坏性的更新 {#disruptive-updates}

在某些情况下,你可能需要更新某些初始化后无法更新的资源字段,或者你可能只想立即进行递归更改,例如修复 Deployment 创建的不正常的 Pod。若要更改这些字段,请使用 `replace --force`,它将删除并重新创建资源。在这种情况下,你可以简单地修改原始配置文件:
在某些情况下,你可能需要更新某些初始化后无法更新的资源字段,或者你可能只想立即进行递归更改,
例如修复 Deployment 创建的不正常的 Pod。若要更改这些字段,请使用 `replace --force`,
它将删除并重新创建资源。在这种情况下,你可以简单地修改原始配置文件:

```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
@@ -655,7 +678,9 @@ deployment.apps/my-nginx replaced
<!--
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
-->
在某些时候,你最终需要更新已部署的应用,通常都是通过指定新的镜像或镜像标签,如上面的金丝雀发布的场景中所示。`kubectl` 支持几种更新操作,每种更新操作都适用于不同的场景。
在某些时候,你最终需要更新已部署的应用,通常都是通过指定新的镜像或镜像标签,
如上面的金丝雀发布的场景中所示。`kubectl` 支持几种更新操作,
每种更新操作都适用于不同的场景。

<!--
We'll guide you through how to create and update applications with Deployments.
@@ -185,7 +185,7 @@ kubelet 在驱动程序上保持打开状态。这意味着为了执行基础结
现在,收集加速器指标的责任属于供应商,而不是 kubelet。供应商必须提供一个收集指标的容器,
并将其公开给指标服务(例如 Prometheus)。

[`DisableAcceleratorUsageMetrics` 特性门控](/zh/docs/references/command-line-tools-reference/feature-gate.md)
[`DisableAcceleratorUsageMetrics` 特性门控](/zh/docs/references/command-line-tools-reference/feature-gates/)
禁止由 kubelet 收集的指标。
关于[何时会在默认情况下启用此功能也有一定规划](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria)。

@@ -235,4 +235,4 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
-->
* 阅读有关指标的 [Prometheus 文本格式](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
* 查看 [Kubernetes 稳定指标](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)的列表
* 阅读有关 [Kubernetes 弃用策略](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
* 阅读有关 [Kubernetes 弃用策略](/zh/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
@@ -307,8 +307,8 @@ Kubernetes has several built-in authentication methods that it supports. It can
Kubernetes 提供若干内置的身份认证方法。
它也可以运行在某种身份认证代理的后面,并且可以将来自鉴权头部的令牌发送到
某个远程服务(Webhook)来执行验证操作。
所有这些方法都在[身份认证文档](/docs/reference/access-authn-authz/authentication/)
中详细论述。
所有这些方法都在[身份认证文档](/zh/docs/reference/access-authn-authz/authentication/)
中有详细论述。

<!--
### Authentication
@@ -405,8 +405,8 @@ Different networking fabrics can be supported via node-level [Network Plugins](/

The scheduler is a special type of controller that watches pods, and assigns
pods to nodes. The default scheduler can be replaced entirely, while
continuing to use other Kubernetes components, or [multiple
schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
continuing to use other Kubernetes components, or
[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
can run at the same time.

This is a significant undertaking, and almost all Kubernetes users find they
@@ -422,14 +422,14 @@ the nodes chosen for a pod.
调度器是一种特殊的控制器,负责监视 Pod 变化并将 Pod 分派给节点。
默认的调度器可以被整体替换掉,同时继续使用其他 Kubernetes 组件。
或者也可以在同一时刻使用
[多个调度器](/zh/docs/tasks/administer-cluster/configure-multiple-schedulers/)。
[多个调度器](/zh/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)。

这是一项非同小可的任务,几乎绝大多数 Kubernetes
用户都会发现其实他们不需要修改调度器。

调度器也支持一种 [webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md),
允许使用某种 Webhook 后端(调度器扩展)来为 Pod
可选的节点执行过滤和优先排序操作。
调度器也支持一种
[Webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md),
允许使用某种 Webhook 后端(调度器扩展)来为 Pod 可选的节点执行过滤和优先排序操作。

## {{% heading "whatsnext" %}}
@@ -3,8 +3,8 @@ title: 通过聚合层扩展 Kubernetes API
content_type: concept
weight: 20
---

<!--
---
title: Extending the Kubernetes API with the aggregation layer
reviewers:
- lavalamp
@@ -12,7 +12,6 @@ reviewers:
- chenopis
content_type: concept
weight: 20
---
-->

<!-- overview -->
@@ -32,7 +31,7 @@ The aggregation layer is different from [Custom Resources](/docs/concepts/extend
这类已经成熟的解决方案,也可以是你自己开发的 API。

聚合层不同于
[自定义资源(Custom Resources)](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。
[定制资源(Custom Resources)](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)。
后者的目的是让 {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}}
能够认识新的对象类别(Kind)。
@@ -91,6 +90,6 @@ to disable the timeout restriction. This deprecated feature gate will be removed
了解如何在自己的环境中启用聚合器。
* 接下来,了解[安装扩展 API 服务器](/zh/docs/tasks/extend-kubernetes/setup-extension-api-server/),
  开始使用聚合层。
* 也可以学习怎样[使用自定义资源定义扩展 Kubernetes API](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。
* 也可以学习怎样[使用自定义资源定义扩展 Kubernetes API](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)。
* 阅读 [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io) 的规范
@@ -384,8 +384,8 @@ Different networking fabrics can be supported via node-level [Network Plugins](/

The scheduler is a special type of controller that watches pods, and assigns
pods to nodes. The default scheduler can be replaced entirely, while
continuing to use other Kubernetes components, or [multiple
schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
continuing to use other Kubernetes components, or
[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
can run at the same time.

This is a significant undertaking, and almost all Kubernetes users find they
@@ -400,7 +400,7 @@ the nodes chosen for a pod.

调度器是一种特殊类型的控制器,用于监视 pod 并将其分配到节点。
默认的调度器可以完全被替换,而继续使用其他 Kubernetes 组件,或者可以同时运行
[多个调度器](/zh/docs/tasks/administer-cluster/configure-multiple-schedulers/)。
[多个调度器](/zh/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)。

这是一个不太轻松的任务,几乎所有的 Kubernetes 用户都会意识到他们并不需要修改调度器。
@@ -419,7 +419,7 @@ the nodes chosen for a pod.
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)
-->
* 详细了解[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 详细了解[自定义资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 了解[动态准入控制](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
* 详细了解基础设施扩展
  * [网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
@@ -18,13 +18,12 @@ resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to manage applications and their components. Operators follow
Kubernetes principles, notably the [control loop](/docs/concepts/architecture/controller/).
-->

Operator 是 Kubernetes 的扩展软件,它利用
[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)管理应用及其组件。
Operator 遵循 Kubernetes 的理念,特别是在[控制回路](/zh/docs/concepts/architecture/controller/)
[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
管理应用及其组件。
Operator 遵循 Kubernetes 的理念,特别是在[控制器](/zh/docs/concepts/architecture/controller/)
方面。

<!-- body -->

<!--
@@ -69,7 +68,8 @@ Kubernetes 为自动化而生。无需任何修改,你即可以从 Kubernetes
Kubernetes {{< glossary_tooltip text="控制器" term_id="controller" >}}
使你无需修改 Kubernetes 自身的代码,即可以扩展集群的行为。
Operator 是 Kubernetes API 的客户端,充当
[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)的控制器。
[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
的控制器。

<!--
## An example Operator {#example}
@@ -225,7 +225,7 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators
-->

* 详细了解[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 详细了解[定制资源](/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合你的 Operator
* 借助已有的工具来编写你自己的 Operator,例如:
  * [KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator)
@@ -196,7 +196,7 @@ To make it easier to evolve and to extend its API, Kubernetes implements
-->
为了便于演化和扩展其 API,Kubernetes 实现了
可被[启用或禁用](/zh/docs/reference/using-api/#enabling-or-disabling)的
[API 组](/docs/reference/using-api/#api-groups)。
[API 组](/zh/docs/reference/using-api/#api-groups)。

<!--
API resources are distinguished by their API group, resource type, namespace
@@ -288,7 +288,7 @@ on configuring Controller Manager authorization, see

- [控制器管理器组件](/zh/docs/reference/command-line-tools-reference/kube-controller-manager/)
  必须运行在
  [安全的 API 端口](/zh/docs/reference/access-authn-authz/controlling-access/),
  [安全的 API 端口](/zh/docs/concepts/security/controlling-access/),
  并且一定不能具有超级用户权限。
  否则其请求会绕过身份认证和鉴权模块控制,从而导致所有 PodSecurityPolicy 对象
  都被启用,用户亦能创建特权容器。
@@ -696,7 +696,7 @@ several security mechanisms.
See [Pod Security Standards](/docs/concepts/security/pod-security-standards/#policy-instantiation) for more examples.
-->
更多的示例可参考
[Pod 安全标准](/docs/concepts/security/pod-security-standards/#policy-instantiation)。
[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/#policy-instantiation)。

<!--
## Policy Reference
@@ -1219,7 +1219,7 @@ By default, all safe sysctls are allowed.

- Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.
-->
- 参阅[Pod 安全标准](/docs/concepts/security/pod-security-standards/)
- 参阅[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)
  了解策略建议。
- 阅读 [Pod 安全策略参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy)了解 API 细节。
@@ -178,7 +178,8 @@ with prefix `requests.` is allowed for now.
Take the GPU resource as an example, if the resource name is `nvidia.com/gpu`, and you want to
limit the total number of GPUs requested in a namespace to 4, you can define a quota as follows:
-->
以 GPU 拓展资源为例,如果资源名称为 `nvidia.com/gpu`,并且要将命名空间中请求的 GPU 资源总数限制为 4,则可以如下定义配额:
以 GPU 拓展资源为例,如果资源名称为 `nvidia.com/gpu`,并且要将命名空间中请求的 GPU
资源总数限制为 4,则可以如下定义配额:

* `requests.nvidia.com/gpu: 4`
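
上面的配额项可以放入一个 ResourceQuota 对象中,例如(对象名与命名空间仅作示意):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota        # 名称仅作示意
  namespace: gpu-limited # 命名空间仅作示意
spec:
  hard:
    requests.nvidia.com/gpu: 4
```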

@@ -196,7 +197,8 @@ In addition, you can limit consumption of storage resources based on associated
-->
## 存储资源配额

用户可以对给定命名空间下的[存储资源](/docs/concepts/storage/persistent-volumes/)总量进行限制。
用户可以对给定命名空间下的[存储资源](/zh/docs/concepts/storage/persistent-volumes/)
总量进行限制。

此外,还可以根据相关的存储类(Storage Class)来限制存储资源的消耗。
@@ -219,7 +221,8 @@ In addition, you can limit consumption of storage resources based on associated
For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
define a quota as follows:
-->
例如,如果一个操作人员针对 `gold` 存储类型与 `bronze` 存储类型设置配额,操作人员可以定义如下配额:
例如,如果一个操作人员针对 `gold` 存储类型与 `bronze` 存储类型设置配额,
操作人员可以定义如下配额:

* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
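
这两项同样可以组合进一个 ResourceQuota 清单,例如(对象名仅作示意):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota # 名称仅作示意
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
```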

@@ -279,7 +282,8 @@ The same syntax can be used for custom resources.
For example, to create a quota on a `widgets` custom resource in the `example.com` API group, use `count/widgets.example.com`.
-->
相同语法也可用于自定义资源。
例如,要对 `example.com` API 组中的自定义资源 `widgets` 设置配额,请使用 `count/widgets.example.com`。
例如,要对 `example.com` API 组中的自定义资源 `widgets` 设置配额,请使用
`count/widgets.example.com`。

<!--
When using `count/*` resource quota, an object is charged against the quota if it exists in server storage.
@@ -292,7 +296,8 @@ a poorly configured CronJob. CronJobs that create too many Jobs in a namespace c
这些类型的配额有助于防止存储资源耗尽。例如,用户可能想根据服务器的存储能力来对服务器中
Secret 的数量进行配额限制。
集群中存在过多的 Secret 实际上会导致服务器和控制器无法启动。
用户可以选择对 Job 进行配额管理,以防止配置不当的 CronJob 在某命名空间中创建太多 Job 而导致集群拒绝服务。
用户可以选择对 Job 进行配额管理,以防止配置不当的 CronJob 在某命名空间中创建太多
Job 而导致集群拒绝服务。

<!--
It is possible to do generic object count quota on a limited set of resources.
@@ -337,7 +342,8 @@ quota on a namespace to avoid the case where a user creates many small pods and
exhausts the cluster's supply of Pod IPs.
-->
例如,`pods` 配额统计某个命名空间中所创建的、非终止状态的 `Pod` 个数并确保其不超过某上限值。
用户可能希望在某命名空间中设置 `pods` 配额,以避免有用户创建很多小的 Pod,从而耗尽集群所能提供的 Pod IP 地址。
用户可能希望在某命名空间中设置 `pods` 配额,以避免有用户创建很多小的 Pod,
从而耗尽集群所能提供的 Pod IP 地址。

<!--
## Quota Scopes
@@ -460,7 +466,7 @@ Pods can be created at a specific [priority](/docs/concepts/configuration/pod-pr
You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
field in the quota spec.
-->
Pod 可以创建为特定的[优先级](/docs/concepts/configuration/pod-priority-preemption/#pod-priority)。
Pod 可以创建为特定的[优先级](/zh/docs/concepts/configuration/pod-priority-preemption/#pod-priority)。
通过使用配额规约中的 `scopeSelector` 字段,用户可以根据 Pod 的优先级控制其系统资源消耗。
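
`scopeSelector` 的用法可以概括为如下清单(对象名与 PriorityClass 名称 `high` 仅作示意):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high # 名称仅作示意
spec:
  hard:
    pods: "10"
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
```

该配额只统计优先级类为 `high` 的 Pod。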

<!--
@@ -891,7 +897,8 @@ will be able to consume these priority classes by default.
To enforce this, kube-apiserver flag `--admission-control-config-file` should be
used to pass path to the following configuration file:
-->
要实现此目的,应使用 kube-apiserver 标志 `--admission-control-config-file` 传递如下配置文件的路径:
要实现此目的,应设置 kube-apiserver 的标志 `--admission-control-config-file`
指向如下配置文件:

```yaml
apiVersion: apiserver.config.k8s.io/v1
@@ -938,5 +945,5 @@ For example:
- 查看[如何使用资源配额的详细示例](/zh/docs/tasks/administer-cluster/quota-api-object/)。
- 阅读[优先级类配额支持的设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)。
  了解更多信息。
- 参阅[LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
- 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
@@ -93,7 +93,7 @@ different Kubernetes components.
{{< /table >}}
-->

### Alpha 和 Beta 的特性门控
### Alpha 和 Beta 的特性门控 {#feature-gates-for-alpha-or-beta-features}

{{< table caption="处于 Alpha 或 Beta 状态的特性门控" >}}

@@ -233,7 +233,7 @@ different Kubernetes components.
{{< /table >}}
-->

### 已毕业和不推荐使用的特性门控
### 已毕业和不推荐使用的特性门控 {#feature-gates-for-graduated-or-deprecated-features}

{{< table caption="已毕业或不推荐使用的特性门控" >}}