Merge pull request #22810 from tengqm/zh-links-concepts-1
[zh] Fix links in concepts section (1)
pull/22833/head
commit
671c7f3072

@@ -19,136 +19,3 @@ The Concepts section helps you learn about the parts of the Kubernetes system an
-->

概念部分可以帮助你了解 Kubernetes 的各个组成部分以及 Kubernetes 用来表示集群的一些抽象概念，并帮助你更加深入地理解 Kubernetes 是如何工作的。

<!-- body -->

<!--
## Overview
-->

## 概述

<!--
To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.
-->

要使用 Kubernetes，你需要用 *Kubernetes API 对象* 来描述集群的 *预期状态（desired state）*：包括你需要运行的应用或者负载，它们使用的镜像、副本数，以及所需网络和磁盘资源等等。你可以使用命令行工具 `kubectl` 来调用 Kubernetes API 创建对象，通过所创建的这些对象来配置预期状态。你也可以直接调用 Kubernetes API 和集群进行交互，设置或者修改预期状态。
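下面给出一个简要示意（假设当前目录下有一个名为 `deployment.yaml` 的清单文件，文件名仅为举例），演示如何用 `kubectl` 声明并查看预期状态：

```shell
# 通过清单文件创建（或更新）对象，声明预期状态
kubectl apply -f deployment.yaml

# 查看集群中对象的当前状态
kubectl get deployments
```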

<!--
Once you've set your desired state, the *Kubernetes Control Plane* makes the cluster's current state match the desired state via the Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:
-->

一旦你设置了所需的目标状态，*Kubernetes 控制面（control plane）* 会通过 Pod 生命周期事件生成器（[PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)）促成集群的当前状态符合其预期状态。为此，Kubernetes 会自动执行各类任务，比如运行或者重启容器、调整给定应用的副本数等等。Kubernetes 控制面由一组运行在集群上的进程组成：

<!--
* The **Kubernetes Master** is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and [kube-scheduler](/docs/admin/kube-scheduler/).
* Each individual non-master node in your cluster runs two processes:
  * **[kubelet](/docs/admin/kubelet/)**, which communicates with the Kubernetes Master.
  * **[kube-proxy](/docs/admin/kube-proxy/)**, a network proxy which reflects Kubernetes networking services on each node.
-->

* **Kubernetes 主控组件（Master）** 包含三个进程，都运行在集群中的某个节点上，这个节点通常被称为 master 节点。这些进程包括：[kube-apiserver](/docs/admin/kube-apiserver/)、[kube-controller-manager](/docs/admin/kube-controller-manager/) 和 [kube-scheduler](/docs/admin/kube-scheduler/)。
* 集群中的每个非 master 节点都运行两个进程：
  * **[kubelet](/docs/admin/kubelet/)**，和 master 节点进行通信。
  * **[kube-proxy](/docs/admin/kube-proxy/)**，一种网络代理，将 Kubernetes 的网络服务代理到每个节点上。

<!--
## Kubernetes Objects
-->

## Kubernetes 对象

<!--
Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API. See [Understanding Kubernetes Objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) for more details.
-->

Kubernetes 包含若干用来表示系统状态的抽象层，包括：已部署的容器化应用和负载、与它们相关的网络和磁盘资源以及有关集群正在运行的其他操作的信息。这些抽象使用 Kubernetes API 对象来表示。有关更多详细信息，请参阅[了解 Kubernetes 对象](/docs/concepts/overview/working-with-objects/kubernetes-objects/)。

<!--
The basic Kubernetes objects include:
-->

基本的 Kubernetes 对象包括：

* [Pod](/docs/concepts/workloads/pods/pod-overview/)
* [Service](/docs/concepts/services-networking/service/)
* [Volume](/docs/concepts/storage/volumes/)
* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)

<!--
Kubernetes also contains higher-level abstractions that rely on [Controllers](/docs/concepts/architecture/controller/) to build upon the basic objects, and provide additional functionality and convenience features. These include:
-->

Kubernetes 也包含大量的被称作 [Controller](/docs/concepts/architecture/controller/) 的高级抽象。控制器基于基本对象构建并提供额外的功能和方便使用的特性。具体包括：

* [Deployment](/docs/concepts/workloads/controllers/deployment/)
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)

<!--
## Kubernetes Control Plane
-->

## Kubernetes 控制面

<!--
The various parts of the Kubernetes Control Plane, such as the Kubernetes Master and kubelet processes, govern how Kubernetes communicates with your cluster. The Control Plane maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage those objects' state. At any given time, the Control Plane's control loops will respond to changes in the cluster and work to make the actual state of all the objects in the system match the desired state that you provided.
-->

Kubernetes 控制平面的各个部分（如 Kubernetes 主控组件和 kubelet 进程）管理着 Kubernetes 如何与你的集群进行通信。控制平面维护着系统中所有的 Kubernetes 对象的状态记录，并且通过连续的控制循环来管理这些对象的状态。在任意的给定时间点，控制面的控制环都能响应集群中的变化，并且让系统中所有对象的实际状态与你提供的预期状态相匹配。

<!--
For example, when you use the Kubernetes API to create a Deployment, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation, and carries out your instructions by starting the required applications and scheduling them to cluster nodes--thus making the cluster's actual state match the desired state.
-->

比如，当你通过 Kubernetes API 创建一个 Deployment 对象，你就为系统增加了一个新的目标状态。Kubernetes 控制平面记录着对象的创建，并启动必要的应用然后将它们调度至集群某个节点上来执行你的指令，以此来保持集群的实际状态和目标状态的匹配。

<!--
### Kubernetes Master
-->

### Kubernetes Master 节点

<!--
The Kubernetes master is responsible for maintaining the desired state for your cluster. When you interact with Kubernetes, such as by using the `kubectl` command-line interface, you're communicating with your cluster's Kubernetes master.
-->

Kubernetes master 节点负责维护集群的目标状态。当你使用 `kubectl` 等命令行工具与 Kubernetes 交互时，实际上是在与集群的 Kubernetes master 节点进行通信。

<!--
> The "master" refers to a collection of processes managing the cluster state. Typically all these processes run on a single node in the cluster, and this node is also referred to as the master. The master can also be replicated for availability and redundancy.
-->

> "master" 是指管理集群状态的一组进程的集合。通常这些进程都运行在集群中一个单独的节点上，并且这个节点被称为 master 节点。master 节点也可以扩展副本数，来获取更好的可用性及冗余。

<!--
### Kubernetes Nodes
-->

### Kubernetes Node 节点

<!--
The nodes in a cluster are the machines (VMs, physical servers, etc) that run your applications and cloud workflows. The Kubernetes master controls each node; you'll rarely interact with nodes directly.
-->

集群中的 node 节点（虚拟机、物理机等等）都是用来运行你的应用和云工作流的机器。Kubernetes master 节点控制所有 node 节点；你很少需要和 node 节点进行直接通信。

## {{% heading "whatsnext" %}}

<!--
If you would like to write a concept page, see
[Using Page Templates](/docs/home/contribute/page-templates/)
for information about the concept page type and the concept template.
-->

如果你想编写一个概念页面，请参阅[使用页面模板](/docs/home/contribute/page-templates/)获取更多有关概念页面类型和概念模板的信息。

@@ -19,7 +19,6 @@ ConfigMap 并不提供保密或者加密功能。如果你想存储的数据是
{{< /caution >}}

<!-- body -->
<!--
## Motivation

@@ -59,9 +58,11 @@ The name of a ConfigMap must be a valid
-->
## ConfigMap 对象

-ConfigMap 是一个 API [对象](/docs/concepts/overview/working-with-objects/kubernetes-objects/)，让你可以存储其他对象所需要使用的配置。和其他 Kubernetes 对象都有一个 `spec` 不同的是，ConfigMap 使用 `data` 块来存储元素（键名）和它们的值。
+ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/)，
+让你可以存储其他对象所需要使用的配置。
+和其他 Kubernetes 对象都有一个 `spec` 不同的是，ConfigMap 使用 `data` 块来存储元素（键名）和它们的值。

-ConfigMap 的名字必须是一个合法的 [DNS 子域名](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
+ConfigMap 的名字必须是一个合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
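作为示意，下面是一个最小的 ConfigMap 清单（其中的名字与键值仅为举例）：

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # 名字必须是合法的 DNS 子域名
  name: game-demo
data:
  # ConfigMap 使用 data 块（而不是 spec）保存键值对
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
```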

<!--
## ConfigMaps and Pods

@@ -216,15 +217,14 @@ ConfigMap 最常见的用法是为同一命名空间里某 Pod 中运行的容

## {{% heading "whatsnext" %}}

<!--
* Read about [Secrets](/docs/concepts/configuration/secret/).
* Read [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/).
* Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for
  separating code from configuration.
-->
-* 阅读 [Secret](/docs/concepts/configuration/secret/)。
-* 阅读 [配置 Pod 来使用 ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)。
+* 阅读 [Secret](/zh/docs/concepts/configuration/secret/)。
+* 阅读 [配置 Pod 来使用 ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
* 阅读 [Twelve-Factor 应用](https://12factor.net/) 来了解将代码和配置分开的动机。

@@ -9,7 +9,6 @@ feature:
---

<!--
---
title: Managing Resources for Containers
content_type: concept
weight: 40

@@ -17,7 +16,6 @@ feature:
  title: Automatic binpacking
  description: >
    Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
---
-->

<!-- overview -->

@@ -35,7 +33,7 @@ at least the _request_ amount of that system resource specifically for that cont
to use.
-->

-当你定义 {{< glossary_tooltip term_id="pod" >}} 时可以选择性地为每个
+当你定义 {{< glossary_tooltip text="Pod" term_id="pod" >}} 时可以选择性地为每个
{{< glossary_tooltip text="容器" term_id="container" >}}设定所需要的资源数量。
最常见的可设定资源是 CPU 和内存（RAM）大小；此外还有其他类型的资源。

@@ -138,8 +136,8 @@ through the Kubernetes API server.

CPU 和内存统称为*计算资源*，或简称为*资源*。
计算资源的数量是可测量的，可以被请求、被分配、被消耗。
-它们与 [API 资源](/docs/concepts/overview/kubernetes-api/) 不同。
-API 资源（如 Pod 和 [Service](/docs/concepts/services-networking/service/)）是可通过
+它们与 [API 资源](/zh/docs/concepts/overview/kubernetes-api/) 不同。
+API 资源（如 Pod 和 [Service](/zh/docs/concepts/services-networking/service/)）是可通过
Kubernetes API 服务器读取和修改的对象。
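作为示意，下面的 Pod 清单（名字、镜像与数值仅为举例）同时给出了计算资源的请求（requests）与约束（limits）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:        # 调度时保证的最小资源量
        cpu: "250m"
        memory: "64Mi"
      limits:          # 容器可使用的资源上限
        cpu: "500m"
        memory: "128Mi"
```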

<!--

@@ -388,9 +386,9 @@ directly or from your monitoring tools.
Pod 的资源使用情况是作为 Pod 状态的一部分来报告的。

如果为集群配置了可选的
-[监控工具](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)，
+[监控工具](/zh/docs/tasks/debug-application-cluster/resource-usage-monitoring/)，
则可以直接从
-[指标 API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
+[指标 API](/zh/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
或者监控工具获得 Pod 的资源使用情况。

<!--

@@ -417,7 +415,7 @@ mount [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Pods 通常可以使用临时性本地存储来实现缓冲区、保存日志等功能。
kubelet 可以为使用本地临时存储的 Pods 提供这种存储空间，允许后者使用
-[`emptyDir`](/docs/concepts/storage/volumes/#emptydir) 类型的
+[`emptyDir`](/zh/docs/concepts/storage/volumes/#emptydir) 类型的
{{< glossary_tooltip term_id="volume" text="卷" >}}将其挂载到容器中。

<!--

@@ -436,7 +434,7 @@ of ephemeral local storage a Pod can consume.
-->

kubelet 也使用此类存储来保存
-[节点层面的容器日志](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)，
+[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)，
容器镜像文件、以及运行中容器的可写入层。

{{< caution >}}

@@ -474,7 +472,7 @@ Kubernetes 有两种方式支持节点上配置本地临时性存储：
（kubelet）来保存数据的。

kubelet 也会生成
-[节点层面的容器日志](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)，
+[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)，
并按临时性本地存储的方式对待之。

<!--

@@ -525,7 +523,7 @@ as you like.
无关的其他系统日志）；这个文件系统还可以是根文件系统。

kubelet 也将
-[节点层面的容器日志](/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
+[节点层面的容器日志](/zh/docs/concepts/cluster-administration/logging/#logging-at-the-node-level)
写入到第一个文件系统中，并按临时性本地存储的方式对待之。

同时你使用另一个由不同逻辑存储设备支持的文件系统。在这种配置下，你会告诉

@@ -558,7 +556,7 @@ than as local ephemeral storage.

kubelet 能够度量其本地存储的用量。实现度量机制的前提是：

-- `LocalStorageCapacityIsolation` [特性门控](/docs/reference/command-line-tools-reference/feature-gates/)被启用（默认状态），并且
+- `LocalStorageCapacityIsolation` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)被启用（默认状态），并且
- 你已经对节点进行了配置，使之使用所支持的本地临时性储存配置方式之一

如果你的节点配置不同于以上预期，kubelet 就无法对临时性本地存储的资源约束实施限制。

@@ -650,7 +648,7 @@ The scheduler ensures that the sum of the resource requests of the scheduled Con
当你创建一个 Pod 时，Kubernetes 调度器会为 Pod 选择一个节点来运行之。
每个节点都有一个本地临时性存储的上限，是其可提供给 Pods 使用的总量。
欲了解更多信息，可参考
-[节点可分配资源](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
+[节点可分配资源](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
节。

调度器会确保所调度的 Containers 的资源请求总和不会超出节点的资源容量。

@@ -838,7 +836,7 @@ If you want to use project quotas, you should:
如果你希望使用项目配额，你需要：

* 在 kubelet 配置中启用 `LocalStorageCapacityIsolationFSQuotaMonitoring=true`
-  [特性门控](/docs/reference/command-line-tools-reference/feature-gates/)。
+  [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。

* 确保根文件系统（或者可选的运行时文件系统）启用了项目配额。所有 XFS
  文件系统都支持项目配额。

@@ -896,7 +894,8 @@ for how to advertise device plugin managed resources on each node.

##### 设备插件管理的资源

-有关如何颁布在各节点上由设备插件所管理的资源，请参阅[设备插件](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)。
+有关如何颁布在各节点上由设备插件所管理的资源，请参阅
+[设备插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)。

<!--
##### Other resources

@@ -1198,11 +1197,11 @@ with namespaces, it can prevent one team from hogging all the resources.
通过查看 `Pods` 部分，您将看到哪些 Pod 占用了节点上的资源。

可供 Pod 使用的资源量小于节点容量，因为系统守护程序也会使用一部分可用资源。
-[NodeStatus](/docs/resources-reference/{{< param "version" >}}/#nodestatus-v1-core)
+[NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core)
的 `allocatable` 字段给出了可用于 Pod 的资源量。
有关更多信息，请参阅 [节点可分配资源](https://git.k8s.io/community/contributors/design-proposals/node-allocatable.md)。

-可以配置 [资源配额](/docs/concepts/policy/resource-quotas/) 功能特性以限制可以使用的资源总量。
+可以配置 [资源配额](/zh/docs/concepts/policy/resource-quotas/) 功能特性以限制可以使用的资源总量。
如果与名字空间配合一起使用，就可以防止一个团队占用所有资源。
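可以用如下命令查看某个节点的容量与可分配量（节点名 `my-node-1` 仅为举例），输出中的 `Capacity` 与 `Allocatable` 部分即对应上述字段：

```shell
kubectl describe node my-node-1
```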

<!--

@@ -1301,10 +1300,10 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
* Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
-->

-* 获取将 [分配内存资源给容器和 Pod](/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
-* 获取将 [分配 CPU 资源给容器和 Pod](/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
+* 获取将 [分配内存资源给容器和 Pod](/zh/docs/tasks/configure-pod-container/assign-memory-resource/) 的实践经验
+* 获取将 [分配 CPU 资源给容器和 Pod](/zh/docs/tasks/configure-pod-container/assign-cpu-resource/) 的实践经验
* 关于请求和约束之间的区别，细节信息可参见[资源服务质量](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md)
* 阅读 API 参考文档中 [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) 部分。
* 阅读 API 参考文档中 [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) 部分。
-* 阅读 XFS 中关于 [项目配额](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) 的文档。
+* 阅读 XFS 中关于 [项目配额](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) 的文档。

@@ -4,12 +4,10 @@ content_type: concept
weight: 60
---
<!--
---
title: Organizing Cluster Access Using kubeconfig Files
content_type: concept
weight: 60
---
---->
+-->

<!-- overview -->

@@ -18,7 +16,7 @@ Use kubeconfig files to organize information about clusters, users, namespaces,
authentication mechanisms. The `kubectl` command-line tool uses kubeconfig files to
find the information it needs to choose a cluster and communicate with the API server
of a cluster.
---->
+-->
使用 kubeconfig 文件来组织有关集群、用户、命名空间和身份认证机制的信息。`kubectl` 命令行工具使用 kubeconfig 文件来查找选择集群所需的信息，并与集群的 API 服务器进行通信。

<!--

@@ -27,7 +25,7 @@ A file that is used to configure access to clusters is called
a *kubeconfig file*. This is a generic way of referring to configuration files.
It does not mean that there is a file named `kubeconfig`.
{{< /note >}}
---->
+-->
{{< note >}}
用于配置集群访问的文件称为 *kubeconfig 文件*。这是引用配置文件的通用方法。这并不意味着有一个名为 `kubeconfig` 的文件。
{{< /note >}}

@@ -36,37 +34,37 @@ It does not mean that there is a file named `kubeconfig`.
By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory.
You can specify other kubeconfig files by setting the `KUBECONFIG` environment
variable or by setting the
[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/) flag.
---->
-默认情况下，`kubectl` 在 `$HOME/.kube` 目录下查找名为 `config` 的文件。您可以通过设置 `KUBECONFIG` 环境变量或者设置[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/)参数来指定其他 kubeconfig 文件。
+-->
+默认情况下，`kubectl` 在 `$HOME/.kube` 目录下查找名为 `config` 的文件。
+您可以通过设置 `KUBECONFIG` 环境变量或者设置
+[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/) 参数来指定其他 kubeconfig 文件。
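示意（文件路径仅为举例）：

```shell
# 使用指定的 kubeconfig 文件，而不是默认的 $HOME/.kube/config
kubectl --kubeconfig=/path/to/dev-config get pods
```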

<!--
For step-by-step instructions on creating and specifying kubeconfig files, see
[Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters).
---->
-有关创建和指定 kubeconfig 文件的分步说明，请参阅[配置对多集群的访问](/docs/tasks/access-application-cluster/configure-access-multiple-clusters)。
+-->
+有关创建和指定 kubeconfig 文件的分步说明，请参阅
+[配置对多集群的访问](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters)。

<!-- body -->

<!--
## Supporting multiple clusters, users, and authentication mechanisms
---->
+-->
## 支持多集群、用户和身份认证机制

<!--
Suppose you have several clusters, and your users and components authenticate
in a variety of ways. For example:
---->
+-->
假设您有多个集群，并且您的用户和组件以多种方式进行身份认证。比如：

<!--
- A running kubelet might authenticate using certificates.
- A user might authenticate using tokens.
- Administrators might have sets of certificates that they provide to individual users.
---->
+-->
- 正在运行的 kubelet 可能使用证书进行认证。
- 用户可能通过令牌进行认证。
- 管理员可能拥有多个证书集合提供给各用户。

@@ -75,12 +73,12 @@ in a variety of ways. For example:
<!--
With kubeconfig files, you can organize your clusters, users, and namespaces.
You can also define contexts to quickly and easily switch between
clusters and namespaces.
---->
+-->
使用 kubeconfig 文件，您可以组织集群、用户和命名空间。您还可以定义上下文，以便在集群和命名空间之间快速轻松地切换。

<!--
## Context
---->
+-->
## 上下文（Context）

<!--

@@ -88,12 +86,12 @@ A *context* element in a kubeconfig file is used to group access parameters
under a convenient name. Each context has three parameters: cluster, namespace, and user.
By default, the `kubectl` command-line tool uses parameters from
the *current context* to communicate with the cluster.
---->
+-->
通过 kubeconfig 文件中的 *context* 元素，使用简便的名称来对访问参数进行分组。每个上下文都有三个参数：cluster、namespace 和 user。默认情况下，`kubectl` 命令行工具使用 *当前上下文* 中的参数与集群进行通信。
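下面是一个 kubeconfig 片段示意（其中集群、用户与上下文的名字均为举例），展示上下文如何把这三个参数组合在一起：

```yaml
apiVersion: v1
kind: Config
clusters:
- name: development
  cluster:
    server: https://1.2.3.4        # 示例 API 服务器地址
users:
- name: developer
  user:
    token: REDACTED                # 仅为占位
contexts:
- name: dev-frontend               # 上下文 = 集群 + 命名空间 + 用户
  context:
    cluster: development
    namespace: frontend
    user: developer
current-context: dev-frontend
```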

<!--
To choose the current context:
---->
+-->
选择当前上下文：
```
kubectl config use-context
```

@@ -101,7 +99,7 @@ kubectl config use-context

<!--
## The KUBECONFIG environment variable
---->
+-->
## KUBECONFIG 环境变量

<!--

@@ -110,25 +108,29 @@ For Linux and Mac, the list is colon-delimited. For Windows, the list
is semicolon-delimited. The `KUBECONFIG` environment variable is not
required. If the `KUBECONFIG` environment variable doesn't exist,
`kubectl` uses the default kubeconfig file, `$HOME/.kube/config`.
---->
-`KUBECONFIG` 环境变量包含一个 kubeconfig 文件列表。对于 Linux 和 Mac，列表以冒号分隔。对于 Windows，列表以分号分隔。`KUBECONFIG` 环境变量不是必要的。如果 `KUBECONFIG` 环境变量不存在，`kubectl` 使用默认的 kubeconfig 文件，`$HOME/.kube/config`。
+-->
+`KUBECONFIG` 环境变量包含一个 kubeconfig 文件列表。
+对于 Linux 和 Mac，列表以冒号分隔。对于 Windows，列表以分号分隔。
+`KUBECONFIG` 环境变量不是必要的。
+如果 `KUBECONFIG` 环境变量不存在，`kubectl` 使用默认的 kubeconfig 文件，`$HOME/.kube/config`。

<!--
If the `KUBECONFIG` environment variable does exist, `kubectl` uses
an effective configuration that is the result of merging the files
listed in the `KUBECONFIG` environment variable.
---->
+-->
如果 `KUBECONFIG` 环境变量存在，`kubectl` 使用 `KUBECONFIG` 环境变量中列举的文件合并后的有效配置。
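示意（文件名仅为举例）：

```shell
# 列出两个 kubeconfig 文件，kubectl 将按顺序合并它们
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/dev-config

# 查看合并后的有效配置
kubectl config view
```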

<!--
## Merging kubeconfig files
---->
+-->
## 合并 kubeconfig 文件

<!--
To see your configuration, enter this command:
---->
+-->
要查看配置，输入以下命令：

```shell
kubectl config view
```

@@ -136,16 +138,16 @@ kubectl config view
<!--
As described previously, the output might be from a single kubeconfig file,
or it might be the result of merging several kubeconfig files.
---->
+-->
如前所述，输出可能来自单个 kubeconfig 文件，也可能是合并多个 kubeconfig 文件的结果。

<!--
Here are the rules that `kubectl` uses when it merges kubeconfig files:
---->
+-->
以下是 `kubectl` 在合并 kubeconfig 文件时使用的规则。

<!--
1. If the `--kubeconfig` flag is set, use only the specified file. Do not merge.
   Only one instance of this flag is allowed.

   Otherwise, if the `KUBECONFIG` environment variable is set, use it as a

@@ -160,7 +162,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
   Example: Preserve the context of the first file to set `current-context`.
   Example: If two files specify a `red-user`, use only values from the first file's `red-user`.
   Even if the second file has non-conflicting entries under `red-user`, discard them.
---->
+-->
1. 如果设置了 `--kubeconfig` 参数，则仅使用指定的文件。不进行合并。此参数只能使用一次。

   否则，如果设置了 `KUBECONFIG` 环境变量，将它用作应合并的文件列表。根据以下规则合并 `KUBECONFIG` 环境变量中列出的文件：

@@ -173,20 +175,21 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
<!--
For an example of setting the `KUBECONFIG` environment variable, see
[Setting the KUBECONFIG environment variable](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable).
---->
-有关设置 `KUBECONFIG` 环境变量的示例，请参阅[设置 KUBECONFIG 环境变量](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)。
+-->
+有关设置 `KUBECONFIG` 环境变量的示例，请参阅
+[设置 KUBECONFIG 环境变量](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)。

<!--
Otherwise, use the default kubeconfig file, `$HOME/.kube/config`, with no merging.
---->
+-->
否则，使用默认的 kubeconfig 文件，`$HOME/.kube/config`，不进行合并。

<!--
1. Determine the context to use based on the first hit in this chain:

   1. Use the `--context` command-line flag if it exists.
   2. Use the `current-context` from the merged kubeconfig files.
---->
+-->
1. 根据此链中的第一个匹配确定要使用的上下文。

   1. 如果存在，使用 `--context` 命令行参数。

@@ -194,7 +197,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:

<!--
An empty context is allowed at this point.
---->
+-->
这种场景下允许空上下文。

<!--

@@ -204,7 +207,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:

   1. Use a command-line flag if it exists: `--user` or `--cluster`.
   2. If the context is non-empty, take the user or cluster from the context.
---->
+-->
1. 确定集群和用户。此时，可能有也可能没有上下文。根据此链中的第一个匹配确定集群和用户，这将运行两次：一次用于用户，一次用于集群。

   1. 如果存在，使用命令行参数：`--user` 或者 `--cluster`。

@@ -212,7 +215,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:

<!--
The user and cluster can be empty at this point.
---->
+-->
这种场景下用户和集群可以为空。

<!--

@@ -223,7 +226,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
   1. Use command line flags if they exist: `--server`, `--certificate-authority`, `--insecure-skip-tls-verify`.
   2. If any cluster information attributes exist from the merged kubeconfig files, use them.
   3. If there is no server location, fail.
---->
+-->
1. 确定要使用的实际集群信息。此时，可能有也可能没有集群信息。基于此链构建每个集群信息；第一个匹配项会被采用：

   1. 如果存在：`--server`、`--certificate-authority` 和 `--insecure-skip-tls-verify`，使用命令行参数。

@@ -238,7 +241,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
   1. Use command line flags if they exist: `--client-certificate`, `--client-key`, `--username`, `--password`, `--token`.
   2. Use the `user` fields from the merged kubeconfig files.
   3. If there are two conflicting techniques, fail.
---->
+-->
2. 确定要使用的实际用户信息。使用与集群信息相同的规则构建用户信息，但每个用户只允许一种身份认证技术：

   1. 如果存在：`--client-certificate`、`--client-key`、`--username`、`--password` 和 `--token`，使用命令行参数。

@@ -248,12 +251,12 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
<!--
3. For any information still missing, use default values and potentially
   prompt for authentication information.
---->
+-->
3. 对于仍然缺失的任何信息，使用其对应的默认值，并可能提示输入身份认证信息。

<!--
## File references
---->
+-->
## 文件引用

<!--

@@ -261,21 +264,17 @@ File and path references in a kubeconfig file are relative to the location of th
File references on the command line are relative to the current working directory.
In `$HOME/.kube/config`, relative paths are stored relatively, and absolute paths
are stored absolutely.
---->
-kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位置。命令行上的文件引用是相当对于当前工作目录的。在 `$HOME/.kube/config` 中，相对路径按相对路径存储，绝对路径按绝对路径存储。
+-->
+kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位置。
+命令行上的文件引用是相对于当前工作目录的。
+在 `$HOME/.kube/config` 中，相对路径按相对路径存储，绝对路径按绝对路径存储。

## {{% heading "whatsnext" %}}

<!--
* [Configure Access to Multiple Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)
--->
-* [配置对多集群的访问](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
+* [配置对多集群的访问](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)

@@ -1,18 +1,12 @@
---
reviewers:
- mikedanese
title: 配置最佳实践
content_type: concept
weight: 10
---
<!--
---
reviewers:
- mikedanese
title: Configuration Best Practices
content_type: concept
weight: 10
---
-->

<!-- overview -->

@@ -24,9 +18,8 @@ This document highlights and consolidates configuration best practices that are
<!--
This is a living document. If you think of something that is not on this list but might be useful to others, please don't hesitate to file an issue or submit a PR.
-->
-这是一份活文件。
-如果您认为某些内容不在此列表中但可能对其他人有用，请不要犹豫，提交问题或提交 PR。
+这是一份不断改进的文件。
+如果您认为某些内容缺失但可能对其他人有用，请不要犹豫，提交 Issue 或提交 PR。

<!-- body -->
<!--

@@ -83,15 +76,19 @@ This is a living document. If you think of something that is not on this list bu
<!--
- Don't use naked Pods (that is, Pods not bound to a [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) or [Deployment](/docs/concepts/workloads/controllers/deployment/)) if you can avoid it. Naked Pods will not be rescheduled in the event of a node failure.
-->
-- 如果您能避免，不要使用 naked Pods（即，Pod 未绑定到[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) 或[Deployment](/docs/concepts/workloads/controllers/deployment/)）。
-  如果节点发生故障，将不会重新安排 Naked Pods。
+- 如果可能，不要使用独立的 Pods（即，未绑定到
+  [ReplicaSet](/zh/docs/concepts/workloads/controllers/replicaset/) 或
+  [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 的 Pod）。
+  如果节点发生故障，将不会重新调度独立的 Pods。

<!--
A Deployment, which both creates a ReplicaSet to ensure that the desired number of Pods is always available, and specifies a strategy to replace Pods (such as [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), is almost always preferable to creating Pods directly, except for some explicit [`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios. A [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) may also be appropriate.
-->
-  Deployment，它创建一个 ReplicaSet 以确保所需数量的 Pod 始终可用，并指定替换 Pod 的策略（例如 [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)），除了一些显式的[`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)场景之外，几乎总是优先考虑直接创建 Pod。
-  [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) 也可能是合适的。
+  Deployment 会创建一个 ReplicaSet 以确保所需数量的 Pod 始终可用，并指定替换 Pod 的策略
+  （例如 [RollingUpdate](/zh/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)）。
+  除了一些显式的 [`restartPolicy: Never`](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
+  场景之外，使用 Deployment 几乎总是优于直接创建 Pod。
+  [Job](/zh/docs/concepts/workloads/controllers/job/) 也可能是合适的。

<!--
## Services

@@ -101,9 +98,10 @@ This is a living document. If you think of something that is not on this list bu
<!--
- Create a [Service](/docs/concepts/services-networking/service/) before its corresponding backend workloads (Deployments or ReplicaSets), and before any workloads that need to access it. When Kubernetes starts a container, it provides environment variables pointing to all the Services which were running when the container was started. For example, if a Service named `foo` exists, all containers will get the following variables in their initial environment:
-->
-- 在其相应的后端工作负载（Deployment 或 ReplicaSet）之前，以及在需要访问它的任何工作负载之前创建[服务](/docs/concepts/services-networking/service/)。
-  当 Kubernetes 启动容器时，它提供指向启动容器时正在运行的所有服务的环境变量。
-  例如，如果存在名为`foo`当服务，则所有容器将在其初始环境中获取以下变量。
+- 在创建相应的后端工作负载（Deployment 或 ReplicaSet），以及在需要访问它的任何工作负载之前创建
+  [服务](/zh/docs/concepts/services-networking/service/)。
+  当 Kubernetes 启动容器时，它提供指向启动容器时正在运行的所有服务的环境变量。
+  例如，如果存在名为 `foo` 的服务，则所有容器将在其初始环境中获得以下变量。

```shell
FOO_SERVICE_HOST=<the host the Service is running on>
```

@@ -113,43 +111,51 @@ This is a living document. If you think of something that is not on this list bu
<!--
*This does imply an ordering requirement* - any `Service` that a `Pod` wants to access must be created before the `Pod` itself, or else the environment variables will not be populated. DNS does not have this restriction.
-->
-*这确实意味着订购要求* - 必须在`Pod`本身之前创建`Pod`想要访问的任何`Service`，否则将不会填充环境变量。
-DNS没有此限制。
+*这确实意味着在顺序上的要求* - 必须在 `Pod` 本身被创建之前创建 `Pod` 想要访问的任何 `Service`，
+否则环境变量将不会生效。DNS 没有此限制。

<!--
- An optional (though strongly recommended) [cluster add-on](/docs/concepts/cluster-administration/addons/) is a DNS server. The
DNS server watches the Kubernetes API for new `Services` and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all `Pods` should be able to do name resolution of `Services` automatically.
-->
-- 一个可选（尽管强烈推荐）[cluster add-on](/docs/concepts/cluster-administration/addons/)是 DNS 服务器。DNS 服务器为新的`Services`监视 Kubernetes API，并为每个创建一组 DNS 记录。
-  如果在整个集群中启用了 DNS，则所有`Pods`应该能够自动对`Services`进行名称解析。
+- 一个可选（尽管强烈推荐）的[集群插件](/zh/docs/concepts/cluster-administration/addons/)
+  是 DNS 服务器。DNS 服务器为新的 `Services` 监视 Kubernetes API，并为每个创建一组 DNS 记录。
+  如果在整个集群中启用了 DNS，则所有 `Pods` 应该能够自动对 `Services` 进行名称解析。

<!--
- Don't specify a `hostPort` for a Pod unless it is absolutely necessary. When you bind a Pod to a `hostPort`, it limits the number of places the Pod can be scheduled, because each <`hostIP`, `hostPort`, `protocol`> combination must be unique. If you don't specify the `hostIP` and `protocol` explicitly, Kubernetes will use `0.0.0.0` as the default `hostIP` and `TCP` as the default `protocol`.
-->
-- 除非绝对必要，否则不要为 Pod 指定`hostPort`。
-  将 Pod 绑定到`hostPort`时，它会限制 Pod 可以调度的位置数，因为每个<`hostIP`, `hostPort`, `protocol`>组合必须是唯一的。如果您没有明确指定`hostIP`和`protocol`，Kubernetes将使用`0.0.0.0`作为默认`hostIP`和`TCP`作为默认`protocol`。
+- 除非绝对必要，否则不要为 Pod 指定 `hostPort`。
+  将 Pod 绑定到 `hostPort` 时，它会限制 Pod 可以调度的位置数，因为每个
+  `<hostIP, hostPort, protocol>` 组合必须是唯一的。
+  如果您没有明确指定 `hostIP` 和 `protocol`，Kubernetes 将使用 `0.0.0.0` 作为默认
+  `hostIP` 和 `TCP` 作为默认 `protocol`。

<!--
If you only need access to the port for debugging purposes, you can use the [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls) or [`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
-->
-  如果您只需要访问端口以进行调试，则可以使用[apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)或[`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)。
+  如果您只需要访问端口以进行调试，则可以使用
+  [apiserver proxy](/zh/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls) 或
+  [`kubectl port-forward`](/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)。

<!--
If you explicitly need to expose a Pod's port on the node, consider using a [NodePort](/docs/concepts/services-networking/service/#nodeport) Service before resorting to `hostPort`.
-->
-  如果您明确需要在节点上公开 Pod 的端口，请在使用`hostPort`之前考虑使用[NodePort](/docs/concepts/services-networking/service/#nodeport) 服务。
+  如果您明确需要在节点上公开 Pod 的端口，请在使用 `hostPort` 之前考虑使用
+  [NodePort](/zh/docs/concepts/services-networking/service/#nodeport) 服务。

<!--
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
-->
-- 避免使用`hostNetwork`，原因与`hostPort`相同。
+- 避免使用 `hostNetwork`，原因与 `hostPort` 相同。

<!--
- Use [headless Services](/docs/concepts/services-networking/service/#headless-
services) (which have a `ClusterIP` of `None`) for easy service discovery when you don't need `kube-proxy` load balancing.
-->
-- 当您不需要`kube-proxy`负载平衡时，使用 [无头服务](/docs/concepts/services-networking/service/#headless-
-  services) (具有`None`的`ClusterIP`)以便于服务发现。
+- 当您不需要 `kube-proxy` 负载均衡时，使用
+  [无头服务](/zh/docs/concepts/services-networking/service/#headless-services)
+  （`ClusterIP` 被设置为 `None`）以便于服务发现。
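一个无头服务的最小示意（名字与选择算符仅为举例）：

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None        # 无头服务：不分配 ClusterIP，不经 kube-proxy 负载均衡
  selector:
    app: myapp
  ports:
  - port: 80
```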

<!--
## Using Labels

@@ -159,29 +165,33 @@ services) (具有`None`的`ClusterIP`)以便于服务发现。
<!--
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) app for examples of this approach.
-->
-- 定义并使用[标签](/docs/concepts/overview/working-with-objects/labels/)来识别应用程序或部署的__semantic attributes__，例如`{ app: myapp, tier: frontend, phase: test, deployment: v3 }`。
-  您可以使用这些标签为其他资源选择合适的 Pod；例如，一个选择所有`tier: frontend` Pod 的服务，或者`app: myapp`的所有`phase: test`组件。
-  有关此方法的示例，请参阅[留言板](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) 。
+- 定义并使用[标签](/zh/docs/concepts/overview/working-with-objects/labels/)来识别应用程序
+  或 Deployment 的 __语义属性__，例如`{ app: myapp, tier: frontend, phase: test, deployment: v3 }`。
+  你可以使用这些标签为其他资源选择合适的 Pod；
+  例如，一个选择所有 `tier: frontend` Pod 的服务，或者 `app: myapp` 的所有 `phase: test` 组件。
+  有关此方法的示例，请参阅[guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) 。

<!--
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. [Deployments](/docs/concepts/workloads/controllers/deployment/) make it easy to update a running service without downtime.
-->
-  通过从选择器中省略特定发行版的标签，可以使服务跨越多个部署。
-  [部署](/docs/concepts/workloads/controllers/deployment/)可以在不停机的情况下轻松更新正在运行的服务。
+  通过从选择器中省略特定发行版的标签，可以使服务跨越多个 Deployment。
+  [Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 可以在不停机的情况下轻松更新正在运行的服务。

<!--
A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate.
-->
-  部署描述了对象的期望状态，并且如果对该规范的更改是_applied_，则部署控制器以受控速率将实际状态改变为期望状态。
+  Deployment 描述了对象的期望状态，并且如果对该规范的更改被成功应用，
+  则 Deployment 控制器以受控速率将实际状态改变为期望状态。

<!--
- You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).
-->
- 您可以操纵标签进行调试。
-- 由于 Kubernetes 控制器（例如 ReplicaSet）和服务使用选择器标签与 Pod 匹配，因此从 Pod 中删除相关标签将阻止其被控制器考虑或由服务提供服务流量。
+  由于 Kubernetes 控制器（例如 ReplicaSet）和服务使用选择器标签来匹配 Pod，
+  从 Pod 中删除相关标签将阻止其被控制器考虑或由服务提供服务流量。
  如果删除现有 Pod 的标签，其控制器将创建一个新的 Pod 来取代它。
-  这是在"隔离"环境中调试先前"实时"Pod 的有用方法。
-  要以交互方式删除或添加标签，请使用[`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label)。
+  这是在"隔离"环境中调试先前"活跃"的 Pod 的有用方法。
+  要以交互方式删除或添加标签，请使用 [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label)。
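示意（Pod 名与标签仅为举例）：

```shell
# 移除 app 标签（注意结尾的连字符），把 Pod 从控制器和服务的选择范围中隔离出来
kubectl label pods my-pod-1234 app-

# 重新打上标签
kubectl label pods my-pod-1234 app=myapp
```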

<!--
## Container Images

@@ -191,38 +201,29 @@ A desired state of an object is described by a Deployment, and if changes to tha
<!--
The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/admin/kubelet/) attempts to pull the specified image.
-->
-当 [kubelet](/docs/admin/kubelet/)尝试拉取指定的镜像时，[imagePullPolicy](/docs/concepts/containers/images/#升级镜像)和镜像标签会生效。
+[imagePullPolicy](/zh/docs/concepts/containers/images/#updating-images)和镜像标签会影响
+[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) 何时尝试拉取指定的镜像。

<!--
- `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally.
-->
-- `imagePullPolicy: IfNotPresent`：仅当镜像在本地不存在时镜像才被拉取。

<!--
- `imagePullPolicy: Always`: the image is pulled every time the pod is started.
-->
-- `imagePullPolicy: Always`：每次启动 pod 的时候都会拉取镜像。

<!--
- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `Always` is applied.
-->
-- `imagePullPolicy` 省略时，镜像标签为 `:latest` 或不存在，使用 `Always` 值。

<!--
- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `IfNotPresent` is applied.
-->
-- `imagePullPolicy` 省略时，指定镜像标签并且不是 `:latest`，使用 `IfNotPresent` 值。

<!--
- `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image.
-->
+- `imagePullPolicy: IfNotPresent`：仅当镜像在本地不存在时才被拉取。
+- `imagePullPolicy: Always`：每次启动 Pod 的时候都会拉取镜像。
+- `imagePullPolicy` 省略时，镜像标签为 `:latest` 或不存在，使用 `Always` 值。
+- `imagePullPolicy` 省略时，指定镜像标签并且不是 `:latest`，使用 `IfNotPresent` 值。
+- `imagePullPolicy: Never`：假设镜像已经存在本地，不会尝试拉取镜像。

<!--
To make sure the container always uses the same version of the image, you can specify its [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier), for example `sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`. The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value.
-->
{{< note >}}
-要确保容器始终使用相同版本的镜像，你可以指定其 [摘要](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier), 例如`sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`。
+要确保容器始终使用相同版本的镜像，你可以指定其
+[摘要](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier)，
+例如 `sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`。
摘要唯一地标识出镜像的指定版本，因此除非您更改摘要值，否则 Kubernetes 永远不会更新它。
{{< /note >}}
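按摘要固定镜像版本的示意（镜像名与摘要值的组合仅为举例）：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-image-demo
spec:
  containers:
  - name: app
    # 通过摘要（而非标签）引用镜像；除非修改摘要值，所用镜像版本不会变化
    image: nginx@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```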

@@ -230,15 +231,15 @@ You should avoid using the `:latest` tag when deploying containers in production
You should avoid using the `:latest` tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
-->
{{< note >}}
-在生产中部署容器时应避免使用 `:latest` 标记，因为更难跟踪正在运行的镜像版本，并且更难以正确回滚。
+在生产中部署容器时应避免使用 `:latest` 标记，因为这样更难跟踪正在运行的镜像版本，并且更难以正确回滚。
{{< /note >}}

<!--
The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
-->
{{< note >}}
-底层镜像提供程序的缓存语义甚至使 `imagePullPolicy: Always`变得高效。
-例如，对于 Docker，如果镜像已经存在，则拉取尝试很快，因为镜像层都被缓存并且不需要镜像下载。
+底层镜像驱动程序的缓存语义能够使即便 `imagePullPolicy: Always` 的配置也很高效。
+例如，对于 Docker，如果镜像已经存在，则拉取尝试很快，因为镜像层都被缓存并且不需要下载。
{{< /note >}}

<!--

@@ -249,19 +250,20 @@ The caching semantics of the underlying image provider make even `imagePullPolic
<!--
- Use `kubectl apply -f <directory>`. This looks for Kubernetes configuration in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes it to `apply`.
-->
-- 使用`kubectl apply -f <directory>`。
-  它在`<directory>`中的所有`.yaml`，`.yml`和`.json`文件中查找 Kubernetes 配置，并将其传递给`apply`。
+- 使用 `kubectl apply -f <directory>`。
+  它在 `<directory>` 中的所有 `.yaml`、`.yml` 和 `.json` 文件中查找 Kubernetes 配置，并将其传递给 `apply`。

<!--
- Use label selectors for `get` and `delete` operations instead of specific object names. See the sections on [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) and [using labels effectively](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively).
-->
-- 使用标签选择器进行`get`和`delete`操作，而不是特定的对象名称。
-- 请参阅[标签选择器](/docs/concepts/overview/working-with-objects/labels/#label-selectors)和[有效使用标签](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。
+- 使用标签选择器进行 `get` 和 `delete` 操作，而不是特定的对象名称。
+- 请参阅[标签选择器](/zh/docs/concepts/overview/working-with-objects/labels/#label-selectors)和
+  [有效使用标签](/zh/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)部分。
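示意（标签仅为举例）：

```shell
# 用标签选择器查询与删除对象，而不是逐个指定对象名
kubectl get pods -l app=myapp
kubectl delete pods -l app=myapp
```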

<!--
- Use `kubectl run` and `kubectl expose` to quickly create single-container Deployments and Services. See [Use a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) for an example.
-->
- 使用 `kubectl run` 和 `kubectl expose` 来快速创建单容器部署和服务。
-  有关示例，请参阅[使用服务访问集群中的应用程序](/docs/tasks/access-application-cluster/service-access-application-cluster/)。
+  有关示例，请参阅[使用服务访问集群中的应用程序](/zh/docs/tasks/access-application-cluster/service-access-application-cluster/)。

@@ -18,9 +18,6 @@ on top of the container requests & limits.
在节点上运行 Pod 时，Pod 本身占用大量系统资源。这些资源是运行 Pod 内容器所需资源的附加资源。
_Pod 开销_ 是一个特性，用于计算 Pod 基础设施在容器请求和限制之上消耗的资源。

<!-- body -->

<!--

@@ -36,8 +33,8 @@ time according to the overhead associated with the Pod's
[RuntimeClass](/docs/concepts/containers/runtime-class/).
-->

-在 Kubernetes 中，Pod 的开销是根据与 Pod 的 [RuntimeClass](/docs/concepts/containers/runtime-class/) 相关联的开销在
-[准入](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) 时设置的。
+在 Kubernetes 中，Pod 的开销是根据与 Pod 的 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 相关联的开销在
+[准入](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) 时设置的。

<!--
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container

@@ -56,7 +53,8 @@ You need to make sure that the `PodOverhead`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is on by default as of 1.18)
across your cluster, and a `RuntimeClass` is utilized which defines the `overhead` field.
-->
-您需要确保在集群中启用了 `PodOverhead` [特性门](/docs/reference/command-line-tools-reference/feature-gates/)（在 1.18 默认是开启的），以及一个用于定义 `overhead` 字段的 `RuntimeClass`。
+您需要确保在集群中启用了 `PodOverhead` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
+（在 1.18 默认是开启的），以及一个用于定义 `overhead` 字段的 `RuntimeClass`。

<!--
## Usage example

@@ -68,7 +66,9 @@ To use the PodOverhead feature, you need a RuntimeClass that defines the `overhe
an example, you could use the following RuntimeClass definition with a virtualizing container runtime
that uses around 120MiB per Pod for the virtual machine and the guest OS:
-->
-要使用 PodOverhead 特性，需要一个定义 `overhead` 字段的 RuntimeClass. 作为例子，可以在虚拟机和来宾操作系统中通过一个虚拟化容器运行时来定义 RuntimeClass 如下，其中每个 Pod 大约使用 120MiB：
+要使用 PodOverhead 特性，需要一个定义 `overhead` 字段的 RuntimeClass。
+作为例子，可以在虚拟机和寄宿操作系统中通过一个虚拟化容器运行时来定义
+RuntimeClass 如下，其中每个 Pod 大约使用 120MiB：

```yaml
---
```

@@ -123,8 +123,9 @@ updates the workload's PodSpec to include the `overhead` as described in the Run
the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod
to include an `overhead`.
-->
-在准入阶段 RuntimeClass [准入控制器](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含
-RuntimeClass 中定义的 `overhead`. 如果 PodSpec 中该字段已定义，该 Pod 将会被拒绝。在这个例子中，由于只指定了 RuntimeClass 名称，所以准入控制器更新了 Pod, 包含了一个 `overhead`.
+在准入阶段 RuntimeClass [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含
+RuntimeClass 中定义的 `overhead`。如果 PodSpec 中该字段已定义，该 Pod 将会被拒绝。
+在这个例子中，由于只指定了 RuntimeClass 名称，所以准入控制器更新了 Pod，包含了一个 `overhead`。
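下面给出一个简要示意（RuntimeClass 名字 `kata-fc` 仅为假设的例子）：工作负载只需指定 `runtimeClassName`，`overhead` 由准入控制器注入：

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: overhead-demo
spec:
  runtimeClassName: kata-fc   # 假设其定义中带有 overhead 字段
  containers:
  - name: app
    image: nginx
```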

<!--
After the RuntimeClass admission controller, you can check the updated PodSpec:

@@ -298,12 +299,8 @@ from source in the meantime.
在 [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) 中可以通过 `kube_pod_overhead` 指标来协助确定何时使用 PodOverhead 以及协助观察以一个既定开销运行的工作负载的稳定性。
该特性在 kube-state-metrics 的 1.9 发行版本中不可用，不过预计将在后续版本中发布。在此之前，用户需要从源代码构建 kube-state-metrics。

## {{% heading "whatsnext" %}}

-* [RuntimeClass](/docs/concepts/containers/runtime-class/)
+* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
* [PodOverhead 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)

@@ -33,7 +33,7 @@ The `image` property of a container supports the same syntax as the `docker` com
<!--
## Updating Images
-->
-## 升级镜像
+## 更新镜像 {#updating-images}

@@ -1,76 +0,0 @@
----
-title: 概念模板示例
-content_type: concept
-toc_hide: true
----
-
-<!--
----
-title: Example Concept Template
-content_type: concept
-toc_hide: true
----
--->
-
-<!-- overview -->
-
-<!--
-Be sure to also [create an entry in the table of contents](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) for your new document.
--->
-
-{{< note >}}
-确保您的新文档也可以[在目录中创建一个条目](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents)。
-{{< /note >}}
-
-<!--
-This page explains ...
--->
-
-本页解释了 ...
-
-<!-- body -->
-
-<!--
-## Understanding ...
--->
-## 了解 ...
-
-<!--
-Kubernetes provides ...
--->
-Kubernetes 提供 ...
-
-<!--
-## Using ...
--->
-## 使用 ...
-
-<!--
-To use ...
--->
-使用 ...
-
-## {{% heading "whatsnext" %}}
-
-<!--
-**[Optional Section]**
--->
-**[可选章节]**
-
-<!--
-* Learn more about [Writing a New Topic](/docs/home/contribute/write-new-topic/).
-* See [Using Page Templates - Concept template](/docs/home/contribute/page-templates/#concept_template) for how to use this template.
--->
-* 了解有关[撰写新主题](/docs/home/contribute/write-new-topic/)的更多信息。
-* 有关如何使用此模板的信息，请参阅[使用页面模板 - 概念模板](/docs/home/contribute/page-templates/#concept_template)。

@ -5,18 +5,11 @@ weight: 50
|
|||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- davidopp
|
||||
- kevin-wangzefeng
|
||||
- bsalamat
|
||||
title: Assigning Pods to Nodes
|
||||
content_type: concept
|
||||
weight: 50
|
||||
---
|
||||
-->
|
||||
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
|
@ -26,14 +19,12 @@ There are several ways to do this, and the recommended approaches all use
|
|||
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to make the selection.
|
||||
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
|
||||
(e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.)
|
||||
but there are some circumstances where you may want more control on a node where a pod lands, e.g. to ensure
|
||||
but there are some circumstances where you may want more control on a node where a pod lands, for example to ensure
|
||||
that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
|
||||
services that communicate a lot into the same availability zone.
|
||||
-->
|
||||
|
||||
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的 {{< glossary_tooltip text="Node(s)" term_id="node" >}} 上运行,或者优先运行在特定的节点上。有几种方法可以实现这点,推荐的方法都是用[标签选择器](/docs/concepts/overview/working-with-objects/labels/)来进行选择。通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 pod 分散到节点上,而不是将 pod 放置在可用资源不足的节点上等等),但在某些情况下,你可以需要更多控制 pod 停靠的节点,例如,确保 pod 最终落在连接了 SSD 的机器上,或者将来自两个不同的服务且有大量通信的 pod 放置在同一个可用区。
|
||||
|
||||
|
||||
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的 {{< glossary_tooltip text="Node(s)" term_id="node" >}} 上运行,或者优先运行在特定的节点上。有几种方法可以实现这点,推荐的方法都是用[标签选择器](/zh/docs/concepts/overview/working-with-objects/labels/)来进行选择。通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 pod 分散到节点上,而不是将 pod 放置在可用资源不足的节点上等等),但在某些情况下,你可以需要更多控制 pod 停靠的节点,例如,确保 pod 最终落在连接了 SSD 的机器上,或者将来自两个不同的服务且有大量通信的 pod 放置在同一个可用区。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -46,7 +37,8 @@ to run on a node, the node must have each of the indicated key-value pairs as la
|
|||
additional labels as well). The most common usage is one key-value pair.
|
||||
-->
|
||||
|
||||
`nodeSelector` 是节点选择约束的最简单推荐形式。`nodeSelector` 是 PodSpec 的一个字段。它指定键值对的映射。为了使 pod 可以在节点上运行,节点必须具有每个指定的键值对作为标签(它也可以具有其他标签)。最常用的是一对键值对。
|
||||
`nodeSelector` 是节点选择约束的最简单推荐形式。`nodeSelector` 是 PodSpec 的一个字段。
|
||||
它包含键值对的映射。为了使 pod 可以在某个节点上运行,该节点的标签中必须包含这里的每个键值对(它也可以具有其他标签)。最常见的用法的是一对键值对。
|
||||
|
||||
<!--
|
||||
Let's walk through an example of how to use `nodeSelector`.
|
||||
|
@ -63,38 +55,40 @@ Let's walk through an example of how to use `nodeSelector`.
|
|||
<!--
|
||||
This example assumes that you have a basic understanding of Kubernetes pods and that you have [set up a Kubernetes cluster](/docs/setup/).
|
||||
-->
|
||||
|
||||
本示例假设你已基本了解 Kubernetes 的 pod 并且已经[建立一个 Kubernetes 集群](/docs/setup/)。
|
||||
本示例假设你已基本了解 Kubernetes 的 Pod 并且已经[建立一个 Kubernetes 集群](/zh/docs/setup/)。
|
||||
|
||||
<!--
|
||||
### Step One: Attach label to the node
|
||||
-->
|
||||
|
||||
### 步骤一:添加标签到节点
|
||||
### 步骤一:添加标签到节点 {#attach-labels-to-node}
|
||||
|
||||

Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a
label to, and then run `kubectl label nodes <node-name> <label-key>=<label-value>` to add a label to the
node you've chosen. For example, if your node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and
the desired label is 'disktype=ssd', you can run
`kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.

You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node
now has the label. You can also use `kubectl describe node "nodename"` to see the full list of labels on
the given node.

### Step Two: Add a nodeSelector field to your Pod configuration

Take whatever Pod config file you want to run, and add a `nodeSelector` section to it, like this.
For example, if this is your Pod config:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
```
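
Then add a nodeSelector like so. This sketch assumes the `disktype=ssd` label attached in step one; it
matches the published example file referenced just below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    # only schedule onto nodes carrying the label added in step one
    disktype: ssd
```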

When you then run `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`, the Pod will get
scheduled onto the node that you attached the label to. You can verify that it worked by running
`kubectl get pods -o wide` and looking at the "NODE" that the Pod was assigned to.

## Interlude: built-in node labels {#built-in-node-labels}

In addition to labels you [attach](#attach-labels-to-node), nodes come pre-populated with a standard set
of labels. These labels are

* [`kubernetes.io/hostname`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname)
* [`failure-domain.beta.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone)
* [`failure-domain.beta.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion)
* [`topology.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
* [`topology.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion)
* [`beta.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type)
* [`node.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
* [`kubernetes.io/os`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os)
* [`kubernetes.io/arch`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch)
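
Because these are ordinary node labels, they can be used anywhere a node label selector is accepted.
As a minimal sketch (the Pod name and the zone value `us-east-1a` are hypothetical, for illustration
only), a Pod could be pinned to a topology zone via `nodeSelector`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  nodeSelector:
    # built-in label, populated by the kubelet / cloud provider
    topology.kubernetes.io/zone: us-east-1a   # assumed example value
```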
{{< note >}}
The value of these labels is cloud provider specific and is not guaranteed to be reliable. For example,
the value of `kubernetes.io/hostname` may be the same as the node name in some environments and a
different value in other environments.
{{< /note >}}

### Node isolation/restriction

Adding labels to Node objects allows targeting Pods to specific nodes or groups of nodes. The
`NodeRestriction` admission plugin prevents kubelets from setting or modifying labels with a
`node-restriction.kubernetes.io/` prefix. To make use of that label prefix for node isolation:

1. Check that you are using Kubernetes v1.11+ so that the NodeRestriction feature is available.
2. Ensure you are using the [Node authorizer](/zh/docs/reference/access-authn-authz/node/) and have
   _enabled_ the [NodeRestriction admission plugin](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction).
3. Add labels under the `node-restriction.kubernetes.io/` prefix to your Node objects, and use those
   labels in your node selectors, as in the sketch below. For example,
   `example.com.node-restriction.kubernetes.io/fips=true` or
   `example.com.node-restriction.kubernetes.io/pci-dss=true`.
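
As a minimal sketch of step 3 (the Pod name is hypothetical; the label is the example prefix above), a
Pod that must land on a FIPS-enabled node could use:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fips-only-pod   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  nodeSelector:
    # label added to Nodes under the NodeRestriction-protected prefix (step 3)
    example.com.node-restriction.kubernetes.io/fips: "true"
```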

You can see the operator `In` being used in the example. The new node affinity syntax supports the
following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`. You can use `NotIn` and
`DoesNotExist` to achieve node anti-affinity behavior, or use
[node taints](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/) to repel Pods from specific
nodes.
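
For instance, here is a minimal sketch (the Pod name, label key, and value are assumptions for
illustration) of using `NotIn` to express anti-affinity through the affinity syntax, keeping a Pod off
nodes whose `disktype` label is `hdd`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anti-affinity-example   # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # schedule only onto nodes whose disktype label is NOT hdd
          - key: disktype
            operator: NotIn
            values:
            - hdd
  containers:
  - name: app
    image: nginx
```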

If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the Pod to be
scheduled onto a candidate node.

## Inter-pod affinity and anti-affinity

Inter-pod affinity and anti-affinity allow you to constrain which nodes your Pod is eligible to be
scheduled on *based on labels on Pods that are already running on the node*, rather than based on labels
on nodes. The rules are of the form "this Pod should (or, in the case of anti-affinity, should not) run
on an X if that X is already running one or more Pods that meet rule Y". Y is expressed as a
LabelSelector with an optional associated list of namespaces; unlike nodes, because Pods are namespaced
(and therefore the labels on Pods are implicitly namespaced), a label selector over Pod labels must
specify which namespaces the selector should apply to. Conceptually X is a topology domain such as a
node, rack, cloud provider zone, or cloud provider region. You express it using a `topologyKey`, which is
the key for the node label that the system uses to denote such a topology domain; see the label keys
listed above in the section [Interlude: built-in node labels](#built-in-node-labels).
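
As a rough sketch of this rule format (the Pod name and the `security=S1` label are assumed examples), a
Pod that should be co-located in the same zone as Pods labelled `security=S1` would carry:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity   # hypothetical name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      # Y: a label selector over Pods (optionally namespaced)
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        # X: the topology domain, here a cloud provider zone
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: nginx
```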

{{< note >}}
Inter-pod affinity and anti-affinity require a substantial amount of processing which can slow down
scheduling in large clusters significantly. We do not recommend using them in clusters larger than
several hundred nodes.
{{< /note >}}

As you can see, all the 3 replicas of the `web-server` are automatically co-located with the cache as
expected.

```
kubectl get pods -o wide
```

The output is similar to this:

```
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE
redis-cache-1450370735-6dzlj   1/1     Running   0          8m    10.192.4.2   kube-node-3
redis-cache-1450370735-j2j96   1/1     Running   0          8m    10.192.2.2   kube-node-1
redis-cache-1450370735-z73mh   1/1     Running   0          8m    10.192.3.1   kube-node-2
web-server-1287567482-5d4dz    1/1     Running   0          7m    10.192.2.3   kube-node-1
web-server-1287567482-6f7v5    1/1     Running   0          7m    10.192.4.3   kube-node-3
web-server-1287567482-s330j    1/1     Running   0          7m    10.192.3.2   kube-node-2
```

The above example uses the `PodAntiAffinity` rule with `topologyKey: "kubernetes.io/hostname"` to deploy
the redis cluster so that no two instances are located on the same host. See the
[ZooKeeper tutorial](/zh/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) for an
example of a StatefulSet configured with anti-affinity for high availability, using the same technique.
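
The anti-affinity portion of such a workload's Pod template looks roughly like this (a sketch, assuming
the workload's Pods are labelled `app=store`):

```yaml
# excerpt from a Deployment's Pod template spec
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        # do not co-schedule with other Pods labelled app=store
        - key: app
          operator: In
          values:
          - store
      # at most one such Pod per hostname, i.e. per node
      topologyKey: "kubernetes.io/hostname"
```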

## nodeName

`nodeName` is the simplest form of node selection constraint, but due to its limitations it is typically
not used. `nodeName` is a field of PodSpec. If it is non-empty, the scheduler ignores the Pod and the
kubelet running on the named node tries to run the Pod. Thus, if `nodeName` is provided in the PodSpec,
it takes precedence over the above methods for node selection.

Some of the limitations of using `nodeName` to select nodes are:

- If the named node does not exist, the Pod will not run, and in some cases it may be automatically
  deleted.
- If the named node does not have the resources to accommodate the Pod, the Pod will fail, and its
  reason will indicate why, for example OutOfmemory or OutOfcpu.
- Node names in cloud environments are not always predictable or stable.

Here is an example of a Pod config file using the `nodeName` field:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: kube-01
```

The above Pod will run on the node kube-01.

## {{% heading "whatsnext" %}}

[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods.

The design documents for node affinity and for inter-pod affinity/anti-affinity contain extra background
information about these features.

Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-local resources. The
[topology manager](/zh/docs/tasks/administer-cluster/topology-manager/) can take part in node-level
resource allocation decisions.

---
reviewers:
- bsalamat
title: Scheduler Performance Tuning
content_type: concept
weight: 70
---

<!-- overview -->

[kube-scheduler](/zh/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler) is the Kubernetes
default scheduler. It is responsible for placing Pods onto Nodes in a cluster.

Nodes in a cluster that meet the scheduling requirements of a Pod are called _feasible_ Nodes for the
Pod. The scheduler finds feasible Nodes for a Pod and then runs a set of functions to score the feasible
Nodes, picking the Node with the highest score among the feasible ones to run the Pod. The scheduler then
notifies the API server about this decision, in a process called _Binding_.

This page explains performance tuning optimizations that are relevant for large Kubernetes clusters.

<!-- body -->

In large clusters, you can tune the scheduler's behaviour to balance scheduling outcomes between latency
(new Pods are placed quickly) and accuracy (the scheduler rarely makes poor placement decisions).

You configure this tuning setting via the kube-scheduler setting `percentageOfNodesToScore`. This
KubeSchedulerConfiguration setting determines a threshold for scheduling nodes in your cluster.
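
A minimal sketch of a kube-scheduler configuration file that sets this threshold, passed to the scheduler
via its `--config` flag (the value 50 is an arbitrary example; the `algorithmSource` stanza reflects the
`kubescheduler.config.k8s.io/v1alpha1` schema assumed here):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider

# score at most 50% of the cluster's nodes in each scheduling cycle
percentageOfNodesToScore: 50
```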

### Setting the threshold

To specify how many nodes are enough, you can use a whole-number percentage of all the nodes in your
cluster as the threshold. The kube-scheduler converts it into an integer number of nodes. During
scheduling, if the kube-scheduler has identified enough feasible nodes to exceed the configured
percentage, it stops searching for more feasible nodes and moves on to the
[scoring phase](/zh/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation).
For example, in a 500-node cluster with `percentageOfNodesToScore` set to 30, the scheduler stops
searching once it has found 150 feasible nodes.

The section [How the scheduler iterates over Nodes](#how-the-scheduler-iterates-over-nodes) below
describes the process in detail.

### How the scheduler iterates over Nodes {#how-the-scheduler-iterates-over-nodes}

This section is intended for those who want to understand the internal details of this feature.