[zh] Resync concepts section (10)

pull/27780/head
Qiming Teng 2021-04-28 17:21:11 +08:00
parent acf2e99652
commit 677bee3bd4
4 changed files with 80 additions and 49 deletions

View File

@@ -1,11 +1,11 @@
---
title: 云控制器管理器的基础概念
title: 云控制器管理器
content_type: concept
weight: 40
---
<!--
title: Concepts Underlying the Cloud Controller Manager
title: Cloud Controller Manager
content_type: concept
weight: 40
-->

View File

@@ -15,7 +15,7 @@ aliases:
<!-- overview -->
<!--
This document catalogs the communication paths between the control plane (really the apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider).
This document catalogs the communication paths between the control plane (apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider).
-->
本文列举控制面节点(确切说是 API 服务器)和 Kubernetes 集群之间的通信路径。
目的是为了让用户能够自定义他们的安装,以实现对网络配置的加固,使得集群能够在不可信的网络上
@@ -24,14 +24,15 @@ This document catalogs the communication paths between the control plane (really
<!-- body -->
<!--
## Node to Control Plane
Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminate at the apiserver (none of the other control plane components are designed to expose remote services). The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminate at the apiserver. None of the other control plane components are designed to expose remote services. The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed.
-->
## 节点到控制面
Kubernetes 采用的是中心辐射型Hub-and-SpokeAPI 模式。
所有从集群(或所运行的 Pods发出的 API 调用都终止于 apiserver其它控制面组件都没有被设计为可暴露远程服务。
apiserver 被配置为在一个安全的 HTTPS 端口443上监听远程连接请求
所有从节点(或所运行的 Pods发出的 API 调用都终止于 apiserver。
其它控制面组件都没有被设计为可暴露远程服务。
apiserver 被配置为在一个安全的 HTTPS 端口(通常为 443上监听远程连接请求
并启用一种或多种形式的客户端[身份认证](/zh/docs/reference/access-authn-authz/authentication/)机制。
一种或多种客户端[鉴权机制](/zh/docs/reference/access-authn-authz/authorization/)应该被启用,
特别是在允许使用[匿名请求](/zh/docs/reference/access-authn-authz/authentication/#anonymous-requests)
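下面用一个 shell 小示例说明上述安全端口的含义(仅作示意:其中的 apiserver 地址为假设值,实际地址与端口取决于你的集群;在匿名请求被禁止时,未附带凭据的请求应被拒绝):

```shell
# 仅作示意:向 apiserver 的安全端口发起一个不带客户端凭据的请求。
# APISERVER 为假设地址,请替换为你的集群的实际地址与端口。
APISERVER="https://127.0.0.1:6443"
# 禁用匿名请求时,未认证的调用通常得到 HTTP 401
# 启用匿名请求时,该调用被视为 system:anonymous通常得到 403
# 无法连接时 curl 会输出 000。
code=$(curl -k -s -o /dev/null -w '%{http_code}' "${APISERVER}/api" 2>/dev/null || true)
echo "HTTP 状态码:${code:-000}"
```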
@@ -84,7 +85,7 @@ The connections from the apiserver to the kubelet are used for:
* Attaching (through kubectl) to running pods.
* Providing the kubelet's port-forwarding functionality.
These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and **unsafe** to run over untrusted and/or public networks.
These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and **unsafe** to run over untrusted and/or public networks.
-->
### API 服务器到 kubelet
@@ -121,7 +122,6 @@ kubelet 之间使用 [SSH 隧道](#ssh-tunnels)。
The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials so while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted and/or public networks.
-->
### apiserver 到节点、Pod 和服务
从 apiserver 到节点、Pod 或服务的连接默认为纯 HTTP 方式,因此既没有认证,也没有加密。
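对于上面提到的在 API URL 中为名字添加 `https:` 前缀,可以用下面仅作示意的代理路径来对比两种形式;其中名字空间、服务名和端口都是假设值:

```shell
# 仅作示意apiserver 代理路径的两种形式;名字空间、服务名、端口均为假设值。
ns="default"; svc="my-service"; port="8443"
# 默认形式apiserver 通过纯 HTTP 连接后端,不认证也不加密
plain="/api/v1/namespaces/${ns}/services/${svc}:${port}/proxy/"
# 添加 https: 前缀:连接会被加密,但既不校验服务端证书,也不提供客户端凭据
secure="/api/v1/namespaces/${ns}/services/https:${svc}:${port}/proxy/"
echo "${plain}"
echo "${secure}"
```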
@@ -153,7 +153,7 @@ Konnectivity 服务是对此通信通道的替代品。
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server and the Konnectivity agents, running in the control plane network and the nodes network respectively. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections.
As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections.
After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections.
Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster.
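下面给出一个仅作示意的配置草图apiserver 一侧通过 `EgressSelectorConfiguration` 文件指向 Konnectivity 服务器(结构参考上述 Konnectivity 任务页面;其中的 UDS 套接字路径为假设值,实际配置以该任务页面为准):

```shell
# 示意:生成一个将控制面出站流量指向 Konnectivity 服务器的配置文件。
# 配置结构参考 Konnectivity 设置任务UDS 套接字路径为假设值。
cat <<'EOF' > /tmp/egress-selector-configuration.yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: GRPCS
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
EOF
# apiserver 随后通过 --egress-selector-config-file 标志加载该文件(示意)
echo "已写入 /tmp/egress-selector-configuration.yaml"
```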

View File

@@ -17,9 +17,10 @@ weight: 10
<!--
Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
A node may be a virtual or physical machine, depending on the cluster. Each node
contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}, managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
is managed by the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
and contains the services necessary to run
{{< glossary_tooltip text="Pods" term_id="pod" >}}.
Typically you have several nodes in a cluster; in a learning or resource-limited
environment, you might have just one.
@@ -31,8 +32,8 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
-->
Kubernetes 通过将容器放入在节点Node上运行的 Pod 中来执行你的工作负载。
节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。
每个节点包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务
这些 Pods 由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。
每个节点包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务
这些节点由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。
通常集群中会有若干个节点;而在一个学习用或者资源受限的环境中,你的集群中也可能
只有一个节点。
@@ -556,17 +557,6 @@ that the scheduler won't place Pods onto unhealthy nodes.
{{< glossary_tooltip text="污点" term_id="taint" >}}。
这意味着调度器不会将 Pod 调度到不健康的节点上。
<!--
`kubectl cordon` marks a node as 'unschedulable', which has the side effect of the service
controller removing the node from any LoadBalancer node target lists it was previously
eligible for, effectively removing incoming load balancer traffic from the cordoned node(s).
-->
{{< caution>}}
`kubectl cordon` 会将节点标记为“不可调度Unschedulable”。
此操作的副作用是,服务控制器会将该节点从它先前有资格使用的负载均衡器目标节点列表中移除,
从而使得来自负载均衡器的网络请求不会到达被隔离Cordoned的节点。
{{< /caution>}}
<!--
### Node capacity
@@ -625,31 +615,58 @@ for more information.
了解详细信息。
<!--
## Graceful Node Shutdown {#graceful-node-shutdown}
## Graceful node shutdown {#graceful-node-shutdown}
-->
## 节点体面关闭 {#graceful-node-shutdown}
{{< feature-state state="alpha" for_k8s_version="v1.20" >}}
{{< feature-state state="beta" for_k8s_version="v1.21" >}}
<!--
If you have enabled the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then the kubelet attempts to detect the node system shutdown and terminates pods running on the node.
The kubelet attempts to detect node system shutdown and terminates pods running on the node.
Kubelet ensures that pods follow the normal [pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) during the node shutdown.
-->
如果你启用了 `GracefulNodeShutdown` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
那么 kubelet 尝试检测节点的系统关闭事件并终止在节点上运行的 Pod。
kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的 Pods。
在节点终止期间kubelet 保证 Pod 遵从常规的 [Pod 终止流程](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。
<!--
When the `GracefulNodeShutdown` feature gate is enabled, kubelet uses [systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to delay the node shutdown with a given duration. During a shutdown kubelet terminates pods in two phases:
The graceful node shutdown feature depends on systemd since it takes advantage of
[systemd inhibitor locks](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) to
delay the node shutdown with a given duration.
-->
当启用了 `GracefulNodeShutdown` 特性门控时,
kubelet 使用 [systemd 抑制器锁](https://www.freedesktop.org/wiki/Software/systemd/inhibit/)
在给定的期限内延迟节点关闭。在关闭过程中kubelet 分两个阶段终止 Pod
体面节点关闭特性依赖于 systemd因为它要利用
[systemd 抑制器锁](https://www.freedesktop.org/wiki/Software/systemd/inhibit/)
在给定的期限内延迟节点关闭。
<!--
Graceful node shutdown is controlled with the `GracefulNodeShutdown`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) which is
enabled by default in 1.21.
-->
体面节点关闭特性受 `GracefulNodeShutdown`
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
控制,在 1.21 版本中是默认启用的。
<!--
Note that by default, both configuration options described below,
`ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` are set to zero,
thus not activating Graceful node shutdown functionality.
To activate the feature, the two kubelet config settings should be configured appropriately and set to non-zero values.
-->
注意,默认情况下,下面描述的两个配置选项,`ShutdownGracePeriod` 和
`ShutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活
体面节点关闭功能。
要激活此功能特性,这两个 kubelet 配置选项要适当配置,并设置为非零值。
<!--
During a graceful shutdown, kubelet terminates pods in two phases:
1. Terminate regular pods running on the node.
2. Terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) running on the node.
-->
在体面关闭节点过程中kubelet 分两个阶段来终止 Pods
1. 终止在节点上运行的常规 Pod。
2. 终止在节点上运行的[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
@@ -658,9 +675,11 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/
* `ShutdownGracePeriod`:
* Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
* `ShutdownGracePeriodCriticalPods`:
* Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This should be less than `ShutdownGracePeriod`.
* Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`.
-->
节点体面关闭的特性对应两个 [`KubeletConfiguration`](/zh/docs/tasks/administer-cluster/kubelet-config-file/) 选项:
节点体面关闭的特性对应两个
[`KubeletConfiguration`](/zh/docs/tasks/administer-cluster/kubelet-config-file/) 选项:
* `ShutdownGracePeriod`
* 指定节点应延迟关闭的总持续时间。此时间是 Pod 体面终止的时间总和,不区分常规 Pod 还是
[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
@@ -670,10 +689,16 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/
的持续时间。该值应小于 `ShutdownGracePeriod`
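下面是一个仅作示意的 kubelet 配置文件片段,将这两个选项设置为非零值以启用节点体面关闭(字段名采用 `KubeletConfiguration` API 中的小驼峰形式,取值为假设值):

```shell
# 示意:通过 kubelet 配置文件设置节点体面关闭的两个选项(取值为假设值)。
cat <<'EOF' > /tmp/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 节点整体延迟关闭的总时长
shutdownGracePeriod: 30s
# 其中保留给关键 Pod 的时长(应小于 shutdownGracePeriod
shutdownGracePeriodCriticalPods: 10s
EOF
echo "已写入 /tmp/kubelet-config.yaml"
```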
<!--
For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by 30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved for gracefully terminating normal pods, and the last 10 seconds would be reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
For example, if `ShutdownGracePeriod=30s`, and
`ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
30 seconds. During the shutdown, the first 20 (30-10) seconds would be reserved
for gracefully terminating normal pods, and the last 10 seconds would be
reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
-->
例如,如果设置了 `ShutdownGracePeriod=30s``ShutdownGracePeriodCriticalPods=10s`,则 kubelet 将延迟 30 秒关闭节点。
在关闭期间,将保留前 2030 - 10秒用于体面终止常规 Pod而保留最后 10 秒用于终止
例如,如果设置了 `ShutdownGracePeriod=30s``ShutdownGracePeriodCriticalPods=10s`
则 kubelet 将延迟 30 秒关闭节点。
在关闭期间,将保留前 2030 - 10秒用于体面终止常规 Pod
而保留最后 10 秒用于终止
[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
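上例中的时间划分可以用简单的算术来核对:

```shell
# 对应上文示例:总宽限期 30 秒,其中最后 10 秒保留给关键 Pod。
shutdown_grace_period=30
shutdown_grace_period_critical_pods=10
regular=$((shutdown_grace_period - shutdown_grace_period_critical_pods))
echo "常规 Pod 可用 ${regular} 秒,关键 Pod 可用 ${shutdown_grace_period_critical_pods} 秒"
```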
## {{% heading "whatsnext" %}}
@@ -685,8 +710,10 @@ For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods=
section of the architecture design document.
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
-->
* 了解有关节点[组件](/zh/docs/concepts/overview/components/#node-components)
* 阅读[节点的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)
* 阅读架构设计文档中有关[节点](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)的章节
* 了解[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)
* 了解有关节点[组件](/zh/docs/concepts/overview/components/#node-components)。
* 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。
* 阅读架构设计文档中有关
[节点](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
的章节。
* 了解[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。

View File

@@ -52,18 +52,21 @@ The control plane's components make global decisions about the cluster (for exam
-->
## 控制平面组件Control Plane Components {#control-plane-components}
控制平面的组件对集群做出全局决策(比如调度),以及检测和响应集群事件(例如,当不满足部署的 `replicas` 字段时,启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。
控制平面的组件对集群做出全局决策(比如调度),以及检测和响应集群事件(例如,当不满足部署的
`replicas` 字段时,启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。
<!--
Control plane components can be run on any machine in the cluster. However,
for simplicity, set up scripts typically start all control plane components on
the same machine, and do not run user containers on this machine. See
[Building High-Availability Clusters](/docs/admin/high-availability/) for an example multi-master-VM setup.
[Creating Highly Available clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
for an example control plane setup that runs across multiple VMs.
-->
控制平面组件可以在集群中的任何节点上运行。
然而,为了简单起见,设置脚本通常会在同一个计算机上启动所有控制平面组件,并且不会在此计算机上运行用户容器。
请参阅[构建高可用性集群](/zh/docs/setup/production-environment/tools/kubeadm/high-availability/)
中对于多主机 VM 的设置示例。
然而,为了简单起见,设置脚本通常会在同一个计算机上启动所有控制平面组件,
并且不会在此计算机上运行用户容器。
请参阅[使用 kubeadm 构建高可用性集群](/zh/docs/setup/production-environment/tools/kubeadm/high-availability/)
中关于多 VM 控制平面设置的示例。
### kube-apiserver
@@ -203,7 +206,8 @@ Kubernetes 启动的容器自动将此 DNS 服务器包含在其 DNS 搜索列
-->
### Web 界面(仪表盘)
[Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/) 是 Kubernetes 集群的通用的、基于 Web 的用户界面。
[Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/)
是 Kubernetes 集群的通用的、基于 Web 的用户界面。
它使用户可以管理集群中运行的应用程序以及集群本身并进行故障排除。
<!--