Merge pull request #41585 from my-git9/path-4458

[zh-cn] sync service user-namespaces controller-manager-leader-migration
pull/41609/head
Kubernetes Prow Robot 2023-06-11 16:55:47 -07:00 committed by GitHub
commit b9bb302548
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
3 changed files with 125 additions and 74 deletions


@@ -2000,7 +2000,7 @@ for that Service.
 When you define a Service, you can specify `externalIPs` for any
 [service type](#publishing-services-service-types).
 In the example below, the Service named `"my-service"` can be accessed by clients using TCP,
-on `"198.51.100.32:80"` (calculated from `.spec.externalIP` and `.spec.port`).
+on `"198.51.100.32:80"` (calculated from `.spec.externalIPs[]` and `.spec.ports[].port`).
 -->
 ### 外部 IP {#external-ips}
@@ -2011,7 +2011,7 @@ on `"198.51.100.32:80"` (calculated from `.spec.externalIP` and `.spec.port`).
 定义 Service 时,你可以为任何[服务类型](#publishing-services-service-types)指定 `externalIPs`。
 在下面的例子中,名为 `my-service` 的服务可以在 "`198.51.100.32:80`"
-(从 .spec.externalIP 和 .spec.port 计算)上被客户端使用 TCP 协议访问。
+(从 `.spec.externalIPs[]` 和 `.spec.ports[].port` 计算)上被客户端使用 TCP 协议访问。
 ```yaml
 apiVersion: v1


@@ -90,7 +90,7 @@ to use this feature with Kubernetes stateless pods:
 * CRI-O: version 1.25 (and later) supports user namespaces for containers.
 Please note that containerd v1.7 supports user namespaces for containers,
-compatible with Kubernetes {{< skew currentVersion >}}. It should not be used
+compatible with Kubernetes {{< skew currentPatchVersion >}}. It should not be used
 with Kubernetes 1.27 (and later).
 Support for this in [cri-dockerd is not planned][CRI-dockerd-issue] yet.
@@ -101,7 +101,7 @@ Support for this in [cri-dockerd is not planned][CRI-dockerd-issue] yet.
 * CRI-O:1.25(及更高)版本支持配置容器的用户命名空间。
-请注意:containerd v1.7 支持配置容器的用户命名空间,与 Kubernetes {{< skew currentVersion >}}
+请注意:containerd v1.7 支持配置容器的用户命名空间,与 Kubernetes {{< skew currentPatchVersion >}}
 兼容。它不应与 Kubernetes 1.27(及更高)版本一起使用。
 目前 [cri-dockerd 没有计划][CRI-dockerd-issue]支持此功能。


@@ -9,7 +9,7 @@ weight: 250
 reviewers:
 - jpbetz
 - cheftako
-title: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
+title: Migrate Replicated Control Plane To Use Cloud Controller Manager
 linkTitle: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
 content_type: task
 weight: 250
@@ -24,11 +24,16 @@ weight: 250
 As part of the [cloud provider extraction effort](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/),
 all cloud specific controllers must be moved out of the `kube-controller-manager`.
-All existing clusters that run cloud controllers in the `kube-controller-manager` must migrate to instead run the controllers in a cloud provider specific `cloud-controller-manager`.
+All existing clusters that run cloud controllers in the `kube-controller-manager`
+must migrate to instead run the controllers in a cloud provider specific
+`cloud-controller-manager`.
-Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud specific" controllers between
-the `kube-controller-manager` and the `cloud-controller-manager` via a shared resource lock between the two components while upgrading the replicated control plane.
-For a single-node control plane, or if unavailability of controller managers can be tolerated during the upgrade, Leader Migration is not needed and this guide can be ignored.
+Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud
+specific" controllers between the `kube-controller-manager` and the
+`cloud-controller-manager` via a shared resource lock between the two components
+while upgrading the replicated control plane. For a single-node control plane, or if
+unavailability of controller managers can be tolerated during the upgrade, Leader
+Migration is not needed and this guide can be ignored.
 -->
 ## 背景
@@ -44,12 +49,16 @@ For a single-node control plane, or if unavailability of controller managers can
 亦可以忽略本指南。
 <!--
-Leader Migration can be enabled by setting `--enable-leader-migration` on `kube-controller-manager` or `cloud-controller-manager`.
-Leader Migration only applies during the upgrade and can be safely disabled or left enabled after the upgrade is complete.
+Leader Migration can be enabled by setting `--enable-leader-migration` on
+`kube-controller-manager` or `cloud-controller-manager`. Leader Migration only
+applies during the upgrade and can be safely disabled or left enabled after the
+upgrade is complete.
-This guide walks you through the manual process of upgrading the control plane from `kube-controller-manager` with
-built-in cloud provider to running both `kube-controller-manager` and `cloud-controller-manager`.
-If you use a tool to deploy and manage the cluster, please refer to the documentation of the tool and the cloud provider for specific instructions of the migration.
+This guide walks you through the manual process of upgrading the control plane from
+`kube-controller-manager` with built-in cloud provider to running both
+`kube-controller-manager` and `cloud-controller-manager`. If you use a tool to deploy
+and manage the cluster, please refer to the documentation of the tool and the cloud
+provider for specific instructions of the migration.
 -->
 领导者迁移可以通过在 `kube-controller-manager` 或 `cloud-controller-manager` 上设置
 `--enable-leader-migration` 来启用。
@@ -62,12 +71,18 @@ If you use a tool to deploy and manage the cluster, please refer to the document
 ## {{% heading "prerequisites" %}}
 <!--
-It is assumed that the control plane is running Kubernetes version N and to be upgraded to version N + 1.
-Although it is possible to migrate within the same version, ideally the migration should be performed as part of an upgrade so that changes of configuration can be aligned to each release.
-The exact versions of N and N + 1 depend on each cloud provider. For example, if a cloud provider builds a `cloud-controller-manager` to work with Kubernetes 1.24, then N can be 1.23 and N + 1 can be 1.24.
+It is assumed that the control plane is running Kubernetes version N and to be
+upgraded to version N + 1. Although it is possible to migrate within the same
+version, ideally the migration should be performed as part of an upgrade so that
+changes of configuration can be aligned to each release. The exact versions of N and
+N + 1 depend on each cloud provider. For example, if a cloud provider builds a
+`cloud-controller-manager` to work with Kubernetes 1.24, then N can be 1.23 and N + 1
+can be 1.24.
-The control plane nodes should run `kube-controller-manager` with Leader Election enabled, which is the default.
-As of version N, an in-tree cloud provider must be set with `--cloud-provider` flag and `cloud-controller-manager` should not yet be deployed.
+The control plane nodes should run `kube-controller-manager` with Leader Election
+enabled, which is the default. As of version N, an in-tree cloud provider must be set
+with the `--cloud-provider` flag and `cloud-controller-manager` should not yet be
+deployed.
 -->
 假定控制平面正在运行 Kubernetes 版本 N,要升级到版本 N+1。
 尽管可以在同一版本内进行迁移,但理想情况下,迁移应作为升级的一部分执行,
@@ -80,17 +95,22 @@ N 和 N+1 的确切版本值取决于各个云厂商。例如,如果云厂商
 应该尚未部署。
 <!--
-The out-of-tree cloud provider must have built a `cloud-controller-manager` with Leader Migration implementation.
-If the cloud provider imports `k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later, Leader Migration will be available.
-However, for version before v0.22.0, Leader Migration is alpha and requires feature gate `ControllerManagerLeaderMigration` to be enabled in `cloud-controller-manager`.
+The out-of-tree cloud provider must have built a `cloud-controller-manager` with
+Leader Migration implementation. If the cloud provider imports
+`k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later,
+Leader Migration will be available. However, for versions before v0.22.0, Leader
+Migration is alpha and requires feature gate `ControllerManagerLeaderMigration` to be
+enabled in `cloud-controller-manager`.
-This guide assumes that kubelet of each control plane node starts `kube-controller-manager`
-and `cloud-controller-manager` as static pods defined by their manifests.
-If the components run in a different setting, please adjust the steps accordingly.
+This guide assumes that kubelet of each control plane node starts
+`kube-controller-manager` and `cloud-controller-manager` as static pods defined by
+their manifests. If the components run in a different setting, please adjust the
+steps accordingly.
-For authorization, this guide assumes that the cluster uses RBAC.
-If another authorization mode grants permissions to `kube-controller-manager` and `cloud-controller-manager` components,
-please grant the needed access in a way that matches the mode.
+For authorization, this guide assumes that the cluster uses RBAC. If another
+authorization mode grants permissions to `kube-controller-manager` and
+`cloud-controller-manager` components, please grant the needed access in a way that
+matches the mode.
 -->
 树外云驱动必须已经构建了一个实现了领导者迁移的 `cloud-controller-manager`。
 如果云驱动导入了 v0.21.0 或更高版本的 `k8s.io/cloud-provider` 和 `k8s.io/controller-manager`,
@@ -110,12 +130,12 @@ please grant the needed access in a way that matches the mode.
 <!--
 ### Grant access to Migration Lease
-The default permissions of the controller manager allow only accesses to their main Lease.
-In order for the migration to work, accesses to another Lease are required.
+The default permissions of the controller manager allow only accesses to their main
+Lease. In order for the migration to work, accesses to another Lease are required.
 You can grant `kube-controller-manager` full access to the leases API by modifying
-the `system::leader-locking-kube-controller-manager` role.
-This task guide assumes that the name of the migration lease is `cloud-provider-extraction-migration`.
+the `system::leader-locking-kube-controller-manager` role. This task guide assumes
+that the name of the migration lease is `cloud-provider-extraction-migration`.
 `kubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
@@ -132,18 +152,26 @@ Do the same to the `system::leader-locking-cloud-controller-manager` role.
 `kube-controller-manager` 对 Lease API 的完全访问权限。
 本任务指南假定迁移 Lease 的名称为 `cloud-provider-extraction-migration`。
-`kubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
+```shell
+kubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge
+```
 对 `system::leader-locking-cloud-controller-manager` 角色执行相同的操作。
-`kubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
+```shell
+kubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge
+```
 <!--
 ### Initial Leader Migration configuration
-Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. At this moment, with in-tree cloud provider, `kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`. The following example configuration shows the assignment.
+Leader Migration optionally takes a configuration file representing the state of
+controller-to-manager assignment. At this moment, with in-tree cloud provider,
+`kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`. The
+following example configuration shows the assignment.
-Leader Migration can be enabled without a configuration. Please see [Default Configuration](#default-configuration) for details.
+Leader Migration can be enabled without a configuration. Please see
+[Default Configuration](#default-configuration) for details.
 -->
 ### 初始领导者迁移配置
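The example configuration file itself is elided by the diff context above. As a sketch of what such a file looks like — the `controllermanager.config.k8s.io/v1` API version is an assumption and may differ across releases:

```yaml
# Sketch of a Leader Migration configuration file with all migrated
# controllers still assigned to kube-controller-manager (version N state).
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1  # assumed; check your release
leaderName: cloud-provider-extraction-migration
controllerLeaders:
  - name: route
    component: kube-controller-manager
  - name: service
    component: kube-controller-manager
  - name: cloud-node-lifecycle
    component: kube-controller-manager
```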
@@ -168,8 +196,9 @@ controllerLeaders:
 ```
 <!--
-Alternatively, because the controllers can run under either controller managers, setting `component` to `*`
-for both sides makes the configuration file consistent between both parties of the migration.
+Alternatively, because the controllers can run under either controller manager,
+setting `component` to `*` for both sides makes the configuration file consistent
+between both parties of the migration.
 -->
 或者,由于控制器可以在任一控制器管理器下运行,因此将双方的 `component` 设置为 `*`
 可以使迁移双方的配置文件保持一致。
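A sketch of the wildcard variant described above — the same `controllerLeaders` list, usable verbatim by both controller managers:

```yaml
# Sketch: wildcard assignment, identical file for both sides of the migration.
controllerLeaders:
  - name: route
    component: "*"
  - name: service
    component: "*"
  - name: cloud-node-lifecycle
    component: "*"
```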
@@ -189,14 +218,17 @@ controllerLeaders:
 ```
 <!--
-On each control plane node, save the content to `/etc/leadermigration.conf`,
-and update the manifest of `kube-controller-manager` so that the file is mounted inside the container at the same location.
-Also, update the same manifest to add the following arguments:
+On each control plane node, save the content to `/etc/leadermigration.conf`, and
+update the manifest of `kube-controller-manager` so that the file is mounted inside
+the container at the same location. Also, update the same manifest to add the
+following arguments:
 - `--enable-leader-migration` to enable Leader Migration on the controller manager
 - `--leader-migration-config=/etc/leadermigration.conf` to set the configuration file
-Restart `kube-controller-manager` on each node. At this moment, `kube-controller-manager` has leader migration enabled and is ready for the migration.
+Restart `kube-controller-manager` on each node. At this moment,
+`kube-controller-manager` has leader migration enabled and is ready for the
+migration.
 -->
 在每个控制平面节点上,请将如上内容保存到 `/etc/leadermigration.conf` 中,
 并更新 `kube-controller-manager` 清单,以便将文件挂载到容器内的同一位置。
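As an illustration of the mount and flags described above, the relevant pieces of a `kube-controller-manager` static Pod manifest might look roughly like this. The image tag and the manifest path are assumptions; merge these fragments into your existing manifest rather than replacing it:

```yaml
# Hypothetical excerpt of /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: registry.k8s.io/kube-controller-manager:v1.27.2  # assumed tag
    command:
    - kube-controller-manager
    # ...existing flags kept as-is...
    - --enable-leader-migration
    - --leader-migration-config=/etc/leadermigration.conf
    volumeMounts:
    - name: leadermigration
      mountPath: /etc/leadermigration.conf
      readOnly: true
  volumes:
  - name: leadermigration
    hostPath:
      path: /etc/leadermigration.conf
      type: File
```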
@@ -211,9 +243,11 @@ Restart `kube-controller-manager` on each node. At this moment, `kube-controller
 <!--
 ### Deploy Cloud Controller Manager
-In version N + 1, the desired state of controller-to-manager assignment can be represented by a new configuration file, shown as follows.
-Please note `component` field of each `controllerLeaders` changing from `kube-controller-manager` to `cloud-controller-manager`.
-Alternatively, use the wildcard version mentioned above, which has the same effect.
+In version N + 1, the desired state of controller-to-manager assignment can be
+represented by a new configuration file, shown as follows. Please note that the
+`component` field of each `controllerLeaders` entry changes from
+`kube-controller-manager` to `cloud-controller-manager`. Alternatively, use the
+wildcard version mentioned above, which has the same effect.
 -->
 ### 部署云控制器管理器
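The version N + 1 configuration file described above is also elided by the diff context; under the same assumed API version, it would look roughly like:

```yaml
# Sketch: all migrated controllers reassigned to cloud-controller-manager.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1  # assumed; check your release
leaderName: cloud-provider-extraction-migration
controllerLeaders:
  - name: route
    component: cloud-controller-manager
  - name: service
    component: cloud-controller-manager
  - name: cloud-node-lifecycle
    component: cloud-controller-manager
```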
@@ -236,15 +270,19 @@ controllerLeaders:
 ```
 <!--
-When creating control plane nodes of version N + 1, the content should be deployed to `/etc/leadermigration.conf`.
-The manifest of `cloud-controller-manager` should be updated to mount the configuration file in
-the same manner as `kube-controller-manager` of version N. Similarly, add `--enable-leader-migration`
-and `--leader-migration-config=/etc/leadermigration.conf` to the arguments of `cloud-controller-manager`.
+When creating control plane nodes of version N + 1, the content should be deployed to
+`/etc/leadermigration.conf`. The manifest of `cloud-controller-manager` should be
+updated to mount the configuration file in the same manner as
+`kube-controller-manager` of version N. Similarly, add `--enable-leader-migration`
+and `--leader-migration-config=/etc/leadermigration.conf` to the arguments of
+`cloud-controller-manager`.
-Create a new control plane node of version N + 1 with the updated `cloud-controller-manager` manifest,
-and with the `--cloud-provider` flag set to `external` for `kube-controller-manager`.
-`kube-controller-manager` of version N + 1 MUST NOT have Leader Migration enabled because,
-with an external cloud provider, it does not run the migrated controllers anymore, and thus it is not involved in the migration.
+Create a new control plane node of version N + 1 with the updated
+`cloud-controller-manager` manifest, and with the `--cloud-provider` flag set to
+`external` for `kube-controller-manager`. `kube-controller-manager` of version N + 1
+MUST NOT have Leader Migration enabled because, with an external cloud provider, it
+does not run the migrated controllers anymore, and thus it is not involved in the
+migration.
 Please refer to [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/)
 for more detail on how to deploy `cloud-controller-manager`.
@@ -267,15 +305,19 @@ for more detail on how to deploy `cloud-controller-manager`.
 <!--
 ### Upgrade Control Plane
-The control plane now contains nodes of both version N and N + 1.
-The nodes of version N run `kube-controller-manager` only,
-and these of version N + 1 run both `kube-controller-manager` and `cloud-controller-manager`.
-The migrated controllers, as specified in the configuration, are running under either `kube-controller-manager` of
-version N or `cloud-controller-manager` of version N + 1 depending on which controller manager holds the migration lease.
-No controller will ever be running under both controller managers at any time.
+The control plane now contains nodes of both version N and N + 1. The nodes of
+version N run `kube-controller-manager` only, and those of version N + 1 run both
+`kube-controller-manager` and `cloud-controller-manager`. The migrated controllers,
+as specified in the configuration, are running under either `kube-controller-manager`
+of version N or `cloud-controller-manager` of version N + 1 depending on which
+controller manager holds the migration lease. No controller will ever be running
+under both controller managers at any time.
-In a rolling manner, create a new control plane node of version N + 1 and bring down one of version N until the control plane contains only nodes of version N + 1.
-If a rollback from version N + 1 to N is required, add nodes of version N with Leader Migration enabled for `kube-controller-manager` back to the control plane, replacing one of version N + 1 each time until there are only nodes of version N.
+In a rolling manner, create a new control plane node of version N + 1 and bring down
+one of version N until the control plane contains only nodes of version N + 1.
+If a rollback from version N + 1 to N is required, add nodes of version N with Leader
+Migration enabled for `kube-controller-manager` back to the control plane, replacing
+one of version N + 1 each time until there are only nodes of version N.
 -->
 ### 升级控制平面
@@ -294,14 +336,16 @@ N+1 的 `cloud-controller-manager` 下运行,具体取决于哪个控制器管
 <!--
 ### (Optional) Disable Leader Migration {#disable-leader-migration}
-Now that the control plane has been upgraded to run both `kube-controller-manager` and `cloud-controller-manager` of version N + 1,
-Leader Migration has finished its job and can be safely disabled to save one Lease resource.
-It is safe to re-enable Leader Migration for the rollback in the future.
+Now that the control plane has been upgraded to run both `kube-controller-manager`
+and `cloud-controller-manager` of version N + 1, Leader Migration has finished its
+job and can be safely disabled to save one Lease resource. It is safe to re-enable
+Leader Migration for the rollback in the future.
 In a rolling manner, update the manifest of `cloud-controller-manager` to unset both
-`--enable-leader-migration` and `--leader-migration-config=` flag,
-also remove the mount of `/etc/leadermigration.conf`, and finally remove `/etc/leadermigration.conf`.
-To re-enable Leader Migration, recreate the configuration file and add its mount and the flags that enable Leader Migration back to `cloud-controller-manager`.
+`--enable-leader-migration` and `--leader-migration-config=` flags, also remove the
+mount of `/etc/leadermigration.conf`, and finally remove `/etc/leadermigration.conf`.
+To re-enable Leader Migration, recreate the configuration file and add its mount and
+the flags that enable Leader Migration back to `cloud-controller-manager`.
 -->
 ### (可选)禁用领导者迁移 {#disable-leader-migration}
@@ -318,10 +362,14 @@ Lease 资源。在将来可以安全地重新启用领导者迁移,以完成
 <!--
 ### Default Configuration
-Starting Kubernetes 1.22, Leader Migration provides a default configuration suitable for the default controller-to-manager assignment.
-The default configuration can be enabled by setting `--enable-leader-migration` but without `--leader-migration-config=`.
+Starting with Kubernetes 1.22, Leader Migration provides a default configuration
+suitable for the default controller-to-manager assignment.
+The default configuration can be enabled by setting `--enable-leader-migration` but
+without `--leader-migration-config=`.
-For `kube-controller-manager` and `cloud-controller-manager`, if there are no flags that enable any in-tree cloud provider or change ownership of controllers, the default configuration can be used to avoid manual creation of the configuration file.
+For `kube-controller-manager` and `cloud-controller-manager`, if there are no flags
+that enable any in-tree cloud provider or change ownership of controllers, the
+default configuration can be used to avoid manual creation of the configuration file.
 -->
 ### 默认配置 {#default-configuration}
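Concretely, enabling the default configuration just means passing the enable flag with no config path. A sketch of the relevant piece of the manifest's command list (surrounding manifest omitted):

```yaml
# Sketch: default Leader Migration configuration; no config file is mounted
# and no --leader-migration-config= flag is passed.
command:
- cloud-controller-manager
- --enable-leader-migration
```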
@@ -335,9 +383,11 @@ For `kube-controller-manager` and `cloud-controller-manager`, if there are no fl
 <!--
 ### Special case: migrating the Node IPAM controller {#node-ipam-controller-migration}
-If your cloud provider provides an implementation of Node IPAM controller, you should switch to the implementation in `cloud-controller-manager`.
-Disable Node IPAM controller in `kube-controller-manager` of version N + 1 by adding `--controllers=*,-nodeipam` to its flags.
-Then add `nodeipam` to the list of migrated controllers.
+If your cloud provider provides an implementation of Node IPAM controller, you should
+switch to the implementation in `cloud-controller-manager`. Disable Node IPAM
+controller in `kube-controller-manager` of version N + 1 by adding
+`--controllers=*,-nodeipam` to its flags. Then add `nodeipam` to the list of migrated
+controllers.
 -->
 ### 特殊情况:迁移节点 IPAM 控制器 {#node-ipam-controller-migration}
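Putting the special case above together, the list of migrated controllers extended with `nodeipam` might look like this (wildcard form, as in the earlier examples; `kube-controller-manager` of version N + 1 additionally gets `--controllers=*,-nodeipam`):

```yaml
# Sketch: controllerLeaders extended with the nodeipam controller.
controllerLeaders:
  - name: route
    component: "*"
  - name: service
    component: "*"
  - name: cloud-node-lifecycle
    component: "*"
  - name: nodeipam
    component: "*"
```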
@@ -363,7 +413,8 @@ controllerLeaders:
 ## {{% heading "whatsnext" %}}
 <!--
-- Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration) enhancement proposal.
+- Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration)
+  enhancement proposal.
 -->
 - 阅读[领导者迁移控制器管理器](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration)
   改进建议提案。