[zh] Sync four files in tools/kubeadm/

parent 35023844e4
commit f8fbaac121
@@ -57,8 +57,7 @@ as Ansible or Terraform.
To follow this guide, you need:

- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
-- 2 GiB or more of RAM per machine--any less leaves little room for your
-  apps.
+- 2 GiB or more of RAM per machine--any less leaves little room for your apps.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster. You can use either a
  public or a private network.

@@ -92,7 +91,7 @@ The `kubeadm` tool's overall feature state is General Availability (GA). Some su
still under active development. The implementation of creating the cluster may change
slightly as the tool evolves, but the overall implementation should be pretty stable.
-->
-`kubeadm` 工具的整体功能状态为一般可用性(GA)。一些子功能仍在积极开发中。
+`kubeadm` 工具的整体特性状态为正式发布(GA)。一些子特性仍在积极开发中。
随着工具的发展,创建集群的实现可能会略有变化,但总体实现应相当稳定。

{{< note >}}

@@ -132,8 +131,9 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
#### 安装组件 {#component-installation}

<!--
-Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
-For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
+Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}}
+and kubeadm on all the hosts. For detailed instructions and other prerequisites, see
+[Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
-->
在所有主机上安装{{< glossary_tooltip term_id="container-runtime" text="容器运行时" >}}和 kubeadm。
详细说明和其他前提条件,请参见[安装 kubeadm](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)。

@@ -141,7 +141,8 @@ For detailed instructions and other prerequisites, see [Installing kubeadm](/doc
{{< note >}}
<!--
If you have already installed kubeadm, see the first two steps of the
-[Upgrading Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes) document for instructions on how to upgrade kubeadm.
+[Upgrading Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes)
+document for instructions on how to upgrade kubeadm.

When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
kubeadm to tell it what to do. This crashloop is expected and normal.

@@ -243,9 +244,8 @@ certificate files is reflected. See
for more details on this topic.
-->
你分配给控制平面组件的 IP 地址将成为其 X.509 证书的使用者备用名称字段的一部分。
-更改这些 IP 地址将需要签署新的证书并重启受影响的组件,
-以便反映证书文件中的变化。有关此主题的更多细节参见
-[手动续期证书](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)。
+更改这些 IP 地址将需要签署新的证书并重启受影响的组件,以便反映证书文件中的变化。
+有关此主题的更多细节参见[手动续期证书](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)。
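
As a hedged illustration (not part of the synced page): the SANs baked into the kubeadm-generated API server certificate can be inspected with OpenSSL. The certificate path below is the kubeadm default; adjust it if your PKI directory differs.

```bash
# Sketch: list the Subject Alternative Names in the API server certificate.
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
```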

{{< warning >}}
<!--
@@ -311,13 +311,13 @@ communicates with).

<!--
1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster
-to [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/)
-you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
-Such an endpoint can be either a DNS name or an IP address of a load-balancer.
+   to [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/)
+   you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
+   Such an endpoint can be either a DNS name or an IP address of a load-balancer.
1. Choose a Pod network add-on, and verify whether it requires any arguments to
-be passed to `kubeadm init`. Depending on which
-third-party provider you choose, you might need to set the `--pod-network-cidr` to
-a provider-specific value. See [Installing a Pod network add-on](#pod-network).
+   be passed to `kubeadm init`. Depending on which
+   third-party provider you choose, you might need to set the `--pod-network-cidr` to
+   a provider-specific value. See [Installing a Pod network add-on](#pod-network).
-->
1. (推荐)如果计划将单个控制平面 kubeadm 集群升级成[高可用](/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/),
   你应该指定 `--control-plane-endpoint` 为所有控制平面节点设置共享端点。
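
A minimal sketch of such an invocation, assuming a load balancer reachable at `cluster-endpoint:6443` and a Pod CIDR of `10.244.0.0/16` (both values are illustrative, not taken from this page):

```bash
sudo kubeadm init \
  --control-plane-endpoint "cluster-endpoint:6443" \
  --pod-network-cidr "10.244.0.0/16"
```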

@@ -328,9 +328,9 @@ a provider-specific value. See [Installing a Pod network add-on](#pod-network).

<!--
1. (Optional) `kubeadm` tries to detect the container runtime by using a list of well
-known endpoints. To use different container runtime or if there are more than one installed
-on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See
-[Installing a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
+   known endpoints. To use a different container runtime or if there is more than one installed
+   on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See
+   [Installing a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
-->
3. (可选)`kubeadm` 试图通过使用已知的端点列表来检测容器运行时。
   使用不同的容器运行时或在预配置的节点上安装了多个容器运行时,请为 `kubeadm init` 指定 `--cri-socket` 参数。
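
For example, a hedged sketch assuming containerd's default socket path:

```bash
sudo kubeadm init --cri-socket unix:///var/run/containerd/containerd.sock
```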

@@ -351,7 +351,7 @@ kubeadm init <args>
### 关于 apiserver-advertise-address 和 ControlPlaneEndpoint 的注意事项 {#considerations-about-apiserver-advertise-address-and-controlplaneendpoint}

<!--
-While `--apiserver-advertise-address` can be used to set the advertise address for this particular
+While `--apiserver-advertise-address` can be used to set the advertised address for this particular
control-plane node's API server, `--control-plane-endpoint` can be used to set the shared endpoint
for all control-plane nodes.
-->

@@ -377,7 +377,7 @@ Here is an example mapping:
<!--
Where `192.168.0.102` is the IP address of this node and `cluster-endpoint` is a custom DNS name that maps to this IP.
This will allow you to pass `--control-plane-endpoint=cluster-endpoint` to `kubeadm init` and pass the same DNS name to
-`kubeadm join`. Later you can modify `cluster-endpoint` to point to the address of your load-balancer in an
+`kubeadm join`. Later you can modify `cluster-endpoint` to point to the address of your load-balancer in a
high availability scenario.
-->
其中 `192.168.0.102` 是此节点的 IP 地址,`cluster-endpoint` 是映射到该 IP 的自定义 DNS 名称。
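
The mapping described above would look like this in `/etc/hosts` (the IP is the example value from the surrounding text):

```
192.168.0.102 cluster-endpoint
```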

@@ -678,7 +678,8 @@ The `node-role.kubernetes.io/control-plane` label is such a restricted label and
a privileged client after a node has been created. To do that manually you can do the same by using `kubectl label`
and ensure it is using a privileged kubeconfig such as the kubeadm managed `/etc/kubernetes/admin.conf`.
-->
-默认情况下,kubeadm 启用 [NodeRestriction](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
+默认情况下,kubeadm 启用
+[NodeRestriction](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
准入控制器来限制 kubelet 在节点注册时可以应用哪些标签。准入控制器文档描述 kubelet `--node-labels` 选项允许使用哪些标签。
其中 `node-role.kubernetes.io/control-plane` 标签就是这样一个受限制的标签,
kubeadm 在节点创建后使用特权客户端手动应用此标签。
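
A hedged sketch of that manual step; the node name is a placeholder:

```bash
# Apply the restricted label using a privileged kubeconfig.
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  label nodes <your-node-name> node-role.kubernetes.io/control-plane=
```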

@@ -737,8 +738,8 @@ kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancer
<!--
### Adding more control plane nodes

-See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/) for steps on creating a high availability kubeadm cluster by adding more control plane
-nodes.
+See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
+for steps on creating a high availability kubeadm cluster by adding more control plane nodes.

### Adding worker nodes {#join-nodes}

@@ -750,7 +751,7 @@ the `kubeadm join` command:
* [Adding Linux worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/)
* [Adding Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)
-->
-### 添加更多控制平面节点
+### 添加更多控制平面节点 {#adding-more-control-plane-nodes}

请参阅[使用 kubeadm 创建高可用性集群](/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/),
了解通过添加更多控制平面节点创建高可用性 kubeadm 集群的步骤。

@@ -821,7 +822,7 @@ admin.conf 文件为用户提供了对集群的超级用户特权。
### (可选)将 API 服务器代理到本地主机 {#optional-proxying-api-server-to-localhost}

<!--
-If you want to connect to the API Server from outside the cluster you can use
+If you want to connect to the API Server from outside the cluster, you can use
`kubectl proxy`:
-->
如果你要从集群外部连接到 API 服务器,则可以使用 `kubectl proxy`:
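
A sketch of that workflow; the control-plane host name is a placeholder:

```bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
# The API server is then reachable locally, e.g. at http://localhost:8001/api/v1
```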

@@ -887,7 +888,8 @@ kubeadm reset
```

<!--
-The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
+The reset process does not reset or clean up iptables rules or IPVS tables.
+If you wish to reset iptables, you must do so manually:
-->
重置过程不会重置或清除 iptables 规则或 IPVS 表。如果你希望重置 iptables,则必须手动进行:
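
One possible manual flush (a sketch; adapt it to your own firewall setup):

```bash
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```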

@@ -903,6 +905,7 @@ If you want to reset the IPVS tables, you must run the following command:
```bash
ipvsadm -C
```

<!--
Now remove the node:

@@ -996,6 +999,7 @@ the same version as kubeadm or three version older.

<!--
Example:

* kubeadm is at {{< skew currentVersion >}}
* kubelet on the host must be at {{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}},
  {{< skew currentVersionAddMinor -2 >}} or {{< skew currentVersionAddMinor -3 >}}
@@ -1047,11 +1051,13 @@ MINOR 版本或比后者新一个 MINOR 版本。

<!--
Example for `kubeadm upgrade`:

* kubeadm version {{< skew currentVersionAddMinor -1 >}} was used to create or upgrade the node
* The version of kubeadm used for upgrading the node must be at {{< skew currentVersionAddMinor -1 >}}
-or {{< skew currentVersion >}}
+  or {{< skew currentVersion >}}
-->
`kubeadm upgrade` 的例子:

* 用于创建或升级节点的 kubeadm 版本为 {{< skew currentVersionAddMinor -1 >}}。
* 用于升级节点的 kubeadm 版本必须为 {{< skew currentVersionAddMinor -1 >}} 或 {{< skew currentVersion >}}。

@@ -1096,8 +1102,8 @@ Workarounds:
[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) to pick a cluster
topology that provides [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/).
-->
-* 使用多个控制平面节点。你可以阅读
-[可选的高可用性拓扑](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/)选择集群拓扑提供的
+* 使用多个控制平面节点。
+  你可以阅读[可选的高可用性拓扑](/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/)选择集群拓扑提供的
  [高可用性](/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/)。

<!--

@@ -1107,8 +1113,7 @@ Workarounds:

<!--
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
-following the [multi-platform
-proposal](https://git.k8s.io/design-proposals-archive/multi-platform.md).
+following the [multi-platform proposal](https://git.k8s.io/design-proposals-archive/multi-platform.md).
-->
kubeadm deb/rpm 软件包和二进制文件是为 amd64、arm (32-bit)、arm64、ppc64le 和 s390x
构建的,遵循[多平台提案](https://git.k8s.io/design-proposals-archive/multi-platform.md)。

@@ -1141,9 +1146,9 @@ If you are running into difficulties with kubeadm, please consult our
<!-- discussion -->

<!--
-## What's next {#whats-next}
+## {{% heading "whatsnext" %}}
-->
-## 下一步 {#whats-next}
+## {{% heading "whatsnext" %}}

<!--
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)

@@ -37,7 +37,8 @@ You should carefully consider the advantages and disadvantages of each topology

{{< note >}}
<!--
-kubeadm bootstraps the etcd cluster statically. Read the etcd [Clustering Guide](https://github.com/etcd-io/etcd/blob/release-3.4/Documentation/op-guide/clustering.md#static)
+kubeadm bootstraps the etcd cluster statically. Read the etcd
+[Clustering Guide](https://github.com/etcd-io/etcd/blob/release-3.4/Documentation/op-guide/clustering.md#static)
for more details.
-->
kubeadm 静态引导 etcd 集群。
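
A hedged sketch of what static bootstrapping means in etcd terms: every member is listed up front. The member names and peer URLs below are assumptions, not values from this page.

```bash
# Static bootstrapping: the full member list is passed at startup.
etcd --name infra0 \
  --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --listen-peer-urls http://10.0.1.10:2380 \
  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
  --initial-cluster-state new
```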

@@ -111,13 +112,19 @@ on control plane nodes when using `kubeadm init` and `kubeadm join --control-pla
## 外部 etcd 拓扑 {#external-etcd-topology}

<!--
-An HA cluster with external etcd is a [topology](https://en.wikipedia.org/wiki/Network_topology) where the distributed data storage cluster provided by etcd is external to the cluster formed by the nodes that run control plane components.
+An HA cluster with external etcd is a [topology](https://en.wikipedia.org/wiki/Network_topology)
+where the distributed data storage cluster provided by etcd is external to the cluster formed by
+the nodes that run control plane components.
-->
具有外部 etcd 的 HA 集群是一种这样的[拓扑](https://zh.wikipedia.org/wiki/%E7%BD%91%E7%BB%9C%E6%8B%93%E6%89%91),
其中 etcd 分布式数据存储集群在独立于控制平面节点的其他节点上运行。

<!--
-Like the stacked etcd topology, each control plane node in an external etcd topology runs an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`. And the `kube-apiserver` is exposed to worker nodes using a load balancer. However, etcd members run on separate hosts, and each etcd host communicates with the `kube-apiserver` of each control plane node.
+Like the stacked etcd topology, each control plane node in an external etcd topology runs
+an instance of the `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`.
+And the `kube-apiserver` is exposed to worker nodes using a load balancer. However,
+etcd members run on separate hosts, and each etcd host communicates with the
+`kube-apiserver` of each control plane node.
-->
就像堆叠的 etcd 拓扑一样,外部 etcd 拓扑中的每个控制平面节点都会运行
`kube-apiserver`、`kube-scheduler` 和 `kube-controller-manager` 实例。
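
A hedged configuration sketch for pointing kubeadm at such an external etcd cluster; the endpoints and client-certificate paths are assumptions, not values from this page:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://10.0.1.10:2379
      - https://10.0.1.11:2379
      - https://10.0.1.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```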

@@ -134,7 +141,8 @@ the cluster redundancy as much as the stacked HA topology.

<!--
However, this topology requires twice the number of hosts as the stacked HA topology.
-A minimum of three hosts for control plane nodes and three hosts for etcd nodes are required for an HA cluster with this topology.
+A minimum of three hosts for control plane nodes and three hosts for etcd nodes are
+required for an HA cluster with this topology.
-->
但此拓扑需要两倍于堆叠 HA 拓扑的主机数量。
具有此拓扑的 HA 集群至少需要三个用于控制平面节点的主机和三个用于 etcd 节点的主机。

@@ -30,7 +30,8 @@ cluster using kubeadm:

<!--
Before proceeding, you should carefully consider which approach best meets the needs of your applications
-and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each.
+and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/)
+outlines the advantages and disadvantages of each.

If you encounter issues with setting up the HA cluster, please report these
in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
@@ -145,6 +146,7 @@ You need:
<!-- end of shared prerequisites -->
<!--
And you also need:

- Three or more additional machines, that will become etcd cluster members.
  Having an odd number of members in the etcd cluster is a requirement for achieving
  optimal voting quorum.
@@ -152,6 +154,7 @@ And you also need:
- These machines also require a container runtime, that is already set up and working.
-->
还需要准备:

- 给 etcd 集群使用的另外至少三台机器。为了分布式一致性算法达到更好的投票效果,集群必须由奇数个节点组成。
- 机器上已经安装 `kubeadm` 和 `kubelet`。
- 机器上同样需要安装好容器运行时,并能正常运行。

@@ -164,17 +167,23 @@ _See [External etcd topology](/docs/setup/production-environment/tools/kubeadm/h
{{% /tab %}}
{{< /tabs >}}

-<!-- ### Container images -->
-### 容器镜像
+<!--
+### Container images
+-->
+### 容器镜像 {#container-images}

<!--
-Each host should have access read and fetch images from the Kubernetes container image registry, `registry.k8s.io`.
-If you want to deploy a highly-available cluster where the hosts do not have access to pull images, this is possible. You must ensure by some other means that the correct container images are already available on the relevant hosts.
+Each host should have access to read and fetch images from the Kubernetes container image registry,
+`registry.k8s.io`. If you want to deploy a highly-available cluster where the hosts do not have
+access to pull images, this is possible. You must ensure by some other means that the correct
+container images are already available on the relevant hosts.
-->
每台主机需要能够从 Kubernetes 容器镜像仓库(`registry.k8s.io`)读取和拉取镜像。
想要在无法拉取 Kubernetes 仓库镜像的机器上部署高可用集群也是可行的。通过其他的手段保证主机上已经有对应的容器镜像即可。
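
A hedged sketch of pre-staging those images, using kubeadm's own image subcommands:

```bash
# On a machine with registry access: list and pull the images kubeadm needs,
# then transfer them to the restricted hosts by your own means.
kubeadm config images list
kubeadm config images pull
```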

-<!-- ### Command line interface {#kubectl} -->
+<!--
+### Command line interface {#kubectl}
+-->
### 命令行 {#kubectl}

<!--

@@ -285,6 +294,7 @@ option. Your cluster requirements may need a different configuration.
```sh
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
```

- You can use the `--kubernetes-version` flag to set the Kubernetes version to use.
  It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.
- The `--control-plane-endpoint` flag should be set to the address or DNS and port of the load balancer.

@@ -351,11 +361,13 @@ option. Your cluster requirements may need a different configuration.
```

<!--
-- Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.
+- Copy this output to a text file. You will need it later to join control plane and worker nodes to
+  the cluster.
- When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane
-are encrypted and uploaded in the `kubeadm-certs` Secret.
-- To re-upload the certificates and generate a new decryption key, use the following command on a control plane
-node that is already joined to the cluster:
+  are encrypted and uploaded in the `kubeadm-certs` Secret.
+- To re-upload the certificates and generate a new decryption key, use the following command on a
+  control plane
+  node that is already joined to the cluster:
-->
- 将此输出复制到文本文件。稍后你将需要它来将控制平面节点和工作节点加入集群。
- 当使用 `--upload-certs` 调用 `kubeadm init` 时,主控制平面的证书被加密并上传到 `kubeadm-certs` Secret 中。
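
A hedged sketch of that re-upload command as it appears in current upstream kubeadm docs (verify against your kubeadm version):

```sh
# Run on a control plane node already joined to the cluster;
# prints a new certificate key for use with --certificate-key.
sudo kubeadm init phase upload-certs --upload-certs
```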

@@ -717,6 +729,23 @@ SSH is required if you want to control all nodes from a single machine.
在以下示例中,用其他控制平面节点的 IP 地址替换 `CONTROL_PLANE_IPS`。

+<!--
+```sh
+USER=ubuntu # customizable
+CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
+for host in ${CONTROL_PLANE_IPS}; do
+    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
+    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
+    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
+    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
+    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
+    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
+    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
+    # Skip the next line if you are using external etcd
+    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
+done
+```
+-->
```sh
USER=ubuntu # 可定制
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"

@@ -750,6 +779,21 @@ SSH is required if you want to control all nodes from a single machine.
5. 然后,在每个即将加入集群的控制平面节点上,你必须先运行以下脚本,然后再运行 `kubeadm join`。
   该脚本会将先前复制的证书从主目录移动到 `/etc/kubernetes/pki`:

+<!--
+```sh
+USER=ubuntu # customizable
+mkdir -p /etc/kubernetes/pki/etcd
+mv /home/${USER}/ca.crt /etc/kubernetes/pki/
+mv /home/${USER}/ca.key /etc/kubernetes/pki/
+mv /home/${USER}/sa.pub /etc/kubernetes/pki/
+mv /home/${USER}/sa.key /etc/kubernetes/pki/
+mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
+mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
+mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
+# Skip the next line if you are using external etcd
+mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
+```
+-->
```sh
USER=ubuntu # 可定制
mkdir -p /etc/kubernetes/pki/etcd
@@ -763,4 +807,3 @@ SSH is required if you want to control all nodes from a single machine.
# 如果你正使用外部 etcd,忽略下一行
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
```

@@ -68,7 +68,8 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too
The `kubeadm` installation is done via binaries that use dynamic linking and assumes that your target system provides `glibc`.
This is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.)
but it is not always the case with custom and lightweight distributions which don't include `glibc` by default, such as Alpine Linux.
-The expectation is that the distribution either includes `glibc` or a [compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs)
+The expectation is that the distribution either includes `glibc` or a
+[compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs)
that provides the expected symbols.
-->
`kubeadm` 的安装是通过使用动态链接的二进制文件完成的,安装时假设你的目标系统提供 `glibc`。
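
An illustrative check (not part of the synced page) that the dynamic linker can resolve kubeadm's dependencies on a target system:

```bash
# On a glibc system this lists resolved libraries; on musl-only systems
# it will report missing or unresolvable dependencies.
ldd "$(which kubeadm)"
```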

@@ -369,7 +370,8 @@ These instructions are for Kubernetes {{< skew currentVersion >}}.

{{< note >}}
<!--
-In releases older than Debian 12 and Ubuntu 22.04, folder `/etc/apt/keyrings` does not exist by default, and it should be created before the curl command.
+In releases older than Debian 12 and Ubuntu 22.04, directory `/etc/apt/keyrings` does not
+exist by default, and it should be created before the curl command.
-->
在低于 Debian 12 和 Ubuntu 22.04 的发行版本中,`/etc/apt/keyrings` 默认不存在。
应在 curl 命令之前创建它。
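
That preparatory step, as a sketch matching current upstream docs:

```shell
sudo mkdir -p -m 755 /etc/apt/keyrings
```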

@@ -387,6 +389,12 @@ In releases older than Debian 12 and Ubuntu 22.04, folder `/etc/apt/keyrings` do
对于其他 Kubernetes 次要版本,则需要更改 URL 中的 Kubernetes 次要版本以匹配你所需的次要版本
(你还应该检查正在阅读的安装文档是否为你计划安装的 Kubernetes 版本的文档)。

+<!--
+```shell
+# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
+echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
+```
+-->
```shell
# 此操作会覆盖 /etc/apt/sources.list.d/kubernetes.list 中现存的所有配置。
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list