Merge pull request #44252 from katcosgrove/merged-main-dev-1.29

Merge main into dev 1.29
pull/44286/head
Kubernetes Prow Robot 2023-12-07 20:51:51 +01:00 committed by GitHub
commit 9b007ed000
72 changed files with 777 additions and 644 deletions


@ -12,7 +12,6 @@ aliases:
sig-docs-localization-owners: # Admins for localization content
- a-mccarthy
- divya-mohan0209
- jimangel
- kbhawkey
- natalisucks
- onlydole
@ -22,15 +21,11 @@ aliases:
- tengqm
sig-docs-de-owners: # Admins for German content
- bene2k1
- mkorbi
- rlenferink
sig-docs-de-reviews: # PR reviews for German content
- bene2k1
- mkorbi
- rlenferink
sig-docs-en-owners: # Admins for English content
- annajung
- bradtopol
- divya-mohan0209
- katcosgrove # RT 1.29 Docs Lead
- kbhawkey
@ -112,18 +107,15 @@ aliases:
- atoato88
- bells17
- kakts
- ptux
- t-inu
sig-docs-ko-owners: # Admins for Korean content
- gochist
- ianychoi
- jihoon-seo
- seokho-son
- yoonian
- ysyukr
sig-docs-ko-reviews: # PR reviews for Korean content
- gochist
- ianychoi
- jihoon-seo
- jmyung
- jongwooo
@ -153,7 +145,6 @@ aliases:
sig-docs-zh-reviews: # PR reviews for Chinese content
- asa3311
- chenrui333
- chenxuc
- howieyuen
# idealhack
- kinzhi
@ -208,7 +199,6 @@ aliases:
- Arhell
- idvoretskyi
- MaxymVlasov
- Potapy4
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- bentheelder


@ -3,7 +3,7 @@
[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Welcome! This repository contains all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're very glad that you want to contribute!
## Contributing to the docs
@ -37,8 +37,6 @@
> If you are on Windows, you will need a few more tools, which you can install with [Chocolatey](https://chocolatey.org).
> If you would prefer to run the website locally without Docker, see running the site locally using Hugo below.
If you would prefer to run the website locally without Docker, see how to [run the site locally](#hugo-का-उपयोग-करते-हुए-स्थानीय-रूप-से-साइट-चलाना) using Hugo below.
If you are running [Docker](https://www.docker.com/get-started), build the `kubernetes-hugo` Docker image locally:


@ -47,12 +47,12 @@ To download Kubernetes, visit the [download](/releases/download/) section.
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 6-9, 2023</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 12-15, 2024</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>


@ -100,10 +100,12 @@ community-owned repositories (`pkgs.k8s.io`).
## Can I continue to use the legacy package repositories?
~~The existing packages in the legacy repositories will be available for the foreseeable
future. However, the Kubernetes project can't provide _any_ guarantees on how long
is that going to be. The deprecated legacy repositories, and their contents, might
be removed at any time in the future and without a further notice period.~~
**UPDATE**: The legacy packages are expected to go away in January 2024.
The Kubernetes project **strongly recommends** migrating to the new community-owned
repositories **as soon as possible**.

File diff suppressed because one or more lines are too long

(image file changed; 23 KiB before and after)


@ -271,6 +271,7 @@ fail validation.
<li><code>net.ipv4.ip_unprivileged_port_start</code></li>
<li><code>net.ipv4.tcp_syncookies</code></li>
<li><code>net.ipv4.ping_group_range</code></li>
<li><code>net.ipv4.ip_local_reserved_ports</code> (since Kubernetes 1.27)</li>
</ul>
</td>
</tr>


@ -185,7 +185,7 @@ and the volume is considered "released". But it is not yet available for
another claim because the previous claimant's data remains on the volume.
An administrator can manually reclaim the volume with the following steps.
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
still exists after the PV is deleted.
1. Manually clean up the data on the associated storage asset accordingly.
1. Manually delete the associated storage asset.
@ -273,7 +273,7 @@ Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk
@ -298,7 +298,7 @@ Access Modes: RWO
VolumeMode: Filesystem
Capacity: 200Mi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
@ -664,7 +664,7 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
{{< /note >}}
> __Important!__ A volume can only be mounted using one access mode at a time,
> even if it supports many.
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
| :--- | :---: | :---: | :---: | - |
@ -699,7 +699,7 @@ Current reclaim policies are:
* Retain -- manual reclamation
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
* Delete -- associated storage asset
* Delete -- delete the volume
For Kubernetes {{< skew currentVersion >}}, only `nfs` and `hostPath` volume types support recycling.
@ -731,7 +731,7 @@ it will become fully deprecated in a future Kubernetes release.
### Node Affinity
{{< note >}}
For most volume types, you do not need to set this field.
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
{{< /note >}}
@ -1161,7 +1161,7 @@ users should be aware of:
When the `CrossNamespaceVolumeDataSource` feature is enabled, there are additional differences:
* The `dataSource` field only allows local objects, while the `dataSourceRef` field allows
objects in any namespace.
* When namespace is specified, `dataSource` and `dataSourceRef` are not synced.
Users should always use `dataSourceRef` on clusters that have the feature gate enabled, and


@ -320,8 +320,7 @@ The following container runtimes work with Windows:
You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+
as the container runtime for Kubernetes nodes that run Windows.
Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd).
Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd).
{{< note >}}
There is a [known limitation](/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations)
when using GMSA with containerd to access Windows network shares, which requires a


@ -47,8 +47,8 @@ pull request (PR) to the
[`kubernetes/website` GitHub repository](https://github.com/kubernetes/website).
You need to be comfortable with
[git](https://git-scm.com/) and
[GitHub](https://lab.github.com/)
to work effectively in the Kubernetes community.
[GitHub](https://skills.github.com/)
to work effectively in the Kubernetes community.
To get involved with documentation:


@ -350,7 +350,7 @@ kubectl logs my-pod # dump pod logs (stdout)
kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container
kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout)
kubectl logs -l name=myLabel -c my-container # dump pod container logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
kubectl logs -f my-pod # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)


@ -89,7 +89,7 @@ After you initialize your control-plane, the kubelet runs normally.
#### Network setup
kubeadm, similarly to other Kubernetes components, tries to find a usable IP on
the network interface associated with the default gateway on a host. Such
the network interfaces associated with a default gateway on a host. Such
an IP is then used for the advertising and/or listening performed by a component.
To find out what this IP is on a Linux host you can use:
@ -98,10 +98,22 @@ To find out what this IP is on a Linux host you can use:
ip route show # Look for a line starting with "default via"
```
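As a sketch, the interface and source address on the default route can also be pulled out of that output with `awk`. The route line below is a made-up sample (field positions can vary between distributions), so on a real host you would pipe `ip route show` in instead:

```shell
# Parse a sample "ip route show" default line; on a real host, replace the
# sample with the actual command output.
sample='default via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.42 metric 100'
iface=$(printf '%s\n' "$sample" | awk '/^default via/ {for (i=1; i<=NF; i++) if ($i == "dev") print $(i+1)}')
addr=$(printf '%s\n' "$sample" | awk '/^default via/ {for (i=1; i<=NF; i++) if ($i == "src") print $(i+1)}')
echo "interface: $iface, address: $addr"
```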
{{< note >}}
If two or more default gateways are present on the host, a Kubernetes component will
try to use the first one it encounters that has a suitable global unicast IP address.
While making this choice, the exact ordering of gateways might vary between different
operating systems and kernel versions.
{{< /note >}}
Kubernetes components do not accept a custom network interface as an option;
therefore, a custom IP address must be passed as a flag to all component instances
that need such a custom configuration.
{{< note >}}
If the host does not have a default gateway and if a custom IP address is not passed
to a Kubernetes component, the component may exit with an error.
{{< /note >}}
To configure the API server advertise address for control plane nodes created with both
`init` and `join`, the flag `--apiserver-advertise-address` can be used.
Preferably, this option can be set in the [kubeadm API](/docs/reference/config-api/kubeadm-config.v1beta3)
@ -114,13 +126,12 @@ For kubelets on all nodes, the `--node-ip` option can be passed in
For dual-stack see
[Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support).
{{< note >}}
IP addresses become part of certificates SAN fields. Changing these IP addresses would require
The IP addresses that you assign to control plane components become part of their X.509 certificates'
subject alternative name fields. Changing these IP addresses would require
signing new certificates and restarting the affected components, so that the change in
certificate files is reflected. See
[Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)
for more details on this topic.
{{</ note >}}
{{< warning >}}
The Kubernetes project recommends against this approach (configuring all component instances
@ -132,15 +143,6 @@ is a public IP address, you should configure packet filtering or other security
protect the nodes and your cluster.
{{< /warning >}}
{{< note >}}
If the host does not have a default gateway, it is recommended to setup one. Otherwise,
without passing a custom IP address to a Kubernetes component, the component
will exit with an error. If two or more default gateways are present on the host,
a Kubernetes component will try to use the first one it encounters that has a suitable
global unicast IP address. While making this choice, the exact ordering of gateways
might vary between different operating systems and kernel versions.
{{< /note >}}
### Preparing the required container images
This step is optional and only applies in case you wish `kubeadm init` and `kubeadm join`


@ -73,7 +73,8 @@ If you see the following warnings while running `kubeadm init`
[preflight] WARNING: ethtool not found in system path
```
Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands:
Then you may be missing `ebtables`, `ethtool` or a similar executable on your node.
You can install them with the following commands:
- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
- For CentOS/Fedora users, run `yum install ebtables ethtool`.
@ -90,9 +91,9 @@ This may be caused by a number of problems. The most common are:
- network connection problems. Check that your machine has full network connectivity before continuing.
- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
configure it properly, see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
and investigating each container by running `docker logs`. For other container runtime see
and investigating each container by running `docker logs`. For other container runtimes, see
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).
## kubeadm blocks when removing managed containers
@ -144,10 +145,12 @@ provider. Please contact the author of the Pod Network add-on to find out whethe
Calico, Canal, and Flannel CNI providers are verified to support HostPort.
For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
For more information, see the
[CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
services](/docs/concepts/services-networking/service/#type-nodeport) or use `HostNetwork=true`.
If your network provider does not support the portmap CNI plugin, you may need to use the
[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport)
or use `HostNetwork=true`.
## Pods are not accessible via their Service IP
@ -157,9 +160,10 @@ services](/docs/concepts/services-networking/service/#type-nodeport) or use `Hos
add-on provider to get the latest status of their support for hairpin mode.
- If you are using VirtualBox (directly or via Vagrant), you will need to
ensure that `hostname -i` returns a routable IP address. By default the first
ensure that `hostname -i` returns a routable IP address. By default, the first
interface is connected to a non-routable host-only network. A workaround
is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
is to modify `/etc/hosts`, see this
[Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
for an example.
## TLS certificate errors
@ -175,6 +179,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
regenerate a certificate if necessary. The certificates in a kubeconfig file
are base64 encoded. The `base64 --decode` command can be used to decode the certificate
and `openssl x509 -text -noout` can be used for viewing the certificate information.
- Unset the `KUBECONFIG` environment variable using:
```sh
@ -190,7 +195,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:
```sh
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
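The decode step described above can be sketched as follows. The round trip below uses a sample PEM header line instead of a real certificate, and the kubeconfig pipeline in the comment is only illustrative of where the base64 data typically lives:

```shell
# kubeconfig certificate fields are base64 encoded; decode them before
# inspecting them. Round-trip a sample string to show the mechanics.
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
# On a real cluster, something along these lines (illustrative):
#   grep 'client-certificate-data' "$HOME/.kube/config" | awk '{print $2}' \
#     | base64 --decode | openssl x509 -text -noout
```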
@ -198,7 +203,8 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
## Kubelet client certificate rotation fails {#kubelet-client-cert}
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the `/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the
`/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid`
in kube-apiserver logs. To fix the issue you must follow these steps:
@ -231,24 +237,34 @@ The following error might indicate that something was wrong in the pod network:
Error from server (NotFound): the server could not find the requested resource
```
- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.
- If you're using flannel as the pod network inside Vagrant, then you will have to
specify the default interface name for flannel.
Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts
are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen.
This may lead to problems with flannel, which defaults to the first interface on a host.
This leads to all hosts thinking they have the same public IP address. To prevent this,
pass the `--iface eth1` flag to flannel so that the second interface is chosen.
## Non-public IP used for containers
In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:
In some situations `kubectl logs` and `kubectl run` commands may return with the
following errors in an otherwise functional cluster:
```console
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
```
- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.
- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one.
- This may be due to Kubernetes using an IP that can not communicate with other IPs on
the seemingly same subnet, possibly by policy of the machine provider.
- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally
as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's
`InternalIP` instead of the public one.
Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively an API endpoint specific to DigitalOcean allows to query for the anchor IP from the droplet:
Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will
not display the offending alias IP address. Alternatively an API endpoint specific to
DigitalOcean allows you to query for the anchor IP from the droplet:
```sh
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
@ -270,12 +286,13 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
## `coredns` pods have `CrashLoopBackOff` or `Error` state
If you have nodes that are running SELinux with an older version of Docker you might experience a scenario
where the `coredns` pods are not starting. To solve that you can try one of the following options:
If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario
where the `coredns` pods are not starting. To solve that, you can try one of the following options:
- Upgrade to a [newer version of Docker](/docs/setup/production-environment/container-runtimes/#docker).
- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:
```bash
@ -284,7 +301,8 @@ kubectl -n kube-system get deployment coredns -o yaml | \
kubectl apply -f -
```
Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop. [A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.
[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
{{< warning >}}
@ -300,7 +318,7 @@ If you encounter the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\""
```
this issue appears if you run CentOS 7 with Docker 1.13.1.84.
This issue appears if you run CentOS 7 with Docker 1.13.1.84.
This version of Docker can prevent the kubelet from executing into the etcd container.
To work around the issue, choose one of these options:
@ -344,6 +362,7 @@ to pick up the node's IP address properly and has knock-on effects to the proxy
load balancers.
The following error can be seen in kube-proxy Pods:
```
server.go:610] Failed to retrieve node IP: host IP unknown; known addresses: []
proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
@ -352,8 +371,26 @@ proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane
nodes regardless of their conditions, keeping it off of other nodes until their initial guarding
conditions abate:
```
kubectl -n kube-system patch ds kube-proxy -p='{ "spec": { "template": { "spec": { "tolerations": [ { "key": "CriticalAddonsOnly", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/control-plane" } ] } } } }'
kubectl -n kube-system patch ds kube-proxy -p='{
"spec": {
"template": {
"spec": {
"tolerations": [
{
"key": "CriticalAddonsOnly",
"operator": "Exists"
},
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/control-plane"
}
]
}
}
}
}'
```
The tracking issue for this problem is [here](https://github.com/kubernetes/kubeadm/issues/1027).
@ -365,12 +402,15 @@ For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/c
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_
for the feature to work.
(**Note**: FlexVolume was deprecated in the Kubernetes v1.23 release)
To workaround this issue you can configure the flex-volume directory using the kubeadm
{{< note >}}
FlexVolume was deprecated in the Kubernetes v1.23 release.
{{< /note >}}
To work around this issue, you can configure the flex-volume directory using the kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/).
On the primary control-plane Node (created using `kubeadm init`) pass the following
On the primary control-plane Node (created using `kubeadm init`), pass the following
file using `--config`:
```yaml
@ -402,7 +442,10 @@ be advised that this is modifying a design principle of the Linux distribution.
## `kubeadm upgrade plan` prints out `context deadline exceeded` error message
This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in the case of running an external etcd. This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`.
This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in
the case of running an external etcd. This is not a critical bug and happens because
older versions of kubeadm perform a version check on the external etcd cluster.
You can proceed with `kubeadm upgrade apply ...`.
This issue is fixed as of version 1.19.
@ -422,6 +465,7 @@ can be used insecurely by passing the `--kubelet-insecure-tls` to it. This is no
If you want to use TLS between the metrics-server and the kubelet there is a problem,
since kubeadm deploys a self-signed serving certificate for the kubelet. This can cause the following errors
on the side of the metrics-server:
```
x509: certificate signed by unknown authority
x509: certificate is valid for IP-foo not IP-bar
@ -438,6 +482,7 @@ Only applicable to upgrading a control plane node with a kubeadm binary v1.28.3
where the node is currently managed by kubeadm versions v1.28.0, v1.28.1 or v1.28.2.
Here is the error message you may encounter:
```
[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition
[upgrade/etcd] Waiting for previous etcd to become available
@ -454,16 +499,19 @@ k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
...
```
The reason for this failure is that the affected versions generate an etcd manifest file with unwanted defaults in the PodSpec.
This will result in a diff from the manifest comparison, and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash.
The reason for this failure is that the affected versions generate an etcd manifest file with
unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison,
and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash.
There are two ways to work around this issue if you see it in your cluster:
- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using:
```shell
kubeadm upgrade {apply|node} [version] --etcd-upgrade=false
```
This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.
- Before upgrade, patch the manifest for the etcd static pod, to remove the problematic defaulted attributes:
@ -509,4 +557,5 @@ This is not recommended in case a new etcd version was introduced by a later v1.
path: /etc/kubernetes/pki/etcd
```
More information can be found in the [tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug.
More information can be found in the
[tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug.


@ -136,7 +136,7 @@ This step should be done upon upgrading from one to another Kubernetes minor
release in order to get access to the packages of the desired Kubernetes minor
version.
{{< tabs name="k8s_install_versions" >}}
{{< tabs name="k8s_upgrade_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
1. Open the file that defines the Kubernetes `apt` repository using a text editor of your choice:


@ -99,11 +99,14 @@ kubeadm init phase certs <component-name> --config <config-file>
To write new manifest files in `/etc/kubernetes/manifests` you can use:
```shell
# For Kubernetes control plane components
kubeadm init phase control-plane <component-name> --config <config-file>
# For local etcd
kubeadm init phase etcd local --config <config-file>
```
The `<config-file>` contents must match the updated `ClusterConfiguration`.
The `<component-name>` value must be the name of the component.
The `<component-name>` value must be a name of a Kubernetes control plane component (`apiserver`, `controller-manager` or `scheduler`).
{{< note >}}
Updating a file in `/etc/kubernetes/manifests` will tell the kubelet to restart the static Pod for the corresponding component.


@ -76,6 +76,7 @@ The following sysctls are supported in the _safe_ set:
- `net.ipv4.tcp_syncookies`,
- `net.ipv4.ping_group_range` (since Kubernetes 1.18),
- `net.ipv4.ip_unprivileged_port_start` (since Kubernetes 1.22).
- `net.ipv4.ip_local_reserved_ports` (since Kubernetes 1.27).
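As a minimal sketch, a sysctl from the safe set above can be requested through a Pod's `securityContext` (the Pod name and the chosen value here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example   # illustrative name
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.ip_unprivileged_port_start
        value: "1024"
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```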
{{< note >}}
There are some exceptions to the set of safe sysctls:


@ -252,14 +252,14 @@ This is an incomplete list of things that could go wrong, and how to adjust your
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
- Operator error, for example misconfigured Kubernetes software or application software
- Operator error, for example, misconfigured Kubernetes software or application software
### Specific scenarios
- API server VM shutdown or apiserver crashing
- Results
- unable to stop, update, or start new pods, services, or replication controllers
- existing pods and services should continue to work normally, unless they depend on the Kubernetes API
- existing pods and services should continue to work normally unless they depend on the Kubernetes API
- API server backing storage lost
- Results
- the kube-apiserver component fails to start successfully and become healthy
@ -291,7 +291,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
### Mitigations
- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
- Action: Use the IaaS provider's automatic VM restarting feature for IaaS VMs
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Mitigates: Supporting services VM shutdown or crashes


@ -26,7 +26,7 @@ running on a remote cluster locally.
* Kubernetes cluster is installed
* `kubectl` is configured to communicate with the cluster
* [Telepresence](https://www.telepresence.io/docs/latest/install/) is installed
* [Telepresence](https://www.telepresence.io/docs/latest/quick-start/) is installed
<!-- steps -->


@ -23,15 +23,18 @@ kubectl cluster-info
If you see a URL response, kubectl is correctly configured to access your cluster.
If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.
If you see a message similar to the following, kubectl is not configured correctly
or is not able to connect to a Kubernetes cluster.
```
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```
For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.
For example, if you are intending to run a Kubernetes cluster on your laptop (locally),
you will need a tool like Minikube to be installed first and then re-run the commands stated above.
If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use:
If `kubectl cluster-info` returns the URL response but you can't access your cluster, check
whether it is configured properly by using:
```shell
kubectl cluster-info dump
@ -40,9 +43,10 @@ kubectl cluster-info dump
### Troubleshooting the 'No Auth Provider Found' error message {#no-auth-provider-found}
In Kubernetes 1.26, kubectl removed the built-in authentication for the following cloud
providers' managed Kubernetes offerings. These providers have released kubectl plugins to provide the cloud-specific authentication. For instructions, refer to the following provider documentation:
providers' managed Kubernetes offerings. These providers have released kubectl plugins
to provide the cloud-specific authentication. For instructions, refer to the following provider documentation:
* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/)
* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/)
* Google Kubernetes Engine: [gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
(There could also be other reasons to see the same error message, unrelated to that change.)

View File

@ -60,7 +60,7 @@ Now, switch back to the terminal where you ran `minikube start`.
The `dashboard` command enables the dashboard add-on and opens the proxy in the default web browser.
You can create Kubernetes resources on the dashboard such as Deployment and Service.
If you are running in an environment as root, see [Open Dashboard with URL](#open-dashboard-with-url).
To find out how to avoid directly invoking the browser from the terminal and get a URL for the web dashboard, see the "URL copy and paste" tab.
By default, the dashboard is only accessible from within the internal Kubernetes virtual network.
The `dashboard` command creates a temporary proxy to make the dashboard accessible from outside the Kubernetes virtual network.
@ -73,7 +73,7 @@ You can run the `dashboard` command again to create another proxy to access the
{{% /tab %}}
{{% tab name="URL copy and paste" %}}
If you don't want minikube to open a web browser for you, run the dashboard command with the
If you don't want minikube to open a web browser for you, run the `dashboard` subcommand with the
`--url` flag. `minikube` outputs a URL that you can open in the browser you prefer.
Open a **new** terminal, and run:
@ -82,7 +82,7 @@ Open a **new** terminal, and run:
minikube dashboard --url
```
Now, switch back to the terminal where you ran `minikube start`.
Now, you can use this URL and switch back to the terminal where you ran `minikube start`.
{{% /tab %}}
{{< /tabs >}}

View File

@ -130,10 +130,11 @@ description: |-
<div class="row">
<div class="col-md-12">
<h3>View the app</h3>
<p>Pods that are running inside Kubernetes are running on a private, isolated network.
<p><a href="/docs/concepts/workloads/pods/">Pods</a> that are running inside Kubernetes are running on a private, isolated network.
By default they are visible from other pods and services within the same Kubernetes cluster, but not outside that network.
When we use <code>kubectl</code>, we're interacting through an API endpoint to communicate with our application.</p>
<p>We will cover other options on how to expose your application outside the Kubernetes cluster later, in <a href="/docs/tutorials/kubernetes-basics/expose/">Module 4</a>.</p>
<p>We will cover other options on how to expose your application outside the Kubernetes cluster later, in <a href="/docs/tutorials/kubernetes-basics/expose/">Module 4</a>.
Also, as this is a basic tutorial, we don't explain <code>Pods</code> in any detail here; they are covered in later topics.</p>
<p>The <code>kubectl proxy</code> command can create a proxy that will forward communications into the cluster-wide, private network. The proxy can be terminated by pressing control-C and won't show any output while it's running.</p>
<p><strong>You need to open a second terminal window to run the proxy.</strong></p>
<p><b><code>kubectl proxy</code></b></p>

View File

@ -94,7 +94,7 @@ description: |-
</div>
<div class="row">
<div class="col-md-12">
<h3>Create a new Service</h3>
<h3>Step 1: Creating a new Service</h3>
<p>Let's verify that our application is running. We'll use the <code>kubectl get</code> command and look for existing Pods:</p>
<p><code><b>kubectl get pods</b></code></p>
<p>If no Pods are running then it means the objects from the previous tutorials were cleaned up. In this case, go back and recreate the deployment from the <a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro#deploy-an-app">Using kubectl to create a Deployment</a> tutorial.
@ -151,7 +151,7 @@ description: |-
<div class="row">
<div class="col-md-12">
<h3>Deleting a service</h3>
<h3>Step 3: Deleting a service</h3>
<p>To delete Services, you can use the <code>delete service</code> subcommand. Labels can also be used here:</p>
<p><code><b>kubectl delete service -l app=kubernetes-bootcamp</b></code></p>
<p>Confirm that the Service is gone:</p>

View File

@ -156,6 +156,14 @@ kubernetes-bootcamp 1/1 1 1 11m
<p>Next, we'll do a <code>curl</code> to the exposed IP address and port. Execute the command multiple times:</p>
<p><code><b>curl http://"$(minikube ip):$NODE_PORT"</b></code></p>
<p>We hit a different Pod with every request. This demonstrates that the load-balancing is working.</p>
{{< note >}}<p>If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.</p>
<p>In a separate terminal window, execute:<br>
<code><b>minikube service kubernetes-bootcamp --url</b></code></p>
<p>The output looks like this:</p>
<pre><b>http://127.0.0.1:51082<br>! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.</b></pre>
<p>Then use the given URL to access the app:<br>
<code><b>curl 127.0.0.1:51082</b></code></p>
{{< /note >}}
</div>
</div>

View File

@ -294,6 +294,8 @@ following:
1. Create a Pod in the default namespace:
{{% code_sample file="security/example-baseline-pod.yaml" %}}
```shell
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
```

View File

@ -1,4 +1,4 @@
apiVersion: autoscaling/v1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
@ -9,4 +9,10 @@ spec:
name: php-apache
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 50
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50

View File

@ -2,5 +2,5 @@ apiVersion: rules.example.com/v1
kind: ReplicaLimit
metadata:
name: "replica-limit-test.example.com"
namesapce: "default"
maxReplicas: 3
namespace: "default"
maxReplicas: 3

View File

@ -78,7 +78,7 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| December 2023 | 2023-12-08 | 2023-12-13 |
| December 2023 | 2023-12-15 | 2023-12-19 |
| January 2024 | 2024-01-12 | 2024-01-17 |
| February 2024 | 2024-02-09 | 2024-02-14 |
| March 2024 | 2024-03-08 | 2024-03-13 |

View File

@ -22,50 +22,36 @@ Antes de recorrer cada tutorial, recomendamos añadir un marcador a
## Esenciales
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) se trata de un tutorial interactivo en profundidad para entender Kubernetes y probar algunas funciones básicas.
* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
* [Hello Minikube](/es/docs/tutorials/hello-minikube/)
## Configuración
* [Ejemplo: Configurando un Microservicio en Java](/docs/tutorials/configuration/configure-java-microservice/)
* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
## Aplicaciones Stateless
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
## Aplicaciones Stateful
* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
## Clústers
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servicios
* [Using Source IP](/docs/tutorials/services/source-ip/)
## {{% heading "whatsnext" %}}
Si quieres escribir un tutorial, revisa [utilizando templates](/docs/home/contribute/page-templates/) para obtener información sobre el tipo de página y la plantilla de los tutoriales.

View File

@ -857,7 +857,7 @@ Vous devez créer un secret dans l'API Kubernetes avant de pouvoir l'utiliser.
Un conteneur utilisant un secret en tant que point de montage de volume [subPath](#using-subpath) ne recevra pas les mises à jour des secrets.
{{< /note >}}
Les secrets sont décrits plus en détails [ici](/docs/user-guide/secrets).
Les secrets sont décrits plus en détails [ici](/docs/concepts/configuration/secret/).
### storageOS {#storageos}

View File

@ -15,7 +15,7 @@ Cette page montre comment générer automatiquement des pages de référence pou
* Vous devez avoir [Git](https://git-scm.com/book/fr/v2/D%C3%A9marrage-rapide-Installation-de-Git) installé.
* Vous devez avoir [Golang](https://golang.org/doc/install) version 1.9.1 ou ultérieur installé, et votre variable d'environnement `$GOPATH` doit être définie.
* Vous devez avoir [Golang](https://go.dev/doc/install) version 1.9.1 ou ultérieur installé, et votre variable d'environnement `$GOPATH` doit être définie.
* Vous devez avoir [Docker](https://docs.docker.com/engine/installation/) installé.

View File

@ -16,7 +16,7 @@ Cette page montre comment mettre à jour les documents de référence générés
Vous devez avoir ces outils installés:
* [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* [Golang](https://golang.org/doc/install) version 1.9.1 ou ultérieur
* [Golang](https://go.dev/doc/install) version 1.9.1 ou ultérieur
* [Docker](https://docs.docker.com/engine/installation/)
* [etcd](https://github.com/coreos/etcd/)

View File

@ -8,7 +8,7 @@ sitemap:
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
[कुबेरनेट्स]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}), जो K8s के रूप में भी जाना जाता है, कंटेनरीकृत एप्लीकेशन के डिप्लॉयमेंट, स्केलिंग और प्रबंधन को स्वचालित करने के लिए एक ओपन-सोर्स सिस्टम है।
[कुबेरनेट्स]({{< relref "/docs/concepts/overview/_index.md" >}}), जो K8s के रूप में भी जाना जाता है, कंटेनरीकृत एप्लीकेशन के डिप्लॉयमेंट, स्केलिंग और प्रबंधन को स्वचालित करने के लिए एक ओपन-सोर्स सिस्टम है।
यह आसान प्रबंधन और खोज के लिए लॉजिकल इकाइयों में एक एप्लीकेशन बनाने वाले कंटेनरों को समूहित करता है। कुबेरनेट्स [Google में उत्पादन कार्यभार चलाने के 15 वर्षों के अनुभव](http://queue.acm.org/detail.cfm?id=2898444) पर निर्माणित है, जो समुदाय के सर्वोत्तम-नस्लीय विचारों और प्रथाओं के साथ संयुक्त है।
{{% /blocks/feature %}}

View File

@ -1,5 +1,5 @@
---
title: कुबेरेट्स क्या है?
title: अवलोकन
description: >
कुबेरनेट्स कंटेनरीकृत वर्कलोड और सेवाओं के प्रबंधन के लिए एक पोर्टेबल, एक्स्टेंसिबल, ओपन-सोर्स प्लेटफॉर्म है, जो घोषणात्मक कॉन्फ़िगरेशन और स्वचालन दोनों की सुविधा प्रदान करता है। इसका एक बड़ा, तेजी से बढ़ता हुआ पारिस्थितिकी तंत्र है। कुबेरनेट्स सेवाएँ, समर्थन और उपकरण व्यापक रूप से उपलब्ध हैं।
content_type: concept

View File

@ -8,9 +8,9 @@ card:
name: setup
weight: 20
anchors:
- anchor: "#सीखने-का-वातावरण"
- anchor: "#learning-environment"
title: सीखने का वातावरण
- anchor: "#प्रोडक्शन-वातावरण"
- anchor: "#production-environment"
title: प्रोडक्शन वातावरण
---
@ -25,13 +25,13 @@ card:
<!-- body -->
## सीखने का वातावरण
## सीखने का वातावरण {#learning-environment}
यदि आप कुबेरनेट्स सीख रहे हैं, तो कुबेरनेट्स समुदाय द्वारा समर्थित टूल का उपयोग करें,
या स्थानीय मशीन पर कुबेरनेट्स क्लस्टर सेटअप करने के लिए इकोसिस्टम में उपलब्ध टूल का उपयोग करें।
[इंस्टॉल टूल्स](/hi/docs/tasks/tools/) देखें।
## प्रोडक्शन वातावरण
## प्रोडक्शन वातावरण {#production-environment}
[प्रोडक्शन वातावरण](/hi/docs/setup/production-environment/) के लिए समाधान का मूल्यांकन करते समय, विचार करें कि कुबेरनेट्स क्लस्टर के किन पहलुओं (या _abstractions_) का संचालन आप स्वयं प्रबंधित करना चाहते हैं और किसे आप एक प्रदाता को सौंपना पसंद करते हैं।

View File

@ -44,7 +44,7 @@ Linux पर kubectl संस्थापित करने के लिए
kubectl चेकसम फाइल डाउनलोड करें:
```bash
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
```
चेकसम फ़ाइल से kubectl बाइनरी को मान्य करें:
@ -199,7 +199,7 @@ kubectl Bash और Zsh के लिए ऑटोकम्प्लेशन
kubectl-convert चेकसम फ़ाइल डाउनलोड करें:
```bash
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
```
चेकसम फ़ाइल से kubectl-convert बाइनरी को मान्य करें:
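The validation command itself is elided by this hunk. As a local, network-free demonstration of the same `sha256sum --check` pattern (the file names below are placeholders, not the real binary):

```shell
# Demonstrate the checksum-verification pattern on a throwaway file.
# The real flow checks the downloaded binary against its .sha256 file.
printf 'demo binary contents\n' > kubectl-demo
sha256sum kubectl-demo > kubectl-demo.sha256
# Prints "kubectl-demo: OK" when the file matches its recorded checksum.
sha256sum --check kubectl-demo.sha256
```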

View File

@ -52,7 +52,7 @@ content_type: concept
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [seccomp](/docs/tutorials/clusters/seccomp/)
* [Seccomp](/docs/tutorials/clusters/seccomp/)
## सर्विस

View File

@ -20,7 +20,7 @@ v{{< skew currentVersion >}}以外のKubernetesバージョンを実行してい
ワークステーションに以下をインストールしてください:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/ja/docs/tasks/tools/)
このチュートリアルでは、完全な制御下にあるKubernetesクラスターの何を設定できるかをデモンストレーションします。
@ -230,7 +230,7 @@ v{{< skew currentVersion >}}以外のKubernetesバージョンを実行してい
```
{{<note>}}
macOSでDocker DesktopとKinDを利用している場合は、**Preferences > Resources > File Sharing**のメニュー項目からShared Directoryとして`/tmp`を追加できます。
macOSでDocker Desktopと*kind*を利用している場合は、**Preferences > Resources > File Sharing**のメニュー項目からShared Directoryとして`/tmp`を追加できます。
{{</note>}}
1. 目的のPodセキュリティの標準を適用するために、Podセキュリティアドミッションを使うクラスターを作成します:

View File

@ -47,12 +47,12 @@ Kubernetes jako projekt open-source daje Ci wolność wyboru ⏤ skorzystaj z pr
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Obejrzyj wideo</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Weź udział w KubeCon + CloudNativeCon Europe 18-21.04.2023</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Weź udział w KubeCon + CloudNativeCon North America 6-9.11.2023</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Weź udział w KubeCon + CloudNativeCon Europe 19-22.03.2024</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -133,7 +133,7 @@ Para mais detalhes sobre compatibilidade entre as versões, veja:
```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
```
2. Faça o download da chave de assinatura pública da Google Cloud:

View File

@ -22,7 +22,7 @@ weight: 20
- Установленные инструменты:
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- [Golang](https://golang.org/doc/install) версии 1.13+
- [Golang](https://go.dev/doc/install) версии 1.13+
- [Docker](https://docs.docker.com/engine/installation/)
- [etcd](https://github.com/coreos/etcd/)
- [make](https://www.gnu.org/software/make/)

View File

@ -171,12 +171,12 @@ underlying host devices.
基于 PCIe 的加密加速设备功能
可以受益于 IO 硬件虚拟化,通过 I/O 内存管理单元IOMMU提供隔离IOMMU 将设备分组,为工作负载提供隔离的资源
(假设加密卡不与其他设备共享 **IOMMU 组**。如果PCIe设备支持单根 I/O 虚拟化SR-IOV规范则可以进一步增加隔离资源的数量。
SR-IOV 允许将 PCIe 设备将**物理功能项Physical FunctionsPF**设备进一步拆分为
SR-IOV 允许将 PCIe 设备将 **物理功能项Physical FunctionsPF** 设备进一步拆分为
**虚拟功能项Virtual Functions, VF**,并且每个设备都属于自己的 IOMMU 组。
要将这些借助 IOMMU 完成隔离的设备功能项暴露给用户空间和容器,主机内核应该将它们绑定到特定的设备驱动程序。
在 Linux 中,这个驱动程序是 vfio-pci
它通过字符设备将设备提供给用户空间。内核 vfio-pci 驱动程序使用一种称为
**PCI 透传PCI Passthrough**的机制,
**PCI 透传PCI Passthrough** 的机制,
为用户空间应用程序提供了对 PCIe 设备与功能项的直接的、IOMMU 支持的访问。
用户空间框架如数据平面开发工具包Data Plane Development KitDPDK可以利用该接口。
此外虚拟机VM管理程序可以向 VM 提供这些用户空间设备节点,并将它们作为 PCI 设备暴露给寄宿内核。

View File

@ -21,7 +21,7 @@ slug: kubeadm-use-etcd-learner-mode
<!--
The [`kubeadm`](/docs/reference/setup-tools/kubeadm/) tool now supports etcd learner mode, which
allows you to enhance the resilience and stability
of your Kubernetes clusters by leveraging the [learner mode](https://etcd.io/docs/v3.4/learning/design-learner/#appendix-learner-implementation-in-v34)
of your Kubernetes clusters by leveraging the [learner mode](https://etcd.io/docs/v3.4/learning/design-learner/#appendix-learner-implementation-in-v34)
feature introduced in etcd version 3.4.
This guide will walk you through using etcd learner mode with kubeadm. By default, kubeadm runs
a local etcd instance on each control plane node.
@ -52,27 +52,27 @@ in Kubernetes clusters:
在 Kubernetes 集群中采用 etcd learner 模式具有以下几个优点:
<!--
1. **Enhanced Resilience**: etcd learner nodes are non-voting members that catch up with
the leader's logs before becoming fully operational. This prevents new cluster members
from disrupting the quorum or causing leader elections, making the cluster more resilient
during membership changes.
2. **Reduced Cluster Unavailability**: Traditional approaches to adding new members often
result in cluster unavailability periods, especially in slow infrastructure or misconfigurations.
etcd learner mode minimizes such disruptions.
3. **Simplified Maintenance**: Learner nodes provide a safer and reversible way to add or replace
cluster members. This reduces the risk of accidental cluster outages due to misconfigurations or
missteps during member additions.
4. **Improved Network Tolerance**: In scenarios involving network partitions, learner mode allows
for more graceful handling. Depending on the partition a new member lands, it can seamlessly
integrate with the existing cluster without causing disruptions.
1. **Enhanced Resilience**: etcd learner nodes are non-voting members that catch up with
the leader's logs before becoming fully operational. This prevents new cluster members
from disrupting the quorum or causing leader elections, making the cluster more resilient
during membership changes.
1. **Reduced Cluster Unavailability**: Traditional approaches to adding new members often
result in cluster unavailability periods, especially in slow infrastructure or misconfigurations.
etcd learner mode minimizes such disruptions.
1. **Simplified Maintenance**: Learner nodes provide a safer and reversible way to add or replace
cluster members. This reduces the risk of accidental cluster outages due to misconfigurations or
missteps during member additions.
1. **Improved Network Tolerance**: In scenarios involving network partitions, learner mode allows
for more graceful handling. Depending on the partition a new member lands, it can seamlessly
integrate with the existing cluster without causing disruptions.
-->
1. **增强了弹性**etcd learner 节点是非投票成员,在完全进入角色之前会追随领导者的日志。
这样可以防止新的集群成员干扰投票结果或引起领导者选举,从而使集群在成员变更期间更具弹性。
2. **减少了集群不可用时间**:传统的添加新成员的方法通常会造成一段时间集群不可用,特别是在基础设施迟缓或误配的情况下更为明显。
1. **减少了集群不可用时间**:传统的添加新成员的方法通常会造成一段时间集群不可用,特别是在基础设施迟缓或误配的情况下更为明显。
而 etcd learner 模式可以最大程度地减少此类干扰。
3. **简化了维护**learner 节点提供了一种更安全、可逆的方式来添加或替换集群成员。
1. **简化了维护**learner 节点提供了一种更安全、可逆的方式来添加或替换集群成员。
这降低了由于误配或在成员添加过程中出错而导致集群意外失效的风险。
4. **改进了网络容错性**在涉及网络分区的场景中learner 模式允许更优雅的处理。
1. **改进了网络容错性**在涉及网络分区的场景中learner 模式允许更优雅的处理。
根据新成员所落入的分区,它可以无缝地与现有集群集成,而不会造成中断。
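The advantages above apply once learner mode is turned on. As a sketch, the `EtcdLearnerMode` feature gate can be enabled through the kubeadm configuration (assuming the v1beta3 config API mentioned later in this post):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
featureGates:
  EtcdLearnerMode: true   # alpha in Kubernetes v1.27
```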
<!--
@ -149,7 +149,7 @@ ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
member list
member list
...
dc543c4d307fadb9, started, node1, https://10.6.177.40:2380, https://10.6.177.40:2379, false
```
@ -157,26 +157,30 @@ dc543c4d307fadb9, started, node1, https://10.6.177.40:2380, https://10.6.177.40:
<!--
To check if the Kubernetes control plane is healthy, run `kubectl get node -l node-role.kubernetes.io/control-plane=`
and check if the nodes are ready.
Note: It is recommended to have an odd number of members in a etcd cluster.
Before joining a worker node to the new Kubernetes cluster, ensure that the control plane nodes are healthy.
-->
要检查 Kubernetes 控制平面是否健康,运行 `kubectl get node -l node-role.kubernetes.io/control-plane=`
并检查节点是否就绪。
注意:建议在 etcd 集群中的成员个数为奇数。
{{< note >}}
<!--
It is recommended to have an odd number of members in an etcd cluster.
-->
建议在 etcd 集群中的成员个数为奇数。
{{< /note >}}
<!--
Before joining a worker node to the new Kubernetes cluster, ensure that the control plane nodes are healthy.
-->
在将工作节点接入新的 Kubernetes 集群之前,确保控制平面节点健康。
<!--
## What's next
- The feature gate `EtcdLearnerMode` is alpha in v1.27 and we expect it to graduate to beta in the next
minor release of Kubernetes (v1.29).
- etcd has an open issue that may make the process more automatic:
minor release of Kubernetes (v1.29).
- etcd has an open issue that may make the process more automatic:
[Support auto-promoting a learner member to a voting member](https://github.com/etcd-io/etcd/issues/15107).
- Learn more about the kubeadm [configuration format](/docs/reference/config-api/kubeadm-config.v1beta3/) here.
- Learn more about the kubeadm [configuration format](/docs/reference/config-api/kubeadm-config.v1beta3/).
-->
## 接下来的步骤 {#whats-next}
@ -190,7 +194,9 @@ Before joining a worker node to the new Kubernetes cluster, ensure that the cont
Was this guide helpful? If you have any feedback or encounter any issues, please let us know.
Your feedback is always welcome! Join the bi-weekly [SIG Cluster Lifecycle meeting](https://docs.google.com/document/d/1Gmc7LyCIL_148a9Tft7pdhdee0NBHdOfHS1SAF0duI4/edit)
or weekly [kubeadm office hours](https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit). Or reach us via [Slack](https://slack.k8s.io/) (channel **#kubeadm**), or the [SIG's mailing list](https://groups.google.com/g/kubernetes-sig-cluster-lifecycle).
or weekly [kubeadm office hours](https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit).
Or reach us via [Slack](https://slack.k8s.io/) (channel **#kubeadm**), or the
[SIG's mailing list](https://groups.google.com/g/kubernetes-sig-cluster-lifecycle).
-->
## 反馈 {#feedback}

View File

@ -37,12 +37,12 @@ that system resource specifically for that container to use.
{{< glossary_tooltip text="容器" term_id="container" >}}设定所需要的资源数量。
最常见的可设定资源是 CPU 和内存RAM大小此外还有其他类型的资源。
当你为 Pod 中的 Container 指定了资源 **request请求**时,
当你为 Pod 中的 Container 指定了资源 **request请求** 时,
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
就利用该信息决定将 Pod 调度到哪个节点上。
当你为 Container 指定了资源 **limit限制**时,{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
当你为 Container 指定了资源 **limit限制** 时,{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
就可以确保运行的容器不会使用超出所设限制的资源。
kubelet 还会为容器预留所 **request请求**数量的系统资源,供其使用。
kubelet 还会为容器预留所 **request请求** 数量的系统资源,供其使用。
<!-- body -->

View File

@ -307,7 +307,7 @@ and structure the Secret type to have your domain name before the name, separate
by a `/`. For example: `cloud-hosting.example.net/cloud-api-credentials`.
-->
如果你要定义一种公开使用的 Secret 类型,请遵守 Secret 类型的约定和结构,
在类型名签名添加域名,并用 `/` 隔开。
在类型名前面添加域名,并用 `/` 隔开。
例如:`cloud-hosting.example.net/cloud-api-credentials`。
<!--

View File

@ -36,7 +36,7 @@ Pod 安全性标准定义了三种不同的**策略Policy**,以广泛覆
| Profile | 描述 |
| ------ | ----------- |
| <strong style="white-space: nowrap">Privileged</strong> | 不受限制的策略,提供最大可能范围的权限许可。此策略允许已知的特权提升。 |
| <strong style="white-space: nowrap">Baseline</strong> | 限制性最弱的策略,禁止已知的策略提升。允许使用默认的规定最少Pod 配置。 |
| <strong style="white-space: nowrap">Baseline</strong> | 限制性最弱的策略,禁止已知的特权提升。允许使用默认的规定最少Pod 配置。 |
| <strong style="white-space: nowrap">Restricted</strong> | 限制性非常强的策略,遵循当前的保护 Pod 的最佳实践。 |
<!-- body -->

View File

@ -338,15 +338,15 @@ An administrator can manually reclaim the volume with the following steps.
<!--
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
(such as an AWS EBS or GCE PD volume) still exists after the PV is deleted.
still exists after the PV is deleted.
1. Manually clean up the data on the associated storage asset accordingly.
1. Manually delete the associated storage asset.
If you want to reuse the same storage asset, create a new PersistentVolume with
the same storage asset definition.
-->
1. 删除 PersistentVolume 对象。与之相关的、位于外部基础设施中的存储资产
(例如 AWS EBS 或 GCE PD 卷)在 PV 删除之后仍然存在。
1. 删除 PersistentVolume 对象。与之相关的、位于外部基础设施中的存储资产
PV 删除之后仍然存在。
1. 根据情况,手动清除所关联的存储资产上的数据。
1. 手动删除所关联的存储资产。
@ -357,8 +357,7 @@ the same storage asset definition.
For volume plugins that support the `Delete` reclaim policy, deletion removes
both the PersistentVolume object from Kubernetes, as well as the associated
storage asset in the external infrastructure, such as an AWS EBS or GCE PD volume.
Volumes that were dynamically provisioned
storage asset in the external infrastructure. Volumes that were dynamically provisioned
inherit the [reclaim policy of their StorageClass](#reclaim-policy), which
defaults to `Delete`. The administrator should configure the StorageClass
according to users' expectations; otherwise, the PV must be edited or
@ -368,7 +367,7 @@ patched after it is created. See
#### 删除Delete {#delete}
对于支持 `Delete` 回收策略的卷插件,删除动作会将 PersistentVolume 对象从
Kubernetes 中移除,同时也会从外部基础设施(如 AWS EBS 或 GCE PD 卷)中移除所关联的存储资产。
Kubernetes 中移除,同时也会从外部基础设施中移除所关联的存储资产。
动态制备的卷会继承[其 StorageClass 中设置的回收策略](#reclaim-policy)
该策略默认为 `Delete`。管理员需要根据用户的期望来配置 StorageClass
否则 PV 卷被创建之后必须要被编辑或者修补。
@ -478,7 +477,7 @@ Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk
@ -506,7 +505,7 @@ Access Modes: RWO
VolumeMode: Filesystem
Capacity: 200Mi
Node Affinity: <none>
Message:
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
@ -617,14 +616,12 @@ the following types of volumes:
* azureFile (deprecated)
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* flexVolume (deprecated)
* gcePersistentDisk (deprecated)
* rbd
* portworxVolume (deprecated)
-->
* azureFile已弃用
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* flexVolume已弃用
* gcePersistentDisk已弃用
* rbd
* portworxVolume已弃用
@ -744,15 +741,6 @@ FlexVolume resize is possible only when the underlying driver supports resize.
FlexVolume 卷的重设大小只能在下层驱动支持重设大小的时候才可进行。
{{< /note >}}
{{< note >}}
<!--
Expanding EBS volumes is a time-consuming operation.
Also, there is a per-volume quota of one modification every 6 hours.
-->
扩充 EBS 卷的操作非常耗时。同时还存在另一个配额限制:
每 6 小时只能执行一次(尺寸)修改操作。
{{< /note >}}
<!--
#### Recovering from Failure when Expanding Volumes
@ -887,8 +875,6 @@ This means that support is still available but will be removed in a future Kuber
(**deprecated** in v1.21)
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
(**deprecated** in v1.23)
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
(**deprecated** in v1.17)
* [`portworxVolume`](/docs/concepts/storage/volumes/#portworxvolume) - Portworx volume
(**deprecated** in v1.25)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK volume
@ -903,8 +889,6 @@ This means that support is still available but will be removed in a future Kuber
* [`azureFile`](/zh-cn/docs/concepts/storage/volumes/#azurefile) - Azure File
(于 v1.21 **弃用**
* [`flexVolume`](/zh-cn/docs/concepts/storage/volumes/#flexVolume) - FlexVolume (于 v1.23 **弃用**
* [`gcePersistentDisk`](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
(于 v1.17 **弃用**
* [`portworxVolume`](/zh-cn/docs/concepts/storage/volumes/#portworxvolume) - Portworx 卷
(于 v1.25 **弃用**
* [`vsphereVolume`](/zh-cn/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK 卷
@ -1153,12 +1137,9 @@ Kubernetes 使用卷访问模式来匹配 PersistentVolumeClaim 和 PersistentVo
<!--
> __Important!__ A volume can only be mounted using one access mode at a time,
> even if it supports many. For example, a GCEPersistentDisk can be mounted as
> ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
> even if it supports many.
-->
> **重要提醒!** 每个卷同一时刻只能以一种访问模式挂载,即使该卷能够支持多种访问模式。
> 例如,一个 GCEPersistentDisk 卷可以被某节点以 ReadWriteOnce
> 模式挂载,或者被多个节点以 ReadOnlyMany 模式挂载,但不可以同时以两种模式挂载。
<!--
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
@ -1168,8 +1149,6 @@ Kubernetes 使用卷访问模式来匹配 PersistentVolumeClaim 和 PersistentVo
| CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
| FC | &#x2713; | &#x2713; | - | - |
| FlexVolume | &#x2713; | &#x2713; | depends on the driver | - |
| GCEPersistentDisk | &#x2713; | &#x2713; | - | - |
| Glusterfs | &#x2713; | &#x2713; | &#x2713; | - |
| HostPath | &#x2713; | - | - | - |
| iSCSI | &#x2713; | &#x2713; | - | - |
| NFS | &#x2713; | &#x2713; | &#x2713; | - |
@ -1226,20 +1205,20 @@ Current reclaim policies are:
* Retain -- manual reclamation
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
* Delete -- associated storage asset such as AWS EBS or GCE PD volume is deleted
* Delete -- associated storage asset
Currently, only NFS and HostPath support recycling. AWS EBS and GCE PD volumes support deletion.
For Kubernetes {{< skew currentVersion >}}, only `nfs` and `hostPath` volume types support recycling.
-->
### 回收策略 {#reclaim-policy}
目前的回收策略有:
* Retain -- 手动回收
* Recycle -- 基本擦除 (`rm -rf /thevolume/*`)
* Delete -- 诸如 AWS EBS 或 GCE PD 卷这类关联存储资产也被删除
* Recycle -- 简单擦除 (`rm -rf /thevolume/*`)
* Delete -- 删除关联存储资产
目前,仅 NFS 和 HostPath 支持回收Recycle
AWS EBS 和 GCE PD 卷支持删除Delete
对于 Kubernetes {{< skew currentVersion >}} 来说,只有
`nfs``hostPath` 卷类型支持回收Recycle
<!--
### Mount Options
@ -1264,7 +1243,6 @@ The following volume types support mount options:
* `azureFile`
* `cephfs` (**deprecated** in v1.28)
* `cinder` (**deprecated** in v1.18)
* `gcePersistentDisk` (**deprecated** in v1.28)
* `iscsi`
* `nfs`
* `rbd` (**deprecated** in v1.28)
@ -1275,7 +1253,6 @@ The following volume types support mount options:
* `azureFile`
* `cephfs`(于 v1.28 中**弃用**
* `cinder`(于 v1.18 中**弃用**
* `gcePersistentDisk`(于 v1.28 中**弃用**
* `iscsi`
* `nfs`
* `rbd`(于 v1.28 中**弃用**
@ -1302,13 +1279,12 @@ it will become fully deprecated in a future Kubernetes release.
{{< note >}}
<!--
For most volume types, you do not need to set this field. It is automatically
populated for [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) volume block types.
For most volume types, you do not need to set this field.
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
-->
对大多数类型的卷而言,你不需要设置节点亲和性字段。
[GCE PD](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) 卷类型能自动设置相关字段。
你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local) 卷显式地设置此属性。
对大多数类型而言,你不需要设置节点亲和性字段。
你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local)
卷显式地设置此属性。
{{< /note >}}
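<!--
For example, a local volume sketch (node name, path and StorageClass name below
are placeholder assumptions) with the node affinity set explicitly:
-->
例如,一个 local 卷的示意清单(下面的节点名、路径和 StorageClass 名称均为占位假设),其中显式设置了节点亲和性:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage     # 假设已存在的 StorageClass 名称
  local:
    path: /mnt/disks/ssd1             # 占位路径
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1              # 占位节点名
```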
<!--
@ -1702,7 +1678,6 @@ applicable:
<!--
* CSI
* FC (Fibre Channel)
* GCEPersistentDisk (deprecated)
* iSCSI
* Local volume
* OpenStack Cinder
@ -1712,7 +1687,6 @@ applicable:
-->
* CSI
* FC光纤通道
* GCEPersistentDisk已弃用
* iSCSI
* Local 卷
* OpenStack Cinder
@ -2232,4 +2206,3 @@ Read about the APIs described in this page:
* [`PersistentVolume`](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/)
* [`PersistentVolumeClaim`](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/)
@ -103,7 +103,7 @@ Note that certain cloud providers may already define a default StorageClass.
当一个 PVC 没有指定 `storageClassName` 时,会使用默认的 StorageClass。
集群中只能有一个默认的 StorageClass。如果不小心设置了多个默认的 StorageClass
当 PVC 动态配置时,将使用最新设置的默认 StorageClass。
在动态制备 PVC 时将使用其中最新的默认设置的 StorageClass。
关于如何设置默认的 StorageClass
请参见[更改默认 StorageClass](/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/)。
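<!--
A StorageClass is marked as the default via an annotation; a minimal sketch
(the provisioner name below is a placeholder):
-->
StorageClass 通过注解被标记为默认;一个最简示意(下面的制备器名称仅为占位):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-example              # 示例名称
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/provisioner  # 占位制备器
```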
@ -130,7 +130,6 @@ for provisioning PVs. This field must be specified.
| CephFS | - | - |
| FC | - | - |
| FlexVolume | - | - |
| GCEPersistentDisk | &#x2713; | [GCE PD](#gce-pd) |
| iSCSI | - | - |
| NFS | - | [NFS](#nfs) |
| RBD | &#x2713; | [Ceph RBD](#ceph-rbd) |
@ -208,14 +207,16 @@ PersistentVolume 可以配置为可扩展。将此功能设置为 `true` 时,
当下层 StorageClass 的 `allowVolumeExpansion` 字段设置为 true 时,以下类型的卷支持卷扩展。
{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}}
<!--
"Table of Volume types and the version of Kubernetes they require"
-->
{{< table caption = "卷类型及其 Kubernetes 版本要求" >}}
<!--
Volume type | Required Kubernetes version
-->
| 卷类型 | Kubernetes 版本要求 |
| :------------------- | :------------------------ |
| gcePersistentDisk | 1.11 |
| rbd | 1.11 |
| Azure File | 1.11 |
| Portworx | 1.11 |
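<!--
As a sketch (the provisioner name below is a placeholder), expansion is enabled
on the StorageClass:
-->
作为示意(下面的制备器名称仅为占位),卷扩展是在 StorageClass 上开启的:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-example
provisioner: example.com/provisioner  # 占位制备器
allowVolumeExpansion: true            # 允许扩展由此类制备的 PVC
```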
@ -292,24 +293,12 @@ PersistentVolume 会根据 Pod 调度约束指定的拓扑来选择或制备。
[Pod 亲和性和互斥性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)、
以及[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration)。
<!--
The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
- [GCEPersistentDisk](#gce-pd)
-->
以下插件支持动态制备的 `WaitForFirstConsumer` 模式:
- [GCEPersistentDisk](#gce-pd)
<!--
The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:
- All of the above
- [Local](#local)
-->
以下插件支持预创建绑定 PersistentVolume 的 `WaitForFirstConsumer` 模式:
- 上述全部
- [Local](#local)
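<!--
As a sketch (the provisioner name below is a placeholder), the binding mode is
set on the StorageClass:
-->
作为示意(下面的制备器名称仅为占位),绑定模式是在 StorageClass 上设置的:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wait-example
provisioner: example.com/provisioner   # 占位制备器
volumeBindingMode: WaitForFirstConsumer
```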
{{< feature-state state="stable" for_k8s_version="v1.17" >}}
@ -485,85 +474,6 @@ parameters:
`zone``zones` 已被弃用并被[允许的拓扑结构](#allowed-topologies)取代。
{{< /note >}}
### GCE PD {#gce-pd}
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
fstype: ext4
replication-type: none
```
<!--
- `type`: `pd-standard` or `pd-ssd`. Default: `pd-standard`
- `zone` (Deprecated): GCE zone. If neither `zone` nor `zones` is specified, volumes are
generally round-robin-ed across all active zones where Kubernetes cluster has
a node. `zone` and `zones` parameters must not be used at the same time.
- `zones` (Deprecated): A comma separated list of GCE zone(s). If neither `zone` nor `zones`
is specified, volumes are generally round-robin-ed across all active zones
where Kubernetes cluster has a node. `zone` and `zones` parameters must not
be used at the same time.
- `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must be supported by the host operating system.
- `replication-type`: `none` or `regional-pd`. Default: `none`.
-->
- `type``pd-standard` 或者 `pd-ssd`。默认:`pd-standard`
- `zone`已弃用GCE 区域。如果没有指定 `zone``zones`
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。
`zone``zones` 参数不能同时使用。
- `zones`(已弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone``zones`
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度round-robin分配。
`zone``zones` 参数不能同时使用。
- `fstype``ext4` 或 `xfs`。默认:`ext4`。宿主机操作系统必须支持所定义的文件系统类型。
- `replication-type``none` 或者 `regional-pd`。默认值:`none`。
<!--
If `replication-type` is set to `none`, a regular (zonal) PD will be provisioned.
-->
如果 `replication-type` 设置为 `none`,会制备一个常规(当前区域内的)持久化磁盘。
<!--
If `replication-type` is set to `regional-pd`, a
[Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds)
will be provisioned. It's highly recommended to have
`volumeBindingMode: WaitForFirstConsumer` set, in which case when you create
a Pod that consumes a PersistentVolumeClaim which uses this StorageClass, a
Regional Persistent Disk is provisioned with two zones. One zone is the same
as the zone that the Pod is scheduled in. The other zone is randomly picked
from the zones available to the cluster. Disk zones can be further constrained
using `allowedTopologies`.
-->
如果 `replication-type` 设置为 `regional-pd`
会制备一个[区域性持久化磁盘Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds)。
强烈建议设置 `volumeBindingMode: WaitForFirstConsumer`,这样设置后,
当你创建一个 Pod它使用的 PersistentVolumeClaim 使用了这个 StorageClass
区域性持久化磁盘会在两个区域里制备。其中一个区域是 Pod 所在区域,
另一个区域是会在集群管理的区域中任意选择,磁盘区域可以通过 `allowedTopologies` 加以限制。
{{< note >}}
<!--
`zone` and `zones` parameters are deprecated and replaced with
[allowedTopologies](#allowed-topologies). When
[GCE CSI Migration](/docs/concepts/storage/volumes/#gce-csi-migration) is
enabled, a GCE PD volume can be provisioned in a topology that does not match
any nodes, but any pod trying to use that volume will fail to schedule. With
legacy pre-migration GCE PD, in this case an error will be produced
instead at provisioning time. GCE CSI Migration is enabled by default beginning
from the Kubernetes 1.23 release.
-->
`zone``zones` 已被弃用并被 [allowedTopologies](#allowed-topologies) 取代。
当启用 [GCE CSI 迁移](/zh-cn/docs/concepts/storage/volumes/#gce-csi-migration)时,
GCE PD 卷可能被制备在某个与所有节点都不匹配的拓扑域中,但任何尝试使用该卷的 Pod 都无法被调度。
对于传统的迁移前 GCE PD这种情况下将在制备卷的时候产生错误。
从 Kubernetes 1.23 版本开始GCE CSI 迁移默认启用。
{{< /note >}}
### NFS {#nfs}
```yaml
@ -116,7 +116,7 @@ then both the underlying snapshot and VolumeSnapshotContent remain.
-->
### 删除策略 {#deletion-policy}
卷快照类具有 [deletionPolicy] 属性(/zh-cn/docs/concepts/storage/volume-snapshots/#delete)。
卷快照类具有 [deletionPolicy](/zh-cn/docs/concepts/storage/volume-snapshots/#delete) 属性
用户可以配置当所绑定的 VolumeSnapshot 对象将被删除时,如何处理 VolumeSnapshotContent 对象。
卷快照类的这个策略可以是 `Retain` 或者 `Delete`。这个策略字段必须指定。
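<!--
As a sketch (the CSI driver name below is a placeholder), the policy is set in
the VolumeSnapshotClass manifest:
-->
作为示意(下面的 CSI 驱动名称仅为占位),该策略在 VolumeSnapshotClass 清单中设置:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: snapshot-class-example
driver: example.com/csi-driver   # 占位 CSI 驱动名称
deletionPolicy: Delete           # 也可以是 Retain
```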
@ -139,7 +139,7 @@ AWSElasticBlockStore 树内存储驱动已在 Kubernetes v1.19 版本中废弃
并在 v1.27 版本中被完全移除。
Kubernetes 项目建议你转为使用 [AWS EBS](https://github.com/kubernetes-sigs/aws-ebs-csi-driver)
第三方存储驱动。
第三方存储驱动插件
<!--
### azureDisk (removed) {#azuredisk}
@ -163,7 +163,7 @@ Kubernetes {{< skew currentVersion >}} 不包含 `azureDisk` 卷类型。
AzureDisk 树内存储驱动已在 Kubernetes v1.19 版本中废弃,并在 v1.27 版本中被完全移除。
Kubernetes 项目建议你转为使用 [Azure Disk](https://github.com/kubernetes-sigs/azuredisk-csi-driver)
第三方存储驱动。
第三方存储驱动插件
<!--
### azureFile (deprecated) {#azurefile}
@ -234,7 +234,8 @@ and the kubelet, set the `InTreePluginAzureFileUnregister` flag to `true`.
The Kubernetes project suggests that you use the [CephFS CSI](https://github.com/ceph/ceph-csi) third party
storage driver instead.
-->
Kubernetes 项目建议你转为使用 [CephFS CSI](https://github.com/ceph/ceph-csi) 第三方存储驱动。
Kubernetes 项目建议你转为使用 [CephFS CSI](https://github.com/ceph/ceph-csi)
第三方存储驱动插件。
{{< /note >}}
<!--
@ -288,7 +289,7 @@ OpenStack Cinder 树内存储驱动已在 Kubernetes v1.11 版本中废弃,
Kubernetes 项目建议你转为使用
[OpenStack Cinder](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
第三方存储驱动。
第三方存储驱动插件
### configMap
@ -516,163 +517,27 @@ for more details.
更多详情请参考 [FC 示例](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel)。
<!--
### gcePersistentDisk (deprecated) {#gcepersistentdisk}
### gcePersistentDisk (removed) {#gcepersistentdisk}
A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE)
[persistent disk](https://cloud.google.com/compute/docs/disks) (PD) into your Pod.
Unlike `emptyDir`, which is erased when a pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be shared between pods.
Kubernetes {{< skew currentVersion >}} does not include a `gcePersistentDisk` volume type.
-->
### gcePersistentDisk弃用 {#gcepersistentdisk}
### gcePersistentDisk已移除 {#gcepersistentdisk}
{{< feature-state for_k8s_version="v1.17" state="deprecated" >}}
`gcePersistentDisk` 卷能将谷歌计算引擎 (GCE) [持久盘PD](http://cloud.google.com/compute/docs/disks)
挂载到你的 Pod 中。
不像 `emptyDir` 那样会在 Pod 被删除的同时也会被删除,持久盘卷的内容在删除 Pod
时会被保留,卷只是被卸载了。
这意味着持久盘卷可以被预先填充数据,并且这些数据可以在 Pod 之间共享。
{{< note >}}
<!--
You must create a PD using `gcloud` or the GCE API or UI before you can use it.
-->
在使用 PD 前,你必须使用 `gcloud` 或者 GCE API 或 UI 创建它。
{{< /note >}}
Kubernetes {{< skew currentVersion >}} 不包含 `gcePersistentDisk` 卷类型。
<!--
There are some restrictions when using a `gcePersistentDisk`:
* the nodes on which Pods are running must be GCE VMs
* those VMs need to be in the same GCE project and zone as the persistent disk
The `gcePersistentDisk` in-tree storage driver was deprecated in the Kubernetes v1.17 release
and then removed entirely in the v1.28 release.
-->
使用 `gcePersistentDisk` 时有一些限制:
* 运行 Pod 的节点必须是 GCE VM
* 这些 VM 必须和持久盘位于相同的 GCE 项目和区域zone
`gcePersistentDisk` 源代码树内卷存储驱动在 Kubernetes v1.17 版本中被弃用,在 v1.28 版本中被完全移除。
<!--
One feature of GCE persistent disk is concurrent read-only access to a persistent disk.
A `gcePersistentDisk` volume permits multiple consumers to simultaneously
mount a persistent disk as read-only. This means that you can pre-populate a PD with your dataset
and then serve it in parallel from as many Pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode. Simultaneous
writers are not allowed.
The Kubernetes project suggests that you use the [Google Compute Engine Persistent Disk CSI](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
third party storage driver instead.
-->
GCE PD 的一个特点是它们可以同时被多个消费者以只读方式挂载。
这意味着你可以用数据集预先填充 PD然后根据需要并行地在尽可能多的 Pod 中提供该数据集。
不幸的是PD 只能由单个使用者以读写模式挂载,即不允许同时写入。
<!--
Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless
the PD is read-only or the replica count is 0 or 1.
-->
在由 ReplicaSet 所管理的 Pod 上使用 GCE PD 将会失败,除非 PD
是只读模式或者副本的数量是 0 或 1。
<!--
#### Creating a GCE persistent disk {#gce-create-persistent-disk}
Before you can use a GCE persistent disk with a Pod, you need to create it.
-->
#### 创建 GCE 持久盘PD {#gce-create-persistent-disk}
在 Pod 中使用 GCE 持久盘之前,你首先要创建它。
```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```
<!--
#### Example Pod
-->
#### GCE 持久盘配置示例 {#gce-pd-configuration-example}
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: registry.k8s.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
# 此 GCE PD 必须已经存在
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
```
<!--
#### Regional persistent Disks
-->
#### 区域持久盘 {#regional-persistent-disks}
<!--
The [Regional persistent disks](https://cloud.google.com/compute/docs/disks/#repds)
feature allows the creation of persistent disks that are available in two zones
within the same region. In order to use this feature, the volume must be provisioned
as a PersistentVolume; referencing the volume directly from a pod is not supported.
-->
[区域持久盘](https://cloud.google.com/compute/docs/disks/#repds)特性允许你创建能在同一区域的两个可用区中使用的持久盘。
要使用这个特性必须以持久卷PersistentVolume的方式提供卷直接从
Pod 引用这种卷是不可以的。
<!--
#### Manually provisioning a Regional PD PersistentVolume
Dynamic provisioning is possible using a
[StorageClass for GCE PD](/docs/concepts/storage/storage-classes/#gce-pd).
Before creating a PersistentVolume, you must create the persistent disk:
-->
#### 手动制备基于区域 PD 的 PersistentVolume {#manually-provisioning-regional-pd-pv}
使用[为 GCE PD 定义的存储类](/zh-cn/docs/concepts/storage/storage-classes/#gce-pd)
可以实现动态制备。在创建 PersistentVolume 之前,你首先要创建 PD。
```shell
gcloud compute disks create --size=500GB my-data-disk \
  --region us-central1 \
  --replica-zones us-central1-a,us-central1-b
```
<!--
#### Regional persistent disk configuration example
-->
#### 区域持久盘配置示例
<!--
# failure-domain.beta.kubernetes.io/zone should be used prior to 1.21
-->
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: test-volume
spec:
capacity:
storage: 400Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
# failure-domain.beta.kubernetes.io/zone 应在 1.21 之前使用
- key: topology.kubernetes.io/zone
operator: In
values:
- us-central1-a
- us-central1-b
```
Kubernetes 项目建议你转为使用
[Google Compute Engine Persistent Disk CSI](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
第三方存储驱动插件。
<!--
#### GCE CSI migration
@ -693,23 +558,9 @@ must be installed on the cluster.
为了使用此特性,必须在集群中上安装
[GCE PD CSI 驱动程序](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)。
<!--
#### GCE CSI migration complete
To disable the `gcePersistentDisk` storage plugin from being loaded by the controller manager
and the kubelet, set the `InTreePluginGCEUnregister` flag to `true`.
-->
#### GCE CSI 迁移完成
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
要禁止控制器管理器和 kubelet 加载 `gcePersistentDisk` 存储插件,请将
`InTreePluginGCEUnregister` 标志设置为 `true`
<!--
### gitRepo (deprecated) {#gitrepo}
-->
### gitRepo (已弃用) {#gitrepo}
{{< warning >}}
@ -1160,12 +1011,11 @@ for an example of mounting NFS volumes with PersistentVolumes.
<!--
A `persistentVolumeClaim` volume is used to mount a
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumeClaims
are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
are a way for users to "claim" durable storage (such as an iSCSI volume)
without knowing the details of the particular cloud environment.
-->
`persistentVolumeClaim` 卷用来将[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)PersistentVolume挂载到 Pod 中。
持久卷申领PersistentVolumeClaim是用户在不知道特定云环境细节的情况下“申领”持久存储例如
GCE PersistentDisk 或者 iSCSI 卷)的一种方法。
持久卷申领PersistentVolumeClaim是用户在不知道特定云环境细节的情况下“申领”持久存储例如 iSCSI 卷)的一种方法。
<!--
See the information about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) for more
@ -1277,7 +1127,8 @@ directory. For more details, see [projected volumes](/docs/concepts/storage/proj
The Kubernetes project suggests that you use the [Ceph CSI](https://github.com/ceph/ceph-csi)
third party storage driver instead, in RBD mode.
-->
Kubernetes 项目建议你转为以 RBD 模式使用 [Ceph CSI](https://github.com/ceph/ceph-csi) 第三方存储驱动。
Kubernetes 项目建议你转为以 RBD 模式使用 [Ceph CSI](https://github.com/ceph/ceph-csi)
第三方存储驱动插件。
{{< /note >}}
<!--
@ -121,9 +121,7 @@ The following in-tree plugins support persistent storage on Windows nodes:
<!--
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile)
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume)
-->
* [`azureFile`](/zh-cn/docs/concepts/storage/volumes/#azurefile)
* [`gcePersistentDisk`](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk)
* [`vsphereVolume`](/zh-cn/docs/concepts/storage/volumes/#vspherevolume)
@ -469,8 +469,8 @@ The following list documents differences between how Pod specifications work bet
请参考 [GitHub issue](https://github.com/moby/moby/issues/25982)。
目前的行为是通过 CTRL_SHUTDOWN_EVENT 发送 ENTRYPOINT 进程,然后 Windows 默认等待 5 秒,
最后使用正常的 Windows 关机行为终止所有进程。
5 秒默认值实际上位于[容器内](https://github.com/moby/moby/issues/25982#issuecomment-426441183)的 Windows 注册表中,
因此在构建容器时可以覆盖这个值。
5 秒默认值实际上位于[容器内](https://github.com/moby/moby/issues/25982#issuecomment-426441183)的
Windows 注册表中,因此在构建容器时可以覆盖这个值。
* `volumeDevices` - 这是一个 beta 版功能特性,未在 Windows 上实现。
Windows 无法将原始块设备挂接到 Pod。
* `volumes`
@ -594,7 +594,7 @@ The following container runtimes work with Windows:
You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+
as the container runtime for Kubernetes nodes that run Windows.
Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd).
Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd).
-->
### ContainerD {#containerd}
@ -603,7 +603,7 @@ Learn how to [install ContainerD on a Windows node](/docs/setup/production-envir
对于运行 Windows 的 Kubernetes 节点,你可以使用
{{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ 作为容器运行时。
学习如何[在 Windows 上安装 ContainerD](/zh-cn/docs/setup/production-environment/container-runtimes/#install-containerd)。
学习如何[在 Windows 上安装 ContainerD](/zh-cn/docs/setup/production-environment/container-runtimes/#containerd)。
{{< note >}}
<!--
@ -719,7 +719,8 @@ The Kubernetes [cluster API](https://cluster-api.sigs.k8s.io/) project also prov
kubeadm 工具帮助你部署 Kubernetes 集群,提供管理集群的控制平面以及运行工作负载的节点。
Kubernetes [集群 API](https://cluster-api.sigs.k8s.io/) 项目也提供了自动部署 Windows 节点的方式。
Kubernetes [集群 API](https://cluster-api.sigs.k8s.io/) 项目也提供了自动部署
Windows 节点的方式。
<!--
## Windows distribution channels
@ -108,7 +108,7 @@ localization you want to help out with is inside `content/<two-letter-code>`.
### Suggest changes
Create or update your chosen localized page based on the English original. See
[translating content](#translating-content) for more details.
[localize content](#localize-content) for more details.
If you notice a technical inaccuracy or other problem with the upstream
(English) documentation, you should fix the upstream documentation first and
@ -124,7 +124,7 @@ changes to the upstream (English) content.
### 建议更改 {#suggest-changes}
根据英文原件创建或更新你选择的本地化页面。
有关更多详细信息,请参阅[翻译内容](#translating-content)。
有关更多详细信息,请参阅[本地化内容](#localize-content)。
如果你发现上游(英文)文档存在技术错误或其他问题,
你应该先修复上游文档,然后通过更新你正在处理的本地化来重复等效的修复。
@ -762,12 +762,13 @@ The output is:
1. 在列表中使用 note 短代码
1. 带嵌套 note 的第二个条目
2. 带嵌套 note 的第二个条目
{{< note >}}
<!--
Warning, Caution, and Note shortcodes, embedded in lists, need to be indented four spaces. See [Common Shortcode Issues](#common-shortcode-issues).
-->
{{< note >}}
警告、小心和注释短代码可以嵌套在列表中,但是要缩进四个空格。
参见[常见短代码问题](#common-shortcode-issues)。
{{< /note >}}
@ -777,9 +778,9 @@ The output is:
1. A fourth item in a list
-->
1. 列表中第三个条目
3. 列表中第三个条目
1. 列表中第四个条目
4. 列表中第四个条目
<!--
### Caution
@ -1433,6 +1433,13 @@ to interpret the credential format produced by the client plugin.
[Webhook 令牌身份认证组件](#webhook-token-authentication)的模块,
负责解析客户端插件所生成的凭据格式。
{{< note >}}
<!--
Earlier versions of `kubectl` included built-in support for authenticating to AKS and GKE, but this is no longer present.
-->
早期版本的 `kubectl` 内置了对 AKS 和 GKE 的认证支持,但这一功能已不再存在。
{{< /note >}}
<!--
### Example use case
@ -960,7 +960,7 @@ so that authentication produces usernames in the format you want.
-->
### 对主体的引用 {#referring-to-subjects}
RoleBinding 或者 ClusterRoleBinding 可绑定角色到某**主体Subject**上。
RoleBinding 或者 ClusterRoleBinding 可绑定角色到某**主体Subject** 上。
主体可以是组,用户或者{{< glossary_tooltip text="服务账户" term_id="service-account" >}}。
Kubernetes 用字符串来表示用户名。
@ -258,6 +258,7 @@ For a reference to old feature gates that are removed, please refer to
| `TopologyManagerPolicyBetaOptions` | `true` | Beta | 1.28 | |
| `TopologyManagerPolicyOptions` | `false` | Alpha | 1.26 | 1.27 |
| `TopologyManagerPolicyOptions` | `true` | Beta | 1.28 | |
| `UnauthenticatedHTTP2DOSMitigation` | `false` | Beta | 1.28 | |
| `UnknownVersionInteroperabilityProxy` | `false` | Alpha | 1.28 | |
| `UserNamespacesSupport` | `false` | Alpha | 1.28 | |
| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | 1.27 |
@ -1272,6 +1273,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
此特性门控绝对不会进阶至稳定版。
<!--
- `TopologyManagerPolicyOptions`: Allow fine-tuning of topology manager policies,
- `UnauthenticatedHTTP2DOSMitigation`: Enables HTTP/2 Denial of Service (DoS)
mitigations for unauthenticated clients.
Kubernetes v1.28.0 through v1.28.2 do not include this feature gate.
- `UnknownVersionInteroperabilityProxy`: Proxy resource requests to the correct peer kube-apiserver when
multiple kube-apiservers exist at varied versions.
See [Mixed version proxy](/docs/concepts/architecture/mixed-version-proxy/) for more information.
@ -1279,6 +1283,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
Before Kubernetes v1.28, this feature gate was named `UserNamespacesStatelessPodsSupport`.
-->
- `TopologyManagerPolicyOptions`:允许微调拓扑管理器策略。
- `UnauthenticatedHTTP2DOSMitigation`:此功能针对未认证客户端启用 HTTP/2 拒绝服务DoS防御机制。
值得注意的是Kubernetes v1.28.0 至 v1.28.2 版本中并未包含这一功能。
- `UnknownVersionInteroperabilityProxy`:存在多个不同版本的 kube-apiserver 时将资源请求代理到正确的对等 kube-apiserver。
详细信息请参阅[混合版本代理](/zh-cn/docs/concepts/architecture/mixed-version-proxy/)。
- `UserNamespacesSupport`:为 Pod 启用用户名字空间支持。
@ -34,5 +34,5 @@ tags:
* In the **Kubernetes Community**: Conversations often use *downstream* to mean the ecosystem, code, or third-party tools that rely on the core Kubernetes codebase. For example, a new feature in Kubernetes may be adopted by applications *downstream* to improve their functionality.
* In **GitHub** or **git**: The convention is to refer to a forked repo as *downstream*, whereas the source repo is considered *upstream*.
-->
* 在 **Kubernetes 社区**中:**下游(downstream)**在人们交流中常用来表示那些依赖核心 Kubernetes 代码库的生态系统、代码或者第三方工具。例如Kubernetes 的一个新特性可以被**下游(downstream)**应用采用,以提升它们的功能性。
* 在 **GitHub****git** 中:约定用**下游(downstream)**表示分支代码库,源代码库被认为是**上游(upstream)**。
* 在 **Kubernetes 社区**中:**下游(downstream)** 在人们交流中常用来表示那些依赖核心 Kubernetes 代码库的生态系统、代码或者第三方工具。例如Kubernetes 的一个新特性可以被**下游(downstream)** 应用采用,以提升它们的功能性。
* 在 **GitHub****git** 中:约定用**下游(downstream)** 表示分支代码库,源代码库被认为是**上游(upstream)**。
@ -36,7 +36,7 @@ The `kubeadm` tool is good if you need:
- A building block in other ecosystem and/or installer tools with a larger
scope.
-->
kubeadm 工具很棒,如果你需要:
`kubeadm` 工具很棒,如果你需要:
- 一个尝试 Kubernetes 的简单方法。
- 一个现有用户可以自动设置集群并测试其应用程序的途径。
@ -48,9 +48,8 @@ of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the
cloud or on-premises, you can integrate `kubeadm` into provisioning systems such
as Ansible or Terraform.
-->
你可以在各种机器上安装和使用 `kubeadm`:笔记本电脑,
一组云服务器Raspberry Pi 等。无论是部署到云还是本地,
你都可以将 `kubeadm` 集成到预配置系统中,例如 Ansible 或 Terraform。
你可以在各种机器上安装和使用 `kubeadm`笔记本电脑、一组云服务器、Raspberry Pi 等。
无论是部署到云还是本地,你都可以将 `kubeadm` 集成到 Ansible 或 Terraform 这类预配置系统中。
## {{% heading "prerequisites" %}}
@ -83,7 +82,8 @@ applies to `kubeadm` as well as to Kubernetes overall.
Check that policy to learn about what versions of Kubernetes and `kubeadm`
are supported. This page is written for Kubernetes {{< param "version" >}}.
-->
[Kubernetes 版本及版本偏差策略](/zh-cn/releases/version-skew-policy/#supported-versions)适用于 `kubeadm` 以及整个 Kubernetes。
[Kubernetes 版本及版本偏差策略](/zh-cn/releases/version-skew-policy/#supported-versions)适用于
`kubeadm` 以及整个 Kubernetes。
查阅该策略以了解支持哪些版本的 Kubernetes 和 `kubeadm`
该页面是为 Kubernetes {{< param "version" >}} 编写的。
@ -127,7 +127,7 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
#### Component installation
-->
### 主机准备 {#preparing-the-hosts}
### 主机准备 {#preparing-the-hosts}
#### 安装组件 {#component-installation}
@ -135,7 +135,7 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
-->
在所有主机上安装 {{< glossary_tooltip term_id="container-runtime" text="容器运行时" >}} 和 kubeadm。
在所有主机上安装{{< glossary_tooltip term_id="container-runtime" text="容器运行时" >}}和 kubeadm。
详细说明和其他前提条件,请参见[安装 kubeadm](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)。
{{< note >}}
@ -160,7 +160,7 @@ After you initialize your control-plane, the kubelet runs normally.
#### Network setup
kubeadm similarly to other Kubernetes components tries to find a usable IP on
the network interface associated with the default gateway on a host. Such
the network interfaces associated with a default gateway on a host. Such
an IP is then used for the advertising and/or listening performed by a component.
-->
#### 网络设置 {#network-setup}
@ -181,19 +181,39 @@ ip route show # Look for a line starting with "default via"
ip route show # 查找以 "default via" 开头的行
```
{{< note >}}
<!--
If two or more default gateways are present on the host, a Kubernetes component will
try to use the first one it encounters that has a suitable global unicast IP address.
While making this choice, the exact ordering of gateways might vary between different
operating systems and kernel versions.
-->
如果主机上存在两个或多个默认网关Kubernetes 组件将尝试使用遇到的第一个具有合适全局单播 IP 地址的网关。
在做出这个选择时,网关的确切顺序可能因不同的操作系统和内核版本而有所差异。
{{< /note >}}
<!--
Kubernetes components do not accept custom network interface as an option,
therefore a custom IP address must be passed as a flag to all components instances
that need such a custom configuration.
-->
Kubernetes 组件不接受自定义网络接口作为选项,因此必须将自定义 IP
地址作为标志传递给所有需要此自定义配置的组件实例。
{{< note >}}
<!--
If the host does not have a default gateway and if a custom IP address is not passed
to a Kubernetes component, the component may exit with an error.
-->
如果主机没有默认网关,并且没有将自定义 IP 地址传递给 Kubernetes 组件,此组件可能会因错误而退出。
{{< /note >}}
<!--
To configure the API server advertise address for control plane nodes created with both
`init` and `join`, the flag `--apiserver-advertise-address` can be used.
Preferably, this option can be set in the [kubeadm API](/docs/reference/config-api/kubeadm-config.v1beta3)
as `InitConfiguration.localAPIEndpoint` and `JoinConfiguration.controlPlane.localAPIEndpoint`.
-->
Kubernetes 组件不接受自定义网络接口作为选项,因此必须将自定义 IP
地址作为标志传递给所有需要此自定义配置的组件实例。
要为使用 `init``join` 创建的控制平面节点配置 API 服务器的公告地址,
你可以使用 `--apiserver-advertise-address` 标志。
最好在 [kubeadm API](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3)中使用
@ -214,18 +234,18 @@ For dual-stack see
有关双协议栈细节参见[使用 kubeadm 支持双协议栈](/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support)。
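<!--
As a sketch (the IP address below is a placeholder), the advertise address can
be set in the kubeadm v1beta3 configuration instead of via flags:
-->
作为示意(下面的 IP 地址仅为占位),公告地址可以在 kubeadm v1beta3 配置中设置,而不必使用命令行标志:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.100.0.1"   # 占位 IP 地址
  bindPort: 6443
```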
{{< note >}}
<!--
IP addresses become part of certificates SAN fields. Changing these IP addresses would require
The IP addresses that you assign to control plane components become part of their X.509 certificates'
subject alternative name fields. Changing these IP addresses would require
signing new certificates and restarting the affected components, so that the change in
certificate files is reflected. See
[Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)
for more details on this topic.
-->
IP 地址成为证书 SAN 字段的一部分。更改这些 IP 地址将需要签署新的证书并重启受影响的组件,
你分配给控制平面组件的 IP 地址将成为其 X.509 证书的使用者备用名称字段的一部分。
更改这些 IP 地址将需要签署新的证书并重启受影响的组件,
以便反映证书文件中的变化。有关此主题的更多细节参见
[手动续期证书](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)。
{{</ note >}}
{{< warning >}}
<!--
@ -244,22 +264,6 @@ Kubernetes 维护者建议设置主机网络,使默认网关 IP 成为 Kuberne
如果节点的默认网关是公共 IP 地址,你应配置数据包过滤或其他保护节点和集群的安全措施。
{{< /warning >}}
{{< note >}}
<!--
If the host does not have a default gateway, it is recommended to setup one. Otherwise,
without passing a custom IP address to a Kubernetes component, the component
will exit with an error. If two or more default gateways are present on the host,
a Kubernetes component will try to use the first one it encounters that has a suitable
global unicast IP address. While making this choice, the exact ordering of gateways
might vary between different operating systems and kernel versions.
-->
如果主机没有默认网关,则建议设置一个默认网关。
否则,在不传递自定义 IP 地址给 Kubernetes 组件的情况下,此组件将退出并报错。
如果主机上存在两个或多个默认网关,则 Kubernetes
组件将尝试使用所遇到的第一个具有合适全局单播 IP 地址的网关。
在做出此选择时,网关的确切顺序可能因不同的操作系统和内核版本而有所差异。
{{< /note >}}
<!--
### Preparing the required container images
-->
@ -278,15 +282,15 @@ Kubeadm allows you to use a custom image repository for the required images.
See [Using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images)
for more details.
-->
这个步骤是可选的,只适用于你希望 `kubeadm init``kubeadm join` 不去下载存放在 `registry.k8s.io` 上的默认的容器镜像的情况。
这个步骤是可选的,只适用于你希望 `kubeadm init``kubeadm join` 不去下载存放在
`registry.k8s.io` 上的默认容器镜像的情况。
当你在离线的节点上创建一个集群的时候Kubeadm 有一些命令可以帮助你预拉取所需的镜像。
阅读[离线运行 kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection)
获取更多的详情。
Kubeadm 允许你给所需要的镜像指定一个自定义的镜像仓库。
阅读[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images)
获取更多的详情。
阅读[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images)获取更多的详情。
<!--
### Initializing your control-plane node
@ -350,7 +354,7 @@ While `--apiserver-advertise-address` can be used to set the advertise address f
control-plane node's API server, `--control-plane-endpoint` can be used to set the shared endpoint
for all control-plane nodes.
-->
`--apiserver-advertise-address` 可用于为控制平面节点的 API server 设置广播地址,
`--apiserver-advertise-address` 可用于为控制平面节点的 API 服务器设置广播地址,
`--control-plane-endpoint` 可用于为所有控制平面节点设置共享端点。
<!--
@ -376,8 +380,9 @@ This will allow you to pass `--control-plane-endpoint=cluster-endpoint` to `kube
high availability scenario.
-->
其中 `192.168.0.102` 是此节点的 IP 地址,`cluster-endpoint` 是映射到该 IP 的自定义 DNS 名称。
这将允许你将 `--control-plane-endpoint=cluster-endpoint` 传递给 `kubeadm init`,并将相同的 DNS 名称传递给 `kubeadm join`
稍后你可以修改 `cluster-endpoint` 以指向高可用性方案中的负载均衡器的地址。
这将允许你将 `--control-plane-endpoint=cluster-endpoint` 传递给 `kubeadm init`
并将相同的 DNS 名称传递给 `kubeadm join`。稍后你可以修改 `cluster-endpoint`
以指向高可用性方案中的负载均衡器的地址。
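<!--
As a sketch, the same shared endpoint can also be set in the kubeadm v1beta3
configuration (the DNS name reuses the `cluster-endpoint` example above):
-->
作为示意,这一共享端点也可以在 kubeadm v1beta3 配置中设置DNS 名称沿用上文的 `cluster-endpoint` 示例):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "cluster-endpoint:6443"   # 共享端点,可指向负载均衡器
```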
<!--
Turning a single control plane cluster created without `--control-plane-endpoint` into a highly available cluster
@ -437,8 +442,7 @@ After it finishes you should see:
-->
`kubeadm init` 首先运行一系列预检查以确保机器为运行 Kubernetes 准备就绪。
这些预检查会显示警告并在错误时退出。然后 `kubeadm init`
下载并安装集群控制平面组件。这可能会需要几分钟。
完成之后你应该看到:
下载并安装集群控制平面组件。这可能会需要几分钟。完成之后你应该看到:
```none
Your Kubernetes control-plane has initialized successfully!
@ -528,8 +532,7 @@ This section contains important information about networking setup and
deployment order.
Read all of this advice carefully before proceeding.
-->
本节包含有关网络设置和部署顺序的重要信息。
在继续之前,请仔细阅读所有建议。
本节包含有关网络设置和部署顺序的重要信息。在继续之前,请仔细阅读所有建议。
<!--
**You must deploy a
@ -537,10 +540,8 @@ Read all of this advice carefully before proceeding.
(CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.**
-->
**你必须部署一个基于 Pod 网络插件的
{{< glossary_tooltip text="容器网络接口" term_id="cni" >}}
(CNI),以便你的 Pod 可以相互通信。
在安装网络之前,集群 DNS (CoreDNS) 将不会启动。**
**你必须部署一个基于 Pod 网络插件的{{< glossary_tooltip text="容器网络接口" term_id="cni" >}}CNI
以便你的 Pod 可以相互通信。在安装网络之前,集群 DNS (CoreDNS) 将不会启动。**
<!--
- Take care that your Pod network must not overlap with any of the host
@ -550,8 +551,7 @@ Cluster DNS (CoreDNS) will not start up before a network is installed.**
CIDR block to use instead, then use that during `kubeadm init` with
`--pod-network-cidr` and as a replacement in your network plugin's YAML).
-->
- 注意你的 Pod 网络不得与任何主机网络重叠:
如果有重叠,你很可能会遇到问题。
- 注意你的 Pod 网络不得与任何主机网络重叠:如果有重叠,你很可能会遇到问题。
(如果你发现网络插件的首选 Pod 网络与某些主机网络之间存在冲突,
则应考虑使用一个合适的 CIDR 块来代替,
然后在执行 `kubeadm init` 时使用 `--pod-network-cidr` 参数并在你的网络插件的 YAML 中替换它)。
@ -593,13 +593,15 @@ kubeadm 应该是与 CNI 无关的,对 CNI 驱动进行验证目前不在我
Several external projects provide Kubernetes Pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/network-policies/).
-->
一些外部项目为 Kubernetes 提供使用 CNI 的 Pod 网络,
其中一些还支持[网络策略](/zh-cn/docs/concepts/services-networking/network-policies/)。
<!--
See a list of add-ons that implement the
[Kubernetes networking model](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model).
-->
请参阅实现
[Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model)的附加组件列表。
<!--
You can install a Pod network add-on with the following command on the
@ -664,7 +666,7 @@ reasons. If you want to be able to schedule Pods on the control plane nodes,
for example for a single machine Kubernetes cluster, run:
-->
默认情况下,出于安全原因,你的集群不会在控制平面节点上调度 Pod。
如果你希望能够在控制平面节点上调度 Pod例如单机 Kubernetes 集群),请运行:
```bash
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
@ -718,7 +720,7 @@ The nodes are where your workloads (containers and Pods, etc) run. To add new no
<!--
If you do not have the token, you can get it by running the following command on the control-plane node:
-->
如果没有令牌,可以通过在控制平面节点上运行以下命令来获取令牌:
```bash
kubeadm token list
@ -747,6 +749,7 @@ you can create a new token by running the following command on the control-plane
```bash
kubeadm token create
```
<!--
The output is similar to this:
-->
@ -780,7 +783,8 @@ The output is similar to:
<!--
To specify an IPv6 tuple for `<control-plane-host>:<control-plane-port>`, IPv6 address must be enclosed in square brackets, for example: `[2001:db8::101]:2073`.
-->
要为 `<control-plane-host>:<control-plane-port>` 指定 IPv6 元组,必须将 IPv6
地址括在方括号中,例如 `[2001:db8::101]:2073`
{{< /note >}}
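例如,一个使用 IPv6 控制平面端点的 `kubeadm join` 命令大致如下(其中地址沿用上文示例,令牌与哈希均为占位符,仅作示意):

```shell
kubeadm join [2001:db8::101]:2073 --token <令牌> \
    --discovery-token-ca-cert-hash sha256:<哈希值>
```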
<!--
@ -814,8 +818,8 @@ on the first control-plane node. To provide higher availability, please rebalanc
with `kubectl -n kube-system rollout restart deployment coredns` after at least one new node is joined.
-->
由于集群节点通常是按顺序初始化的CoreDNS Pod 很可能都运行在第一个控制面节点上。
为了提供更高的可用性,请在加入至少一个新节点后使用
`kubectl -n kube-system rollout restart deployment coredns` 命令,重新平衡这些 CoreDNS Pod。
{{< /note >}}
<!--
@ -869,7 +873,7 @@ admin.conf 文件为用户提供了对集群的超级用户特权。
If you want to connect to the API Server from outside the cluster you can use
`kubectl proxy`:
-->
如果要从集群外部连接到 API 服务器,则可以使用 `kubectl proxy`
如果要从集群外部连接到 API 服务器,则可以使用 `kubectl proxy`
```bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
@ -879,7 +883,7 @@ kubectl --kubeconfig ./admin.conf proxy
<!--
You can now access the API Server locally at `http://localhost:8001/api/v1`
-->
你现在可以在 `http://localhost:8001/api/v1` 从本地访问 API 服务器
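例如,在 `kubectl proxy` 运行期间,可以这样验证访问(假设代理监听默认的 8001 端口):

```shell
curl http://localhost:8001/api/v1
```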
<!--
## Clean up {#tear-down}
@ -907,21 +911,25 @@ and make sure that the node is empty, then deconfigure the node.
<!--
### Remove the node
-->
### 移除节点 {#remove-the-node}
<!--
Talking to the control-plane node with the appropriate credentials, run:

```bash
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
```
-->
使用适当的凭据与控制平面节点通信,运行:
```bash
kubectl drain <节点名称> --delete-emptydir-data --force --ignore-daemonsets
```
<!--
Before removing the node, reset the state installed by `kubeadm`:
-->
除节点之前,请重置 `kubeadm` 安装的状态:
```bash
kubeadm reset
@ -946,8 +954,12 @@ ipvsadm -C
```
<!--
Now remove the node:
```bash
kubectl delete node <node name>
```
-->
现在除节点:
```bash
kubectl delete node <节点名称>
@ -1002,7 +1014,8 @@ options.
an overview of what is involved.
-->
* 使用 [Sonobuoy](https://github.com/heptio/sonobuoy) 验证集群是否正常运行。
* <a id="lifecycle"/>有关使用 kubeadm 升级集群的详细信息,
请参阅[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。
* 在 [kubeadm 参考文档](/zh-cn/docs/reference/setup-tools/kubeadm/)中了解有关 `kubeadm` 进阶用法的信息。
* 了解有关 Kubernetes [概念](/zh-cn/docs/concepts/)和 [`kubectl`](/zh-cn/docs/reference/kubectl/)的更多信息。
* 有关 Pod 网络附加组件的更多列表,请参见[集群网络](/zh-cn/docs/concepts/cluster-administration/networking/)页面。
@ -1029,10 +1042,10 @@ options.
* 有关漏洞,访问 [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* 有关支持,访问
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack 频道
* 常规的 SIG Cluster Lifecycle 开发 Slack 频道:
[#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* SIG Cluster Lifecycle 的 [SIG 资料](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
* SIG Cluster Lifecycle 邮件列表:
[kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
<!--
@ -1143,7 +1156,7 @@ Example for `kubeadm upgrade`:
* The version of kubeadm used for upgrading the node must be at {{< skew currentVersionAddMinor -1 >}}
or {{< skew currentVersion >}}
-->
`kubeadm upgrade` 的例子:
* 用于创建或升级节点的 kubeadm 版本为 {{< skew currentVersionAddMinor -1 >}}。
* 用于升级节点的 kubeadm 版本必须为 {{< skew currentVersionAddMinor -1 >}} 或 {{< skew currentVersion >}}。
@ -1151,8 +1164,8 @@ or {{< skew currentVersion >}}
To learn more about the version skew between the different Kubernetes component see
the [Version Skew Policy](/releases/version-skew-policy/).
-->
要了解更多关于不同 Kubernetes 组件之间的版本偏差,
请参见[版本偏差策略](/zh-cn/releases/version-skew-policy/)。
<!--
## Limitations {#limitations}

View File

@ -114,7 +114,8 @@ If you see the following warnings while running `kubeadm init`
```
<!--
Then you may be missing `ebtables`, `ethtool` or a similar executable on your node.
You can install them with the following commands:
- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
- For CentOS/Fedora users, run `yum install ebtables ethtool`.
@ -143,9 +144,9 @@ This may be caused by a number of problems. The most common are:
- network connection problems. Check that your machine has full network connectivity before continuing.
- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
configure it properly, see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
and investigating each container by running `docker logs`. For other container runtime, see
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).
-->
这可能是由许多问题引起的。最常见的是:
@ -240,10 +241,12 @@ provider. Please contact the author of the Pod Network add-on to find out whethe
Calico, Canal, and Flannel CNI providers are verified to support HostPort.
For more information, see the
[CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
If your network provider does not support the portmap CNI plugin, you may need to use the
[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport)
or use `HostNetwork=true`.
-->
## `HostPort` 服务无法工作
@ -267,9 +270,10 @@ services](/docs/concepts/services-networking/service/#type-nodeport) or use `Hos
add-on provider to get the latest status of their support for hairpin mode.
- If you are using VirtualBox (directly or via Vagrant), you will need to
ensure that `hostname -i` returns a routable IP address. By default, the first
interface is connected to a non-routable host-only network. A work around
is to modify `/etc/hosts`, see this
[Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
for an example.
-->
## 无法通过其服务 IP 访问 Pod
@ -301,12 +305,14 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
regenerate a certificate if necessary. The certificates in a kubeconfig file
are base64 encoded. The `base64 --decode` command can be used to decode the certificate
and `openssl x509 -text -noout` can be used for viewing the certificate information.
- Unset the `KUBECONFIG` environment variable using:
-->
- 验证 `$HOME/.kube/config` 文件是否包含有效证书,
并在必要时重新生成证书。在 kubeconfig 文件中的证书是 base64 编码的。
`base64 --decode` 命令可以用来解码证书,`openssl x509 -text -noout`
命令可以用于查看证书信息。
- 使用如下方法取消设置 `KUBECONFIG` 环境变量的值:
```shell
@ -328,7 +334,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
- 另一个方法是为 "admin" 用户覆盖现有的 `kubeconfig`
```shell
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
@ -337,7 +343,8 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
<!--
## Kubelet client certificate rotation fails {#kubelet-client-cert}
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the
`/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid`
in kube-apiserver logs. To fix the issue you must follow these steps:
-->
@ -401,11 +408,15 @@ Error from server (NotFound): the server could not find the requested resource
```
<!--
- If you're using flannel as the pod network inside Vagrant, then you will have to
specify the default interface name for flannel.
Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts
are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
This may lead to problems with flannel, which defaults to the first interface on a host.
This leads to all hosts thinking they have the same public IP address. To prevent this,
pass the `--iface eth1` flag to flannel so that the second interface is chosen.
-->
- 如果你正在 Vagrant 中使用 flannel 作为 Pod 网络,则必须指定 flannel 的默认接口名称。
@ -417,7 +428,8 @@ Error from server (NotFound): the server could not find the requested resource
<!--
## Non-public IP used for containers
In some situations `kubectl logs` and `kubectl run` commands may return with the
following errors in an otherwise functional cluster:
-->
## 容器使用的非公共 IP
@ -428,10 +440,15 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
```
<!--
- This may be due to Kubernetes using an IP that can not communicate with other IPs on
the seemingly same subnet, possibly by policy of the machine provider.
- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally
as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's
`InternalIP` instead of the public one.
Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will
not display the offending alias IP address. Alternatively an API endpoint specific to
DigitalOcean allows to query for the anchor IP from the droplet:
-->
- 这或许是由于 Kubernetes 使用的 IP 无法与看似相同的子网上的其他 IP 进行通信的缘故,
可能是由机器提供商的政策所导致的。
@ -471,8 +488,8 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
<!--
## `coredns` pods have `CrashLoopBackOff` or `Error` state
If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario
where the `coredns` pods are not starting. To solve that, you can try one of the following options:
- Upgrade to a [newer version of Docker](/docs/setup/production-environment/container-runtimes/#docker).
@ -497,7 +514,8 @@ kubectl -n kube-system get deployment coredns -o yaml | \
```
<!--
Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.
[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
-->
CoreDNS 处于 `CrashLoopBackOff` 时的另一个原因是当 Kubernetes 中部署的 CoreDNS Pod 检测到环路时。
@ -526,7 +544,7 @@ rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:24
```
<!--
This issue appears if you run CentOS 7 with Docker 1.13.1.84.
This version of Docker can prevent the kubelet from executing into the etcd container.
To work around the issue, choose one of these options:
@ -622,7 +640,24 @@ conditions abate:
而不管它们的条件如何,将其与其他节点保持隔离,直到它们的初始保护条件消除:
```shell
kubectl -n kube-system patch ds kube-proxy -p='{
"spec": {
"template": {
"spec": {
"tolerations": [
{
"key": "CriticalAddonsOnly",
"operator": "Exists"
},
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/control-plane"
}
]
}
}
}
}'
```
<!--
@ -638,7 +673,6 @@ For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/c
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_
for the feature to work.
-->
## 节点上的 `/usr` 被以只读方式挂载 {#usr-mounted-read-only}
@ -648,13 +682,19 @@ for the feature to work.
类似 kubelet 和 kube-controller-manager 这类 Kubernetes 组件使用默认路径
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`
而 FlexVolume 的目录 **必须是可写入的**,该功能特性才能正常工作。
{{< note >}}
<!--
FlexVolume was deprecated in the Kubernetes v1.23 release.
-->
FlexVolume 在 Kubernetes v1.23 版本中已被弃用。
{{< /note >}}
<!--
To workaround this issue, you can configure the flex-volume directory using the kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/).
On the primary control-plane Node (created using `kubeadm init`), pass the following
file using `--config`:
-->
为了解决这个问题,你可以使用 kubeadm 的[配置文件](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/)来配置
@ -700,7 +740,10 @@ be advised that this is modifying a design principle of the Linux distribution.
<!--
## `kubeadm upgrade plan` prints out `context deadline exceeded` error message
This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in
the case of running an external etcd. This is not a critical bug and happens because
older versions of kubeadm perform a version check on the external etcd cluster.
You can proceed with `kubeadm upgrade apply ...`.
This issue is fixed as of version 1.19.
-->
@ -767,3 +810,110 @@ Also see [How to run the metrics-server securely](https://github.com/kubernetes-
以进一步了解如何在 kubeadm 集群中配置 kubelet 使用正确签名了的服务证书。
另请参阅 [How to run the metrics-server securely](https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-to-run-metrics-server-securely)。
<!--
## Upgrade fails due to etcd hash not changing
Only applicable to upgrading a control plane node with a kubeadm binary v1.28.3 or later,
where the node is currently managed by kubeadm versions v1.28.0, v1.28.1 or v1.28.2.
Here is the error message you may encounter:
-->
## 因 etcd 哈希值无变化而升级失败 {#upgrade-fails-due-to-etcd-hash-not-changing}
仅适用于通过 kubeadm 二进制文件 v1.28.3 或更高版本升级控制平面节点的情况,
其中此节点当前由 kubeadm v1.28.0、v1.28.1 或 v1.28.2 管理。
以下是你可能遇到的错误消息:
```
[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition
[upgrade/etcd] Waiting for previous etcd to become available
I0907 10:10:09.109104 3704 etcd.go:588] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.6:2379/ https://172.17.0.4:2379/ https://172.17.0.3:2379/]) are available 1/10
[upgrade/etcd] Etcd was rolled back and is now available
static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition
couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.rollbackOldManifests
cmd/kubeadm/app/phases/upgrade/staticpods.go:525
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.upgradeComponent
cmd/kubeadm/app/phases/upgrade/staticpods.go:254
k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
cmd/kubeadm/app/phases/upgrade/staticpods.go:338
...
```
<!--
The reason for this failure is that the affected versions generate an etcd manifest file with
unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison,
and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash.
There are two ways to work around this issue if you see it in your cluster:
- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using:
This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.
-->
本次失败的原因是受影响的版本在 PodSpec 中生成的 etcd 清单文件带有不需要的默认值。
这将导致与清单比较的差异,并且 kubeadm 预期 Pod 哈希值将发生变化,但 kubelet 永远不会更新哈希值。
如果你在集群中遇到此问题,有两种解决方法:
- 可以运行以下命令跳过 etcd 的版本升级,即受影响版本和 v1.28.3(或更高版本)之间的版本升级:
```shell
kubeadm upgrade {apply|node} [version] --etcd-upgrade=false
```
但不推荐这种方法,因为后续的 v1.28 补丁版本可能引入新的 etcd 版本。
<!--
- Before upgrade, patch the manifest for the etcd static pod, to remove the problematic defaulted attributes:
-->
- 在升级之前,对 etcd 静态 Pod 的清单进行修补,以删除有问题的默认属性:
```patch
diff --git a/etc/kubernetes/manifests/etcd_defaults.yaml b/etc/kubernetes/manifests/etcd_origin.yaml
index d807ccbe0aa..46b35f00e15 100644
--- a/etc/kubernetes/manifests/etcd_defaults.yaml
+++ b/etc/kubernetes/manifests/etcd_origin.yaml
@@ -43,7 +43,6 @@ spec:
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
- successThreshold: 1
timeoutSeconds: 15
name: etcd
resources:
@@ -59,26 +58,18 @@ spec:
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
- successThreshold: 1
timeoutSeconds: 15
- terminationMessagePath: /dev/termination-log
- terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
- dnsPolicy: ClusterFirst
- enableServiceLinks: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
- restartPolicy: Always
- schedulerName: default-scheduler
securityContext:
seccompProfile:
type: RuntimeDefault
- terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
```
<!--
More information can be found in the
[tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug.
-->
有关此错误的更多信息,请查阅[此问题的跟踪页面](https://github.com/kubernetes/kubeadm/issues/2927)。

View File

@ -183,18 +183,25 @@ kubeadm init phase certs <component-name> --config <config-file>
<!--
To write new manifest files in `/etc/kubernetes/manifests` you can use:
-->
要在 `/etc/kubernetes/manifests` 中编写新的清单文件,你可以使用以下命令
<!--
# For Kubernetes control plane components
# For local etcd
-->
```shell
# Kubernetes 控制平面组件
kubeadm init phase control-plane <component-name> --config <config-file>
# 本地 etcd
kubeadm init phase etcd local --config <config-file>
```
<!--
The `<config-file>` contents must match the updated `ClusterConfiguration`.
The `<component-name>` value must be a name of a Kubernetes control plane component (`apiserver`, `controller-manager` or `scheduler`).
-->
`<config-file>` 内容必须与更新后的 `ClusterConfiguration` 匹配。
`<component-name>` 值必须是一个控制平面组件`apiserver`、`controller-manager` 或 `scheduler`的名称。
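例如,使用更新后的配置文件(此处文件名 `kubeadm-config.yaml` 为假设值)为 kube-apiserver 重新生成静态 Pod 清单:

```shell
kubeadm init phase control-plane apiserver --config kubeadm-config.yaml
```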
{{< note >}}
<!--

View File

@ -790,6 +790,36 @@ startupProbe:
value: ""
```
{{< note >}}
<!--
When the kubelet probes a Pod using HTTP, it only follows redirects if the redirect
is to the same host. If the kubelet receives 11 or more redirects during probing, the probe is considered successful
and a related Event is created:
-->
当 kubelet 使用 HTTP 探测 Pod 时,仅当重定向到同一主机时,它才会遵循重定向。
如果 kubelet 在探测期间收到 11 个或更多重定向,则认为探测成功并创建相关事件:
```none
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29m default-scheduler Successfully assigned default/httpbin-7b8bc9cb85-bjzwn to daocloud
Normal Pulling 29m kubelet Pulling image "docker.io/kennethreitz/httpbin"
Normal Pulled 24m kubelet Successfully pulled image "docker.io/kennethreitz/httpbin" in 5m12.402735213s
Normal Created 24m kubelet Created container httpbin
Normal Started 24m kubelet Started container httpbin
Warning ProbeWarning 4m11s (x1197 over 24m) kubelet Readiness probe warning: Probe terminated redirects
```
<!--
If the kubelet receives a redirect where the hostname is different from the request,
the outcome of the probe is treated as successful and kubelet creates an event
to report the redirect failure.
-->
如果 kubelet 收到主机名与请求不同的重定向,则探测结果将被视为成功,并且
kubelet 将创建一个事件来报告重定向失败。
{{< /note >}}
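作为示意,下面是一个最小的 httpGet 就绪探针片段(其中路径 `/healthz` 与端口 `8080` 均为假设值)。如果该路径返回指向不同主机的重定向,按上文所述,探测会被记为成功并产生相应事件:

```yaml
readinessProbe:
  httpGet:
    path: /healthz   # 假设的健康检查路径
    port: 8080       # 假设的容器端口
  periodSeconds: 10
```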
<!--
### TCP probes

View File

@ -30,7 +30,8 @@ ConfigMap 是 Kubernetes 的一种机制,可让你将配置数据注入到应
<!--
The ConfigMap concept allow you to decouple configuration artifacts from image content to
keep containerized applications portable. For example, you can download and run the same
{{< glossary_tooltip text="container image" term_id="image" >}} to spin up containers for
the purposes of local development, system test, or running a live end-user workload.
-->
ConfigMap 概念允许你将配置清单与镜像内容分离,以保持容器化的应用程序的可移植性。
例如,你可以下载并运行相同的{{< glossary_tooltip text="容器镜像" term_id="image" >}}来启动容器,
@ -133,8 +134,10 @@ symlinks, devices, pipes, and more).
{{< note >}}
<!--
Each filename being used for ConfigMap creation must consist of only acceptable characters,
which are: letters (`A` to `Z` and `a` to `z`), digits (`0` to `9`), '-', '_', or '.'.
If you use `kubectl create configmap` with a directory where any of the file names contains
an unacceptable character, the `kubectl` command may fail.
-->
用于创建 ConfigMap 的每个文件名必须由可接受的字符组成,即:字母(`A` 到 `Z`
`a``z`)、数字(`0` 到 `9`)、'-'、'_'或'.'。
@ -162,6 +165,16 @@ Now, download the sample configuration and create the ConfigMap:
-->
现在,下载示例的配置并创建 ConfigMap
<!--
```shell
# Download the sample files into `configure-pod-container/configmap/` directory
wget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties
wget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties
# Create the ConfigMap
kubectl create configmap game-config --from-file=configure-pod-container/configmap/
```
-->
```shell
# 将示例文件下载到 `configure-pod-container/configmap/` 目录
wget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties
@ -360,6 +373,28 @@ Use the option `--from-env-file` to create a ConfigMap from an env-file, for exa
-->
使用 `--from-env-file` 选项基于 env 文件创建 ConfigMap例如
<!--
```shell
# Env-files contain a list of environment variables.
# These syntax rules apply:
# Each line in an env file has to be in VAR=VAL format.
# Lines beginning with # (i.e. comments) are ignored.
# Blank lines are ignored.
# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value)).
# Download the sample files into `configure-pod-container/configmap/` directory
wget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties
wget https://kubernetes.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties
# The env-file `game-env-file.properties` looks like below
cat configure-pod-container/configmap/game-env-file.properties
enemies=aliens
lives=3
allowed="true"
# This comment and the empty line above it are ignored
```
-->
```shell
# Env 文件包含环境变量列表。其中适用以下语法规则:
@ -378,7 +413,7 @@ enemies=aliens
lives=3
allowed="true"
# 此注释和上方的空行将被忽略
```
```shell
@ -595,13 +630,17 @@ For example, to generate a ConfigMap from files `configure-pod-container/configm
例如,要基于 `configure-pod-container/configmap/game.properties`
文件生成一个 ConfigMap
<!--
# Create a kustomization.yaml file with ConfigMapGenerator
-->
```shell
# 创建包含 ConfigMapGenerator 的 kustomization.yaml 文件
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: game-config-4
options:
labels:
game-config: config-4
files:
- configure-pod-container/configmap/game.properties
EOF
@ -680,13 +719,28 @@ with the key `game-special-key`
例如,从 `configure-pod-container/configmap/game.properties` 文件生成 ConfigMap
但使用 `game-special-key` 作为键名:
<!--
```shell
# Create a kustomization.yaml file with ConfigMapGenerator
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: game-config-5
options:
labels:
game-config: config-5
files:
- game-special-key=configure-pod-container/configmap/game.properties
EOF
```
-->
```shell
# 创建包含 ConfigMapGenerator 的 kustomization.yaml 文件
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: game-config-5
options:
labels:
game-config: config-5
files:
- game-special-key=configure-pod-container/configmap/game.properties
EOF
@ -719,6 +773,9 @@ this, you can specify the `ConfigMap` generator. Create (or replace)
为了实现这一点,你可以配置 `ConfigMap` 生成器。
创建(或替换)`kustomization.yaml`,使其具有以下内容。
<!--
# kustomization.yaml contents for creating a ConfigMap from literals
-->
```yaml
---
# 基于字面值创建 ConfigMap 的 kustomization.yaml 内容
@ -784,7 +841,8 @@ section, and learn how to use these objects with Pods.
```
<!--
2. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY`
environment variable in the Pod specification.
-->
2. 将 ConfigMap 中定义的 `special.how` 赋值给 Pod 规约中的 `SPECIAL_LEVEL_KEY` 环境变量。
@ -964,7 +1022,7 @@ kubectl delete pod dapi-test-pod --now
## Add ConfigMap data to a Volume
As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you create
a ConfigMap using `--from-file`, the filename becomes a key stored in the `data` section of
the ConfigMap. The file contents become the key's value.
-->
## 将 ConfigMap 数据添加到一个卷中 {#add-configmap-data-to-a-volume}
@ -1027,7 +1085,8 @@ SPECIAL_TYPE
<!--
Text data is exposed as files using the UTF-8 character encoding. To use some other
character encoding, use `binaryData`
(see [ConfigMap object](/docs/concepts/configuration/configmap/#configmap-object) for more details).
-->
文本数据会展现为 UTF-8 字符编码的文件。如果使用其他字符编码,
可以使用 `binaryData`(详情参阅 [ConfigMap 对象](/zh-cn/docs/concepts/configuration/configmap/#configmap-object))。
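作为示意,一个使用 `binaryData` 的最小 ConfigMap 清单大致如下(名称与键均为假设值base64 字符串对应原始字节内容):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo               # 假设的名称
binaryData:
  app.bin: SGVsbG8sIHdvcmxkIQ==   # base64 编码的二进制内容
```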
@ -1117,7 +1176,7 @@ guide explains the syntax.
<!--
### Optional references
A ConfigMap reference may be marked _optional_. If the ConfigMap is non-existent, the mounted
A ConfigMap reference may be marked _optional_. If the ConfigMap is non-existent, the mounted
volume will be empty. If the ConfigMap exists, but the referenced key is non-existent, the path
will be absent beneath the mount point. See [Optional ConfigMaps](#optional-configmaps) for more
details.
@ -1157,7 +1216,8 @@ Kubelet 在每次定期同步时都会检查所挂载的 ConfigMap 是否是最
{{< note >}}
<!--
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates.
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath)
volume will not receive ConfigMap updates.
-->
使用 ConfigMap 作为 [subPath](/zh-cn/docs/concepts/storage/volumes/#using-subpath)
卷的容器将不会收到 ConfigMap 更新。
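For context, a sketch of what such a `subPath` mount looks like — the Pod and container names are hypothetical, and `game-config` is assumed to be an existing ConfigMap with a `game.properties` key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-subpath-demo   # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/busybox
      command: [ "sleep", "3600" ]
      volumeMounts:
        - name: config
          mountPath: /etc/game/game.properties
          subPath: game.properties   # mounts a single key; updates will NOT propagate
  volumes:
    - name: config
      configMap:
        name: game-config
```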
@ -1204,6 +1264,25 @@ ConfigMap 的 `data` 字段包含配置数据。如下例所示,它可以简
(如用 `--from-literal` 的单个属性定义)或复杂
(如用 `--from-file` 的配置文件或 JSON blob 定义)。
<!--
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: example-config
namespace: default
data:
# example of a simple property defined using --from-literal
example.property.1: hello
example.property.2: world
# example of a complex property defined using --from-file
example.property.file: |-
property.1=value-1
property.2=value-2
property.3=value-3
```
-->
```yaml
apiVersion: v1
kind: ConfigMap
@ -1264,6 +1343,9 @@ as optional:
-->
例如,以下 Pod 规约将 ConfigMap 中的环境变量标记为可选:
<!--
optional: true # mark the variable as optional
-->
```yaml
apiVersion: v1
kind: Pod
@ -1305,6 +1387,9 @@ example, the following Pod specification marks a volume that references a Config
此时 Kubernetes 将总是为卷创建挂载路径,即使引用的 ConfigMap 或键不存在。
例如,以下 Pod 规约将所引用的 ConfigMap 的卷标记为可选:
<!--
optional: true # mark the source ConfigMap as optional
-->
```yaml
apiVersion: v1
kind: Pod
@ -1390,6 +1475,9 @@ Delete the ConfigMaps and Pods that you made:
-->
删除你创建的那些 ConfigMap 和 Pod
<!--
# You might already have removed the next set
-->
```bash
kubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \
configmaps/game-config-env-file

View File

@ -33,7 +33,7 @@ server, you identify yourself as a particular _user_. Kubernetes recognises
the concept of a user, however, Kubernetes itself does **not** have a User
API.
-->
**服务账号Service Account**为 Pod 中运行的进程提供身份标识,
**服务账号Service Account** 为 Pod 中运行的进程提供身份标识,
并映射到 ServiceAccount 对象。当你向 API 服务器执行身份认证时,
你会将自己标识为某个**用户User**。Kubernetes 能够识别用户的概念,
但是 Kubernetes 自身**并不**提供 User API。

View File

@ -133,13 +133,12 @@ For example, this is how to start a simple web server as a static Pod:
<!--
1. Choose a directory, say `/etc/kubernetes/manifests` and place a web server
Pod definition there, for example `/etc/kubernetes/manifests/static-web.yaml`:
# Run this command on the node where kubelet is running
-->
2. 选择一个目录,比如在 `/etc/kubernetes/manifests` 目录来保存 Web 服务 Pod 的定义文件,例如
`/etc/kubernetes/manifests/static-web.yaml`
<!--
# Run this command on the node where kubelet is running
-->
```shell
# 在 kubelet 运行的节点上执行以下命令
mkdir -p /etc/kubernetes/manifests/

View File

@ -45,11 +45,11 @@ running on a remote cluster locally.
<!--
* Kubernetes cluster is installed
* `kubectl` is configured to communicate with the cluster
* [Telepresence](https://www.telepresence.io/docs/latest/install/) is installed
* [Telepresence](https://www.telepresence.io/docs/latest/quick-start/) is installed
-->
* Kubernetes 集群安装完毕
* 配置好 `kubectl` 与集群交互
* [Telepresence](https://www.telepresence.io/docs/latest/install/) 安装完毕
* [Telepresence](https://www.telepresence.io/docs/latest/quick-start/) 安装完毕
<!-- steps -->

View File

@ -67,3 +67,29 @@ If kubectl cluster-info returns the url response but you can't access your clust
```shell
kubectl cluster-info dump
```
<!--
### Troubleshooting the 'No Auth Provider Found' error message {#no-auth-provider-found}
In Kubernetes 1.26, kubectl removed the built-in authentication for the following cloud
providers' managed Kubernetes offerings.
These providers have released kubectl plugins to provide the cloud-specific authentication.
For instructions, refer to the following provider documentation:
-->
### 排查“找不到身份验证提供商”的错误信息 {#no-auth-provider-found}
在 Kubernetes 1.26 中kubectl 删除了以下云提供商托管的 Kubernetes 产品的内置身份验证。
这些提供商已经发布了 kubectl 插件来提供特定于云的身份验证。
有关说明,请参阅以下提供商文档:
<!--
* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/)
* Google Kubernetes Engine: [gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
-->
* Azure AKS[kubelogin 插件](https://azure.github.io/kubelogin/)
* Google Kubernetes Engine[gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
<!--
(There could also be other reasons to see the same error message, unrelated to that change.)
-->
(也可能有其他与此更改无关的原因导致出现相同的错误信息。)

View File

@ -42,6 +42,7 @@ Pod 安全是一个准入控制器,当新的 Pod 被创建时,它会根据 K
请查阅该版本的文档。
## {{% heading "prerequisites" %}}
<!--
Install the following on your workstation:
@ -434,6 +435,8 @@ following:
-->
7. 在 default 名字空间下创建一个 Pod
{{% code_sample file="security/example-baseline-pod.yaml" %}}
```shell
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
```

View File

@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80

View File

@ -2,5 +2,5 @@ apiVersion: rules.example.com/v1
kind: ReplicaLimit
metadata:
name: "replica-limit-test.example.com"
namesapce: "default"
maxReplicas: 3
namespace: "default"
maxReplicas: 3

View File

@ -149,25 +149,17 @@ releases may also occur in between these.
<!--
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| September 2023 | 2023-09-08 | 2023-09-13 |
| October 2023 | 2023-10-13 | 2023-10-18 |
| November 2023 | N/A | N/A |
| December 2023 | 2023-12-01 | 2023-12-06 |
| December 2023 | 2023-12-15 | 2023-12-19 |
| January 2024 | 2024-01-12 | 2024-01-17 |
| February 2024 | 2024-02-09 | 2024-02-14 |
| March 2024 | 2024-03-08 | 2024-03-13 |
-->
| 月度补丁发布 | Cherry Pick 截止日期 | 目标日期 |
|--------------|---------------------|------------|
| 2023 年 9 月 | 2023-09-08 | 2023-09-13 |
| 2023 年 10 月 | 2023-10-13 | 2023-10-18 |
| 2023 年 11 月 | N/A | N/A |
| 2023 年 12 月 | 2023-12-01 | 2023-12-06 |
<!--
**Note:** Due to overlap with KubeCon NA 2023 and the resulting lack of
availability of Release Managers, it has been decided to skip patch releases
in November. Instead, we'll have patch releases early in December.
-->
**注意:**由于与 KubeCon NA 2023 时间冲突以及由此导致的缺少 Release Manager
我们决定在 11 月跳过补丁版本发布。而是在 12 月初发布补丁版本。
| 月度补丁发布 | Cherry Pick 截止日期 | 目标日期 |
|--------------|---------------------|-------------|
| 2023 年 12 月 | 2023-12-15 | 2023-12-19 |
| 2024 年 1 月 | 2024-01-12 | 2024-01-17 |
| 2024 年 2 月 | 2024-02-09 | 2024-02-14 |
| 2024 年 3 月 | 2024-03-08 | 2024-03-13 |
<!--
## Detailed Release History for Active Branches

View File

@ -207,3 +207,13 @@ announcements:
message: |
4 days of incredible opportunities to collaborate, learn + share with the entire community!<br />
November 6 - November 9, 2023.
- name: Legacy package repositories shutdown
startTime: 2023-11-27T00:00:00
endTime: 2024-02-01T00:00:00
style: "background: #d95e00"
title: Changes to the location of Linux packages for Kubernetes
message: |
The legacy Linux package repositories (`apt.kubernetes.io` and `yum.kubernetes.io` AKA `packages.cloud.google.com`)<br/>
have been frozen starting from September 13, 2023 **and are going away in January 2024**, users *must* migrate.<br/>
Please read our [announcement](/blog/2023/08/31/legacy-package-repository-deprecation/) for more details.

View File

@ -6,8 +6,8 @@ schedules:
releaseDate: 2023-08-15
next:
release: 1.28.5
cherryPickDeadline: 2023-12-01
targetDate: 2023-12-06
cherryPickDeadline: 2023-12-15
targetDate: 2023-12-19
maintenanceModeStartDate: 2024-08-28
endOfLifeDate: 2024-10-28
previousPatches:
@ -35,8 +35,8 @@ schedules:
endOfLifeDate: 2024-06-28
next:
release: 1.27.9
cherryPickDeadline: 2023-12-01
targetDate: 2023-12-06
cherryPickDeadline: 2023-12-15
targetDate: 2023-12-19
previousPatches:
- release: 1.27.8
cherryPickDeadline: ""
@ -75,8 +75,8 @@ schedules:
endOfLifeDate: 2024-02-28
next:
release: 1.26.12
cherryPickDeadline: 2023-12-01
targetDate: 2023-12-06
cherryPickDeadline: 2023-12-15
targetDate: 2023-12-19
previousPatches:
- release: 1.26.11
cherryPickDeadline: ""

View File

@ -15,4 +15,4 @@
<p>{{ T "outdated_blog__message" }}</p>
</div>
</section>
{{ end }}
{{ end }}