diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index ea469d42864..5b223b56fe4 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -12,7 +12,6 @@ aliases: sig-docs-localization-owners: # Admins for localization content - a-mccarthy - divya-mohan0209 - - jimangel - kbhawkey - natalisucks - onlydole @@ -22,15 +21,11 @@ aliases: - tengqm sig-docs-de-owners: # Admins for German content - bene2k1 - - mkorbi - rlenferink sig-docs-de-reviews: # PR reviews for German content - bene2k1 - - mkorbi - rlenferink sig-docs-en-owners: # Admins for English content - - annajung - - bradtopol - divya-mohan0209 - katcosgrove # RT 1.29 Docs Lead - kbhawkey @@ -112,18 +107,15 @@ aliases: - atoato88 - bells17 - kakts - - ptux - t-inu sig-docs-ko-owners: # Admins for Korean content - gochist - - ianychoi - jihoon-seo - seokho-son - yoonian - ysyukr sig-docs-ko-reviews: # PR reviews for Korean content - gochist - - ianychoi - jihoon-seo - jmyung - jongwooo @@ -153,7 +145,6 @@ aliases: sig-docs-zh-reviews: # PR reviews for Chinese content - asa3311 - chenrui333 - - chenxuc - howieyuen # idealhack - kinzhi @@ -208,7 +199,6 @@ aliases: - Arhell - idvoretskyi - MaxymVlasov - - Potapy4 # authoritative source: git.k8s.io/community/OWNERS_ALIASES committee-steering: # provide PR approvals for announcements - bentheelder diff --git a/README-hi.md b/README-hi.md index af64eae8f7c..958a44ead11 100644 --- a/README-hi.md +++ b/README-hi.md @@ -3,7 +3,7 @@ [![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) -स्वागत है! इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज़](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियां हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं! +स्वागत है! 
इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियाँ हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं! ## डॉक्स में योगदान देना @@ -37,8 +37,6 @@ > यदि आप विंडोज पर हैं, तो आपको कुछ और टूल्स की आवश्यकता होगी जिन्हें आप [Chocolatey](https://chocolatey.org) के साथ इंस्टॉल कर सकते हैं। -> यदि आप डॉकर के बिना स्थानीय रूप से वेबसाइट चलाना पसंद करते हैं, तो नीचे Hugo का उपयोग करके स्थानीय रूप से साइट चलाना देखें। - यदि आप डॉकर के बिना स्थानीय रूप से वेबसाइट चलाना पसंद करते हैं, तो नीचे दिए गए Hugo का उपयोग करके स्थानीय रूप से [साइट को चलाने](#hugo-का-उपयोग-करते-हुए-स्थानीय-रूप-से-साइट-चलाना) का तरीका देखें। यदि आप [डॉकर](https://www.docker.com/get-started) चला रहे हैं, तो स्थानीय रूप से `कुबेरनेट्स-ह्यूगो` Docker image बनाएँ: diff --git a/content/en/_index.html b/content/en/_index.html index 5df38bb4207..e615cae35bd 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -47,12 +47,12 @@ To download Kubernetes, visit the [download](/releases/download/) section.

- Attend KubeCon + CloudNativeCon North America on November 6-9, 2023
- Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024
+ Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024
+ Attend KubeCon + CloudNativeCon North America on November 12-15, 2024
diff --git a/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md b/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md index da7e2b3ed6d..00ae52cd93a 100644 --- a/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md +++ b/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md @@ -100,10 +100,12 @@ community-owned repositories (`pkgs.k8s.io`). ## Can I continue to use the legacy package repositories? -The existing packages in the legacy repositories will be available for the foreseeable +~~The existing packages in the legacy repositories will be available for the foreseeable future. However, the Kubernetes project can't provide _any_ guarantees on how long is that going to be. The deprecated legacy repositories, and their contents, might -be removed at any time in the future and without a further notice period. +be removed at any time in the future and without a further notice period.~~ + +**UPDATE**: The legacy packages are expected to go away in January 2024. The Kubernetes project **strongly recommends** migrating to the new community-owned repositories **as soon as possible**. diff --git a/content/en/docs/concepts/extend-kubernetes/flowchart.svg b/content/en/docs/concepts/extend-kubernetes/flowchart.svg index b6044c4f49e..5ee7b6b5590 100644 --- a/content/en/docs/concepts/extend-kubernetes/flowchart.svg +++ b/content/en/docs/concepts/extend-kubernetes/flowchart.svg @@ -1,4 +1,4 @@ -
YES
YES
Go to "API Extensions"
Go to "API Extensions"
Do you want to add entirely new types to the Kubernetes API?
Do you want to add...
NO
NO
Do you want to restrict or automatically edit fields in some or all API types?
Do you want to restrict or...
YES
YES
Go to "API Access Extensions"
Go to "API Access Extensions"
NO
NO
Do you want to change the underlying implementation of the built-in API types?
Do you want to change the unde...
YES
YES
NO
NO
NO
NO
YES
YES
Do you want ot change Volumes, Services, Ingresses, PersistentVolumes?
Do you want ot change Volumes, S...
Go to "Infrastructure"
Go to "Infrastructure"
Go to "Automation"
Go to "Automation"
Text is not SVG - cannot display
\ No newline at end of file +
YES
YES
Go to "API Extensions"
Go to "API Extensions"
Do you want to add entirely new types to the Kubernetes API?
Do you want to add...
NO
NO
Do you want to restrict or automatically edit fields in some or all API types?
Do you want to restrict or...
YES
YES
Go to "API Access Extensions"
Go to "API Access Extensions"
NO
NO
Do you want to change the underlying implementation of the built-in API types?
Do you want to change the unde...
YES
YES
NO
NO
NO
NO
YES
YES
Do you want to change Volumes, Services, Ingresses, PersistentVolumes?
Do you want to change Volumes, S...
Go to "Infrastructure"
Go to "Infrastructure"
Go to "Automation"
Go to "Automation"
Text is not SVG - cannot display
diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md index 2fdd2638367..15b64e001a0 100644 --- a/content/en/docs/concepts/security/pod-security-standards.md +++ b/content/en/docs/concepts/security/pod-security-standards.md @@ -271,6 +271,7 @@ fail validation.
  • net.ipv4.ip_unprivileged_port_start
  • net.ipv4.tcp_syncookies
  • net.ipv4.ping_group_range
+  • net.ipv4.ip_local_reserved_ports (since Kubernetes 1.27)
  • diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index ebef412106f..0d5caa4ef2f 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -185,7 +185,7 @@ and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps. -1. Delete the PersistentVolume. The associated storage asset in external infrastructure +1. Delete the PersistentVolume. The associated storage asset in external infrastructure still exists after the PV is deleted. 1. Manually clean up the data on the associated storage asset accordingly. 1. Manually delete the associated storage asset. @@ -273,7 +273,7 @@ Access Modes: RWO VolumeMode: Filesystem Capacity: 1Gi Node Affinity: -Message: +Message: Source: Type: vSphereVolume (a Persistent Disk resource in vSphere) VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk @@ -298,7 +298,7 @@ Access Modes: RWO VolumeMode: Filesystem Capacity: 200Mi Node Affinity: -Message: +Message: Source: Type: CSI (a Container Storage Interface (CSI) volume source) Driver: csi.vsphere.vmware.com @@ -664,7 +664,7 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted {{< /note >}} > __Important!__ A volume can only be mounted using one access mode at a time, -> even if it supports many. +> even if it supports many. 
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod | | :--- | :---: | :---: | :---: | - | @@ -699,7 +699,7 @@ Current reclaim policies are: * Retain -- manual reclamation * Recycle -- basic scrub (`rm -rf /thevolume/*`) -* Delete -- associated storage asset +* Delete -- delete the volume For Kubernetes {{< skew currentVersion >}}, only `nfs` and `hostPath` volume types support recycling. @@ -731,7 +731,7 @@ it will become fully deprecated in a future Kubernetes release. ### Node Affinity {{< note >}} -For most volume types, you do not need to set this field. +For most volume types, you do not need to set this field. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes. {{< /note >}} @@ -1161,7 +1161,7 @@ users should be aware of: When the `CrossNamespaceVolumeDataSource` feature is enabled, there are additional differences: * The `dataSource` field only allows local objects, while the `dataSourceRef` field allows - objects in any namespaces. + objects in any namespaces. * When namespace is specified, `dataSource` and `dataSourceRef` are not synced. Users should always use `dataSourceRef` on clusters that have the feature gate enabled, and diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md index 01f4e304de2..8d230318656 100644 --- a/content/en/docs/concepts/windows/intro.md +++ b/content/en/docs/concepts/windows/intro.md @@ -320,8 +320,7 @@ The following container runtimes work with Windows: You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ as the container runtime for Kubernetes nodes that run Windows. -Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd). - +Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd). 
{{< note >}} There is a [known limitation](/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations) when using GMSA with containerd to access Windows network shares, which requires a diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md index a66b9526a27..8bd6cfa17da 100644 --- a/content/en/docs/contribute/_index.md +++ b/content/en/docs/contribute/_index.md @@ -47,8 +47,8 @@ pull request (PR) to the [`kubernetes/website` GitHub repository](https://github.com/kubernetes/website). You need to be comfortable with [git](https://git-scm.com/) and -[GitHub](https://lab.github.com/) -to work effectively in the Kubernetes community. +[GitHub](https://skills.github.com/) +to work effectively in the Kubernetes community. To get involved with documentation: diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index 41a2d557159..df10c03584f 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -350,7 +350,7 @@ kubectl logs my-pod # dump pod logs (stdout) kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout) kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case) -kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout) +kubectl logs -l name=myLabel -c my-container # dump pod container logs, with label name=myLabel (stdout) kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container kubectl logs -f my-pod # stream pod logs (stdout) kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case) diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md 
b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index 7149897213b..61b62893288 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -89,7 +89,7 @@ After you initialize your control-plane, the kubelet runs normally. #### Network setup kubeadm similarly to other Kubernetes components tries to find a usable IP on -the network interface associated with the default gateway on a host. Such +the network interfaces associated with a default gateway on a host. Such an IP is then used for the advertising and/or listening performed by a component. To find out what this IP is on a Linux host you can use: @@ -98,10 +98,22 @@ To find out what this IP is on a Linux host you can use: ip route show # Look for a line starting with "default via" ``` +{{< note >}} +If two or more default gateways are present on the host, a Kubernetes component will +try to use the first one it encounters that has a suitable global unicast IP address. +While making this choice, the exact ordering of gateways might vary between different +operating systems and kernel versions. +{{< /note >}} + Kubernetes components do not accept custom network interface as an option, therefore a custom IP address must be passed as a flag to all components instances that need such a custom configuration. +{{< note >}} +If the host does not have a default gateway and if a custom IP address is not passed +to a Kubernetes component, the component may exit with an error. +{{< /note >}} + To configure the API server advertise address for control plane nodes created with both `init` and `join`, the flag `--apiserver-advertise-address` can be used. 
Preferably, this option can be set in the [kubeadm API](/docs/reference/config-api/kubeadm-config.v1beta3) @@ -114,13 +126,12 @@ For kubelets on all nodes, the `--node-ip` option can be passed in For dual-stack see [Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support). -{{< note >}} -IP addresses become part of certificates SAN fields. Changing these IP addresses would require +The IP addresses that you assign to control plane components become part of their X.509 certificates' +subject alternative name fields. Changing these IP addresses would require signing new certificates and restarting the affected components, so that the change in certificate files is reflected. See [Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal) for more details on this topic. -{{}} {{< warning >}} The Kubernetes project recommends against this approach (configuring all component instances @@ -132,15 +143,6 @@ is a public IP address, you should configure packet filtering or other security protect the nodes and your cluster. {{< /warning >}} -{{< note >}} -If the host does not have a default gateway, it is recommended to setup one. Otherwise, -without passing a custom IP address to a Kubernetes component, the component -will exit with an error. If two or more default gateways are present on the host, -a Kubernetes component will try to use the first one it encounters that has a suitable -global unicast IP address. While making this choice, the exact ordering of gateways -might vary between different operating systems and kernel versions. 
-{{< /note >}} - ### Preparing the required container images This step is optional and only applies in case you wish `kubeadm init` and `kubeadm join` diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index 23b3c978f6f..abd3f3e0e49 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -73,7 +73,8 @@ If you see the following warnings while running `kubeadm init` [preflight] WARNING: ethtool not found in system path ``` -Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands: +Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. +You can install them with the following commands: - For Ubuntu/Debian users, run `apt install ebtables ethtool`. - For CentOS/Fedora users, run `yum install ebtables ethtool`. @@ -90,9 +91,9 @@ This may be caused by a number of problems. The most common are: - network connection problems. Check that your machine has full network connectivity before continuing. - the cgroup driver of the container runtime differs from that of the kubelet. To understand how to - configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). + configure it properly, see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). - control plane containers are crashlooping or hanging. You can check this by running `docker ps` - and investigating each container by running `docker logs`. For other container runtime see + and investigating each container by running `docker logs`. For other container runtime, see [Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/). 
## kubeadm blocks when removing managed containers @@ -144,10 +145,12 @@ provider. Please contact the author of the Pod Network add-on to find out whethe Calico, Canal, and Flannel CNI providers are verified to support HostPort. -For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md). +For more information, see the +[CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md). -If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of -services](/docs/concepts/services-networking/service/#type-nodeport) or use `HostNetwork=true`. +If your network provider does not support the portmap CNI plugin, you may need to use the +[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport) +or use `HostNetwork=true`. ## Pods are not accessible via their Service IP @@ -157,9 +160,10 @@ services](/docs/concepts/services-networking/service/#type-nodeport) or use `Hos add-on provider to get the latest status of their support for hairpin mode. - If you are using VirtualBox (directly or via Vagrant), you will need to - ensure that `hostname -i` returns a routable IP address. By default the first + ensure that `hostname -i` returns a routable IP address. By default, the first interface is connected to a non-routable host-only network. A work around - is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11) + is to modify `/etc/hosts`, see this + [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11) for an example. ## TLS certificate errors @@ -175,6 +179,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( regenerate a certificate if necessary. 
The certificates in a kubeconfig file are base64 encoded. The `base64 --decode` command can be used to decode the certificate and `openssl x509 -text -noout` can be used for viewing the certificate information. + - Unset the `KUBECONFIG` environment variable using: ```sh @@ -190,7 +195,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( - Another workaround is to overwrite the existing `kubeconfig` for the "admin" user: ```sh - mv $HOME/.kube $HOME/.kube.bak + mv $HOME/.kube $HOME/.kube.bak mkdir $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config @@ -198,7 +203,8 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( ## Kubelet client certificate rotation fails {#kubelet-client-cert} -By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the `/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`. +By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the +`/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`. If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid` in kube-apiserver logs. To fix the issue you must follow these steps: @@ -231,24 +237,34 @@ The following error might indicate that something was wrong in the pod network: Error from server (NotFound): the server could not find the requested resource ``` -- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel. +- If you're using flannel as the pod network inside Vagrant, then you will have to + specify the default interface name for flannel. - Vagrant typically assigns two interfaces to all VMs. 
The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed. + Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts + are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed. - This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen. + This may lead to problems with flannel, which defaults to the first interface on a host. + This leads to all hosts thinking they have the same public IP address. To prevent this, + pass the `--iface eth1` flag to flannel so that the second interface is chosen. ## Non-public IP used for containers -In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster: +In some situations `kubectl logs` and `kubectl run` commands may return with the +following errors in an otherwise functional cluster: ```console Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host ``` -- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider. -- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one. +- This may be due to Kubernetes using an IP that can not communicate with other IPs on + the seemingly same subnet, possibly by policy of the machine provider. 
+- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally + as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's + `InternalIP` instead of the public one. - Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively an API endpoint specific to DigitalOcean allows to query for the anchor IP from the droplet: + Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will + not display the offending alias IP address. Alternatively an API endpoint specific to + DigitalOcean allows to query for the anchor IP from the droplet: ```sh curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address @@ -270,12 +286,13 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6 ## `coredns` pods have `CrashLoopBackOff` or `Error` state -If you have nodes that are running SELinux with an older version of Docker you might experience a scenario -where the `coredns` pods are not starting. To solve that you can try one of the following options: +If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario +where the `coredns` pods are not starting. To solve that, you can try one of the following options: - Upgrade to a [newer version of Docker](/docs/setup/production-environment/container-runtimes/#docker). - [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux). + - Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`: ```bash @@ -284,7 +301,8 @@ kubectl -n kube-system get deployment coredns -o yaml | \ kubectl apply -f - ``` -Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop. 
[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters) +Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop. +[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters) are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits. {{< warning >}} @@ -300,7 +318,7 @@ If you encounter the following error: rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\"" ``` -this issue appears if you run CentOS 7 with Docker 1.13.1.84. +This issue appears if you run CentOS 7 with Docker 1.13.1.84. This version of Docker can prevent the kubelet from executing into the etcd container. To work around the issue, choose one of these options: @@ -344,6 +362,7 @@ to pick up the node's IP address properly and has knock-on effects to the proxy load balancers. 
The following error can be seen in kube-proxy Pods: + ``` server.go:610] Failed to retrieve node IP: host IP unknown; known addresses: [] proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP @@ -352,8 +371,26 @@ proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane nodes regardless of their conditions, keeping it off of other nodes until their initial guarding conditions abate: + ``` -kubectl -n kube-system patch ds kube-proxy -p='{ "spec": { "template": { "spec": { "tolerations": [ { "key": "CriticalAddonsOnly", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/control-plane" } ] } } } }' +kubectl -n kube-system patch ds kube-proxy -p='{ + "spec": { + "template": { + "spec": { + "tolerations": [ + { + "key": "CriticalAddonsOnly", + "operator": "Exists" + }, + { + "effect": "NoSchedule", + "key": "node-role.kubernetes.io/control-plane" + } + ] + } + } + } +}' ``` The tracking issue for this problem is [here](https://github.com/kubernetes/kubeadm/issues/1027). @@ -365,12 +402,15 @@ For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/c Kubernetes components like the kubelet and kube-controller-manager use the default path of `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_ for the feature to work. -(**Note**: FlexVolume was deprecated in the Kubernetes v1.23 release) -To workaround this issue you can configure the flex-volume directory using the kubeadm +{{< note >}} +FlexVolume was deprecated in the Kubernetes v1.23 release. +{{< /note >}} + +To workaround this issue, you can configure the flex-volume directory using the kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/). 
-On the primary control-plane Node (created using `kubeadm init`) pass the following +On the primary control-plane Node (created using `kubeadm init`), pass the following file using `--config`: ```yaml @@ -402,7 +442,10 @@ be advised that this is modifying a design principle of the Linux distribution. ## `kubeadm upgrade plan` prints out `context deadline exceeded` error message -This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in the case of running an external etcd. This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`. +This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in +the case of running an external etcd. This is not a critical bug and happens because +older versions of kubeadm perform a version check on the external etcd cluster. +You can proceed with `kubeadm upgrade apply ...`. This issue is fixed as of version 1.19. @@ -422,6 +465,7 @@ can be used insecurely by passing the `--kubelet-insecure-tls` to it. This is no If you want to use TLS between the metrics-server and the kubelet there is a problem, since kubeadm deploys a self-signed serving certificate for the kubelet. This can cause the following errors on the side of the metrics-server: + ``` x509: certificate signed by unknown authority x509: certificate is valid for IP-foo not IP-bar @@ -438,6 +482,7 @@ Only applicable to upgrading a control plane node with a kubeadm binary v1.28.3 where the node is currently managed by kubeadm versions v1.28.0, v1.28.1 or v1.28.2. Here is the error message you may encounter: + ``` [upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. 
Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition [upgrade/etcd] Waiting for previous etcd to become available @@ -454,16 +499,19 @@ k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade ... ``` -The reason for this failure is that the affected versions generate an etcd manifest file with unwanted defaults in the PodSpec. -This will result in a diff from the manifest comparison, and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash. +The reason for this failure is that the affected versions generate an etcd manifest file with +unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison, +and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash. There are two way to workaround this issue if you see it in your cluster: -- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using: -```shell -kubeadm upgrade {apply|node} [version] --etcd-upgrade=false -``` -This is not recommended in case a new etcd version was introduced by a later v1.28 patch version. +- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using: + + ```shell + kubeadm upgrade {apply|node} [version] --etcd-upgrade=false + ``` + + This is not recommended in case a new etcd version was introduced by a later v1.28 patch version. - Before upgrade, patch the manifest for the etcd static pod, to remove the problematic defaulted attributes: @@ -509,4 +557,5 @@ This is not recommended in case a new etcd version was introduced by a later v1. path: /etc/kubernetes/pki/etcd ``` -More information can be found in the [tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug. 
\ No newline at end of file +More information can be found in the +[tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug. diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md index 810cd8d03d7..b4312a32a9b 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md @@ -136,7 +136,7 @@ This step should be done upon upgrading from one to another Kubernetes minor release in order to get access to the packages of the desired Kubernetes minor version. -{{< tabs name="k8s_install_versions" >}} +{{< tabs name="k8s_upgrade_versions" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} 1. Open the file that defines the Kubernetes `apt` repository using a text editor of your choice: diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md index b9cb5041d75..d2d6a83d356 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md @@ -99,11 +99,14 @@ kubeadm init phase certs --config To write new manifest files in `/etc/kubernetes/manifests` you can use: ```shell +# For Kubernetes control plane components kubeadm init phase control-plane --config +# For local etcd +kubeadm init phase etcd local --config ``` The `` contents must match the updated `ClusterConfiguration`. -The `` value must be the name of the component. +The `` value must be a name of a Kubernetes control plane component (`apiserver`, `controller-manager` or `scheduler`). {{< note >}} Updating a file in `/etc/kubernetes/manifests` will tell the kubelet to restart the static Pod for the corresponding component. 
diff --git a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md index 9347dc5c3ab..cccf4b8350a 100644 --- a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md +++ b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md @@ -76,6 +76,7 @@ The following sysctls are supported in the _safe_ set: - `net.ipv4.tcp_syncookies`, - `net.ipv4.ping_group_range` (since Kubernetes 1.18), - `net.ipv4.ip_unprivileged_port_start` (since Kubernetes 1.22). +- `net.ipv4.ip_local_reserved_ports` (since Kubernetes 1.27). {{< note >}} There are some exceptions to the set of safe sysctls: diff --git a/content/en/docs/tasks/debug/debug-cluster/_index.md b/content/en/docs/tasks/debug/debug-cluster/_index.md index fcb7ba4a016..a10b0bdcff7 100644 --- a/content/en/docs/tasks/debug/debug-cluster/_index.md +++ b/content/en/docs/tasks/debug/debug-cluster/_index.md @@ -252,14 +252,14 @@ This is an incomplete list of things that could go wrong, and how to adjust your - Network partition within cluster, or between cluster and users - Crashes in Kubernetes software - Data loss or unavailability of persistent storage (e.g. 
GCE PD or AWS EBS volume) -- Operator error, for example misconfigured Kubernetes software or application software +- Operator error, for example, misconfigured Kubernetes software or application software ### Specific scenarios - API server VM shutdown or apiserver crashing - Results - unable to stop, update, or start new pods, services, replication controller - - existing pods and services should continue to work normally, unless they depend on the Kubernetes API + - existing pods and services should continue to work normally unless they depend on the Kubernetes API - API server backing storage lost - Results - the kube-apiserver component fails to start successfully and become healthy @@ -291,7 +291,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your ### Mitigations -- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs +- Action: Use the IaaS provider's automatic VM restarting feature for IaaS VMs - Mitigates: Apiserver VM shutdown or apiserver crashing - Mitigates: Supporting services VM shutdown or crashes diff --git a/content/en/docs/tasks/debug/debug-cluster/local-debugging.md b/content/en/docs/tasks/debug/debug-cluster/local-debugging.md index 6e7d73841f2..2bbf59a22e3 100644 --- a/content/en/docs/tasks/debug/debug-cluster/local-debugging.md +++ b/content/en/docs/tasks/debug/debug-cluster/local-debugging.md @@ -26,7 +26,7 @@ running on a remote cluster locally. 
* Kubernetes cluster is installed * `kubectl` is configured to communicate with the cluster -* [Telepresence](https://www.telepresence.io/docs/latest/install/) is installed +* [Telepresence](https://www.telepresence.io/docs/latest/quick-start/) is installed diff --git a/content/en/docs/tasks/tools/included/verify-kubectl.md b/content/en/docs/tasks/tools/included/verify-kubectl.md index ae5d818e02a..b4eb0fe08d2 100644 --- a/content/en/docs/tasks/tools/included/verify-kubectl.md +++ b/content/en/docs/tasks/tools/included/verify-kubectl.md @@ -23,15 +23,18 @@ kubectl cluster-info If you see a URL response, kubectl is correctly configured to access your cluster. -If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster. +If you see a message similar to the following, kubectl is not configured correctly +or is not able to connect to a Kubernetes cluster. ``` The connection to the server was refused - did you specify the right host or port? ``` -For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above. +For example, if you are intending to run a Kubernetes cluster on your laptop (locally), +you will need a tool like Minikube to be installed first and then re-run the commands stated above. -If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use: +If kubectl cluster-info returns the url response but you can't access your cluster, +to check whether it is configured properly, use: ```shell kubectl cluster-info dump @@ -40,9 +43,10 @@ kubectl cluster-info dump ### Troubleshooting the 'No Auth Provider Found' error message {#no-auth-provider-found} In Kubernetes 1.26, kubectl removed the built-in authentication for the following cloud -providers' managed Kubernetes offerings. 
These providers have released kubectl plugins to provide the cloud-specific authentication. For instructions, refer to the following provider documentation: +providers' managed Kubernetes offerings. These providers have released kubectl plugins +to provide the cloud-specific authentication. For instructions, refer to the following provider documentation: -* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/) +* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/) * Google Kubernetes Engine: [gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin) (There could also be other reasons to see the same error message, unrelated to that change.) diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index d12775eb235..e1ef63d0769 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -60,7 +60,7 @@ Now, switch back to the terminal where you ran `minikube start`. The `dashboard` command enables the dashboard add-on and opens the proxy in the default web browser. You can create Kubernetes resources on the dashboard such as Deployment and Service. -If you are running in an environment as root, see [Open Dashboard with URL](#open-dashboard-with-url). +To find out how to avoid directly invoking the browser from the terminal and get a URL for the web dashboard, see the "URL copy and paste" tab. By default, the dashboard is only accessible from within the internal Kubernetes virtual network. The `dashboard` command creates a temporary proxy to make the dashboard accessible from outside the Kubernetes virtual network. 
@@ -73,7 +73,7 @@ You can run the `dashboard` command again to create another proxy to access the {{% /tab %}} {{% tab name="URL copy and paste" %}} -If you don't want minikube to open a web browser for you, run the dashboard command with the +If you don't want minikube to open a web browser for you, run the `dashboard` subcommand with the `--url` flag. `minikube` outputs a URL that you can open in the browser you prefer. Open a **new** terminal, and run: @@ -82,7 +82,7 @@ Open a **new** terminal, and run: minikube dashboard --url ``` -Now, switch back to the terminal where you ran `minikube start`. +Now, you can use this URL and switch back to the terminal where you ran `minikube start`. {{% /tab %}} {{< /tabs >}} diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index 764c785d7ac..7339066dc7f 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -130,10 +130,11 @@ description: |-

    View the app

    -

    Pods that are running inside Kubernetes are running on a private, isolated network. +

    Pods that are running inside Kubernetes are running on a private, isolated network. By default they are visible from other pods and services within the same Kubernetes cluster, but not outside that network. When we use kubectl, we're interacting through an API endpoint to communicate with our application.

    -

    We will cover other options on how to expose your application outside the Kubernetes cluster later, in Module 4.

    +

    We will cover other options on how to expose your application outside the Kubernetes cluster later, in Module 4. + Also, as this is a basic tutorial, we don't explain what Pods are in any detail here; that will be covered in later topics.

    The kubectl proxy command can create a proxy that will forward communications into the cluster-wide, private network. The proxy can be terminated by pressing control-C and won't show any output while it's running.

    You need to open a second terminal window to run the proxy.

    kubectl proxy diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html index ad24db80442..5595ed08971 100644 --- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -94,7 +94,7 @@ description: |-

    -

    Create a new Service

    +

    Step 1: Creating a new Service

    Let’s verify that our application is running. We’ll use the kubectl get command and look for existing Pods:

    kubectl get pods

    If no Pods are running then it means the objects from the previous tutorials were cleaned up. In this case, go back and recreate the deployment from the Using kubectl to create a Deployment tutorial. @@ -151,7 +151,7 @@ description: |-

    -

    Deleting a service

    +

    Step 3: Deleting a service

    To delete Services you can use the delete service subcommand. Labels can be used also here:

    kubectl delete service -l app=kubernetes-bootcamp

    Confirm that the Service is gone:

    diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html index 7c6d980ef79..d714433962c 100644 --- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html +++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -156,6 +156,14 @@ kubernetes-bootcamp 1/1 1 1 11m

    Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:

    curl http://"$(minikube ip):$NODE_PORT"

    We hit a different Pod with every request. This demonstrates that the load-balancing is working.

    + {{< note >}}

    If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.
    +

    In a separate terminal window, execute:
    + minikube service kubernetes-bootcamp --url

    +

    The output looks like this: +

    http://127.0.0.1:51082
    ! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

    +

    Then use the given URL to access the app:
    + curl 127.0.0.1:51082

    + {{< /note >}}
    diff --git a/content/en/docs/tutorials/security/cluster-level-pss.md b/content/en/docs/tutorials/security/cluster-level-pss.md index bbeb186a23f..bc6f8fcc95a 100644 --- a/content/en/docs/tutorials/security/cluster-level-pss.md +++ b/content/en/docs/tutorials/security/cluster-level-pss.md @@ -294,6 +294,8 @@ following: 1. Create a Pod in the default namespace: + {{% code_sample file="security/example-baseline-pod.yaml" %}} + ```shell kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml ``` diff --git a/content/en/examples/application/hpa/php-apache.yaml b/content/en/examples/application/hpa/php-apache.yaml index f3f1ef5d4f9..1c49aca6a1f 100644 --- a/content/en/examples/application/hpa/php-apache.yaml +++ b/content/en/examples/application/hpa/php-apache.yaml @@ -1,4 +1,4 @@ -apiVersion: autoscaling/v1 +apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: php-apache @@ -9,4 +9,10 @@ spec: name: php-apache minReplicas: 1 maxReplicas: 10 - targetCPUUtilizationPercentage: 50 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 50 diff --git a/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml b/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml index 813bc7b3345..9d8ceee2201 100644 --- a/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml +++ b/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml @@ -2,5 +2,5 @@ apiVersion: rules.example.com/v1 kind: ReplicaLimit metadata: name: "replica-limit-test.example.com" - namesapce: "default" -maxReplicas: 3 \ No newline at end of file + namespace: "default" +maxReplicas: 3 diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index 312dafd1b3a..1ab1c8552de 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,7 +78,7 @@ releases may also occur in between these. 
| Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| December 2023 | 2023-12-08 | 2023-12-13 | +| December 2023 | 2023-12-15 | 2023-12-19 | | January 2024 | 2024-01-12 | 2024-01-17 | | February 2024 | 2024-02-09 | 2024-02-14 | | March 2024 | 2024-03-08 | 2024-03-13 | diff --git a/content/es/docs/tutorials/_index.md b/content/es/docs/tutorials/_index.md index 932b6f60434..86fde686f22 100644 --- a/content/es/docs/tutorials/_index.md +++ b/content/es/docs/tutorials/_index.md @@ -22,50 +22,36 @@ Antes de recorrer cada tutorial, recomendamos añadir un marcador a ## Esenciales * [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) se trata de un tutorial interactivo en profundidad para entender Kubernetes y probar algunas funciones básicas. - * [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) - * [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) - * [Hello Minikube](/es/docs/tutorials/hello-minikube/) ## Configuración * [Ejemplo: Configurando un Microservicio en Java](/docs/tutorials/configuration/configure-java-microservice/) - * [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/) ## Aplicaciones Stateless * [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/) - * [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/) ## Aplicaciones Stateful * [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/) - * [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/) - * [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/) - * [Running 
ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/) ## Clústers * [AppArmor](/docs/tutorials/clusters/apparmor/) - * [Seccomp](/docs/tutorials/clusters/seccomp/) ## Servicios * [Using Source IP](/docs/tutorials/services/source-ip/) - - ## {{% heading "whatsnext" %}} - Si quieres escribir un tutorial, revisa [utilizando templates](/docs/home/contribute/page-templates/) para obtener información sobre el tipo de página y la plantilla de los tutotriales. - - diff --git a/content/fr/docs/concepts/storage/volumes.md b/content/fr/docs/concepts/storage/volumes.md index e78e7ce76b3..c2fdf583cb4 100644 --- a/content/fr/docs/concepts/storage/volumes.md +++ b/content/fr/docs/concepts/storage/volumes.md @@ -857,7 +857,7 @@ Vous devez créer un secret dans l'API Kubernetes avant de pouvoir l'utiliser. Un conteneur utilisant un secret en tant que point de montage de volume [subPath](#using-subpath) ne recevra pas les mises à jour des secrets. {{< /note >}} -Les secrets sont décrits plus en détails [ici](/docs/user-guide/secrets). +Les secrets sont décrits plus en détails [ici](/docs/concepts/configuration/secret/). ### storageOS {#storageos} diff --git a/content/fr/docs/contribute/generate-ref-docs/federation-api.md b/content/fr/docs/contribute/generate-ref-docs/federation-api.md index 30bb6c525d8..cc99bdc9a62 100644 --- a/content/fr/docs/contribute/generate-ref-docs/federation-api.md +++ b/content/fr/docs/contribute/generate-ref-docs/federation-api.md @@ -15,7 +15,7 @@ Cette page montre comment générer automatiquement des pages de référence pou * Vous devez avoir [Git](https://git-scm.com/book/fr/v2/D%C3%A9marrage-rapide-Installation-de-Git) installé. -* Vous devez avoir [Golang](https://golang.org/doc/install) version 1.9.1 ou ultérieur installé, et votre variable d'environnement `$GOPATH` doit être définie. 
+* Vous devez avoir [Golang](https://go.dev/doc/install) version 1.9.1 ou ultérieur installé, et votre variable d'environnement `$GOPATH` doit être définie. * Vous devez avoir [Docker](https://docs.docker.com/engine/installation/) installé. diff --git a/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md b/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md index b039d19c8b9..bf7954500aa 100644 --- a/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md +++ b/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md @@ -16,7 +16,7 @@ Cette page montre comment mettre à jour les documents de référence générés Vous devez avoir ces outils installés: * [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) -* [Golang](https://golang.org/doc/install) version 1.9.1 ou ultérieur +* [Golang](https://go.dev/doc/install) version 1.9.1 ou ultérieur * [Docker](https://docs.docker.com/engine/installation/) * [etcd](https://github.com/coreos/etcd/) diff --git a/content/hi/_index.html b/content/hi/_index.html index 2009559be3b..b1faf30bcc5 100644 --- a/content/hi/_index.html +++ b/content/hi/_index.html @@ -8,7 +8,7 @@ sitemap: {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}} -[कुबेरनेट्स]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}), जो K8s के रूप में भी जाना जाता है, कंटेनरीकृत एप्लीकेशन के डिप्लॉयमेंट, स्केलिंग और प्रबंधन को स्वचालित करने के लिए एक ओपन-सोर्स सिस्टम है। +[कुबेरनेट्स]({{< relref "/docs/concepts/overview/_index.md" >}}), जो K8s के रूप में भी जाना जाता है, कंटेनरीकृत एप्लीकेशन के डिप्लॉयमेंट, स्केलिंग और प्रबंधन को स्वचालित करने के लिए एक ओपन-सोर्स सिस्टम है। यह आसान प्रबंधन और खोज के लिए लॉजिकल इकाइयों में एक एप्लीकेशन बनाने वाले कंटेनरों को समूहित करता है। कुबेरनेट्स [Google में उत्पादन कार्यभार चलाने के 15 वर्षों के अनुभव](http://queue.acm.org/detail.cfm?id=2898444) पर निर्माणित है, जो समुदाय के सर्वोत्तम-नस्लीय विचारों और प्रथाओं के साथ संयुक्त है। {{% /blocks/feature 
%}} diff --git a/content/hi/docs/concepts/overview/what-is-kubernetes.md b/content/hi/docs/concepts/overview/_index.md similarity index 99% rename from content/hi/docs/concepts/overview/what-is-kubernetes.md rename to content/hi/docs/concepts/overview/_index.md index bb59092d2b3..6268b8761b7 100644 --- a/content/hi/docs/concepts/overview/what-is-kubernetes.md +++ b/content/hi/docs/concepts/overview/_index.md @@ -1,5 +1,5 @@ --- -title: कुबेरनेट्स क्या है? +title: अवलोकन description: > कुबेरनेट्स कंटेनरीकृत वर्कलोड और सेवाओं के प्रबंधन के लिए एक पोर्टेबल, एक्स्टेंसिबल, ओपन-सोर्स प्लेटफॉर्म है, जो घोषणात्मक कॉन्फ़िगरेशन और स्वचालन दोनों की सुविधा प्रदान करता है। इसका एक बड़ा, तेजी से बढ़ता हुआ पारिस्थितिकी तंत्र है। कुबेरनेट्स सेवाएँ, समर्थन और उपकरण व्यापक रूप से उपलब्ध हैं। content_type: concept diff --git a/content/hi/docs/setup/_index.md b/content/hi/docs/setup/_index.md index 26416b0c680..33034b66aec 100644 --- a/content/hi/docs/setup/_index.md +++ b/content/hi/docs/setup/_index.md @@ -8,9 +8,9 @@ card: name: setup weight: 20 anchors: - - anchor: "#सीखने-का-वातावरण" + - anchor: "#learning-environment" title: सीखने का वातावरण - - anchor: "#प्रोडक्शन-वातावरण" + - anchor: "#production-environment" title: प्रोडक्शन वातावरण --- @@ -25,13 +25,13 @@ card: -## सीखने का वातावरण +## सीखने का वातावरण {#learning-environment} यदि आप कुबेरनेट्स सीख रहे हैं, तो कुबेरनेट्स समुदाय द्वारा समर्थित टूल का उपयोग करें, या स्थानीय मशीन पर कुबेरनेट्स क्लस्टर सेटअप करने के लिए इकोसिस्टम में उपलब्ध टूल का उपयोग करें। [इंस्टॉल टूल्स](/hi/docs/tasks/tools/) देखें। -## प्रोडक्शन वातावरण +## प्रोडक्शन वातावरण {#production-environment} [प्रोडक्शन वातावरण](/hi/docs/setup/production-environment/) के लिए समाधान का मूल्यांकन करते समय, विचार करें कि कुबेरनेट्स क्लस्टर के किन पहलुओं (या _abstractions_) का संचालन आप स्वयं प्रबंधित करना चाहते हैं और किसे आप एक प्रदाता को सौंपना पसंद करते हैं। diff --git a/content/hi/docs/tasks/tools/install-kubectl-linux.md 
b/content/hi/docs/tasks/tools/install-kubectl-linux.md index e099ad5ab7a..b11abf29437 100644 --- a/content/hi/docs/tasks/tools/install-kubectl-linux.md +++ b/content/hi/docs/tasks/tools/install-kubectl-linux.md @@ -44,7 +44,7 @@ Linux पर kubectl संस्थापित करने के लिए kubectl चेकसम फाइल डाउनलोड करें: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" ``` चेकसम फ़ाइल से kubectl बाइनरी को मान्य करें: @@ -199,7 +199,7 @@ kubectl Bash और Zsh के लिए ऑटोकम्प्लेशन kubectl-convert चेकसम फ़ाइल डाउनलोड करें: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" ``` चेकसम फ़ाइल से kubectl-convert बाइनरी को मान्य करें: diff --git a/content/hi/docs/tutorials/_index.md b/content/hi/docs/tutorials/_index.md index 5aa154622e4..a0af44af676 100644 --- a/content/hi/docs/tutorials/_index.md +++ b/content/hi/docs/tutorials/_index.md @@ -52,7 +52,7 @@ content_type: concept * [AppArmor](/docs/tutorials/clusters/apparmor/) -* [seccomp](/docs/tutorials/clusters/seccomp/) +* [Seccomp](/docs/tutorials/clusters/seccomp/) ## सर्विस diff --git a/content/ja/docs/tutorials/security/cluster-level-pss.md b/content/ja/docs/tutorials/security/cluster-level-pss.md index 34622a32d94..1ae58454384 100644 --- a/content/ja/docs/tutorials/security/cluster-level-pss.md +++ b/content/ja/docs/tutorials/security/cluster-level-pss.md @@ -20,7 +20,7 @@ v{{< skew currentVersion >}}以外のKubernetesバージョンを実行してい ワークステーションに以下をインストールしてください: -- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) +- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) - [kubectl](/ja/docs/tasks/tools/) 
このチュートリアルでは、完全な制御下にあるKubernetesクラスターの何を設定できるかをデモンストレーションします。 @@ -230,7 +230,7 @@ v{{< skew currentVersion >}}以外のKubernetesバージョンを実行してい ``` {{}} - macOSでDocker DesktopとKinDを利用している場合は、**Preferences > Resources > File Sharing**のメニュー項目からShared Directoryとして`/tmp`を追加できます。 + macOSでDocker Desktopと*kind*を利用している場合は、**Preferences > Resources > File Sharing**のメニュー項目からShared Directoryとして`/tmp`を追加できます。 {{}} 1. 目的のPodセキュリティの標準を適用するために、Podセキュリティアドミッションを使うクラスターを作成します: diff --git a/content/pl/_index.html b/content/pl/_index.html index 06312f71979..a903381473c 100644 --- a/content/pl/_index.html +++ b/content/pl/_index.html @@ -47,12 +47,12 @@ Kubernetes jako projekt open-source daje Ci wolność wyboru ⏤ skorzystaj z pr

    - Weź udział w KubeCon + CloudNativeCon Europe 18-21.04.2023 -
    -
    -
    -
    Weź udział w KubeCon + CloudNativeCon North America 6-9.11.2023 +
    +
    +
    +
    + Weź udział w KubeCon + CloudNativeCon Europe 19-22.03.2024
    diff --git a/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 679431374a9..de08f48e374 100644 --- a/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -133,7 +133,7 @@ Para mais detalhes sobre compatibilidade entre as versões, veja: ```shell sudo apt-get update - sudo apt-get install -y apt-transport-https ca-certificates curl + sudo apt-get install -y apt-transport-https ca-certificates curl gpg ``` 2. Faça o download da chave de assinatura pública da Google Cloud: diff --git a/content/ru/docs/contribute/generate-ref-docs/contribute-upstream.md b/content/ru/docs/contribute/generate-ref-docs/contribute-upstream.md index ba529883efe..ba9a74a20a2 100644 --- a/content/ru/docs/contribute/generate-ref-docs/contribute-upstream.md +++ b/content/ru/docs/contribute/generate-ref-docs/contribute-upstream.md @@ -22,7 +22,7 @@ weight: 20 - Установленные инструменты: - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - - [Golang](https://golang.org/doc/install) версии 1.13+ + - [Golang](https://go.dev/doc/install) версии 1.13+ - [Docker](https://docs.docker.com/engine/installation/) - [etcd](https://github.com/coreos/etcd/) - [make](https://www.gnu.org/software/make/) diff --git a/content/zh-cn/blog/_posts/2019-04-24-Hardware-Accelerated-SSLTLS-Termination-in-Ingress-Controllers-using-Kubernetes-Device-Plugins-and-RuntimeClass.md b/content/zh-cn/blog/_posts/2019-04-24-Hardware-Accelerated-SSLTLS-Termination-in-Ingress-Controllers-using-Kubernetes-Device-Plugins-and-RuntimeClass.md index e5876c0ac7b..94a10b5d6c0 100644 --- a/content/zh-cn/blog/_posts/2019-04-24-Hardware-Accelerated-SSLTLS-Termination-in-Ingress-Controllers-using-Kubernetes-Device-Plugins-and-RuntimeClass.md +++ 
b/content/zh-cn/blog/_posts/2019-04-24-Hardware-Accelerated-SSLTLS-Termination-in-Ingress-Controllers-using-Kubernetes-Device-Plugins-and-RuntimeClass.md @@ -171,12 +171,12 @@ underlying host devices. 基于 PCIe 的加密加速设备功能 可以受益于 IO 硬件虚拟化,通过 I/O 内存管理单元(IOMMU),提供隔离:IOMMU 将设备分组,为工作负载提供隔离的资源 (假设加密卡不与其他设备共享 **IOMMU 组**)。如果PCIe设备支持单根 I/O 虚拟化(SR-IOV)规范,则可以进一步增加隔离资源的数量。 -SR-IOV 允许将 PCIe 设备将**物理功能项(Physical Functions,PF)**设备进一步拆分为 +SR-IOV 允许将 PCIe 设备将 **物理功能项(Physical Functions,PF)** 设备进一步拆分为 **虚拟功能项(Virtual Functions, VF)**,并且每个设备都属于自己的 IOMMU 组。 要将这些借助 IOMMU 完成隔离的设备功能项暴露给用户空间和容器,主机内核应该将它们绑定到特定的设备驱动程序。 在 Linux 中,这个驱动程序是 vfio-pci, 它通过字符设备将设备提供给用户空间。内核 vfio-pci 驱动程序使用一种称为 -**PCI 透传(PCI Passthrough)**的机制, +**PCI 透传(PCI Passthrough)** 的机制, 为用户空间应用程序提供了对 PCIe 设备与功能项的直接的、IOMMU 支持的访问。 用户空间框架,如数据平面开发工具包(Data Plane Development Kit,DPDK)可以利用该接口。 此外,虚拟机(VM)管理程序可以向 VM 提供这些用户空间设备节点,并将它们作为 PCI 设备暴露给寄宿内核。 diff --git a/content/zh-cn/blog/_posts/2023-09-25-kubeadm-use-etcd-learner-mode.md b/content/zh-cn/blog/_posts/2023-09-25-kubeadm-use-etcd-learner-mode.md index a8f9f2b89d1..f1e0768e8bf 100644 --- a/content/zh-cn/blog/_posts/2023-09-25-kubeadm-use-etcd-learner-mode.md +++ b/content/zh-cn/blog/_posts/2023-09-25-kubeadm-use-etcd-learner-mode.md @@ -21,7 +21,7 @@ slug: kubeadm-use-etcd-learner-mode 1. **增强了弹性**:etcd learner 节点是非投票成员,在完全进入角色之前会追随领导者的日志。 这样可以防止新的集群成员干扰投票结果或引起领导者选举,从而使集群在成员变更期间更具弹性。 -2. **减少了集群不可用时间**:传统的添加新成员的方法通常会造成一段时间集群不可用,特别是在基础设施迟缓或误配的情况下更为明显。 +1. **减少了集群不可用时间**:传统的添加新成员的方法通常会造成一段时间集群不可用,特别是在基础设施迟缓或误配的情况下更为明显。 而 etcd learner 模式可以最大程度地减少此类干扰。 -3. **简化了维护**:learner 节点提供了一种更安全、可逆的方式来添加或替换集群成员。 +1. **简化了维护**:learner 节点提供了一种更安全、可逆的方式来添加或替换集群成员。 这降低了由于误配或在成员添加过程中出错而导致集群意外失效的风险。 -4. **改进了网络容错性**:在涉及网络分区的场景中,learner 模式允许更优雅的处理。 +1. 
**改进了网络容错性**:在涉及网络分区的场景中,learner 模式允许更优雅的处理。 根据新成员所落入的分区,它可以无缝地与现有集群集成,而不会造成中断。 要检查 Kubernetes 控制平面是否健康,运行 `kubectl get node -l node-role.kubernetes.io/control-plane=` 并检查节点是否就绪。 -注意:建议在 etcd 集群中的成员个数为奇数。 +{{< note >}} + +建议在 etcd 集群中的成员个数为奇数。 +{{< /note >}} + 在将工作节点接入新的 Kubernetes 集群之前,确保控制平面节点健康。 ## 接下来的步骤 {#whats-next} @@ -190,7 +194,9 @@ Before joining a worker node to the new Kubernetes cluster, ensure that the cont Was this guide helpful? If you have any feedback or encounter any issues, please let us know. Your feedback is always welcome! Join the bi-weekly [SIG Cluster Lifecycle meeting](https://docs.google.com/document/d/1Gmc7LyCIL_148a9Tft7pdhdee0NBHdOfHS1SAF0duI4/edit) -or weekly [kubeadm office hours](https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit). Or reach us via [Slack](https://slack.k8s.io/) (channel **#kubeadm**), or the [SIG's mailing list](https://groups.google.com/g/kubernetes-sig-cluster-lifecycle). +or weekly [kubeadm office hours](https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit). +Or reach us via [Slack](https://slack.k8s.io/) (channel **#kubeadm**), or the +[SIG's mailing list](https://groups.google.com/g/kubernetes-sig-cluster-lifecycle). --> ## 反馈 {#feedback} diff --git a/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md b/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md index da2ed61df06..f0edbee2766 100644 --- a/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md +++ b/content/zh-cn/docs/concepts/configuration/manage-resources-containers.md @@ -37,12 +37,12 @@ that system resource specifically for that container to use. 
{{< glossary_tooltip text="容器" term_id="container" >}}设定所需要的资源数量。 最常见的可设定资源是 CPU 和内存(RAM)大小;此外还有其他类型的资源。 -当你为 Pod 中的 Container 指定了资源 **request(请求)**时, +当你为 Pod 中的 Container 指定了资源 **request(请求)** 时, {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} 就利用该信息决定将 Pod 调度到哪个节点上。 -当你为 Container 指定了资源 **limit(限制)**时,{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} +当你为 Container 指定了资源 **limit(限制)** 时,{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 就可以确保运行的容器不会使用超出所设限制的资源。 -kubelet 还会为容器预留所 **request(请求)**数量的系统资源,供其使用。 +kubelet 还会为容器预留所 **request(请求)** 数量的系统资源,供其使用。 diff --git a/content/zh-cn/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md index 96529852df9..ba68b02a2f6 100644 --- a/content/zh-cn/docs/concepts/configuration/secret.md +++ b/content/zh-cn/docs/concepts/configuration/secret.md @@ -307,7 +307,7 @@ and structure the Secret type to have your domain name before the name, separate by a `/`. For example: `cloud-hosting.example.net/cloud-api-credentials`. --> 如果你要定义一种公开使用的 Secret 类型,请遵守 Secret 类型的约定和结构, -在类型名签名添加域名,并用 `/` 隔开。 +在类型名前面添加域名,并用 `/` 隔开。 例如:`cloud-hosting.example.net/cloud-api-credentials`。 diff --git a/content/zh-cn/docs/concepts/storage/persistent-volumes.md b/content/zh-cn/docs/concepts/storage/persistent-volumes.md index 37405bb767f..3eeca1fd519 100644 --- a/content/zh-cn/docs/concepts/storage/persistent-volumes.md +++ b/content/zh-cn/docs/concepts/storage/persistent-volumes.md @@ -338,15 +338,15 @@ An administrator can manually reclaim the volume with the following steps. -1. 删除 PersistentVolume 对象。与之相关的、位于外部基础设施中的存储资产 - (例如 AWS EBS 或 GCE PD 卷)在 PV 删除之后仍然存在。 +1. 删除 PersistentVolume 对象。与之相关的、位于外部基础设施中的存储资产在 + PV 删除之后仍然存在。 1. 根据情况,手动清除所关联的存储资产上的数据。 1. 手动删除所关联的存储资产。 @@ -357,8 +357,7 @@ the same storage asset definition. 
For volume plugins that support the `Delete` reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated -storage asset in the external infrastructure, such as an AWS EBS or GCE PD volume. -Volumes that were dynamically provisioned +storage asset in the external infrastructure. Volumes that were dynamically provisioned inherit the [reclaim policy of their StorageClass](#reclaim-policy), which defaults to `Delete`. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or @@ -368,7 +367,7 @@ patched after it is created. See #### 删除(Delete) {#delete} 对于支持 `Delete` 回收策略的卷插件,删除动作会将 PersistentVolume 对象从 -Kubernetes 中移除,同时也会从外部基础设施(如 AWS EBS 或 GCE PD 卷)中移除所关联的存储资产。 +Kubernetes 中移除,同时也会从外部基础设施中移除所关联的存储资产。 动态制备的卷会继承[其 StorageClass 中设置的回收策略](#reclaim-policy), 该策略默认为 `Delete`。管理员需要根据用户的期望来配置 StorageClass; 否则 PV 卷被创建之后必须要被编辑或者修补。 @@ -478,7 +477,7 @@ Access Modes: RWO VolumeMode: Filesystem Capacity: 1Gi Node Affinity: -Message: +Message: Source: Type: vSphereVolume (a Persistent Disk resource in vSphere) VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk @@ -506,7 +505,7 @@ Access Modes: RWO VolumeMode: Filesystem Capacity: 200Mi Node Affinity: -Message: +Message: Source: Type: CSI (a Container Storage Interface (CSI) volume source) Driver: csi.vsphere.vmware.com @@ -617,14 +616,12 @@ the following types of volumes: * azureFile (deprecated) * {{< glossary_tooltip text="csi" term_id="csi" >}} * flexVolume (deprecated) -* gcePersistentDisk (deprecated) * rbd * portworxVolume (deprecated) --> * azureFile(已弃用) * {{< glossary_tooltip text="csi" term_id="csi" >}} * flexVolume(已弃用) -* gcePersistentDisk(已弃用) * rbd * portworxVolume(已弃用) @@ -744,15 +741,6 @@ FlexVolume resize is possible only when the underlying driver supports resize. 
FlexVolume 卷的重设大小只能在下层驱动支持重设大小的时候才可进行。 {{< /note >}} -{{< note >}} - -扩充 EBS 卷的操作非常耗时。同时还存在另一个配额限制: -每 6 小时只能执行一次(尺寸)修改操作。 -{{< /note >}} - > **重要提醒!** 每个卷同一时刻只能以一种访问模式挂载,即使该卷能够支持多种访问模式。 -> 例如,一个 GCEPersistentDisk 卷可以被某节点以 ReadWriteOnce -> 模式挂载,或者被多个节点以 ReadOnlyMany 模式挂载,但不可以同时以两种模式挂载。 ### 回收策略 {#reclaim-policy} 目前的回收策略有: * Retain -- 手动回收 -* Recycle -- 基本擦除 (`rm -rf /thevolume/*`) -* Delete -- 诸如 AWS EBS 或 GCE PD 卷这类关联存储资产也被删除 +* Recycle -- 简单擦除 (`rm -rf /thevolume/*`) +* Delete -- 删除关联存储资产 -目前,仅 NFS 和 HostPath 支持回收(Recycle)。 -AWS EBS 和 GCE PD 卷支持删除(Delete)。 +对于 Kubernetes {{< skew currentVersion >}} 来说,只有 +`nfs` 和 `hostPath` 卷类型支持回收(Recycle)。 -对大多数类型的卷而言,你不需要设置节点亲和性字段。 -[GCE PD](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) 卷类型能自动设置相关字段。 -你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local) 卷显式地设置此属性。 +对大多数卷类型而言,你不需要设置节点亲和性字段。 +你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local) +卷显式地设置此属性。 {{< /note >}} * CSI * FC(光纤通道) -* GCEPersistentDisk(已弃用) * iSCSI * Local 卷 * OpenStack Cinder @@ -2232,4 +2206,3 @@ Read about the APIs described in this page: * [`PersistentVolume`](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/) * [`PersistentVolumeClaim`](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/) - diff --git a/content/zh-cn/docs/concepts/storage/storage-classes.md b/content/zh-cn/docs/concepts/storage/storage-classes.md index 58d759130c2..6d6d39d7aae 100644 --- a/content/zh-cn/docs/concepts/storage/storage-classes.md +++ b/content/zh-cn/docs/concepts/storage/storage-classes.md @@ -103,7 +103,7 @@ Note that certain cloud providers may already define a default StorageClass. 
当一个 PVC 没有指定 `storageClassName` 时,会使用默认的 StorageClass。 集群中只能有一个默认的 StorageClass。如果不小心设置了多个默认的 StorageClass, -当 PVC 动态配置时,将使用最新设置的默认 StorageClass。 +在动态制备 PVC 时将使用其中最新的默认设置的 StorageClass。 关于如何设置默认的 StorageClass, 请参见[更改默认 StorageClass](/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/)。 @@ -130,7 +130,6 @@ for provisioning PVs. This field must be specified. | CephFS | - | - | | FC | - | - | | FlexVolume | - | - | -| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) | | iSCSI | - | - | | NFS | - | [NFS](#nfs) | | RBD | ✓ | [Ceph RBD](#ceph-rbd) | @@ -208,14 +207,16 @@ PersistentVolume 可以配置为可扩展。将此功能设置为 `true` 时, 当下层 StorageClass 的 `allowVolumeExpansion` 字段设置为 true 时,以下类型的卷支持卷扩展。 -{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}} + +{{< table caption = "卷类型及其 Kubernetes 版本要求" >}} | 卷类型 | Kubernetes 版本要求 | | :------------------- | :------------------------ | -| gcePersistentDisk | 1.11 | | rbd | 1.11 | | Azure File | 1.11 | | Portworx | 1.11 | @@ -292,24 +293,12 @@ PersistentVolume 会根据 Pod 调度约束指定的拓扑来选择或制备。 [Pod 亲和性和互斥性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity/)、 以及[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration)。 - -以下插件支持动态制备的 `WaitForFirstConsumer` 模式: - -- [GCEPersistentDisk](#gce-pd) - 以下插件支持预创建绑定 PersistentVolume 的 `WaitForFirstConsumer` 模式: -- 上述全部 - [Local](#local) {{< feature-state state="stable" for_k8s_version="v1.17" >}} @@ -485,85 +474,6 @@ parameters: `zone` 和 `zones` 已被弃用并被[允许的拓扑结构](#allowed-topologies)取代。 {{< /note >}} -### GCE PD {#gce-pd} - -```yaml -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: slow -provisioner: kubernetes.io/gce-pd -parameters: - type: pd-standard - fstype: ext4 - replication-type: none -``` - - -- `type`:`pd-standard` 或者 `pd-ssd`。默认:`pd-standard` -- `zone`(已弃用):GCE 区域。如果没有指定 `zone` 和 `zones`, - 通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。 - `zone` 和 `zones` 参数不能同时使用。 -- `zones`(已弃用):逗号分隔的 GCE 区域列表。如果没有指定 
`zone` 和 `zones`, - 通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度(round-robin)分配。 - `zone` 和 `zones` 参数不能同时使用。 -- `fstype`:`ext4` 或 `xfs`。默认:`ext4`。宿主机操作系统必须支持所定义的文件系统类型。 -- `replication-type`:`none` 或者 `regional-pd`。默认值:`none`。 - - -如果 `replication-type` 设置为 `none`,会制备一个常规(当前区域内的)持久化磁盘。 - - -如果 `replication-type` 设置为 `regional-pd`, -会制备一个[区域性持久化磁盘(Regional Persistent Disk)](https://cloud.google.com/compute/docs/disks/#repds)。 - -强烈建议设置 `volumeBindingMode: WaitForFirstConsumer`,这样设置后, -当你创建一个 Pod,它使用的 PersistentVolumeClaim 使用了这个 StorageClass, -区域性持久化磁盘会在两个区域里制备。其中一个区域是 Pod 所在区域, -另一个区域是会在集群管理的区域中任意选择,磁盘区域可以通过 `allowedTopologies` 加以限制。 - -{{< note >}} - -`zone` 和 `zones` 已被弃用并被 [allowedTopologies](#allowed-topologies) 取代。 -当启用 [GCE CSI 迁移](/zh-cn/docs/concepts/storage/volumes/#gce-csi-migration)时, -GCE PD 卷可能被制备在某个与所有节点都不匹配的拓扑域中,但任何尝试使用该卷的 Pod 都无法被调度。 -对于传统的迁移前 GCE PD,这种情况下将在制备卷的时候产生错误。 -从 Kubernetes 1.23 版本开始,GCE CSI 迁移默认启用。 -{{< /note >}} - ### NFS {#nfs} ```yaml diff --git a/content/zh-cn/docs/concepts/storage/volume-snapshot-classes.md b/content/zh-cn/docs/concepts/storage/volume-snapshot-classes.md index ae9137add50..c5d21dbca02 100644 --- a/content/zh-cn/docs/concepts/storage/volume-snapshot-classes.md +++ b/content/zh-cn/docs/concepts/storage/volume-snapshot-classes.md @@ -116,7 +116,7 @@ then both the underlying snapshot and VolumeSnapshotContent remain. 
--> ### 删除策略 {#deletion-policy} -卷快照类具有 [deletionPolicy] 属性(/zh-cn/docs/concepts/storage/volume-snapshots/#delete)。 +卷快照类具有 [deletionPolicy](/zh-cn/docs/concepts/storage/volume-snapshots/#delete) 属性。 用户可以配置当所绑定的 VolumeSnapshot 对象将被删除时,如何处理 VolumeSnapshotContent 对象。 卷快照类的这个策略可以是 `Retain` 或者 `Delete`。这个策略字段必须指定。 diff --git a/content/zh-cn/docs/concepts/storage/volumes.md b/content/zh-cn/docs/concepts/storage/volumes.md index 82679539fe7..55220375f78 100644 --- a/content/zh-cn/docs/concepts/storage/volumes.md +++ b/content/zh-cn/docs/concepts/storage/volumes.md @@ -139,7 +139,7 @@ AWSElasticBlockStore 树内存储驱动已在 Kubernetes v1.19 版本中废弃 并在 v1.27 版本中被完全移除。 Kubernetes 项目建议你转为使用 [AWS EBS](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) -第三方存储驱动。 +第三方存储驱动插件。 -Kubernetes 项目建议你转为使用 [CephFS CSI](https://github.com/ceph/ceph-csi) 第三方存储驱动。 +Kubernetes 项目建议你转为使用 [CephFS CSI](https://github.com/ceph/ceph-csi) +第三方存储驱动插件。 {{< /note >}} -### gcePersistentDisk(已弃用) {#gcepersistentdisk} +### gcePersistentDisk(已移除) {#gcepersistentdisk} -{{< feature-state for_k8s_version="v1.17" state="deprecated" >}} - -`gcePersistentDisk` 卷能将谷歌计算引擎 (GCE) [持久盘(PD)](http://cloud.google.com/compute/docs/disks) -挂载到你的 Pod 中。 -不像 `emptyDir` 那样会在 Pod 被删除的同时也会被删除,持久盘卷的内容在删除 Pod -时会被保留,卷只是被卸载了。 -这意味着持久盘卷可以被预先填充数据,并且这些数据可以在 Pod 之间共享。 - -{{< note >}} - -在使用 PD 前,你必须使用 `gcloud` 或者 GCE API 或 UI 创建它。 -{{< /note >}} +Kubernetes {{< skew currentVersion >}} 不包含 `gcePersistentDisk` 卷类型。 -使用 `gcePersistentDisk` 时有一些限制: - -* 运行 Pod 的节点必须是 GCE VM -* 这些 VM 必须和持久盘位于相同的 GCE 项目和区域(zone) +`gcePersistentDisk` 源代码树内卷存储驱动在 Kubernetes v1.17 版本中被弃用,在 v1.28 版本中被完全移除。 -GCE PD 的一个特点是它们可以同时被多个消费者以只读方式挂载。 -这意味着你可以用数据集预先填充 PD,然后根据需要并行地在尽可能多的 Pod 中提供该数据集。 -不幸的是,PD 只能由单个使用者以读写模式挂载,即不允许同时写入。 - - -在由 ReplicationController 所管理的 Pod 上使用 GCE PD 将会失败,除非 PD -是只读模式或者副本的数量是 0 或 1。 - - -#### 创建 GCE 持久盘(PD) {#gce-create-persistent-disk} - -在 Pod 中使用 GCE 持久盘之前,你首先要创建它。 - -```shell -gcloud compute disks create --size=500GB --zone=us-central1-a 
my-data-disk -``` - - -#### GCE 持久盘配置示例 {#gce-pd-configuration-example} - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: test-pd -spec: - containers: - - image: registry.k8s.io/test-webserver - name: test-container - volumeMounts: - - mountPath: /test-pd - name: test-volume - volumes: - - name: test-volume - # 此 GCE PD 必须已经存在 - gcePersistentDisk: - pdName: my-data-disk - fsType: ext4 -``` - -#### 区域持久盘 {#regional-persistent-disks} - - -[区域持久盘](https://cloud.google.com/compute/docs/disks/#repds)特性允许你创建能在同一区域的两个可用区中使用的持久盘。 -要使用这个特性,必须以持久卷(PersistentVolume)的方式提供卷;直接从 -Pod 引用这种卷是不可以的。 - - -#### 手动制备基于区域 PD 的 PersistentVolume {#manually-provisioning-regional-pd-pv} - -使用[为 GCE PD 定义的存储类](/zh-cn/docs/concepts/storage/storage-classes/#gce-pd) -可以实现动态制备。在创建 PersistentVolume 之前,你首先要创建 PD。 - -```shell -gcloud compute disks create --size=500GB my-data-disk - --region us-central1 - --replica-zones us-central1-a,us-central1-b -``` - - -#### 区域持久盘配置示例 - - -```yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: test-volume -spec: - capacity: - storage: 400Gi - accessModes: - - ReadWriteOnce - gcePersistentDisk: - pdName: my-data-disk - fsType: ext4 - nodeAffinity: - required: - nodeSelectorTerms: - - matchExpressions: - # failure-domain.beta.kubernetes.io/zone 应在 1.21 之前使用 - - key: topology.kubernetes.io/zone - operator: In - values: - - us-central1-a - - us-central1-b -``` +Kubernetes 项目建议你转为使用 +[Google Compute Engine Persistent Disk CSI](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) +第三方存储驱动插件。 -#### GCE CSI 迁移完成 - -{{< feature-state for_k8s_version="v1.21" state="alpha" >}} - -要禁止控制器管理器和 kubelet 加载 `gcePersistentDisk` 存储插件,请将 -`InTreePluginGCEUnregister` 标志设置为 `true`。 - - ### gitRepo (已弃用) {#gitrepo} {{< warning >}} @@ -1160,12 +1011,11 @@ for an example of mounting NFS volumes with PersistentVolumes. 
`persistentVolumeClaim` 卷用来将[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)(PersistentVolume)挂载到 Pod 中。 -持久卷申领(PersistentVolumeClaim)是用户在不知道特定云环境细节的情况下“申领”持久存储(例如 -GCE PersistentDisk 或者 iSCSI 卷)的一种方法。 +持久卷申领(PersistentVolumeClaim)是用户在不知道特定云环境细节的情况下“申领”持久存储(例如 iSCSI 卷)的一种方法。 -Kubernetes 项目建议你转为以 RBD 模式使用 [Ceph CSI](https://github.com/ceph/ceph-csi) 第三方存储驱动。 +Kubernetes 项目建议你转为以 RBD 模式使用 [Ceph CSI](https://github.com/ceph/ceph-csi) +第三方存储驱动插件。 {{< /note >}} * [`azureFile`](/zh-cn/docs/concepts/storage/volumes/#azurefile) -* [`gcePersistentDisk`](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) * [`vsphereVolume`](/zh-cn/docs/concepts/storage/volumes/#vspherevolume) \ No newline at end of file diff --git a/content/zh-cn/docs/concepts/windows/intro.md b/content/zh-cn/docs/concepts/windows/intro.md index 05f8f002850..77480fbbb6a 100644 --- a/content/zh-cn/docs/concepts/windows/intro.md +++ b/content/zh-cn/docs/concepts/windows/intro.md @@ -469,8 +469,8 @@ The following list documents differences between how Pod specifications work bet 请参考 [GitHub issue](https://github.com/moby/moby/issues/25982)。 目前的行为是通过 CTRL_SHUTDOWN_EVENT 发送 ENTRYPOINT 进程,然后 Windows 默认等待 5 秒, 最后使用正常的 Windows 关机行为终止所有进程。 - 5 秒默认值实际上位于[容器内](https://github.com/moby/moby/issues/25982#issuecomment-426441183)的 Windows 注册表中, - 因此在构建容器时可以覆盖这个值。 + 5 秒默认值实际上位于[容器内](https://github.com/moby/moby/issues/25982#issuecomment-426441183)的 + Windows 注册表中,因此在构建容器时可以覆盖这个值。 * `volumeDevices` - 这是一个 beta 版功能特性,未在 Windows 上实现。 Windows 无法将原始块设备挂接到 Pod。 * `volumes` @@ -594,7 +594,7 @@ The following container runtimes work with Windows: You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ as the container runtime for Kubernetes nodes that run Windows. -Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd). 
+Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd). --> ### ContainerD {#containerd} @@ -603,7 +603,7 @@ Learn how to [install ContainerD on a Windows node](/docs/setup/production-envir 对于运行 Windows 的 Kubernetes 节点,你可以使用 {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ 作为容器运行时。 -学习如何[在 Windows 上安装 ContainerD](/zh-cn/docs/setup/production-environment/container-runtimes/#install-containerd)。 +学习如何[在 Windows 上安装 ContainerD](/zh-cn/docs/setup/production-environment/container-runtimes/#containerd)。 {{< note >}} + + {{< note >}} 警告、小心和注释短代码可以嵌套在列表中,但是要缩进四个空格。 参见[常见短代码问题](#common-shortcode-issues)。 {{< /note >}} @@ -777,9 +778,9 @@ The output is: 1. A fourth item in a list --> -1. 列表中第三个条目 +3. 列表中第三个条目 -1. 列表中第四个条目 +4. 列表中第四个条目 +早期版本的 `kubectl` 内置了对 AKS 和 GKE 的认证支持,但这一功能已不再存在。 +{{< /note >}} + ### 对主体的引用 {#referring-to-subjects} -RoleBinding 或者 ClusterRoleBinding 可绑定角色到某**主体(Subject)**上。 +RoleBinding 或者 ClusterRoleBinding 可绑定角色到某**主体(Subject)** 上。 主体可以是组,用户或者{{< glossary_tooltip text="服务账户" term_id="service-account" >}}。 Kubernetes 用字符串来表示用户名。 diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md b/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md index 91083124259..57a1abcb486 100644 --- a/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md @@ -258,6 +258,7 @@ For a reference to old feature gates that are removed, please refer to | `TopologyManagerPolicyBetaOptions` | `true` | Beta | 1.28 | | | `TopologyManagerPolicyOptions` | `false` | Alpha | 1.26 | 1.27 | | `TopologyManagerPolicyOptions` | `true` | Beta | 1.28 | | +| `UnauthenticatedHTTP2DOSMitigation` | `false` | Beta | 1.28 | | | `UnknownVersionInteroperabilityProxy` | `false` | Alpha | 1.28 | | | `UserNamespacesSupport` | `false` | Alpha | 1.28 | | | 
`ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | 1.27 | @@ -1272,6 +1273,9 @@ Each feature gate is designed for enabling/disabling a specific feature: 此特性门控绝对不会进阶至稳定版。 - `TopologyManagerPolicyOptions`:允许微调拓扑管理器策略。 +- `UnauthenticatedHTTP2DOSMitigation`:此功能针对未认证客户端启用 HTTP/2 拒绝服务(DoS)防御机制。 + 值得注意的是,Kubernetes v1.28.0 至 v1.28.2 版本中并未包含这一功能。 - `UnknownVersionInteroperabilityProxy`:存在多个不同版本的 kube-apiserver 时将资源请求代理到正确的对等 kube-apiserver。 详细信息请参阅[混合版本代理](/zh-cn/docs/concepts/architecture/mixed-version-proxy/)。 - `UserNamespacesSupport`:为 Pod 启用用户名字空间支持。 diff --git a/content/zh-cn/docs/reference/glossary/downstream.md b/content/zh-cn/docs/reference/glossary/downstream.md index e88c38e1a64..f112a7d75ac 100644 --- a/content/zh-cn/docs/reference/glossary/downstream.md +++ b/content/zh-cn/docs/reference/glossary/downstream.md @@ -34,5 +34,5 @@ tags: * In the **Kubernetes Community**: Conversations often use *downstream* to mean the ecosystem, code, or third-party tools that rely on the core Kubernetes codebase. For example, a new feature in Kubernetes may be adopted by applications *downstream* to improve their functionality. * In **GitHub** or **git**: The convention is to refer to a forked repo as *downstream*, whereas the source repo is considered *upstream*. 
--> -* 在 **Kubernetes 社区**中:**下游(downstream)**在人们交流中常用来表示那些依赖核心 Kubernetes 代码库的生态系统、代码或者第三方工具。例如,Kubernetes 的一个新特性可以被**下游(downstream)**应用采用,以提升它们的功能性。 -* 在 **GitHub** 或 **git** 中:约定用**下游(downstream)**表示分支代码库,源代码库被认为是**上游(upstream)**。 +* 在 **Kubernetes 社区**中:**下游(downstream)** 在人们交流中常用来表示那些依赖核心 Kubernetes 代码库的生态系统、代码或者第三方工具。例如,Kubernetes 的一个新特性可以被**下游(downstream)** 应用采用,以提升它们的功能性。 +* 在 **GitHub** 或 **git** 中:约定用**下游(downstream)** 表示分支代码库,源代码库被认为是**上游(upstream)**。 diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index ea48228d23d..3733b9c106b 100644 --- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -36,7 +36,7 @@ The `kubeadm` tool is good if you need: - A building block in other ecosystem and/or installer tools with a larger scope. --> -kubeadm 工具很棒,如果你需要: +`kubeadm` 工具很棒,如果你需要: - 一个尝试 Kubernetes 的简单方法。 - 一个现有用户可以自动设置集群并测试其应用程序的途径。 @@ -48,9 +48,8 @@ of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the cloud or on-premises, you can integrate `kubeadm` into provisioning systems such as Ansible or Terraform. --> -你可以在各种机器上安装和使用 `kubeadm`:笔记本电脑, -一组云服务器,Raspberry Pi 等。无论是部署到云还是本地, -你都可以将 `kubeadm` 集成到预配置系统中,例如 Ansible 或 Terraform。 +你可以在各种机器上安装和使用 `kubeadm`:笔记本电脑、一组云服务器、Raspberry Pi 等。 +无论是部署到云还是本地,你都可以将 `kubeadm` 集成到 Ansible 或 Terraform 这类预配置系统中。 ## {{% heading "prerequisites" %}} @@ -83,7 +82,8 @@ applies to `kubeadm` as well as to Kubernetes overall. Check that policy to learn about what versions of Kubernetes and `kubeadm` are supported. This page is written for Kubernetes {{< param "version" >}}. 
--> -[Kubernetes 版本及版本偏差策略](/zh-cn/releases/version-skew-policy/#supported-versions)适用于 `kubeadm` 以及整个 Kubernetes。 +[Kubernetes 版本及版本偏差策略](/zh-cn/releases/version-skew-policy/#supported-versions)适用于 +`kubeadm` 以及整个 Kubernetes。 查阅该策略以了解支持哪些版本的 Kubernetes 和 `kubeadm`。 该页面是为 Kubernetes {{< param "version" >}} 编写的。 @@ -127,7 +127,7 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev #### Component installation --> -### 主机准备 {#preparing-the-hosts} +### 主机准备 {#preparing-the-hosts} #### 安装组件 {#component-installation} @@ -135,7 +135,7 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts. For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/). --> -在所有主机上安装 {{< glossary_tooltip term_id="container-runtime" text="容器运行时" >}} 和 kubeadm。 +在所有主机上安装{{< glossary_tooltip term_id="container-runtime" text="容器运行时" >}}和 kubeadm。 详细说明和其他前提条件,请参见[安装 kubeadm](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)。 {{< note >}} @@ -160,7 +160,7 @@ After you initialize your control-plane, the kubelet runs normally. #### Network setup kubeadm similarly to other Kubernetes components tries to find a usable IP on -the network interface associated with the default gateway on a host. Such +the network interfaces associated with a default gateway on a host. Such an IP is then used for the advertising and/or listening performed by a component. 
--> #### 网络设置 {#network-setup} @@ -181,19 +181,39 @@ ip route show # Look for a line starting with "default via" ip route show # 查找以 "default via" 开头的行 ``` +{{< note >}} + +如果主机上存在两个或多个默认网关,Kubernetes 组件将尝试使用遇到的第一个具有合适全局单播 IP 地址的网关。 +在做出这个选择时,网关的确切顺序可能因不同的操作系统和内核版本而有所差异。 +{{< /note >}} + +Kubernetes 组件不接受自定义网络接口作为选项,因此必须将自定义 IP +地址作为标志传递给所有需要此自定义配置的组件实例。 +{{< note >}} + +如果主机没有默认网关,并且没有将自定义 IP 地址传递给 Kubernetes 组件,此组件可能会因错误而退出。 +{{< /note >}} + + -Kubernetes 组件不接受自定义网络接口作为选项,因此必须将自定义 IP -地址作为标志传递给所有需要此自定义配置的组件实例。 - 要为使用 `init` 或 `join` 创建的控制平面节点配置 API 服务器的公告地址, 你可以使用 `--apiserver-advertise-address` 标志。 最好在 [kubeadm API](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3)中使用 @@ -214,18 +234,18 @@ For dual-stack see 有关双协议栈细节参见[使用 kubeadm 支持双协议栈](/zh-cn/docs/setup/production-environment/tools/kubeadm/dual-stack-support)。 -{{< note >}} -IP 地址成为证书 SAN 字段的一部分。更改这些 IP 地址将需要签署新的证书并重启受影响的组件, +你分配给控制平面组件的 IP 地址将成为其 X.509 证书的使用者备用名称字段的一部分。 +更改这些 IP 地址将需要签署新的证书并重启受影响的组件, 以便反映证书文件中的变化。有关此主题的更多细节参见 [手动续期证书](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)。 -{{}} {{< warning >}} -如果主机没有默认网关,则建议设置一个默认网关。 -否则,在不传递自定义 IP 地址给 Kubernetes 组件的情况下,此组件将退出并报错。 -如果主机上存在两个或多个默认网关,则 Kubernetes -组件将尝试使用所遇到的第一个具有合适全局单播 IP 地址的网关。 -在做出此选择时,网关的确切顺序可能因不同的操作系统和内核版本而有所差异。 -{{< /note >}} - @@ -278,15 +282,15 @@ Kubeadm allows you to use a custom image repository for the required images. See [Using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images) for more details. 
--> -这个步骤是可选的,只适用于你希望 `kubeadm init` 和 `kubeadm join` 不去下载存放在 `registry.k8s.io` 上的默认的容器镜像的情况。 +这个步骤是可选的,只适用于你希望 `kubeadm init` 和 `kubeadm join` 不去下载存放在 +`registry.k8s.io` 上的默认容器镜像的情况。 当你在离线的节点上创建一个集群的时候,Kubeadm 有一些命令可以帮助你预拉取所需的镜像。 阅读[离线运行 kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#without-internet-connection) 获取更多的详情。 Kubeadm 允许你给所需要的镜像指定一个自定义的镜像仓库。 -阅读[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images) -获取更多的详情。 +阅读[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init#custom-images)获取更多的详情。 -`--apiserver-advertise-address` 可用于为控制平面节点的 API server 设置广播地址, +`--apiserver-advertise-address` 可用于为控制平面节点的 API 服务器设置广播地址, `--control-plane-endpoint` 可用于为所有控制平面节点设置共享端点。 其中 `192.168.0.102` 是此节点的 IP 地址,`cluster-endpoint` 是映射到该 IP 的自定义 DNS 名称。 -这将允许你将 `--control-plane-endpoint=cluster-endpoint` 传递给 `kubeadm init`,并将相同的 DNS 名称传递给 `kubeadm join`。 -稍后你可以修改 `cluster-endpoint` 以指向高可用性方案中的负载均衡器的地址。 +这将允许你将 `--control-plane-endpoint=cluster-endpoint` 传递给 `kubeadm init`, +并将相同的 DNS 名称传递给 `kubeadm join`。稍后你可以修改 `cluster-endpoint` +以指向高可用性方案中的负载均衡器的地址。 `kubeadm init` 首先运行一系列预检查以确保机器为运行 Kubernetes 准备就绪。 这些预检查会显示警告并在错误时退出。然后 `kubeadm init` -下载并安装集群控制平面组件。这可能会需要几分钟。 -完成之后你应该看到: +下载并安装集群控制平面组件。这可能会需要几分钟。完成之后你应该看到: ```none Your Kubernetes control-plane has initialized successfully! @@ -528,8 +532,7 @@ This section contains important information about networking setup and deployment order. Read all of this advice carefully before proceeding. 
--> -本节包含有关网络设置和部署顺序的重要信息。 -在继续之前,请仔细阅读所有建议。 +本节包含有关网络设置和部署顺序的重要信息。在继续之前,请仔细阅读所有建议。 -**你必须部署一个基于 Pod 网络插件的 -{{< glossary_tooltip text="容器网络接口" term_id="cni" >}} -(CNI),以便你的 Pod 可以相互通信。 -在安装网络之前,集群 DNS (CoreDNS) 将不会启动。** +**你必须部署一个基于 Pod 网络插件的{{< glossary_tooltip text="容器网络接口" term_id="cni" >}}(CNI), +以便你的 Pod 可以相互通信。在安装网络之前,集群 DNS (CoreDNS) 将不会启动。** -- 注意你的 Pod 网络不得与任何主机网络重叠: - 如果有重叠,你很可能会遇到问题。 +- 注意你的 Pod 网络不得与任何主机网络重叠:如果有重叠,你很可能会遇到问题。 (如果你发现网络插件的首选 Pod 网络与某些主机网络之间存在冲突, 则应考虑使用一个合适的 CIDR 块来代替, 然后在执行 `kubeadm init` 时使用 `--pod-network-cidr` 参数并在你的网络插件的 YAML 中替换它)。 @@ -593,13 +593,15 @@ kubeadm 应该是与 CNI 无关的,对 CNI 驱动进行验证目前不在我 Several external projects provide Kubernetes Pod networks using CNI, some of which also support [Network Policy](/docs/concepts/services-networking/network-policies/). --> -一些外部项目为 Kubernetes 提供使用 CNI 的 Pod 网络,其中一些还支持[网络策略](/zh-cn/docs/concepts/services-networking/network-policies/)。 +一些外部项目为 Kubernetes 提供使用 CNI 的 Pod 网络, +其中一些还支持[网络策略](/zh-cn/docs/concepts/services-networking/network-policies/)。 -请参阅实现 [Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model)的附加组件列表。 +请参阅实现 +[Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model)的附加组件列表。 默认情况下,出于安全原因,你的集群不会在控制平面节点上调度 Pod。 -如果你希望能够在控制平面节点上调度 Pod,例如单机 Kubernetes 集群,请运行: +如果你希望能够在单机 Kubernetes 集群等控制平面节点上调度 Pod,请运行: ```bash kubectl taint nodes --all node-role.kubernetes.io/control-plane- @@ -718,7 +720,7 @@ The nodes are where your workloads (containers and Pods, etc) run. 
To add new no -如果没有令牌,可以通过在控制平面节点上运行以下命令来获取令牌: +如果你没有令牌,可以通过在控制平面节点上运行以下命令来获取令牌: ```bash kubeadm token list @@ -747,6 +749,7 @@ you can create a new token by running the following command on the control-plane ```bash kubeadm token create ``` + @@ -780,7 +783,8 @@ The output is similar to: -要为 `:` 指定 IPv6 元组,必须将 IPv6 地址括在方括号中,例如:`[2001:db8::101]:2073` +要为 `:` 指定 IPv6 元组,必须将 IPv6 +地址括在方括号中,例如 `[2001:db8::101]:2073`。 {{< /note >}} 由于集群节点通常是按顺序初始化的,CoreDNS Pod 很可能都运行在第一个控制面节点上。 -为了提供更高的可用性,请在加入至少一个新节点后 -使用 `kubectl -n kube-system rollout restart deployment coredns` 命令,重新平衡这些 CoreDNS Pod。 +为了提供更高的可用性,请在加入至少一个新节点后使用 +`kubectl -n kube-system rollout restart deployment coredns` 命令,重新平衡这些 CoreDNS Pod。 {{< /note >}} -如果要从集群外部连接到 API 服务器,则可以使用 `kubectl proxy`: +如果你要从集群外部连接到 API 服务器,则可以使用 `kubectl proxy`: ```bash scp root@:/etc/kubernetes/admin.conf . @@ -879,7 +883,7 @@ kubectl --kubeconfig ./admin.conf proxy -你现在可以在本地访问 API 服务器 `http://localhost:8001/api/v1`。 +你现在可以在 `http://localhost:8001/api/v1` 从本地访问 API 服务器。 -### 删除节点 {#remove-the-node} +### 移除节点 {#remove-the-node} -使用适当的凭证与控制平面节点通信,运行: ```bash kubectl drain --delete-emptydir-data --force --ignore-daemonsets ``` +--> +使用适当的凭据与控制平面节点通信,运行: + +```bash +kubectl drain <节点名称> --delete-emptydir-data --force --ignore-daemonsets +``` -在删除节点之前,请重置 `kubeadm` 安装的状态: +在移除节点之前,请重置 `kubeadm` 安装的状态: ```bash kubeadm reset @@ -946,8 +954,12 @@ ipvsadm -C ``` -现在删除节点: +现在移除节点: ```bash kubectl delete node <节点名称> @@ -1002,7 +1014,8 @@ options. an overview of what is involved. 
--> * 使用 [Sonobuoy](https://github.com/heptio/sonobuoy) 验证集群是否正常运行。 -* 有关使用 kubeadm 升级集群的详细信息,请参阅[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。 +* 有关使用 kubeadm 升级集群的详细信息, + 请参阅[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。 * 在 [kubeadm 参考文档](/zh-cn/docs/reference/setup-tools/kubeadm/)中了解有关 `kubeadm` 进阶用法的信息。 * 了解有关 Kubernetes [概念](/zh-cn/docs/concepts/)和 [`kubectl`](/zh-cn/docs/reference/kubectl/)的更多信息。 * 有关 Pod 网络附加组件的更多列表,请参见[集群网络](/zh-cn/docs/concepts/cluster-administration/networking/)页面。 @@ -1029,10 +1042,10 @@ options. * 有关漏洞,访问 [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues) * 有关支持,访问 [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack 频道 -* General SIG 集群生命周期开发 Slack 频道: +* 常规的 SIG Cluster Lifecycle 开发 Slack 频道: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/) -* SIG 集群生命周期 [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme) -* SIG 集群生命周期邮件列表: +* SIG Cluster Lifecycle 的 [SIG 资料](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme) +* SIG Cluster Lifecycle 邮件列表: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle) -`kubeadm upgrade` 的例子: +`kubeadm upgrade` 的例子: * 用于创建或升级节点的 kubeadm 版本为 {{< skew currentVersionAddMinor -1 >}}。 * 用于升级节点的 kubeadm 版本必须为 {{< skew currentVersionAddMinor -1 >}} 或 {{< skew currentVersion >}}。 @@ -1151,8 +1164,8 @@ or {{< skew currentVersion >}} To learn more about the version skew between the different Kubernetes component see the [Version Skew Policy](/releases/version-skew-policy/). --> -要了解更多关于不同 Kubernetes 组件之间的版本偏差,请参见 -[版本偏差策略](/zh-cn/releases/version-skew-policy/)。 +要了解更多关于不同 Kubernetes 组件之间的版本偏差, +请参见[版本偏差策略](/zh-cn/releases/version-skew-policy/)。 这可能是由许多问题引起的。最常见的是: @@ -240,10 +241,12 @@ provider. 
Please contact the author of the Pod Network add-on to find out whethe Calico, Canal, and Flannel CNI providers are verified to support HostPort. -For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md). +For more information, see the +[CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md). -If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of -services](/docs/concepts/services-networking/service/#type-nodeport) or use `HostNetwork=true`. +If your network provider does not support the portmap CNI plugin, you may need to use the +[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport) +or use `HostNetwork=true`. --> ## `HostPort` 服务无法工作 @@ -267,9 +270,10 @@ services](/docs/concepts/services-networking/service/#type-nodeport) or use `Hos add-on provider to get the latest status of their support for hairpin mode. - If you are using VirtualBox (directly or via Vagrant), you will need to - ensure that `hostname -i` returns a routable IP address. By default the first + ensure that `hostname -i` returns a routable IP address. By default, the first interface is connected to a non-routable host-only network. A work around - is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11) + is to modify `/etc/hosts`, see this + [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11) for an example. --> ## 无法通过其服务 IP 访问 Pod @@ -301,12 +305,14 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. 
The `base64 --decode` command can be used to decode the certificate and `openssl x509 -text -noout` can be used for viewing the certificate information. + - Unset the `KUBECONFIG` environment variable using: --> - 验证 `$HOME/.kube/config` 文件是否包含有效证书, 并在必要时重新生成证书。在 kubeconfig 文件中的证书是 base64 编码的。 该 `base64 --decode` 命令可以用来解码证书,`openssl x509 -text -noout` 命令可以用于查看证书信息。 + - 使用如下方法取消设置 `KUBECONFIG` 环境变量的值: ```shell @@ -328,7 +334,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( - 另一个方法是覆盖 `kubeconfig` 的现有用户 "管理员": ```shell - mv $HOME/.kube $HOME/.kube.bak + mv $HOME/.kube $HOME/.kube.bak mkdir $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config @@ -337,7 +343,8 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( @@ -401,11 +408,15 @@ Error from server (NotFound): the server could not find the requested resource ``` - 如果你正在 Vagrant 中使用 flannel 作为 Pod 网络,则必须指定 flannel 的默认接口名称。 @@ -417,7 +428,8 @@ Error from server (NotFound): the server could not find the requested resource ## 容器使用的非公共 IP @@ -428,10 +440,15 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6 ``` - 这或许是由于 Kubernetes 使用的 IP 无法与看似相同的子网上的其他 IP 进行通信的缘故, 可能是由机器提供商的政策所导致的。 @@ -471,8 +488,8 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6 CoreDNS 处于 `CrashLoopBackOff` 时的另一个原因是当 Kubernetes 中部署的 CoreDNS Pod 检测到环路时。 @@ -526,7 +544,7 @@ rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:24 ``` ## 节点上的 `/usr` 被以只读方式挂载 {#usr-mounted-read-only} @@ -648,13 +682,19 @@ for the feature to work. 
类似 kubelet 和 kube-controller-manager 这类 Kubernetes 组件使用默认路径 `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, 而 FlexVolume 的目录 **必须是可写入的**,该功能特性才能正常工作。 -(**注意**:FlexVolume 在 Kubernetes v1.23 版本中已被弃用) + +{{< note >}} + +FlexVolume 在 Kubernetes v1.23 版本中已被弃用。 +{{< /note >}} 为了解决这个问题,你可以使用 kubeadm 的[配置文件](/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/)来配置 @@ -700,7 +740,10 @@ be advised that this is modifying a design principle of the Linux distribution. @@ -767,3 +810,110 @@ Also see [How to run the metrics-server securely](https://github.com/kubernetes- 以进一步了解如何在 kubeadm 集群中配置 kubelet 使用正确签名了的服务证书。 另请参阅 [How to run the metrics-server securely](https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-to-run-metrics-server-securely)。 + + +## 因 etcd 哈希值无变化而升级失败 {#upgrade-fails-due-to-etcd-hash-not-changing} + +仅适用于通过 kubeadm 二进制文件 v1.28.3 或更高版本升级控制平面节点的情况, +其中此节点当前由 kubeadm v1.28.0、v1.28.1 或 v1.28.2 管理。 + +以下是你可能遇到的错误消息: + +``` +[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition +[upgrade/etcd] Waiting for previous etcd to become available +I0907 10:10:09.109104 3704 etcd.go:588] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.6:2379/ https://172.17.0.4:2379/ https://172.17.0.3:2379/]) are available 1/10 +[upgrade/etcd] Etcd was rolled back and is now available +static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition +couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. 
Errors faced +k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.rollbackOldManifests + cmd/kubeadm/app/phases/upgrade/staticpods.go:525 +k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.upgradeComponent + cmd/kubeadm/app/phases/upgrade/staticpods.go:254 +k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade + cmd/kubeadm/app/phases/upgrade/staticpods.go:338 +... +``` + + +本次失败的原因是受影响的版本在 PodSpec 中生成的 etcd 清单文件带有不需要的默认值。 +这将导致与清单比较的差异,并且 kubeadm 预期 Pod 哈希值将发生变化,但 kubelet 永远不会更新哈希值。 + +如果你在集群中遇到此问题,有两种解决方法: + +- 可以运行以下命令跳过 etcd 的版本升级,即受影响版本和 v1.28.3(或更高版本)之间的版本升级: + + ```shell + kubeadm upgrade {apply|node} [version] --etcd-upgrade=false + ``` + + 但不推荐这种方法,因为后续的 v1.28 补丁版本可能引入新的 etcd 版本。 + + +- 在升级之前,对 etcd 静态 Pod 的清单进行修补,以删除有问题的默认属性: + + ```patch + diff --git a/etc/kubernetes/manifests/etcd_defaults.yaml b/etc/kubernetes/manifests/etcd_origin.yaml + index d807ccbe0aa..46b35f00e15 100644 + --- a/etc/kubernetes/manifests/etcd_defaults.yaml + +++ b/etc/kubernetes/manifests/etcd_origin.yaml + @@ -43,7 +43,6 @@ spec: + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + - successThreshold: 1 + timeoutSeconds: 15 + name: etcd + resources: + @@ -59,26 +58,18 @@ spec: + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + - successThreshold: 1 + timeoutSeconds: 15 + - terminationMessagePath: /dev/termination-log + - terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/lib/etcd + name: etcd-data + - mountPath: /etc/kubernetes/pki/etcd + name: etcd-certs + - dnsPolicy: ClusterFirst + - enableServiceLinks: true + hostNetwork: true + priority: 2000001000 + priorityClassName: system-node-critical + - restartPolicy: Always + - schedulerName: default-scheduler + securityContext: + seccompProfile: + type: RuntimeDefault + - terminationGracePeriodSeconds: 30 + volumes: + - hostPath: + path: /etc/kubernetes/pki/etcd + ``` + + +有关此错误的更多信息,请查阅[此问题的跟踪页面](https://github.com/kubernetes/kubeadm/issues/2927)。 diff --git 
a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md index bb3560efebc..2b44064c5d0 100644 --- a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md +++ b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md @@ -183,18 +183,25 @@ kubeadm init phase certs --config -要在 `/etc/kubernetes/manifests` 中编写新的清单文件,你可以使用: +要在 `/etc/kubernetes/manifests` 中编写新的清单文件,你可以使用以下命令: + ```shell +# Kubernetes 控制平面组件 kubeadm init phase control-plane --config +# 本地 etcd +kubeadm init phase etcd local --config ``` `` 内容必须与更新后的 `ClusterConfiguration` 匹配。 -`` 值必须是组件的名称。 +`` 值必须是一个控制平面组件(`apiserver`、`controller-manager` 或 `scheduler`)的名称。 {{< note >}} +当 kubelet 使用 HTTP 探测 Pod 时,仅当重定向到同一主机时,它才会遵循重定向。 +如果 kubelet 在探测期间收到 11 个或更多重定向,则认为探测成功并创建相关事件: + +```none +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 29m default-scheduler Successfully assigned default/httpbin-7b8bc9cb85-bjzwn to daocloud + Normal Pulling 29m kubelet Pulling image "docker.io/kennethreitz/httpbin" + Normal Pulled 24m kubelet Successfully pulled image "docker.io/kennethreitz/httpbin" in 5m12.402735213s + Normal Created 24m kubelet Created container httpbin + Normal Started 24m kubelet Started container httpbin + Warning ProbeWarning 4m11s (x1197 over 24m) kubelet Readiness probe warning: Probe terminated redirects +``` + + +如果 kubelet 收到主机名与请求不同的重定向,则探测结果将被视为成功,并且 +kubelet 将创建一个事件来报告重定向失败。 +{{< /note >}} + ConfigMap 概念允许你将配置清单与镜像内容分离,以保持容器化的应用程序的可移植性。 例如,你可以下载并运行相同的{{< glossary_tooltip text="容器镜像" term_id="image" >}}来启动容器, @@ -133,8 +134,10 @@ symlinks, devices, pipes, and more). 
{{< note >}} 用于创建 ConfigMap 的每个文件名必须由可接受的字符组成,即:字母(`A` 到 `Z` 和 `a` 到 `z`)、数字(`0` 到 `9`)、'-'、'_'或'.'。 @@ -162,6 +165,16 @@ Now, download the sample configuration and create the ConfigMap: --> 现在,下载示例的配置并创建 ConfigMap: + ```shell # 将示例文件下载到 `configure-pod-container/configmap/` 目录 wget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties @@ -360,6 +373,28 @@ Use the option `--from-env-file` to create a ConfigMap from an env-file, for exa --> 使用 `--from-env-file` 选项基于 env 文件创建 ConfigMap,例如: + ```shell # Env 文件包含环境变量列表。其中适用以下语法规则: # 这些语法规则适用: @@ -378,7 +413,7 @@ enemies=aliens lives=3 allowed="true" -# This comment and the empty line above it are ignored +# 此注释和上方的空行将被忽略 ``` ```shell @@ -595,13 +630,17 @@ For example, to generate a ConfigMap from files `configure-pod-container/configm 例如,要基于 `configure-pod-container/configmap/game.properties` 文件生成一个 ConfigMap: + ```shell # 创建包含 ConfigMapGenerator 的 kustomization.yaml 文件 cat <./kustomization.yaml configMapGenerator: - name: game-config-4 - labels: - game-config: config-4 + options: + labels: + game-config: config-4 files: - configure-pod-container/configmap/game.properties EOF @@ -680,13 +719,28 @@ with the key `game-special-key` 例如,从 `configure-pod-container/configmap/game.properties` 文件生成 ConfigMap, 但使用 `game-special-key` 作为键名: + ```shell # 创建包含 ConfigMapGenerator 的 kustomization.yaml 文件 cat <./kustomization.yaml configMapGenerator: - name: game-config-5 - labels: - game-config: config-5 + options: + labels: + game-config: config-5 files: - game-special-key=configure-pod-container/configmap/game.properties EOF @@ -719,6 +773,9 @@ this, you can specify the `ConfigMap` generator. Create (or replace) 为了实现这一点,你可以配置 `ConfigMap` 生成器。 创建(或替换)`kustomization.yaml`,使其具有以下内容。 + ```yaml --- # 基于字面创建 ConfigMap 的 kustomization.yaml 内容 @@ -784,7 +841,8 @@ section, and learn how to use these objects with Pods. ``` 2. 
将 ConfigMap 中定义的 `special.how` 赋值给 Pod 规约中的 `SPECIAL_LEVEL_KEY` 环境变量。 @@ -964,7 +1022,7 @@ kubectl delete pod dapi-test-pod --now ## Add ConfigMap data to a Volume As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you create -a ConfigMap using ``--from-file``, the filename becomes a key stored in the `data` section of +a ConfigMap using `--from-file`, the filename becomes a key stored in the `data` section of the ConfigMap. The file contents become the key's value. --> ## 将 ConfigMap 数据添加到一个卷中 {#add-configmap-data-to-a-volume} @@ -1027,7 +1085,8 @@ SPECIAL_TYPE 文本数据会展现为 UTF-8 字符编码的文件。如果使用其他字符编码, 可以使用 `binaryData`(详情参阅 [ConfigMap 对象](/zh-cn/docs/concepts/configuration/configmap/#configmap-object))。 @@ -1117,7 +1176,7 @@ guide explains the syntax. 使用 ConfigMap 作为 [subPath](/zh-cn/docs/concepts/storage/volumes/#using-subpath) 卷的容器将不会收到 ConfigMap 更新。 @@ -1204,6 +1264,25 @@ ConfigMap 的 `data` 字段包含配置数据。如下例所示,它可以简 (如用 `--from-literal` 的单个属性定义)或复杂 (如用 `--from-file` 的配置文件或 JSON blob 定义)。 + ```yaml apiVersion: v1 kind: ConfigMap @@ -1264,6 +1343,9 @@ as optional: --> 例如,以下 Pod 规约将 ConfigMap 中的环境变量标记为可选: + ```yaml apiVersion: v1 kind: Pod @@ -1305,6 +1387,9 @@ example, the following Pod specification marks a volume that references a Config 此时 Kubernetes 将总是为卷创建挂载路径,即使引用的 ConfigMap 或键不存在。 例如,以下 Pod 规约将所引用得 ConfigMap 的卷标记为可选: + ```yaml apiVersion: v1 kind: Pod @@ -1390,6 +1475,9 @@ Delete the ConfigMaps and Pods that you made: --> 删除你创建那些的 ConfigMap 和 Pod: + ```bash kubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \ configmaps/game-config-env-file diff --git a/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md b/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md index 41c4f2d01a6..f607bd41902 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md +++ 
b/content/zh-cn/docs/tasks/configure-pod-container/configure-service-account.md @@ -33,7 +33,7 @@ server, you identify yourself as a particular _user_. Kubernetes recognises the concept of a user, however, Kubernetes itself does **not** have a User API. --> -**服务账号(Service Account)**为 Pod 中运行的进程提供身份标识, +**服务账号(Service Account)** 为 Pod 中运行的进程提供身份标识, 并映射到 ServiceAccount 对象。当你向 API 服务器执行身份认证时, 你会将自己标识为某个**用户(User)**。Kubernetes 能够识别用户的概念, 但是 Kubernetes 自身**并不**提供 User API。 diff --git a/content/zh-cn/docs/tasks/configure-pod-container/static-pod.md b/content/zh-cn/docs/tasks/configure-pod-container/static-pod.md index 0b2a246e370..32b579ec481 100644 --- a/content/zh-cn/docs/tasks/configure-pod-container/static-pod.md +++ b/content/zh-cn/docs/tasks/configure-pod-container/static-pod.md @@ -133,13 +133,12 @@ For example, this is how to start a simple web server as a static Pod: 2. 选择一个目录,比如在 `/etc/kubernetes/manifests` 目录来保存 Web 服务 Pod 的定义文件,例如 `/etc/kubernetes/manifests/static-web.yaml`: - ```shell # 在 kubelet 运行的节点上执行以下命令 mkdir -p /etc/kubernetes/manifests/ diff --git a/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md b/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md index 5778f17ac9a..af4363c2060 100644 --- a/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md +++ b/content/zh-cn/docs/tasks/debug/debug-cluster/local-debugging.md @@ -45,11 +45,11 @@ running on a remote cluster locally. 
* Kubernetes 集群安装完毕
* 配置好 `kubectl` 与集群交互
-* [Telepresence](https://www.telepresence.io/docs/latest/install/) 安装完毕
+* [Telepresence](https://www.telepresence.io/docs/latest/quick-start/) 安装完毕
diff --git a/content/zh-cn/docs/tasks/tools/included/verify-kubectl.md b/content/zh-cn/docs/tasks/tools/included/verify-kubectl.md
index d1ebc6c5f8d..6a5c6b7c759 100644
--- a/content/zh-cn/docs/tasks/tools/included/verify-kubectl.md
+++ b/content/zh-cn/docs/tasks/tools/included/verify-kubectl.md
@@ -67,3 +67,29 @@ If kubectl cluster-info returns the url response but you can't access your cluster
 ```shell
 kubectl cluster-info dump
 ```
+
+### 排查"找不到身份验证提供商"的错误信息 {#no-auth-provider-found}
+
+在 Kubernetes 1.26 中,kubectl 删除了以下云提供商托管的 Kubernetes 产品的内置身份验证。
+这些提供商已经发布了 kubectl 插件来提供特定于云的身份验证。
+有关说明,请参阅以下提供商文档:
+
+* Azure AKS:[kubelogin 插件](https://azure.github.io/kubelogin/)
+* GKE:[gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
+
+(还可能有其他原因导致出现相同的错误信息,与这个更改无关。)
diff --git a/content/zh-cn/docs/tutorials/security/cluster-level-pss.md b/content/zh-cn/docs/tutorials/security/cluster-level-pss.md
index 6fead0ce0e0..a5a491bb784 100644
--- a/content/zh-cn/docs/tutorials/security/cluster-level-pss.md
+++ b/content/zh-cn/docs/tutorials/security/cluster-level-pss.md
@@ -42,6 +42,7 @@ Pod 安全是一个准入控制器,当新的 Pod 被创建时,它会根据 K
 请查阅该版本的文档。

 ## {{% heading "prerequisites" %}}
+
在 default 名字空间下创建一个 Pod: + {{% code_sample file="security/example-baseline-pod.yaml" %}} + ```shell kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml ``` diff --git a/content/zh-cn/examples/security/example-baseline-pod.yaml b/content/zh-cn/examples/security/example-baseline-pod.yaml new file mode 100644 index 00000000000..eca57ea4de8 --- /dev/null +++ b/content/zh-cn/examples/security/example-baseline-pod.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + containers: + - image: nginx + name: nginx + ports: + - containerPort: 80 diff --git a/content/zh-cn/examples/validatingadmissionpolicy/replicalimit-param.yaml b/content/zh-cn/examples/validatingadmissionpolicy/replicalimit-param.yaml index 813bc7b3345..9d8ceee2201 100644 --- a/content/zh-cn/examples/validatingadmissionpolicy/replicalimit-param.yaml +++ b/content/zh-cn/examples/validatingadmissionpolicy/replicalimit-param.yaml @@ -2,5 +2,5 @@ apiVersion: rules.example.com/v1 kind: ReplicaLimit metadata: name: "replica-limit-test.example.com" - namesapce: "default" -maxReplicas: 3 \ No newline at end of file + namespace: "default" +maxReplicas: 3 diff --git a/content/zh-cn/releases/patch-releases.md b/content/zh-cn/releases/patch-releases.md index b6471733b08..12bfb33788c 100644 --- a/content/zh-cn/releases/patch-releases.md +++ b/content/zh-cn/releases/patch-releases.md @@ -149,25 +149,17 @@ releases may also occur in between these. 
-| 月度补丁发布 | Cherry Pick 截止日期 | 目标日期 | -|--------------|---------------------|------------| -| 2023 年 9 月 | 2023-09-08 | 2023-09-13 | -| 2023 年 10 月 | 2023-10-13 | 2023-10-18 | -| 2023 年 11 月 | N/A | N/A | -| 2023 年 12 月 | 2023-12-01 | 2023-12-06 | - - -**注意:**由于与 KubeCon NA 2023 时间冲突以及由此导致的缺少 Release Manager, -我们决定在 11 月跳过补丁版本发布。而是在 12 月初发布补丁版本。 +| 月度补丁发布 | Cherry Pick 截止日期 | 目标日期 | +|--------------|---------------------|-------------| +| 2023 年 12 月 | 2023-12-15 | 2023-12-19 | +| 2024 年 1 月 | 2024-01-12 | 2024-01-17 | +| 2024 年 2 月 | 2024-02-09 | 2024-02-14 | +| 2024 年 3 月 | 2024-03-08 | 2024-03-13 |
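The kubeadm troubleshooting changes earlier in this diff note that certificates in `$HOME/.kube/config` are base64-encoded, and that `base64 --decode` plus `openssl x509 -text -noout` can be used to inspect them. A minimal self-contained sketch of that pipeline follows (assumptions: `openssl` and GNU `base64` are available; a throwaway self-signed certificate stands in for the real `client-certificate-data` value, so this does not need a running cluster):

```shell
# Generate a throwaway self-signed certificate to stand in for the one
# embedded in a kubeconfig (for illustration only; not a real cluster cert).
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -days 1 -nodes -subj "/CN=kubernetes-demo" 2>/dev/null

# In a kubeconfig, certificate data appears as a single base64-encoded line:
encoded=$(base64 < /tmp/demo-cert.pem | tr -d '\n')

# Decode and inspect it, the same way you would with the value copied
# from the client-certificate-data field of $HOME/.kube/config:
echo "$encoded" | base64 --decode | openssl x509 -noout -subject -dates
```

With a real kubeconfig, the same pipeline applies to the value of the `client-certificate-data` field; the `notAfter` date in the output is the quickest way to spot an expired client certificate before regenerating it.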