diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index ea469d42864..5b223b56fe4 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -12,7 +12,6 @@ aliases:
sig-docs-localization-owners: # Admins for localization content
- a-mccarthy
- divya-mohan0209
- - jimangel
- kbhawkey
- natalisucks
- onlydole
@@ -22,15 +21,11 @@ aliases:
- tengqm
sig-docs-de-owners: # Admins for German content
- bene2k1
- - mkorbi
- rlenferink
sig-docs-de-reviews: # PR reviews for German content
- bene2k1
- - mkorbi
- rlenferink
sig-docs-en-owners: # Admins for English content
- - annajung
- - bradtopol
- divya-mohan0209
- katcosgrove # RT 1.29 Docs Lead
- kbhawkey
@@ -112,18 +107,15 @@ aliases:
- atoato88
- bells17
- kakts
- - ptux
- t-inu
sig-docs-ko-owners: # Admins for Korean content
- gochist
- - ianychoi
- jihoon-seo
- seokho-son
- yoonian
- ysyukr
sig-docs-ko-reviews: # PR reviews for Korean content
- gochist
- - ianychoi
- jihoon-seo
- jmyung
- jongwooo
@@ -153,7 +145,6 @@ aliases:
sig-docs-zh-reviews: # PR reviews for Chinese content
- asa3311
- chenrui333
- - chenxuc
- howieyuen
# idealhack
- kinzhi
@@ -208,7 +199,6 @@ aliases:
- Arhell
- idvoretskyi
- MaxymVlasov
- - Potapy4
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- bentheelder
diff --git a/README-hi.md b/README-hi.md
index af64eae8f7c..958a44ead11 100644
--- a/README-hi.md
+++ b/README-hi.md
@@ -3,7 +3,7 @@
[](https://travis-ci.org/kubernetes/website)
[](https://github.com/kubernetes/website/releases/latest)
-स्वागत है! इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज़](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियां हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं!
+स्वागत है! इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियाँ हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं!
## डॉक्स में योगदान देना
@@ -37,8 +37,6 @@
> यदि आप विंडोज पर हैं, तो आपको कुछ और टूल्स की आवश्यकता होगी जिन्हें आप [Chocolatey](https://chocolatey.org) के साथ इंस्टॉल कर सकते हैं।
-> यदि आप डॉकर के बिना स्थानीय रूप से वेबसाइट चलाना पसंद करते हैं, तो नीचे Hugo का उपयोग करके स्थानीय रूप से साइट चलाना देखें।
-
यदि आप डॉकर के बिना स्थानीय रूप से वेबसाइट चलाना पसंद करते हैं, तो नीचे दिए गए Hugo का उपयोग करके स्थानीय रूप से [साइट को चलाने](#hugo-का-उपयोग-करते-हुए-स्थानीय-रूप-से-साइट-चलाना) का तरीका देखें।
यदि आप [डॉकर](https://www.docker.com/get-started) चला रहे हैं, तो स्थानीय रूप से `कुबेरनेट्स-ह्यूगो` Docker image बनाएँ:
diff --git a/content/en/_index.html b/content/en/_index.html
index 5df38bb4207..e615cae35bd 100644
--- a/content/en/_index.html
+++ b/content/en/_index.html
@@ -47,12 +47,12 @@ To download Kubernetes, visit the [download](/releases/download/) section.
diff --git a/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md b/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md
index da7e2b3ed6d..00ae52cd93a 100644
--- a/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md
+++ b/content/en/blog/_posts/2023-08-31-legacy-package-repository-deprecation/index.md
@@ -100,10 +100,12 @@ community-owned repositories (`pkgs.k8s.io`).
## Can I continue to use the legacy package repositories?
-The existing packages in the legacy repositories will be available for the foreseeable
+~~The existing packages in the legacy repositories will be available for the foreseeable
future. However, the Kubernetes project can't provide _any_ guarantees on how long
that will be. The deprecated legacy repositories, and their contents, might
-be removed at any time in the future and without a further notice period.
+be removed at any time in the future and without a further notice period.~~
+
+**UPDATE**: The legacy packages are expected to go away in January 2024.
The Kubernetes project **strongly recommends** migrating to the new community-owned
repositories **as soon as possible**.
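For Debian-based hosts, the migration this post recommends roughly follows the announced procedure; a hedged sketch, assuming the `v1.29` minor-version path and the conventional keyring location:

```shell
# Download the public signing key for the community-owned repositories
# (path and version below are illustrative; pick your own minor version)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Replace the legacy apt.kubernetes.io entry; note the trailing "/"
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
```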
diff --git a/content/en/docs/concepts/extend-kubernetes/flowchart.svg b/content/en/docs/concepts/extend-kubernetes/flowchart.svg
index b6044c4f49e..5ee7b6b5590 100644
--- a/content/en/docs/concepts/extend-kubernetes/flowchart.svg
+++ b/content/en/docs/concepts/extend-kubernetes/flowchart.svg
@@ -1,4 +1,4 @@
-
\ No newline at end of file
+
diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md
index 2fdd2638367..15b64e001a0 100644
--- a/content/en/docs/concepts/security/pod-security-standards.md
+++ b/content/en/docs/concepts/security/pod-security-standards.md
@@ -271,6 +271,7 @@ fail validation.
diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md
index ebef412106f..0d5caa4ef2f 100644
--- a/content/en/docs/concepts/storage/persistent-volumes.md
+++ b/content/en/docs/concepts/storage/persistent-volumes.md
@@ -185,7 +185,7 @@ and the volume is considered "released". But it is not yet available for
another claim because the previous claimant's data remains on the volume.
An administrator can manually reclaim the volume with the following steps.
-1. Delete the PersistentVolume. The associated storage asset in external infrastructure
+1. Delete the PersistentVolume. The associated storage asset in external infrastructure
still exists after the PV is deleted.
1. Manually clean up the data on the associated storage asset accordingly.
1. Manually delete the associated storage asset.
@@ -273,7 +273,7 @@ Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
-Message:
+Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk
@@ -298,7 +298,7 @@ Access Modes: RWO
VolumeMode: Filesystem
Capacity: 200Mi
Node Affinity:
-Message:
+Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
@@ -664,7 +664,7 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
{{< /note >}}
> __Important!__ A volume can only be mounted using one access mode at a time,
-> even if it supports many.
+> even if it supports many.
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
| :--- | :---: | :---: | :---: | - |
@@ -699,7 +699,7 @@ Current reclaim policies are:
* Retain -- manual reclamation
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
-* Delete -- associated storage asset
+* Delete -- delete the volume
For Kubernetes {{< skew currentVersion >}}, only `nfs` and `hostPath` volume types support recycling.
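The reclaim policy is set per PersistentVolume via `persistentVolumeReclaimPolicy`; a minimal sketch (name, server, and path are hypothetical) that retains the storage asset for manual reclamation:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the asset after the claim is deleted
  nfs:
    server: nfs.example.com   # hypothetical server
    path: /exports/data       # hypothetical export path
```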
@@ -731,7 +731,7 @@ it will become fully deprecated in a future Kubernetes release.
### Node Affinity
{{< note >}}
-For most volume types, you do not need to set this field.
+For most volume types, you do not need to set this field.
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
{{< /note >}}
@@ -1161,7 +1161,7 @@ users should be aware of:
When the `CrossNamespaceVolumeDataSource` feature is enabled, there are additional differences:
* The `dataSource` field only allows local objects, while the `dataSourceRef` field allows
- objects in any namespaces.
+  objects in any namespace.
* When namespace is specified, `dataSource` and `dataSourceRef` are not synced.
Users should always use `dataSourceRef` on clusters that have the feature gate enabled, and
diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md
index 01f4e304de2..8d230318656 100644
--- a/content/en/docs/concepts/windows/intro.md
+++ b/content/en/docs/concepts/windows/intro.md
@@ -320,8 +320,7 @@ The following container runtimes work with Windows:
You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+
as the container runtime for Kubernetes nodes that run Windows.
-Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd).
-
+Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd).
{{< note >}}
There is a [known limitation](/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations)
when using GMSA with containerd to access Windows network shares, which requires a
diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md
index a66b9526a27..8bd6cfa17da 100644
--- a/content/en/docs/contribute/_index.md
+++ b/content/en/docs/contribute/_index.md
@@ -47,8 +47,8 @@ pull request (PR) to the
[`kubernetes/website` GitHub repository](https://github.com/kubernetes/website).
You need to be comfortable with
[git](https://git-scm.com/) and
-[GitHub](https://lab.github.com/)
-to work effectively in the Kubernetes community.
+[GitHub](https://skills.github.com/)
+to work effectively in the Kubernetes community.
To get involved with documentation:
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index 41a2d557159..df10c03584f 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -350,7 +350,7 @@ kubectl logs my-pod # dump pod logs (stdout)
kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container
kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)
-kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout)
+kubectl logs -l name=myLabel -c my-container # dump pod container logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
kubectl logs -f my-pod # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index 7149897213b..61b62893288 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -89,7 +89,7 @@ After you initialize your control-plane, the kubelet runs normally.
#### Network setup
kubeadm, similarly to other Kubernetes components, tries to find a usable IP on
-the network interface associated with the default gateway on a host. Such
+the network interfaces associated with a default gateway on a host. Such
an IP is then used for the advertising and/or listening performed by a component.
To find out what this IP is on a Linux host you can use:
@@ -98,10 +98,22 @@ To find out what this IP is on a Linux host you can use:
ip route show # Look for a line starting with "default via"
```
+{{< note >}}
+If two or more default gateways are present on the host, a Kubernetes component will
+try to use the first one it encounters that has a suitable global unicast IP address.
+While making this choice, the exact ordering of gateways might vary between different
+operating systems and kernel versions.
+{{< /note >}}
+
Kubernetes components do not accept a custom network interface as an option,
therefore a custom IP address must be passed as a flag to all component instances
that need such a custom configuration.
+{{< note >}}
+If the host does not have a default gateway and if a custom IP address is not passed
+to a Kubernetes component, the component may exit with an error.
+{{< /note >}}
+
To configure the API server advertise address for control plane nodes created with both
`init` and `join`, the flag `--apiserver-advertise-address` can be used.
Preferably, this option can be set in the [kubeadm API](/docs/reference/config-api/kubeadm-config.v1beta3)
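A minimal sketch of that kubeadm API form, passed via `kubeadm init --config` (the `192.0.2.10` address is a placeholder):

```yaml
# kubeadm-config.yaml -- illustrative sketch; 192.0.2.10 is a placeholder address
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  # becomes the API server advertise address on this control plane node
  advertiseAddress: "192.0.2.10"
nodeRegistration:
  kubeletExtraArgs:
    # equivalent of passing --node-ip to the kubelet
    node-ip: "192.0.2.10"
```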
@@ -114,13 +126,12 @@ For kubelets on all nodes, the `--node-ip` option can be passed in
For dual-stack see
[Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support).
-{{< note >}}
-IP addresses become part of certificates SAN fields. Changing these IP addresses would require
+The IP addresses that you assign to control plane components become part of their X.509 certificates'
+subject alternative name fields. Changing these IP addresses would require
signing new certificates and restarting the affected components, so that the change in
certificate files is reflected. See
[Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)
for more details on this topic.
-{{ note >}}
{{< warning >}}
The Kubernetes project recommends against this approach (configuring all component instances
@@ -132,15 +143,6 @@ is a public IP address, you should configure packet filtering or other security
protect the nodes and your cluster.
{{< /warning >}}
-{{< note >}}
-If the host does not have a default gateway, it is recommended to setup one. Otherwise,
-without passing a custom IP address to a Kubernetes component, the component
-will exit with an error. If two or more default gateways are present on the host,
-a Kubernetes component will try to use the first one it encounters that has a suitable
-global unicast IP address. While making this choice, the exact ordering of gateways
-might vary between different operating systems and kernel versions.
-{{< /note >}}
-
### Preparing the required container images
This step is optional and only applies in case you wish `kubeadm init` and `kubeadm join`
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
index 23b3c978f6f..abd3f3e0e49 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
@@ -73,7 +73,8 @@ If you see the following warnings while running `kubeadm init`
[preflight] WARNING: ethtool not found in system path
```
-Then you may be missing `ebtables`, `ethtool` or a similar executable on your node. You can install them with the following commands:
+Then you may be missing `ebtables`, `ethtool` or a similar executable on your node.
+You can install them with the following commands:
- For Ubuntu/Debian users, run `apt install ebtables ethtool`.
- For CentOS/Fedora users, run `yum install ebtables ethtool`.
@@ -90,9 +91,9 @@ This may be caused by a number of problems. The most common are:
- network connection problems. Check that your machine has full network connectivity before continuing.
- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to
- configure it properly see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
+ configure it properly, see [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/).
- control plane containers are crashlooping or hanging. You can check this by running `docker ps`
- and investigating each container by running `docker logs`. For other container runtime see
+  and investigating each container by running `docker logs`. For other container runtimes, see
[Debugging Kubernetes nodes with crictl](/docs/tasks/debug/debug-cluster/crictl/).
## kubeadm blocks when removing managed containers
@@ -144,10 +145,12 @@ provider. Please contact the author of the Pod Network add-on to find out whethe
Calico, Canal, and Flannel CNI providers are verified to support HostPort.
-For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
+For more information, see the
+[CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
-If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
-services](/docs/concepts/services-networking/service/#type-nodeport) or use `HostNetwork=true`.
+If your network provider does not support the portmap CNI plugin, you may need to use the
+[NodePort feature of services](/docs/concepts/services-networking/service/#type-nodeport)
+or use `HostNetwork=true`.
## Pods are not accessible via their Service IP
@@ -157,9 +160,10 @@ services](/docs/concepts/services-networking/service/#type-nodeport) or use `Hos
add-on provider to get the latest status of their support for hairpin mode.
- If you are using VirtualBox (directly or via Vagrant), you will need to
- ensure that `hostname -i` returns a routable IP address. By default the first
+ ensure that `hostname -i` returns a routable IP address. By default, the first
interface is connected to a non-routable host-only network. A workaround
- is to modify `/etc/hosts`, see this [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
+  is to modify `/etc/hosts`; see this
+ [Vagrantfile](https://github.com/errordeveloper/k8s-playground/blob/22dd39dfc06111235620e6c4404a96ae146f26fd/Vagrantfile#L11)
for an example.
## TLS certificate errors
@@ -175,6 +179,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
regenerate a certificate if necessary. The certificates in a kubeconfig file
are base64 encoded. The `base64 --decode` command can be used to decode the certificate
and `openssl x509 -text -noout` can be used for viewing the certificate information.
+
- Unset the `KUBECONFIG` environment variable using:
```sh
@@ -190,7 +195,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:
```sh
- mv $HOME/.kube $HOME/.kube.bak
+ mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
@@ -198,7 +203,8 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
## Kubelet client certificate rotation fails {#kubelet-client-cert}
-By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the `/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
+By default, kubeadm configures a kubelet with automatic rotation of client certificates by using the
+`/var/lib/kubelet/pki/kubelet-client-current.pem` symlink specified in `/etc/kubernetes/kubelet.conf`.
If this rotation process fails you might see errors such as `x509: certificate has expired or is not yet valid`
in kube-apiserver logs. To fix the issue you must follow these steps:
@@ -231,24 +237,34 @@ The following error might indicate that something was wrong in the pod network:
Error from server (NotFound): the server could not find the requested resource
```
-- If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.
+- If you're using flannel as the pod network inside Vagrant, then you will have to
+ specify the default interface name for flannel.
- Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
+ Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts
+ are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.
- This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the `--iface eth1` flag to flannel so that the second interface is chosen.
+ This may lead to problems with flannel, which defaults to the first interface on a host.
+ This leads to all hosts thinking they have the same public IP address. To prevent this,
+ pass the `--iface eth1` flag to flannel so that the second interface is chosen.
## Non-public IP used for containers
-In some situations `kubectl logs` and `kubectl run` commands may return with the following errors in an otherwise functional cluster:
+In some situations, `kubectl logs` and `kubectl run` commands may return the
+following errors in an otherwise functional cluster:
```console
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
```
-- This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.
-- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one.
+- This may be due to Kubernetes using an IP that cannot communicate with other IPs on
+ the seemingly same subnet, possibly by policy of the machine provider.
+- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally
+  as an anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's
+ `InternalIP` instead of the public one.
- Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively an API endpoint specific to DigitalOcean allows to query for the anchor IP from the droplet:
+ Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will
+  not display the offending alias IP address. Alternatively, an API endpoint specific to
+  DigitalOcean allows you to query the anchor IP from the droplet:
```sh
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
@@ -270,12 +286,13 @@ Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc6
## `coredns` pods have `CrashLoopBackOff` or `Error` state
-If you have nodes that are running SELinux with an older version of Docker you might experience a scenario
-where the `coredns` pods are not starting. To solve that you can try one of the following options:
+If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario
+where the `coredns` pods are not starting. To solve that, you can try one of the following options:
- Upgrade to a [newer version of Docker](/docs/setup/production-environment/container-runtimes/#docker).
- [Disable SELinux](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).
+
- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:
```bash
@@ -284,7 +301,8 @@ kubectl -n kube-system get deployment coredns -o yaml | \
kubectl apply -f -
```
-Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop. [A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
+Another cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.
+[A number of workarounds](https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters)
are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
{{< warning >}}
@@ -300,7 +318,7 @@ If you encounter the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:110: decoding init error from pipe caused \"read parent: connection reset by peer\""
```
-this issue appears if you run CentOS 7 with Docker 1.13.1.84.
+This issue appears if you run CentOS 7 with Docker 1.13.1.84.
This version of Docker can prevent the kubelet from executing into the etcd container.
To work around the issue, choose one of these options:
@@ -344,6 +362,7 @@ to pick up the node's IP address properly and has knock-on effects to the proxy
load balancers.
The following error can be seen in kube-proxy Pods:
+
```
server.go:610] Failed to retrieve node IP: host IP unknown; known addresses: []
proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
@@ -352,8 +371,26 @@ proxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
A known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane
nodes regardless of their conditions, keeping it off of other nodes until their initial guarding
conditions abate:
+
```
-kubectl -n kube-system patch ds kube-proxy -p='{ "spec": { "template": { "spec": { "tolerations": [ { "key": "CriticalAddonsOnly", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node-role.kubernetes.io/control-plane" } ] } } } }'
+kubectl -n kube-system patch ds kube-proxy -p='{
+ "spec": {
+ "template": {
+ "spec": {
+ "tolerations": [
+ {
+ "key": "CriticalAddonsOnly",
+ "operator": "Exists"
+ },
+ {
+ "effect": "NoSchedule",
+ "key": "node-role.kubernetes.io/control-plane"
+ }
+ ]
+ }
+ }
+ }
+}'
```
The tracking issue for this problem is [here](https://github.com/kubernetes/kubeadm/issues/1027).
@@ -365,12 +402,15 @@ For [flex-volume support](https://github.com/kubernetes/community/blob/ab55d85/c
Kubernetes components like the kubelet and kube-controller-manager use the default path of
`/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, yet the flex-volume directory _must be writeable_
for the feature to work.
-(**Note**: FlexVolume was deprecated in the Kubernetes v1.23 release)
-To workaround this issue you can configure the flex-volume directory using the kubeadm
+{{< note >}}
+FlexVolume was deprecated in the Kubernetes v1.23 release.
+{{< /note >}}
+
+To work around this issue, you can configure the flex-volume directory using the kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/).
-On the primary control-plane Node (created using `kubeadm init`) pass the following
+On the primary control-plane Node (created using `kubeadm init`), pass the following
file using `--config`:
```yaml
@@ -402,7 +442,10 @@ be advised that this is modifying a design principle of the Linux distribution.
## `kubeadm upgrade plan` prints out `context deadline exceeded` error message
-This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in the case of running an external etcd. This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`.
+This error message is shown when using `kubeadm` to upgrade a Kubernetes cluster
+that runs an external etcd. This is not a critical bug and happens because
+older versions of kubeadm perform a version check on the external etcd cluster.
+You can proceed with `kubeadm upgrade apply ...`.
This issue is fixed as of version 1.19.
@@ -422,6 +465,7 @@ can be used insecurely by passing the `--kubelet-insecure-tls` to it. This is no
If you want to use TLS between the metrics-server and the kubelet there is a problem,
since kubeadm deploys a self-signed serving certificate for the kubelet. This can cause the following errors
on the side of the metrics-server:
+
```
x509: certificate signed by unknown authority
x509: certificate is valid for IP-foo not IP-bar
@@ -438,6 +482,7 @@ Only applicable to upgrading a control plane node with a kubeadm binary v1.28.3
where the node is currently managed by kubeadm versions v1.28.0, v1.28.1 or v1.28.2.
Here is the error message you may encounter:
+
```
[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition
[upgrade/etcd] Waiting for previous etcd to become available
@@ -454,16 +499,19 @@ k8s.io/kubernetes/cmd/kubeadm/app/phases/upgrade.performEtcdStaticPodUpgrade
...
```
-The reason for this failure is that the affected versions generate an etcd manifest file with unwanted defaults in the PodSpec.
-This will result in a diff from the manifest comparison, and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash.
+The reason for this failure is that the affected versions generate an etcd manifest file with
+unwanted defaults in the PodSpec. This will result in a diff from the manifest comparison,
+and kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash.
There are two ways to work around this issue if you see it in your cluster:
-- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using:
-```shell
-kubeadm upgrade {apply|node} [version] --etcd-upgrade=false
-```
-This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.
+- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using:
+
+ ```shell
+ kubeadm upgrade {apply|node} [version] --etcd-upgrade=false
+ ```
+
+ This is not recommended in case a new etcd version was introduced by a later v1.28 patch version.
- Before upgrade, patch the manifest for the etcd static pod, to remove the problematic defaulted attributes:
@@ -509,4 +557,5 @@ This is not recommended in case a new etcd version was introduced by a later v1.
path: /etc/kubernetes/pki/etcd
```
-More information can be found in the [tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug.
\ No newline at end of file
+More information can be found in the
+[tracking issue](https://github.com/kubernetes/kubeadm/issues/2927) for this bug.
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
index 810cd8d03d7..b4312a32a9b 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
@@ -136,7 +136,7 @@ This step should be done upon upgrading from one to another Kubernetes minor
release in order to get access to the packages of the desired Kubernetes minor
version.
-{{< tabs name="k8s_install_versions" >}}
+{{< tabs name="k8s_upgrade_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
1. Open the file that defines the Kubernetes `apt` repository using a text editor of your choice:
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md
index b9cb5041d75..d2d6a83d356 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md
@@ -99,11 +99,14 @@ kubeadm init phase certs <component-name> --config <config-file>
To write new manifest files in `/etc/kubernetes/manifests` you can use:
```shell
+# For Kubernetes control plane components
kubeadm init phase control-plane <component-name> --config <config-file>
+# For local etcd
+kubeadm init phase etcd local --config <config-file>
```
The `<config-file>` contents must match the updated `ClusterConfiguration`.
-The `<component-name>` value must be the name of the component.
+The `<component-name>` value must be the name of a Kubernetes control plane component (`apiserver`, `controller-manager` or `scheduler`).
{{< note >}}
Updating a file in `/etc/kubernetes/manifests` will tell the kubelet to restart the static Pod for the corresponding component.
diff --git a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md
index 9347dc5c3ab..cccf4b8350a 100644
--- a/content/en/docs/tasks/administer-cluster/sysctl-cluster.md
+++ b/content/en/docs/tasks/administer-cluster/sysctl-cluster.md
@@ -76,6 +76,7 @@ The following sysctls are supported in the _safe_ set:
- `net.ipv4.tcp_syncookies`,
- `net.ipv4.ping_group_range` (since Kubernetes 1.18),
- `net.ipv4.ip_unprivileged_port_start` (since Kubernetes 1.22).
+- `net.ipv4.ip_local_reserved_ports` (since Kubernetes 1.27).
{{< note >}}
There are some exceptions to the set of safe sysctls:
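The safe sysctls listed above are applied per Pod through `securityContext.sysctls`; a minimal sketch using the newly listed key (the Pod name and value range are illustrative, not from the source) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo   # hypothetical name for illustration
spec:
  securityContext:
    sysctls:
      # Safe sysctls can be set without any extra kubelet configuration
      - name: net.ipv4.ip_local_reserved_ports
        value: "1024-2048"
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

Unsafe sysctls, by contrast, would additionally require allow-listing on the node via the kubelet's `--allowed-unsafe-sysctls` flag.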
diff --git a/content/en/docs/tasks/debug/debug-cluster/_index.md b/content/en/docs/tasks/debug/debug-cluster/_index.md
index fcb7ba4a016..a10b0bdcff7 100644
--- a/content/en/docs/tasks/debug/debug-cluster/_index.md
+++ b/content/en/docs/tasks/debug/debug-cluster/_index.md
@@ -252,14 +252,14 @@ This is an incomplete list of things that could go wrong, and how to adjust your
- Network partition within cluster, or between cluster and users
- Crashes in Kubernetes software
- Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
-- Operator error, for example misconfigured Kubernetes software or application software
+- Operator error, for example, misconfigured Kubernetes software or application software
### Specific scenarios
- API server VM shutdown or apiserver crashing
- Results
- unable to stop, update, or start new pods, services, replication controller
- - existing pods and services should continue to work normally, unless they depend on the Kubernetes API
+ - existing pods and services should continue to work normally unless they depend on the Kubernetes API
- API server backing storage lost
- Results
- the kube-apiserver component fails to start successfully and become healthy
@@ -291,7 +291,7 @@ This is an incomplete list of things that could go wrong, and how to adjust your
### Mitigations
-- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
+- Action: Use the IaaS provider's automatic VM restarting feature for IaaS VMs
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Mitigates: Supporting services VM shutdown or crashes
diff --git a/content/en/docs/tasks/debug/debug-cluster/local-debugging.md b/content/en/docs/tasks/debug/debug-cluster/local-debugging.md
index 6e7d73841f2..2bbf59a22e3 100644
--- a/content/en/docs/tasks/debug/debug-cluster/local-debugging.md
+++ b/content/en/docs/tasks/debug/debug-cluster/local-debugging.md
@@ -26,7 +26,7 @@ running on a remote cluster locally.
* Kubernetes cluster is installed
* `kubectl` is configured to communicate with the cluster
-* [Telepresence](https://www.telepresence.io/docs/latest/install/) is installed
+* [Telepresence](https://www.telepresence.io/docs/latest/quick-start/) is installed
diff --git a/content/en/docs/tasks/tools/included/verify-kubectl.md b/content/en/docs/tasks/tools/included/verify-kubectl.md
index ae5d818e02a..b4eb0fe08d2 100644
--- a/content/en/docs/tasks/tools/included/verify-kubectl.md
+++ b/content/en/docs/tasks/tools/included/verify-kubectl.md
@@ -23,15 +23,18 @@ kubectl cluster-info
If you see a URL response, kubectl is correctly configured to access your cluster.
-If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.
+If you see a message similar to the following, kubectl is not configured correctly
+or is not able to connect to a Kubernetes cluster.
```
The connection to the server was refused - did you specify the right host or port?
```
-For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube to be installed first and then re-run the commands stated above.
+For example, if you are intending to run a Kubernetes cluster on your laptop (locally),
+you will need a tool like Minikube to be installed first and then re-run the commands stated above.
-If kubectl cluster-info returns the url response but you can't access your cluster, to check whether it is configured properly, use:
+If `kubectl cluster-info` returns the URL response but you can't access your cluster,
+to check whether it is configured properly, use:
```shell
kubectl cluster-info dump
@@ -40,9 +43,10 @@ kubectl cluster-info dump
### Troubleshooting the 'No Auth Provider Found' error message {#no-auth-provider-found}
In Kubernetes 1.26, kubectl removed the built-in authentication for the following cloud
-providers' managed Kubernetes offerings. These providers have released kubectl plugins to provide the cloud-specific authentication. For instructions, refer to the following provider documentation:
+providers' managed Kubernetes offerings. These providers have released kubectl plugins
+to provide the cloud-specific authentication. For instructions, refer to the following provider documentation:
-* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/)
+* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/)
* Google Kubernetes Engine: [gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
(There could also be other reasons to see the same error message, unrelated to that change.)
diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md
index d12775eb235..e1ef63d0769 100644
--- a/content/en/docs/tutorials/hello-minikube.md
+++ b/content/en/docs/tutorials/hello-minikube.md
@@ -60,7 +60,7 @@ Now, switch back to the terminal where you ran `minikube start`.
The `dashboard` command enables the dashboard add-on and opens the proxy in the default web browser.
You can create Kubernetes resources on the dashboard such as Deployment and Service.
-If you are running in an environment as root, see [Open Dashboard with URL](#open-dashboard-with-url).
+To find out how to avoid directly invoking the browser from the terminal and get a URL for the web dashboard, see the "URL copy and paste" tab.
By default, the dashboard is only accessible from within the internal Kubernetes virtual network.
The `dashboard` command creates a temporary proxy to make the dashboard accessible from outside the Kubernetes virtual network.
@@ -73,7 +73,7 @@ You can run the `dashboard` command again to create another proxy to access the
{{% /tab %}}
{{% tab name="URL copy and paste" %}}
-If you don't want minikube to open a web browser for you, run the dashboard command with the
+If you don't want minikube to open a web browser for you, run the `dashboard` subcommand with the
`--url` flag. `minikube` outputs a URL that you can open in the browser you prefer.
Open a **new** terminal, and run:
@@ -82,7 +82,7 @@ Open a **new** terminal, and run:
minikube dashboard --url
```
-Now, switch back to the terminal where you ran `minikube start`.
+Now, you can use this URL and switch back to the terminal where you ran `minikube start`.
{{% /tab %}}
{{< /tabs >}}
diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
index 764c785d7ac..7339066dc7f 100644
--- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -130,10 +130,11 @@ description: |-
View the app
-
Pods that are running inside Kubernetes are running on a private, isolated network.
+
Pods that are running inside Kubernetes are running on a private, isolated network.
By default they are visible from other pods and services within the same Kubernetes cluster, but not outside that network.
When we use kubectl, we're interacting through an API endpoint to communicate with our application.
-
We will cover other options on how to expose your application outside the Kubernetes cluster later, in Module 4.
+
We will cover other options on how to expose your application outside the Kubernetes cluster later, in Module 4.
+ Also, since this is a basic tutorial, we're not explaining what Pods are in any detail here; they will be covered in later topics.
The kubectl proxy command can create a proxy that will forward communications into the cluster-wide, private network. The proxy can be terminated by pressing control-C and won't show any output while it's running.
You need to open a second terminal window to run the proxy.
Let’s verify that our application is running. We’ll use the kubectl get command and look for existing Pods:
kubectl get pods
If no Pods are running then it means the objects from the previous tutorials were cleaned up. In this case, go back and recreate the deployment from the Using kubectl to create a Deployment tutorial.
@@ -151,7 +151,7 @@ description: |-
-
Deleting a service
+
Step 3: Deleting a service
To delete Services you can use the delete service subcommand. Labels can be used also here:
Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:
curl http://"$(minikube ip):$NODE_PORT"
We hit a different Pod with every request. This demonstrates that the load-balancing is working.
+ {{< note >}}
If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is needed. This is because containers inside Docker Desktop are isolated from your host computer.
+
In a separate terminal window, execute:
+ minikube service kubernetes-bootcamp --url
+
The output looks like this:
+
http://127.0.0.1:51082
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
+
Then use the given URL to access the app:
+ curl 127.0.0.1:51082
+ {{< /note >}}
diff --git a/content/en/docs/tutorials/security/cluster-level-pss.md b/content/en/docs/tutorials/security/cluster-level-pss.md
index bbeb186a23f..bc6f8fcc95a 100644
--- a/content/en/docs/tutorials/security/cluster-level-pss.md
+++ b/content/en/docs/tutorials/security/cluster-level-pss.md
@@ -294,6 +294,8 @@ following:
1. Create a Pod in the default namespace:
+ {{% code_sample file="security/example-baseline-pod.yaml" %}}
+
```shell
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
```
diff --git a/content/en/examples/application/hpa/php-apache.yaml b/content/en/examples/application/hpa/php-apache.yaml
index f3f1ef5d4f9..1c49aca6a1f 100644
--- a/content/en/examples/application/hpa/php-apache.yaml
+++ b/content/en/examples/application/hpa/php-apache.yaml
@@ -1,4 +1,4 @@
-apiVersion: autoscaling/v1
+apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
@@ -9,4 +9,10 @@ spec:
name: php-apache
minReplicas: 1
maxReplicas: 10
- targetCPUUtilizationPercentage: 50
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 50
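Assembled from the hunks above, the updated `autoscaling/v2` manifest would read in full roughly as follows (the `scaleTargetRef` lines outside the hunk, including `apiVersion: apps/v1`, are assumed unchanged from the original example):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
    # autoscaling/v2 replaces the v1 targetCPUUtilizationPercentage field
    # with a list of metric sources
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

The `metrics` list form is what allows v2 to scale on memory, custom, or external metrics in addition to CPU.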
diff --git a/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml b/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml
index 813bc7b3345..9d8ceee2201 100644
--- a/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml
+++ b/content/en/examples/validatingadmissionpolicy/replicalimit-param.yaml
@@ -2,5 +2,5 @@ apiVersion: rules.example.com/v1
kind: ReplicaLimit
metadata:
name: "replica-limit-test.example.com"
- namesapce: "default"
-maxReplicas: 3
\ No newline at end of file
+ namespace: "default"
+maxReplicas: 3
diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md
index 312dafd1b3a..1ab1c8552de 100644
--- a/content/en/releases/patch-releases.md
+++ b/content/en/releases/patch-releases.md
@@ -78,7 +78,7 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
-| December 2023 | 2023-12-08 | 2023-12-13 |
+| December 2023 | 2023-12-15 | 2023-12-19 |
| January 2024 | 2024-01-12 | 2024-01-17 |
| February 2024 | 2024-02-09 | 2024-02-14 |
| March 2024 | 2024-03-08 | 2024-03-13 |
diff --git a/content/es/docs/tutorials/_index.md b/content/es/docs/tutorials/_index.md
index 932b6f60434..86fde686f22 100644
--- a/content/es/docs/tutorials/_index.md
+++ b/content/es/docs/tutorials/_index.md
@@ -22,50 +22,36 @@ Antes de recorrer cada tutorial, recomendamos añadir un marcador a
## Esenciales
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) se trata de un tutorial interactivo en profundidad para entender Kubernetes y probar algunas funciones básicas.
-
* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
-
* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
-
* [Hello Minikube](/es/docs/tutorials/hello-minikube/)
## Configuración
* [Ejemplo: Configurando un Microservicio en Java](/docs/tutorials/configuration/configure-java-microservice/)
-
* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
## Aplicaciones Stateless
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
-
* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
## Aplicaciones Stateful
* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
-
* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
-
* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
-
* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
## Clústers
* [AppArmor](/docs/tutorials/clusters/apparmor/)
-
* [Seccomp](/docs/tutorials/clusters/seccomp/)
## Servicios
* [Using Source IP](/docs/tutorials/services/source-ip/)
-
-
## {{% heading "whatsnext" %}}
-
Si quieres escribir un tutorial, revisa [utilizando templates](/docs/home/contribute/page-templates/) para obtener información sobre el tipo de página y la plantilla de los tutoriales.
-
-
diff --git a/content/fr/docs/concepts/storage/volumes.md b/content/fr/docs/concepts/storage/volumes.md
index e78e7ce76b3..c2fdf583cb4 100644
--- a/content/fr/docs/concepts/storage/volumes.md
+++ b/content/fr/docs/concepts/storage/volumes.md
@@ -857,7 +857,7 @@ Vous devez créer un secret dans l'API Kubernetes avant de pouvoir l'utiliser.
Un conteneur utilisant un secret en tant que point de montage de volume [subPath](#using-subpath) ne recevra pas les mises à jour des secrets.
{{< /note >}}
-Les secrets sont décrits plus en détails [ici](/docs/user-guide/secrets).
+Les secrets sont décrits plus en détails [ici](/docs/concepts/configuration/secret/).
### storageOS {#storageos}
diff --git a/content/fr/docs/contribute/generate-ref-docs/federation-api.md b/content/fr/docs/contribute/generate-ref-docs/federation-api.md
index 30bb6c525d8..cc99bdc9a62 100644
--- a/content/fr/docs/contribute/generate-ref-docs/federation-api.md
+++ b/content/fr/docs/contribute/generate-ref-docs/federation-api.md
@@ -15,7 +15,7 @@ Cette page montre comment générer automatiquement des pages de référence pou
* Vous devez avoir [Git](https://git-scm.com/book/fr/v2/D%C3%A9marrage-rapide-Installation-de-Git) installé.
-* Vous devez avoir [Golang](https://golang.org/doc/install) version 1.9.1 ou ultérieur installé, et votre variable d'environnement `$GOPATH` doit être définie.
+* Vous devez avoir [Golang](https://go.dev/doc/install) version 1.9.1 ou ultérieur installé, et votre variable d'environnement `$GOPATH` doit être définie.
* Vous devez avoir [Docker](https://docs.docker.com/engine/installation/) installé.
diff --git a/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md b/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md
index b039d19c8b9..bf7954500aa 100644
--- a/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md
+++ b/content/fr/docs/contribute/generate-ref-docs/kubernetes-api.md
@@ -16,7 +16,7 @@ Cette page montre comment mettre à jour les documents de référence générés
Vous devez avoir ces outils installés:
* [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
-* [Golang](https://golang.org/doc/install) version 1.9.1 ou ultérieur
+* [Golang](https://go.dev/doc/install) version 1.9.1 ou ultérieur
* [Docker](https://docs.docker.com/engine/installation/)
* [etcd](https://github.com/coreos/etcd/)
diff --git a/content/hi/_index.html b/content/hi/_index.html
index 2009559be3b..b1faf30bcc5 100644
--- a/content/hi/_index.html
+++ b/content/hi/_index.html
@@ -8,7 +8,7 @@ sitemap:
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
-[कुबेरनेट्स]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}), जो K8s के रूप में भी जाना जाता है, कंटेनरीकृत एप्लीकेशन के डिप्लॉयमेंट, स्केलिंग और प्रबंधन को स्वचालित करने के लिए एक ओपन-सोर्स सिस्टम है।
+[कुबेरनेट्स]({{< relref "/docs/concepts/overview/_index.md" >}}), जो K8s के रूप में भी जाना जाता है, कंटेनरीकृत एप्लीकेशन के डिप्लॉयमेंट, स्केलिंग और प्रबंधन को स्वचालित करने के लिए एक ओपन-सोर्स सिस्टम है।
यह आसान प्रबंधन और खोज के लिए लॉजिकल इकाइयों में एक एप्लीकेशन बनाने वाले कंटेनरों को समूहित करता है। कुबेरनेट्स [Google में उत्पादन कार्यभार चलाने के 15 वर्षों के अनुभव](http://queue.acm.org/detail.cfm?id=2898444) पर निर्माणित है, जो समुदाय के सर्वोत्तम-नस्लीय विचारों और प्रथाओं के साथ संयुक्त है।
{{% /blocks/feature %}}
diff --git a/content/hi/docs/concepts/overview/what-is-kubernetes.md b/content/hi/docs/concepts/overview/_index.md
similarity index 99%
rename from content/hi/docs/concepts/overview/what-is-kubernetes.md
rename to content/hi/docs/concepts/overview/_index.md
index bb59092d2b3..6268b8761b7 100644
--- a/content/hi/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/hi/docs/concepts/overview/_index.md
@@ -1,5 +1,5 @@
---
-title: कुबेरनेट्स क्या है?
+title: अवलोकन
description: >
कुबेरनेट्स कंटेनरीकृत वर्कलोड और सेवाओं के प्रबंधन के लिए एक पोर्टेबल, एक्स्टेंसिबल, ओपन-सोर्स प्लेटफॉर्म है, जो घोषणात्मक कॉन्फ़िगरेशन और स्वचालन दोनों की सुविधा प्रदान करता है। इसका एक बड़ा, तेजी से बढ़ता हुआ पारिस्थितिकी तंत्र है। कुबेरनेट्स सेवाएँ, समर्थन और उपकरण व्यापक रूप से उपलब्ध हैं।
content_type: concept
diff --git a/content/hi/docs/setup/_index.md b/content/hi/docs/setup/_index.md
index 26416b0c680..33034b66aec 100644
--- a/content/hi/docs/setup/_index.md
+++ b/content/hi/docs/setup/_index.md
@@ -8,9 +8,9 @@ card:
name: setup
weight: 20
anchors:
- - anchor: "#सीखने-का-वातावरण"
+ - anchor: "#learning-environment"
title: सीखने का वातावरण
- - anchor: "#प्रोडक्शन-वातावरण"
+ - anchor: "#production-environment"
title: प्रोडक्शन वातावरण
---
@@ -25,13 +25,13 @@ card:
-## सीखने का वातावरण
+## सीखने का वातावरण {#learning-environment}
यदि आप कुबेरनेट्स सीख रहे हैं, तो कुबेरनेट्स समुदाय द्वारा समर्थित टूल का उपयोग करें,
या स्थानीय मशीन पर कुबेरनेट्स क्लस्टर सेटअप करने के लिए इकोसिस्टम में उपलब्ध टूल का उपयोग करें।
[इंस्टॉल टूल्स](/hi/docs/tasks/tools/) देखें।
-## प्रोडक्शन वातावरण
+## प्रोडक्शन वातावरण {#production-environment}
[प्रोडक्शन वातावरण](/hi/docs/setup/production-environment/) के लिए समाधान का मूल्यांकन करते समय, विचार करें कि कुबेरनेट्स क्लस्टर के किन पहलुओं (या _abstractions_) का संचालन आप स्वयं प्रबंधित करना चाहते हैं और किसे आप एक प्रदाता को सौंपना पसंद करते हैं।
diff --git a/content/hi/docs/tasks/tools/install-kubectl-linux.md b/content/hi/docs/tasks/tools/install-kubectl-linux.md
index e099ad5ab7a..b11abf29437 100644
--- a/content/hi/docs/tasks/tools/install-kubectl-linux.md
+++ b/content/hi/docs/tasks/tools/install-kubectl-linux.md
@@ -44,7 +44,7 @@ Linux पर kubectl संस्थापित करने के लिए
kubectl चेकसम फाइल डाउनलोड करें:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
```
चेकसम फ़ाइल से kubectl बाइनरी को मान्य करें:
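The corrected download URLs above feed a standard checksum check. A self-contained sketch of that verification pattern, with a throwaway file standing in for the real kubectl binary (file names are illustrative; assumes GNU coreutils' `sha256sum`):

```shell
# Stand-in for a downloaded binary (illustrative name, not the real kubectl)
echo "demo-binary-contents" > kubectl-demo

# Stand-in for the downloaded .sha256 file: just the expected digest
sha256sum kubectl-demo | awk '{print $1}' > kubectl-demo.sha256

# The validation step from the docs: feed "<digest>  <file>" to sha256sum --check
echo "$(cat kubectl-demo.sha256)  kubectl-demo" | sha256sum --check
# prints: kubectl-demo: OK
```

If the digest and the binary do not match, `sha256sum --check` exits non-zero and reports `FAILED`, which is the signal to discard the download.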
@@ -199,7 +199,7 @@ kubectl Bash और Zsh के लिए ऑटोकम्प्लेशन
kubectl-convert चेकसम फ़ाइल डाउनलोड करें:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
```
चेकसम फ़ाइल से kubectl-convert बाइनरी को मान्य करें:
diff --git a/content/hi/docs/tutorials/_index.md b/content/hi/docs/tutorials/_index.md
index 5aa154622e4..a0af44af676 100644
--- a/content/hi/docs/tutorials/_index.md
+++ b/content/hi/docs/tutorials/_index.md
@@ -52,7 +52,7 @@ content_type: concept
* [AppArmor](/docs/tutorials/clusters/apparmor/)
-* [seccomp](/docs/tutorials/clusters/seccomp/)
+* [Seccomp](/docs/tutorials/clusters/seccomp/)
## सर्विस
diff --git a/content/ja/docs/tutorials/security/cluster-level-pss.md b/content/ja/docs/tutorials/security/cluster-level-pss.md
index 34622a32d94..1ae58454384 100644
--- a/content/ja/docs/tutorials/security/cluster-level-pss.md
+++ b/content/ja/docs/tutorials/security/cluster-level-pss.md
@@ -20,7 +20,7 @@ v{{< skew currentVersion >}}以外のKubernetesバージョンを実行してい
ワークステーションに以下をインストールしてください:
-- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
+- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/ja/docs/tasks/tools/)
このチュートリアルでは、完全な制御下にあるKubernetesクラスターの何を設定できるかをデモンストレーションします。
@@ -230,7 +230,7 @@ v{{< skew currentVersion >}}以外のKubernetesバージョンを実行してい
```
{{< note >}}
- macOSでDocker DesktopとKinDを利用している場合は、**Preferences > Resources > File Sharing**のメニュー項目からShared Directoryとして`/tmp`を追加できます。
+ macOSでDocker Desktopと*kind*を利用している場合は、**Preferences > Resources > File Sharing**のメニュー項目からShared Directoryとして`/tmp`を追加できます。
{{< /note >}}
1. 目的のPodセキュリティの標準を適用するために、Podセキュリティアドミッションを使うクラスターを作成します:
diff --git a/content/pl/_index.html b/content/pl/_index.html
index 06312f71979..a903381473c 100644
--- a/content/pl/_index.html
+++ b/content/pl/_index.html
@@ -47,12 +47,12 @@ Kubernetes jako projekt open-source daje Ci wolność wyboru ⏤ skorzystaj z pr