From 04b5916f26c39a1e852258089dfcd79c2294bf73 Mon Sep 17 00:00:00 2001
From: Kelvin Nicholson
Date: Tue, 1 Jun 2021 22:15:18 +1000
Subject: [PATCH 01/83] Fix incorrect command
---
.../configure-pod-container/translate-compose-kubernetes.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
index 384b709720..a4c8dd871d 100644
--- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
+++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
@@ -150,13 +150,12 @@ you need is an existing `docker-compose.yml` file.
```
```bash
- kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,
+ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
```
The output is similar to:
```none
- redis-master-deployment.yaml,redis-slave-deployment.yaml
service/frontend created
service/redis-master created
service/redis-slave created
From 08506d03daaa93a4e8a55326de314919ff584ca6 Mon Sep 17 00:00:00 2001
From: Sandip Bhattacharya
Date: Fri, 25 Jun 2021 13:15:19 +0200
Subject: [PATCH 02/83] dns-pod-service.md: Fix unqualified host search ex
Unless I am mistaken, according to the provided search config, the query for `data` is missing the `svc` component in the full search expansion.
---
content/en/docs/concepts/services-networking/dns-pod-service.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md
index 48a8f07258..cb2a173018 100644
--- a/content/en/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -39,7 +39,7 @@ namespace.
DNS queries may be expanded using the pod's `/etc/resolv.conf`. Kubelet
sets this file for each pod. For example, a query for just `data` may be
-expanded to `data.test.cluster.local`. The values of the `search` option
+expanded to `data.test.svc.cluster.local`. The values of the `search` option
are used to expand queries. To learn more about DNS queries, see
[the `resolv.conf` manual page.](https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html)
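For illustration of the search expansion described in this patch, a minimal sketch of the `/etc/resolv.conf` the kubelet might write for a Pod in the `test` namespace (the nameserver IP is a hypothetical cluster DNS address):

```none
nameserver 10.96.0.10
search test.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```

With this configuration, a lookup for the short name `data` is first tried as `data.test.svc.cluster.local`, matching the corrected text.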
From d0f94114a1e4e41b8c64191cbfadbcefb7380af7 Mon Sep 17 00:00:00 2001
From: rajeshdeshpande02
Date: Wed, 30 Jun 2021 11:58:22 +0530
Subject: [PATCH 03/83] Adding kubectl cp command examples in cheatsheet
---
.../en/docs/reference/kubectl/cheatsheet.md | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index c32ba4f809..35cd0b2fdc 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -323,6 +323,24 @@ kubectl exec my-pod -c my-container -- ls / # Run command in existing po
kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
```
+## Copy files and directories to and from containers
+
+```bash
+kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace
+kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
+kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
+kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally
+```
+{{< note >}}
+`kubectl cp` requires that the 'tar' binary is present in your container image. If 'tar' is not present, `kubectl cp` will fail.
+For advanced use cases, such as symlinks, wildcard expansion or file mode preservation, consider using `kubectl exec`.
+{{< /note >}}
+
+```bash
+tar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
+kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally
+```
+
## Interacting with Deployments and Services
```bash
From 6c50bc639fa6b37db07e6b77059af6a30b9ddf10 Mon Sep 17 00:00:00 2001
From: Ravi Gudimetla
Date: Thu, 19 Aug 2021 16:59:53 -0400
Subject: [PATCH 04/83] Recommend using TTL field in job
Recommend using ttlSecondsAfterFinished in the Job spec so that Pod deletion is guaranteed when Jobs get deleted.
---
content/en/docs/concepts/workloads/controllers/job.md | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md
index 1635caf7b3..ac0b28ab6b 100644
--- a/content/en/docs/concepts/workloads/controllers/job.md
+++ b/content/en/docs/concepts/workloads/controllers/job.md
@@ -342,6 +342,16 @@ If the field is set to `0`, the Job will be eligible to be automatically deleted
immediately after it finishes. If the field is unset, this Job won't be cleaned
up by the TTL controller after it finishes.
+{{< note >}}
+It is recommended to set `ttlSecondsAfterFinished` field because unmanaged jobs
+(jobs not created via high level controllers like cronjobs) have a default deletion
+policy of `orphanDependents` causing pods created by this job to be left around.
+Even though podgc collector eventually deletes these lingering pods, sometimes these
+lingering pods may cause cluster performance degradation or in worst case cause the
+cluster to go down.
+{{< /note >}}
+
+
## Job patterns
The Job object can be used to support reliable parallel execution of Pods. The Job object is not
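To make the recommendation in this patch concrete, a minimal sketch of a Job that opts into TTL-based cleanup (the Job name, image, and TTL value are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo
spec:
  ttlSecondsAfterFinished: 100   # Job and its Pods are deleted ~100s after completion
  template:
    spec:
      containers:
      - name: main
        image: busybox:1.28
        command: ['sh', '-c', 'echo done']
      restartPolicy: Never
```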
From 1e967d0b25af18ed1029d256614efc1589007345 Mon Sep 17 00:00:00 2001
From: RinkiyaKeDad
Date: Tue, 28 Sep 2021 15:53:10 +0530
Subject: [PATCH 05/83] adding info about tagging SIGs in pr wrangling guide
Signed-off-by: RinkiyaKeDad
---
content/en/docs/contribute/participate/pr-wranglers.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/content/en/docs/contribute/participate/pr-wranglers.md b/content/en/docs/contribute/participate/pr-wranglers.md
index 727a8a81d5..09ba0d7175 100644
--- a/content/en/docs/contribute/participate/pr-wranglers.md
+++ b/content/en/docs/contribute/participate/pr-wranglers.md
@@ -26,6 +26,7 @@ Each day in a week-long shift as PR Wrangler:
- If you need to verify content, comment on the PR and request more details.
- Assign relevant `sig/` label(s).
- If needed, assign reviewers from the `reviewers:` block in the file's front matter.
+ - You can also tag a SIG for a review by commenting `@kubernetes/<sig>-pr-reviews` on the PR.
- Use the `/approve` comment to approve a PR for merging. Merge the PR when ready.
- PRs should have a `/lgtm` comment from another member before merging.
- Consider accepting technically accurate content that doesn't meet the [style guidelines](/docs/contribute/style/style-guide/). Open a new issue with the label `good first issue` to address style concerns.
From 11103c7c5cfd61d29c288e96ef45cd82ca02cc63 Mon Sep 17 00:00:00 2001
From: Riita <42636694+riita10069@users.noreply.github.com>
Date: Thu, 7 Oct 2021 22:39:36 +0900
Subject: [PATCH 06/83] Update pod-lifecycle.md
---
.../concepts/workloads/pods/pod-lifecycle.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md
index 012c042c81..9ef8e0bdf6 100644
--- a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -19,10 +19,10 @@ Podはその生存期間に1回だけ[スケジューリング](/docs/concepts/s
## Podのライフタイム
-個々のアプリケーションコンテナと同様に、Podは(永続的ではなく)比較的短期間の存在と捉えられます。Podが作成されると、一意のID([UID](ja/docs/concepts/overview/working-with-objects/names/#uids))が割り当てられ、(再起動ポリシーに従って)終了または削除されるまでNodeで実行されるようにスケジュールされます。
+個々のアプリケーションコンテナと同様に、Podは(永続的ではなく)比較的短期間の存在と捉えられます。Podが作成されると、一意のID([UID](/ja/docs/concepts/overview/working-with-objects/names/#uids))が割り当てられ、(再起動ポリシーに従って)終了または削除されるまでNodeで実行されるようにスケジュールされます。
{{< glossary_tooltip term_id="node" >}}が停止した場合、そのNodeにスケジュールされたPodは、タイムアウト時間の経過後に[削除](#pod-garbage-collection)されます。
-Pod自体は、自己修復しません。Podが{{< glossary_tooltip text="node" term_id="node" >}}にスケジュールされ、その後に失敗、またはスケジュール操作自体が失敗した場合、Podは削除されます。同様に、リソースの不足またはNodeのメンテナンスによりポッドはNodeから立ち退きます。Kubernetesは、比較的使い捨てのPodインスタンスの管理作業を処理する、{{< glossary_tooltip term_id="controller" text="controller" >}}と呼ばれる上位レベルの抽象化を使用します。
+Pod自体は、自己修復しません。Podが{{< glossary_tooltip text="node" term_id="node" >}}にスケジュールされ、その後に失敗、またはスケジュール操作自体が失敗した場合、Podは削除されます。同様に、リソースの不足またはNodeのメンテナンスによりPodはNodeから立ち退きます。Kubernetesは、比較的使い捨てのPodインスタンスの管理作業を処理する、{{< glossary_tooltip term_id="controller" text="controller" >}}と呼ばれる上位レベルの抽象化を使用します。
特定のPod(UIDで定義)は新しいNodeに"再スケジュール"されません。代わりに、必要に応じて同じ名前で、新しいUIDを持つ同一のPodに置き換えることができます。
@@ -55,7 +55,7 @@ Nodeが停止するか、クラスタの残りの部分から切断された場
## コンテナのステータス {#container-states}
-Pod全体の[フェーズ](#pod-phase)と同様に、KubernetesはPod内の各コンテナの状態を追跡します。[container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/)を使用して、コンテナのライフサイクルの特定のポイントで実行するイベントをトリガーできます。
+Pod全体の[フェーズ](#pod-phase)と同様に、KubernetesはPod内の各コンテナの状態を追跡します。[container lifecycle hooks](/ja/docs/concepts/containers/container-lifecycle-hooks/)を使用して、コンテナのライフサイクルの特定のポイントで実行するイベントをトリガーできます。
Podが{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}によってNodeに割り当てられると、kubeletは{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}を使用してコンテナの作成を開始します。コンテナの状態は`Waiting`、`Running`または`Terminated`の3ついずれかです。
@@ -211,7 +211,7 @@ Podが削除されたときにリクエストを来ないようにするため
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
-startupProbeは、サービスの開始に時間がかかるコンテナを持つポッドに役立ちます。livenessProbeの間隔を長く設定するのではなく、コンテナの起動時に別のProbeを構成して、livenessProbeの間隔よりも長い時間を許可できます。
+startupProbeは、サービスの開始に時間がかかるコンテナを持つPodに役立ちます。livenessProbeの間隔を長く設定するのではなく、コンテナの起動時に別のProbeを構成して、livenessProbeの間隔よりも長い時間を許可できます。
コンテナの起動時間が、`initialDelaySeconds + failureThreshold x periodSeconds`よりも長い場合は、livenessProbeと同じエンドポイントをチェックするためにstartupProbeを指定します。`periodSeconds`のデフォルトは30秒です。次に、`failureThreshold`をlivenessProbeのデフォルト値を変更せずにコンテナが起動できるように、十分に高い値を設定します。これによりデッドロックを防ぐことができます。
## Podの終了 {#pod-termination}
@@ -220,7 +220,7 @@ Podは、クラスター内のNodeで実行中のプロセスを表すため、
ユーザーは削除を要求可能であるべきで、プロセスがいつ終了するかを知ることができなければなりませんが、削除が最終的に完了することも保証できるべきです。ユーザーがPodの削除を要求すると、システムはPodが強制終了される前に意図された猶予期間を記録および追跡します。強制削除までの猶予期間がある場合、{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}正常な終了を試みます。
-通常、コンテナランタイムは各コンテナのメインプロセスにTERMシグナルを送信します。多くのコンテナランタイムは、コンテナイメージで定義されたSTOPSIGNAL値を尊重し、TERMの代わりにこれを送信します。猶予期間が終了すると、プロセスにKILLシグナルが送信され、Podは{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}から削除されます。プロセスの終了を待っている間にkubeletかコンテナランタイムの管理サービスが再起動されると、クラスターは元の猶予期間を含めて、最初からリトライされます。
+通常、コンテナランタイムは各コンテナのメインプロセスにTERMシグナルを送信します。多くのコンテナランタイムは、コンテナイメージで定義されたSTOPSIGNAL値を尊重し、TERMシグナルの代わりにこれを送信します。猶予期間が終了すると、プロセスにKILLシグナルが送信され、Podは{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}から削除されます。プロセスの終了を待っている間にkubeletかコンテナランタイムの管理サービスが再起動されると、クラスターは元の猶予期間を含めて、最初からリトライされます。
フローの例は下のようになります。
@@ -228,7 +228,7 @@ Podは、クラスター内のNodeで実行中のプロセスを表すため、
1. API server内のPodは、猶予期間を越えるとPodが「死んでいる」と見なされるように更新される。
削除中のPodに対して`kubectl describe`コマンドを使用すると、Podは「終了中」と表示される。
Podが実行されているNode上で、Podが終了しているとマークされている(正常な終了期間が設定されている)とkubeletが認識するとすぐに、kubeletはローカルでPodの終了プロセスを開始します。
- 1. Pod内のコンテナの1つが`preStop`[フック](/docs/concepts/containers/container-lifecycle-hooks/#hook-details)を定義している場合は、コンテナの内側で呼び出される。猶予期間が終了した後も `preStop`フックがまだ実行されている場合は、一度だけ猶予期間を延長される(2秒)。
+ 1. Pod内のコンテナの1つが`preStop`[フック](/ja/docs/concepts/containers/container-lifecycle-hooks/#hook-details)を定義している場合は、コンテナの内側で呼び出される。猶予期間が終了した後も `preStop`フックがまだ実行されている場合は、一度だけ猶予期間を延長される(2秒)。
{{< note >}}
`preStop`フックが完了するまでにより長い時間が必要な場合は、`terminationGracePeriodSeconds`を変更する必要があります。
{{< /note >}}
@@ -271,10 +271,10 @@ StatefulSetのPodについては、[StatefulSetからPodを削除するための
## {{% heading "whatsnext" %}}
-* [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)のハンズオンをやってみる
+* [attaching handlers to Container lifecycle events](/ja/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)のハンズオンをやってみる
* [Configure Liveness, Readiness and Startup Probes](/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)のハンズオンをやってみる
-* [Container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/)についてもっと学ぶ
+* [Container lifecycle hooks](/ja/docs/concepts/containers/container-lifecycle-hooks/)についてもっと学ぶ
* APIのPod/コンテナステータスの詳細情報は[PodStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podstatus-v1-core)および[ContainerStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerstatus-v1-core)を参照してください
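As a companion to the `startupProbe` passage touched by this patch, a minimal sketch (path, port, and thresholds are hypothetical) of a startup probe that gives a slow-starting container up to `failureThreshold × periodSeconds` = 300 seconds before the liveness probe takes over:

```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
```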
From 4c2413c0bfb6503c2b8c3e2d14079029cb7d5e58 Mon Sep 17 00:00:00 2001
From: Riita <42636694+riita10069@users.noreply.github.com>
Date: Fri, 8 Oct 2021 17:09:00 +0900
Subject: [PATCH 07/83] Update _index.md
---
content/ja/docs/concepts/_index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/concepts/_index.md b/content/ja/docs/concepts/_index.md
index 1e22892eca..f8c7dd5cdb 100644
--- a/content/ja/docs/concepts/_index.md
+++ b/content/ja/docs/concepts/_index.md
@@ -35,7 +35,7 @@ Kubernetesには、デプロイ済みのコンテナ化されたアプリケー
* [Volume](/docs/concepts/storage/volumes/)
* [Namespace](/ja/docs/concepts/overview/working-with-objects/namespaces/)
-Kubernetesには、[コントローラー](/docs/concepts/architecture/controller/)に依存して基本オブジェクトを構築し、追加の機能と便利な機能を提供する高レベルの抽象化も含まれています。これらには以下のものを含みます:
+Kubernetesには、[コントローラー](/ja/docs/concepts/architecture/controller/)に依存して基本オブジェクトを構築し、追加の機能と便利な機能を提供する高レベルの抽象化も含まれています。これらには以下のものを含みます:
* [Deployment](/ja/docs/concepts/workloads/controllers/deployment/)
* [DaemonSet](/ja/docs/concepts/workloads/controllers/daemonset/)
From b293a1529a768a8e8534eaf4a7f99c6a8065fda2 Mon Sep 17 00:00:00 2001
From: Riita <42636694+riita10069@users.noreply.github.com>
Date: Fri, 8 Oct 2021 17:20:33 +0900
Subject: [PATCH 08/83] Update components.md
---
content/ja/docs/concepts/overview/components.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/concepts/overview/components.md b/content/ja/docs/concepts/overview/components.md
index 3dafc54e57..a1437cf611 100644
--- a/content/ja/docs/concepts/overview/components.md
+++ b/content/ja/docs/concepts/overview/components.md
@@ -115,7 +115,7 @@ Kubernetesによって開始されたコンテナは、DNS検索にこのDNSサ
## {{% heading "whatsnext" %}}
* [ノード](/ja/docs/concepts/architecture/nodes/)について学ぶ
-* [コントローラー](/docs/concepts/architecture/controller/)について学ぶ
+* [コントローラー](/ja/docs/concepts/architecture/controller/)について学ぶ
* [kube-scheduler](/ja/docs/concepts/scheduling-eviction/kube-scheduler/)について学ぶ
* etcdの公式 [ドキュメント](https://etcd.io/docs/)を読む
From 19dafaa6eebf725331d7bac945ece6a11e676ea6 Mon Sep 17 00:00:00 2001
From: Riita <42636694+riita10069@users.noreply.github.com>
Date: Fri, 8 Oct 2021 17:32:21 +0900
Subject: [PATCH 09/83] Update components.md
---
content/ja/docs/concepts/overview/components.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/concepts/overview/components.md b/content/ja/docs/concepts/overview/components.md
index a1437cf611..3f696afd57 100644
--- a/content/ja/docs/concepts/overview/components.md
+++ b/content/ja/docs/concepts/overview/components.md
@@ -88,7 +88,7 @@ kube-controller-managerを使用すると、cloud-controller-managerは複数の
アドオンはクラスター機能を実装するためにKubernetesリソース({{< glossary_tooltip term_id="daemonset" >}}、{{< glossary_tooltip term_id="deployment" >}}など)を使用します。
アドオンはクラスターレベルの機能を提供しているため、アドオンのリソースで名前空間が必要なものは`kube-system`名前空間に属します。
-いくつかのアドオンについて以下で説明します。より多くの利用可能なアドオンのリストは、[アドオン](/docs/concepts/cluster-administration/addons/) をご覧ください。
+いくつかのアドオンについて以下で説明します。より多くの利用可能なアドオンのリストは、[アドオン](/ja/docs/concepts/cluster-administration/addons/) をご覧ください。
### DNS
From 40accf234d27d26685e3ce66f357ef905efe2ad4 Mon Sep 17 00:00:00 2001
From: Riita <42636694+riita10069@users.noreply.github.com>
Date: Fri, 8 Oct 2021 17:37:18 +0900
Subject: [PATCH 10/83] Update overview.md
---
content/ja/docs/concepts/configuration/overview.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/ja/docs/concepts/configuration/overview.md b/content/ja/docs/concepts/configuration/overview.md
index ee63e066d8..74bc641df6 100644
--- a/content/ja/docs/concepts/configuration/overview.md
+++ b/content/ja/docs/concepts/configuration/overview.md
@@ -44,7 +44,7 @@ weight: 10
*これは順序付けの必要性を意味します* - `Pod`がアクセスしたい`Service`は`Pod`自身の前に作らなければならず、そうしないと環境変数は注入されません。DNSにはこの制限はありません。
-- (強くお勧めしますが)[クラスターアドオン](/docs/concepts/cluster-administration/addons/)の1つの選択肢はDNSサーバーです。DNSサーバーは、新しい`Service`についてKubernetes APIを監視し、それぞれに対して一連のDNSレコードを作成します。クラスタ全体でDNSが有効になっている場合は、すべての`Pod`が自動的に`Services`の名前解決を行えるはずです。
+- (強くお勧めしますが)[クラスターアドオン](/ja/docs/concepts/cluster-administration/addons/)の1つの選択肢はDNSサーバーです。DNSサーバーは、新しい`Service`についてKubernetes APIを監視し、それぞれに対して一連のDNSレコードを作成します。クラスタ全体でDNSが有効になっている場合は、すべての`Pod`が自動的に`Services`の名前解決を行えるはずです。
- どうしても必要な場合以外は、Podに`hostPort`を指定しないでください。Podを`hostPort`にバインドすると、Podがスケジュールできる場所の数を制限します、それぞれの<`hostIP`、 `hostPort`、`protocol`>の組み合わせはユニークでなければならないからです。`hostIP`と`protocol`を明示的に指定しないと、Kubernetesはデフォルトの`hostIP`として`0.0.0.0`を、デフォルトの `protocol`として`TCP`を使います。
@@ -58,7 +58,7 @@ weight: 10
## ラベルの使用
-- `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`のように、アプリケーションまたはデプロイメントの __セマンティック属性__ を識別する[ラベル](/ja/docs/concepts/overview/working-with-objects/labels/)を定義して使いましょう。これらのラベルを使用して、他のリソースに適切なポッドを選択できます。例えば、すべての`tier:frontend`を持つPodを選択するServiceや、`app:myapp`に属するすべての`phase:test`コンポーネント、などです。このアプローチの例を知るには、[ゲストブック](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)アプリも合わせてご覧ください。
+- `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`のように、アプリケーションまたはデプロイメントの __セマンティック属性__ を識別する[ラベル](/ja/docs/concepts/overview/working-with-objects/labels/)を定義して使いましょう。これらのラベルを使用して、他のリソースに適切なPodを選択できます。例えば、すべての`tier:frontend`を持つPodを選択するServiceや、`app:myapp`に属するすべての`phase:test`コンポーネント、などです。このアプローチの例を知るには、[ゲストブック](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)アプリも合わせてご覧ください。
セレクターからリリース固有のラベルを省略することで、Serviceを複数のDeploymentにまたがるように作成できます。 [Deployment](/ja/docs/concepts/workloads/controllers/deployment/)により、ダウンタイムなしで実行中のサービスを簡単に更新できます。
@@ -68,7 +68,7 @@ weight: 10
## コンテナイメージ
-[imagePullPolicy](/docs/concepts/containers/images/#updating-images)とイメージのタグは、[kubelet](/docs/reference/command-line-tools-reference/kubelet/)が特定のイメージをpullしようとしたときに作用します。
+[imagePullPolicy](/ja/docs/concepts/containers/images/#updating-images)とイメージのタグは、[kubelet](/docs/reference/command-line-tools-reference/kubelet/)が特定のイメージをpullしようとしたときに作用します。
- `imagePullPolicy: IfNotPresent`: ローカルでイメージが見つからない場合にのみイメージをpullします。
@@ -96,8 +96,8 @@ weight: 10
- `kubectl apply -f <directory>`を使いましょう。これを使うと、ディレクトリ内のすべての`.yaml`、`.yml`、および`.json`ファイルが`apply`に渡されます。
-- `get`や`delete`を行う際は、特定のオブジェクト名を指定するのではなくラベルセレクターを使いましょう。[ラベルセレクター](/ja/docs/concepts/overview/working-with-objects/labels/#label-selectors)と[ラベルの効果的な使い方](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)のセクションを参照してください。
+- `get`や`delete`を行う際は、特定のオブジェクト名を指定するのではなくラベルセレクターを使いましょう。[ラベルセレクター](/ja/docs/concepts/overview/working-with-objects/labels/#label-selectors)と[ラベルの効果的な使い方](/ja/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)のセクションを参照してください。
-- 単一コンテナのDeploymentやServiceを素早く作成するなら、`kubectl create deployment`や`kubectl expose`を使いましょう。一例として、[Serviceを利用したクラスター内のアプリケーションへのアクセス](/docs/tasks/access-application-cluster/service-access-application-cluster/)を参照してください。
+- 単一コンテナのDeploymentやServiceを素早く作成するなら、`kubectl create deployment`や`kubectl expose`を使いましょう。一例として、[Serviceを利用したクラスター内のアプリケーションへのアクセス](/ja/docs/tasks/access-application-cluster/service-access-application-cluster/)を参照してください。
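To illustrate the label-selector advice touched by this patch, hypothetical commands (label values assumed) that select objects by semantic labels rather than by name:

```shell
kubectl get pods -l app=myapp,tier=frontend        # list Pods matching both labels
kubectl delete deployment,services -l app=myapp    # delete all matching Deployments and Services
```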
From 56a0c18256c7d34fb294fb126a6baed5fba082d8 Mon Sep 17 00:00:00 2001
From: Riita <42636694+riita10069@users.noreply.github.com>
Date: Fri, 8 Oct 2021 17:46:33 +0900
Subject: [PATCH 11/83] Update scale-stateful-set.md
---
content/ja/docs/tasks/run-application/scale-stateful-set.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/ja/docs/tasks/run-application/scale-stateful-set.md b/content/ja/docs/tasks/run-application/scale-stateful-set.md
index 155b93d069..bfde1d0afd 100644
--- a/content/ja/docs/tasks/run-application/scale-stateful-set.md
+++ b/content/ja/docs/tasks/run-application/scale-stateful-set.md
@@ -14,7 +14,7 @@ weight: 50
* StatefulSetはKubernetesバージョン1.5以降でのみ利用可能です。
Kubernetesのバージョンを確認するには、`kubectl version`を実行してください。
-* すべてのステートフルアプリケーションがうまくスケールできるわけではありません。StatefulSetがスケールするかどうかわからない場合は、[StatefulSetの概念](/ja/docs/concepts/workloads/controllers/statefulset/)または[StatefulSetのチュートリアル](/docs/tutorials/stateful-application/basic-stateful-set/)を参照してください。
+* すべてのステートフルアプリケーションがうまくスケールできるわけではありません。StatefulSetがスケールするかどうかわからない場合は、[StatefulSetの概念](/ja/docs/concepts/workloads/controllers/statefulset/)または[StatefulSetのチュートリアル](/ja/docs/tutorials/stateful-application/basic-stateful-set/)を参照してください。
* ステートフルアプリケーションクラスターが完全に健全であると確信できる場合にのみ、スケーリングを実行してください。
@@ -40,7 +40,7 @@ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
### StatefulSetのインプレースアップデート
-コマンドライン上でレプリカ数を変更する代わりに、StatefulSetに[インプレースアップデート](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources)が可能です。
+コマンドライン上でレプリカ数を変更する代わりに、StatefulSetに[インプレースアップデート](/ja/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources)が可能です。
StatefulSetが最初に `kubectl apply`で作成されたのなら、StatefulSetマニフェストの`.spec.replicas`を更新してから、`kubectl apply`を実行します:
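A short sketch of the two approaches described in the patched page (the StatefulSet name `web` and the replica count are hypothetical):

```shell
# Scale from the command line
kubectl scale statefulsets web --replicas=5

# Or perform an in-place update: edit .spec.replicas in the manifest, then re-apply
kubectl apply -f web-statefulset.yaml
```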
From f8dcb7e792af75a752615385e714cfb605a9f33c Mon Sep 17 00:00:00 2001
From: ixodie
Date: Thu, 14 Oct 2021 16:04:38 -0700
Subject: [PATCH 12/83] Removed OVS
Open vSwitch is in no way required for Kubernetes to function. Kubernetes is not mentioned once on this project's website, nor are there any instructions for how you might use this in a k8s environment.
---
.../en/docs/concepts/cluster-administration/networking.md | 6 ------
1 file changed, 6 deletions(-)
diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md
index 9929b75d14..0b812648b6 100644
--- a/content/en/docs/concepts/cluster-administration/networking.md
+++ b/content/en/docs/concepts/cluster-administration/networking.md
@@ -266,12 +266,6 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p
The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
-### OpenVSwitch
-
-[OpenVSwitch](https://www.openvswitch.org/) is a somewhat more mature but also
-complicated way to build an overlay network. This is endorsed by several of the
-"Big Shops" for networking.
-
### OVN (Open Virtual Networking)
OVN is an opensource network virtualization solution developed by the
From bf38b5f17a28b12508881f0388a9b153c6c19dee Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rodolfo=20Mart=C3=ADnez=20Vega?=
Date: Fri, 15 Oct 2021 13:44:37 -0500
Subject: [PATCH 13/83] [es] Add
content/es/docs/concepts/workloads/pods/init-containers.md
---
.../workloads/pods/init-containers.md | 341 ++++++++++++++++++
1 file changed, 341 insertions(+)
create mode 100644 content/es/docs/concepts/workloads/pods/init-containers.md
diff --git a/content/es/docs/concepts/workloads/pods/init-containers.md b/content/es/docs/concepts/workloads/pods/init-containers.md
new file mode 100644
index 0000000000..223d69214f
--- /dev/null
+++ b/content/es/docs/concepts/workloads/pods/init-containers.md
@@ -0,0 +1,341 @@
+---
+title: Contenedores de Inicialización
+content_type: concept
+weight: 40
+---
+
+
+Esta página proporciona una descripción general de los contenedores de inicialización: contenedores especializados que se ejecutan
+antes de los contenedores de aplicación en un {{< glossary_tooltip text="Pod" term_id="pod" >}}.
+Los contenedores de inicialización pueden contener utilidades o scripts de instalación no presentes en una imagen de aplicación.
+
+Tú puedes especificar contenedores de inicialización en la especificación del Pod junto con el array `containers`
+(el cual describe los contenedores de aplicación).
+
+
+## Entendiendo los contenedores de inicialización
+
+Un {{< glossary_tooltip text="Pod" term_id="pod" >}} puede tener múltiples contenedores
+ejecutando applicaciones dentro de él, pero también puede tener uno o más contenedores de inicialización
+que se ejecutan antes de que se inicien los contenedores de aplicación.
+
+Los contenedores de inicialización son exactamente iguales a los contenedores regulares excepto por:
+
+* Los contenedores de inicialización siempre se ejecutan hasta su finalización.
+* Cada contenedor de inicialización debe completarse correctamente antes de que comience el siguiente.
+
+Si el contenedor de inicialización de un Pod falla, kubelet reinicia repetidamente ese contenedor de inicialización hasta que tenga éxito.
+Sin embargo, si el Pod tiene una `restartPolicy` de `Never` y un contenedor de inicialización falla durante el inicio de ese Pod, Kubernetes trata en general al Pod como fallido.
+
+Para especificar un contenedor de inicialización para un Pod, agrega el campo `initContainers` en
+la [especificación del Pod](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec),
+como un array de elementos `container` (similar al campo `containers` de aplicación y su contenido).
+Consulta [Container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container) en la
+referencia de API para más detalles.
+
+El estado de los contenedores de inicialización se devuelve en el campo `.status.initContainerStatuses`
+como un array de los estados del contenedor (similar al campo `.status.containerStatuses`).
+
+### Diferencias con los contenedores regulares
+
+Los contenedores de inicialización admiten todos los campos y características de los contenedores de aplicaciones,
+incluidos los límites de recursos, los volúmenes y la configuración de seguridad. Sin embargo, las
+solicitudes de recursos y los límites para un contenedor de inicialización se manejan de manera diferente,
+como se documenta en [Recursos](#resources).
+
+Además, los contenedores de inicialización no admiten `lifecycle`, `livenessProbe`, `readinessProbe` o
+`startupProbe` porque deben de ejecutarse hasta su finalización antes de que el Pod pueda estar listo.
+
+Si especificas varios contenedores de inicialización para un Pod, kubelet ejecuta cada contenedor
+de inicialización secuencialmente. Cada contenedor de inicialización debe tener éxito antes de que se pueda ejecutar el siguiente.
+Cuando todos los contenedores de inicialización se hayan ejecutado hasta su finalización, kubelet inicializa
+los contenedores de aplicación para el Pod y los ejecuta como de costumbre.
+
+### Usando contenedores de inicialización
+
+Dado que los contenedores de inicialización tienen imágenes separadas de los contenedores de aplicaciones, estos
+tienen algunas ventajas sobre el código relacionado de inicio:
+
+* Los contenedores de inicialización pueden contener utilidades o código personalizado para la configuración que no están presentes en una
+ imagen de aplicación. Por ejemplo, no hay necesidad de hacer una imagen `FROM` de otra imagen solo para usar una herramienta como
+ `sed`, `awk`, `python` o `dig` durante la instalación.
+* Los roles de constructor e implementador de imágenes de aplicación pueden funcionar de forma independiente sin
+ la necesidad de construir conjuntamente una sola imagen de aplicación.
+* Los contenedores de inicialización pueden ejecutarse con una vista diferente al sistema de archivos que los contenedores de aplicaciones en
+ el mismo Pod. En consecuencia, se les puede dar acceso a
+ {{< glossary_tooltip text="Secrets" term_id="secret" >}} a los que los contenedores de aplicaciones no pueden acceder.
+* Debido a que los contenedores de inicialización se ejecutan hasta su finalización antes de que se inicien los contenedores de aplicaciones, los contenedores de inicialización ofrecen
+ un mecanismo para bloquear o retrasar el inicio del contenedor de aplicación hasta que se cumplan una serie de condiciones previas. Una vez
+ que las condiciones previas se cumplen, todos los contenedores de aplicaciones de un Pod pueden iniciarse en paralelo.
+* Los contenedores de inicialización pueden ejecutar de forma segura utilidades o código personalizado que de otro modo harían a una imagen de aplicación
+ de contenedor menos segura. Si mantienes separadas herramientas innecesarias, puedes limitar la superficie de ataque
+ a la imagen del contenedor de aplicación.
+
+### Ejemplos
+
+A continuación, se muestran algunas ideas sobre cómo utilizar los contenedores de inicialización:
+
+* Esperar a que se cree un {{< glossary_tooltip text="Service" term_id="service">}}
+ usando un comando de una línea de shell:
+
+ ```shell
+ for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
+ ```
+
+* Registrar este Pod con un servidor remoto desde la downward API con un comando como:
+
+ ```shell
+ curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
+ ```
+
+* Esperar algo de tiempo antes de iniciar el contenedor de aplicación con un comando como:
+
+ ```shell
+ sleep 60
+ ```
+
+* Clonar un repositorio de Git en un {{< glossary_tooltip text="Volume" term_id="volume" >}}
+
+* Colocar valores en un archivo de configuración y ejecutar una herramienta de plantilla para generar
+ dinámicamente un archivo de configuración para el contenedor de aplicación principal. Por ejemplo,
+ colocar el valor `POD_IP` en una configuración y generar el archivo de configuración
+ de la aplicación principal usando Jinja.
+
+#### Contenedores de inicialización en uso
+
+Este ejemplo define un simple Pod que tiene dos contenedores de inicialización.
+El primero espera por `myservice` y el segundo espera por `mydb`. Una vez que ambos
+contenedores de inicialización se completen, el Pod ejecuta el contenedor de aplicación desde su sección `spec`.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: myapp-pod
+ labels:
+ app: myapp
+spec:
+ containers:
+ - name: myapp-container
+ image: busybox:1.28
+ command: ['sh', '-c', 'echo ¡La aplicación se está ejecutando! && sleep 3600']
+ initContainers:
+ - name: init-myservice
+ image: busybox:1.28
+ command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo esperando a myservice; sleep 2; done"]
+ - name: init-mydb
+ image: busybox:1.28
+ command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo esperando a mydb; sleep 2; done"]
+```
+
+Puedes iniciar este Pod ejecutando:
+
+```shell
+kubectl apply -f myapp.yaml
+```
+
+El resultado es similar a esto:
+
+```shell
+pod/myapp-pod created
+```
+
+Y verificar su estado con:
+
+```shell
+kubectl get -f myapp.yaml
+```
+
+El resultado es similar a esto:
+
+```shell
+NAME READY STATUS RESTARTS AGE
+myapp-pod 0/1 Init:0/2 0 6m
+```
+
+o para más detalles:
+
+```shell
+kubectl describe -f myapp.yaml
+```
+
+El resultado es similar a esto:
+
+```shell
+Name: myapp-pod
+Namespace: default
+[...]
+Labels: app=myapp
+Status: Pending
+[...]
+Init Containers:
+ init-myservice:
+[...]
+ State: Running
+[...]
+ init-mydb:
+[...]
+ State: Waiting
+ Reason: PodInitializing
+ Ready: False
+[...]
+Containers:
+ myapp-container:
+[...]
+ State: Waiting
+ Reason: PodInitializing
+ Ready: False
+[...]
+Events:
+ FirstSeen LastSeen Count From SubObjectPath Type Reason Message
+ --------- -------- ----- ---- ------------- -------- ------ -------
+ 16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201
+ 16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image "busybox"
+ 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image "busybox"
+ 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container with docker id 5ced34a04634; Security:[seccomp=unconfined]
+ 13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container with docker id 5ced34a04634
+```
+
+Para ver los logs de los contenedores de inicialización en este Pod ejecuta:
+
+```shell
+kubectl logs myapp-pod -c init-myservice # Inspecciona el primer contenedor de inicialización
+kubectl logs myapp-pod -c init-mydb # Inspecciona el segundo contenedor de inicialización
+```
+
+En este punto, estos contenedores de inicialización estarán esperando para descubrir los Servicios denominados
+`mydb` y `myservice`.
+
+Aquí hay una configuración que puedes usar para que aparezcan esos Servicios:
+
+```yaml
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: myservice
+spec:
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: mydb
+spec:
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9377
+```
+
+Para crear los servicios de `mydb` y `myservice`:
+
+```shell
+kubectl apply -f services.yaml
+```
+
+El resultado es similar a esto:
+
+```shell
+service/myservice created
+service/mydb created
+```
+
+Luego verás que esos contenedores de inicialización se completan y que el Pod `myapp-pod`
+pasa al estado `Running`:
+
+```shell
+kubectl get -f myapp.yaml
+```
+
+El resultado es similar a esto:
+
+```shell
+NAME READY STATUS RESTARTS AGE
+myapp-pod 1/1 Running 0 9m
+```
+
+Este sencillo ejemplo debería servirte de inspiración para crear tus propios
+contenedores de inicialización. [What's next](#what-s-next) contiene un enlace a un ejemplo más detallado.
+
+## Comportamiento detallado
+
+Durante el inicio del Pod, kubelet retrasa la ejecución de contenedores de inicialización hasta que la red
+y el almacenamiento estén listos. Después, kubelet ejecuta los contenedores de inicialización del Pod en el orden que
+aparecen en la especificación del Pod.
+
+Cada contenedor de inicialización debe salir correctamente antes de que
+comience el siguiente contenedor. Si un contenedor falla en iniciar debido al tiempo de ejecución o
+sale con una falla, se vuelve a intentar de acuerdo con el `restartPolicy` del Pod. Sin embargo,
+si el `restartPolicy` del Pod se establece en `Always`, los contenedores de inicialización usan
+el `restartPolicy` como `OnFailure`.
+
+Un Pod no puede estar `Ready` sino hasta que todos los contenedores de inicialización hayan tenido éxito. Los puertos en un
+contenedor de inicialización no se agregan a un Servicio. Un Pod que se está inicializando,
+está en el estado de `Pending`, pero debe tener una condición `Initialized` configurada como falsa.
+
+Si el Pod [se reinicia](#pod-restart-reasons) o es reiniciado, todos los contenedores de inicialización
+deben ejecutarse de nuevo.
+
+Los cambios en la especificación del contenedor de inicialización se limitan al campo de la imagen del contenedor.
+Alterar un campo de la imagen del contenedor de inicialización equivale a reiniciar el Pod.
+
+Debido a que los contenedores de inicialización se pueden reiniciar, reintentar o volverse a ejecutar, el código del contenedor de inicialización
+debe ser idempotente. En particular, el código que escribe en archivos en `EmptyDirs`
+debe estar preparado para la posibilidad de que ya exista un archivo de salida.
+
+Los contenedores de inicialización tienen todos los campos de un contenedor de aplicaciones. Sin embargo, Kubernetes
+prohíbe el uso de `readinessProbe` porque los contenedores de inicialización no pueden
+definir el `readiness` distinto de la finalización. Esto se aplica durante la validación.
+
+Usa `activeDeadlineSeconds` en el Pod para prevenir que los contenedores de inicialización fallen por siempre.
+La fecha límite incluye contenedores de inicialización.
+Sin embargo, se recomienda utilizar `activeDeadlineSeconds` si el usuario implementa su aplicación
+como un `Job` porque `activeDeadlineSeconds` tiene un efecto incluso después de que `initContainer` finaliza.
+El Pod que ya se está ejecutando correctamente sería eliminado por `activeDeadlineSeconds` si lo estableces.
+
+El nombre de cada aplicación y contenedor de inicialización en un Pod debe ser único; un
+error de validación es arrojado para cualquier contenedor que comparta un nombre con otro.
+
+### Recursos
+
+Dado el orden y la ejecución de los contenedores de inicialización, las siguientes reglas
+para el uso de recursos se aplican:
+
+* La más alta de cualquier solicitud particular de recurso o límite definido en todos los contenedores
+ de inicialización es la *solicitud/límite de inicialización efectiva*. Si algún recurso no tiene un
+ límite de recursos especificado éste se considera como el límite más alto.
+* La *solicitud/límite efectiva* para un recurso es la más alta entre:
+ * la suma de todas las solicitudes/límites de los contenedores de aplicación, y
+ * la solicitud/límite de inicialización efectiva para un recurso
+* La planificación es hecha con base en las solicitudes/límites efectivos, lo que significa
+ que los contenedores de inicialización pueden reservar recursos para la inicialización que no se utilizan
+ durante la vida del Pod.
+* El nivel de `QoS` (calidad de servicio) del *nivel de `QoS` efectivo* del Pod es el
+ nivel de `QoS` tanto para los contenedores de inicialización como para los contenedores de aplicación.
+
+La cuota y los límites son aplicados con base en la solicitud y límite efectivos de Pod.
+
+Los grupos de control de nivel de Pod (cgroups) se basan en la solicitud y el límite de Pod efectivos, al igual que el scheduler.
+
+### Razones de reinicio del Pod
+
+Un Pod puede reiniciarse, provocando la re-ejecución de los contenedores de inicialización por las siguientes razones:
+
+* Se reinicia el contenedor de infraestructura del Pod. Esto es poco común y debería hacerlo alguien con acceso de root a los nodos.
+* Todos los contenedores en un Pod son terminados mientras `restartPolicy` esté configurado en `Always`,
+ forzando un reinicio y el registro de finalización del contenedor de inicialización se ha perdido debido a
+ la recolección de basura.
+
+El Pod no se reiniciará cuando se cambie la imagen del contenedor de inicialización o cuando
+se pierda el registro de finalización del contenedor de inicialización debido a la recolección de basura. Esto
+se aplica a Kubernetes v1.20 y posteriores. Si estás utilizando una versión anterior de
+Kubernetes, consulta la documentación de la versión que estás utilizando.
+
+## {{% heading "whatsnext" %}}
+
+* Lee acerca sobre [creando un Pod que tiene un contenedor de inicialización](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
+* Aprende cómo [depurar contenedores de inicialización](/docs/tasks/debug-application-cluster/debug-init-containers/)
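To make the resource rules on the new page concrete, a hypothetical worked example (all names and values are assumptions): the effective CPU request of the Pod below is max(highest init request, sum of app requests) = max(200m, 50m + 50m) = 200m, so the scheduler reserves 200m even though the app containers only use 100m once initialization finishes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: effective-request-demo
spec:
  initContainers:
  - name: init-a
    image: busybox:1.28
    command: ['sh', '-c', 'true']
    resources:
      requests:
        cpu: "200m"   # highest init request -> effective init request
  - name: init-b
    image: busybox:1.28
    command: ['sh', '-c', 'true']
    resources:
      requests:
        cpu: "100m"
  containers:
  - name: app-a
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: "50m"    # app requests sum to 100m
  - name: app-b
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: "50m"
```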
From 68502af09c7e4bc137803d9abd0cc160ef18d1fc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rodolfo=20Mart=C3=ADnez=20Vega?=
Date: Fri, 15 Oct 2021 16:22:18 -0500
Subject: [PATCH 14/83] Improve translation of about
---
content/es/docs/concepts/workloads/pods/init-containers.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/es/docs/concepts/workloads/pods/init-containers.md b/content/es/docs/concepts/workloads/pods/init-containers.md
index 223d69214f..13e8a67533 100644
--- a/content/es/docs/concepts/workloads/pods/init-containers.md
+++ b/content/es/docs/concepts/workloads/pods/init-containers.md
@@ -337,5 +337,5 @@ Kubernetes, consulta la documentación de la versión que estás utilizando.
## {{% heading "whatsnext" %}}
-* Lee acerca sobre [creando un Pod que tiene un contenedor de inicialización](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
+* Lee acerca de [creando un Pod que tiene un contenedor de inicialización](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Aprende cómo [depurar contenedores de inicialización](/docs/tasks/debug-application-cluster/debug-init-containers/)
From 675e4274264536fb729370f5dbe5db6b0f78b9b8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rodolfo=20Mart=C3=ADnez=20Vega?=
Date: Tue, 19 Oct 2021 13:08:05 -0500
Subject: [PATCH 15/83] Update typos and apply suggestions from PR review
---
.../concepts/workloads/pods/init-containers.md | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/content/es/docs/concepts/workloads/pods/init-containers.md b/content/es/docs/concepts/workloads/pods/init-containers.md
index 13e8a67533..dd001860ef 100644
--- a/content/es/docs/concepts/workloads/pods/init-containers.md
+++ b/content/es/docs/concepts/workloads/pods/init-containers.md
@@ -5,18 +5,18 @@ weight: 40
---
-Esta página proporciona una descripción general de los contenedores de inicialización: contenedores especializados que se ejecutan
+Esta página proporciona una descripción general de los contenedores de inicialización (init containers): contenedores especializados que se ejecutan
antes de los contenedores de aplicación en un {{< glossary_tooltip text="Pod" term_id="pod" >}}.
Los contenedores de inicialización pueden contener utilidades o scripts de instalación no presentes en una imagen de aplicación.
-Tú puedes especificar contenedores de inicialización en la especificación del Pod junto con el array `containers`
+Tú puedes especificar contenedores de inicialización en la especificación del Pod junto con el arreglo de `containers`
(el cual describe los contenedores de aplicación).
## Entendiendo los contenedores de inicialización
Un {{< glossary_tooltip text="Pod" term_id="pod" >}} puede tener múltiples contenedores
-ejecutando applicaciones dentro de él, pero también puede tener uno o más contenedores de inicialización
+ejecutando aplicaciones dentro de él, pero también puede tener uno o más contenedores de inicialización
que se ejecutan antes de que se inicien los contenedores de aplicación.
Los contenedores de inicialización son exactamente iguales a los contenedores regulares excepto por:
@@ -25,16 +25,16 @@ Los contenedores de inicialización son exactamente iguales a los contenedores r
 * Cada contenedor de inicialización debe completarse correctamente antes de que comience el siguiente.
Si el contenedor de inicialización de un Pod falla, kubelet reinicia repetidamente ese contenedor de inicialización hasta que tenga éxito.
-Sin embargo, si el Pod tiene una `restartPolicy` de `Never` y un contenedor de inicialización falla durante el inicio de ese Pod, Kubernetes trata en general al Pod como fallido.
+Sin embargo, si el Pod tiene una `restartPolicy` de `Never` y un contenedor de inicialización falla durante el inicio de ese Pod, Kubernetes trata al Pod en general como fallido.
Para especificar un contenedor de inicialización para un Pod, agrega el campo `initContainers` en
la [especificación del Pod](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec),
-como un array de elementos `container` (similar al campo `containers` de aplicación y su contenido).
+como un arreglo de elementos `container` (similar al campo `containers` de aplicación y su contenido).
Consulta [Container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container) en la
referencia de API para más detalles.
El estado de los contenedores de inicialización se devuelve en el campo `.status.initContainerStatuses`
-como un array de los estados del contenedor (similar al campo `.status.containerStatuses`).
+como un arreglo de los estados del contenedor (similar al campo `.status.containerStatuses`).
### Diferencias con los contenedores regulares
@@ -259,7 +259,7 @@ myapp-pod 1/1 Running 0 9m
```
Este sencillo ejemplo debería servirte de inspiración para crear tus propios
-contenedores de inicialización. [What's next](#what-s-next) contiene un enlace a un ejemplo más detallado.
+contenedores de inicialización. [¿Qué es lo que sigue?](#what-s-next) contiene un enlace a un ejemplo más detallado.
## Comportamiento detallado
@@ -305,7 +305,7 @@ error de validación es arrojado para cualquier contenedor que comparta un nombr
Dado el orden y la ejecución de los contenedores de inicialización, las siguientes reglas
para el uso de recursos se aplican:
-* La más alta de cualquier solicitud particular de recurso o límite definido en todos los contenedores
+* La solicitud más alta de cualquier recurso o límite particular definido en todos los contenedores
de inicialización es la *solicitud/límite de inicialización efectiva*. Si algún recurso no tiene un
límite de recursos especificado éste se considera como el límite más alto.
* La *solicitud/límite efectiva* para un recurso es la más alta entre:
@@ -319,7 +319,7 @@ para el uso de recursos se aplican:
La cuota y los límites son aplicados con base en la solicitud y límite efectivos de Pod.
-Los grupos de control de nivel de Pod (cgroups) se basan en la solicitud y el límite de Pod efectivos, al igual que el scheduler.
+Los grupos de control de nivel de Pod (cgroups) se basan en la solicitud y el límite de Pod efectivos, al igual que el planificador de Kubernetes ({{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}).
### Razones de reinicio del Pod
From 1afd786d1bb60f919b732746de5afbce60547830 Mon Sep 17 00:00:00 2001
From: Ravi Gudimetla
Date: Thu, 21 Oct 2021 10:46:42 -0400
Subject: [PATCH 16/83] Apply suggestions from code review
Co-authored-by: Tim Bannister
---
.../docs/concepts/workloads/controllers/job.md | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md
index ac0b28ab6b..82c938ff05 100644
--- a/content/en/docs/concepts/workloads/controllers/job.md
+++ b/content/en/docs/concepts/workloads/controllers/job.md
@@ -344,11 +344,20 @@ up by the TTL controller after it finishes.
{{< note >}}
It is recommended to set `ttlSecondsAfterFinished` field because unmanaged jobs
-(jobs not created via high level controllers like cronjobs) have a default deletion
-policy of `orphanDependents` causing pods created by this job to be left around.
-Even though podgc collector eventually deletes these lingering pods, sometimes these
+(Jobs that you created directly, and not indirectly through other workload APIs
+such as CronJob) have a default deletion
+policy of `orphanDependents` causing Pods created by an unmanaged Job to be left around
+after that Job is fully deleted.
+Even though the {{< glossary_tooltip text="control plane" term_id="control-plane" >}} eventually
+[garbage collects](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)
+the Pods from a deleted Job after they either fail or complete, sometimes those
lingering pods may cause cluster performance degradation or in worst case cause the
-cluster to go down.
+cluster to go offline due to this degradation.
+
+You can use [LimitRanges](/docs/concepts/policy/limit-range/) and
+[ResourceQuotas](/docs/concepts/policy/resource-quotas/) to place a
+cap on the amount of resources that a particular namespace can
+consume.
{{< /note >}}
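As a hedged sketch of the ResourceQuota suggestion above (the namespace and values are hypothetical), a quota that caps what a namespace can consume:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-cap
  namespace: my-namespace
spec:
  hard:
    pods: "20"             # at most 20 Pods in the namespace
    requests.memory: 4Gi   # total memory requests across all Pods
    limits.memory: 8Gi     # total memory limits across all Pods
```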
From 3514d186bbfc2a8804f0488d5e60272fed445c1b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rodolfo=20Mart=C3=ADnez=20Vega?=
Date: Tue, 26 Oct 2021 11:57:35 -0500
Subject: [PATCH 17/83] Update
content/es/docs/concepts/workloads/pods/init-containers.md
Co-authored-by: Victor Morales
---
content/es/docs/concepts/workloads/pods/init-containers.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/es/docs/concepts/workloads/pods/init-containers.md b/content/es/docs/concepts/workloads/pods/init-containers.md
index dd001860ef..fad0220899 100644
--- a/content/es/docs/concepts/workloads/pods/init-containers.md
+++ b/content/es/docs/concepts/workloads/pods/init-containers.md
@@ -76,7 +76,7 @@ tienen algunas ventajas sobre el código relacionado de inicio:
A continuación, se muestran algunas ideas sobre cómo utilizar los contenedores de inicialización:
* Esperar a que se cree un {{< glossary_tooltip text="Service" term_id="service">}}
- usando un comando de una línea de shell:
+ usando una sola línea de comando de shell:
```shell
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
From b2035168a7a936a4e69c083741e01b4f048d286f Mon Sep 17 00:00:00 2001
From: Douglas Hellinger
Date: Wed, 27 Oct 2021 15:58:55 +0800
Subject: [PATCH 18/83] Clarify why cordon all but 4 nodes.
---
content/en/docs/tutorials/stateful-application/zookeeper.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md
index 3ed1cd454b..96c2ebcd0f 100644
--- a/content/en/docs/tutorials/stateful-application/zookeeper.md
+++ b/content/en/docs/tutorials/stateful-application/zookeeper.md
@@ -894,8 +894,7 @@ Use this command to get the nodes in your cluster.
kubectl get nodes
```
-Use [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordon) to
-cordon all but four of the nodes in your cluster.
+This tutorial assumes a cluster with at least four nodes. If the cluster has more than four, use [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordon) to cordon all but four nodes. Constraining to four nodes will ensure Kubernetes encounters affinity and PodDisruptionBudget constraints when scheduling zookeeper Pods in the following maintenance simulation.
```shell
kubectl cordon <node-name>
From b48ddd9e90506e8b5bc3c1a8a0708df0ccc7a80a Mon Sep 17 00:00:00 2001
From: Brad Beck
Date: Thu, 28 Oct 2021 10:02:50 -0500
Subject: [PATCH 19/83] Fix zsh completion setup
---
.../docs/tasks/tools/included/optional-kubectl-configs-zsh.md | 2 +-
content/es/docs/tasks/tools/install-kubectl.md | 2 +-
content/fr/docs/tasks/tools/install-kubectl.md | 2 +-
content/id/docs/tasks/tools/install-kubectl.md | 2 +-
content/ja/docs/tasks/tools/install-kubectl.md | 2 +-
.../docs/tasks/tools/included/optional-kubectl-configs-zsh.md | 2 +-
content/ru/docs/tasks/tools/install-kubectl.md | 2 +-
content/vi/docs/tasks/tools/install-kubectl.md | 2 +-
.../docs/tasks/tools/included/optional-kubectl-configs-zsh.md | 2 +-
9 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
index 95c8c0aa71..b7d9044605 100644
--- a/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
+++ b/content/en/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
@@ -16,7 +16,7 @@ If you have an alias for kubectl, you can extend shell completion to work with t
```zsh
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
After reloading your shell, kubectl autocompletion should be working.
diff --git a/content/es/docs/tasks/tools/install-kubectl.md b/content/es/docs/tasks/tools/install-kubectl.md
index fdca3db92c..40250a0c0c 100644
--- a/content/es/docs/tasks/tools/install-kubectl.md
+++ b/content/es/docs/tasks/tools/install-kubectl.md
@@ -493,7 +493,7 @@ Si tienes alias para kubectl, puedes extender el completado de intérprete de co
```zsh
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
Tras recargar tu intérprete de comandos, el auto-completado de kubectl debería funcionar.
diff --git a/content/fr/docs/tasks/tools/install-kubectl.md b/content/fr/docs/tasks/tools/install-kubectl.md
index f61f5d5bee..2d57145edd 100644
--- a/content/fr/docs/tasks/tools/install-kubectl.md
+++ b/content/fr/docs/tasks/tools/install-kubectl.md
@@ -457,7 +457,7 @@ Si vous avez un alias pour kubectl, vous pouvez étendre la completion de votre
```shell
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
Après avoir rechargé votre shell, l'auto-complétion de kubectl devrait fonctionner.
diff --git a/content/id/docs/tasks/tools/install-kubectl.md b/content/id/docs/tasks/tools/install-kubectl.md
index bc112c3cdd..ce214ed799 100644
--- a/content/id/docs/tasks/tools/install-kubectl.md
+++ b/content/id/docs/tasks/tools/install-kubectl.md
@@ -472,7 +472,7 @@ Jika kamu menggunakan alias untuk `kubectl`, kamu masih dapat menggunakan fitur
```shell
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
Setelah memuat ulang terminal, penyelesaian otomatis dari `kubectl` seharusnya sudah dapat bekerja.
diff --git a/content/ja/docs/tasks/tools/install-kubectl.md b/content/ja/docs/tasks/tools/install-kubectl.md
index a33b475671..688c996b1b 100644
--- a/content/ja/docs/tasks/tools/install-kubectl.md
+++ b/content/ja/docs/tasks/tools/install-kubectl.md
@@ -484,7 +484,7 @@ kubectlにエイリアスを張っている場合は、以下のようにシェ
```zsh
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
シェルをリロードしたあとに、kubectlの自動補完が機能するはずです。
diff --git a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
index e81403300b..0d2a563c1b 100644
--- a/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
+++ b/content/ko/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
@@ -16,7 +16,7 @@ kubectl에 대한 앨리어스가 있는 경우, 해당 앨리어스로 작업
```zsh
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
셸을 다시 로드하면, kubectl 자동 완성 기능이 작동할 것이다.
diff --git a/content/ru/docs/tasks/tools/install-kubectl.md b/content/ru/docs/tasks/tools/install-kubectl.md
index 8b5da7adef..a5ab438012 100644
--- a/content/ru/docs/tasks/tools/install-kubectl.md
+++ b/content/ru/docs/tasks/tools/install-kubectl.md
@@ -461,7 +461,7 @@ source <(kubectl completion zsh)
```shell
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
После перезагрузки командной оболочки должны появляться дополнения ввода kubectl.
diff --git a/content/vi/docs/tasks/tools/install-kubectl.md b/content/vi/docs/tasks/tools/install-kubectl.md
index fe964d0b59..ae51a8b485 100644
--- a/content/vi/docs/tasks/tools/install-kubectl.md
+++ b/content/vi/docs/tasks/tools/install-kubectl.md
@@ -450,7 +450,7 @@ Nếu bạn có alias cho kubectl, bạn có thể mở rộng shell completion
```shell
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
Sau khi tải lại shell, kubectl autocompletion sẽ hoạt động.
diff --git a/content/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh.md b/content/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
index 0f32593d55..b439db63bc 100644
--- a/content/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
+++ b/content/zh/docs/tasks/tools/included/optional-kubectl-configs-zsh.md
@@ -32,7 +32,7 @@ If you have an alias for kubectl, you can extend shell completion to work with t
```zsh
echo 'alias k=kubectl' >>~/.zshrc
-echo 'complete -F __start_kubectl k' >>~/.zshrc
+echo 'compdef __start_kubectl k' >>~/.zshrc
```
+
\ No newline at end of file
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/index.md b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/index.md
new file mode 100644
index 0000000000..6ce9910f99
--- /dev/null
+++ b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/index.md
@@ -0,0 +1,118 @@
+---
+layout: blog
+title: 'Quality-of-Service for Memory Resources'
+date: 2021-09-01
+slug: qos-memory-resources
+---
+
+**Authors:** Tim Xu (Tencent Cloud)
+
+Kubernetes v1.22, released in August 2021, introduced a new alpha feature that improves how Linux nodes implement memory resource requests and limits.
+
+In prior releases, Kubernetes did not support memory quality guarantees.
+For example, if you set container resources as follows:
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: example
+spec:
+ containers:
+ - name: nginx
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "64Mi"
+ cpu: "500m"
+```
+`spec.containers[].resources.requests` (e.g. cpu, memory) is designed for scheduling. When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.
+
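+As a quick way to see this accounting on a live cluster, you can inspect a node's allocatable capacity and the requests already scheduled onto it (a sketch; the node name is illustrative):
+
+```shell
+kubectl describe node worker-1 | grep -A 6 "Allocated resources"
+```
+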
+`spec.containers[].resources.limits` is passed to the container runtime when the kubelet starts a container. CPU is considered a "compressible" resource. If your app starts hitting your CPU limits, Kubernetes starts throttling your container, giving your app potentially worse performance. However, it won’t be terminated. That is what "compressible" means.
+
+In cgroup v1, and prior to this feature, the container runtime never took `spec.containers[].resources.requests["memory"]` into account and effectively ignored it. This is unlike CPU, for which the container runtime considers both requests and limits. Furthermore, memory can't actually be compressed in cgroup v1. Because there is no way to throttle memory usage, if a container goes past its memory limit it will be terminated by the kernel with an OOM (Out of Memory) kill.
+
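+You can confirm an OOM kill after the fact from the container status (a sketch; `example` is the Pod defined above):
+
+```shell
+kubectl get pod example -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
+# prints "OOMKilled" if the kernel terminated the container for exceeding its memory limit
+```
+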
+Fortunately, cgroup v2 brings a new design and implementation to achieve full protection of memory. The new feature relies on cgroup v2, which most current Linux operating system releases already provide. With this experimental feature, [quality-of-service for pods and containers](/docs/tasks/configure-pod-container/quality-service-pod/) extends to cover not just CPU time but memory as well.
+
+## How does it work?
+Memory QoS uses the memory controller of cgroup v2 to guarantee memory resources in Kubernetes. The memory requests and limits of containers in a pod are used to set specific interfaces, `memory.min` and `memory.high`, provided by the memory controller. When `memory.min` is set to the memory request, memory is reserved and never reclaimed by the kernel; this is how Memory QoS ensures memory availability for Kubernetes pods. If memory limits are set on the container, the system needs to limit container memory usage; Memory QoS uses `memory.high` to throttle a workload approaching its memory limit, ensuring that the system is not overwhelmed by instantaneous memory allocation.
+
+![](./memory-qos-cal.svg)
+
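+On a node you can observe the resulting settings directly in the cgroup v2 filesystem. A sketch, assuming the systemd cgroup driver; the exact slice and scope names depend on your cgroup driver, pod UID, and container runtime:
+
+```shell
+# inspect the memory controller interfaces for one container's cgroup
+cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/<pod-slice>/<container-scope>/memory.min
+cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/<pod-slice>/<container-scope>/memory.high
+```
+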
+The following table details the specific functions of these two parameters and how they correspond to Kubernetes container resources.
+
+| File | Description |
+| ---- | ----------- |
+| `memory.min` | `memory.min` specifies a minimum amount of memory the cgroup must always retain, i.e., memory that can never be reclaimed by the system. If the cgroup's memory usage reaches this low limit and can't be increased, the system OOM killer will be invoked. We map it to the container's memory request. |
+| `memory.high` | `memory.high` is the memory usage throttle limit. This is the main mechanism to control a cgroup's memory use. If a cgroup's memory use goes over the high boundary specified here, the cgroup's processes are throttled and put under heavy reclaim pressure. The default is `max`, meaning there is no limit. We use a formula to calculate `memory.high`, depending on the container's memory limit or node allocatable memory (if the container's memory limit is empty) and a throttling factor. Please refer to the KEP for more details on the formula. |
+
+When container memory requests are made, kubelet passes `memory.min` to the back-end CRI runtime (such as containerd or cri-o) via the `Unified` field in CRI during container creation. The `memory.min` in the container-level cgroup will be set to:
+
+![](./container-memory-min.svg)
+i: the ith container in one pod
+
+Since the `memory.min` interface requires that the ancestor cgroup directories are all set, the pod and node cgroup directories need to be set correctly.
+
+`memory.min` in the pod-level cgroup:
+![](./pod-memory-min.svg)
+i: the ith container in one pod
+
+`memory.min` in the node-level cgroup:
+![](./node-memory-min.svg)
+i: the ith pod in one node, j: the jth container in one pod
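+
+Written out in plain notation, a sketch of the formulas depicted in the images above, following the index conventions just given:
+
+```latex
+% container-level cgroup
+memory.min_{i} = request_{memory,i}
+% pod-level cgroup: sum over the containers of the pod
+memory.min_{pod} = \sum_{i} request_{memory,i}
+% node-level cgroup: sum over pods i and their containers j
+memory.min_{node} = \sum_{i} \sum_{j} request_{memory,i,j}
+```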
+
+The kubelet manages the cgroup hierarchy at the pod level and node level directly, using the runc libcontainer library, while container cgroup limits are managed by the container runtime.
+
+For memory limits, in addition to the original way of limiting memory usage, Memory QoS adds a feature for throttling memory allocation. A throttling factor is introduced as a multiplier (the default is 0.8). If the result of multiplying the memory limit by the factor is greater than the memory request, kubelet sets `memory.high` to that value, again passed via `Unified` in CRI. If the container does not specify a memory limit, kubelet uses node allocatable memory instead. The `memory.high` in the container-level cgroup is set to:
+
+![](./container-memory-high.svg)
+i: the ith container in one pod
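+
+In plain notation, a sketch based strictly on the description above (throttling factor shown at its default of 0.8):
+
+```latex
+memory.high_{i} =
+  \begin{cases}
+    limit_{memory,i} \times 0.8 & \text{if a memory limit is set and the result exceeds } request_{memory,i} \\
+    allocatable_{node} \times 0.8 & \text{if no memory limit is set}
+  \end{cases}
+```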
+
+This can help improve stability when pod memory usage increases, ensuring that memory is throttled as it approaches the memory limit.
+
+## How do I use it?
+Here are the prerequisites for enabling Memory QoS on your Linux node; some of these are related to [Kubernetes support for cgroup v2](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2254-cgroup-v2).
+
+1. Kubernetes since v1.22
+2. [runc](https://github.com/opencontainers/runc) since v1.0.0-rc93; [containerd](https://containerd.io/) since 1.14; [cri-o](https://cri-o.io/) since 1.20
+3. Linux kernel minimum version: 4.15, recommended version: 5.2+
+4. A Linux image with cgroup v2 enabled, or cgroup v2 enabled manually via `unified_cgroup_hierarchy` (see the sketch below)
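+
+A sketch for item 4, assuming a systemd-based distribution booted via GRUB (paths and commands vary by distro):
+
+```shell
+# check which cgroup version is mounted; prints "cgroup2fs" on cgroup v2
+stat -fc %T /sys/fs/cgroup/
+
+# if not on cgroup v2, enable the unified hierarchy and reboot
+sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
+sudo update-grub && sudo reboot
+```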
+
+OCI runtimes such as runc and crun already support cgroups v2 [`Unified`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#unified), and Kubernetes CRI has also made the desired changes to support passing [`Unified`](https://github.com/kubernetes/kubernetes/pull/102578). However, CRI Runtime support is required as well. Memory QoS in Alpha phase is designed to support containerd and cri-o. Related PR [Feature: containerd-cri support LinuxContainerResources.Unified #5627](https://github.com/containerd/containerd/pull/5627) has been merged and will be released in containerd 1.6. CRI-O [implement kube alpha features for 1.22 #5207](https://github.com/cri-o/cri-o/pull/5207) is still a work in progress.
+
+With those prerequisites met, you can enable the memory QoS feature gate (see [Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/).
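+
+A sketch of what enabling the gate looks like in the kubelet configuration file (`/var/lib/kubelet/config.yaml` is a common default path, and this assumes no `featureGates` section exists yet):
+
+```shell
+cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
+featureGates:
+  MemoryQoS: true
+EOF
+sudo systemctl restart kubelet
+```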
+
+## How can I learn more?
+
+You can find more details as follows:
+- [Support Memory QoS with cgroup v2](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2570-memory-qos/#readme)
+- [cgroup v2](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2254-cgroup-v2/#readme)
+
+## How do I get involved?
+You can reach SIG Node by several means:
+- Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
+- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
+- [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fnode)
+
+You can also contact me directly:
+- GitHub / Slack: @xiaoxubeii
+- Email: xiaoxubeii@gmail.com
\ No newline at end of file
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/memory-qos-cal.svg b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/memory-qos-cal.svg
new file mode 100644
index 0000000000..545ee77b29
--- /dev/null
+++ b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/memory-qos-cal.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/node-memory-min.svg b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/node-memory-min.svg
new file mode 100644
index 0000000000..7c03aafc28
--- /dev/null
+++ b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/node-memory-min.svg
@@ -0,0 +1,98 @@
+
+
+
\ No newline at end of file
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/pod-memory-min.svg b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/pod-memory-min.svg
new file mode 100644
index 0000000000..84d3a940a0
--- /dev/null
+++ b/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/pod-memory-min.svg
@@ -0,0 +1,97 @@
+
+
+
\ No newline at end of file
From b2d8188cc79c2f096a62d5dee026cd6b7f821dde Mon Sep 17 00:00:00 2001
From: RinkiyaKeDad
Date: Tue, 16 Nov 2021 15:56:43 +0530
Subject: [PATCH 26/83] add hyperlink to list of sigs
Signed-off-by: RinkiyaKeDad
---
content/en/docs/contribute/participate/pr-wranglers.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/contribute/participate/pr-wranglers.md b/content/en/docs/contribute/participate/pr-wranglers.md
index 09ba0d7175..3e12c2706e 100644
--- a/content/en/docs/contribute/participate/pr-wranglers.md
+++ b/content/en/docs/contribute/participate/pr-wranglers.md
@@ -26,7 +26,7 @@ Each day in a week-long shift as PR Wrangler:
- If you need to verify content, comment on the PR and request more details.
- Assign relevant `sig/` label(s).
- If needed, assign reviewers from the `reviewers:` block in the file's front matter.
- - You can also tag a SIG for a review by commenting `@kubernetes/<sig>-pr-reviews` on the PR.
+ - You can also tag a [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md) for a review by commenting `@kubernetes/<sig>-pr-reviews` on the PR.
- Use the `/approve` comment to approve a PR for merging. Merge the PR when ready.
- PRs should have a `/lgtm` comment from another member before merging.
- Consider accepting technically accurate content that doesn't meet the [style guidelines](/docs/contribute/style/style-guide/). Open a new issue with the label `good first issue` to address style concerns.
From 0cb7bc3295cbdcee54cd2421be11b79920aee896 Mon Sep 17 00:00:00 2001
From: Hoon Jo
Date: Tue, 16 Nov 2021 11:17:30 +0900
Subject: [PATCH 27/83] Update managing-tls-in-a-cluster.md
In my humble view, showing the `kubectl certificate approve my-svc.my-namespace` command makes this procedure easier to follow.
Update managing-tls-in-a-cluster.md
Change some text per Divya Mohan's guidance.
Update managing-tls-in-a-cluster.md
Change text per sftim's guidance. Thank you so much!
---
.../docs/tasks/tls/managing-tls-in-a-cluster.md | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
index 6ea02a06af..de3b8fc09d 100644
--- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
+++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
@@ -162,9 +162,19 @@ Events:
## Get the Certificate Signing Request Approved
-Approving the certificate signing request is either done by an automated
-approval process or on a one off basis by a cluster administrator. More
-information on what this involves is covered below.
+Approving the [certificate signing request](/docs/reference/access-authn-authz/certificate-signing-requests/)
+is either done by an automated approval process or on a one-off basis by a cluster
+administrator. If you're authorized to approve a certificate request, you can do that
+manually using `kubectl`; for example:
+
+```shell
+kubectl certificate approve my-svc.my-namespace
+```
+
+```none
+certificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved
+```
+
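+To confirm, you can check the request's condition (a sketch):
+
+```shell
+kubectl get csr my-svc.my-namespace
+```
+
+The `CONDITION` column should show `Approved,Issued` once the certificate has been issued.
+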
## Download the Certificate and Use It
From 2a1ba00aa7b63de10f17cc8c9ce2a5cf66471bb6 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Thu, 18 Nov 2021 02:39:08 +0200
Subject: [PATCH 28/83] [ja] update apparmor.md
---
content/ja/docs/tutorials/clusters/apparmor.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/ja/docs/tutorials/clusters/apparmor.md b/content/ja/docs/tutorials/clusters/apparmor.md
index 5ee0201d71..55ec58c501 100644
--- a/content/ja/docs/tutorials/clusters/apparmor.md
+++ b/content/ja/docs/tutorials/clusters/apparmor.md
@@ -183,7 +183,7 @@ kubectl get events | grep hello-apparmor
コンテナがこのプロファイルで実際に実行されていることを確認するために、コンテナのproc attrをチェックします。
```shell
-kubectl exec hello-apparmor cat /proc/1/attr/current
+kubectl exec hello-apparmor -- cat /proc/1/attr/current
```
```
k8s-apparmor-example-deny-write (enforce)
@@ -192,7 +192,7 @@ k8s-apparmor-example-deny-write (enforce)
最後に、ファイルへの書き込みを行おうとすることで、プロファイルに違反すると何が起こるか見てみましょう。
```shell
-kubectl exec hello-apparmor touch /tmp/test
+kubectl exec hello-apparmor -- touch /tmp/test
```
```
touch: /tmp/test: Permission denied
From 2151d4e5c4f11f51dea4b5806440e4e2412b90fb Mon Sep 17 00:00:00 2001
From: Guangwen Feng
Date: Thu, 18 Nov 2021 11:06:58 +0800
Subject: [PATCH 29/83] [zh] Update apparmor.md
Signed-off-by: Guangwen Feng
---
content/zh/docs/tutorials/clusters/apparmor.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/zh/docs/tutorials/clusters/apparmor.md b/content/zh/docs/tutorials/clusters/apparmor.md
index 786fb2ae18..caca769b44 100644
--- a/content/zh/docs/tutorials/clusters/apparmor.md
+++ b/content/zh/docs/tutorials/clusters/apparmor.md
@@ -320,7 +320,7 @@ kubectl get events | grep hello-apparmor
我们可以通过检查该配置文件的 proc attr 来验证容器是否实际使用该配置文件运行:
```shell
-kubectl exec hello-apparmor cat /proc/1/attr/current
+kubectl exec hello-apparmor -- cat /proc/1/attr/current
```
```
k8s-apparmor-example-deny-write (enforce)
@@ -330,7 +330,7 @@ k8s-apparmor-example-deny-write (enforce)
最后,我们可以看到如果试图通过写入文件来违反配置文件,会发生什么情况:
```shell
-kubectl exec hello-apparmor touch /tmp/test
+kubectl exec hello-apparmor -- touch /tmp/test
```
```
touch: /tmp/test: Permission denied
From 65b7de840de74f21f59ba4b8367676f2874e7029 Mon Sep 17 00:00:00 2001
From: cndoit18
Date: Thu, 18 Nov 2021 11:23:00 +0800
Subject: [PATCH 30/83] [zh]: Add warning about using unsupported CRON_TZ
Signed-off-by: cndoit18
---
.../workloads/controllers/cron-jobs.md | 49 ++++++++++++-------
1 file changed, 31 insertions(+), 18 deletions(-)
diff --git a/content/zh/docs/concepts/workloads/controllers/cron-jobs.md b/content/zh/docs/concepts/workloads/controllers/cron-jobs.md
index 265a1fa3cb..c4239d1e82 100644
--- a/content/zh/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/zh/docs/concepts/workloads/controllers/cron-jobs.md
@@ -19,8 +19,6 @@ A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repe
One CronJob object is like one line of a _crontab_ (cron table) file. It runs a job periodically
on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format.
-
-In addition, the CronJob schedule supports timezone handling, you can specify the timezone by adding "CRON_TZ=
@@ -1639,9 +1641,9 @@ Maximum number of seconds between log flushes
@@ -2102,7 +2102,7 @@ If this flag is enabled, admission injected tokens would be extended up to
---service-account-issuer string
+--service-account-issuer strings
@@ -2116,6 +2116,8 @@ comply with the OpenID spec: https://openid.net/specs/openid-connect-discovery-1
In practice, this means that service-account-issuer must be an https URL.
It is also highly recommended that this URL be capable of serving OpenID
discovery documents at {service-account-issuer}/.well-known/openid-configuration.
+When this flag is specified multiple times, the first is used to generate tokens
+and all are used to determine which issuers are accepted.
-->
服务帐号令牌颁发者的标识符。
颁发者将在已颁发令牌的 "iss" 声明中检查此标识符。
@@ -2126,6 +2128,7 @@ ServiceAccountIssuerDiscovery 功能也将保持禁用状态。
实践中,这意味着 service-account-issuer 取值必须是 HTTPS URL。
还强烈建议此 URL 能够在 {service-account-issuer}/.well-known/openid-configuration
处提供 OpenID 发现文档。
+当此值被多次指定时,第一次的值用于生成令牌,所有的值用于确定接受哪些发行人。
@@ -2454,6 +2457,18 @@ If set, the file that will be used to secure the secure port of the API server v
+--tracing-config-file string
+
+包含 API 服务器跟踪配置的文件。
-v, --v int
diff --git a/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md
index 7cb8e9778d..19037bd19b 100644
--- a/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md
+++ b/content/zh/docs/reference/command-line-tools-reference/kube-controller-manager.md
@@ -352,9 +352,10 @@ Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scop
## {{% heading "seealso" %}}
diff --git a/data/i18n/zh/zh.toml b/data/i18n/zh/zh.toml
index dcf32839f8..55ce241bb0 100644
--- a/data/i18n/zh/zh.toml
+++ b/data/i18n/zh/zh.toml
@@ -49,6 +49,9 @@ other = "我是..."
[docs_label_users]
other = "用户"
+[envvars_heading]
+other = "环境变量"
+
[examples_heading]
other = "示例"
From 0810f31204064ea616353c10cdede5fc1d06306f Mon Sep 17 00:00:00 2001
From: Carlos Panato
Date: Tue, 23 Nov 2021 09:20:16 +0100
Subject: [PATCH 51/83] patches: update patch release calendar for December/21
cycle
Signed-off-by: Carlos Panato
---
content/en/releases/patch-releases.md | 4 +++-
data/releases/schedule.yaml | 27 ++++++++++++++++++---------
2 files changed, 21 insertions(+), 10 deletions(-)
diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md
index 644351a833..b53adbc175 100644
--- a/content/en/releases/patch-releases.md
+++ b/content/en/releases/patch-releases.md
@@ -78,7 +78,6 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
-| November 2021 | 2021-11-12 | 2021-11-17 |
| December 2021 | 2021-12-10 | 2021-12-15 |
| January 2022 | 2022-01-14 | 2022-01-19 |
| February 2022 | 2022-02-11 | 2022-02-16 |
@@ -94,6 +93,7 @@ End of Life for **1.22** is **2022-10-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|---------------|----------------------|-------------|------|
+| 1.22.5 | 2021-12-10 | 2021-12-15 | |
| 1.22.4 | 2021-11-12 | 2021-11-17 | |
| 1.22.3 | 2021-10-22 | 2021-10-27 | |
| 1.22.2 | 2021-09-10 | 2021-09-15 | |
@@ -107,6 +107,7 @@ End of Life for **1.21** is **2022-06-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- |
+| 1.21.8 | 2021-12-10 | 2021-12-15 | |
| 1.21.7 | 2021-11-12 | 2021-11-17 | |
| 1.21.6 | 2021-10-22 | 2021-10-27 | |
| 1.21.5 | 2021-09-10 | 2021-09-15 | |
@@ -123,6 +124,7 @@ End of Life for **1.20** is **2022-02-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- |
+| 1.20.14 | 2021-12-10 | 2021-12-15 | |
| 1.20.13 | 2021-11-12 | 2021-11-17 | |
| 1.20.12 | 2021-10-22 | 2021-10-27 | |
| 1.20.11 | 2021-09-10 | 2021-09-15 | |
diff --git a/data/releases/schedule.yaml b/data/releases/schedule.yaml
index 062b2aa92b..172a6e7484 100644
--- a/data/releases/schedule.yaml
+++ b/data/releases/schedule.yaml
@@ -1,10 +1,13 @@
schedules:
- release: 1.22
- next: 1.22.4
- cherryPickDeadline: 2021-11-12
- targetDate: 2021-11-17
+ next: 1.22.5
+ cherryPickDeadline: 2021-12-10
+ targetDate: 2021-12-15
endOfLifeDate: 2022-10-28
previousPatches:
+ - release: 1.22.4
+ cherryPickDeadline: 2021-11-12
+ targetDate: 2021-11-17
- release: 1.22.3
cherryPickDeadline: 2021-10-22
targetDate: 2021-10-27
@@ -15,11 +18,14 @@ schedules:
cherryPickDeadline: 2021-08-16
targetDate: 2021-08-19
- release: 1.21
- next: 1.21.7
- cherryPickDeadline: 2021-11-12
- targetDate: 2021-11-17
+ next: 1.21.8
+ cherryPickDeadline: 2021-12-10
+ targetDate: 2021-12-15
endOfLifeDate: 2022-06-28
previousPatches:
+ - release: 1.21.7
+ cherryPickDeadline: 2021-11-12
+ targetDate: 2021-11-17
- release: 1.21.6
cherryPickDeadline: 2021-10-22
targetDate: 2021-10-27
@@ -40,11 +46,14 @@ schedules:
targetDate: 2021-05-12
note: Regression https://groups.google.com/g/kubernetes-dev/c/KuF8s2zueFs
- release: 1.20
- next: 1.20.13
- cherryPickDeadline: 2021-11-12
- targetDate: 2021-11-17
+ next: 1.20.14
+ cherryPickDeadline: 2021-12-10
+ targetDate: 2021-12-15
endOfLifeDate: 2022-02-28
previousPatches:
+ - release: 1.20.13
+ cherryPickDeadline: 2021-11-12
+ targetDate: 2021-11-17
- release: 1.20.12
cherryPickDeadline: 2021-10-22
targetDate: 2021-10-27
From 349be77566158d111c787b7863518f664f8ba1b8 Mon Sep 17 00:00:00 2001
From: Shubham Kuchhal
Date: Wed, 10 Nov 2021 17:33:40 +0530
Subject: [PATCH 52/83] Improvement for Authorization in Extending Kubernetes
docs.
Improvement: Corrected the link for Authorization.
Fix Typo
---
content/en/docs/concepts/extend-kubernetes/_index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/concepts/extend-kubernetes/_index.md b/content/en/docs/concepts/extend-kubernetes/_index.md
index 825484f50d..f00aecb8e6 100644
--- a/content/en/docs/concepts/extend-kubernetes/_index.md
+++ b/content/en/docs/concepts/extend-kubernetes/_index.md
@@ -145,7 +145,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
### Authorization
-[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
+[Authorization](/docs/reference/access-authn-authz/authorization/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
### Dynamic Admission Control
From 2ff1bffa3d7e3200e724588f21ef7f43618e6091 Mon Sep 17 00:00:00 2001
From: Madhav Budhiraja
Date: Tue, 23 Nov 2021 23:56:41 +0530
Subject: [PATCH 53/83] Remove extra bracket
---
.../windows/intro-windows-in-kubernetes.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index c2160e330f..31b3252330 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -76,7 +76,7 @@ then paging can slow down performance.
You can place bounds on memory use for workloads using the kubelet
parameters `--kubelet-reserve` and/or `--system-reserve`; these account
for memory usage on the node (outside of containers), and reduce
-[NodeAllocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)).
+[NodeAllocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).
As you deploy workloads, set resource limits on containers. This also subtracts from
`NodeAllocatable` and prevents the scheduler from adding more pods once a node is full.
From 8c5c7de4a9e4f1f14a034080474b11a0f5f80770 Mon Sep 17 00:00:00 2001
From: Daniel Weibel
Date: Wed, 24 Nov 2021 04:30:20 +0400
Subject: [PATCH 54/83] Change 'container' to 'Pod' in Jobs documentation
(#30592)
* Fix wording in Jobs documentation
* Apply change in Chinese text
---
content/zh/docs/concepts/workloads/controllers/job.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/zh/docs/concepts/workloads/controllers/job.md b/content/zh/docs/concepts/workloads/controllers/job.md
index 829d9a0599..a438294af6 100644
--- a/content/zh/docs/concepts/workloads/controllers/job.md
+++ b/content/zh/docs/concepts/workloads/controllers/job.md
@@ -441,13 +441,13 @@ other Pods for the Job failing around that time.
当 Job 的 Pod 被删除时,或者 Pod 成功时没有其它 Pod 处于失败状态,失效回退的次数也会被重置(为 0)。
{{< note >}}
-如果你的 Job 的 `restartPolicy` 被设置为 "OnFailure",就要注意运行该 Job 的容器
+如果你的 Job 的 `restartPolicy` 被设置为 "OnFailure",就要注意运行该 Job 的 Pod
会在 Job 到达失效回退次数上限时自动被终止。
这会使得调试 Job 中可执行文件的工作变得非常棘手。
我们建议在调试 Job 时将 `restartPolicy` 设置为 "Never",
From c61eccb64c3472acbb9df20583e4e158d2a1a321 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Wed, 24 Nov 2021 02:41:45 +0200
Subject: [PATCH 55/83] [fr] updated crictl version
---
.../production-environment/tools/kubeadm/install-kubeadm.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/fr/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/fr/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 06d5e5f286..20b44143e2 100644
--- a/content/fr/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/fr/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -245,7 +245,7 @@ sudo mkdir -p $DOWNLOAD_DIR
Installez crictl (requis pour Kubeadm / Kubelet Container Runtime Interface (CRI))
```bash
-CRICTL_VERSION="v1.17.0"
+CRICTL_VERSION="v1.22.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
```
From 972711149b25f9758cb7fb325f1e8dce2449c8cd Mon Sep 17 00:00:00 2001
From: Guangwen Feng
Date: Wed, 24 Nov 2021 09:52:09 +0800
Subject: [PATCH 56/83] [zh] Fix the output of "kubectl get deployment"
Signed-off-by: Guangwen Feng
---
.../zh/docs/reference/kubectl/docker-cli-to-kubectl.md | 4 ++--
.../tasks/administer-cluster/namespaces-walkthrough.md | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
index a5b8789163..e7b1316eb4 100644
--- a/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
+++ b/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
@@ -366,8 +366,8 @@ kubectl:
kubectl get deployment nginx-app
```
```
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-nginx-app 1 1 1 1 2m
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx-app 1/1 1 1 2m
```
```shell
diff --git a/content/zh/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/zh/docs/tasks/administer-cluster/namespaces-walkthrough.md
index 39f3f01718..3d0e911688 100644
--- a/content/zh/docs/tasks/administer-cluster/namespaces-walkthrough.md
+++ b/content/zh/docs/tasks/administer-cluster/namespaces-walkthrough.md
@@ -354,8 +354,8 @@ We have created a deployment whose replica size is 2 that is running the pod cal
kubectl get deployment
```
```
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-snowflake 2 2 2 2 2m
+NAME READY UP-TO-DATE AVAILABLE AGE
+snowflake 2/2 2 2 2m
```
```shell
@@ -402,8 +402,8 @@ kubectl get deployment
```
```
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-cattle 5 5 5 5 10s
+NAME READY UP-TO-DATE AVAILABLE AGE
+cattle 5/5 5 5 10s
```
```shell
From c6c4a709bdee54f7bb3f31f651e08072f28c4b29 Mon Sep 17 00:00:00 2001
From: Guangwen Feng
Date: Wed, 24 Nov 2021 10:26:59 +0800
Subject: [PATCH 57/83] [zh] Update horizontal-pod-autoscale-walkthrough.md
Signed-off-by: Guangwen Feng
---
.../horizontal-pod-autoscale-walkthrough.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index f831b28fab..d11843826c 100644
--- a/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -135,8 +135,8 @@ Now that the server is running, we will create the autoscaler using
The following command will create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods
controlled by the php-apache deployment we created in the first step of these instructions.
Roughly speaking, HPA will increase and decrease the number of replicas
-(via the deployment) to maintain an average CPU utilization across all Pods of 50%
-(since each pod requests 200 milli-cores by `kubectl run`), this means average CPU usage of 100 milli-cores).
+(via the deployment) to maintain an average CPU utilization across all Pods of 50%.
+Since each pod requests 200 milli-cores by `kubectl run`, this means an average CPU usage of 100 milli-cores.
See [here](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details) for more details on the algorithm.
-->
## 创建 Horizontal Pod Autoscaler {#create-horizontal-pod-autoscaler}
@@ -147,8 +147,8 @@ See [here](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-detai
以下命令将创建一个 Horizontal Pod Autoscaler 用于控制我们上一步骤中创建的
Deployment,使 Pod 的副本数量维持在 1 到 10 之间。
大致来说,HPA 将(通过 Deployment)增加或者减少 Pod 副本的数量以保持所有 Pod
-的平均 CPU 利用率在 50% 左右(由于每个 Pod 请求 200 毫核的 CPU,这意味着平均
-CPU 用量为 100 毫核)。
+的平均 CPU 利用率在 50% 左右。由于每个 Pod 请求 200 毫核的 CPU,这意味着平均
+CPU 用量为 100 毫核。
算法的详情请参阅[相关文档](/zh/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)。
```shell
From 4edccb8044bec6511ed420275905091034dc5e4f Mon Sep 17 00:00:00 2001
From: Guangwen Feng
Date: Wed, 24 Nov 2021 11:13:46 +0800
Subject: [PATCH 58/83] [zh] Update hello-minikube.md
Signed-off-by: Guangwen Feng
---
content/zh/docs/tutorials/hello-minikube.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/content/zh/docs/tutorials/hello-minikube.md b/content/zh/docs/tutorials/hello-minikube.md
index daa7532f40..445f806c33 100644
--- a/content/zh/docs/tutorials/hello-minikube.md
+++ b/content/zh/docs/tutorials/hello-minikube.md
@@ -119,7 +119,7 @@ By default, the dashboard is only accessible from within the internal Kubernetes
The `dashboard` command creates a temporary proxy to make the dashboard accessible from outside the Kubernetes virtual network.
To stop the proxy, run `Ctrl+C` to exit the process.
-After the command exits, the dashboard remains running in Kubernetes cluster.
+After the command exits, the dashboard remains running in the Kubernetes cluster.
You can run the `dashboard` command again to create another proxy to access the dashboard.
-->
{{< note >}}
@@ -144,9 +144,9 @@ You can run the `dashboard` command again to create another proxy to access the
## 使用 URL 打开仪表板
-如果你不想打开 Web 浏览器,请使用 url 标志运行显示板命令以得到 URL:
+如果你不想打开 Web 浏览器,请使用 `--url` 标志运行显示板命令以得到 URL:
```shell
minikube dashboard --url
@@ -330,7 +330,7 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/).
4. 仅限 Katacoda 环境:单击加号,然后单击 **选择要在主机 1 上查看的端口**。
From fddb61757e3002d8527bde4195ea556564663e1e Mon Sep 17 00:00:00 2001
From: Vitthal Sai
Date: Wed, 24 Nov 2021 16:36:37 +0530
Subject: [PATCH 59/83] Added content for better phrasing as suggested
---
content/en/docs/reference/tools/_index.md | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/content/en/docs/reference/tools/_index.md b/content/en/docs/reference/tools/_index.md
index 603e44d98b..f582f992f5 100644
--- a/content/en/docs/reference/tools/_index.md
+++ b/content/en/docs/reference/tools/_index.md
@@ -52,14 +52,16 @@ Use Kompose to:
## Kui
-[`Kui`](https://github.com/kubernetes-sigs/kui) is a tool for enhancing CLIs with Graphics.
+[`Kui`](https://github.com/kubernetes-sigs/kui) is a GUI tool that takes your normal
+`kubectl` command line requests and responds with graphics.
Kui takes the normal `kubectl` command line requests and responds with graphics. Instead
-of ASCII tables `Sortable tables` are presented as the output.
+of ASCII tables, Kui provides a GUI rendering with tables that you can sort.
-Use Kui to:
+Kui lets you:
-* Click the long auto-generated resource names instead of copying and pasting it
-* Process `kubectl` commands 2-3 faster than `kubectl` itself
-* See waterfall diagrams of your jobs by executing `k get jobs`
-* Browse through resources in a tabbed UI with help of a click
\ No newline at end of file
+* Directly click on long, auto-generated resource names instead of copying and pasting
+* Type in `kubectl` commands and see them execute, even sometimes faster than `kubectl` itself
+* Query a {{< glossary_tooltip text="Job" term_id="job">}} and see its execution rendered
+ as a waterfall diagram
+* Click through resources in your cluster using a tabbed UI
\ No newline at end of file
From bfb6fd44d0b849efa57f896ac987906037224213 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Wed, 24 Nov 2021 23:27:07 +0000
Subject: [PATCH 60/83] Update article publication date
---
.../container-memory-high.svg | 0
.../container-memory-min.svg | 0
.../index.md | 2 +-
.../memory-qos-cal.svg | 0
.../node-memory-min.svg | 0
.../pod-memory-min.svg | 0
6 files changed, 1 insertion(+), 1 deletion(-)
rename content/en/blog/_posts/{2021-09-06-memory-qos-cgroups-v2 => 2021-11-26-memory-qos-cgroups-v2}/container-memory-high.svg (100%)
rename content/en/blog/_posts/{2021-09-06-memory-qos-cgroups-v2 => 2021-11-26-memory-qos-cgroups-v2}/container-memory-min.svg (100%)
rename content/en/blog/_posts/{2021-09-06-memory-qos-cgroups-v2 => 2021-11-26-memory-qos-cgroups-v2}/index.md (99%)
rename content/en/blog/_posts/{2021-09-06-memory-qos-cgroups-v2 => 2021-11-26-memory-qos-cgroups-v2}/memory-qos-cal.svg (100%)
rename content/en/blog/_posts/{2021-09-06-memory-qos-cgroups-v2 => 2021-11-26-memory-qos-cgroups-v2}/node-memory-min.svg (100%)
rename content/en/blog/_posts/{2021-09-06-memory-qos-cgroups-v2 => 2021-11-26-memory-qos-cgroups-v2}/pod-memory-min.svg (100%)
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/container-memory-high.svg b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/container-memory-high.svg
similarity index 100%
rename from content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/container-memory-high.svg
rename to content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/container-memory-high.svg
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/container-memory-min.svg b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/container-memory-min.svg
similarity index 100%
rename from content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/container-memory-min.svg
rename to content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/container-memory-min.svg
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/index.md b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
similarity index 99%
rename from content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/index.md
rename to content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
index 6ce9910f99..86995d3337 100644
--- a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/index.md
+++ b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
@@ -1,7 +1,7 @@
---
layout: blog
title: 'Quality-of-Service for Memory Resources'
-date: 2021-09-01
+date: 2021-11-26
slug: qos-memory-resources
---
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/memory-qos-cal.svg b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/memory-qos-cal.svg
similarity index 100%
rename from content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/memory-qos-cal.svg
rename to content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/memory-qos-cal.svg
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/node-memory-min.svg b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/node-memory-min.svg
similarity index 100%
rename from content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/node-memory-min.svg
rename to content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/node-memory-min.svg
diff --git a/content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/pod-memory-min.svg b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/pod-memory-min.svg
similarity index 100%
rename from content/en/blog/_posts/2021-09-06-memory-qos-cgroups-v2/pod-memory-min.svg
rename to content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/pod-memory-min.svg
From f71bc6d90a4de4769c96226551135291b8400430 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Wed, 24 Nov 2021 23:28:06 +0000
Subject: [PATCH 61/83] Fix punctuation nit
---
.../en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
index 86995d3337..51f60cd0b4 100644
--- a/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
+++ b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
@@ -99,7 +99,7 @@ Here are the prerequisites for enabling Memory QoS on your Linux node; some of t
OCI runtimes such as runc and crun already support cgroups v2 [`Unified`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#unified), and Kubernetes CRI has also made the desired changes to support passing [`Unified`](https://github.com/kubernetes/kubernetes/pull/102578). However, CRI Runtime support is required as well. Memory QoS in Alpha phase is designed to support containerd and cri-o. Related PR [Feature: containerd-cri support LinuxContainerResources.Unified #5627](https://github.com/containerd/containerd/pull/5627) has been merged and will be released in containerd 1.6. CRI-O [implement kube alpha features for 1.22 #5207](https://github.com/cri-o/cri-o/pull/5207) is still a work in progress.
-With those prerequisites met, you can enable the memory QoS feature gate (see [Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/).
+With those prerequisites met, you can enable the memory QoS feature gate (see [Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)).
## How can I learn more?
From 1f34679e9f7feb486a66763495d16fa82f89b64d Mon Sep 17 00:00:00 2001
From: Vitthal Sai
Date: Thu, 25 Nov 2021 12:29:18 +0530
Subject: [PATCH 62/83] Correcting the docs for label
kubernetes.io/metadata.name
---
content/en/docs/reference/labels-annotations-taints.md | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/content/en/docs/reference/labels-annotations-taints.md b/content/en/docs/reference/labels-annotations-taints.md
index 07e6d19426..25decbed37 100644
--- a/content/en/docs/reference/labels-annotations-taints.md
+++ b/content/en/docs/reference/labels-annotations-taints.md
@@ -36,10 +36,9 @@ Example: `kubernetes.io/metadata.name=mynamespace`
Used on: Namespaces
-When the `NamespaceDefaultLabelName`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled,
-the Kubernetes API server sets this label on all namespaces. The label value is set to
-the name of the namespace.
+The [`Control Plane`](https://kubernetes.io/docs/concepts/overview/components/)
+sets this label on all namespaces. It is **mandatory** to `set a label`
+on all the namespaces. The label value is set to the name of the namespace.
This is useful if you want to target a specific namespace with a label
{{< glossary_tooltip text="selector" term_id="selector" >}}.
From 82c3a0785d940fbd82fc5a07a14b1193cf985451 Mon Sep 17 00:00:00 2001
From: Vitthal Sai
Date: Thu, 25 Nov 2021 14:47:29 +0530
Subject: [PATCH 63/83] Incorporated the suggested feedback
---
content/en/docs/reference/labels-annotations-taints.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/reference/labels-annotations-taints.md b/content/en/docs/reference/labels-annotations-taints.md
index 25decbed37..75d84c700f 100644
--- a/content/en/docs/reference/labels-annotations-taints.md
+++ b/content/en/docs/reference/labels-annotations-taints.md
@@ -36,9 +36,9 @@ Example: `kubernetes.io/metadata.name=mynamespace`
Used on: Namespaces
-The [`Control Plane`](https://kubernetes.io/docs/concepts/overview/components/)
-sets this label on all namespaces. It is **mandatory** to `set a label`
-on all the namespaces. The label value is set to the name of the namespace.
+The Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
+sets this label on all namespaces. The label value is set
+to the name of the namespace. You can't change this label's value.
This is useful if you want to target a specific namespace with a label
{{< glossary_tooltip text="selector" term_id="selector" >}}.
From fbe2194d5ba8ac2006cb87c3ec5833210df42bb8 Mon Sep 17 00:00:00 2001
From: Robot Jelly <36916536+robotjellyzone@users.noreply.github.com>
Date: Thu, 25 Nov 2021 14:56:33 +0530
Subject: [PATCH 64/83] added line separation in kubectl page (#30359)
* added line separation in kubectl page
added line separation in kubectl page - 2
added line separation in kubectl page - 3 added space at ends
final changes
added spaces at end
* removed Line-549 from end of the file
* added Line-548 at the end of file
---
content/en/docs/reference/kubectl/overview.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md
index 72a60f96a2..53d3305ed9 100644
--- a/content/en/docs/reference/kubectl/overview.md
+++ b/content/en/docs/reference/kubectl/overview.md
@@ -91,6 +91,7 @@ If:
* the `KUBERNETES_SERVICE_HOST` environment variable is set, and
* the `KUBERNETES_SERVICE_PORT` environment variable is set, and
* you don't explicitly specify a namespace on the kubectl command line
+
then kubectl assumes it is running in your cluster. The kubectl tool looks up the
namespace of that ServiceAccount (this is the same as the namespace of the Pod)
and acts against that namespace. This is different from what happens outside of a
@@ -545,4 +546,3 @@ Current user: plugins-user
* To find out more about plugins, take a look at the [example cli plugin](https://github.com/kubernetes/sample-cli-plugin).
-
From af2a60b57ffa2b30a7a39e15be6b351dfee4bcf2 Mon Sep 17 00:00:00 2001
From: 243f6a88 85a308d3
<33170174+243f6a8885a308d313198a2e037@users.noreply.github.com>
Date: Thu, 25 Nov 2021 09:41:05 +0900
Subject: [PATCH 65/83] fix: typo in health-checks.md
---
content/en/docs/reference/using-api/health-checks.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/content/en/docs/reference/using-api/health-checks.md b/content/en/docs/reference/using-api/health-checks.md
index e4ce50f30d..b13793a07b 100644
--- a/content/en/docs/reference/using-api/health-checks.md
+++ b/content/en/docs/reference/using-api/health-checks.md
@@ -19,12 +19,12 @@ The `healthz` endpoint is deprecated (since Kubernetes v1.16), and you should us
The `livez` endpoint can be used with the `--livez-grace-period` [flag](/docs/reference/command-line-tools-reference/kube-apiserver) to specify the startup duration.
For a graceful shutdown you can specify the `--shutdown-delay-duration` [flag](/docs/reference/command-line-tools-reference/kube-apiserver) with the `/readyz` endpoint.
Machines that check the `healthz`/`livez`/`readyz` of the API server should rely on the HTTP status code.
-A status code `200` indicates the API server is `healthy`/`live`/`ready`, depending of the called endpoint.
-The more verbose options shown below are intended to be used by human operators to debug their cluster or specially the state of the API server.
+A status code `200` indicates the API server is `healthy`/`live`/`ready`, depending on the called endpoint.
+The more verbose options shown below are intended to be used by human operators to debug their cluster or understand the state of the API server.
The following examples will show how you can interact with the health API endpoints.
-For all endpoints you can use the `verbose` parameter to print out the checks and their status.
+For all endpoints, you can use the `verbose` parameter to print out the checks and their status.
This can be useful for a human operator to debug the current status of the API server; it is not intended to be consumed by a machine:
```shell
@@ -93,7 +93,7 @@ The output shows that the `etcd` check is excluded:
{{< feature-state state="alpha" >}}
-Each individual health check exposes an HTTP endpoint and could can be checked individually.
+Each individual health check exposes an HTTP endpoint and can be checked individually.
The schema for the individual health checks is `/livez/<healthcheck_name>`, where `livez` and `readyz` can be used to indicate whether you want to check the liveness or the readiness of the API server.
The `<healthcheck_name>` path can be discovered using the `verbose` flag from above; take the path shown between `[+]` and `ok`.
These individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system:
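
For instance, you could probe a single check directly (a sketch; `etcd` is one of the check names that the `verbose` output lists):

```shell
kubectl get --raw='/livez/etcd'
# ok
```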
From 2ae0496450376cf0c4a5df433e3a70ecf8086c1d Mon Sep 17 00:00:00 2001
From: Vitthal Sai
Date: Thu, 25 Nov 2021 23:27:12 +0530
Subject: [PATCH 66/83] Added mention of the K8s API Server
---
content/en/docs/reference/labels-annotations-taints.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/labels-annotations-taints.md b/content/en/docs/reference/labels-annotations-taints.md
index 75d84c700f..3e998c6afc 100644
--- a/content/en/docs/reference/labels-annotations-taints.md
+++ b/content/en/docs/reference/labels-annotations-taints.md
@@ -36,7 +36,7 @@ Example: `kubernetes.io/metadata.name=mynamespace`
Used on: Namespaces
-The Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
+The Kubernetes API server (part of the {{< glossary_tooltip text="control plane" term_id="control-plane" >}})
sets this label on all namespaces. The label value is set
to the name of the namespace. You can't change this label's value.
From bd68246227397bcbc11ca28cd3bafffbd5f31a74 Mon Sep 17 00:00:00 2001
From: xzyangNB
Date: Fri, 26 Nov 2021 14:10:03 +0800
Subject: [PATCH 67/83] Delete redundant ‘。’ in /zh/docs/setup/_index.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
content/zh/docs/setup/_index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/zh/docs/setup/_index.md b/content/zh/docs/setup/_index.md
index 27b2487087..5d9665b942 100644
--- a/content/zh/docs/setup/_index.md
+++ b/content/zh/docs/setup/_index.md
@@ -55,7 +55,7 @@ If you don't want to manage a Kubernetes cluster yourself, you could pick a mana
[certified platforms](/docs/setup/production-environment/turnkey-solutions/).
There are also other standardized and custom solutions across a wide range of cloud and
bare metal environments.
--->。
+-->
可以[下载 Kubernetes](/releases/download/),在本地机器、云或你自己的数据中心上部署 Kubernetes 集群。
如果你不想自己管理 Kubernetes 集群,则可以选择托管服务,包括[经过认证的平台](/zh/docs/setup/production-environment/turnkey-solutions/)。
在各种云和裸机环境中,还有其他标准化和定制的解决方案。
From f183a65cfea71d8fd63433c98b4f31dcde6daa73 Mon Sep 17 00:00:00 2001
From: xiaoxubeii
Date: Fri, 26 Nov 2021 15:41:24 +0800
Subject: [PATCH 68/83] Fix containerd version typo
---
.../en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
index 51f60cd0b4..3085aecda2 100644
--- a/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
+++ b/content/en/blog/_posts/2021-11-26-memory-qos-cgroups-v2/index.md
@@ -93,7 +93,7 @@ This can help improve stability when pod memory usage increases, ensuring th
Here are the prerequisites for enabling Memory QoS on your Linux node; some of these are related to [Kubernetes support for cgroup v2](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2254-cgroup-v2).
1. Kubernetes since v1.22
-2. [runc](https://github.com/opencontainers/runc) since v1.0.0-rc93; [containerd](https://containerd.io/) since 1.14; [cri-o](https://cri-o.io/) since 1.20
+2. [runc](https://github.com/opencontainers/runc) since v1.0.0-rc93; [containerd](https://containerd.io/) since 1.4; [cri-o](https://cri-o.io/) since 1.20
3. Linux kernel minimum version: 4.15, recommended version: 5.2+
4. A Linux image with cgroup v2 enabled, or cgroup v2 enabled manually via `unified_cgroup_hierarchy` (see the sketch below)
From 5799a8cd48707968e8a2e5e7cf12982b56656b18 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Sat, 16 Oct 2021 13:22:28 +0100
Subject: [PATCH 69/83] Add immediate pod deletion to cheat sheet
---
content/en/docs/reference/kubectl/cheatsheet.md | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index 511d8c9c7d..5ea50e9ca6 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -289,10 +289,11 @@ kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale multip
## Deleting resources
```bash
-kubectl delete -f ./pod.json # Delete a pod using the type and name specified in pod.json
-kubectl delete pod,service baz foo # Delete pods and services with same names "baz" and "foo"
-kubectl delete pods,services -l name=myLabel # Delete pods and services with label name=myLabel
-kubectl -n my-ns delete pod,svc --all # Delete all pods and services in namespace my-ns,
+kubectl delete -f ./pod.json # Delete a pod using the type and name specified in pod.json
+kubectl delete pod unwanted --now # Delete a pod with no grace period
+kubectl delete pod,service baz foo # Delete pods and services with same names "baz" and "foo"
+kubectl delete pods,services -l name=myLabel # Delete pods and services with label name=myLabel
+kubectl -n my-ns delete pod,svc --all # Delete all pods and services in namespace my-ns,
# Delete all pods matching the awk pattern1 or pattern2
kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs kubectl delete -n mynamespace pod
```
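
For pods stuck terminating, a forced variant exists (a sketch; it skips graceful shutdown and should be a last resort):

```bash
kubectl delete pod stuck-pod --grace-period=0 --force
```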
From 2c6a411fde37eb4baa0546ed37a7ef4067ebd3a4 Mon Sep 17 00:00:00 2001
From: halfC <1524398873@qq.com>
Date: Sun, 28 Nov 2021 20:23:21 +0800
Subject: [PATCH 70/83] [zh] typo Update images.md (#30545)
* Update images.md
typo
* sync en docs
* [zh] translate images.md
[zh] translate interpretation of config.json images.md
* [zh] modify the wording in images.md
* [zh] remove origin english in images.md
* [zh] del extra # in images.md
* [zh] del origin english in images.md
---
content/zh/docs/concepts/containers/images.md | 112 +++++++++++++++++-
1 file changed, 111 insertions(+), 1 deletion(-)
diff --git a/content/zh/docs/concepts/containers/images.md b/content/zh/docs/concepts/containers/images.md
index 1360faab32..b782d01833 100644
--- a/content/zh/docs/concepts/containers/images.md
+++ b/content/zh/docs/concepts/containers/images.md
@@ -68,7 +68,7 @@ There are additional rules about where you can place the separator
characters (`_`, `-`, and `.`) inside an image tag.
If you don't specify a tag, Kubernetes assumes you mean the tag `latest`.
-->
-镜像标签可以包含小写字母、大写字符、数字、下划线(`_`)、句点(`.`)和连字符(`-`)。
+镜像标签可以包含小写字母、大写字母、数字、下划线(`_`)、句点(`.`)和连字符(`-`)。
关于在镜像标签中何处可以使用分隔字符(`_`、`-` 和 `.`)还有一些额外的规则。
如果你不指定标签,Kubernetes 认为你想使用标签 `latest`。
@@ -517,6 +517,116 @@ registry keys are added to the `.docker/config.json`.
在 `.docker/config.json` 中配置了私有仓库密钥后,所有 Pod 都将能读取私有仓库中的镜像。
+
+### config.json 说明 {#config-json}
+
+
+对于 `config.json` 的解释在原始 Docker 实现和 Kubernetes 的解释之间有所不同。
+在 Docker 中,`auths` 键只能指定根 URL ,而 Kubernetes 允许 glob URLs 以及
+前缀匹配的路径。这意味着,像这样的 `config.json` 是有效的:
+```json
+{
+ "auths": {
+ "*my-registry.io/images": {
+ "auth": "…"
+ }
+ }
+}
+```
+
+
+使用以下语法匹配根 URL (`*my-registry.io`):
+```
+pattern:
+ { term }
+
+term:
+ '*' 匹配任何无分隔符字符序列
+ '?' 匹配任意单个非分隔符
+ '[' [ '^' ] 字符范围
+ 字符集(必须非空)
+ c 匹配字符 c (c 不为 '*','?','\\','[')
+ '\\' c 匹配字符 c
+
+字符范围:
+ c 匹配字符 c (c 不为 '\\','?','-',']')
+ '\\' c 匹配字符 c
+ lo '-' hi 匹配字符范围在 lo 到 hi 之间字符
+```
+
+
+现在镜像拉取操作会将每种有效模式的凭据都传递给 CRI 容器运行时。例如下面的容器镜像名称会匹配成功:
+
+- `my-registry.io/images`
+- `my-registry.io/images/my-image`
+- `my-registry.io/images/another-image`
+- `sub.my-registry.io/images/my-image`
+- `a.sub.my-registry.io/images/my-image`
+
+
+kubelet 为每个找到的凭证的镜像按顺序拉取。 这意味着在 `config.json` 中可能有多项:
+
+```json
+{
+ "auths": {
+ "my-registry.io/images": {
+ "auth": "…"
+ },
+ "my-registry.io/images/subpath": {
+ "auth": "…"
+ }
+ }
+}
+```
+
+
+如果一个容器指定了要拉取的镜像 `my-registry.io/images/subpath/my-image`,
+并且其中一个失败,kubelet 将尝试从另一个身份验证源下载镜像。
+
From 3f9f20178be52668a27435a1476931ba6c41286e Mon Sep 17 00:00:00 2001
From: Arhell
Date: Mon, 29 Nov 2021 00:28:59 +0200
Subject: [PATCH 71/83] [pt-br] updated crictl version
---
.../production-environment/tools/kubeadm/install-kubeadm.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index a281d8369f..85c0bff925 100644
--- a/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -213,7 +213,7 @@ sudo mkdir -p $DOWNLOAD_DIR
Instale o crictl (utilizado pelo kubeadm e pela Interface do Agente de execução do Kubelet (CRI))
```bash
-CRICTL_VERSION="v1.17.0"
+CRICTL_VERSION="v1.22.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
```
From edc96615a606e9522ca14e809698a378c502ef88 Mon Sep 17 00:00:00 2001
From: ashish-jaiswar
Date: Mon, 29 Nov 2021 10:11:47 +0530
Subject: [PATCH 72/83] updated setup-konnectivity.md
---
content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md b/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md
index 81cba07cc7..c9923ea1a1 100644
--- a/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md
+++ b/content/en/docs/tasks/extend-kubernetes/setup-konnectivity.md
@@ -11,7 +11,8 @@ communication.
## {{% heading "prerequisites" %}}
-{{< include "task-tutorial-prereqs.md" >}}
+You need to have a Kubernetes cluster, and the kubectl command-line tool must
+be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using [minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/).
From 06abbb5e1b17f3c995b0da12ac8ca50ecd3125ce Mon Sep 17 00:00:00 2001
From: Guangwen Feng
Date: Mon, 29 Nov 2021 13:46:35 +0800
Subject: [PATCH 73/83] [zh] Update install-kubectl-windows.md
Signed-off-by: Guangwen Feng
---
content/zh/docs/tasks/tools/install-kubectl-windows.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/zh/docs/tasks/tools/install-kubectl-windows.md b/content/zh/docs/tasks/tools/install-kubectl-windows.md
index 45f3be3328..3a59cf1185 100644
--- a/content/zh/docs/tasks/tools/install-kubectl-windows.md
+++ b/content/zh/docs/tasks/tools/install-kubectl-windows.md
@@ -22,13 +22,13 @@ card:
## {{% heading "prerequisites" %}}
The difference between the kubectl version and the cluster version must be within one minor version.
-For example: a v{{< skew latestVersion >}} client can communicate with v{{< skew prevMinorVersion >}},
-v{{< skew latestVersion >}}, and v{{< skew nextMinorVersion >}} control planes.
-Using the latest version of kubectl helps avoid unforeseen issues.
+For example: a v{{< skew currentVersion >}} client can communicate with v{{< skew currentVersionAddMinor -1 >}},
+v{{< skew currentVersionAddMinor 0 >}}, and v{{< skew currentVersionAddMinor 1 >}} control planes.
+Using the latest compatible version of kubectl helps avoid unforeseen issues.
The difference between the kubectl version and the cluster must be within one minor version.
-For example: a v{{< skew latestVersion >}} client can communicate with v{{< skew prevMinorVersion >}},
-v{{< skew latestVersion >}}, and v{{< skew nextMinorVersion >}} control planes.
-Using the latest version of kubectl helps avoid unforeseen issues.
+For example: a v{{< skew currentVersion >}} client can communicate with v{{< skew currentVersionAddMinor -1 >}},
+v{{< skew currentVersionAddMinor 0 >}}, and v{{< skew currentVersionAddMinor 1 >}} control planes.
+Using the latest compatible version of kubectl helps avoid unforeseen issues.
The difference between the kubectl version and the cluster version must be within one minor version.
-For example: a v{{< skew latestVersion >}} client can communicate with v{{< skew prevMinorVersion >}},
-v{{< skew latestVersion >}}, and v{{< skew nextMinorVersion >}} control planes.
-Using the latest version of kubectl helps avoid unforeseen issues.
+For example: a v{{< skew currentVersion >}} client can communicate with v{{< skew currentVersionAddMinor -1 >}},
+v{{< skew currentVersionAddMinor 0 >}}, and v{{< skew currentVersionAddMinor 1 >}} control planes.
+Using the latest compatible version of kubectl helps avoid unforeseen issues.
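The skew rule these paragraphs describe can be checked directly from a workstation. A small sketch, assuming kubectl of this era where the `--short` flag was still available:
```bash
# Compare client and control plane versions; per the skew policy above,
# the minor versions should differ by at most one.
kubectl version --short
# Example output:
#   Client Version: v1.22.4
#   Server Version: v1.22.2
```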
From 2f4e2f56d0d35c299e1f14dc189141e65020f22a Mon Sep 17 00:00:00 2001
From: Jihoon Seo <46767780+jihoon-seo@users.noreply.github.com>
Date: Tue, 30 Nov 2021 20:34:56 +0900
Subject: [PATCH 81/83] Fix wrong /security redirection (#30684)
* Fix wrong redirections
* Add/Modify redirection rules
* Use a 302 redirect for shortened URL
Co-authored-by: Tim Bannister
---
content/ko/docs/reference/issues-security/security.md | 2 +-
static/_redirects | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/content/ko/docs/reference/issues-security/security.md b/content/ko/docs/reference/issues-security/security.md
index fea97697e2..765a0c7665 100644
--- a/content/ko/docs/reference/issues-security/security.md
+++ b/content/ko/docs/reference/issues-security/security.md
@@ -1,6 +1,6 @@
---
title: Kubernetes Security and Disclosure Information
-aliases: [/security/]
+aliases: [/ko/security/]
diff --git a/static/_redirects b/static/_redirects
index c9a1af9be7..780597c901 100644
--- a/static/_redirects
+++ b/static/_redirects
@@ -227,7 +227,8 @@
/docs/reference/kubernetes-api/labels-annotations-taints/ /docs/reference/labels-annotations-taints/ 301
-/docs/reporting-security-issues/ /security/ 301
+/docs/reporting-security-issues/ /docs/reference/issues-security/security/ 301
+/security/ /docs/reference/issues-security/security/ 302
/docs/roadmap/ https://github.com/kubernetes/kubernetes/milestones/ 301
/docs/samples/ /docs/tutorials/ 301
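Netlify `_redirects` rules like the ones above are easy to spot-check once deployed. A hedged example of verifying the new 302 with curl (shown against the production site; a deploy-preview URL would work the same way, and the exact status line and header casing may vary):
```bash
# Inspect only the status line and Location header for the shortened URL;
# no body is fetched and the redirect is not followed.
curl -sI https://kubernetes.io/security/ | grep -iE '^(HTTP|location)'
# Expected, approximately:
#   HTTP/2 302
#   location: /docs/reference/issues-security/security/
```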
From 0693d2b240d3d5790ec42eb5e0e27f51c77db5c5 Mon Sep 17 00:00:00 2001
From: Wang
Date: Wed, 1 Dec 2021 12:05:17 +0900
Subject: [PATCH 82/83] [zh] Update custom-resources.md (#30408)
* Update custom-resources.md
* Update custom-resources.md
---
.../api-extension/custom-resources.md | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/content/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
index 65bf222ff9..307a31d304 100644
--- a/content/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
+++ b/content/zh/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
@@ -316,9 +316,10 @@ CRDs make it unnecessary for you to write your own API server to handle custom resources, but
Usually, each resource in the Kubernetes API requires code that handles REST requests and manages persistent storage of objects. The main Kubernetes API server handles built-in resources like *pods* and *services*, and can also generically handle custom resources through [CRDs](#customresourcedefinitions).
The [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) allows you to provide specialized
-implementations for your custom resources by writing and deploying your own standalone API server.
-The main API server delegates requests to you for the custom resources that you handle,
+implementations for your custom resources by writing and deploying your own API server.
+The main API server delegates requests to your API server for the custom resources that you handle,
making them available to all of its clients.
+
-->
## API server aggregation {#api-server-aggregation}
@@ -327,9 +328,9 @@ The main Kubernetes API server can handle built-in resources such as *pods* and *services*, and can
also handle custom resources generically through [CRD](#customresourcedefinitions).
The [aggregation layer](/zh/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
-allows you to provide specialized implementations for your custom resources by writing and deploying your own standalone API server.
-The main API server delegates all requests for the custom resources that you handle to you, while making these resources
-available to all of its customers.
+allows you to provide specialized implementations for your custom resources by writing and deploying your own API server.
+The main API server delegates all requests for the custom resources that you handle to your own API server, while making these resources
+available to all of its clients.
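The delegation described in this passage is wired up through an APIService object, which tells the main API server which group/version to hand off to the extension server. A minimal sketch, in which the `api.example.com` group, the Service name, and the namespace are all hypothetical:
```bash
# Register a hypothetical extension API server with the aggregation layer.
# Group, service name, and namespace below are made up for illustration.
kubectl apply -f - <<EOF
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.api.example.com
spec:
  group: api.example.com
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  insecureSkipTLSVerify: true   # demo only; use caBundle in production
  service:
    name: example-apiserver     # Service fronting the extension API server
    namespace: example-system
EOF
```
Once such an object exists, requests to `/apis/api.example.com/v1alpha1/...` on the main API server are proxied to the named Service, which is exactly the delegation the corrected wording above describes.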