Merge remote-tracking branch 'upstream/main' into dev-1.24
commit 22643121ff
|
@ -162,6 +162,7 @@ aliases:
|
|||
# hanjiayao
|
||||
- howieyuen
|
||||
# lichuqiang
|
||||
- mengjiao-liu
|
||||
- SataQiu
|
||||
- tanjunchen
|
||||
- tengqm
@ -123,7 +123,7 @@ Hugo is shipped in two set of binaries for technical reasons. The current websit
|
|||
|
||||
由于技术原因,Hugo 会发布两套二进制文件。
|
||||
当前网站仅基于 **Hugo Extended** 版本运行。
|
||||
在 [发布页面](https://github.com/gohugoio/hugo/releases) 中查找名称为 `extended` 的归档。可以运行 `huge version` 查看是否有单词 `extended` 来确认。
|
||||
在 [发布页面](https://github.com/gohugoio/hugo/releases) 中查找名称为 `extended` 的归档。可以运行 `hugo version` 查看是否有单词 `extended` 来确认。
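
下面是一个最小的检查示例(假定 Hugo 已经安装并位于 PATH 中):

```shell
# 示意:确认本地安装的是 Hugo Extended 版本
hugo version
# 若输出中包含 "extended" 字样(例如 hugo v0.87.0+extended linux/amd64 ...),即为所需版本
```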
|
||||
|
||||
<!--
|
||||
### Troubleshooting macOS for too many open files
|
@ -15,7 +15,6 @@ If the data you want to store are confidential, use a
|
|||
or use additional (third party) tools to keep your data private.
|
||||
{{< /caution >}}
|
||||
|
||||
|
||||
<!-- body -->
|
||||
## Motivation
|
||||
|
||||
|
@ -282,5 +281,3 @@ to the deleted ConfigMap, it is recommended to recreate these pods.
|
|||
* Read [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/).
|
||||
* Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for
|
||||
separating code from configuration.
@ -4,7 +4,7 @@ reviewers:
|
|||
- mikedanese
|
||||
title: What is Kubernetes?
|
||||
description: >
|
||||
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
|
||||
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
|
||||
content_type: concept
|
||||
weight: 10
|
||||
card:
|
||||
|
@ -19,7 +19,7 @@ This page is an overview of Kubernetes.
|
|||
|
||||
|
||||
<!-- body -->
|
||||
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
|
||||
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
|
||||
|
||||
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines [over 15 years of Google's experience](/blog/2015/04/borg-predecessor-to-kubernetes/) running production workloads at scale with best-of-breed ideas and practices from the community.
@ -158,9 +158,9 @@ Each port definition can have the same `protocol`, or a different one.
|
|||
|
||||
### Services without selectors
|
||||
|
||||
Services most commonly abstract access to Kubernetes Pods, but they can also
|
||||
abstract other kinds of backends.
|
||||
For example:
|
||||
Services most commonly abstract access to Kubernetes Pods thanks to the selector,
|
||||
but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends,
|
||||
including ones that run outside the cluster. For example:
|
||||
|
||||
* You want to have an external database cluster in production, but in your
|
||||
test environment you use your own databases.
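
As a minimal sketch of the pattern described above (the names and IP address below are hypothetical), a selector-less Service can be paired with a manually managed Endpoints object:

```shell
# Hypothetical sketch: a Service with no selector, backed by a manually
# managed Endpoints object that points at an external database.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: external-db          # hypothetical name
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10       # documentation/example IP
    ports:
      - port: 5432
EOF
```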
@ -450,6 +450,7 @@ different Kubernetes components.
|
|||
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
|
||||
| `ServerSideApply` | `true` | Beta | 1.16 | 1.21 |
|
||||
| `ServerSideApply` | `true` | GA | 1.22 | - |
|
||||
| `ServerSideFieldValidation` | `false` | Alpha | 1.23 | - |
|
||||
| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | 1.19 |
|
||||
| `ServiceAccountIssuerDiscovery` | `true` | Beta | 1.20 | 1.20 |
|
||||
| `ServiceAccountIssuerDiscovery` | `true` | GA | 1.21 | - |
|
||||
|
@ -1088,6 +1089,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
cache to accelerate list operations.
|
||||
- `ServerSideApply`: Enables the [Server Side Apply (SSA)](/docs/reference/using-api/server-side-apply/)
|
||||
feature on the API Server.
|
||||
- `ServerSideFieldValidation`: Enables server-side field validation. This means the validation
|
||||
of resource schema is performed at the API server side rather than the client side
|
||||
(for example, the `kubectl create` or `kubectl apply` command line).
|
||||
- `ServiceAccountIssuerDiscovery`: Enable OIDC discovery endpoints (issuer and
|
||||
JWKS URLs) for the service account issuer in the API server. See
|
||||
[Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: shuffle sharding
|
||||
title: Shuffle-sharding
|
||||
id: shuffle-sharding
|
||||
date: 2020-03-04
|
||||
full_link:
|
||||
|
@ -18,28 +18,28 @@ We are often concerned with insulating different flows of requests
|
|||
from each other, so that a high-intensity flow does not crowd out low-intensity flows.
|
||||
A simple way to put requests into queues is to hash some
|
||||
characteristics of the request, modulo the number of queues, to get
|
||||
the index of the queue to use. The hash function uses as input
|
||||
characteristics of the request that align with flows. For example, in
|
||||
the index of the queue to use. The hash function uses as input
|
||||
characteristics of the request that align with flows. For example, in
|
||||
the Internet this is often the 5-tuple of source and destination
|
||||
address, protocol, and source and destination port.
|
||||
|
||||
That simple hash-based scheme has the property that any high-intensity flow
|
||||
will crowd out all the low-intensity flows that hash to the same queue.
|
||||
Providing good insulation for a large number of flows requires a large
|
||||
number of queues, which is problematic. Shuffle sharding is a more
|
||||
number of queues, which is problematic. Shuffle-sharding is a more
|
||||
nimble technique that can do a better job of insulating the low-intensity
|
||||
flows from the high-intensity flows. The terminology of shuffle sharding uses
|
||||
flows from the high-intensity flows. The terminology of shuffle-sharding uses
|
||||
the metaphor of dealing a hand from a deck of cards; each queue is a
|
||||
metaphorical card. The shuffle sharding technique starts with hashing
|
||||
metaphorical card. The shuffle-sharding technique starts with hashing
|
||||
the flow-identifying characteristics of the request, to produce a hash
|
||||
value with dozens or more of bits. Then the hash value is used as a
|
||||
value with dozens or more of bits. Then the hash value is used as a
|
||||
source of entropy to shuffle the deck and deal a hand of cards
|
||||
(queues). All the dealt queues are examined, and the request is put
|
||||
into one of the examined queues with the shortest length. With a
|
||||
(queues). All the dealt queues are examined, and the request is put
|
||||
into one of the examined queues with the shortest length. With a
|
||||
modest hand size, it does not cost much to examine all the dealt cards
|
||||
and a given low-intensity flow has a good chance to dodge the effects of a
|
||||
given high-intensity flow. With a large hand size it is expensive to examine
|
||||
given high-intensity flow. With a large hand size it is expensive to examine
|
||||
the dealt queues and more difficult for the low-intensity flows to dodge the
|
||||
collective effects of a set of high-intensity flows. Thus, the hand size
|
||||
collective effects of a set of high-intensity flows. Thus, the hand size
|
||||
should be chosen judiciously.
@ -308,7 +308,7 @@ kubectl create secret tls server --cert server.crt --key server-key.pem
|
|||
secret/server created
|
||||
```
|
||||
|
||||
Finally, you can populate `ca.pem` into a {< glossary_tooltip text="ConfigMap" term_id="configmap" >}}
|
||||
Finally, you can populate `ca.pem` into a {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}}
|
||||
and use it as the trust root to verify the serving certificate:
|
||||
|
||||
```shell
@ -23,13 +23,13 @@ spec:
|
|||
mountPath: /var/log
|
||||
- name: count-log-1
|
||||
image: busybox:1.28
|
||||
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
|
||||
args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']
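# note: tail -F (follow by name, with retry) keeps following the log file even if it is rotated or recreated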
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
- name: count-log-2
|
||||
image: busybox:1.28
|
||||
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
|
||||
args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
@ -275,7 +275,6 @@ dengan nama `topic` ke dalam _environment variable_ `TOPIC`.
|
|||
ke dalam klaster Kubernetes. Alternatif lain, kamu dapat [memasang Service Catalog dengan SC tool](/docs/tasks/service-catalog/install-service-catalog-using-sc/).
|
||||
* Lihat [contoh makelar servis](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers).
|
||||
* Pelajari mengenai [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) proyek.
|
||||
* Lihat [svc-cat.io](https://svc-cat.io/docs/).
|
||||
@ -0,0 +1,136 @@
|
|||
---
|
||||
title: ガベージコレクション
|
||||
content_type: concept
|
||||
weight: 50
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
ガベージコレクションは、Kubernetesがクラスターリソースをクリーンアップするために使用するさまざまなメカニズムの総称です。これにより、次のようなリソースのクリーンアップが可能になります:
|
||||
|
||||
* [失敗したPod](/ja/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)
|
||||
* [完了したJob](/ja/docs/concepts/workloads/controllers/ttlafterfinished/)
|
||||
* [owner referenceのないオブジェクト](#owners-dependents)
|
||||
* [未使用のコンテナとコンテナイメージ](#containers-images)
|
||||
* [StorageClassの再利用ポリシーがDeleteである動的にプロビジョニングされたPersistentVolume](/ja/docs/concepts/storage/persistent-volumes/#delete)
|
||||
* [失効または期限切れのCertificateSigningRequests (CSRs)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
|
||||
* 次のシナリオで削除された{{<glossary_tooltip text="Node" term_id="node">}}:
|
||||
* クラウド上でクラスターが[クラウドコントローラーマネージャー](/ja/docs/concepts/architecture/cloud-controller/)を使用する場合
|
||||
* オンプレミスでクラスターがクラウドコントローラーマネージャーと同様のアドオンを使用する場合
|
||||
* [Node Leaseオブジェクト](/ja/docs/concepts/architecture/nodes/#heartbeats)
|
||||
|
||||
## オーナーの依存関係 {#owners-dependents}
|
||||
|
||||
Kubernetesの多くのオブジェクトは、[*owner reference*](/docs/concepts/overview/working-with-objects/owners-dependents/)を介して相互にリンクしています。
|
||||
owner referenceは、どのオブジェクトが他のオブジェクトに依存しているかをコントロールプレーンに通知します。
|
||||
Kubernetesは、owner referenceを使用して、コントロールプレーンやその他のAPIクライアントに、オブジェクトを削除する前に関連するリソースをクリーンアップする機会を提供します。
|
||||
ほとんどの場合、Kubernetesはowner referenceを自動的に管理します。
|
||||
|
||||
Ownershipは、一部のリソースでも使用される[ラベルおよびセレクター](/docs/concepts/overview/working-with-objects/labels/)メカニズムとは異なります。
|
||||
たとえば、`EndpointSlice`オブジェクトを作成する{{<glossary_tooltip text="Service" term_id="service">}}を考えます。
|
||||
Serviceは*ラベル*を使用して、コントロールプレーンがServiceに使用されている`EndpointSlice`オブジェクトを判別できるようにします。
|
||||
ラベルに加えて、Serviceに代わって管理される各`EndpointSlice`には、owner referenceがあります。
|
||||
owner referenceは、Kubernetesのさまざまな部分が制御していないオブジェクトへの干渉を回避するのに役立ちます。
|
||||
|
||||
{{< note >}}
|
||||
namespace間のowner referenceは、設計上許可されていません。
|
||||
namespaceの依存関係は、クラスタースコープまたはnamespaceのオーナーを指定できます。
|
||||
namespaceのオーナーは、依存関係と同じnamespaceに**存在する必要があります**。
|
||||
そうでない場合、owner referenceは不在として扱われ、すべてのオーナーが不在であることが確認されると、依存関係は削除される可能性があります。
|
||||
|
||||
クラスタースコープの依存関係は、クラスタースコープのオーナーのみを指定できます。
|
||||
v1.20以降では、クラスタースコープの依存関係がnamespaceを持つkindをオーナーとして指定している場合、それは解決できないowner referenceを持つものとして扱われ、ガベージコレクションを行うことはできません。
|
||||
|
||||
v1.20以降では、ガベージコレクターが無効なnamespace間の`ownerReference`や、namespaceスコープのkindをオーナーとして参照するクラスタースコープの依存関係を検出した場合、理由が`OwnerRefInvalidNamespace`で、`involvedObject`が当該の無効な依存関係を指す警告イベントが報告されます。
|
||||
以下のコマンドを実行すると、そのようなイベントを確認できます。
|
||||
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`
|
||||
{{< /note >}}
|
||||
|
||||
## カスケード削除 {#cascading-deletion}
|
||||
|
||||
Kubernetesは、ReplicaSetを削除したときに残されたPodなど、owner referenceがなくなったオブジェクトをチェックして削除します。
|
||||
オブジェクトを削除する場合、カスケード削除と呼ばれるプロセスで、Kubernetesがオブジェクトの依存関係を自動的に削除するかどうかを制御できます。
|
||||
カスケード削除には、次の2つのタイプがあります。
|
||||
|
||||
* フォアグラウンドカスケード削除
|
||||
* バックグラウンドカスケード削除
|
||||
|
||||
また、Kubernetes {{<glossary_tooltip text="finalizer" term_id="finalizer">}}を使用して、ガベージコレクションがowner referenceを持つリソースを削除する方法とタイミングを制御することもできます。
|
||||
|
||||
### フォアグラウンドカスケード削除 {#foreground-deletion}
|
||||
|
||||
フォアグラウンドカスケード削除では、削除するオーナーオブジェクトは最初に*削除進行中*の状態になります。
|
||||
この状態では、オーナーオブジェクトに次のことが起こります。
|
||||
|
||||
* Kubernetes APIサーバーは、オブジェクトの`metadata.deletionTimestamp`フィールドを、オブジェクトに削除のマークが付けられた時刻に設定します。
|
||||
* Kubernetes APIサーバーは、`metadata.finalizers`フィールドを`foregroundDeletion`に設定します。
|
||||
* オブジェクトは、削除プロセスが完了するまで、KubernetesAPIを介して表示されたままになります。
|
||||
|
||||
オーナーオブジェクトが削除進行中の状態に入ると、コントローラーは依存関係を削除します。
|
||||
すべての依存関係オブジェクトを削除した後、コントローラーはオーナーオブジェクトを削除します。
|
||||
この時点で、オブジェクトはKubernetesAPIに表示されなくなります。
|
||||
|
||||
フォアグラウンドカスケード削除中に、オーナーの削除をブロックする依存関係は、`ownerReference.blockOwnerDeletion=true`フィールドを持つ依存関係のみです。
|
||||
詳細については、[フォアグラウンドカスケード削除の使用](/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)を参照してください。
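
以下は前提を置いた最小の例です(Deployment名は仮のものです)。kubectlでフォアグラウンドカスケード削除を指定するには:

```shell
# 仮の例: オーナーオブジェクトをフォアグラウンドカスケード削除で削除する
kubectl delete deployment example-deployment --cascade=foreground
```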
|
||||
|
||||
### バックグラウンドカスケード削除 {#background-deletion}
|
||||
|
||||
バックグラウンドカスケード削除では、Kubernetes APIサーバーがオーナーオブジェクトをすぐに削除し、コントローラーがバックグラウンドで依存オブジェクトをクリーンアップします。
|
||||
デフォルトでは、フォアグラウンド削除を手動で使用するか、依存オブジェクトを孤立させることを選択しない限り、Kubernetesはバックグラウンドカスケード削除を使用します。
|
||||
|
||||
詳細については、[バックグラウンドカスケード削除の使用](/docs/tasks/administer-cluster/use-cascading-deletion/#use-background-cascading-deletion)を参照してください。
|
||||
|
||||
### 孤立した依存関係
|
||||
|
||||
Kubernetesがオーナーオブジェクトを削除すると、残された依存関係は*orphan*オブジェクトと呼ばれます。
|
||||
デフォルトでは、Kubernetesは依存関係オブジェクトを削除します。この動作をオーバーライドする方法については、[オーナーオブジェクトと孤立した依存関係の削除](/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy)を参照してください。
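
こちらも前提を置いた例です(ReplicaSet名は仮のものです)。依存オブジェクトを孤立させたままオーナーだけを削除するには:

```shell
# 仮の例: 依存するPodを残したままReplicaSetだけを削除する
kubectl delete replicaset example-rs --cascade=orphan
```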
|
||||
|
||||
## 未使用のコンテナとイメージのガベージコレクション {#containers-images}
|
||||
|
||||
{{<glossary_tooltip text="kubelet" term_id="kubelet">}}は未使用のイメージに対して5分ごとに、未使用のコンテナーに対して1分ごとにガベージコレクションを実行します。
|
||||
外部のガベージコレクションツールは、kubeletの動作を壊し、存在するはずのコンテナを削除する可能性があるため、使用しないでください。
|
||||
|
||||
未使用のコンテナーとイメージのガベージコレクションのオプションを設定するには、[設定ファイル](/docs/tasks/administer-cluster/kubelet-config-file/)を使用してkubeletを調整し、[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)リソースタイプを使用してガベージコレクションに関連するパラメーターを変更します。
|
||||
|
||||
### コンテナイメージのライフサイクル
|
||||
|
||||
Kubernetesは、kubeletの一部である*イメージマネージャー*を通じて、{{< glossary_tooltip text="cadvisor" term_id="cadvisor" >}}の協力を得て、すべてのイメージのライフサイクルを管理します。kubeletは、ガベージコレクションを決定する際に、次のディスク使用制限を考慮します。
|
||||
|
||||
* `HighThresholdPercent`
|
||||
* `LowThresholdPercent`
|
||||
|
||||
設定された`HighThresholdPercent`値を超えるディスク使用量はガベージコレクションをトリガーします。
|
||||
ガベージコレクションは、最後に使用された時間に基づいて、最も古いものから順にイメージを削除します。
|
||||
kubeletは、ディスク使用量が`LowThresholdPercent`値に達するまでイメージを削除します。
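
以下は前提を置いた設定のスケッチです(しきい値とファイルパスは例示用の値です)。`KubeletConfiguration`でイメージGCのしきい値を指定するには:

```shell
# 仮の例: kubeletの設定ファイルでイメージGCのしきい値を調整する
cat <<'EOF' > /var/lib/kubelet/config.yaml   # パスは環境に合わせた仮定
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85   # これを超えるとイメージのガベージコレクションが実行される
imageGCLowThresholdPercent: 80    # ディスク使用量がこの値に下がるまでイメージが削除される
EOF
```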
|
||||
|
||||
### コンテナのガベージコレクション {#container-image-garbage-collection}
|
||||
|
||||
kubeletは、次の変数に基づいて未使用のコンテナをガベージコレクションします。
|
||||
|
||||
* `MinAge`: kubeletがガベージコレクションできるコンテナの最低期間。`0`を設定すると無効化されます。
|
||||
* `MaxPerPodContainer`: 各Podのペアが持つことができるデッドコンテナの最大数。`0`未満に設定すると無効化されます。
|
||||
* `MaxContainers`: クラスターが持つことができるデッドコンテナーの最大数。`0`未満に設定すると無効化されます。
|
||||
|
||||
これらの変数に加えて、kubeletは、通常、最も古いものから順に、定義されていない削除されたコンテナをガベージコレクションします。
|
||||
|
||||
`MaxPerPodContainer`と`MaxContainers`は、Podごとのコンテナーの最大数(`MaxPerPodContainer`)を保持すると、グローバルなデッドコンテナの許容合計(`MaxContainers`)を超える状況で、互いに競合する可能性があります。
この状況では、kubeletは`MaxPerPodContainer`を調整して競合に対処します。最悪のシナリオは、`MaxPerPodContainer`を1にダウングレードし、最も古いコンテナーを削除することです。
|
||||
さらに、削除されたPodが所有するコンテナは、`MinAge`より古くなると削除されます。
|
||||
|
||||
{{<note>}}
|
||||
kubeletがガベージコレクションするのは、自分が管理するコンテナのみです。
|
||||
{{</note>}}
|
||||
|
||||
## ガベージコレクションの設定 {#configuring-gc}
|
||||
|
||||
これらのリソースを管理するコントローラーに固有のオプションを設定することにより、リソースのガベージコレクションを調整できます。次のページは、ガベージコレクションを設定する方法を示しています。
|
||||
|
||||
* [Kubernetesオブジェクトのカスケード削除の設定](/docs/tasks/administer-cluster/use-cascading-deletion/)
|
||||
* [完了したジョブのクリーンアップの設定](/ja/docs/concepts/workloads/controllers/ttlafterfinished/)
|
||||
|
||||
<!-- * [Configuring unused container and image garbage collection](/docs/tasks/administer-cluster/reconfigure-kubelet/) -->
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* [Kubernetes オブジェクトの所有権](/docs/concepts/overview/working-with-objects/owners-dependents/)を学びます。
|
||||
* Kubernetes [finalizer](/docs/concepts/overview/working-with-objects/finalizers/)を学びます。
|
||||
* 完了したジョブをクリーンアップする[TTL controller](/ja/docs/concepts/workloads/controllers/ttlafterfinished/)(beta)について学びます。
|
|
@ -89,6 +89,7 @@ kubectl edit SampleDB/example-database # 手動でいくつかの設定を変更
|
|||
* ユースケースに合わせた、既製のオペレーターを[OperatorHub.io](https://operatorhub.io/)から見つけます
|
||||
* 自前のオペレーターを書くために既存のツールを使います、例:
|
||||
* [Charmed Operator Framework](https://juju.is/)
|
||||
* [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
|
||||
* [KUDO](https://kudo.dev/)(Kubernetes Universal Declarative Operator)を使います
|
||||
* [kubebuilder](https://book.kubebuilder.io/)を使います
|
||||
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (dotnet operator SDK)
|
||||
|
|
|
@ -124,7 +124,7 @@ Events: <none>
|
|||
1. ストレージアセットに関連するのデータを手動で適切にクリーンアップします。
|
||||
1. 関連するストレージアセットを手動で削除するか、同じストレージアセットを再利用したい場合、新しいストレージアセット定義と共にPersistentVolumeを作成します。
|
||||
|
||||
#### 削除
|
||||
#### 削除 {#delete}
|
||||
|
||||
`Delete`再クレームポリシーをサポートするボリュームプラグインの場合、削除するとPersistentVolumeオブジェクトがKubernetesから削除されるだけでなく、AWS EBS、GCE PD、Azure Disk、Cinderボリュームなどの外部インフラストラクチャーの関連ストレージアセットも削除されます。動的にプロビジョニングされたボリュームは、[StorageClassの再クレームポリシー](#reclaim-policy)を継承します。これはデフォルトで削除です。管理者は、ユーザーの需要に応じてStorageClassを構成する必要があります。そうでない場合、PVは作成後に編集またはパッチを適用する必要があります。[PersistentVolumeの再クレームポリシーの変更](/docs/tasks/administer-cluster/change-pv-reclaim-policy/)を参照してください。
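
以下は前提を置いた例です(PV名は仮のものです)。既存のPersistentVolumeの再クレームポリシーをRetainに変更するには:

```shell
# 仮の例: PVの再クレームポリシーをDeleteからRetainへ変更する
kubectl patch pv example-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```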
|
||||
|
||||
|
|
|
@ -200,10 +200,10 @@ kubectl diff -f ./my-manifest.yaml
|
|||
|
||||
# Nodeから返されるすべてのキーをピリオド区切りの階層表記で生成します。
|
||||
# 複雑にネストされたJSON構造をもつキーを指定したい時に便利です
|
||||
kubectl get nodes -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
|
||||
kubectl get nodes -o json | jq -c 'paths|join(".")'
|
||||
|
||||
# Pod等から返されるすべてのキーをピリオド区切り階層表記で生成します。
|
||||
kubectl get pods -o json | jq -c 'path(..)|[.[]|tostring]|join(".")'
|
||||
kubectl get pods -o json | jq -c 'paths|join(".")'
|
||||
```
|
||||
|
||||
## リソースのアップデート
|
||||
|
|
|
@ -32,22 +32,22 @@ content_type: concept
|
|||
Для вызова API Kubernetes из языка программирования вы можете использовать
|
||||
[клиентские библиотеки](/docs/reference/using-api/client-libraries/). Официально поддерживаемые клиентские библиотеки:
|
||||
|
||||
- [Клиентская библиотека для Go](https://github.com/kubernetes/client-go/)
|
||||
- [Клиентская библиотека для Python](https://github.com/kubernetes-client/python)
|
||||
- [Клиентская библиотека для Java](https://github.com/kubernetes-client/java)
|
||||
- [Клиентская библиотека для JavaScript](https://github.com/kubernetes-client/javascript)
|
||||
- [Клиентская библиотека Go](https://github.com/kubernetes/client-go/)
|
||||
- [Клиентская библиотека Python](https://github.com/kubernetes-client/python)
|
||||
- [Клиентская библиотека Java](https://github.com/kubernetes-client/java)
|
||||
- [Клиентская библиотека JavaScript](https://github.com/kubernetes-client/javascript)
|
||||
|
||||
## Ссылки CLI
|
||||
|
||||
* [kubectl](/docs/user-guide/kubectl-overview) - Основной инструмент CLI для запуска команд и управления кластерами Kubernetes.
|
||||
* [JSONPath](/docs/user-guide/jsonpath/) - Документация по синтаксису использования [выражений JSONPath](http://goessner.net/articles/JsonPath/) с kubectl.
|
||||
* [JSONPath](/docs/user-guide/jsonpath/) - Документация по синтаксису использования [выражений JSONPath](http://goessner.net/articles/JsonPath/) с kubectl.
|
||||
* [kubeadm](/docs/admin/kubeadm/) - Инструмент CLI для легкого разворачивания защищенного кластера Kubernetes.
|
||||
* [kubefed](/docs/admin/kubefed/) - Инструмент CLI для помощи в администрировании федеративных кластеров.
|
||||
|
||||
## Ссылки на конфигурации
|
||||
|
||||
* [kubelet](/docs/admin/kubelet/) - Основной *агент ноды*, который работает на каждой ноде. Kubelet принимает набор PodSpecs и гарантирует, что описанные контейнеры работают исправно.
|
||||
* [kube-apiserver](/docs/admin/kube-apiserver/) - REST API, который проверяет и настраивает данные для объектов API, таких как модули, службы, контроллеры и репликации.
|
||||
* [kube-apiserver](/docs/admin/kube-apiserver/) - REST API, который проверяет и настраивает данные для объектов API, таких, как модули, службы, контроллеры и репликации.
|
||||
* [kube-controller-manager](/docs/admin/kube-controller-manager/) - Демон, который встраивает основные контрольные циклы, поставляемые с Kubernetes.
|
||||
* [kube-proxy](/docs/admin/kube-proxy/) - Может выполнять простую пересылку запросов TCP/UDP или циклическую переадресацию TCP/UDP через набор бекендов.
|
||||
* [kube-scheduler](/docs/admin/kube-scheduler/) - Планировщик, который управляет доступностью, производительностью и хранилищем.
|
||||
|
@ -56,6 +56,6 @@ content_type: concept
|
|||
|
||||
## Дизайн документация
|
||||
|
||||
Архив документации по дизайну для функциональности Kubernetes. Начните с [Архитектура Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) и [Обзор дизайна Kubernetes](https://git.k8s.io/community/contributors/design-proposals).
|
||||
Архив документации по дизайну для функциональности Kubernetes. Начните с [Архитектуры Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) и [Обзора дизайна Kubernetes](https://git.k8s.io/community/contributors/design-proposals).
|
||||
|
||||
|
||||
|
|
|
@ -33,8 +33,6 @@ klog is the Kubernetes logging library. [klog](https://github.com/kubernetes/klo
|
|||
generates log messages for the Kubernetes system components.
|
||||
|
||||
For more information about klog configuration, see the [Command line tool reference](/docs/reference/command-line-tools-reference/).
|
||||
|
||||
An example of the klog native format:
|
||||
-->
|
||||
klog 是 Kubernetes 的日志库。
|
||||
[klog](https://github.com/kubernetes/klog)
|
||||
|
@ -42,26 +40,111 @@ klog 是 Kubernetes 的日志库。
|
|||
|
||||
有关 klog 配置的更多信息,请参见[命令行工具参考](/zh/docs/reference/command-line-tools-reference/)。
|
||||
|
||||
klog 原始格式的示例:
|
||||
<!--
|
||||
Kubernetes is in the process of simplifying logging in its components. The
|
||||
following klog command line flags [are
|
||||
deprecated](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
|
||||
starting with Kubernetes 1.23 and will be removed in a future release:
|
||||
-->
|
||||
Kubernetes 正在进行简化其组件日志的努力。下面的 klog 命令行参数从 Kubernetes 1.23
|
||||
开始[已被废弃](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components),
|
||||
会在未来版本中移除:
|
||||
|
||||
- `--add-dir-header`
|
||||
- `--alsologtostderr`
|
||||
- `--log-backtrace-at`
|
||||
- `--log-dir`
|
||||
- `--log-file`
|
||||
- `--log-file-max-size`
|
||||
- `--logtostderr`
|
||||
- `--one-output`
|
||||
- `--skip-headers`
|
||||
- `--skip-log-headers`
|
||||
- `--stderrthreshold`
|
||||
|
||||
<!--
|
||||
Output will always be written to stderr, regardless of the output
|
||||
format. Output redirection is expected to be handled by the component which
|
||||
invokes a Kubernetes component. This can be a POSIX shell or a tool like
|
||||
systemd.
|
||||
-->
|
||||
输出总会被写到标准错误输出(stderr)之上,无论输出格式如何。
|
||||
对输出的重定向将由调用 Kubernetes 组件的软件来处理。
|
||||
这一软件可以是 POSIX Shell 或者类似 systemd 这样的工具。
|
||||
|
||||
<!--
|
||||
In some cases, for example a distroless container or a Windows system service,
|
||||
those options are not available. Then the
|
||||
[`kube-log-runner`](https://github.com/kubernetes/kubernetes/blob/d2a8a81639fcff8d1221b900f66d28361a170654/staging/src/k8s.io/component-base/logs/kube-log-runner/README.md)
|
||||
binary can be used as wrapper around a Kubernetes component to redirect
|
||||
output. A prebuilt binary is included in several Kubernetes base images under
|
||||
its traditional name as `/go-runner` and as `kube-log-runner` in server and
|
||||
node release archives.
|
||||
-->
|
||||
在某些场合下,例如对于无发行主体的(distroless)容器或者 Windows 系统服务,
|
||||
这些替代方案都是不存在的。那么你可以使用
|
||||
[`kube-log-runner`](https://github.com/kubernetes/kubernetes/blob/d2a8a81639fcff8d1221b900f66d28361a170654/staging/src/k8s.io/component-base/logs/kube-log-runner/README.md)
|
||||
可执行文件来作为 Kubernetes 的封装层,完成对输出的重定向。
|
||||
在很多 Kubernetes 基础镜像中,都包含一个预先构建的可执行程序。
|
||||
这个程序原来称作 `/go-runner`,而在服务器和节点的发行版本库中,称作 `kube-log-runner`。
|
||||
|
||||
<!--
|
||||
This table shows how `kube-log-runner` invocations correspond to shell redirection:
|
||||
-->
|
||||
下表展示的是 `kube-log-runner` 调用与 Shell 重定向之间的对应关系:
|
||||
|
||||
<!--
|
||||
| Usage | POSIX shell (such as bash) | `kube-log-runner <options> <cmd>` |
|
||||
| -----------------------------------------|----------------------------|-------------------------------------------------------------|
|
||||
| Merge stderr and stdout, write to stdout | `2>&1` | `kube-log-runner` (default behavior) |
|
||||
| Redirect both into log file | `1>>/tmp/log 2>&1` | `kube-log-runner -log-file=/tmp/log` |
|
||||
| Copy into log file and to stdout | `2>&1 \| tee -a /tmp/log` | `kube-log-runner -log-file=/tmp/log -also-stdout` |
|
||||
| Redirect only stdout into log file | `>/tmp/log` | `kube-log-runner -log-file=/tmp/log -redirect-stderr=false` |
|
||||
-->
|
||||
| 用法 | POSIX Shell(例如 Bash) | `kube-log-runner <options> <cmd>` |
|
||||
| --------------------------------|--------------------------|------------------------------------|
|
||||
| 合并 stderr 与 stdout,写出到 stdout | `2>&1` | `kube-log-runner`(默认行为) |
|
||||
| 将 stderr 与 stdout 重定向到日志文件 | `1>>/tmp/log 2>&1` | `kube-log-runner -log-file=/tmp/log` |
|
||||
| 输出到 stdout 并复制到日志文件中 | `2>&1 \| tee -a /tmp/log` | `kube-log-runner -log-file=/tmp/log -also-stdout` |
|
||||
| 仅将 stdout 重定向到日志 | `>/tmp/log` | `kube-log-runner -log-file=/tmp/log -redirect-stderr=false` |
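
下面是一个带有假设成分的示例(被包装组件的路径与参数仅作示意):

```shell
# 示意:用 kube-log-runner 包装某个组件,将输出写入日志文件并同时输出到 stdout
kube-log-runner -log-file=/var/log/kubelet.log -also-stdout \
  /usr/bin/kubelet --config=/etc/kubernetes/kubelet-config.yaml
```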
|
||||
|
||||
<!--
|
||||
### Klog output
|
||||
|
||||
An example of the traditional klog native format:
|
||||
-->
|
||||
### klog 输出
|
||||
|
||||
传统的 klog 原生格式示例:
|
||||
|
||||
```
|
||||
I1025 00:15:15.525108 1 httplog.go:79] GET /api/v1/namespaces/kube-system/pods/metrics-server-v0.3.1-57c75779f-9p8wg: (1.512ms) 200 [pod_nanny/v0.0.0 (linux/amd64) kubernetes/$Format 10.56.1.19:51756]
|
||||
```
|
||||
|
||||
<!--
|
||||
The message string may contain line breaks:
|
||||
-->
|
||||
消息字符串可能包含换行符:
|
||||
|
||||
```
|
||||
I1025 00:15:15.525108 1 example.go:79] This is a message
|
||||
which has a line break.
|
||||
```
|
||||
|
||||
<!--
|
||||
### Structured Logging
|
||||
-->
|
||||
### 结构化日志
|
||||
|
||||
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
|
||||
|
||||
{{< warning >}}
|
||||
<!--
|
||||
Migration to structured log messages is an ongoing process. Not all log messages are structured in this version. When parsing log files, you must also handle unstructured log messages.
|
||||
|
||||
Log formatting and value serialization are subject to change.
|
||||
-->
|
||||
{{< warning >}}
|
||||
到结构化日志消息的迁移是一个持续的过程。
|
||||
在此版本中,并非所有日志消息都是结构化的。
|
||||
迁移到结构化日志消息是一个正在进行的过程。在此版本中,并非所有日志消息都是结构化的。
|
||||
解析日志文件时,你也必须要处理非结构化日志消息。
|
||||
|
||||
日志格式和值的序列化可能会发生变化。
|
||||
|
@ -69,27 +152,45 @@ Log formatting and value serialization are subject to change.
|
|||
|
||||
<!--
|
||||
Structured logging introduces a uniform structure in log messages allowing for programmatic extraction of information. You can store and process structured logs with less effort and cost.
|
||||
New message format is backward compatible and enabled by default.
|
||||
|
||||
Format of structured logs:
|
||||
The code which generates a log message determines whether it uses the traditional unstructured klog output
|
||||
or structured logging.
|
||||
-->
|
||||
结构化日志记录旨在日志消息中引入统一结构,以便以编程方式提取信息。
|
||||
你可以方便地用更小的开销来处理结构化日志。
|
||||
新的消息格式向后兼容,并默认启用。
|
||||
生成日志消息的代码决定其使用传统的非结构化的 klog 还是结构化的日志。
|
||||
|
||||
结构化日志的格式:
|
||||
<!--
|
||||
The default formatting of structured log messages is as text, with a format that
|
||||
is backward compatible with traditional klog:
|
||||
-->
|
||||
默认的结构化日志消息是以文本形式呈现的,其格式与传统的 klog 保持向后兼容:
|
||||
|
||||
```ini
|
||||
<klog header> "<message>" <key1>="<value1>" <key2>="<value2>" ...
|
||||
```
|
||||
|
||||
<!-- Example: -->
|
||||
<!--
|
||||
Example:
|
||||
-->
|
||||
示例:
|
||||
|
||||
```ini
|
||||
I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
|
||||
```
|
||||
|
||||
<!--
|
||||
Strings are quoted. Other values are formatted with
|
||||
[`%+v`](https://pkg.go.dev/fmt#hdr-Printing), which may cause log messages to
|
||||
continue on the next line [depending on the data](https://github.com/kubernetes/kubernetes/issues/106428).
|
||||
-->
|
||||
字符串在输出时会被添加引号。其他数值类型都使用 [`%+v`](https://pkg.go.dev/fmt#hdr-Printing)
|
||||
来格式化,因此可能导致日志消息会延续到下一行,
|
||||
[具体取决于数据本身](https://github.com/kubernetes/kubernetes/issues/106428)。
|
||||
|
||||
```
|
||||
I1025 00:15:15.525108 1 example.go:116] "Example" data="This is text with a line break\nand \"quotation marks\"." someInt=1 someFloat=0.1 someStruct={StringField: First line,
|
||||
second line.}
|
||||
```
|
||||
|
||||
<!--
|
||||
### JSON log format
|
||||
|
@ -98,6 +199,7 @@ I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod=
|
|||
|
||||
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
|
||||
|
||||
{{< warning >}}
|
||||
<!--
|
||||
JSON output does not support many standard klog flags. For list of unsupported klog flags, see the [Command line tool reference](/docs/reference/command-line-tools-reference/).
|
||||
|
||||
|
@ -105,10 +207,8 @@ Not all logs are guaranteed to be written in JSON format (for example, during pr
|
|||
|
||||
Field names and JSON serialization are subject to change.
|
||||
-->
|
||||
{{<warning >}}
|
||||
JSON 输出并不支持太多标准 klog 参数。
|
||||
对于不受支持的 klog 参数的列表,请参见
|
||||
[命令行工具参考](/zh/docs/reference/command-line-tools-reference/)。
|
||||
JSON 输出并不支持太多标准 klog 参数。对于不受支持的 klog 参数的列表,
|
||||
请参见[命令行工具参考](/zh/docs/reference/command-line-tools-reference/)。
|
||||
|
||||
并不是所有日志都保证写成 JSON 格式(例如,在进程启动期间)。
|
||||
如果你打算解析日志,请确保可以处理非 JSON 格式的日志行。
|
||||
|
@ -137,8 +237,9 @@ JSON 日志格式示例(美化输出):
|
|||
|
||||
<!--
|
||||
Keys with special meaning:
|
||||
|
||||
* `ts` - timestamp as Unix time (required, float)
|
||||
* `v` - verbosity (required, int, default 0)
|
||||
* `v` - verbosity (only for info and not for error messages, int)
|
||||
* `err` - error string (optional, string)
|
||||
* `msg` - message (required, string)
|
||||
|
||||
|
@ -146,11 +247,12 @@ List of components currently supporting JSON format:
|
|||
-->
|
||||
具有特殊意义的 key:
|
||||
* `ts` - Unix 时间风格的时间戳(必选项,浮点值)
|
||||
* `v` - 精细度(必选项,整数,默认值 0)
|
||||
* `v` - 精细度(仅用于 info 级别,不能用于错误信息,整数)
|
||||
* `err` - 错误字符串(可选项,字符串)
|
||||
* `msg` - 消息(必选项,字符串)
|
||||
|
||||
当前支持JSON格式的组件列表:
|
||||
|
||||
* {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
|
||||
* {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}}
|
||||
* {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}
|
||||
|
@ -158,43 +260,30 @@ List of components currently supporting JSON format:
|
|||
|
||||
<!--
|
||||
### Log sanitization
|
||||
|
||||
{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
|
||||
|
||||
{{<warning >}}
|
||||
Log sanitization might incur significant computation overhead and therefore should not be enabled in production.
|
||||
{{< /warning >}}
|
||||
-->
|
||||
|
||||
### 日志清理
|
||||
### 日志清洗 {#log-sanitization}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.20" state="alpha" >}}
|
||||
|
||||
{{<warning >}}
|
||||
日志清理可能会导致大量的计算开销,因此不应启用在生产环境中。
|
||||
<!--
|
||||
Log sanitization might incur significant computation overhead and therefore should not be enabled in production.
|
||||
-->
|
||||
日志清洗(Log Sanitization)可能会导致大量的计算开销,因此不应在生产环境中启用。
|
||||
{{< /warning >}}
|
||||
|
||||
<!--
|
||||
The `--experimental-logging-sanitization` flag enables the klog sanitization filter.
|
||||
If enabled all log arguments are inspected for fields tagged as sensitive data (e.g. passwords, keys, tokens) and logging of these fields will be prevented.
|
||||
-->
|
||||
|
||||
`--experimental-logging-sanitization` 参数可用来启用 klog 清理过滤器。
|
||||
如果启用后,将检查所有日志参数中是否有标记为敏感数据的字段(比如:密码,密钥,令牌),并且将阻止这些字段的记录。
|
||||
`--experimental-logging-sanitization` 参数可用来启用 klog 清洗过滤器。
|
||||
如果启用后,将检查所有日志参数中是否有标记为敏感数据的字段(比如:密码,密钥,令牌),
|
||||
并且将阻止这些字段的记录。
|
||||
|
||||
<!--
|
||||
List of components currently supporting log sanitization:
|
||||
* kube-controller-manager
|
||||
* kube-apiserver
|
||||
* kube-scheduler
|
||||
* kubelet
|
||||
|
||||
{{< note >}}
|
||||
The Log sanitization filter does not prevent user workload logs from leaking sensitive data.
|
||||
{{< /note >}}
|
||||
-->
|
||||
|
||||
当前支持日志清理的组件列表:
|
||||
当前支持日志清洗的组件列表:
|
||||
|
||||
* kube-controller-manager
|
||||
* kube-apiserver
|
||||
|
@ -202,7 +291,10 @@ The Log sanitization filter does not prevent user workload logs from leaking sen
|
|||
* kubelet
|
||||
|
||||
{{< note >}}
|
||||
日志清理过滤器不会阻止用户工作负载日志泄漏敏感数据。
|
||||
<!--
|
||||
The Log sanitization filter does not prevent user workload logs from leaking sensitive data.
|
||||
-->
|
||||
日志清洗过滤器不会阻止用户工作负载日志泄漏敏感数据。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -214,8 +306,7 @@ Increasing verbosity settings logs increasingly less severe events. A verbosity
|
|||
### 日志精细度级别
|
||||
|
||||
参数 `-v` 控制日志的精细度。增大该值会增大日志事件的数量。
|
||||
减小该值可以减小日志事件的数量。
|
||||
增大精细度会记录更多的不太严重的事件。
|
||||
减小该值可以减小日志事件的数量。增大精细度会记录更多的不太严重的事件。
|
||||
精细度设置为 0 时只记录关键(critical)事件。
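
下面是一个示意性的例子(组件与取值仅作演示):

```shell
# 示意:以更高的精细度级别启动 kube-scheduler,以记录更多细节日志(其余必需参数从略)
kube-scheduler --v=4
```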
|
||||
|
||||
<!--
|
||||
|
@ -225,13 +316,15 @@ There are two types of system components: those that run in a container and thos
|
|||
that do not run in a container. For example:
|
||||
|
||||
* The Kubernetes scheduler and kube-proxy run in a container.
|
||||
* The kubelet and container runtime, for example Docker, do not run in containers.
|
||||
* The kubelet and {{<glossary_tooltip term_id="container-runtime" text="container runtime">}}
|
||||
do not run in containers.
|
||||
-->
|
||||
### 日志位置
|
||||
|
||||
有两种类型的系统组件:运行在容器中的组件和不运行在容器中的组件。例如:
|
||||
|
||||
* Kubernetes 调度器和 kube-proxy 在容器中运行。
|
||||
* kubelet 和容器运行时,例如 Docker,不在容器中运行。
|
||||
* kubelet 和{{<glossary_tooltip term_id="container-runtime" text="容器运行时">}}不在容器中运行。
|
||||
|
||||
<!--
|
||||
On machines with systemd, the kubelet and container runtime write to journald.
|
||||
|
@ -254,8 +347,11 @@ The `logrotate` tool rotates logs daily, or once the log size is greater than 10
|
|||
<!--
|
||||
* Read about the [Kubernetes Logging Architecture](/docs/concepts/cluster-administration/logging/)
|
||||
* Read about [Structured Logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging)
|
||||
* Read about [deprecation of klog flags](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
|
||||
* Read about the [Conventions for logging severity](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)
|
||||
-->
|
||||
* 阅读 [Kubernetes 日志架构](/zh/docs/concepts/cluster-administration/logging/)
|
||||
* 阅读 [Structured Logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging)
|
||||
* 阅读 [Conventions for logging severity](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)
|
||||
* 阅读[结构化日志提案(英文)](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging)
|
||||
* 阅读 [klog 参数的废弃(英文)](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
|
||||
* 阅读[日志严重级别约定(英文)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)
|
||||
|
||||
|
|
|
@ -65,7 +65,7 @@ that lets you store configuration for other objects to use. Unlike most
|
|||
Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
|
||||
fields. These fields accept key-value pairs as their values. Both the `data`
|
||||
field and the `binaryData` are optional. The `data` field is designed to
|
||||
contain UTF-8 byte sequences while the `binaryData` field is designed to
|
||||
contain UTF-8 strings while the `binaryData` field is designed to
|
||||
contain binary data as base64-encoded strings.
|
||||
|
||||
The name of a ConfigMap must be a valid
|
||||
|
@ -77,7 +77,7 @@ ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects
|
|||
让你可以存储其他对象所需要使用的配置。
|
||||
和其他 Kubernetes 对象都有一个 `spec` 不同的是,ConfigMap 使用 `data` 和
|
||||
`binaryData` 字段。这些字段能够接收键-值对作为其取值。`data` 和 `binaryData`
|
||||
字段都是可选的。`data` 字段设计用来保存 UTF-8 字节序列,而 `binaryData`
|
||||
字段都是可选的。`data` 字段设计用来保存 UTF-8 字符串,而 `binaryData`
|
||||
则被设计用来保存二进制数据作为 base64 编码的字串。
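
下面是一个示意性的例子(名称与取值均为演示用),同时使用了 `data` 与 `binaryData` 字段:

```shell
# 示意:创建一个同时包含 data 与 binaryData 的 ConfigMap
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config        # 演示用名称
data:
  game.properties: |
    enemy.types=aliens,monsters
binaryData:
  logo.bin: aGVsbG8=       # base64 编码后的二进制内容
EOF
```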
|
||||
|
||||
ConfigMap 的名字必须是一个合法的
|
||||
|
@ -289,6 +289,8 @@ To consume a ConfigMap in a volume in a Pod:
|
|||
-->
|
||||
### 在 Pod 中将 ConfigMap 当做文件使用
|
||||
|
||||
要在一个 Pod 的存储卷中使用 ConfigMap:
|
||||
|
||||
<!--
|
||||
1. Create a ConfigMap or use an existing one. Multiple Pods can reference the
|
||||
same ConfigMap.
|
||||
|
@ -384,19 +386,11 @@ ConfigMap 既可以通过 watch 操作实现内容传播(默认形式),也
|
|||
(分别对应 watch 操作的传播延迟、高速缓存的 TTL 时长或者 0)。
|
||||
|
||||
<!--
|
||||
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
|
||||
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
|
||||
-->
|
||||
以环境变量方式使用的 ConfigMap 数据不会被自动更新。
|
||||
更新这些数据需要重新启动 Pod。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive ConfigMap updates.
|
||||
-->
|
||||
将 ConfigMap 作为 [subPath](/docs/concepts/storage/volumes#using-subpath)
|
||||
卷挂载的容器无法收到 ConfigMap 更新。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive ConfigMap updates.
|
||||
-->
|
||||
|
@ -417,7 +411,7 @@ individual Secrets and ConfigMaps as immutable. For clusters that extensively us
|
|||
(at least tens of thousands of unique ConfigMap to Pod mounts), preventing changes to their
|
||||
data has the following advantages:
|
||||
-->
|
||||
Kubernetes 特性 _不可变更的 Secret 和 ConfigMap_ 提供了一种将各个
|
||||
Kubernetes 特性 _Immutable Secret 和 ConfigMaps_ 提供了一种将各个
|
||||
Secret 和 ConfigMap 设置为不可变更的选项。对于大量使用 ConfigMap 的集群
|
||||
(至少有数万个各不相同的 ConfigMap 给 Pod 挂载)而言,禁止更改
|
||||
ConfigMap 的数据有以下好处:
|
||||
|
@ -460,7 +454,7 @@ to the deleted ConfigMap, it is recommended to recreate these pods.
|
|||
-->
|
||||
一旦某 ConfigMap 被标记为不可变更,则 _无法_ 逆转这一变化,也无法更改
|
||||
`data` 或 `binaryData` 字段的内容。你只能删除并重建 ConfigMap。
|
||||
因为现有的 Pod 会维护一个对已删除的 ConfigMap 的挂载点,建议重新创建这些 Pods。
|
||||
因为现有的 Pod 会维护一个已被删除的 ConfigMap 的挂载点,建议重新创建这些 Pods。
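
下面是一个示意性的例子(名称为演示用),展示如何创建不可变更的 ConfigMap:

```shell
# 示意:immutable 字段设置为 true 之后,data/binaryData 的内容将无法再修改
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-immutable-config   # 演示用名称
data:
  key: value
immutable: true
EOF
```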
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
@ -473,4 +467,3 @@ to the deleted ConfigMap, it is recommended to recreate these pods.
|
|||
* 阅读 [Secret](/zh/docs/concepts/configuration/secret/)。
|
||||
* 阅读[配置 Pod 使用 ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
|
||||
* 阅读 [Twelve-Factor 应用](https://12factor.net/zh_cn/)来了解将代码和配置分开的动机。
|
||||
|
||||
|
|
|
@ -32,7 +32,7 @@ It does not mean that there is a file named `kubeconfig`.
|
|||
|
||||
<!--
|
||||
{{< warning >}}
|
||||
Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure.
|
||||
Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure.
|
||||
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.
|
||||
{{< /warning>}}
|
||||
-->
|
||||
|
@ -96,7 +96,7 @@ clusters and namespaces.
|
|||
A *context* element in a kubeconfig file is used to group access parameters
|
||||
under a convenient name. Each context has three parameters: cluster, namespace, and user.
|
||||
By default, the `kubectl` command-line tool uses parameters from
|
||||
the *current context* to communicate with the cluster.
|
||||
the *current context* to communicate with the cluster.
|
||||
-->
|
||||
通过 kubeconfig 文件中的 *context* 元素,使用简便的名称来对访问参数进行分组。每个上下文都有三个参数:cluster、namespace 和 user。默认情况下,`kubectl` 命令行工具使用 *当前上下文* 中的参数与集群进行通信。
|
||||
|
||||
|
@ -104,7 +104,8 @@ the *current context* to communicate with the cluster.
|
|||
To choose the current context:
|
||||
-->
|
||||
选择当前上下文
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl config use-context
|
||||
```
|
||||
|
||||
|
@ -175,7 +176,7 @@ Here are the rules that `kubectl` uses when it merges kubeconfig files:
|
|||
Even if the second file has non-conflicting entries under `red-user`, discard them.
|
||||
-->
|
||||
1. 如果设置了 `--kubeconfig` 参数,则仅使用指定的文件。不进行合并。此参数只能使用一次。
|
||||
|
||||
|
||||
否则,如果设置了 `KUBECONFIG` 环境变量,将它用作应合并的文件列表。根据以下规则合并 `KUBECONFIG` 环境变量中列出的文件:
|
||||
|
||||
* 忽略空文件名。
|
||||
|
@ -280,6 +281,34 @@ kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位
|
|||
命令行上的文件引用是相对于当前工作目录的。
|
||||
在 `$HOME/.kube/config` 中,相对路径按相对路径存储,绝对路径按绝对路径存储。
|
||||
|
||||
<!--
|
||||
## Proxy
|
||||
|
||||
You can configure `kubectl` to use proxy by setting `proxy-url` in the kubeconfig file, like:
|
||||
-->
|
||||
## 代理
|
||||
|
||||
你可以在 `kubeconfig` 文件中设置 `proxy-url` 来为 `kubectl` 使用代理,例如:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
|
||||
proxy-url: https://proxy.host:3128
|
||||
|
||||
clusters:
|
||||
- cluster:
|
||||
name: development
|
||||
|
||||
users:
|
||||
- name: developer
|
||||
|
||||
contexts:
|
||||
- context:
|
||||
name: development
|
||||
|
||||
```
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
|
@ -288,4 +317,3 @@ kubeconfig 文件中的文件和路径引用是相对于 kubeconfig 文件的位
|
|||
--->
|
||||
* [配置对多集群的访问](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
|
||||
* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)
|
||||
|
||||
|
|
|
@ -40,7 +40,7 @@ The kubelet exports a `Registration` gRPC service:
|
|||
|
||||
```gRPC
|
||||
service Registration {
|
||||
rpc Register(RegisterRequest) returns (Empty) {}
|
||||
rpc Register(RegisterRequest) returns (Empty) {}
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -63,7 +63,7 @@ and reports two healthy devices on a node, the node status is updated
|
|||
to advertise that the node has 2 "Foo" devices installed and available.
|
||||
-->
|
||||
设备插件可以通过此 gRPC 服务在 kubelet 进行注册。在注册期间,设备插件需要发送下面几样内容:
|
||||
|
||||
|
||||
* 设备插件的 Unix 套接字。
|
||||
* 设备插件的 API 版本。
|
||||
* `ResourceName` 是需要公布的。这里 `ResourceName` 需要遵循
|
||||
|
@ -92,12 +92,14 @@ specification as they request other types of resources, with the following limit
|
|||
* 扩展资源仅可作为整数资源使用,并且不能被过量使用
|
||||
* 设备不能在容器之间共享
|
||||
|
||||
### 示例 {#example-pod}
|
||||
|
||||
<!--
|
||||
Suppose a Kubernetes cluster is running a device plugin that advertises resource `hardware-vendor.example/foo`
|
||||
on certain nodes. Here is an example of a pod requesting this resource to run a demo workload:
|
||||
-->
|
||||
假设 Kubernetes 集群正在运行一个设备插件,该插件在一些节点上公布的资源为 `hardware-vendor.example/foo`。
|
||||
下面就是一个 Pod 示例,请求此资源以运行某演示负载:
|
||||
下面是一个 Pod 示例,请求此资源以运行一个演示工作负载:
|
||||
|
||||
```yaml
|
||||
---
|
||||
|
@ -140,8 +142,12 @@ The general workflow of a device plugin includes the following steps:
|
|||
一个 gRPC 服务,该服务实现以下接口:
|
||||
|
||||
<!--
|
||||
|
||||
```gRPC
|
||||
service DevicePlugin {
|
||||
// GetDevicePluginOptions returns options to be communicated with Device Manager.
|
||||
rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}
|
||||
|
||||
// ListAndWatch returns a stream of List of Devices
|
||||
// Whenever a Device state change or a Device disappears, ListAndWatch
|
||||
// returns the new list
|
||||
|
@ -168,6 +174,9 @@ The general workflow of a device plugin includes the following steps:
|
|||
-->
|
||||
```gRPC
|
||||
service DevicePlugin {
|
||||
// GetDevicePluginOptions 返回与设备管理器沟通的选项。
|
||||
rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}
|
||||
|
||||
// ListAndWatch 返回 Device 列表构成的数据流。
|
||||
// 当 Device 状态发生变化或者 Device 消失时,ListAndWatch
|
||||
// 会返回新的列表。
|
||||
|
@ -331,10 +340,12 @@ service PodResourcesLister {
|
|||
}
|
||||
```
|
||||
|
||||
### `List` gRPC 端点 {#grpc-endpoint-list}
|
||||
|
||||
<!--
|
||||
The `List` endpoint provides information on resources of running pods, with details such as the
|
||||
id of exclusively allocated CPUs, device id as it was reported by device plugins and id of
|
||||
the NUMA node where these devices are allocated. Also, for NUMA-based machines, it contains
|
||||
the NUMA node where these devices are allocated. Also, for NUMA-based machines, it contains
|
||||
the information about memory and hugepages reserved for a container.
|
||||
-->
|
||||
这一 `List` 端点提供运行中 Pods 的资源信息,包括类似独占式分配的
|
||||
|
@ -387,6 +398,51 @@ message ContainerDevices {
|
|||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
{{< note >}}
|
||||
cpu_ids in the `ContainerResources` in the `List` endpoint correspond to exclusive CPUs allocated
|
||||
to a particular container. If the goal is to evaluate CPUs that belong to the shared pool, the `List`
|
||||
endpoint needs to be used in conjunction with the `GetAllocatableResources` endpoint as explained
|
||||
below:
|
||||
1. Call `GetAllocatableResources` to get a list of all the allocatable CPUs
|
||||
2. Call `GetCpuIds` on all `ContainerResources` in the system
|
||||
3. Subtract out all of the CPUs from the `GetCpuIds` calls from the `GetAllocatableResources` call
|
||||
{{< /note >}}
|
||||
-->
|
||||
{{< note >}}
|
||||
`List` 端点中的 `ContainerResources` 中的 cpu_ids 对应于分配给某个容器的专属 CPU。
|
||||
如果要统计共享池中的 CPU,`List` 端点需要与 `GetAllocatableResources` 端点一起使用,如下所述:
|
||||
|
||||
1. 调用 `GetAllocatableResources` 获取所有可用的 CPUs。
|
||||
2. 在系统中所有的 `ContainerResources` 上调用 `GetCpuIds`。
|
||||
3. 用 `GetAllocatableResources` 获取的 CPU 数减去 `GetCpuIds` 获取的 CPU 数。
|
||||
{{< /note >}}
|
||||
|
||||
### `GetAllocatableResources` gRPC 端点 {#grpc-endpoint-getallocatableresources}
|
||||
|
||||
{{< feature-state state="beta" for_k8s_version="v1.23" >}}
|
||||
|
||||
<!--
|
||||
{{< note >}}
|
||||
`GetAllocatableResources` should only be used to evaluate [allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
|
||||
resources on a node. If the goal is to evaluate free/unallocated resources it should be used in
|
||||
conjunction with the List() endpoint. The result obtained by `GetAllocatableResources` would remain
|
||||
the same unless the underlying resources exposed to kubelet change. This happens rarely but when
|
||||
it does (for example: hotplug/hotunplug, device health changes), client is expected to call
|
||||
`GetAllocatableResources` endpoint.
|
||||
However, calling `GetAllocatableResources` endpoint is not sufficient in case of cpu and/or memory
|
||||
update and Kubelet needs to be restarted to reflect the correct resource capacity and allocatable.
|
||||
{{< /note >}}
|
||||
-->
|
||||
{{< note >}}
|
||||
`GetAllocatableResources` 应该仅被用于评估一个节点上的[可分配的](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
|
||||
资源。如果目标是评估空闲/未分配的资源,此调用应该与 List() 端点一起使用。
|
||||
除非暴露给 kubelet 的底层资源发生变化,否则 `GetAllocatableResources` 得到的结果将保持不变。
这种情况很少发生,但当发生时(例如:热插拔或热移除,设备健康状况改变),客户端应该调用 `GetAllocatableResources` 端点。
|
||||
然而,调用 `GetAllocatableResources` 端点在 cpu、内存被更新的情况下是不够的,
|
||||
Kubelet 需要重新启动以获取正确的资源容量和可分配的资源。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
GetAllocatableResources provides information on resources initially available on the worker node.
|
||||
It provides more information than kubelet exports to APIServer.
|
||||
|
@ -394,7 +450,6 @@ It provides more information than kubelet exports to APIServer.
|
|||
端点 `GetAllocatableResources` 提供最初在工作节点上可用的资源的信息。
|
||||
此端点所提供的信息比导出给 API 服务器的信息更丰富。
|
||||
|
||||
|
||||
```gRPC
|
||||
// AllocatableResourcesResponses 包含 kubelet 所了解到的所有设备的信息
|
||||
message AllocatableResourcesResponse {
|
||||
|
@ -405,6 +460,23 @@ message AllocatableResourcesResponse {
|
|||
|
||||
```
|
||||
|
||||
<!--
|
||||
Starting from Kubernetes v1.23, the `GetAllocatableResources` is enabled by default.
|
||||
You can disable it by turning off the
|
||||
`KubeletPodResourcesGetAllocatable` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
|
||||
Preceding Kubernetes v1.23, to enable this feature `kubelet` must be started with the following flag:
|
||||
|
||||
`--feature-gates=KubeletPodResourcesGetAllocatable=true`
|
||||
-->
|
||||
从 Kubernetes v1.23 开始,`GetAllocatableResources` 被默认启用。
|
||||
你可以通过关闭 `KubeletPodResourcesGetAllocatable`
|
||||
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) 来禁用。
|
||||
|
||||
在 Kubernetes v1.23 之前,要启用这一功能,`kubelet` 必须用以下标志启动:
|
||||
|
||||
`--feature-gates=KubeletPodResourcesGetAllocatable=true`
|
||||
|
||||
<!--
|
||||
`ContainerDevices` do expose the topology information declaring to which NUMA cells the device is affine.
|
||||
The NUMA cells are identified using a opaque integer ID, which value is consistent to what device
|
||||
|
@ -457,7 +529,7 @@ The Topology Manager is a Kubelet component that allows resources to be co-ordin
|
|||
|
||||
```gRPC
|
||||
message TopologyInfo {
|
||||
repeated NUMANode nodes = 1;
|
||||
repeated NUMANode nodes = 1;
|
||||
}
|
||||
|
||||
message NUMANode {
|
||||
|
@ -507,14 +579,15 @@ Here are some examples of device plugin implementations:
|
|||
## 设备插件示例 {#examples}
|
||||
|
||||
下面是一些设备插件实现的示例:
|
||||
|
||||
|
||||
* [AMD GPU 设备插件](https://github.com/RadeonOpenCompute/k8s-device-plugin)
|
||||
* [Intel 设备插件](https://github.com/intel/intel-device-plugins-for-kubernetes) 支持 Intel GPU、FPGA 和 QuickAssist 设备
|
||||
* [KubeVirt 设备插件](https://github.com/kubevirt/kubernetes-device-plugins) 用于硬件辅助的虚拟化
|
||||
* The [NVIDIA GPU 设备插件](https://github.com/NVIDIA/k8s-device-plugin)
|
||||
* 需要 [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) 2.0,以允许运行 Docker 容器的时候启用 GPU。
|
||||
* 需要 [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) 2.0,以允许运行 Docker 容器的时候启用 GPU。
|
||||
* [为 Container-Optimized OS 所提供的 NVIDIA GPU 设备插件](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu)
|
||||
* [RDMA 设备插件](https://github.com/hustcat/k8s-rdma-device-plugin)
|
||||
* [SocketCAN 设备插件](https://github.com/collabora/k8s-socketcan)
|
||||
* [Solarflare 设备插件](https://github.com/vikaschoudhary16/sfc-device-plugin)
|
||||
* [SR-IOV 网络设备插件](https://github.com/intel/sriov-network-device-plugin)
|
||||
* [Xilinx FPGA 设备插件](https://github.com/Xilinx/FPGA_as_a_Service/tree/master/k8s-fpga-device-plugin)
|
||||
|
@ -529,7 +602,5 @@ Here are some examples of device plugin implementations:
|
|||
-->
|
||||
* 查看[调度 GPU 资源](/zh/docs/tasks/manage-gpus/scheduling-gpus/) 来学习使用设备插件
|
||||
* 查看如何[公布节点上的扩展资源](/zh/docs/tasks/administer-cluster/extended-resource-node/)
|
||||
* 阅读如何在 Kubernetes 中使用 [TLS Ingress 的硬件加速](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/)
|
||||
* 阅读如何在 Kubernetes 中使用 [TLS Ingress 的硬件加速](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/)
|
||||
* 学习[拓扑管理器](/zh/docs/tasks/administer-cluster/topology-manager/)
|
||||
|
||||
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
title: "使用 Kubernetes 对象"
|
||||
title: 使用 Kubernetes 对象
|
||||
weight: 40
|
||||
description: >
|
||||
Kubernetes 对象是 Kubernetes 系统中的持久性实体。Kubernetes 使用这些实体表示您的集群状态。了解 Kubernetes 对象模型以及如何使用这些对象。
|
||||
Kubernetes 对象是 Kubernetes 系统中的持久性实体。Kubernetes 使用这些实体表示你的集群状态。
|
||||
了解 Kubernetes 对象模型以及如何使用这些对象。
|
||||
---
|
||||
|
|
|
@ -32,32 +32,37 @@ Kubernetes 自动指定了一些 Finalizers,但你也可以指定你自己的
|
|||
|
||||
When you create a resource using a manifest file, you can specify finalizers in
|
||||
the `metadata.finalizers` field. When you attempt to delete the resource, the
|
||||
controller that manages it notices the values in the `finalizers` field and does
|
||||
the following:
|
||||
API server handling the delete request notices the values in the `finalizers` field
|
||||
and does the following:
|
||||
|
||||
* Modifies the object to add a `metadata.deletionTimestamp` field with the
|
||||
time you started the deletion.
|
||||
* Marks the object as read-only until its `metadata.finalizers` field is empty.
|
||||
* Prevents the object from being removed until its `metadata.finalizers` field is empty.
|
||||
* Returns a `202` status code (HTTP "Accepted")
|
||||
-->
|
||||
## Finalizers 如何工作 {#how-finalizers-work}
|
||||
|
||||
当你使用清单文件创建资源时,你可以在 `metadata.finalizers` 字段指定 Finalizers。
|
||||
当你试图删除该资源时,管理该资源的控制器会注意到 `finalizers` 字段中的值,
|
||||
当你试图删除该资源时,处理删除请求的 API 服务器会注意到 `finalizers` 字段中的值,
|
||||
并进行以下操作:
|
||||
|
||||
* 修改对象,将你开始执行删除的时间添加到 `metadata.deletionTimestamp` 字段。
|
||||
* 将该对象标记为只读,直到其 `metadata.finalizers` 字段为空。
|
||||
* 禁止对象被删除,直到其 `metadata.finalizers` 字段为空。
|
||||
* 返回 `202` 状态码(HTTP "Accepted")。
|
||||
|
||||
<!--
|
||||
The controller managing that finalizer notices the update to the object setting the
|
||||
`metadata.deletionTimestamp`, indicating deletion of the object has been requested.
|
||||
The controller then attempts to satisfy the requirements of the finalizers
|
||||
specified for that resource. Each time a finalizer condition is satisfied, the
|
||||
controller removes that key from the resource's `finalizers` field. When the
|
||||
field is empty, garbage collection continues. You can also use finalizers to
|
||||
prevent deletion of unmanaged resources.
|
||||
`finalizers` field is emptied, an object with a `deletionTimestamp` field set
|
||||
is automatically deleted. You can also use finalizers to prevent deletion of unmanaged resources.
|
||||
-->
|
||||
然后,控制器试图满足资源的 Finalizers 的条件。
|
||||
管理 finalizer 的控制器注意到对象上发生的更新操作,对象的 `metadata.deletionTimestamp`
|
||||
被设置,意味着已经请求删除该对象。然后,控制器会试图满足资源的 Finalizers 的条件。
|
||||
每当一个 Finalizer 的条件被满足时,控制器就会从资源的 `finalizers` 字段中删除该键。
|
||||
当该字段为空时,垃圾收集继续进行。
|
||||
当 `finalizers` 字段为空时,`deletionTimestamp` 字段被设置的对象会被自动删除。
|
||||
你也可以使用 Finalizers 来阻止删除未被管理的资源。
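
下面是一个示意性的检查方法(对象名称为演示用),可以查看某个对象上当前的 Finalizers:

```shell
# 示意:查看 PVC 上的 finalizers(kubernetes.io/pvc-protection 是控制面自动添加的一个常见例子)
kubectl get pvc demo-pvc -o jsonpath='{.metadata.finalizers}'
```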
|
||||
|
||||
<!--
|
||||
|
@ -111,7 +116,7 @@ Kubernetes also processes finalizers when it identifies owner references on a
|
|||
resource targeted for deletion.
|
||||
|
||||
In some situations, finalizers can block the deletion of dependent objects,
|
||||
which can cause the targeted owner object to remain in a read-only state for
|
||||
which can cause the targeted owner object to remain for
|
||||
longer than expected without being fully deleted. In these situations, you
|
||||
should check finalizers and owner references on the target owner and dependent
|
||||
objects to troubleshoot the cause.
|
||||
|
@ -123,20 +128,24 @@ Kubernetes 会使用属主引用(而不是标签)来确定集群中哪些 Po
|
|||
当 Kubernetes 识别到要删除的资源上的属主引用时,它也会处理 Finalizers。
|
||||
|
||||
在某些情况下,Finalizers 会阻止依赖对象的删除,
|
||||
这可能导致目标属主对象,保持在只读状态的时间比预期的长,且没有被完全删除。
|
||||
这可能导致目标属主对象被保留的时间比预期的长,而没有被完全删除。
|
||||
在这些情况下,你应该检查目标属主和附属对象上的 Finalizers 和属主引用,来排查原因。
|
||||
|
||||
{{<note>}}
|
||||
{{< note >}}
|
||||
<!--
|
||||
In cases where objects are stuck in a deleting state, try to avoid manually
|
||||
In cases where objects are stuck in a deleting state, avoid manually
|
||||
removing finalizers to allow deletion to continue. Finalizers are usually added
|
||||
to resources for a reason, so forcefully removing them can lead to issues in
|
||||
your cluster.
|
||||
-->
|
||||
在对象卡在删除状态的情况下,尽量避免手动移除 Finalizers,以允许继续删除操作。
|
||||
Finalizers 通常因为特殊原因被添加到资源上,所以强行删除它们会导致集群出现问题。
|
||||
{{</note>}}
|
||||
your cluster. This should only be done when the purpose of the finalizer is
|
||||
understood and is accomplished in another way (for example, manually cleaning
|
||||
up some dependent object).
|
||||
|
||||
-->
|
||||
在对象卡在删除状态的情况下,要避免手动移除 Finalizers,以允许继续删除操作。
|
||||
Finalizers 通常因为特殊原因被添加到资源上,所以强行删除它们会导致集群出现问题。
|
||||
只有在了解 Finalizer 的用途,并且其目的已通过其他方式达成
|
||||
(例如,手动清理某些依赖对象)时,才可以这样做。
|
||||
{{< /note >}}
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
@ -144,4 +153,5 @@ Finalizers 通常因为特殊原因被添加到资源上,所以强行删除它
|
|||
* Read [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/)
|
||||
on the Kubernetes blog.
|
||||
-->
|
||||
* 阅读 Kubernetes 博客的[使用 Finalizers 控制删除](/blog/2021/05/14/using-finalizers-to-control-deletion/)。
|
||||
* 在 Kubernetes 博客上阅读[使用 Finalizers 控制删除](/blog/2021/05/14/using-finalizers-to-control-deletion/)。
|
||||
|
||||
|
|
|
@ -140,7 +140,7 @@ in the `kubectl` command-line interface, passing the `.yaml` file as an argument
|
|||
将 `.yaml` 文件作为参数。下面是一个示例:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment.yaml --record
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -173,19 +173,35 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to
|
|||
|
||||
<!--
|
||||
The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/) can help you find the spec format for all of the objects you can create using Kubernetes.
|
||||
|
||||
For example, the reference for Pod details the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
|
||||
for a Pod in the API, and the reference for Deployment details the [`spec` field](/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentSpec) for Deployments.
|
||||
In those API reference pages you'll see mention of PodSpec and DeploymentSpec. These names are implementation details of the Golang code that Kubernetes uses to implement its API.
|
||||
-->
|
||||
对象 `spec` 的精确格式对每个 Kubernetes 对象来说是不同的,包含了特定于该对象的嵌套字段。
|
||||
[Kubernetes API 参考](https://kubernetes.io/docs/reference/kubernetes-api/)
|
||||
能够帮助我们找到任何我们想创建的对象的规约格式。
|
||||
|
||||
例如,Pod 参考文档详细说明了 API 中 Pod 的 [`spec` 字段](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec),
|
||||
Deployment 的参考文档则详细说明了 Deployment 的 [`spec` 字段](/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentSpec)。
|
||||
在这些 API 参考页面中,你将看到提到的 PodSpec 和 DeploymentSpec。
|
||||
这些名字是 Kubernetes 用来实现其 API 的 Golang 代码的实现细节。
|
||||
<!--
|
||||
For example, see the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
|
||||
for the Pod API reference.
|
||||
For each Pod, the `.spec` field specifies the pod and its desired state (such as the container image name for
|
||||
each container within that pod).
|
||||
Another example of an object specification is the
|
||||
[`spec` field](/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetSpec)
|
||||
for the StatefulSet API. For StatefulSet, the `.spec` field specifies the StatefulSet and
|
||||
its desired state.
|
||||
Within the `.spec` of a StatefulSet is a [template](/docs/concepts/workloads/pods/#pod-templates)
|
||||
for Pod objects. That template describes Pods that the StatefulSet controller will create in order to
|
||||
satisfy the StatefulSet specification.
|
||||
Different kinds of object can also have different `.status`; again, the API reference pages
|
||||
detail the structure of that `.status` field, and its content for each different type of object.
|
||||
-->
|
||||
例如,参阅 Pod API 参考文档中
|
||||
[`spec` 字段](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)。
|
||||
对于每个 Pod,其 `.spec` 字段设置了 Pod 及其期望状态(例如 Pod 中每个容器的容器镜像名称)。
|
||||
另一个对象规约的例子是 StatefulSet API 中的
|
||||
[`spec` 字段](/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetSpec)。
|
||||
对于 StatefulSet 而言,其 `.spec` 字段设置了 StatefulSet 及其期望状态。
|
||||
在 StatefulSet 的 `.spec` 内,有一个为 Pod 对象提供的[模板](/zh/docs/concepts/workloads/pods/#pod-templates)。该模板描述了 StatefulSet 控制器为了满足 StatefulSet 规约而要创建的 Pod。
|
||||
不同类型的对象可以有不同的 `.status` 信息。API 参考页面给出了 `.status` 字段的详细结构,
|
||||
以及针对不同类型 API 对象的具体内容。
|
||||
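作为补充,下面给出一个极简的 Pod 清单示意(名称与镜像均为任意示例),用来说明 `.spec` 所声明的期望状态:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spec-demo                # 示例名称
spec:                            # 期望状态:声明 Pod 中的容器及其镜像
  containers:
  - name: demo-ctr
    image: nginx:1.21            # 任意选择的镜像,仅作演示
```

对象创建之后,可以通过 `kubectl get pod spec-demo -o yaml` 查看系统填充的 `.status` 字段,并与你声明的 `.spec` 进行对比。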
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
|
|
@ -16,12 +16,12 @@ weight: 30
|
|||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
Kubernetes supports multiple virtual clusters backed by the same physical cluster.
|
||||
These virtual clusters are called namespaces.
|
||||
In Kubernetes, _namespaces_ provides a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects _(e.g. Deployments, Services, etc)_ and not for cluster-wide objects _(e.g. StorageClass, Nodes, PersistentVolumes, etc)_.
|
||||
-->
|
||||
Kubernetes 支持多个虚拟集群,它们底层依赖于同一个物理集群。
|
||||
这些虚拟集群被称为名字空间。
|
||||
在一些文档里名字空间也称为命名空间。
|
||||
在 Kubernetes 中,“名字空间(Namespace)”提供一种机制,将同一集群中的资源划分为相互隔离的组。
|
||||
同一名字空间内的资源名称要唯一,但跨名字空间时没有这个要求。
|
||||
名字空间作用域仅针对带有名字空间的对象,例如 Deployment、Service 等,
|
||||
而对集群范围的对象(例如 StorageClass、Node、PersistentVolume 等)不适用。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -175,6 +175,42 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
|
|||
`<服务名称>`,它将被解析到本地名字空间的服务。这对于跨多个名字空间(如开发、预发布和生产)
|
||||
使用相同的配置非常有用。如果你希望跨名字空间访问,则需要使用完全限定域名(FQDN)。
|
||||
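例如(仅为示意,假设服务名为 `db`、位于名字空间 `prod`,且集群使用默认的 `cluster.local` 域):在同一名字空间内可直接使用 `db` 访问该服务,跨名字空间访问则需要使用 `db.prod.svc.cluster.local` 这样的完全限定域名。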
|
||||
<!--
|
||||
As a result, all namespace names must be valid
|
||||
[RFC 1123 DNS labels](/docs/concepts/overview/working-with-objects/names/#dns-label-names).
|
||||
-->
|
||||
因此,所有的名字空间名称都必须是合法的
|
||||
[RFC 1123 DNS 标签](/zh/docs/concepts/overview/working-with-objects/names/#dns-label-names)。
|
||||
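例如,下面是一个名字空间清单的示意(`my-team` 为假设的名称,必须符合 RFC 1123 DNS 标签的约束):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-team                  # 假设的名字空间名称,需为合法的 RFC 1123 DNS 标签
```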
|
||||
{{< warning >}}
|
||||
<!--
|
||||
By creating namespaces with the same name as [public top-level
|
||||
domains](https://data.iana.org/TLD/tlds-alpha-by-domain.txt), Services in these
|
||||
namespaces can have short DNS names that overlap with public DNS records.
|
||||
Workloads from any namespace performing a DNS lookup without a [trailing dot](https://datatracker.ietf.org/doc/html/rfc1034#page-8) will
|
||||
be redirected to those services, taking precedence over public DNS.
|
||||
-->
|
||||
通过创建与[公共顶级域名](https://data.iana.org/TLD/tlds-alpha-by-domain.txt)
|
||||
同名的名字空间,这些名字空间中的服务可以拥有与公共 DNS 记录重叠的、较短的 DNS 名称。
|
||||
所有名字空间中的负载在执行 DNS 查找时,如果查找的名称没有
|
||||
[尾部句点](https://datatracker.ietf.org/doc/html/rfc1034#page-8),
|
||||
就会被重定向到这些服务上,其优先级高于公共 DNS。
|
||||
|
||||
<!--
|
||||
To mitigate this, limit privileges for creating namespaces to trusted users. If
|
||||
required, you could additionally configure third-party security controls, such
|
||||
as [admission
|
||||
webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/),
|
||||
to block creating any namespace with the name of [public
|
||||
TLDs](https://data.iana.org/TLD/tlds-alpha-by-domain.txt).
|
||||
-->
|
||||
为了缓解这类问题,应当仅将创建名字空间的权限授予可信的用户。
|
||||
如果需要,你可以额外部署第三方的安全控制机制,例如以
|
||||
[准入 Webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
|
||||
的形式,阻止用户创建与公共 [TLD](https://data.iana.org/TLD/tlds-alpha-by-domain.txt)
|
||||
同名的名字空间。
|
||||
{{< /warning >}}
|
||||
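承接上述警告中的建议,下面给出一个准入 Webhook 配置的草案(仅为示意:`deny-tld-namespaces`、`namespaces.example.com`、`webhook-system`、`namespace-validator` 与 `/validate` 均为假设的名称或路径,真正的校验逻辑需要由你自己的 Webhook 服务实现):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-tld-namespaces            # 假设的配置名称
webhooks:
- name: namespaces.example.com         # 虚构的 Webhook 名称
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                  # Webhook 调用失败时拒绝请求
  rules:                               # 仅拦截名字空间的创建操作
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["namespaces"]
  clientConfig:
    service:
      namespace: webhook-system        # 假设 Webhook 服务部署在此名字空间
      name: namespace-validator        # 假设的 Service 名称
      path: /validate                  # 假设的校验路径
```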
|
||||
<!--
|
||||
## Not All Objects are in a Namespace
|
||||
-->
|
||||
|
|
|
@ -1,323 +1,333 @@
|
|||
---
|
||||
title: Pod 开销
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- dchen1107
|
||||
- egernst
|
||||
- tallclair
|
||||
title: Pod Overhead
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
|
||||
<!--
|
||||
When you run a Pod on a Node, the Pod itself takes an amount of system resources. These
|
||||
resources are additional to the resources needed to run the container(s) inside the Pod.
|
||||
_Pod Overhead_ is a feature for accounting for the resources consumed by the Pod infrastructure
|
||||
on top of the container requests & limits.
|
||||
-->
|
||||
|
||||
在节点上运行 Pod 时,Pod 本身占用大量系统资源。这些资源是运行 Pod 内容器所需资源的附加资源。
|
||||
_POD 开销_ 是一个特性,用于计算 Pod 基础设施在容器请求和限制之上消耗的资源。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
||||
<!--
|
||||
In Kubernetes, the Pod's overhead is set at
|
||||
[admission](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
|
||||
time according to the overhead associated with the Pod's
|
||||
[RuntimeClass](/docs/concepts/containers/runtime-class/).
|
||||
-->
|
||||
|
||||
在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 相关联的开销在
|
||||
[准入](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) 时设置的。
|
||||
|
||||
<!--
|
||||
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container
|
||||
resource requests when scheduling a Pod. Similarly,the kubelet will include the Pod overhead when sizing
|
||||
the Pod cgroup, and when carrying out Pod eviction ranking.
|
||||
-->
|
||||
|
||||
如果启用了 Pod Overhead,在调度 Pod 时,除了考虑容器资源请求的总和外,还要考虑 Pod 开销。
|
||||
类似地,kubelet 将在确定 Pod cgroups 的大小和执行 Pod 驱逐排序时也会考虑 Pod 开销。
|
||||
|
||||
<!--
|
||||
## Enabling Pod Overhead {#set-up}
|
||||
-->
|
||||
## 启用 Pod 开销 {#set-up}
|
||||
|
||||
<!--
|
||||
You need to make sure that the `PodOverhead`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is on by default as of 1.18)
|
||||
across your cluster, and a `RuntimeClass` is utilized which defines the `overhead` field.
|
||||
-->
|
||||
您需要确保在集群中启用了 `PodOverhead` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
(在 1.18 默认是开启的),以及一个用于定义 `overhead` 字段的 `RuntimeClass`。
|
||||
|
||||
<!--
|
||||
## Usage example
|
||||
-->
|
||||
## 使用示例
|
||||
|
||||
<!--
|
||||
To use the PodOverhead feature, you need a RuntimeClass that defines the `overhead` field. As
|
||||
an example, you could use the following RuntimeClass definition with a virtualizing container runtime
|
||||
that uses around 120MiB per Pod for the virtual machine and the guest OS:
|
||||
-->
|
||||
要使用 PodOverhead 特性,需要一个定义 `overhead` 字段的 RuntimeClass。
|
||||
作为例子,可以在虚拟机和寄宿操作系统中通过一个虚拟化容器运行时来定义
|
||||
RuntimeClass 如下,其中每个 Pod 大约使用 120MiB:
|
||||
|
||||
```yaml
|
||||
---
|
||||
kind: RuntimeClass
|
||||
apiVersion: node.k8s.io/v1
|
||||
metadata:
|
||||
name: kata-fc
|
||||
handler: kata-fc
|
||||
overhead:
|
||||
podFixed:
|
||||
memory: "120Mi"
|
||||
cpu: "250m"
|
||||
```
|
||||
|
||||
<!--
|
||||
Workloads which are created which specify the `kata-fc` RuntimeClass handler will take the memory and
|
||||
cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup sizing.
|
||||
|
||||
Consider running the given example workload, test-pod:
|
||||
-->
|
||||
通过指定 `kata-fc` RuntimeClass 处理程序创建的工作负载会将内存和 cpu 开销计入资源配额计算、节点调度以及 Pod cgroup 分级。
|
||||
|
||||
假设我们运行下面给出的工作负载示例 test-pod:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pod
|
||||
spec:
|
||||
runtimeClassName: kata-fc
|
||||
containers:
|
||||
- name: busybox-ctr
|
||||
image: busybox
|
||||
stdin: true
|
||||
tty: true
|
||||
resources:
|
||||
limits:
|
||||
cpu: 500m
|
||||
memory: 100Mi
|
||||
- name: nginx-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 1500m
|
||||
memory: 100Mi
|
||||
```
|
||||
|
||||
<!--
|
||||
At admission time the RuntimeClass [admission controller](/docs/reference/access-authn-authz/admission-controllers/)
|
||||
updates the workload's PodSpec to include the `overhead` as described in the RuntimeClass. If the PodSpec already has this field defined,
|
||||
the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod
|
||||
to include an `overhead`.
|
||||
-->
|
||||
在准入阶段 RuntimeClass [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) 更新工作负载的 PodSpec 以包含
|
||||
RuntimeClass 中定义的 `overhead`. 如果 PodSpec 中该字段已定义,该 Pod 将会被拒绝。
|
||||
在这个例子中,由于只指定了 RuntimeClass 名称,所以准入控制器更新了 Pod, 包含了一个 `overhead`.
|
||||
|
||||
<!--
|
||||
After the RuntimeClass admission controller, you can check the updated PodSpec:
|
||||
-->
|
||||
在 RuntimeClass 准入控制器之后,可以检验一下已更新的 PodSpec:
|
||||
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is:
|
||||
-->
|
||||
输出:
|
||||
```
|
||||
map[cpu:250m memory:120Mi]
|
||||
```
|
||||
|
||||
<!--
|
||||
If a ResourceQuota is defined, the sum of container requests as well as the
|
||||
`overhead` field are counted.
|
||||
-->
|
||||
如果定义了 ResourceQuata, 则容器请求的总量以及 `overhead` 字段都将计算在内。
|
||||
|
||||
<!--
|
||||
When the kube-scheduler is deciding which node should run a new Pod, the scheduler considers that Pod's
|
||||
`overhead` as well as the sum of container requests for that Pod. For this example, the scheduler adds the
|
||||
requests and the overhead, then looks for a node that has 2.25 CPU and 320 MiB of memory available.
|
||||
-->
|
||||
当 kube-scheduler 决定在哪一个节点调度运行新的 Pod 时,调度器会兼顾该 Pod 的 `overhead` 以及该 Pod 的容器请求总量。在这个示例中,调度器将资源请求和开销相加,然后寻找具备 2.25 CPU 和 320 MiB 内存可用的节点。
|
||||
|
||||
<!--
|
||||
Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
|
||||
for the Pod. It is within this pod that the underlying container runtime will create containers. -->
|
||||
一旦 Pod 调度到了某个节点, 该节点上的 kubelet 将为该 Pod 新建一个 {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}. 底层容器运行时将在这个 pod 中创建容器。
|
||||
|
||||
<!--
|
||||
If the resource has a limit defined for each container (Guaranteed QoS or Bustrable QoS with limits defined),
|
||||
the kubelet will set an upper limit for the pod cgroup associated with that resource (cpu.cfs_quota_us for CPU
|
||||
and memory.limit_in_bytes memory). This upper limit is based on the sum of the container limits plus the `overhead`
|
||||
defined in the PodSpec.
|
||||
-->
|
||||
如果该资源对每一个容器都定义了一个限制(定义了受限的 Guaranteed QoS 或者 Bustrable QoS),kubelet 会为与该资源(CPU 的 cpu.cfs_quota_us 以及内存的 memory.limit_in_bytes)
|
||||
相关的 pod cgroup 设定一个上限。该上限基于容器限制总量与 PodSpec 中定义的 `overhead` 之和。
|
||||
|
||||
<!--
|
||||
For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set `cpu.shares` based on the sum of container
|
||||
requests plus the `overhead` defined in the PodSpec.
|
||||
-->
|
||||
对于 CPU, 如果 Pod 的 QoS 是 Guaranteed 或者 Burstable, kubelet 会基于容器请求总量与 PodSpec 中定义的 `overhead` 之和设置 `cpu.shares`.
|
||||
|
||||
<!--
|
||||
Looking at our example, verify the container requests for the workload:
|
||||
-->
|
||||
请看这个例子,验证工作负载的容器请求:
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
|
||||
```
|
||||
|
||||
<!--
|
||||
The total container requests are 2000m CPU and 200MiB of memory:
|
||||
-->
|
||||
容器请求总计 2000m CPU 和 200MiB 内存:
|
||||
```
|
||||
map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
|
||||
```
|
||||
|
||||
<!--
|
||||
Check this against what is observed by the node:
|
||||
-->
|
||||
对照从节点观察到的情况来检查一下:
|
||||
```bash
|
||||
kubectl describe node | grep test-pod -B2
|
||||
```
|
||||
|
||||
<!--
|
||||
The output shows 2250m CPU and 320MiB of memory are requested, which includes PodOverhead:
|
||||
-->
|
||||
该输出显示请求了 2250m CPU 以及 320MiB 内存,包含了 PodOverhead 在内:
|
||||
```
|
||||
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
|
||||
--------- ---- ------------ ---------- --------------- ------------- ---
|
||||
default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m
|
||||
```
|
||||
|
||||
<!--
|
||||
## Verify Pod cgroup limits
|
||||
-->
|
||||
## 验证 Pod cgroup 限制
|
||||
|
||||
<!--
|
||||
Check the Pod's memory cgroups on the node where the workload is running. In the following example, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
|
||||
is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an
|
||||
advanced example to show PodOverhead behavior, and it is not expected that users should need to check
|
||||
cgroups directly on the node.
|
||||
|
||||
First, on the particular node, determine the Pod identifier:
|
||||
-->
|
||||
在工作负载所运行的节点上检查 Pod 的内存 cgroups. 在接下来的例子中,
|
||||
将在该节点上使用具备 CRI 兼容的容器运行时命令行工具
|
||||
[`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)。
|
||||
|
||||
首先在特定的节点上确定该 Pod 的标识符:
|
||||
|
||||
<!--
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled
|
||||
-->
|
||||
```bash
|
||||
# 在该 Pod 调度的节点上执行如下命令:
|
||||
POD_ID="$(sudo crictl pods --name test-pod -q)"
|
||||
```
|
||||
|
||||
<!--
|
||||
From this, you can determine the cgroup path for the Pod:
|
||||
-->
|
||||
可以依此判断该 Pod 的 cgroup 路径:
|
||||
|
||||
<!--
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled
|
||||
-->
|
||||
```bash
|
||||
# 在该 Pod 调度的节点上执行如下命令:
|
||||
sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
|
||||
```
|
||||
|
||||
<!--
|
||||
The resulting cgroup path includes the Pod's `pause` container. The Pod level cgroup is one directory above.
|
||||
-->
|
||||
执行结果的 cgroup 路径中包含了该 Pod 的 `pause` 容器。Pod 级别的 cgroup 即上面的一个目录。
|
||||
```
|
||||
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
|
||||
```
|
||||
|
||||
<!--
|
||||
In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verify the Pod level cgroup setting for memory:
|
||||
-->
|
||||
在这个例子中,该 pod 的 cgroup 路径是 `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`。验证内存的 Pod 级别 cgroup 设置:
|
||||
|
||||
<!--
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled.
|
||||
# Also, change the name of the cgroup to match the cgroup allocated for your pod.
|
||||
-->
|
||||
```bash
|
||||
# 在该 Pod 调度的节点上执行这个命令。
|
||||
# 另外,修改 cgroup 的名称以匹配为该 pod 分配的 cgroup。
|
||||
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
|
||||
```
|
||||
|
||||
<!--
|
||||
This is 320 MiB, as expected:
|
||||
-->
|
||||
和预期的一样是 320 MiB
|
||||
```
|
||||
335544320
|
||||
```
|
||||
|
||||
<!--
|
||||
### Observability
|
||||
-->
|
||||
### 可观察性
|
||||
|
||||
<!--
|
||||
A `kube_pod_overhead` metric is available in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
|
||||
to help identify when PodOverhead is being utilized and to help observe stability of workloads
|
||||
running with a defined Overhead. This functionality is not available in the 1.9 release of
|
||||
kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics
|
||||
from source in the meantime.
|
||||
-->
|
||||
在 [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) 中可以通过
|
||||
`kube_pod_overhead` 指标来协助确定何时使用 PodOverhead 以及协助观察以一个既定
|
||||
开销运行的工作负载的稳定性。
|
||||
该特性在 kube-state-metrics 的 1.9 发行版本中不可用,不过预计将在后续版本中发布。
|
||||
在此之前,用户需要从源代码构建 kube-state-metrics。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
* [RuntimeClass](/docs/concepts/containers/runtime-class/)
|
||||
* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
|
||||
-->
|
||||
|
||||
* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
|
||||
* [PodOverhead 设计](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
|
||||
---
|
||||
title: Pod 开销
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- dchen1107
|
||||
- egernst
|
||||
- tallclair
|
||||
title: Pod Overhead
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
|
||||
<!--
|
||||
When you run a Pod on a Node, the Pod itself takes an amount of system resources. These
|
||||
resources are additional to the resources needed to run the container(s) inside the Pod.
|
||||
_Pod Overhead_ is a feature for accounting for the resources consumed by the Pod infrastructure
|
||||
on top of the container requests & limits.
|
||||
-->
|
||||
|
||||
在节点上运行 Pod 时,Pod 本身占用一定数量的系统资源。这些是运行 Pod 内容器所需资源之外的资源。
|
||||
_Pod 开销_ 是一个特性,用于计算 Pod 基础设施在容器请求和限制之上消耗的资源。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
||||
<!--
|
||||
In Kubernetes, the Pod's overhead is set at
|
||||
[admission](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
|
||||
time according to the overhead associated with the Pod's
|
||||
[RuntimeClass](/docs/concepts/containers/runtime-class/).
|
||||
-->
|
||||
|
||||
在 Kubernetes 中,Pod 的开销是根据与 Pod 的 [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
|
||||
相关联的开销在[准入](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)时设置的。
|
||||
|
||||
<!--
|
||||
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container
|
||||
resource requests when scheduling a Pod. Similarly, the kubelet will include the Pod overhead when sizing
|
||||
the Pod cgroup, and when carrying out Pod eviction ranking.
|
||||
-->
|
||||
|
||||
如果启用了 Pod Overhead,在调度 Pod 时,除了考虑容器资源请求的总和外,还要考虑 Pod 开销。
|
||||
类似地,kubelet 将在确定 Pod cgroups 的大小和执行 Pod 驱逐排序时也会考虑 Pod 开销。
|
||||
|
||||
<!--
|
||||
## Enabling Pod Overhead {#set-up}
|
||||
-->
|
||||
## 启用 Pod 开销 {#set-up}
|
||||
|
||||
<!--
|
||||
You need to make sure that the `PodOverhead`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is on by default as of 1.18)
|
||||
across your cluster, and a `RuntimeClass` is utilized which defines the `overhead` field.
|
||||
-->
|
||||
你需要确保在集群中启用了 `PodOverhead` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
(在 1.18 默认是开启的),以及一个定义了 `overhead` 字段的 `RuntimeClass`。
|
||||
|
||||
<!--
|
||||
## Usage example
|
||||
-->
|
||||
## 使用示例
|
||||
|
||||
<!--
|
||||
To use the PodOverhead feature, you need a RuntimeClass that defines the `overhead` field. As
|
||||
an example, you could use the following RuntimeClass definition with a virtualizing container runtime
|
||||
that uses around 120MiB per Pod for the virtual machine and the guest OS:
|
||||
-->
|
||||
要使用 PodOverhead 特性,需要一个定义了 `overhead` 字段的 RuntimeClass。
|
||||
作为例子,下面的 RuntimeClass 定义中使用了一个虚拟化容器运行时,
|
||||
其中每个 Pod 大约使用 120MiB 来运行虚拟机和寄宿操作系统:
|
||||
|
||||
```yaml
|
||||
---
|
||||
kind: RuntimeClass
|
||||
apiVersion: node.k8s.io/v1
|
||||
metadata:
|
||||
name: kata-fc
|
||||
handler: kata-fc
|
||||
overhead:
|
||||
podFixed:
|
||||
memory: "120Mi"
|
||||
cpu: "250m"
|
||||
```
|
||||
|
||||
<!--
|
||||
Workloads which are created which specify the `kata-fc` RuntimeClass handler will take the memory and
|
||||
cpu overheads into account for resource quota calculations, node scheduling, as well as Pod cgroup sizing.
|
||||
|
||||
Consider running the given example workload, test-pod:
|
||||
-->
|
||||
通过指定 `kata-fc` RuntimeClass 处理程序创建的工作负载会将内存和 CPU
|
||||
开销计入资源配额计算、节点调度以及 Pod cgroup 尺寸确定。
|
||||
|
||||
假设我们运行下面给出的工作负载示例 test-pod:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pod
|
||||
spec:
|
||||
runtimeClassName: kata-fc
|
||||
containers:
|
||||
- name: busybox-ctr
|
||||
image: busybox:1.28
|
||||
stdin: true
|
||||
tty: true
|
||||
resources:
|
||||
limits:
|
||||
cpu: 500m
|
||||
memory: 100Mi
|
||||
- name: nginx-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 1500m
|
||||
memory: 100Mi
|
||||
```
|
||||
|
||||
<!--
|
||||
At admission time the RuntimeClass [admission controller](/docs/reference/access-authn-authz/admission-controllers/)
|
||||
updates the workload's PodSpec to include the `overhead` as described in the RuntimeClass. If the PodSpec already has this field defined,
|
||||
the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod
|
||||
to include an `overhead`.
|
||||
-->
|
||||
在准入阶段 RuntimeClass [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/)
|
||||
更新工作负载的 PodSpec 以包含
|
||||
RuntimeClass 中定义的 `overhead`。如果 PodSpec 中已定义该字段,该 Pod 将会被拒绝。
|
||||
在这个例子中,由于只指定了 RuntimeClass 名称,所以准入控制器更新了 Pod,使之包含 `overhead`。
|
||||
|
||||
<!--
|
||||
After the RuntimeClass admission controller, you can check the updated PodSpec:
|
||||
-->
|
||||
在 RuntimeClass 准入控制器之后,可以检验一下已更新的 PodSpec:
|
||||
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is:
|
||||
-->
|
||||
输出:
|
||||
```
|
||||
map[cpu:250m memory:120Mi]
|
||||
```
|
||||
|
||||
<!--
|
||||
If a ResourceQuota is defined, the sum of container requests as well as the
|
||||
`overhead` field are counted.
|
||||
-->
|
||||
如果定义了 ResourceQuota,则容器请求的总量以及 `overhead` 字段都将计算在内。
|
||||
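下面是一个 ResourceQuota 的示意(名称与数值均为假设,仅用于说明开销如何计入配额):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: overhead-demo-quota        # 假设的名称
spec:
  hard:
    requests.cpu: "4"              # 示例数值;上面的 test-pod 会按 2250m(含开销)计入
    requests.memory: 1Gi           # 示例数值;上面的 test-pod 会按 320Mi(含开销)计入
```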
|
||||
<!--
|
||||
When the kube-scheduler is deciding which node should run a new Pod, the scheduler considers that Pod's
|
||||
`overhead` as well as the sum of container requests for that Pod. For this example, the scheduler adds the
|
||||
requests and the overhead, then looks for a node that has 2.25 CPU and 320 MiB of memory available.
|
||||
-->
|
||||
当 kube-scheduler 决定在哪一个节点调度运行新的 Pod 时,调度器会兼顾该 Pod 的
|
||||
`overhead` 以及该 Pod 的容器请求总量。在这个示例中,调度器将资源请求和开销相加,
|
||||
然后寻找具备 2.25 CPU 和 320 MiB 内存可用的节点。
|
||||
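按本例粗略折算(示意;本例中容器未显式设置 requests,因此默认等于 limits):500m + 1500m = 2000m CPU,100Mi + 100Mi = 200Mi 内存,再加上 RuntimeClass 中声明的开销 250m CPU 与 120Mi 内存,即得到调度所用的 2250m CPU 与 320 MiB 内存。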
|
||||
<!--
|
||||
Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
|
||||
for the Pod. It is within this pod that the underlying container runtime will create containers. -->
|
||||
一旦 Pod 被调度到了某个节点,该节点上的 kubelet 将为该 Pod 新建一个
|
||||
{{< glossary_tooltip text="cgroup" term_id="cgroup" >}}。 底层容器运行时将在这个
|
||||
Pod 中创建容器。
|
||||
|
||||
<!--
|
||||
If the resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined),
|
||||
the kubelet will set an upper limit for the pod cgroup associated with that resource (cpu.cfs_quota_us for CPU
|
||||
and memory.limit_in_bytes memory). This upper limit is based on the sum of the container limits plus the `overhead`
|
||||
defined in the PodSpec.
|
||||
-->
|
||||
如果该资源对每一个容器都定义了一个限制(定义了限制值的 Guaranteed QoS 或者
|
||||
Burstable QoS),kubelet 会为与该资源(CPU 的 `cpu.cfs_quota_us` 以及内存的
|
||||
`memory.limit_in_bytes`)
|
||||
相关的 Pod cgroup 设定一个上限。该上限基于 PodSpec 中定义的容器限制总量与 `overhead` 之和。
|
||||
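以本例的内存为例(示意):`memory.limit_in_bytes` 会被设置为 100Mi + 100Mi + 120Mi = 320Mi,即 335544320 字节,与下文在节点上验证到的数值一致。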
|
||||
<!--
|
||||
For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set `cpu.shares` based on the sum of container
|
||||
requests plus the `overhead` defined in the PodSpec.
|
||||
-->
|
||||
对于 CPU,如果 Pod 的 QoS 是 Guaranteed 或者 Burstable,kubelet 会基于容器请求总量与
|
||||
PodSpec 中定义的 `overhead` 之和设置 `cpu.shares`。
|
||||
|
||||
<!--
|
||||
Looking at our example, verify the container requests for the workload:
|
||||
-->
|
||||
请看这个例子,验证工作负载的容器请求:
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
|
||||
```
|
||||
|
||||
<!--
|
||||
The total container requests are 2000m CPU and 200MiB of memory:
|
||||
-->
|
||||
容器请求总计 2000m CPU 和 200MiB 内存:
|
||||
```
|
||||
map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi]
|
||||
```
|
||||
|
||||
<!--
|
||||
Check this against what is observed by the node:
|
||||
-->
|
||||
对照从节点观察到的情况来检查一下:
|
||||
```bash
|
||||
kubectl describe node | grep test-pod -B2
|
||||
```
|
||||
|
||||
<!--
|
||||
The output shows 2250m CPU and 320MiB of memory are requested, which includes PodOverhead:
|
||||
-->
|
||||
该输出显示请求了 2250m CPU 以及 320MiB 内存,包含了 PodOverhead 在内:
|
||||
```
|
||||
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
|
||||
--------- ---- ------------ ---------- --------------- ------------- ---
|
||||
default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m
|
||||
```
|
||||
|
||||
<!--
|
||||
## Verify Pod cgroup limits
|
||||
-->
|
||||
## 验证 Pod cgroup 限制
|
||||
|
||||
<!--
|
||||
Check the Pod's memory cgroups on the node where the workload is running. In the following example, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
|
||||
is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an
|
||||
advanced example to show PodOverhead behavior, and it is not expected that users should need to check
|
||||
cgroups directly on the node.
|
||||
|
||||
First, on the particular node, determine the Pod identifier:
|
||||
-->
|
||||
在工作负载所运行的节点上检查 Pod 的内存 cgroups。在接下来的例子中,
|
||||
将在该节点上使用具备 CRI 兼容的容器运行时命令行工具
|
||||
[`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)。
|
||||
这是一个展示 PodOverhead 行为的进阶示例,预期用户通常无需直接在节点上检查 cgroup。
|
||||
首先在特定的节点上确定该 Pod 的标识符:
|
||||
|
||||
<!--
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled
|
||||
-->
|
||||
```bash
|
||||
# 在该 Pod 被调度到的节点上执行如下命令:
|
||||
POD_ID="$(sudo crictl pods --name test-pod -q)"
|
||||
```
|
||||
|
||||
<!--
|
||||
From this, you can determine the cgroup path for the Pod:
|
||||
-->
|
||||
可以依此判断该 Pod 的 cgroup 路径:
|
||||
|
||||
<!--
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled
|
||||
-->
|
||||
```bash
|
||||
# 在该 Pod 被调度到的节点上执行如下命令:
|
||||
sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
|
||||
```
|
||||
|
||||
<!--
|
||||
The resulting cgroup path includes the Pod's `pause` container. The Pod level cgroup is one directory above.
|
||||
-->
|
||||
执行结果的 cgroup 路径中包含了该 Pod 的 `pause` 容器。Pod 级别的 cgroup 即为上一层目录。
|
||||
```
|
||||
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
|
||||
```
|
||||
|
||||
<!--
|
||||
In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verify the Pod level cgroup setting for memory:
|
||||
-->
|
||||
在这个例子中,该 Pod 的 cgroup 路径是 `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`。
|
||||
验证内存的 Pod 级别 cgroup 设置:
|
||||
|
||||
<!--
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled.
|
||||
# Also, change the name of the cgroup to match the cgroup allocated for your pod.
|
||||
-->
|
||||
```bash
|
||||
# 在该 Pod 被调度到的节点上执行这个命令。
|
||||
# 另外,修改 cgroup 的名称以匹配为该 Pod 分配的 cgroup。
|
||||
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
|
||||
```
|
||||
|
||||
<!--
|
||||
This is 320 MiB, as expected:
|
||||
-->
|
||||
和预期的一样,这一数值为 320 MiB。
|
||||
```
|
||||
335544320
|
||||
```
|
||||
|
||||
<!--
|
||||
### Observability
|
||||
-->
|
||||
### 可观察性
|
||||
|
||||
<!--
|
||||
A `kube_pod_overhead` metric is available in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
|
||||
to help identify when PodOverhead is being utilized and to help observe stability of workloads
|
||||
running with a defined Overhead. This functionality is not available in the 1.9 release of
|
||||
kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics
|
||||
from source in the meantime.
|
||||
-->
|
||||
在 [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) 中可以通过
|
||||
`kube_pod_overhead` 指标来协助确定何时使用 PodOverhead
|
||||
以及协助观察以一个既定开销运行的工作负载的稳定性。
|
||||
该特性在 kube-state-metrics 的 1.9 发行版本中不可用,不过预计将在后续版本中发布。
|
||||
在此之前,用户需要从源代码构建 kube-state-metrics。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
* [RuntimeClass](/docs/concepts/containers/runtime-class/)
|
||||
* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
|
||||
-->
|
||||
|
||||
* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
|
||||
* [PodOverhead 设计](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
|
||||
|
|
|
@ -227,6 +227,22 @@ for the corresponding API object, and then written to the object store (shown as
|
|||
|
||||
请求通过所有准入控制器后,将使用检验例程检查对应的 API 对象,然后将其写入对象存储(如步骤 **4** 所示)。
|
||||
|
||||
<!--
|
||||
## Auditing
|
||||
|
||||
Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster.
|
||||
The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.
|
||||
|
||||
For more information, see [Auditing](/docs/tasks/debug-application-cluster/audit/).
|
||||
-->
|
||||
|
||||
## 审计 {#auditing}
|
||||
|
||||
Kubernetes 审计提供了一套与安全相关的、按时间顺序排列的记录,其中记录了集群中的操作序列。
|
||||
集群对用户、使用 Kubernetes API 的应用程序以及控制平面本身产生的活动进行审计。
|
||||
|
||||
更多信息请参考[审计](/zh/docs/tasks/debug-application-cluster/audit/)。
|
||||
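作为补充,下面给出一个最小的审计策略草案(仅为示意;假设该文件会通过 kube-apiserver 的 `--audit-policy-file` 参数指定,具体配置方式请以上述审计文档为准):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # 示意:对所有请求只记录 Metadata 级别的审计事件
  - level: Metadata
```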
|
||||
<!-- ## API server ports and IPs -->
|
||||
## API 服务器端口和 IP {#api-server-ports-and-ips}
|
||||
|
||||
|
|
|
@ -105,10 +105,10 @@ Suggestions for securing your infrastructure in a Kubernetes cluster:
|
|||
Area of Concern for Kubernetes Infrastructure | Recommendation |
|
||||
--------------------------------------------- | -------------- |
|
||||
Network access to API Server (Control plane) | All access to the Kubernetes control plane is not allowed publicly on the internet and is controlled by network access control lists restricted to the set of IP addresses needed to administer the cluster.|
|
||||
Network access to Nodes (nodes) | Nodes should be configured to _only_ accept connections (via network access control lists)from the control plane on the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If possible, these nodes should not be exposed on the public internet entirely.
|
||||
Network access to Nodes (nodes) | Nodes should be configured to _only_ accept connections (via network access control lists) from the control plane on the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If possible, these nodes should not be exposed on the public internet entirely.
|
||||
Kubernetes access to Cloud Provider API | Each cloud provider needs to grant a different set of permissions to the Kubernetes control plane and nodes. It is best to provide the cluster with cloud provider access that follows the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) for the resources it needs to administer. The [Kops documentation](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles) provides information about IAM policies and roles.
|
||||
Access to etcd | Access to etcd (the datastore of Kubernetes) should be limited to the control plane only. Depending on your configuration, you should attempt to use etcd over TLS. More information can be found in the [etcd documentation](https://github.com/etcd-io/etcd/tree/master/Documentation).
|
||||
etcd Encryption | Wherever possible it's a good practice to encrypt all drives at rest, but since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest.
|
||||
etcd Encryption | Wherever possible it's a good practice to encrypt all drives at rest, and since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest.
|
||||
|
||||
{{< /table >}}
|
||||
-->
|
||||
|
@ -124,7 +124,7 @@ Kubetnetes 基础架构关注领域 | 建议 |
|
|||
通过网络访问 Node(节点)| 节点应配置为 _仅能_ 从控制平面上通过指定端口来接受(通过网络访问控制列表)连接,以及接受 NodePort 和 LoadBalancer 类型的 Kubernetes 服务连接。如果可能的话,这些节点不应完全暴露在公共互联网上。|
|
||||
Kubernetes 访问云提供商的 API | 每个云提供商都需要向 Kubernetes 控制平面和节点授予不同的权限集。为集群提供云提供商访问权限时,最好遵循对需要管理的资源的[最小特权原则](https://en.wikipedia.org/wiki/Principle_of_least_privilege)。[Kops 文档](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles)提供有关 IAM 策略和角色的信息。|
|
||||
访问 etcd | 对 etcd(Kubernetes 的数据存储)的访问应仅限于控制平面。根据配置情况,你应该尝试通过 TLS 来使用 etcd。更多信息可以在 [etcd 文档](https://github.com/etcd-io/etcd/tree/master/Documentation)中找到。|
|
||||
etcd 加密 | 在所有可能的情况下,最好对所有驱动器进行静态数据加密,但是由于 etcd 拥有整个集群的状态(包括机密信息),因此其磁盘更应该进行静态数据加密。|
|
||||
etcd 加密 | 在所有可能的情况下,最好对所有驱动器进行静态数据加密,并且由于 etcd 拥有整个集群的状态(包括机密信息),因此其磁盘更应该进行静态数据加密。|
|
||||
|
||||
{{< /table >}}
|
||||
|
||||
|
@ -160,7 +160,7 @@ good information practices, read and follow the advice about
|
|||
Depending on the attack surface of your application, you may want to focus on specific
|
||||
aspects of security. For example: If you are running a service (Service A) that is critical
|
||||
in a chain of other resources and a separate workload (Service B) which is
|
||||
vulnerable to a resource exhaustion attack then the risk of compromising Service A
|
||||
vulnerable to a resource exhaustion attack, then the risk of compromising Service A
|
||||
is high if you do not limit the resources of Service B. The following table lists
|
||||
areas of security concerns and recommendations for securing workloads running in Kubernetes:
|
||||
|
||||
|
@ -169,10 +169,10 @@ Area of Concern for Workload Security | Recommendation |
|
|||
RBAC Authorization (Access to the Kubernetes API) | https://kubernetes.io/docs/reference/access-authn-authz/rbac/
|
||||
Authentication | https://kubernetes.io/docs/concepts/security/controlling-access/
|
||||
Application secrets management (and encrypting them in etcd at rest) | https://kubernetes.io/docs/concepts/configuration/secret/ <br> https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
|
||||
Pod Security Policies | https://kubernetes.io/docs/concepts/policy/pod-security-policy/
|
||||
Ensuring that pods meet defined Pod Security Standards | https://kubernetes.io/docs/concepts/security/pod-security-standards/#policy-instantiation
|
||||
Quality of Service (and Cluster resource management) | https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
|
||||
Network Policies | https://kubernetes.io/docs/concepts/services-networking/network-policies/
|
||||
TLS For Kubernetes Ingress | https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
|
||||
TLS for Kubernetes Ingress | https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
|
||||
-->
|
||||
### 集群中的组件(您的应用) {#cluster-applications}
|
||||
|
||||
|
@ -186,7 +186,7 @@ TLS For Kubernetes Ingress | https://kubernetes.io/docs/concepts/services-networ
|
|||
RBAC 授权(访问 Kubernetes API) | https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/
|
||||
认证方式 | https://kubernetes.io/zh/docs/concepts/security/controlling-access/
|
||||
应用程序 Secret 管理 (并在 etcd 中对其进行静态数据加密) | https://kubernetes.io/zh/docs/concepts/configuration/secret/ <br> https://kubernetes.io/zh/docs/tasks/administer-cluster/encrypt-data/
|
||||
Pod 安全策略 | https://kubernetes.io/zh/docs/concepts/policy/pod-security-policy/
|
||||
确保 Pod 符合定义的 Pod 安全标准 | https://kubernetes.io/zh/docs/concepts/security/pod-security-standards/#policy-instantiation
|
||||
服务质量(和集群资源管理)| https://kubernetes.io/zh/docs/tasks/configure-pod-container/quality-service-pod/
|
||||
网络策略 | https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/
|
||||
Kubernetes Ingress 的 TLS 支持 | https://kubernetes.io/zh/docs/concepts/services-networking/ingress/#tls
|
||||
|
@ -202,7 +202,7 @@ Area of Concern for Containers | Recommendation |
|
|||
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
|
||||
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
|
||||
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
|
||||
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provider stronger isolation
|
||||
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation
|
||||
-->
|
||||
## 容器
|
||||
|
||||
|
@ -232,7 +232,7 @@ are recommendations to protect application code:
|
|||
|
||||
Area of Concern for Code | Recommendation |
|
||||
-------------------------| -------------- |
|
||||
Access over TLS only | If your code needs to communicate by TCP, perform a TLS handshake with the client ahead of time. With the exception of a few cases, encrypt everything in transit. Going one step further, it's a good idea to encrypt network traffic between services. This can be done through a process known as mutual or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) which performs a two sided verification of communication between two certificate holding services. |
|
||||
Access over TLS only | If your code needs to communicate by TCP, perform a TLS handshake with the client ahead of time. With the exception of a few cases, encrypt everything in transit. Going one step further, it's a good idea to encrypt network traffic between services. This can be done through a process known as mutual TLS authentication or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) which performs a two sided verification of communication between two certificate holding services. |
|
||||
Limiting port ranges of communication | This recommendation may be a bit self-explanatory, but wherever possible you should only expose the ports on your service that are absolutely essential for communication or metric gathering. |
|
||||
3rd Party Dependency Security | It is a good practice to regularly scan your application's third party libraries for known security vulnerabilities. Each programming language has a tool for performing this check automatically. |
|
||||
Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors. Some of the tools can be found at: https://owasp.org/www-community/Source_Code_Analysis_Tools |
|
||||
|
@ -246,7 +246,7 @@ Dynamic probing attacks | There are a few automated tools that you can run again
|
|||
|
||||
代码关注领域 | 建议 |
|
||||
-------------------------| -------------- |
|
||||
仅通过 TLS 访问 | 如果您的代码需要通过 TCP 通信,请提前与客户端执行 TLS 握手。除少数情况外,请加密传输中的所有内容。更进一步,加密服务之间的网络流量是一个好主意。这可以通过被称为相互 LTS 或 [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) 的过程来完成,该过程对两个证书持有服务之间的通信执行双向验证。 |
|
||||
仅通过 TLS 访问 | 如果您的代码需要通过 TCP 通信,请提前与客户端执行 TLS 握手。除少数情况外,请加密传输中的所有内容。更进一步,加密服务之间的网络流量是一个好主意。这可以通过被称为双向 TLS 或 [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) 的过程来完成,该过程对两个持有证书的服务之间的通信执行双向验证。 |
|
||||
限制通信端口范围 | 此建议可能有点不言自明,但是在任何可能的情况下,你都只应公开服务上对于通信或度量收集绝对必要的端口。|
|
||||
第三方依赖性安全 | 最好定期扫描应用程序的第三方库以了解已知的安全漏洞。每种编程语言都有一个自动执行此检查的工具。 |
|
||||
静态代码分析 | 大多数语言都提供了一种方法,来分析代码段中是否存在潜在的不安全的编码实践。只要有可能,你都应该使用自动工具执行检查,该工具可以扫描代码库以查找常见的安全错误。一些工具可以在以下链接中找到:https://owasp.org/www-community/Source_Code_Analysis_Tools |
|
||||
|
|
|
@ -35,13 +35,13 @@ Kubernetes [Pod 安全性标准(Security Standards)](/zh/docs/concepts/secur
|
|||
<!--
|
||||
As a Beta feature, Kubernetes offers a built-in _Pod Security_ {{< glossary_tooltip
|
||||
text="admission controller" term_id="admission-controller" >}}, the successor
|
||||
to [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/). Pod security restrictions
|
||||
to [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/). Pod security restrictions
|
||||
are applied at the {{< glossary_tooltip text="namespace" term_id="namespace" >}} level when pods
|
||||
are created.
|
||||
-->
|
||||
作为一项 Beta 功能特性,Kubernetes 提供一种内置的 _Pod 安全性_
|
||||
{{< glossary_tooltip text="准入控制器" term_id="admission-controller" >}},
|
||||
作为 [PodSecurityPolicies](/zh/docs/concepts/policy/pod-security-policy/)
|
||||
作为 [PodSecurityPolicies](/zh/docs/concepts/security/pod-security-policy/)
|
||||
特性的后继演化版本。Pod 安全性限制是在 Pod 被创建时在
|
||||
{{< glossary_tooltip text="名字空间" term_id="namespace" >}}层面实施的。
|
||||
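作为示意(`my-app` 为假设的名字空间名称;所用标签是 Pod 安全性准入使用的标准标签),可以通过给名字空间打标签来在该层面启用这类限制:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                     # 假设的名字空间
  labels:
    pod-security.kubernetes.io/enforce: baseline   # 示例:强制执行 baseline 级别
```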
|
||||
|
|
|
@ -836,7 +836,7 @@ of individual policies are not defined here.
|
|||
- {{< example file="security/podsecurity-baseline.yaml" >}}Baseline 名字空间{{< /example >}}
|
||||
- {{< example file="security/podsecurity-restricted.yaml" >}}Restricted 名字空间{{< /example >}}
|
||||
|
||||
[**PodSecurityPolicy**](/zh/docs/concepts/policy/pod-security-policy/) (已弃用)
|
||||
[**PodSecurityPolicy**](/zh/docs/concepts/security/pod-security-policy/) (已弃用)
|
||||
|
||||
- {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}}
|
||||
- {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}}
|
||||
|
@ -895,7 +895,7 @@ ecosystem, such as:
|
|||
-->
|
||||
安全策略则是控制面用来对安全上下文以及安全性上下文之外的参数实施某种设置的机制。
|
||||
在 2020 年 7 月,
|
||||
[Pod 安全性策略](/zh/docs/concepts/policy/pod-security-policy/)已被废弃,
|
||||
[Pod 安全性策略](/zh/docs/concepts/security/pod-security-policy/)已被废弃,
|
||||
取而代之的是内置的 [Pod 安全性准入控制器](/zh/docs/concepts/security/pod-security-admission/)。
|
||||
|
||||
Kubernetes 生态系统中还在开发一些其他的替代方案,例如
|
||||
|
|
|
@ -131,6 +131,7 @@ is managed by kubelet, or injecting different data.
|
|||
### CSI ephemeral volumes
|
||||
-->
|
||||
### CSI 临时卷 {#csi-ephemeral-volumes}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
|
||||
|
||||
<!--
|
||||
|
@ -142,12 +143,12 @@ The Kubernetes CSI [Drivers list](https://kubernetes-csi.github.io/docs/drivers.
|
|||
shows which drivers support ephemeral volumes.
|
||||
-->
|
||||
|
||||
该特性需要启用参数 `CSIInlineVolume`
|
||||
该特性需要启用参数 `CSIInlineVolume`
|
||||
[特性门控(feature gate)](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
|
||||
该参数从 Kubernetes 1.16 开始默认启用。
|
||||
|
||||
{{< note >}}
|
||||
只有一部分 CSI 驱动程序支持 CSI 临时卷。Kubernetes CSI
|
||||
只有一部分 CSI 驱动程序支持 CSI 临时卷。Kubernetes CSI
|
||||
[驱动程序列表](https://kubernetes-csi.github.io/docs/drivers.html)
|
||||
显示了支持临时卷的驱动程序。
|
||||
{{< /note >}}
|
||||
|
@ -170,7 +171,7 @@ Here's an example manifest for a Pod that uses CSI ephemeral storage:
|
|||
从概念上讲,CSI 临时卷类似于 `configMap`、`downwardAPI` 和 `secret` 类型的卷:
|
||||
其存储在每个节点本地管理,并在将 Pod 调度到节点后与其他本地资源一起创建。
|
||||
在这个阶段,Kubernetes 没有重新调度 Pods 的概念。卷创建不太可能失败,否则 Pod 启动将会受阻。
|
||||
特别是,这些卷 *不* 支持[感知存储容量的 Pod 调度](/zh/docs/concepts/storage/storage-capacity/)。
|
||||
特别是,这些卷 **不** 支持[感知存储容量的 Pod 调度](/zh/docs/concepts/storage/storage-capacity/)。
|
||||
它们目前也没包括在 Pod 的存储资源使用限制中,因为 kubelet 只能对它自己管理的存储强制执行。
|
||||
|
||||
下面是使用 CSI 临时存储的 Pod 的示例清单:
|
||||
|
@ -183,7 +184,7 @@ metadata:
|
|||
spec:
|
||||
containers:
|
||||
- name: my-frontend
|
||||
image: busybox
|
||||
image: busybox:1.28
|
||||
volumeMounts:
|
||||
- mountPath: "/data"
|
||||
name: my-csi-inline-vol
|
||||
|
@ -202,18 +203,52 @@ driver. These attributes are specific to each driver and not
|
|||
standardized. See the documentation of each CSI driver for further
|
||||
instructions.
|
||||
|
||||
As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) to control which CSI drivers can be used in a Pod, specified with the
|
||||
[`allowedCSIDrivers` field](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicyspec-v1beta1-policy).
|
||||
-->
|
||||
|
||||
`volumeAttributes` 决定驱动程序准备什么样的卷。这些属性特定于每个驱动程序,且没有实现标准化。
|
||||
有关进一步的说明,请参阅每个 CSI 驱动程序的文档。
|
||||
|
||||
作为一个集群管理员,你可以使用
|
||||
[PodSecurityPolicy](/zh/docs/concepts/policy/pod-security-policy/)
|
||||
<!--
|
||||
### CSI driver restrictions
|
||||
|
||||
As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) to control which CSI drivers can be used in a Pod, specified with the
|
||||
[`allowedCSIDrivers` field](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicyspec-v1beta1-policy).
|
||||
-->
|
||||
|
||||
### CSI 驱动程序限制 {#csi-driver-restrictions}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
|
||||
|
||||
作为一个集群管理员,你可以使用
|
||||
[PodSecurityPolicy](/zh/docs/concepts/security/pod-security-policy/)
|
||||
来控制在 Pod 中可以使用哪些 CSI 驱动程序,
|
||||
具体则是通过 [`allowedCSIDrivers` 字段](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicyspec-v1beta1-policy)
|
||||
指定。
|
||||
|
||||
<!--
|
||||
{{< note >}}
|
||||
PodSecurityPolicy is deprecated and will be removed in the Kubernetes v1.25 release.
|
||||
{{< /note >}}
|
||||
-->
|
||||
|
||||
{{< note >}}
|
||||
PodSecurityPolicy 已弃用,并将在 Kubernetes v1.25 版本中移除。
|
||||
{{< /note >}}
|
||||
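下面是一个按此思路编写的 PodSecurityPolicy 草案(仅为示意:`csi-inline-demo` 与驱动名 `csi.example.com` 均为假设值,其余字段也需按你的集群策略补全):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: csi-inline-demo            # 假设的策略名称
spec:
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - csi                          # 允许使用 CSI 临时卷
  allowedCSIDrivers:
    - name: csi.example.com        # 虚构的 CSI 驱动名称,仅作占位
```

该策略仅允许 `csi` 卷类型,并将可用的 CSI 驱动限制为 `allowedCSIDrivers` 列表中列出的名称。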
|
||||
|
||||
<!--
|
||||
{{< note >}}
|
||||
CSI ephemeral volumes are only supported by a subset of CSI drivers.
|
||||
The Kubernetes CSI [Drivers list](https://kubernetes-csi.github.io/docs/drivers.html)
|
||||
shows which drivers support ephemeral volumes.
|
||||
{{< /note >}}
|
||||
-->
|
||||
|
||||
{{< note >}}
|
||||
只有一部分 CSI 驱动程序支持 CSI 临时卷。
|
||||
Kubernetes CSI [驱动列表](https://kubernetes-csi.github.io/docs/drivers.html)显示了哪些驱动程序支持临时卷。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
### Generic ephemeral volumes
|
||||
-->
|
||||
|
@ -262,7 +297,7 @@ metadata:
|
|||
spec:
|
||||
containers:
|
||||
- name: my-frontend
|
||||
image: busybox
|
||||
image: busybox:1.28
|
||||
volumeMounts:
|
||||
- mountPath: "/scratch"
|
||||
name: scratch-volume
|
||||
|
@ -312,8 +347,8 @@ because then the scheduler is free to choose a suitable node for
|
|||
the Pod. With immediate binding, the scheduler is forced to select a node that has
|
||||
access to the volume once it is available.
|
||||
-->
|
||||
如上设置将触发卷的绑定与/或准备操作,相应动作或者在
|
||||
{{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}
|
||||
如上设置将触发卷的绑定与/或准备操作,相应动作或者在
|
||||
{{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}
|
||||
使用即时卷绑定时立即执行,
|
||||
或者当 Pod 被暂时性调度到某节点时执行 (`WaitForFirstConsumer` 卷绑定模式)。
|
||||
对于常见的临时卷,建议采用后者,这样调度器就可以自由地为 Pod 选择合适的节点。
|
||||
|
@ -326,7 +361,7 @@ that provide that ephemeral storage. When the Pod is deleted,
|
|||
the Kubernetes garbage collector deletes the PVC, which then usually
|
||||
triggers deletion of the volume because the default reclaim policy of
|
||||
storage classes is to delete volumes. You can create quasi-ephemeral local storage
|
||||
using a StorageClass with a reclaim policy of `retain`: the storage outlives the Pod,
|
||||
using a StorageClass with a reclaim policy of `retain`: the storage outlives the Pod,
|
||||
and in this case you need to ensure that volume clean up happens separately.
|
||||
-->
|
||||
就[资源所有权](/zh/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)而言,
|
||||
|
@ -411,20 +446,20 @@ two choices:
|
|||
如果这不符合他们的安全模型,他们有如下选择:
|
||||
|
||||
<!--
|
||||
- Use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
|
||||
that rejects objects like Pods that have a generic ephemeral
|
||||
volume.
|
||||
- Use a [Pod Security
|
||||
Policy](/docs/concepts/policy/pod-security-policy/) where the
|
||||
`volumes` list does not contain the `ephemeral` volume type
|
||||
(deprecated in Kubernetes 1.21).
|
||||
- Use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
|
||||
which rejects objects like Pods that have a generic ephemeral
|
||||
volume.
|
||||
-->
|
||||
- 通过特性门控显式禁用该特性。
|
||||
- 使用一个[准入 Webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
|
||||
拒绝包含通用临时卷的 Pods。
|
||||
- 使用一个 `volumes` 列表中不包含 `ephemeral` 卷类型的
|
||||
[Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)。
|
||||
(这一方式在 Kubernetes 1.21 版本已经弃用)
|
||||
- 使用一个[准入 Webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/)
|
||||
拒绝包含通用临时卷的 Pods。
|
||||
|
||||
<!--
|
||||
The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota) still applies, so
|
||||
|
|
|
@ -438,7 +438,7 @@ metadata:
|
|||
spec:
|
||||
containers:
|
||||
- name: test
|
||||
image: busybox
|
||||
image: busybox:1.28
|
||||
volumeMounts:
|
||||
- name: config-vol
|
||||
mountPath: /etc/config
|
||||
|
@ -1836,7 +1836,7 @@ spec:
|
|||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: metadata.name
|
||||
image: busybox
|
||||
image: busybox:1.28
|
||||
command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
|
||||
volumeMounts:
|
||||
- name: workdir1
|
||||
|
|
|
@ -247,7 +247,7 @@ completion or failed for some reason. When you use `kubectl` to query a Pod with
|
|||
a container that is `Terminated`, you see a reason, an exit code, and the start and
|
||||
finish time for that container's period of execution.
|
||||
|
||||
If a container has a `preStop` hook configured, that runs before the container enters
|
||||
If a container has a `preStop` hook configured, this hook runs before the container enters
|
||||
the `Terminated` state.
|
||||
-->
|
||||
### `Terminated`(已终止) {#container-state-terminated}
|
||||
|
|
|
@ -1208,10 +1208,10 @@ based on the requested security context and the available Pod Security Policies.
|
|||
安全策略确定是否可以执行请求。
|
||||
|
||||
<!--
|
||||
See also [Pod Security Policy documentation](/docs/concepts/policy/pod-security-policy/)
|
||||
See also the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) documentation
|
||||
for more information.
|
||||
-->
|
||||
查看 [Pod 安全策略文档](/zh/docs/concepts/policy/pod-security-policy/)
|
||||
查看 [Pod 安全策略文档](/zh/docs/concepts/security/pod-security-policy/)
|
||||
了解更多细节。
|
||||
|
||||
### PodTolerationRestriction {#podtolerationrestriction}
|
||||
|
@ -1328,22 +1328,29 @@ Pod 的 `.spec.overhead` 字段和 RuntimeClass 的 `.overhead` 字段均为处
|
|||
### SecurityContextDeny {#securitycontextdeny}
|
||||
|
||||
<!--
|
||||
This admission controller will deny any pod that attempts to set certain escalating
|
||||
This admission controller will deny any Pod that attempts to set certain escalating
|
||||
[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
|
||||
fields, as shown in the
|
||||
[Configure a Security Context for a Pod or Container](/docs/tasks/configure-pod-container/security-context/)
|
||||
task.
|
||||
This should be enabled if a cluster doesn't utilize
|
||||
[pod security policies](/docs/concepts/policy/pod-security-policy/)
|
||||
to restrict the set of values a security context can take.
|
||||
If you don't use [Pod Security admission]((/docs/concepts/security/pod-security-admission/),
|
||||
[PodSecurityPolicies](/docs/concepts/security/pod-security-policy/), nor any external enforcement mechanism,
|
||||
then you could use this admission controller to restrict the set of values a security context can take.
|
||||
|
||||
See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for more context on restricting
|
||||
pod privileges.
|
||||
-->
|
||||
该准入控制器将拒绝任何试图设置某些会提升权限的
|
||||
[SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/)
|
||||
字段的 Pod,正如任务
|
||||
[为 Pod 或 Container 配置安全上下文](/zh/docs/tasks/configure-pod-container/security-context/)
|
||||
中所展示的那样。
|
||||
如果集群没有使用 [Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)
|
||||
来限制安全上下文所能获取的值集,那么应该启用这个功能。
|
||||
如果集群没有使用 [Pod 安全性准入](/zh/docs/concepts/security/pod-security-admission/)、
|
||||
[PodSecurityPolicies](/zh/docs/concepts/security/pod-security-policy/),
|
||||
也没有任何外部执行机制,那么你可以使用此准入控制器来限制安全上下文所能获取的值集。
|
||||
|
||||
有关限制 Pod 权限的更多内容,请参阅
|
||||
[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)。
|
||||
|
||||
### ServiceAccount {#serviceaccount}
|
||||
|
||||
|
|
|
@ -160,7 +160,7 @@ DELETE | delete(针对单个资源)、deletecollection(针对集合)
|
|||
<!--
|
||||
Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example:
|
||||
|
||||
* [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/)
|
||||
* [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/)
|
||||
* `use` verb on `podsecuritypolicies` resources in the `policy` API group.
|
||||
* [RBAC](/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping)
|
||||
* `bind` and `escalate` verbs on `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group.
|
||||
|
@ -169,7 +169,7 @@ Kubernetes sometimes checks authorization for additional permissions using speci
|
|||
-->
|
||||
Kubernetes 有时使用专门的动词以对额外的权限进行鉴权。例如:
|
||||
|
||||
* [PodSecurityPolicy](/zh/docs/concepts/policy/pod-security-policy/)
|
||||
* [PodSecurityPolicy](/zh/docs/concepts/security/pod-security-policy/)
|
||||
* `policy` API 组中 `podsecuritypolicies` 资源使用 `use` 动词
|
||||
* [RBAC](/zh/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping)
|
||||
* 对 `rbac.authorization.k8s.io` API 组中 `roles` 和 `clusterroles` 资源的 `bind`
|
||||
|
|
|
@ -199,7 +199,7 @@ Use the following phase to configure bootstrap tokens.
|
|||
{{< tab name="bootstrap-token" include="generated/kubeadm_init_phase_bootstrap-token.md" />}}
|
||||
{{< /tabs >}}
|
||||
|
||||
## kubeadm init phase kubelet-finialize {#cmd-phase-kubelet-finalize-all}
|
||||
## kubeadm init phase kubelet-finalize {#cmd-phase-kubelet-finalize-all}
|
||||
|
||||
<!--
|
||||
Use the following phase to update settings relevant to the kubelet after TLS
|
||||
|
@ -210,9 +210,9 @@ phases.
|
|||
你可以使用 `all` 子命令来运行所有 `kubelet-finalize` 阶段。
|
||||
|
||||
{{< tabs name="tab-kubelet-finalize" >}}
|
||||
{{< tab name="kublet-finalize" include="generated/kubeadm_init_phase_kubelet-finalize.md" />}}
|
||||
{{< tab name="kublet-finalize-all" include="generated/kubeadm_init_phase_kubelet-finalize_all.md" />}}
|
||||
{{< tab name="kublet-finalize-cert-rotation" include="generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md" />}}
|
||||
{{< tab name="kubelet-finalize" include="generated/kubeadm_init_phase_kubelet-finalize.md" />}}
|
||||
{{< tab name="kubelet-finalize-all" include="generated/kubeadm_init_phase_kubelet-finalize_all.md" />}}
|
||||
{{< tab name="kubelet-finalize-cert-rotation" include="generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md" />}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
|
@ -234,11 +234,11 @@ install them selectively.
|
|||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
For more details on each field in the `v1beta2` configuration you can navigate to our
|
||||
[API reference pages.] (https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2)
|
||||
For more details on each field in the `v1beta3` configuration you can navigate to our
|
||||
[API reference pages.](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
-->
|
||||
有关 `v1beta2` 配置中每个字段的更多详细信息,可以访问
|
||||
[API](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2)。
|
||||
有关 `v1beta3` 配置中每个字段的更多详细信息,可以访问
|
||||
[API](/zh/docs/reference/config-api/kubeadm-config.v1beta3/)。
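下面给出一个最小的 v1beta3 配置文件骨架,仅作示意(`kubernetesVersion` 的取值为假设值,各字段含义请以上述 API 参考为准);这类文件可以通过 `kubeadm init --config <文件名>` 来使用。

```yaml
# 仅作示意:kubeadm v1beta3 配置的最小骨架
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
```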
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
|
|
@ -26,10 +26,10 @@ weight: 60
|
|||
|
||||
<!-- `kubeadm reset` is responsible for cleaning up a node local file system from files that were created using
|
||||
the `kubeadm init` or `kubeadm join` commands. For control-plane nodes `reset` also removes the local stacked
|
||||
etcd member of this node from the etcd cluster and also removes this node's information from the kubeadm
|
||||
`ClusterStatus` object. `ClusterStatus` is a kubeadm managed Kubernetes API object that holds a list of kube-apiserver endpoints. -->
|
||||
`kubeadm reset` 负责从使用 `kubeadm init` 或 `kubeadm join` 命令创建的文件中清除节点本地文件系统。对于控制平面节点,`reset` 还从 etcd 集群中删除该节点的本地 etcd 堆成员,还从 kubeadm `ClusterStatus` 对象中删除该节点的信息。
|
||||
`ClusterStatus` 是一个 kubeadm 管理的 Kubernetes API 对象,该对象包含 kube-apiserver 端点列表。
|
||||
etcd member of this node from the etcd cluster.
|
||||
-->
|
||||
`kubeadm reset` 负责从使用 `kubeadm init` 或 `kubeadm join` 命令创建的文件中清除节点本地文件系统。
|
||||
对于控制平面节点,`reset` 还从 etcd 集群中删除该节点的本地 etcd Stacked 部署的成员。
|
||||
|
||||
<!-- `kubeadm reset phase` can be used to execute the separate phases of the above workflow.
|
||||
To skip a list of phases you can use the `--skip-phases` flag, which works in a similar way to
|
||||
|
|
|
@ -1,25 +1,23 @@
|
|||
---
|
||||
title: "将重复的控制平面迁至云控制器管理器"
|
||||
linkTitle: "将重复的控制平面迁至云控制器管理器"
|
||||
title: 迁移多副本的控制面以使用云控制器管理器
|
||||
linkTitle: 迁移多副本的控制面以使用云控制器管理器
|
||||
content_type: task
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- jpbetz
|
||||
- cheftako
|
||||
title: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
|
||||
linkTitle: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
|
||||
content_type: task
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state state="beta" for_k8s_version="v1.22" >}}
|
||||
|
||||
{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="云管理控制器是">}}
|
||||
{{< glossary_definition term_id="cloud-controller-manager" length="all">}}
|
||||
|
||||
<!--
|
||||
## Background
|
||||
|
@ -33,14 +31,17 @@ the `kube-controller-manager` and the `cloud-controller-manager` via a shared re
|
|||
For a single-node control plane, or if unavailability of controller managers can be tolerated during the upgrade, Leader Migration is not needed and this guide can be ignored.
|
||||
-->
|
||||
## 背景
|
||||
|
||||
作为[云驱动提取工作](https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/)
|
||||
的一部分,所有特定于云的控制器都必须移出 `kube-controller-manager`。
|
||||
所有在 `kube-controller-manager` 中运行云控制器的现有集群必须迁移到云驱动特定的 `cloud-controller-manager` 中运行控制器。
|
||||
所有在 `kube-controller-manager` 中运行云控制器的现有集群必须迁移到特定于云厂商的
|
||||
`cloud-controller-manager` 中运行这些控制器。
|
||||
|
||||
领导者迁移提供了一种机制,使得 HA 集群可以通过两个组件之间的共享资源锁定,
|
||||
安全地将“特定于云”的控制器从 `kube-controller-manager` 和迁移到`cloud-controller-manager`,
|
||||
同时升级复制的控制平面。
|
||||
对于单节点控制平面,或者在升级过程中可以容忍控制器管理器不可用的情况,则不需要领导者迁移,并且可以忽略本指南。
|
||||
领导者迁移(Leader Migration)提供了一种机制,使得 HA 集群可以通过这两个组件之间共享资源锁,
|
||||
在升级多副本的控制平面时,安全地将“特定于云”的控制器从 `kube-controller-manager` 迁移到
|
||||
`cloud-controller-manager`。
|
||||
对于单节点控制平面,或者在升级过程中可以容忍控制器管理器不可用的情况,则不需要领导者迁移,
|
||||
亦可以忽略本指南。
|
||||
|
||||
<!--
|
||||
Leader Migration can be enabled by setting `--enable-leader-migration` on `kube-controller-manager` or `cloud-controller-manager`.
|
||||
|
@ -50,12 +51,13 @@ This guide walks you through the manual process of upgrading the control plane f
|
|||
built-in cloud provider to running both `kube-controller-manager` and `cloud-controller-manager`.
|
||||
If you use a tool to administer the cluster, please refer to the documentation of the tool and the cloud provider for more details.
|
||||
-->
|
||||
领导者迁移可以通过在 `kube-controller-manager` 或 `cloud-controller-manager` 上设置 `--enable-leader-migration` 来启用。
|
||||
领导者迁移仅在升级期间适用,并且可以安全地禁用,也可以在升级完成后保持启用状态。
|
||||
领导者迁移可以通过在 `kube-controller-manager` 或 `cloud-controller-manager` 上设置
|
||||
`--enable-leader-migration` 来启用。
|
||||
领导者迁移仅在升级期间适用,并且在升级完成后可以安全地禁用或保持启用状态。
|
||||
|
||||
本指南将引导你手动将控制平面从内置的云驱动的 `kube-controller-manager` 升级为
|
||||
同时运行 `kube-controller-manager` 和 `cloud-controller-manager`。
|
||||
如果使用工具来管理群集,请参阅对应工具和云驱动的文档以获取更多详细信息。
|
||||
如果使用某种工具来管理集群,请参阅对应工具和云驱动的文档以获取更多详细信息。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
@ -67,13 +69,15 @@ The exact versions of N and N + 1 depend on each cloud provider. For example, if
|
|||
The control plane nodes should run `kube-controller-manager` with Leader Election enabled through `--leader-elect=true`.
|
||||
As of version N, an in-tree cloud provider must be set with the `--cloud-provider` flag and `cloud-controller-manager` should not yet be deployed.
|
||||
-->
|
||||
假定控制平面正在运行 Kubernetes N 版本,并且要升级到 N+1 版本。
|
||||
尽管可以在同一版本中进行迁移,但理想情况下,迁移应作为升级的一部分执行,以便可以更改配置与每个发布版本保持一致。
|
||||
N 和 N+1的确切版本取决于各个云驱动。例如,如果云驱动构建了一个可与 Kubernetes 1.22 配合使用的 `cloud-controller-manager`,
|
||||
则 N 可以为 1.21,N+1 可以为 1.22。
|
||||
假定控制平面正在运行 Kubernetes 版本 N,要升级到版本 N+1。
|
||||
尽管可以在同一版本内进行迁移,但理想情况下,迁移应作为升级的一部分执行,
|
||||
以便配置的变更可以与发布版本变化对应起来。
|
||||
N 和 N+1 的确切版本值取决于各个云厂商。例如,如果云厂商构建了一个可与 Kubernetes 1.22
|
||||
配合使用的 `cloud-controller-manager`,则 N 可以为 1.21,N+1 可以为 1.22。
|
||||
|
||||
控制平面节点应运行 `kube-controller-manager`,并通过 `--leader-elect=true` 启用领导者选举。
|
||||
从版本 N 开始,树内云驱动必须设置 `--cloud-provider` 标志,而且 `cloud-controller-manager` 尚未部署。
|
||||
在版本 N 中,树内云驱动必须设置 `--cloud-provider` 标志,而且 `cloud-controller-manager`
|
||||
应该尚未部署。
|
||||
|
||||
<!--
|
||||
The out-of-tree cloud provider must have built a `cloud-controller-manager` with Leader Migration implementation.
|
||||
|
@ -88,18 +92,18 @@ For authorization, this guide assumes that the cluster uses RBAC.
|
|||
If another authorization mode grants permissions to `kube-controller-manager` and `cloud-controller-manager` components,
|
||||
please grant the needed access in a way that matches the mode.
|
||||
-->
|
||||
树外云驱动必须已经构建了一个实现领导者迁移的 `cloud-controller-manager`。
|
||||
树外云驱动必须已经构建了一个实现了领导者迁移的 `cloud-controller-manager`。
|
||||
如果云驱动导入了 v0.21.0 或更高版本的 `k8s.io/cloud-provider` 和 `k8s.io/controller-manager`,
|
||||
则可以进行领导者迁移。
|
||||
但是,对 v0.22.0 以下的版本,领导者迁移是一项 Alpha 阶段功能,它需要启用特性门控 `ControllerManagerLeaderMigration`。
|
||||
但是,对 v0.22.0 以下的版本,领导者迁移是一项 Alpha 阶段功能,需要启用特性门控
|
||||
`ControllerManagerLeaderMigration`。
|
||||
|
||||
本指南假定每个控制平面节点的 kubelet 以静态 pod 的形式启动 `kube-controller-manager`
|
||||
和 `cloud-controller-manager`,静态 pod 的定义在清单文件中。
|
||||
如果组件以其他设置运行,请相应地调整步骤。
|
||||
本指南假定每个控制平面节点的 kubelet 以静态 Pod 的形式启动 `kube-controller-manager`
|
||||
和 `cloud-controller-manager`,静态 Pod 的定义在清单文件中。
|
||||
如果组件以其他设置运行,请相应地调整这里的步骤。
|
||||
|
||||
为了获得授权,本指南假定集群使用 RBAC。
|
||||
如果其他授权模式授予 `kube-controller-manager` 和 `cloud-controller-manager` 组件权限,
|
||||
请以与该模式匹配的方式授予所需的访问权限。
|
||||
关于鉴权,本指南假定集群使用 RBAC。如果其他鉴权模式授予 `kube-controller-manager`
|
||||
和 `cloud-controller-manager` 组件权限,请以与该模式匹配的方式授予所需的访问权限。
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
|
@ -119,9 +123,10 @@ Do the same to the `system::leader-locking-cloud-controller-manager` role.
|
|||
|
||||
`kubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
|
||||
-->
|
||||
### 授予访问迁移 Lease 的权限
|
||||
### 授予访问迁移租约的权限
|
||||
|
||||
控制器管理器的默认权限仅允许访问其主 Lease 对象。为了使迁移正常进行,需要访问其他 Lease 对象。
|
||||
控制器管理器的默认权限仅允许访问其主租约(Lease)对象。为了使迁移正常进行,
|
||||
需要授权它访问其他 Lease 对象。
|
||||
|
||||
你可以通过修改 `system::leader-locking-kube-controller-manager` 角色来授予
|
||||
`kube-controller-manager` 对 Lease API 的完全访问权限。
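作为对照,上面英文部分针对 `system::leader-locking-cloud-controller-manager` 角色的 patch 命令,其效果大致相当于为该 Role 设置如下规则(仅作示意,内容由前述命令推得):

```yaml
# 仅作示意:打补丁后与迁移租约相关的访问规则
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: 'system::leader-locking-cloud-controller-manager'
  namespace: kube-system
rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  resourceNames: ["cloud-provider-extraction-migration"]
  verbs: ["create", "list", "get", "update"]
```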
|
||||
|
@ -142,11 +147,12 @@ Leader Migration can be enabled without a configuration. Please see [Default Con
|
|||
-->
|
||||
### 初始领导者迁移配置
|
||||
|
||||
领导者迁移可以选择使用一个表示控制器到管理器分配状态的配置文件。
|
||||
目前,对于树内云驱动,`kube-controller-manager` 运行 `route`、`service` 和 `cloud-node-lifecycle`。
|
||||
以下示例配置显示了分配。
|
||||
领导者迁移可以选择使用一个表示如何将控制器分配给不同管理器的配置文件。
|
||||
目前,对于树内云驱动,`kube-controller-manager` 运行 `route`、`service` 和
|
||||
`cloud-node-lifecycle`。以下示例配置显示的是这种分配。
|
||||
|
||||
领导者迁移可以不指定配置来启用。请参阅 [默认配置](#default-configuration) 以获取更多详细信息。
|
||||
领导者迁移可以在不指定配置的情况下启用。请参阅[默认配置](#default-configuration)
|
||||
以获取更多详细信息。
|
||||
|
||||
```yaml
|
||||
kind: LeaderMigrationConfiguration
|
||||
|
@ -172,15 +178,15 @@ Also, update the same manifest to add the following arguments:
|
|||
|
||||
Restart `kube-controller-manager` on each node. At this moment, `kube-controller-manager` has leader migration enabled and is ready for the migration.
|
||||
-->
|
||||
在每个控制平面节点上,将内容保存到 `/etc/leadermigration.conf` 中,
|
||||
并更新 `kube-controller-manager` 清单,以便将文件安装在容器内的同一位置。
|
||||
另外,更新相同的清单,添加以下参数:
|
||||
在每个控制平面节点上,请将如上内容保存到 `/etc/leadermigration.conf` 中,
|
||||
并更新 `kube-controller-manager` 清单,以便将文件挂载到容器内的同一位置。
|
||||
另外,请更新同一清单,添加以下参数:
|
||||
|
||||
- `--enable-leader-migration` 在控制器管理器上启用领导者迁移
|
||||
- `--leader-migration-config=/etc/leadermigration.conf` 设置配置文件
|
||||
|
||||
在每个节点上重新启动 `kube-controller-manager`。这时,`kube-controller-manager`
|
||||
已启用领导者迁移,并准备进行迁移。
|
||||
已启用领导者迁移,为迁移准备就绪。
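下面是一个仅作示意的清单片段,展示上述参数与挂载在静态 Pod 清单中的大致形态(卷名与 hostPath 细节为假设值,请按实际环境调整):

```yaml
# 仅作示意:kube-controller-manager 静态 Pod 清单中与领导者迁移相关的部分
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --leader-elect=true
    - --enable-leader-migration
    - --leader-migration-config=/etc/leadermigration.conf
    volumeMounts:
    - name: leadermigration-conf
      mountPath: /etc/leadermigration.conf
      readOnly: true
  volumes:
  - name: leadermigration-conf
    hostPath:
      path: /etc/leadermigration.conf
      type: File
```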
|
||||
|
||||
<!--
|
||||
### Deploy Cloud Controller Manager
|
||||
|
@ -190,8 +196,9 @@ Please note `component` field of each `controllerLeaders` changing from `kube-co
|
|||
-->
|
||||
### 部署云控制器管理器
|
||||
|
||||
在 N+1 版本中,控制器到管理器分配的期望状态可以由新的配置文件表示,如下所示。
|
||||
请注意,每个 `controllerLeaders` 的 `component` 字段从 `kube-controller-manager` 更改为 `cloud-controller-manager`。
|
||||
在版本 N+1 中,如何将控制器分配给不同管理器的预期分配状态可以由新的配置文件表示,
|
||||
如下所示。请注意,各个 `controllerLeaders` 的 `component` 字段从 `kube-controller-manager`
|
||||
更改为 `cloud-controller-manager`。
|
||||
|
||||
```yaml
|
||||
kind: LeaderMigrationConfiguration
|
||||
|
@ -221,15 +228,16 @@ with an external cloud provider, it does not run the migrated controllers anymor
|
|||
Please refer to [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/)
|
||||
for more detail on how to deploy `cloud-controller-manager`.
|
||||
-->
|
||||
|
||||
当创建 N+1 版本的控制平面节点时,应将内容部署到 `/etc/leadermigration.conf`。
|
||||
应该更新 `cloud-controller-manager` 清单,以与 N 版本的 `kube-controller-manager` 相同的方式挂载配置文件。
|
||||
当创建版本 N+1 的控制平面节点时,应将如上内容写入到 `/etc/leadermigration.conf`。
|
||||
你需要更新 `cloud-controller-manager` 的清单,以与版本 N 的 `kube-controller-manager`
|
||||
相同的方式挂载配置文件。
|
||||
类似地,添加 `--feature-gates=ControllerManagerLeaderMigration=true`、`--enable-leader-migration`
|
||||
和 `--leader-migration-config=/etc/leadermigration.conf` 到 `cloud-controller-manager` 的参数中。
|
||||
和 `--leader-migration-config=/etc/leadermigration.conf` 到 `cloud-controller-manager`
|
||||
的参数中。
|
||||
|
||||
使用已更新的 `cloud-controller-manager` 清单创建一个新的 N+1 版本的控制平面节点。
|
||||
并且没有设置 `kube-controller-manager` 的 `--cloud-provider` 标志。
|
||||
N+1 版本的 `kube-controller-manager` 不能启用领导者迁移,
|
||||
使用已更新的 `cloud-controller-manager` 清单创建一个新的 N+1 版本的控制平面节点,
|
||||
同时确保没有设置 `kube-controller-manager` 的 `--cloud-provider` 标志。
|
||||
版本为 N+1 的 `kube-controller-manager` 不能启用领导者迁移,
|
||||
因为在使用外部云驱动的情况下,它不再运行已迁移的控制器,因此不参与迁移。
|
||||
|
||||
请参阅[云控制器管理器管理](/zh/docs/tasks/administer-cluster/running-cloud-controller/)
|
||||
|
@ -250,17 +258,17 @@ If a rollback from version N + 1 to N is required, add nodes of version N with L
|
|||
-->
|
||||
### 升级控制平面
|
||||
|
||||
现在,控制平面包含 N 和 N+1 版本的节点。
|
||||
N 版本的节点仅运行 `kube-controller-manager`,而 N+1 版本的节点同时运行
|
||||
现在,控制平面同时包含 N 和 N+1 版本的节点。
|
||||
版本 N 的节点仅运行 `kube-controller-manager`,而版本 N+1 的节点同时运行
|
||||
`kube-controller-manager` 和 `cloud-controller-manager`。
|
||||
根据配置所指定,已迁移的控制器在 N 版本的 `kube-controller-manager` 或 N+1 版本的
|
||||
`cloud-controller-manager` 下运行,
|
||||
具体取决于哪个控制器管理器拥有迁移 Lease 对象。任何时候都不存在一个控制器在两个控制器管理器下运行。
|
||||
根据配置所指定,已迁移的控制器在版本 N 的 `kube-controller-manager` 或版本
|
||||
N+1 的 `cloud-controller-manager` 下运行,具体取决于哪个控制器管理器拥有迁移租约对象。
|
||||
任何时候都不会有同一个控制器在两个控制器管理器下运行。
|
||||
|
||||
以滚动的方式创建一个新的版本为 N+1 的控制平面节点,并将 N+1 版本中的一个关闭,
|
||||
以滚动的方式创建一个新的版本为 N+1 的控制平面节点,并将版本 N 中的一个关闭,
|
||||
直到控制平面仅包含版本为 N+1 的节点。
|
||||
如果需要从 N+1 版本回滚到 N 版本,则将启用了领导者迁移的 `kube-controller-manager`
|
||||
且版本为 N 的节点添加回控制平面,每次替换 N+1 版本的一个,直到只有 N 版本的节点为止。
|
||||
如果需要从 N+1 版本回滚到 N 版本,则将 `kube-controller-manager` 已启用领导者迁移、
|
||||
且版本为 N 的节点添加回控制平面,每次替换 N+1 版本中的一个,直到只有版本 N 的节点为止。
|
||||
|
||||
<!--
|
||||
### (Optional) Disable Leader Migration {#disable-leader-migration}
|
||||
|
@ -276,14 +284,15 @@ To re-enable Leader Migration, recreate the configuration file and add its mount
|
|||
-->
|
||||
### (可选)禁用领导者迁移 {#disable-leader-migration}
|
||||
|
||||
现在,控制平面已经升级,可以同时运行 N+1 版本的 `kube-controller-manager` 和 `cloud-controller-manager` 了。
|
||||
领导者迁移已经完成工作,可以安全地禁用以节省一个 Lease 资源。
|
||||
在将来可以安全地重新启用领导者迁移以完成回滚。
|
||||
现在,控制平面已经完成升级,同时运行版本 N+1 的 `kube-controller-manager`
|
||||
和 `cloud-controller-manager`。领导者迁移的任务已经结束,可以被安全地禁用以节省一个
|
||||
Lease 资源。在将来可以安全地重新启用领导者迁移,以完成回滚。
|
||||
|
||||
在滚动管理器中,更新 `cloud-controller-manager` 的清单以同时取消设置 `--enable-leader-migration`
|
||||
和 `--leader-migration-config=` 标志,并删除 `/etc/leadermigration.conf` 的挂载。
|
||||
最后删除 `/etc/leadermigration.conf`。
|
||||
要重新启用领导者迁移,请重新创建配置文件,并将其挂载和启用领导者迁移的标志添加回到 `cloud-controller-manager`。
|
||||
在滚动管理器中,更新 `cloud-controller-manager` 的清单以同时取消设置
|
||||
`--enable-leader-migration` 和 `--leader-migration-config=` 标志,并删除
|
||||
`/etc/leadermigration.conf` 的挂载,最后删除 `/etc/leadermigration.conf`。
|
||||
要重新启用领导者迁移,请重新创建配置文件,并将其挂载和启用领导者迁移的标志添加回到
|
||||
`cloud-controller-manager`。
|
||||
|
||||
<!--
|
||||
### Default Configuration
|
||||
|
@ -295,8 +304,9 @@ For `kube-controller-manager` and `cloud-controller-manager`, if there are no fl
|
|||
-->
|
||||
### 默认配置 {#default-configuration}
|
||||
|
||||
从 Kubernetes 1.22 开始,领导者迁移提供了一个默认配置,它适用于默认的控制器到管理器分配。
|
||||
可以通过设置 `--enable-leader-migration`,但不设置 `--leader-migration-config=` 来启用默认配置。
|
||||
从 Kubernetes 1.22 开始,领导者迁移提供了一个默认配置,它适用于控制器与管理器间默认的分配关系。
|
||||
可以通过设置 `--enable-leader-migration`,但不设置 `--leader-migration-config=`
|
||||
来启用默认配置。
|
||||
|
||||
对于 `kube-controller-manager` 和 `cloud-controller-manager`,如果没有用参数来启用树内云驱动或者改变控制器属主,
|
||||
则可以使用默认配置来避免手动创建配置文件。
|
||||
|
@ -305,4 +315,6 @@ For `kube-controller-manager` and `cloud-controller-manager`, if there are no fl
|
|||
<!--
|
||||
- Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration) enhancement proposal
|
||||
-->
|
||||
- 阅读[领导者迁移控制器管理器](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration)改进建议
|
||||
- 阅读[领导者迁移控制器管理器](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration)
|
||||
改进建议提案。
|
||||
|
||||
|
|
|
@ -351,8 +351,9 @@ respectively.
|
|||
<!--
|
||||
## General Guidelines
|
||||
|
||||
System daemons are expected to be treated similar to 'Guaranteed' pods. System
|
||||
daemons can burst within their bounding control groups and this behavior needs
|
||||
System daemons are expected to be treated similar to
|
||||
[Guaranteed pods](/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed).
|
||||
System daemons can burst within their bounding control groups and this behavior needs
|
||||
to be managed as part of kubernetes deployments. For example, `kubelet` should
|
||||
have its own control group and share `Kube-reserved` resources with the
|
||||
container runtime. However, Kubelet cannot burst and use up all available Node
|
||||
|
@ -360,7 +361,9 @@ resources if `kube-reserved` is enforced.
|
|||
-->
|
||||
## 一般原则 {#general-guidelines}
|
||||
|
||||
系统守护进程一般会被按照类似 'Guaranteed' Pod 一样对待。
|
||||
系统守护进程一般会被按照类似
|
||||
[Guaranteed pods](/zh/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed)
|
||||
一样对待。
|
||||
系统守护进程可以在与其对应的控制组中出现突发资源用量,这一行为要作为
|
||||
kubernetes 部署的一部分进行管理。
|
||||
例如,`kubelet` 应该有它自己的控制组并和容器运行时共享 `Kube-reserved` 资源。
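下面给出一个仅作示意的 kubelet 配置片段,展示如何设置 `kubeReserved`(资源数量与 cgroup 名称均为假设值):

```yaml
# 仅作示意:通过 KubeletConfiguration 预留 kube-reserved 资源
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:
  cpu: "500m"
  memory: "1Gi"
kubeReservedCgroup: /kube.slice
enforceNodeAllocatable:
- pods
```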
|
||||
|
|
|
@ -52,7 +52,7 @@ configure a [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/).
|
|||
|
||||
If availability is important for any applications that run or could run on the node(s)
|
||||
that you are draining, [configure a PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/)
|
||||
first and the continue following this guide.
|
||||
first and then continue following this guide.
|
||||
-->
|
||||
## (可选) 配置干扰预算 {#configure-poddisruptionbudget}
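下面是一个最小的 PodDisruptionBudget 示意(名称、标签与 `minAvailable` 的取值均为假设,应结合应用实际情况设置):

```yaml
# 仅作示意:为示例应用保留至少一个可用副本
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```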
|
||||
|
||||
|
|
|
@ -66,15 +66,14 @@ you can see the `spec.serviceAccountName` field has been
|
|||
|
||||
<!--
|
||||
You can access the API from inside a pod using automatically mounted service account credentials,
|
||||
as described in [Accessing the Cluster](/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod).
|
||||
as described in [Accessing the Cluster](/docs/tasks/accessing-application-cluster/access-cluster/).
|
||||
The API permissions of the service account depend on the [authorization plugin and policy](/docs/reference/access-authn-authz/authorization/#authorization-modules) in use.
|
||||
|
||||
In version 1.6+, you can opt out of automounting API credentials for a service account by setting
|
||||
`automountServiceAccountToken: false` on the service account:
|
||||
-->
|
||||
你可以使用自动挂载给 Pod 的服务账户凭据访问 API,
|
||||
[访问集群](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)
|
||||
中有相关描述。
|
||||
[访问集群](/zh/docs/tasks/access-application-cluster/access-cluster/)页面中有相关描述。
|
||||
服务账户的 API 许可取决于你所使用的
|
||||
[鉴权插件和策略](/zh/docs/reference/access-authn-authz/authorization/#authorization-modules)。
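上面英文注释中提到,可以通过在服务账号上设置 `automountServiceAccountToken: false` 来选择不自动挂载 API 凭据;下面是一个仅作示意的例子(`build-robot` 为本页使用的示例服务账号名):

```yaml
# 仅作示意:在 ServiceAccount 上关闭 API 凭据的自动挂载
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
```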
|
||||
|
||||
|
@ -172,7 +171,7 @@ The output is similar to this:
|
|||
-->
|
||||
输出类似于:
|
||||
|
||||
```yaml
|
||||
```none
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
|
@ -237,7 +236,6 @@ metadata:
|
|||
kubernetes.io/service-account.name: build-robot
|
||||
type: kubernetes.io/service-account-token
|
||||
EOF
|
||||
secret/build-robot-secret created
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -246,7 +244,6 @@ Now you can confirm that the newly built secret is populated with an API token f
|
|||
Any tokens for non-existent service accounts will be cleaned up by the token controller.
|
||||
-->
|
||||
现在,你可以确认新构建的 Secret 中填充了 "build-robot" 服务帐户的 API 令牌。
|
||||
|
||||
令牌控制器将清理不存在的服务帐户的所有令牌。
|
||||
|
||||
```shell
|
||||
|
@ -312,10 +309,10 @@ The content of `token` is elided here.
|
|||
<!-- The output is similar to this: -->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
NAME TYPE DATA AGE
|
||||
myregistrykey kubernetes.io/.dockerconfigjson 1 1d
|
||||
```
|
||||
```
|
||||
NAME TYPE DATA AGE
|
||||
myregistrykey kubernetes.io/.dockerconfigjson 1 1d
|
||||
```
|
||||
|
||||
<!--
|
||||
### Add image pull secret to service account
|
||||
|
@ -324,7 +321,7 @@ Next, modify the default service account for the namespace to use this secret as
|
|||
-->
|
||||
### 将镜像拉取 Secret 添加到服务账号
|
||||
|
||||
接着修改命名空间的 `default` 服务帐户,以将该 Secret 用作 imagePullSecret。
|
||||
接着修改命名空间的 `default` 服务帐户,以将该 Secret 用作 `imagePullSecret`。
|
||||
|
||||
```shell
|
||||
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
|
||||
|
@ -394,8 +391,8 @@ Now, when a new Pod is created in the current namespace and using the default Se
|
|||
-->
|
||||
### 验证镜像拉取 Secret 已经被添加到 Pod 规约
|
||||
|
||||
现在,在当前命名空间中创建的每个使用默认服务账号的新 Pod,新 Pod 都会自动
|
||||
设置其 `.spec.imagePullSecrets` 字段:
|
||||
现在,在当前命名空间中创建使用默认服务账号的新 Pod 时,新 Pod
|
||||
会自动设置其 `.spec.imagePullSecrets` 字段:
|
||||
|
||||
```shell
|
||||
kubectl run nginx --image=nginx --restart=Never
|
||||
|
@ -419,22 +416,65 @@ myregistrykey
|
|||
<!--
|
||||
To enable and use token request projection, you must specify each of the following
|
||||
command line arguments to `kube-apiserver`:
|
||||
|
||||
* `--service-account-issuer`
|
||||
* `--service-account-key-file`
|
||||
* `--service-account-signing-key-file`
|
||||
* `--api-audiences` (can be omitted)
|
||||
|
||||
-->
|
||||
{{< note >}}
|
||||
为了启用令牌请求投射,你必须为 `kube-apiserver` 设置以下命令行参数:
|
||||
|
||||
<!--
|
||||
* `--service-account-issuer`
|
||||
It can be used as the Identifier of the service account token issuer. You can specify the
|
||||
`--service-account-issuer` argument multiple times, this can be useful to enable a non-disruptive
|
||||
change of the issuer. When this flag is specified multiple times, the first is used to generate
|
||||
tokens and all are used to determine which issuers are accepted. You must be running Kubernetes
|
||||
v1.22 or later to be able to specify `--service-account-issuer` multiple times.
|
||||
-->
|
||||
* `--service-account-issuer`
|
||||
* `--service-account-key-file`
|
||||
* `--service-account-signing-key-file`
|
||||
* `--api-audiences`(可以省略)
|
||||
|
||||
{{< /note >}}
|
||||
此参数可作为服务账户令牌发放者的身份标识(Identifier)。你可以多次指定
|
||||
`--service-account-issuer` 参数,对于要变更发放者而又不想带来业务中断的场景,
|
||||
这样做是有用的。如果这个参数被多次指定,则第一个参数值会被用来生成令牌,
|
||||
而所有参数值都会被用来确定哪些发放者是可接受的。你所运行的 Kubernetes
|
||||
集群必须是 v1.22 或更高版本,才能多次指定 `--service-account-issuer`。
|
||||
|
||||
<!--
|
||||
* `--service-account-key-file`
|
||||
File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify
|
||||
ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified
|
||||
multiple times with different files. If specified multiple times, tokens signed by any of the
|
||||
specified keys are considered valid by the Kubernetes API server.
|
||||
-->
|
||||
* `--service-account-key-file`
|
||||
|
||||
包含 PEM 编码的 x509 RSA 或 ECDSA 私钥或公钥,用来检查 ServiceAccount
|
||||
的令牌。所指定的文件中可以包含多个密钥,并且你可以多次使用此参数,
|
||||
每次参数值为不同的文件。多次使用此参数时,由所给的密钥之一签名的令牌会被
|
||||
Kubernetes API 服务器认为是合法令牌。
|
||||
|
||||
<!--
|
||||
* `--service-account-signing-key-file`
|
||||
Path to the file that contains the current private key of the service account token issuer. The
|
||||
issuer signs issued ID tokens with this private key.
|
||||
-->
|
||||
* `--service-account-signing-key-file`
|
||||
|
||||
指向包含当前服务账户令牌发放者的私钥的文件路径。
|
||||
此发放者使用此私钥来签署所发放的 ID 令牌。
|
||||
|
||||
<!--
|
||||
* `--api-audiences` (can be omitted)
|
||||
The service account token authenticator validates that tokens used against the API are bound to
|
||||
at least one of these audiences. If `api-audiences` is specified multiple times, tokens for any of
|
||||
the specified audiences are considered valid by the Kubernetes API server. If the
|
||||
`--service-account-issuer` flag is configured and this flag is not, this field defaults to a
|
||||
single element list containing the issuer URL.
|
||||
-->
|
||||
* `--api-audiences`(可以省略)
|
||||
|
||||
服务账号令牌身份检查组件会检查针对 API 访问所使用的令牌,
|
||||
确认令牌至少是被绑定到这里所给的受众(audiences)之一。
|
||||
如果此参数被多次指定,则针对所给的多个受众中任何目标的令牌都会被
|
||||
Kubernetes API 服务器当做合法的令牌。如果 `--service-account-issuer`
|
||||
参数被设置,而这个参数未指定,则这个参数的默认值为一个只有一个元素的列表,
|
||||
且该元素为令牌发放者的 URL。
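下面给出一个仅作示意的参数组合,展示这些标志在 kube-apiserver 启动参数中的大致形态(发放者 URL 与密钥文件路径均为假设值):

```yaml
# 仅作示意:kube-apiserver 启用令牌请求投射所需的参数组合
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --api-audiences=https://kubernetes.default.svc.cluster.local
```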
|
||||
|
||||
<!--
|
||||
The kubelet can also project a service account token into a Pod. You can
|
||||
|
@ -443,9 +483,9 @@ duration. These properties are not configurable on the default service account
|
|||
token. The service account token will also become invalid against the API when
|
||||
the Pod or the ServiceAccount is deleted.
|
||||
-->
|
||||
kubelet 还可以将服务帐户令牌投影到 Pod 中。
|
||||
你可以指定令牌的所需属性,例如受众和有效持续时间。
|
||||
这些属性在默认服务帐户令牌上无法配置。
|
||||
kubelet 还可以将服务帐户令牌投射到 Pod 中。
|
||||
你可以指定令牌的期望属性,例如受众和有效期限。
|
||||
这些属性在 default 服务帐户令牌上无法配置。
|
||||
当删除 Pod 或 ServiceAccount 时,服务帐户令牌也将对 API 无效。
|
||||
|
||||
<!--
|
||||
|
@ -476,13 +516,14 @@ The kubelet proactively rotates the token if it is older than 80% of its total T
|
|||
|
||||
The application is responsible for reloading the token when it rotates. Periodic reloading (e.g. once every 5 minutes) is sufficient for most use cases.
|
||||
-->
|
||||
`kubelet` 组件会替 Pod 请求令牌并将其保存起来,通过将令牌存储到一个可配置的
|
||||
路径使之在 Pod 内可用,并在令牌快要到期的时候刷新它。
|
||||
`kubelet` 会在令牌存在期达到其 TTL 的 80% 的时候或者令牌生命期超过 24 小时
|
||||
的时候主动轮换它。
|
||||
`kubelet` 组件会替 Pod 请求令牌并将其保存起来,
|
||||
通过将令牌存储到一个可配置的路径使之在 Pod 内可用,
|
||||
并在令牌快要到期的时候刷新它。
|
||||
`kubelet` 会在令牌存在期达到其 TTL 的 80% 的时候或者令牌生命期超过
|
||||
24 小时的时候主动轮换它。
|
||||
|
||||
应用程序负责在令牌被轮换时重新加载其内容。对于大多数使用场景而言,周期性地
|
||||
(例如,每隔 5 分钟)重新加载就足够了。
|
||||
应用程序负责在令牌被轮换时重新加载其内容。对于大多数使用场景而言,
|
||||
周期性地(例如,每隔 5 分钟)重新加载就足够了。
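下面是一个仅作示意的 Pod 清单,演示如何通过投射卷请求具有指定受众和有效期限的服务账号令牌(受众 `vault`、路径与时长均为示例取值):

```yaml
# 仅作示意:使用 serviceAccountToken 投射卷请求定制令牌
apiVersion: v1
kind: Pod
metadata:
  name: token-projection-demo
spec:
  serviceAccountName: default
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: vault-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          audience: vault
          expirationSeconds: 7200
```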
|
||||
|
||||
<!--
|
||||
## Service Account Issuer Discovery
|
||||
|
@ -496,8 +537,8 @@ The Service Account Issuer Discovery feature is enabled when the Service Account
|
|||
Token Projection feature is enabled, as described
|
||||
[above](#service-account-token-volume-projection).
|
||||
-->
|
||||
当启用服务账号令牌投射时启用发现服务账号分发者(Service Account Issuer Discovery)这一功能特性,
|
||||
如[上文所述](#service-account-token-volume-projection)。
|
||||
当启用服务账号令牌投射时启用发现服务账号分发者(Service Account Issuer Discovery)
|
||||
这一功能特性,如[上文所述](#service-account-token-volume-projection)。
|
||||
|
||||
<!--
|
||||
The issuer URL must comply with the
|
||||
|
@ -508,16 +549,14 @@ provider configuration at `{service-account-issuer}/.well-known/openid-configura
|
|||
If the URL does not comply, the `ServiceAccountIssuerDiscovery` endpoints will
|
||||
not be registered, even if the feature is enabled.
|
||||
-->
|
||||
{{< note >}}
|
||||
分发者的 URL 必须遵从
|
||||
[OIDC 发现规范](https://openid.net/specs/openid-connect-discovery-1_0.html)。
|
||||
这意味着 URL 必须使用 `https` 模式,并且必须在
|
||||
`{service-account-issuer}/.well-known/openid-configuration`
|
||||
路径提供 OpenID 提供者(Provider)配置。
|
||||
路径给出 OpenID 提供者(Provider)配置。
|
||||
|
||||
如果 URL 没有遵从这一规范,`ServiceAccountIssuerDiscovery` 末端就不会被注册,
|
||||
即使该特性已经被启用。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The Service Account Issuer Discovery feature enables federation of Kubernetes
|
||||
|
@ -530,13 +569,13 @@ JSON Web Key Set (JWKS) at `/openid/v1/jwks`. The OpenID Provider Configuration
|
|||
is sometimes referred to as the _discovery document_.
|
||||
-->
|
||||
发现服务账号分发者这一功能使得用户能够用联邦的方式结合使用 Kubernetes
|
||||
集群(_Identity Provider_,标识提供者)与外部系统(_relying parties_,
|
||||
集群(“Identity Provider”,标识提供者)与外部系统(“Relying Parties”,
|
||||
依赖方)所分发的服务账号令牌。
|
||||
|
||||
当此功能被启用时,Kubernetes API 服务器会在 `/.well-known/openid-configuration`
|
||||
提供一个 OpenID 提供者配置文档,并在 `/openid/v1/jwks` 处提供与之关联的
|
||||
JSON Web Key Set(JWKS)。
|
||||
这里的 OpenID 提供者配置有时候也被称作 _发现文档(Discovery Document)_。
|
||||
这里的 OpenID 提供者配置有时候也被称作“发现文档(Discovery Document)”。
|
||||
|
||||
<!--
|
||||
Clusters include a default RBAC ClusterRole called
|
||||
|
@ -550,7 +589,8 @@ requirements and which external systems they intend to federate with.
|
|||
-->
|
||||
集群中包含一个名为 `system:service-account-issuer-discovery` 的默认 RBAC ClusterRole。
|
||||
默认的 RBAC ClusterRoleBinding 将此角色分配给 `system:serviceaccounts` 组,
|
||||
所有服务帐户隐式属于该组。这使得集群上运行的 Pod 能够通过它们所挂载的服务帐户令牌访问服务帐户发现文档。
|
||||
所有服务帐户隐式属于该组。这使得集群上运行的 Pod
|
||||
能够通过它们所挂载的服务帐户令牌访问服务帐户发现文档。
|
||||
此外,管理员可以根据其安全性需要以及期望集成的外部系统选择是否将该角色绑定到
|
||||
`system:authenticated` 或 `system:unauthenticated`。
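例如,若要将该角色绑定到 `system:authenticated` 组,可以参考下面仅作示意的绑定(绑定名称为假设值):

```yaml
# 仅作示意:允许所有已认证用户读取服务账号分发者发现文档
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-discovery-authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:service-account-issuer-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```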
|
||||
|
||||
|
@ -560,11 +600,9 @@ The responses served at `/.well-known/openid-configuration` and
|
|||
compliant. Those documents contain only the parameters necessary to perform
|
||||
validation of Kubernetes service account tokens.
|
||||
-->
|
||||
{{< note >}}
|
||||
对 `/.well-known/openid-configuration` 和 `/openid/v1/jwks` 路径请求的响应
|
||||
被设计为与 OIDC 兼容,但不是完全与其一致。
|
||||
返回的文档仅包含对 Kubernetes 服务账号令牌进行验证所必须的参数。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The JWKS response contains public keys that a relying party can use to validate
|
||||
|
@ -573,8 +611,7 @@ OpenID Provider Configuration, and use the `jwks_uri` field in the response to
|
|||
find the JWKS.
|
||||
-->
|
||||
JWKS 响应包含依赖方可以用来验证 Kubernetes 服务账号令牌的公钥数据。
|
||||
依赖方先会查询 OpenID 提供者配置,之后使用返回响应中的 `jwks_uri` 来查找
|
||||
JWKS。
|
||||
依赖方先会查询 OpenID 提供者配置,之后使用返回响应中的 `jwks_uri` 来查找 JWKS。
|
||||
|
||||
<!--
|
||||
In many cases, Kubernetes API servers are not available on the public internet,
|
||||
|
@ -592,7 +629,6 @@ JWKS URI is required to use the `https` scheme.
|
|||
这时需要向 API 服务器传递 `--service-account-jwks-uri` 参数。
|
||||
与分发者 URL 类似,此 JWKS URI 也需要使用 `https` 模式。
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
|
@ -607,3 +643,4 @@ See also:
|
|||
- [服务账号的集群管理员指南](/zh/docs/reference/access-authn-authz/service-accounts-admin/)
|
||||
- [服务账号签署密钥检索 KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1393-oidc-discovery)
|
||||
- [OIDC 发现规范](https://openid.net/specs/openid-connect-discovery-1_0.html)
|
||||
|
||||
|
|
|
@ -70,17 +70,11 @@ a Pod or Container. Security context settings include, but are not limited to:
|
|||
The above bullets are not a complete set of security context settings - please see
|
||||
[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
|
||||
for a comprehensive list.
|
||||
|
||||
For more information about security mechanisms in Linux, see
|
||||
[Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features)
|
||||
-->
|
||||
以上条目不是安全上下文设置的完整列表 -- 请参阅
|
||||
[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
|
||||
了解其完整列表。
|
||||
|
||||
关于在 Linux 系统中的安全机制的更多信息,可参阅
|
||||
[Linux 内核安全性能力概述](https://www.linux.com/learn/overview-linux-kernel-security-features)。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
@ -779,15 +773,19 @@ kubectl delete pod security-context-demo-4
|
|||
* [Tuning Docker with the newest security enhancements](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)
|
||||
* [Security Contexts design document](https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md)
|
||||
* [Ownership Management design document](https://git.k8s.io/community/contributors/design-proposals/storage/volume-ownership-management.md)
|
||||
* [Pod Security Policies](/docs/concepts/policy/pod-security-policy/)
|
||||
* [Pod Security Policies](/docs/concepts/security/pod-security-policy/)
|
||||
* [AllowPrivilegeEscalation design
|
||||
document](https://git.k8s.io/community/contributors/design-proposals/auth/no-new-privs.md)
|
||||
* For more information about security mechanisms in Linux, see
|
||||
[Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features)
|
||||
-->
|
||||
* [PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core) API 定义
|
||||
* [SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core) API 定义
|
||||
* [使用最新的安全性增强来调优 Docker(英文)](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)
|
||||
* [安全上下文的设计文档(英文)](https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md)
|
||||
* [属主管理的设计文档(英文)](https://git.k8s.io/community/contributors/design-proposals/storage/volume-ownership-management.md)
|
||||
* [Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)
|
||||
* [Pod 安全策略](/zh/docs/concepts/security/pod-security-policy/)
|
||||
* [AllowPrivilegeEscalation 的设计文档(英文)](https://git.k8s.io/community/contributors/design-proposals/auth/no-new-privs.md)
|
||||
* 关于在 Linux 系统中的安全机制的更多信息,可参阅
|
||||
[Linux 内核安全性能力概述](https://www.linux.com/learn/overview-linux-kernel-security-features)。
|
||||
|
||||
|
|
|
@ -43,8 +43,8 @@ on general patterns for running stateful applications in Kubernetes.
|
|||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
{{< include "default-storage-class-prereqs.md" >}}
|
||||
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
* {{< include "default-storage-class-prereqs.md" >}}
|
||||
|
||||
<!--
|
||||
* This tutorial assumes you are familiar with
|
||||
|
@ -146,7 +146,7 @@ resolving `<pod-name>.mysql` from within any other Pod in the same Kubernetes
|
|||
cluster and namespace.
|
||||
-->
|
||||
这个无头服务给 StatefulSet 控制器为集合中每个 Pod 创建的 DNS 条目提供了一个宿主。
|
||||
因为服务名为 `mysql`,所以可以通过在同一 Kubernetes 集群和名字中的任何其他 Pod
|
||||
因为无头服务名为 `mysql`,所以可以通过在同一 Kubernetes 集群和命名空间中的任何其他 Pod
|
||||
内解析 `<Pod 名称>.mysql` 来访问 Pod。
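作为参考,这样的无头服务大致形如下面这个仅作示意的清单(标签选择算符以教程中使用的 `app: mysql` 为假设):

```yaml
# 仅作示意:名为 mysql 的无头服务,clusterIP 设置为 None
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
```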
|
||||
|
||||
<!--
|
||||
|
@ -274,7 +274,7 @@ controller into the domain of MySQL server IDs, which require the same
|
|||
properties.
|
||||
-->
|
||||
该脚本通过从 Pod 名称的末尾提取索引来确定自己的序号索引,而 Pod 名称由 `hostname` 命令返回。
|
||||
然后将序数(带有数字偏移量以避免保留值)保存到 MySQL conf.d 目录中的文件 server-id.cnf。
|
||||
然后将序数(带有数字偏移量以避免保留值)保存到 MySQL `conf.d` 目录中的文件 `server-id.cnf`。
|
||||
这一操作将 StatefulSet 所提供的唯一、稳定的标识转换为 MySQL 服务器的 ID,
|
||||
而这些 ID 也是需要唯一性、稳定性保证的。
|
||||
|
||||
|
@ -290,7 +290,7 @@ Combined with the StatefulSet controller's
|
|||
this ensures the primary MySQL server is Ready before creating replicas, so they can begin
|
||||
replicating.
|
||||
-->
|
||||
通过将内容复制到 conf.d 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的
|
||||
通过将内容复制到 `conf.d` 中,`init-mysql` 容器中的脚本也可以应用 ConfigMap 中的
|
||||
`primary.cnf` 或 `replica.cnf`。
|
||||
由于示例部署结构由单个 MySQL 主节点和任意数量的副本节点组成,
|
||||
因此脚本仅将序数 `0` 指定为主节点,而将其他所有节点指定为副本节点。
|
||||
|
@ -341,7 +341,7 @@ Ready before starting Pod `N+1`.
|
|||
MySQL 本身不提供执行此操作的机制,因此本示例使用了一种流行的开源工具 Percona XtraBackup。
|
||||
在克隆期间,源 MySQL 服务器性能可能会受到影响。
|
||||
为了最大程度地减少对 MySQL 主服务器的影响,该脚本指示每个 Pod 从序号较低的 Pod 中克隆。
|
||||
可以这样做的原因是 StatefulSet 控制器始终确保在启动 Pod N + 1 之前 Pod N 已准备就绪。
|
||||
可以这样做的原因是 StatefulSet 控制器始终确保在启动 Pod `N + 1` 之前 Pod `N` 已准备就绪。
|
||||
|
||||
<!--
|
||||
### Starting replication
|
||||
|
|
|
@ -70,6 +70,9 @@ other = "你是否在搜索:"
|
|||
[examples_heading]
|
||||
other = "示例"
|
||||
|
||||
[feature_state]
|
||||
other = "特性状态:"
|
||||
|
||||
[feedback_heading]
|
||||
other = "反馈"
|
||||
|
||||
|
|