Merge branch 'main' into zqy2

pull/34748/head
sarazqy 2022-07-29 11:00:36 +08:00
commit b0d7042116
629 changed files with 58385 additions and 14408 deletions

.gitignore vendored

@ -29,6 +29,7 @@ nohup.out
# Hugo output
public/
resources/
.hugo_build.lock
# Netlify Functions build output
package-lock.json


@ -9,7 +9,7 @@ CONTAINER_ENGINE ?= docker
IMAGE_REGISTRY ?= gcr.io/k8s-staging-sig-docs
IMAGE_VERSION=$(shell scripts/hash-files.sh Dockerfile Makefile | cut -c 1-12)
CONTAINER_IMAGE = $(IMAGE_REGISTRY)/k8s-website-hugo:v$(HUGO_VERSION)-$(IMAGE_VERSION)
CONTAINER_RUN = $(CONTAINER_ENGINE) run --rm --interactive --tty --volume $(CURDIR):/src
CONTAINER_RUN = "$(CONTAINER_ENGINE)" run --rm --interactive --tty --volume "$(CURDIR):/src"
CCRED=\033[0;31m
CCEND=\033[0m
@ -95,7 +95,7 @@ docker-internal-linkcheck:
container-internal-linkcheck: link-checker-image-pull
$(CONTAINER_RUN) $(CONTAINER_IMAGE) hugo --config config.toml,linkcheck-config.toml --buildFuture --environment test
$(CONTAINER_ENGINE) run --mount type=bind,source=$(CURDIR),target=/test --rm wjdp/htmltest htmltest
$(CONTAINER_ENGINE) run --mount "type=bind,source=$(CURDIR),target=/test" --rm wjdp/htmltest htmltest
clean-api-reference: ## Clean all directories in API reference directory, preserve _index.md
rm -rf content/en/docs/reference/kubernetes-api/*/
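The quoting added above matters when the checkout lives in a path that contains spaces. A minimal sketch of the difference, using a hypothetical path and the stock `alpine` image rather than the site's Hugo image:

```bash
# Hypothetical checkout path containing a space.
CURDIR="/home/user/My Projects/website"

# Unquoted: the shell splits the path into two words, so docker receives a
# malformed --volume argument and the bind mount fails.
docker run --rm --volume $CURDIR:/src alpine ls /src

# Quoted (as in the updated CONTAINER_RUN and linkcheck targets): the whole
# path stays a single argument and the mount works.
docker run --rm --volume "$CURDIR:/src" alpine ls /src
```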


@ -200,6 +200,7 @@ aliases:
- devlware
- jhonmike
- rikatz
- stormqueen1990
- yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem


@ -4,6 +4,9 @@
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad you want to contribute!
- [Contributing to the docs](#contributing-to-the-docs)
- [Localizing the Kubernetes documentation `README.md`s](#localization-readmemds)
# Using this repository
You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
@ -40,6 +43,8 @@ make container-image
make container-serve
```
If you see errors, it probably means that the Hugo container did not have enough computing resources available. To solve it, increase the amount of CPU and memory usage allowed for Docker on your machine ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) / [Windows](https://docs.docker.com/docker-for-windows/#resources)).
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
@ -56,7 +61,45 @@ make serve
This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Building the API reference pages
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
To update the reference pages for a new Kubernetes release, follow these steps:
1. Pull in the `api-ref-generator` submodule:
```bash
git submodule update --init --recursive --depth 1
```
2. Update the Swagger specification:
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
3. In `api-ref-assets/config/`, adjust the files `toc.yaml` and `fields.yaml` to reflect the changes of the new release.
4. Next, build the pages:
```bash
make api-reference
```
Test the result locally by building and serving the site using a container image:
```bash
make container-image
make container-serve
```
In a web browser, go to <http://localhost:1313/docs/reference/kubernetes-api/> to review the API reference.
5. When all API changes are reflected in the configuration files `toc.yaml` and `fields.yaml`, open a pull request with the newly generated API reference pages.
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
Hugo is shipped in two sets of binaries for technical reasons. The current website runs based on the **Hugo Extended** version only. On the [release page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
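A quick check from a shell (the exact version string depends on your installation):

```bash
# The extended build includes "+extended" in its version output.
hugo version
# hugo v0.xx.x+extended linux/amd64 BuildDate=...
```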
@ -97,17 +140,17 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
This works for Catalina as well as Mojave macOS.
# Get involved with SIG Docs
Learn more about the SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs) [Get an invite for this Slack](https://slack.k8s.io/)
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Get an invite for this Slack](https://slack.k8s.io/)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
# Contributing to the docs
# Contributing to the docs {#contributing-to-the-docs}
Click the **Fork** button in the upper-right corner of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
@ -124,7 +167,15 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
# Localizing the Kubernetes documentation `README.md`s
### New contributor ambassadors
If you need help at any point when contributing, the [New Contributor Ambassadors](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador) are a good point of contact. These are SIG Docs approvers whose responsibilities include mentoring new contributors and helping them through their first few pull requests. The best place to contact the New Contributor Ambassadors is the [Kubernetes Slack](https://slack.k8s.io/). The current New Contributor Ambassadors for SIG Docs:
| Name | Slack | GitHub |
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma | @arsh | @RinkiyaKeDad |
# Localizing the Kubernetes documentation `README.md`s {#localization-readmemds}
## Korean
@ -135,6 +186,7 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
* 손석호 ([GitHub - @seokho-son](https://github.com/seokho-son))
* [Slack channel](https://kubernetes.slack.com/messages/kubernetes-docs-ko)
# Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/ko.md).


@ -80,7 +80,7 @@ To build the site in a container, run the following to build the container image
To build the site in a container, build the container image and run it with the following commands:
```bash
make container-image
# You can set $CONTAINER_ENGINE to the name of any Docker-like container tool
make container-serve
```
@ -257,6 +257,51 @@ This works for Catalina as well as Mojave macOS.
-->
This works for Catalina as well as Mojave macOS.
### Troubleshooting timeouts when running make container-image in some regions
The symptom looks like this:
```shell
langs/language.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
langs/language.go:24:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:21:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:22:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
common/text/transform.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout
hugolib/integrationtest_builder.go:29:2: golang.org/x/tools@v0.1.11: Get "https://proxy.golang.org/golang.org/x/tools/@v/v0.1.11.zip": dial tcp 142.251.42.241:443: i/o timeout
deploy/google.go:24:2: google.golang.org/api@v0.76.0: Get "https://proxy.golang.org/google.golang.org/api/@v/v0.76.0.zip": dial tcp 142.251.43.17:443: i/o timeout
parser/metadecoders/decoder.go:32:2: gopkg.in/yaml.v2@v2.4.0: Get "https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.4.0.zip": dial tcp 142.251.42.241:443: i/o timeout
The command '/bin/sh -c mkdir $HOME/src && cd $HOME/src && curl -L https://github.com/gohugoio/hugo/archive/refs/tags/v${HUGO_VERSION}.tar.gz | tar -xz && cd "hugo-${HUGO_VERSION}" && go install --tags extended' returned a non-zero code: 1
make: *** [Makefile:69container-image] error 1
```
Modify the `Dockerfile` to add a network proxy to it. The change looks like this:
```dockerfile
...
FROM golang:1.18-alpine
LABEL maintainer="Luc Perkins <lperkins@linuxfoundation.org>"
ENV GO111MODULE=on # line to add (1)
ENV GOPROXY=https://proxy.golang.org,direct # line to add (2)
RUN apk add --no-cache \
curl \
gcc \
g++ \
musl-dev \
build-base \
libc6-compat
ARG HUGO_VERSION
...
```
将 "https://proxy.golang.org" 替换为本地可以使用的代理地址。
**注意:** 此部分仅适用于中国大陆
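Before editing the `Dockerfile`, one way to sanity-check a proxy value (an assumption on our side, not part of the upstream docs) is to query the default module proxy setting from the same base image and confirm your chosen endpoint is reachable:

```bash
# Show the default GOPROXY baked into the builder image.
docker run --rm golang:1.18-alpine go env GOPROXY

# Check that the proxy endpoint you plan to use answers at all.
curl -sI https://proxy.golang.org/ | head -n 1
```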
<!--
## Get involved with SIG Docs


@ -21,15 +21,15 @@ The add-ons in each category are sorted alphabetically - the order
* [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking and network security with Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a range of networking options so that you can choose the right one for your use case. This includes non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if you are using Istio & Envoy) applications at the service mesh layer.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico to provide networking and network policy.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico to provide networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is an L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation modes are supported. Cilium can also run on top of other CNI plugins.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases, along with a rich policy framework. The Contiv project is fully [open source](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestrators such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers (or pods) and bare metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a network solution that supports multiple networks in Kubernetes.
* Multus is a multi plugin for multiple network support, to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV-, DPDK-, OVS-DPDK- and VPP-based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and an orchestrator such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a multi plugin for multiple network support, to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV-, DPDK-, OVS-DPDK- and VPP-based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) provides integration between VMware NSX-T and an orchestrator such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes pods and non-Kubernetes environments, including visibility and security monitoring.
* [Romana](https://github.com/romana/romana) is a layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Details on installing it as a kubeadm add-on are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, works on both sides of a network partition, and does not require an external database.


@ -16,7 +16,7 @@ The `image` property of a container supports the same syntax as the
## Updating images
The default pull policy is `IfNotPresent`, which causes the kubelet to skip pulling images that are already present on a node.
The default pull policy is `IfNotPresent`, which causes the image to be pulled only if it is not yet available locally.
If you would instead like to always force a pull, you can do one of the following:
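For instance, one of those ways is to set the container's `imagePullPolicy` to `Always`; a minimal sketch from the command line (the pod name and image tag are placeholders):

```bash
# Create a Pod whose container always pulls its image, even if a copy is
# already cached on the node.
kubectl run example --image=nginx:1.23 --image-pull-policy=Always
```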


@ -54,14 +54,14 @@ that are available to developers and users. Users can write their own
controllers with their [own APIs](/docs/concepts/api-extension/custom-resources/) that can be targeted by a
general-purpose [command-line tool](/docs/user-guide/kubectl-overview/).
This [design](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) has enabled a number of other systems to build on top of Kubernetes.
This [design](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) has enabled a number of other systems to build on top of Kubernetes.
## What Kubernetes is not
Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the
container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings,
such as deployment, scaling, load balancing, logging, and monitoring.
However, Kubernetes is not monolithic, and these default solutions are optional and modularly extensible.
However, Kubernetes is not monolithic, and these default solutions are optional and modularly extensible.
Kubernetes provides the building blocks for building developer platforms, but preserves user
choice and flexibility where it matters.
@ -79,7 +79,7 @@ Kubernetes:
cluster storage systems (e.g. Ceph) as built-in services. Such components can
run on Kubernetes and/or be accessed by applications running on Kubernetes through
portable mechanisms such as the Open Service Broker.
* Does not provide a configuration language or configuration system (e.g.[jsonnet](https://github.com/google/jsonnet)).
* Does not provide a configuration language or configuration system (e.g. [jsonnet](https://github.com/google/jsonnet)).
It provides a declarative API that can be targeted by arbitrary forms of declarative specifications.
* Does not provide comprehensive machine configuration, maintenance, management, or self-healing systems.
@ -135,17 +135,17 @@ Summary of container benefits:
* **Dev and Ops separation of concerns**:
Create application container images at build/release time rather than at deployment time,
thereby decoupling applications from infrastructure.
* **Observability**
* **Observability**:
Not only OS-level information and metrics are surfaced,
but also application health and other signals.
* **Environmental consistency across development, testing, and production**:
Runs the same on a laptop as it does in the cloud.
* **Cloud and OS distribution portability**:
* **Cloud and OS distribution portability**:
Runs on Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine, and anywhere else.
* **Application-centric management**:
Raises the level of abstraction from running an OS on virtual hardware
to running an application on an OS using logical resources.
* **Loosely coupled, distributed, elastic, liberated [micro-services](https://martinfowler.com/articles/microservices.html)**:
* **Loosely coupled, distributed, elastic, liberated [microservices](https://martinfowler.com/articles/microservices.html)**:
Applications are broken into smaller, independent pieces and can be deployed
and managed dynamically -- not a monolithic stack running on one big single-purpose machine.
* **Resource isolation**:


@ -56,6 +56,6 @@ Officially supported client libraries:
## Design documentation
An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).
An archive of the design docs for Kubernetes functionality. Good starting points are [Kubernetes Architecture](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) and [Kubernetes Design Overview](https://git.k8s.io/community/contributors/design-proposals).


@ -424,7 +424,7 @@ export no_proxy=$no_proxy,$(minikube ip)
Minikube uses [libmachine](https://github.com/docker/machine/tree/master/libmachine) for provisioning VMs, and [kubeadm](https://github.com/kubernetes/kubeadm) to bring up a Kubernetes cluster.
For more information about Minikube, see the [proposal](https://git.k8s.io/community/contributors/design-proposals/cluster-lifecycle/local-cluster-ux.md).
For more information about Minikube, see the [proposal](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/local-cluster-ux.md).
## Additional links


@ -11,7 +11,7 @@ weight: 90
<!-- overview -->
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with support for [custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md), on metrics provided by the application). Note that Horizontal Pod Autoscaling does not apply to objects that cannot be scaled, for example DaemonSets.
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with support for [custom metrics](https://git.k8s.io/design-proposals-archive/instrumentation/custom-metrics-api.md), on metrics provided by the application). Note that Horizontal Pod Autoscaling does not apply to objects that cannot be scaled, for example DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
The resource determines the behavior of the controller.
@ -46,7 +46,7 @@ Using metrics from Heapster has been deprecated since Kubernetes version 1.11
See [Support for metrics APIs](#unterstützung-der-metrik-apis) for more details.
The autoscaler accesses the corresponding scalable controllers (such as replication controllers, deployments, and replica sets) via the scale sub-resource. Scale is an interface that lets you dynamically set the number of replicas and inspect each of their current states. More details on the scale sub-resource can be found [here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
The autoscaler accesses the corresponding scalable controllers (such as replication controllers, deployments, and replica sets) via the scale sub-resource. Scale is an interface that lets you dynamically set the number of replicas and inspect each of their current states. More details on the scale sub-resource can be found [here](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).
### Algorithm details
@ -90,7 +90,7 @@ The current stable version, which only includes support for automatic
The beta version, which includes support for scaling on memory and custom metrics, can be found in `autoscaling/v2beta2`. The fields newly introduced in `autoscaling/v2beta2` are preserved as annotations when working with `autoscaling/v1`.
More details about the API object can be found in the [HorizontalPodAutoscaler object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
More details about the API object can be found in the [HorizontalPodAutoscaler object](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
## Support for the Horizontal Pod Autoscaler in kubectl
@ -166,7 +166,7 @@ By default, the HorizontalPodAutoscaler controller retrieves metrics from a
## {{% heading "whatsnext" %}}
* Design document [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
* Design document [Horizontal Pod Autoscaling](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md).
* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale) — see the example below.
* Using the [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).
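As a concrete illustration of the kubectl support referenced above (the deployment name and bounds are placeholders taken from the walkthrough):

```bash
# Create an HPA that targets roughly 50% average CPU with 1 to 10 replicas,
# then inspect its current state.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
kubectl get hpa
```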


@ -16,7 +16,7 @@ It groups containers that make up an application into logical units for easy man
{{% blocks/feature image="scalable" %}}
#### Planet Scale
Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team.
Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team.
{{% /blocks/feature %}}
@ -43,12 +43,12 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon Europe on April 17-21, 2023</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>


@ -41,7 +41,7 @@ These repo labels let reviewers filter for PRs and issues by language. For examp
### Team review
L10n teams can now review and approve their own PRs. For example, review and approval permissions for English are [assigned in an OWNERS file](https://github.com/kubernetes/website/blob/master/content/en/OWNERS) in the top subfolder for English content.
L10n teams can now review and approve their own PRs. For example, review and approval permissions for English are [assigned in an OWNERS file](https://github.com/kubernetes/website/blob/main/content/en/OWNERS) in the top subfolder for English content.
Adding `OWNERS` files to subdirectories lets localization teams review and approve changes without requiring a rubber stamp approval from reviewers who may lack fluency.


@ -67,7 +67,7 @@ Let's see an example of a cluster to understand this API.
As the feature name "PodTopologySpread" implies, the basic usage of this feature
is to run your workload in an absolutely even manner (maxSkew=1), or a relatively
even manner (maxSkew>=2). See the [official
document](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
document](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
for more details.
In addition to this basic usage, there are some advanced usage examples that


@ -70,7 +70,7 @@ To correct the latter issue, we now employ a "hunt and peck" approach to removin
### 1. Upgrade to kubernetes 1.18 and make use of Pod Topology Spread Constraints
While this seems like it could have been the perfect solution, at the time of writing Kubernetes 1.18 was unavailable on the two most common managed Kubernetes services in public cloud, EKS and GKE.
Furthermore, [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) were still a [beta feature in 1.18](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
Furthermore, [pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) were still a beta feature in 1.18 which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available.
The entire endeavour was concerningly reminiscent of checking [caniuse.com](https://caniuse.com/) when Internet Explorer 8 was still around.
### 2. Deploy a statefulset _per zone_.


@ -1,9 +1,9 @@
---
layout: blog
title: "Meet Our Contributors - APAC (India region)"
date: 2022-01-10T12:00:00+0000
date: 2022-01-10
slug: meet-our-contributors-india-ep-01
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
canonicalUrl: https://www.kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
---
**Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
@ -19,7 +19,7 @@ Welcome to the first episode of the APAC edition of the "Meet Our Contributors"
In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes projects in a variety of ways, as well as being the leaders or maintainers of numerous community initiatives.
💫 *Let's get started, so without further ado…*
💫 *Let's get started, so without further ado…*
## [Arsh Sharma](https://github.com/RinkiyaKeDad)
@ -39,7 +39,7 @@ To the newcomers, Arsh helps plan their early contributions sustainably.
Kunal Kushwaha is a core member of the Kubernetes marketing council. He is also a CNCF ambassador and one of the founders of the [CNCF Students Program](https://community.cncf.io/cloud-native-students/). He also served as a Communications role shadow during the 1.22 release cycle.
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
As an open-source enthusiast, he believes that diverse participation in the community is beneficial since it introduces new perspectives and opinions and respect for one's peers. He has worked on various open-source projects, and his participation in communities has considerably assisted his development as a developer.
@ -103,4 +103,3 @@ If you have any recommendations/suggestions for who we should interview next, pl
We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋


@ -86,7 +86,7 @@ in Kubernetes 1.24.
If you're running Kubernetes v1.24 or later, see [Can I still use Docker Engine as my container runtime?](#can-i-still-use-docker-engine-as-my-container-runtime).
(Remember, you can switch away from the dockershim if you're using any supported Kubernetes release; from release v1.24, you
**must** switch as Kubernetes no longer incluides the dockershim).
**must** switch as Kubernetes no longer includes the dockershim).
[kubelet]: /docs/reference/command-line-tools-reference/kubelet/


@ -1,7 +1,7 @@
---
layout: blog
title: "Meet Our Contributors - APAC (Aus-NZ region)"
date: 2022-03-16T12:00:00+0000
date: 2022-03-16
slug: meet-our-contributors-au-nz-ep-02
canonicalUrl: https://www.kubernetes.dev/blog/2022/03/14/meet-our-contributors-au-nz-ep-02/
---
@ -60,19 +60,13 @@ Nick Young works at VMware as a technical lead for Contour, a CNCF ingress contr
His contribution path was notable in that he began working on major areas of the Kubernetes project early on, skewing his trajectory.
He asserts the best thing a new contributor can do is to "start contributing". Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions.
He asserts the best thing a new contributor can do is to "start contributing". Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions.
> _Just being active and contributing will get you a long way. Once you've been active for a while, you'll find that you're able to answer questions, which will mean you're asked questions, and before you know it you are an expert._
---
If you have any recommendations/suggestions for who we should interview next, please let us know in #sig-contribex. Your suggestions would be much appreciated. We're thrilled to have additional folks assisting us in reaching out to even more wonderful individuals of the community.
We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋


@ -0,0 +1,178 @@
---
layout: blog
title: Kubernetes Gateway API Graduates to Beta
date: 2022-07-13
slug: gateway-api-graduates-to-beta
canonicalUrl: https://gateway-api.sigs.k8s.io/blog/2022/graduating-to-beta/
---
**Authors:** Shane Utt (Kong), Rob Scott (Google), Nick Young (VMware), Jeff Apple (HashiCorp)
We are excited to announce the v0.5.0 release of Gateway API. For the first
time, several of our most important Gateway API resources are graduating to
beta. Additionally, we are starting a new initiative to explore how Gateway API
can be used for mesh and introducing new experimental concepts such as URL
rewrites. We'll cover all of this and more below.
## What is Gateway API?
Gateway API is a collection of resources centered around [Gateway][gw] resources
(which represent the underlying network gateways / proxy servers) to enable
robust Kubernetes service networking through expressive, extensible and
role-oriented interfaces that are implemented by many vendors and have broad
industry support.
Originally conceived as a successor to the well known [Ingress][ing] API, the
benefits of Gateway API include (but are not limited to) explicit support for
many commonly used networking protocols (e.g. `HTTP`, `TLS`, `TCP`, `UDP`) as
well as tightly integrated support for Transport Layer Security (TLS). The
`Gateway` resource in particular enables implementations to manage the lifecycle
of network gateways as a Kubernetes API.
If you're an end-user interested in some of the benefits of Gateway API we
invite you to jump in and find an implementation that suits you. At the time of
this release there are over a dozen [implementations][impl] for popular API
gateways and service meshes and guides are available to start exploring quickly.
[gw]:https://gateway-api.sigs.k8s.io/api-types/gateway/
[ing]:https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
### Getting started
Gateway API is an official Kubernetes API like
[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/).
Gateway API represents a superset of Ingress functionality, enabling more
advanced concepts. Similar to Ingress, there is no default implementation of
Gateway API built into Kubernetes. Instead, there are many different
[implementations][impl] available, providing significant choice in terms of underlying
technologies while providing a consistent and portable experience.
Take a look at the [API concepts documentation][concepts] and check out some of
the [Guides][guides] to start familiarizing yourself with the APIs and how they
work. When you're ready for a practical application open the [implementations
page][impl] and select an implementation that belongs to an existing technology
you may already be familiar with or the one your cluster provider uses as a
default (if applicable). Gateway API is a [Custom Resource Definition
(CRD)][crd] based API so you'll need to [install the CRDs][install-crds] onto a
cluster to use the API.
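As a sketch of what that looks like in practice (the release-asset URL below is an assumption based on the v0.5.0 release layout; adjust it to the release and channel you actually want):

```bash
# Install the standard-channel CRDs for Gateway API v0.5.0.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.5.0/standard-install.yaml

# Verify that the new API kinds are registered.
kubectl get crd | grep gateway.networking.k8s.io
```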
If you're specifically interested in helping to contribute to Gateway API, we
would love to have you! Please feel free to [open a new issue][issue] on the
repository, or join in the [discussions][disc]. Also check out the [community
page][community] which includes links to the Slack channel and community meetings.
[crd]:https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
[concepts]:https://gateway-api.sigs.k8s.io/concepts/api-overview/
[guides]:https://gateway-api.sigs.k8s.io/guides/getting-started/
[impl]:https://gateway-api.sigs.k8s.io/implementations/
[install-crds]:https://gateway-api.sigs.k8s.io/guides/getting-started/#install-the-crds
[issue]:https://github.com/kubernetes-sigs/gateway-api/issues/new/choose
[disc]:https://github.com/kubernetes-sigs/gateway-api/discussions
[community]:https://gateway-api.sigs.k8s.io/contributing/community/
## Release highlights
### Graduation to beta
The `v0.5.0` release is particularly historic because it marks the growth in
maturity to a beta API version (`v1beta1`) release for some of the key APIs:
- [GatewayClass](https://gateway-api.sigs.k8s.io/api-types/gatewayclass/)
- [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/)
- [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/)
This achievement was marked by the completion of several graduation criteria:
- API has been [widely implemented][impl].
- Conformance tests provide basic coverage for all resources and have multiple implementations passing tests.
- Most of the API surface is actively being used.
- Kubernetes SIG Network API reviewers have approved graduation to beta.
For more information on Gateway API versioning, refer to the [official
documentation](https://gateway-api.sigs.k8s.io/concepts/versioning/). To see
what's in store for future releases check out the [next steps](#next-steps)
section.
[impl]:https://gateway-api.sigs.k8s.io/implementations/
### Release channels
This release introduces the `experimental` and `standard` [release channels][ch]
which enable a better balance of maintaining stability while still enabling
experimentation and iterative development.
The `standard` release channel includes:
- resources that have graduated to beta
- fields that have graduated to standard (no longer considered experimental)
The `experimental` release channel includes everything in the `standard` release
channel, plus:
- `alpha` API resources
- fields that are considered experimental and have not graduated to `standard` channel
Release channels are used internally to enable iterative development with
quick turnaround, and externally to indicate feature stability to implementors
and end-users.
For this release we've added the following experimental features:
- [Routes can attach to Gateways by specifying port numbers](https://gateway-api.sigs.k8s.io/geps/gep-957/)
- [URL rewrites and path redirects](https://gateway-api.sigs.k8s.io/geps/gep-726/)
[ch]:https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard
### Other improvements
For an exhaustive list of changes included in the `v0.5.0` release, please see
the [v0.5.0 release notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.5.0).
## Gateway API for service mesh: the GAMMA Initiative
Some service mesh projects have [already implemented support for the Gateway
API](https://gateway-api.sigs.k8s.io/implementations/). Significant overlap
between the Service Mesh Interface (SMI) APIs and the Gateway API has [inspired
discussion in the SMI
community](https://github.com/servicemeshinterface/smi-spec/issues/249) about
possible integration.
We are pleased to announce that the service mesh community, including
representatives from Cilium Service Mesh, Consul, Istio, Kuma, Linkerd, NGINX
Service Mesh and Open Service Mesh, is coming together to form the [GAMMA
Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/), a dedicated
workstream within the Gateway API subproject focused on Gateway API for Mesh
Management and Administration.
This group will deliver [enhancement
proposals](https://gateway-api.sigs.k8s.io/v1beta1/contributing/gep/) consisting
of resources, additions, and modifications to the Gateway API specification for
mesh and mesh-adjacent use-cases.
This work has begun with [an exploration of using Gateway API for
service-to-service
traffic](https://docs.google.com/document/d/1T_DtMQoq2tccLAtJTpo3c0ohjm25vRS35MsestSL9QU/edit#heading=h.jt37re3yi6k5)
and will continue with enhancement in areas such as authentication and
authorization policy.
## Next steps
As we continue to mature the API for production use cases, here are some of the highlights of what we'll be working on for the next Gateway API releases:
- [GRPCRoute][gep1016] for [gRPC][grpc] traffic routing
- [Route delegation][pr1085]
- Layer 4 API maturity: Graduating [TCPRoute][tcpr], [UDPRoute][udpr] and
[TLSRoute][tlsr] to beta
- [GAMMA Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/) - Gateway API for Service Mesh
If there's something on this list you want to get involved in, or there's
something not on this list that you want to advocate for to get on the roadmap
please join us in the #sig-network-gateway-api channel on Kubernetes Slack or our weekly [community calls](https://gateway-api.sigs.k8s.io/contributing/community/#meetings).
[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/master/site-src/geps/gep-1016.md
[grpc]:https://grpc.io/
[pr1085]:https://github.com/kubernetes-sigs/gateway-api/pull/1085
[tcpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tcproute_types.go
[udpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/udproute_types.go
[tlsr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tlsroute_types.go
[community]:https://gateway-api.sigs.k8s.io/contributing/community/


@ -33,7 +33,7 @@ are allowed.
Nodes should be provisioned with the public root certificate for the cluster such that they can
connect securely to the API server along with valid client credentials. A good approach is that the
client credentials provided to the kubelet are in the form of a client certificate. See
[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
for automated provisioning of kubelet client certificates.
Pods that wish to connect to the API server can do so securely by leveraging a service account so


@ -479,29 +479,24 @@ these pods will be stuck in terminating status on the shutdown node forever.
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service`
with either `NoExecute` or `NoSchedule` effect to a Node marking it out-of-service.
If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/
command-line-tools-reference/feature-gates/) is enabled on
`kube-controller-manager`, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if there are no matching tolerations on
it and volume detach operations for the pods terminating on the node will happen
immediately. This allows the Pods on the out-of-service node to recover quickly on a
different node.
If the `NodeOutOfServiceVolumeDetach`[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
detach operations for the pods terminating on the node will happen immediately. This allows the
Pods on the out-of-service node to recover quickly on a different node.
During a non-graceful shutdown, Pods are terminated in the two phases:
1. Force delete the Pods that do not have matching `out-of-service` tolerations.
2. Immediately perform detach volume operation for such pods.
{{< note >}}
- Before adding the taint `node.kubernetes.io/out-of-service` , it should be verified
that the node is already in shutdown or power off state (not in the middle of
restarting).
that the node is already in shutdown or power off state (not in the middle of
restarting).
- The user is required to manually remove the out-of-service taint after the pods are
moved to a new node and the user has checked that the shutdown node has been
recovered since the user was the one who originally added the taint.
moved to a new node and the user has checked that the shutdown node has been
recovered since the user was the one who originally added the taint.
{{< /note >}}
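For reference, adding and later removing the taint could look like this (the node name is a placeholder and the taint value is arbitrary):

```bash
# After confirming the node is really shut down, mark it out-of-service.
kubectl taint nodes node-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute

# Once the pods have recovered on another node, remove the taint again.
kubectl taint nodes node-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
```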
### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown}


@ -11,31 +11,37 @@ no_list: true
---
<!-- overview -->
The cluster administration overview is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/).
<!-- body -->
## Planning a cluster
See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*.
See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure
Kubernetes clusters. The solutions listed in this article are called *distros*.
{{< note >}}
Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes.
{{< /note >}}
{{< note >}}
Not all distros are actively maintained. Choose distros which have been tested with a recent
version of Kubernetes.
{{< /note >}}
Before choosing a guide, here are some considerations:
- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs.
- Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
latter, choose an actively-developed distro. Some distros only use binary releases, but
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.
- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability,
multi-node cluster? Choose distros best suited for your needs.
- Will you be using **a hosted Kubernetes cluster**, such as
[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**?
- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly
support hybrid clusters. Instead, you can set up multiple clusters.
- **If you are configuring Kubernetes on-premises**, consider which
[networking model](/docs/concepts/cluster-administration/networking/) fits best.
- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**?
- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**?
If the latter, choose an actively-developed distro. Some distros only use binary releases, but
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.
## Managing a cluster
@ -45,29 +51,43 @@ Before choosing a guide, here are some considerations:
## Securing a cluster
* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to generate certificates using different tool chains.
* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to
generate certificates using different tool chains.
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node.
* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes
the environment for Kubelet managed containers on a Kubernetes node.
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes how Kubernetes implements access control for its own API.
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes
how Kubernetes implements access control for its own API.
* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the various authentication options.
* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in
Kubernetes, including the various authentication options.
* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from authentication, and controls how HTTP calls are handled.
* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from
authentication, and controls how HTTP calls are handled.
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization.
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
explains plug-ins which intercepts requests to the Kubernetes API server after authentication
and authorization.
* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters .
* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/)
describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.
* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes' audit logs.
* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes'
audit logs.
### Securing the kubelet
* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
* [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/reference/acess-authn-authz/kubelet-authn-authz/)
* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
* [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)
## Optional Cluster Services
* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve
a DNS name directly to a Kubernetes service.
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/)
explains how logging in Kubernetes works and how to implement it.
* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.


@ -332,7 +332,7 @@ container of a Pod can specify either or both of the following:
Limits and requests for `ephemeral-storage` are measured in byte quantities.
You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following quantities all represent roughly the same value:
- `128974848`
@ -340,6 +340,10 @@ Mi, Ki. For example, the following quantities all represent roughly the same val
- `129M`
- `123Mi`
Pay attention to the case of the suffixes. If you request `400m` of ephemeral-storage, this is a request
for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`)
or 400 megabytes (`400M`).
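A quick way to sanity-check these suffixes outside of Kubernetes (assuming GNU coreutils `numfmt` is available) is:

```bash
# SI suffixes are powers of ten; the "-i" variants are powers of two.
numfmt --from=si 129M       # 129000000
numfmt --from=iec-i 123Mi   # 128974848
```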
In the following example, the Pod has two containers. Each container has a request of
2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
@ -799,7 +803,7 @@ memory limit (and possibly request) for that container.
* Get hands-on experience [assigning Memory resources to containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
* Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
* Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
and its [resource requirements](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
* Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS
* Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)


@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN
## Using Labels
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app.kubernetes.io/name: MyApp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).
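With labels like those in place, selecting the frontend Pods of the app becomes a one-liner (names follow the example labels above):

```bash
# List Pods that carry both the app name and the frontend tier label.
kubectl get pods -l 'app.kubernetes.io/name=MyApp,tier=frontend'
```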


@ -116,7 +116,7 @@ Runtime handlers are configured through containerd's configuration at
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
```
See containerd's [config documentation](https://github.com/containerd/cri/blob/master/docs/config.md)
See containerd's [config documentation](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)
for more details:
#### {{< glossary_tooltip term_id="cri-o" >}}


@ -8,21 +8,29 @@ card:
---
<!-- overview -->
This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in `.yaml` format.
This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can
express them in `.yaml` format.
<!-- body -->
## Understanding Kubernetes objects {#kubernetes-objects}
*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these
entities to represent the state of your cluster. Specifically, they can describe:
* What containerized applications are running (and on which nodes)
* The resources available to those applications
* The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's *desired state*.
A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system
will constantly work to ensure that object exists. By creating an object, you're effectively
telling the Kubernetes system what you want your cluster's workload to look like; this is your
cluster's *desired state*.
To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use the Kubernetes API directly in your own programs using one of the [Client Libraries](/docs/reference/using-api/client-libraries/).
To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the
[Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line
interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use
the Kubernetes API directly in your own programs using one of the
[Client Libraries](/docs/reference/using-api/client-libraries/).
### Object Spec and Status
@ -48,11 +56,17 @@ the status to match your spec. If any of those instances should fail
between spec and status by making a correction--in this case, starting
a replacement instance.
For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
For more information on the object spec, status, and metadata, see the
[Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
### Describing a Kubernetes object
When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via `kubectl`), that API request must include that information as JSON in the request body. **Most often, you provide the information to `kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API request.
When you create an object in Kubernetes, you must provide the object spec that describes its
desired state, as well as some basic information about the object (such as a name). When you use
the Kubernetes API to create the object (either directly or via `kubectl`), that API request must
include that information as JSON in the request body. **Most often, you provide the information to
`kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API
request.
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:
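The referenced manifest itself is not reproduced in this diff; a minimal Deployment along those lines might look like the sketch below, where the name, replica count, and image are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # assumed name
spec:
  replicas: 2              # run 2 Pods matching the template
  selector:
    matchLabels:
      app: nginx
  template:                # the Pod template used to create the replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2   # assumed image
          ports:
            - containerPort: 80
```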
@ -81,7 +95,9 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to
* `metadata` - Data that helps uniquely identify the object, including a `name` string, `UID`, and optional `namespace`
* `spec` - What state you desire for the object
The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/) can help you find the spec format for all of the objects you can create using Kubernetes.
The precise format of the object `spec` is different for every Kubernetes object, and contains
nested fields specific to that object. The [Kubernetes API Reference](/docs/reference/kubernetes-api/)
can help you find the spec format for all of the objects you can create using Kubernetes.
For example, see the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
for the Pod API reference.
@ -103,5 +119,3 @@ detail the structure of that `.status` field, and its content for each different
* Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes.
* [Using the Kubernetes API](/docs/reference/using-api/) explains some more API concepts.

View File

@ -169,9 +169,9 @@ Disadvantages compared to imperative object configuration:
## {{% heading "whatsnext" %}}
- [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
- [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
- [Managing Kubernetes Objects Using Kustomize (Declarative)](/docs/tasks/manage-kubernetes-objects/kustomization/)
- [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [Declarative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/declarative-config/)
- [Declarative Management of Kubernetes Objects Using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/)
- [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
- [Kubectl Book](https://kubectl.docs.kubernetes.io)
- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)

View File

@ -23,6 +23,7 @@ of terminating one or more Pods on Nodes.
* [Kubernetes Scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/)
* [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/)
* [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
* [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/)
* [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework)
* [Scheduler Performance Tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)

View File

@ -11,24 +11,27 @@ weight: 20
<!-- overview -->
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
{{< glossary_tooltip text="node(s)" term_id="node" >}}.
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is
_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}},
or to _prefer_ to run on particular nodes.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
Often, you do not need to set any such constraints; the
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement
(for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
services that communicate a lot into the same availability zone.
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it,
or to co-locate Pods from two different services that communicate a lot into the same availability zone.
<!-- body -->
You can use any of the following methods to choose where Kubernetes schedules
specific Pods:
* [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
* [Affinity and anti-affinity](#affinity-and-anti-affinity)
* [nodeName](#nodename) field
* [Pod topology spread constraints](#pod-topology-spread-constraints)
## Node labels {#built-in-node-labels}
@ -170,7 +173,7 @@ For example, consider the following Pod spec:
{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}
If there are two possible nodes that match the
`requiredDuringSchedulingIgnoredDuringExecution` rule, one with the
`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the
`label-1:key-1` label and another with the `label-2:key-2` label, the scheduler
considers the `weight` of each node and adds the weight to the other scores for
that node, and schedules the Pod onto the node with the highest final score.
@ -337,13 +340,15 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
level collections such as ReplicaSets, StatefulSets, Deployments, etc. These
rules allow you to configure that a set of workloads should
be co-located in the same defined topology, eg., the same node.
be co-located in the same defined topology; for example, preferring to place two related
Pods onto the same node.
Take, for example, a three-node cluster running a web application with an
in-memory cache like redis. You could use inter-pod affinity and anti-affinity
to co-locate the web servers with the cache as much as possible.
For example: imagine a three-node cluster. You use the cluster to run a web application
and also an in-memory cache (such as Redis). For this example, also assume that latency between
the web application and the memory cache should be as low as is practical. You could use inter-pod
affinity and anti-affinity to co-locate the web servers with the cache as much as possible.
In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The
`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
with the `app=store` label on a single node. This creates each cache in a
separate node.
@ -378,10 +383,10 @@ spec:
image: redis:3.2-alpine
```
The following Deployment for the web servers creates replicas with the label `app=web-store`. The
Pod affinity rule tells the scheduler to place each replica on a node that has a
Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler
to avoid placing multiple `app=web-store` servers on a single node.
The following example Deployment for the web servers creates replicas with the label `app=web-store`.
The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod
with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place
multiple `app=web-store` servers on a single node.
```yaml
apiVersion: apps/v1
@ -430,6 +435,10 @@ where each web server is co-located with a cache, on three separate nodes.
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1* | *cache-2* | *cache-3* |
The overall effect is that each cache instance is likely to be accessed by a single client that
is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
You might have other reasons to use Pod anti-affinity.
See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
for an example of a StatefulSet configured with anti-affinity for high
availability, using the same technique as this example.
@ -468,6 +477,16 @@ spec:
The above Pod will only run on the node `kube-01`.
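The manifest this refers to is elided from the diff above; a minimal sketch of such a Pod, with an assumed container image, would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx      # assumed image
  nodeName: kube-01     # bypasses the scheduler and pins the Pod to this node
```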
## Pod topology spread constraints
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}}
are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other
topology domains that you define. You might do this to improve performance, expected availability, or
overall utilization.
Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
to learn more about how these work.
## {{% heading "whatsnext" %}}
* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).

View File

@ -83,7 +83,7 @@ of the scheduler:
## {{% heading "whatsnext" %}}
* Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
* Read the [kube-scheduler config (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) reference
* Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)

View File

@ -66,8 +66,8 @@ the signal.
The value for `memory.available` is derived from the cgroupfs instead of tools
like `free -m`. This is important because `free -m` does not work in a
container, and if users use the [node
allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) feature, out of resource decisions
container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
feature, out of resource decisions
are made local to the end user Pod part of the cgroup hierarchy as well as the
root node. This [script](/examples/admin/resource/memory-available.sh)
reproduces the same set of steps that the kubelet performs to calculate
@ -85,10 +85,15 @@ The kubelet supports the following filesystem partitions:
Kubelet auto-discovers these filesystems and ignores other filesystems. Kubelet
does not support other configurations.
{{<note>}}
Some kubelet garbage collection features are deprecated in favor of eviction.
For a list of the deprecated features, see [kubelet garbage collection deprecation](/docs/concepts/cluster-administration/kubelet-garbage-collection/#deprecation).
{{</note>}}
Some kubelet garbage collection features are deprecated in favor of eviction:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of the container's context |
| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of the container's context |
| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of the container's context |
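As an illustration of the replacement flags listed above, image garbage-collection pressure can be expressed as eviction thresholds in a kubelet configuration file. This is a minimal sketch; the threshold values are assumptions, not recommended defaults:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  imagefs.available: "15%"      # roughly replaces --image-gc-high-threshold
evictionMinimumReclaim:
  imagefs.available: "5%"       # roughly replaces --image-gc-low-threshold
```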
### Eviction thresholds
@ -211,7 +216,7 @@ the kubelet frees up disk space in the following order:
If the kubelet's attempts to reclaim node-level resources don't bring the eviction
signal below the threshold, the kubelet begins to evict end-user pods.
The kubelet uses the following parameters to determine pod eviction order:
The kubelet uses the following parameters to determine the pod eviction order:
1. Whether the pod's resource usage exceeds requests
1. [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
@ -314,7 +319,7 @@ The kubelet sets an `oom_score_adj` value for each container based on the QoS fo
{{<note>}}
The kubelet also sets an `oom_score_adj` value of `-997` for containers in Pods that have
`system-node-critical` {{<glossary_tooltip text="Priority" term_id="pod-priority">}}
`system-node-critical` {{<glossary_tooltip text="Priority" term_id="pod-priority">}}.
{{</note>}}
If the kubelet can't reclaim memory before a node experiences OOM, the
@ -396,7 +401,7 @@ counted as `active_file`. If enough of these kernel block buffers are on the
active LRU list, the kubelet is liable to observe this as high resource use and
taint the node as experiencing memory pressure - triggering pod eviction.
For more more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
For more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
You can work around that behavior by setting the memory limit and memory request
the same for containers likely to perform intensive I/O activity. You will need

View File

@ -15,14 +15,15 @@ is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attrac
a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a
hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods.
_Tolerations_ are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also [evaluates other parameters](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/) as part of its function.
_Tolerations_ are applied to pods. Tolerations allow the scheduler to schedule pods with matching
taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also
[evaluates other parameters](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
as part of its function.
Taints and tolerations work together to ensure that pods are not scheduled
onto inappropriate nodes. One or more taints are applied to a node; this
marks that the node should not accept any pods that do not tolerate the taints.
<!-- body -->
## Concepts
@ -266,7 +267,8 @@ This ensures that DaemonSet pods are never evicted due to these problems.
## Taint Nodes by Condition
The control plane, using the node {{<glossary_tooltip text="controller" term_id="controller">}},
automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions).
automatically creates taints with a `NoSchedule` effect for
[node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions).
The scheduler checks taints, not node conditions, when it makes scheduling
decisions. This ensures that node conditions don't directly affect scheduling.
@ -297,7 +299,7 @@ arbitrary tolerations to DaemonSets.
## {{% heading "whatsnext" %}}
* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) and how you can configure it
* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
and how you can configure it
* Read about [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)

View File

@ -0,0 +1,570 @@
---
title: Pod Topology Spread Constraints
content_type: concept
weight: 40
---
<!-- overview -->
You can use _topology spread constraints_ to control how
{{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster
among failure-domains such as regions, zones, nodes, and other user-defined topology
domains. This can help to achieve high availability as well as efficient resource
utilization.
You can set [cluster-level constraints](#cluster-level-default-constraints) as a default,
or configure topology spread constraints for individual workloads.
<!-- body -->
## Motivation
Imagine that you have a cluster of up to twenty nodes, and you want to run a
{{< glossary_tooltip text="workload" term_id="workload" >}}
that automatically scales how many replicas it uses. There could be as few as
two Pods or as many as fifteen.
When there are only two Pods, you'd prefer not to have both of those Pods run on the
same node: you would run the risk that a single node failure takes your workload
offline.
In addition to this basic usage, there are some advanced usage examples that
enable your workloads to benefit from high availability and cluster utilization.
As you scale up and run more Pods, a different concern becomes important. Imagine
that you have three nodes running five Pods each. The nodes have enough capacity
to run that many replicas; however, the clients that interact with this workload
are split across three different datacenters (or infrastructure zones). Now you
have less concern about a single node failure, but you notice that latency is
higher than you'd like, and you are paying for network costs associated with
sending network traffic between the different zones.
You decide that under normal operation you'd prefer to have a similar number of replicas
[scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone,
and you'd like the cluster to self-heal in the case that there is a problem.
Pod topology spread constraints offer you a declarative way to configure that.
## `topologySpreadConstraints` field
The Pod API includes a field, `spec.topologySpreadConstraints`. Here is an example:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Configure a topology spread constraint
  topologySpreadConstraints:
    - maxSkew: <integer>
      minDomains: <integer> # optional; alpha since v1.24
      topologyKey: <string>
      whenUnsatisfiable: <string>
      labelSelector: <object>
  ### other Pod fields go here
```
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
### Spread constraint definition
You can define one or multiple `topologySpreadConstraints` entries to instruct the
kube-scheduler how to place each incoming Pod in relation to the existing Pods across
your cluster. Those fields are:
- **maxSkew** describes the degree to which Pods may be unevenly distributed. You must
specify this field and the number must be greater than zero. Its semantics differ
according to the value of `whenUnsatisfiable`:
- if you select `whenUnsatisfiable: DoNotSchedule`, then `maxSkew` defines the
maximum permitted difference between the number of matching pods in the target
topology and the _global minimum_
(the minimum number of pods that match the label selector in a topology domain).
For example, if you have 3 zones with 2, 4 and 5 matching pods respectively,
then the global minimum is 2 and `maxSkew` is compared relative to that number
(see the worked sketch after this list).
- if you select `whenUnsatisfiable: ScheduleAnyway`, the scheduler gives higher
precedence to topologies that would help reduce the skew.
- **minDomains** indicates a minimum number of eligible domains. This field is optional.
A domain is a particular instance of a topology. An eligible domain is a domain whose
nodes match the node selector.
{{< note >}}
The `minDomains` field is an alpha field added in 1.24. You have to enable the
`MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
in order to use it.
{{< /note >}}
- The value of `minDomains` must be greater than 0, when specified.
You can only specify `minDomains` in conjunction with `whenUnsatisfiable: DoNotSchedule`.
- When the number of eligible domains with matching topology keys is less than `minDomains`,
Pod topology spread treats the global minimum as 0, and then the calculation of `skew` is performed.
The global minimum is the minimum number of matching Pods in an eligible domain,
or zero if the number of eligible domains is less than `minDomains`.
- When the number of eligible domains with matching topology keys equals or is greater than
`minDomains`, this value has no effect on scheduling.
- If you do not specify `minDomains`, the constraint behaves as if `minDomains` is 1.
- **topologyKey** is the key of [node labels](#node-labels). If two Nodes are labelled
with this key and have identical values for that label, the scheduler treats both
Nodes as being in the same topology. The scheduler tries to place a balanced number
of Pods into each topology domain.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
- `DoNotSchedule` (default) tells the scheduler not to schedule it.
- `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods
that match this label selector are counted to determine the
number of Pods in their corresponding topology domain.
See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
for more details.
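To make the `maxSkew` arithmetic concrete, here is a worked sketch using the 3-zone example mentioned above; the Pod name, label, and image are assumptions:

```yaml
# Three zones currently hold 2, 4 and 5 Pods matching foo: bar, so the global
# minimum is 2. If this Pod lands in the zone with 2 Pods, the counts become
# 3, 4, 5; the global minimum rises to 3 and that zone's skew is 3 - 3 = 0,
# which satisfies maxSkew: 1. Landing in the zone with 4 Pods would give a
# skew of 5 - 2 = 3, and in the zone with 5 Pods a skew of 6 - 2 = 4, both of
# which violate maxSkew: 1, so with DoNotSchedule only the first zone is valid.
apiVersion: v1
kind: Pod
metadata:
  name: skew-example            # assumed name
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.5   # assumed image
```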
When a Pod defines more than one `topologySpreadConstraint`, those constraints are
combined using a logical AND operation: the kube-scheduler looks for a node for the incoming Pod
that satisfies all the configured constraints.
### Node labels
Topology spread constraints rely on node labels to identify the topology
domain(s) that each {{< glossary_tooltip text="node" term_id="node" >}} is in.
For example, a node might have labels:
```yaml
region: us-east-1
zone: us-east-1a
```
{{< note >}}
For brevity, this example doesn't use the
[well-known](/docs/reference/labels-annotations-taints/) label keys
`topology.kubernetes.io/zone` and `topology.kubernetes.io/region`. However,
those registered label keys are nonetheless recommended rather than the private
(unqualified) label keys `region` and `zone` that are used here.
You can't make a reliable assumption about the meaning of a private label key
between different contexts.
{{< /note >}}
Suppose you have a 4-node cluster with the following labels:
```
NAME    STATUS   ROLES    AGE     VERSION   LABELS
node1   Ready    <none>   4m26s   v1.16.0   node=node1,zone=zoneA
node2   Ready    <none>   3m58s   v1.16.0   node=node2,zone=zoneA
node3   Ready    <none>   3m17s   v1.16.0   node=node3,zone=zoneB
node4   Ready    <none>   2m43s   v1.16.0   node=node4,zone=zoneB
```
Then the cluster is logically viewed as below:
{{<mermaid>}}
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
## Consistency
You should set the same Pod topology spread constraints on all pods in a group.
Usually, if you are using a workload controller such as a Deployment, the pod template
takes care of this for you. If you mix different spread constraints then Kubernetes
follows the API definition of the field; however, the behavior is more likely to become
confusing and troubleshooting is less straightforward.
You need a mechanism to ensure that all the nodes in a topology domain (such as a
cloud provider region) are labelled consistently.
To avoid needing to manually label nodes, most clusters automatically
populate well-known labels such as `kubernetes.io/hostname`. Check whether
your cluster supports this.
## Topology spread constraint examples
### Example: one topology spread constraint {#example-one-topologyspreadconstraint}
Suppose you have a 4-node cluster where 3 Pods labelled `foo: bar` are located in
node1, node2 and node3 respectively:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If you want an incoming Pod to be evenly spread with existing Pods across zones, you
can use a manifest similar to:
{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
From that manifest, `topologyKey: zone` implies the even distribution will only be applied
to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the
incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint.
If the scheduler placed this incoming Pod into zone `A`, the distribution of Pods would
become `[3, 1]`. That means the actual skew is then 2 (calculated as `3 - 1`), which
violates `maxSkew: 1`. To satisfy the constraints and context for this example, the
incoming Pod can only be placed onto a node in zone `B`:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
OR
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n3
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can tweak the Pod spec to meet various kinds of requirements:
- Change `maxSkew` to a bigger value - such as `2` - so that the incoming Pod can
be placed into zone `A` as well.
- Change `topologyKey` to `node` so as to distribute the Pods evenly across nodes
instead of zones. In the above example, if `maxSkew` remains `1`, the incoming
Pod can only be placed onto the node `node4`.
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway`
to ensure that the incoming Pod is always schedulable (assuming other scheduling APIs
are satisfied). However, the scheduler prefers to place it into the topology domain that
has fewer matching Pods. (Be aware that this preference is jointly normalized
with other internal scheduling priorities such as resource usage ratio).
### Example: multiple topology spread constraints {#example-multiple-topologyspreadconstraints}
This builds upon the previous example. Suppose you have a 4-node cluster where 3
existing Pods labeled `foo: bar` are located on node1, node2 and node3 respectively:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can combine two topology spread constraints to control the spread of Pods both
by node and by zone:
{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
In this case, to match the first constraint, the incoming Pod can only be placed onto
nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be
scheduled to the node `node4`. The scheduler only considers options that satisfy all
defined constraints, so the only valid placement is onto node `node4`.
### Example: conflicting topology spread constraints {#example-conflicting-topologyspreadconstraints}
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p4(Pod) --> n3(Node3)
p5(Pod) --> n3
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n1
p3(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If you were to apply
[`two-constraints.yaml`](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/topology-spread-constraints/two-constraints.yaml)
(the manifest from the previous example)
to **this** cluster, you would see that the Pod `mypod` stays in the `Pending` state.
This happens because: to satisfy the first constraint, the Pod `mypod` can only
be placed into zone `B`; while in terms of the second constraint, the Pod `mypod`
can only schedule to node `node2`. The intersection of the two constraints returns
an empty set, and the scheduler cannot place the Pod.
To overcome this situation, you can either increase the value of `maxSkew` or modify
one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`. Depending on
circumstances, you might also decide to delete an existing Pod manually - for example,
if you are troubleshooting why a bug-fix rollout is not making progress.
#### Interaction with node affinity and node selectors
The scheduler will skip the non-matching nodes from the skew calculations if the
incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined.
### Example: topology spread constraints with node affinity {#example-topologyspreadconstraints-with-nodeaffinity}
Suppose you have a 5-node cluster ranging across zones A to C:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
{{<mermaid>}}
graph BT
subgraph "zoneC"
n5(Node5)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n5 k8s;
class zoneC cluster;
{{< /mermaid >}}
and you know that zone `C` must be excluded. In this case, you can compose a manifest
as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`.
Similarly, Kubernetes also respects `spec.nodeSelector`.
{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
## Implicit conventions
There are some implicit conventions worth noting here:
- Only Pods in the same namespace as the incoming Pod can be matching candidates.
- The scheduler bypasses any nodes that don't have any `topologySpreadConstraints[*].topologyKey`
present. This implies that:
1. any Pods located on those bypassed nodes do not impact `maxSkew` calculation - in the
above example, suppose the node `node1` does not have a label "zone", then the 2 Pods will
be disregarded, hence the incoming Pod will be scheduled into zone `A`.
2. the incoming Pod has no chance of being scheduled onto such nodes -
in the above example, suppose a node `node5` has the **mistyped** label `zone-typo: zoneC`
(and no `zone` label set). After node `node5` joins the cluster, it will be bypassed and
Pods for this workload aren't scheduled there.
- Be aware of what will happen if the incoming Pod's
`topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the
above example, if you remove the incoming Pod's labels, it can still be placed onto
nodes in zone `B`, since the constraints are still satisfied. However, after that
placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A`
having 2 Pods labelled as `foo: bar`, and zone `B` having 1 Pod labelled as
`foo: bar`. If this is not what you expect, update the workload's
`topologySpreadConstraints[*].labelSelector` to match the labels in the pod template.
## Cluster-level default constraints
It is possible to set default topology spread constraints for a cluster. Default
topology spread constraints are applied to a Pod if, and only if:
- It doesn't define any constraints in its `.spec.topologySpreadConstraints`.
- It belongs to a Service, ReplicaSet, StatefulSet or ReplicationController.
Default constraints can be set as part of the `PodTopologySpread` plugin
arguments in a [scheduling profile](/docs/reference/scheduling/config/#profiles).
The constraints are specified using the same [API as described above](#topologyspreadconstraints-field), except that
`labelSelector` must be empty. The selectors are calculated from the Services,
ReplicaSets, StatefulSets or ReplicationControllers that the Pod belongs to.
An example configuration might look like the following:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
          defaultingType: List
```
{{< note >}}
The [`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
is disabled by default. The Kubernetes project recommends using `PodTopologySpread`
to achieve similar behavior.
{{< /note >}}
### Built-in default constraints {#internal-default-constraints}
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
If you don't configure any cluster-level default constraints for pod topology spreading,
then kube-scheduler acts as if you specified the following default topology constraints:
```yaml
defaultConstraints:
  - maxSkew: 3
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: ScheduleAnyway
  - maxSkew: 5
    topologyKey: "topology.kubernetes.io/zone"
    whenUnsatisfiable: ScheduleAnyway
```
Also, the legacy `SelectorSpread` plugin, which provides an equivalent behavior,
is disabled by default.
{{< note >}}
The `PodTopologySpread` plugin does not score the nodes that don't have
the topology keys specified in the spreading constraints. This might result
in a different default behavior compared to the legacy `SelectorSpread` plugin when
using the default topology constraints.
If your nodes are not expected to have **both** `kubernetes.io/hostname` and
`topology.kubernetes.io/zone` labels set, define your own constraints
instead of using the Kubernetes defaults.
{{< /note >}}
If you don't want to use the default Pod spreading constraints for your cluster,
you can disable those defaults by setting `defaultingType` to `List` and leaving
`defaultConstraints` empty in the `PodTopologySpread` plugin configuration:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints: []
          defaultingType: List
```
## Comparison with podAffinity and podAntiAffinity {#comparison-with-podaffinity-podantiaffinity}
In Kubernetes, [inter-Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
control how Pods are scheduled in relation to one another - either more packed
or more scattered.
`podAffinity`
: attracts Pods; you can try to pack any number of Pods into qualifying
topology domain(s)
`podAntiAffinity`
: repels Pods. If you set this to `requiredDuringSchedulingIgnoredDuringExecution` mode then
only a single Pod can be scheduled into a single topology domain; if you choose
`preferredDuringSchedulingIgnoredDuringExecution` then you lose the ability to enforce the
constraint.
For finer control, you can specify topology spread constraints to distribute
Pods across different topology domains - to achieve either high availability or
cost-saving. This can also help on rolling update workloads and scaling out
replicas smoothly.
For more context, see the
[Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation)
section of the enhancement proposal about Pod topology spread constraints.
## Known limitations
- There's no guarantee that the constraints remain satisfied when Pods are removed. For
example, scaling down a Deployment may result in imbalanced Pods distribution.
You can use a tool such as the [Descheduler](https://github.com/kubernetes-sigs/descheduler)
to rebalance the Pods distribution.
- Matching Pods on tainted nodes are counted when calculating skew.
See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
- The scheduler doesn't have prior knowledge of all the zones or other topology
domains that a cluster has. They are determined from the existing nodes in the
cluster. This could lead to a problem in autoscaled clusters, when a node pool (or
node group) is scaled to zero nodes, and you're expecting the cluster to scale up,
because, in this case, those topology domains won't be considered until there is
at least one node in them.
You can work around this by using a cluster autoscaling tool that is aware of
Pod topology spread constraints and is also aware of the overall set of topology
domains.
## {{% heading "whatsnext" %}}
- The blog article [Introducing PodTopologySpread](/blog/2020/05/introducing-podtopologyspread/)
explains `maxSkew` in some detail, as well as covering some advanced usage examples.
- Read the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of
the API reference for Pod.

View File

@ -23,10 +23,11 @@ following diagram:
## Transport security
In a typical Kubernetes cluster, the API serves on port 443, protected by TLS.
By default, the Kubernetes API server listens on port 6443 on the first non-localhost network interface, protected by TLS. In a typical production Kubernetes cluster, the API serves on port 443. The port can be changed with the `--secure-port` flag, and the listening IP address with the `--bind-address` flag.
The API server presents a certificate. This certificate may be signed using
a private certificate authority (CA), or based on a public key infrastructure linked
to a generally recognized CA.
to a generally recognized CA. The certificate and corresponding private key can be set by using the `--tls-cert-file` and `--tls-private-key-file` flags.
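As an illustration, these flags are typically passed on the kube-apiserver command line, for example in a static Pod manifest. The sketch below is not a complete or authoritative configuration: the file paths, image tag, and values are assumptions, and other required flags are omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver:v1.24.0    # assumed image tag
      command:
        - kube-apiserver
        - --bind-address=0.0.0.0                  # listening IP address
        - --secure-port=6443                      # TLS port (the default)
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt          # serving certificate
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key   # corresponding private key
```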
If your cluster uses a private certificate authority, you need a copy of that CA
certificate configured into your `~/.kube/config` on the client, so that you can
@ -137,34 +138,6 @@ The cluster audits the activities generated by users, by applications that use t
For more information, see [Auditing](/docs/tasks/debug/debug-cluster/audit/).
## API server ports and IPs
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
By default, the Kubernetes API server serves HTTP on 2 ports:
1. `localhost` port:
- is intended for testing and bootstrap, and for other components of the master node
(scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080
- default IP is localhost, change with `--insecure-bind-address` flag.
- request **bypasses** authentication and authorization modules.
- request handled by admission control module(s).
- protected by need to have host access
2. “Secure port”:
- use whenever possible
- uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag.
- default is port 6443, change with `--secure-port` flag.
- default IP is first non-localhost network interface, change with `--bind-address` flag.
- request handled by authentication and authorization modules.
- request handled by admission control module(s).
- authentication and authorization modules run.
## {{% heading "whatsnext" %}}
Read more documentation on authentication, authorization and API access control:

View File

@ -126,10 +126,10 @@ Pod-to-pod communication can be controlled using [Network Policies](/docs/conce
Namespace management tools may simplify the creation of default or common network policies. In addition, some of these tools allow you to enforce a consistent set of namespace labels across your cluster, ensuring that they are a trusted basis for your policies.
{{< warning >}}
Network policies require a [CNI plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) that supports the implementation of network policies. Otherwise, NetworkPolicy resources will be ignored.
Network policies require a [CNI plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) that supports the implementation of network policies. Otherwise, NetworkPolicy resources will be ignored.
{{< /warning >}}
More advanced network isolation may be provided by service meshes, which provide OSI Layer 7 policies based on workload identity, in addition to namespaces. These higher-level policies can make it easier to manage namespaced based multi-tenancy, especially when multiple namespaces are dedicated to a single tenant. They frequently also offer encryption using mutual TLS, protecting your data even in the presence of a compromised node, and work across dedicated or virtual clusters. However, they can be significantly more complex to manage and may not be appropriate for all users.
More advanced network isolation may be provided by service meshes, which provide OSI Layer 7 policies based on workload identity, in addition to namespaces. These higher-level policies can make it easier to manage namespace-based multi-tenancy, especially when multiple namespaces are dedicated to a single tenant. They frequently also offer encryption using mutual TLS, protecting your data even in the presence of a compromised node, and work across dedicated or virtual clusters. However, they can be significantly more complex to manage and may not be appropriate for all users.
### Storage isolation
@ -165,7 +165,7 @@ Although workloads from different tenants are running on different nodes, it is
Node isolation is a little easier to reason about from a billing standpoint than sandboxing containers since you can charge back per node rather than per pod. It also has fewer compatibility and performance issues and may be easier to implement than sandboxing containers. For example, nodes for each tenant can be configured with taints so that only pods with the corresponding toleration can run on them. A mutating webhook could then be used to automatically add tolerations and node affinities to pods deployed into tenant namespaces so that they run on a specific set of nodes designated for that tenant.
Node isolation can be implemented using an [pod node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) or a [Virtual Kubelet](https://github.com/virtual-kubelet).
Node isolation can be implemented using [pod node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/) or a [Virtual Kubelet](https://github.com/virtual-kubelet).
## Additional Considerations

View File

@ -214,6 +214,9 @@ controller selects policies according to the following criteria:
2. If the pod must be defaulted or mutated, the first PodSecurityPolicy
(ordered by name) to allow the pod is selected.
When a Pod is validated against a PodSecurityPolicy, [a `kubernetes.io/psp` annotation](/docs/reference/labels-annotations-taints/#kubernetes-io-psp)
is added to the Pod, with the name of the PodSecurityPolicy as the annotation value.
{{< note >}}
During update operations (during which mutations to pod specs are disallowed)
only non-mutating PodSecurityPolicies are used to validate the pod.
@ -245,8 +248,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod
Define the example PodSecurityPolicy object in a file. This is a policy that
prevents the creation of privileged pods.
This is a policy that prevents the creation of privileged pods.
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
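A sketch of such a policy, along the lines of the `example-psp.yaml` manifest used in the commands below (treat the exact field values as illustrative rather than authoritative):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false          # prevents creation of privileged Pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
```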
@ -255,7 +257,7 @@ The name of a PodSecurityPolicy object must be a valid
And create it with kubectl:
```shell
kubectl-admin create -f example-psp.yaml
kubectl-admin create -f https://k8s.io/examples/policy/example-psp.yaml
```
Now, as the unprivileged user, try to create a simple pod:
@ -284,6 +286,11 @@ pod's service account nor `fake-user` have permission to use the new policy:
```shell
kubectl-user auth can-i use podsecuritypolicy/example
```
The output is similar to this:
```
no
```
@ -300,14 +307,27 @@ kubectl-admin create role psp:unprivileged \
--verb=use \
--resource=podsecuritypolicy \
--resource-name=example
role "psp:unprivileged" created
```
```
role "psp:unprivileged" created
```
```shell
kubectl-admin create rolebinding fake-user:psp:unprivileged \
--role=psp:unprivileged \
--serviceaccount=psp-example:fake-user
rolebinding "fake-user:psp:unprivileged" created
```
```
rolebinding "fake-user:psp:unprivileged" created
```
```shell
kubectl-user auth can-i use podsecuritypolicy/example
```
```
yes
```
@ -332,7 +352,20 @@ The output is similar to this
pod "pause" created
```
It works as expected! But any attempts to create a privileged pod should still
It works as expected! You can verify that the pod was validated against the
newly created PodSecurityPolicy:
```shell
kubectl-user get pod pause -o yaml | grep kubernetes.io/psp
```
The output is similar to this:
```
kubernetes.io/psp: example
```
But any attempts to create a privileged pod should still
be denied:
```shell

View File

@ -462,11 +462,11 @@ of individual policies are not defined here.
{{% thirdparty-content %}}
Other alternatives for enforcing policies are being developed in the Kubernetes ecosystem, such as:
- [Kubewarden](https://github.com/kubewarden)
- [Kyverno](https://kyverno.io/policies/pod-security/)
- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper)
## FAQ
### Why isn't there a profile between privileged and baseline?
@ -493,9 +493,9 @@ built-in [Pod Security Admission Controller](/docs/concepts/security/pod-securit
### What profiles should I apply to my Windows Pods?
Windows in Kubernetes has some limitations and differentiators from standard Linux-based
workloads. Specifically, many of the Pod SecurityContext fields [have no effect on
Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext). As
such, no standardized Pod Security profiles currently exist.
workloads. Specifically, many of the Pod SecurityContext fields
[have no effect on Windows](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext).
As such, no standardized Pod Security profiles currently exist.
If you apply the restricted profile for a Windows pod, this **may** have an impact on the pod
at runtime. The restricted profile requires enforcing Linux-specific restrictions (such as seccomp
@ -504,7 +504,9 @@ these Linux-specific values, then the Windows pod should still work normally wit
profile. However, the lack of enforcement means that there is no additional restriction, for Pods
that use Windows containers, compared to the baseline profile.
The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy. Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies, so any HostProcess pod should be considered privileged.
The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy.
Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies,
so any HostProcess pod should be considered privileged.
### What about sandboxed Pods?
@ -518,3 +520,4 @@ kernel. This allows for workloads requiring heightened permissions to still be i
Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single profile is recommended for all sandboxed workloads.

View File

@ -15,7 +15,8 @@ execute their roles. It is important to ensure that, when designing permissions
users, the cluster administrator understands the areas where privilege escalation could occur,
to reduce the risk of excessive access leading to security incidents.
The good practices laid out here should be read in conjunction with the general [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).
The good practices laid out here should be read in conjunction with the general
[RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).
<!-- body -->
@ -23,18 +24,19 @@ The good practices laid out here should be read in conjunction with the general
### Least privilege
Ideally minimal RBAC rights should be assigned to users and service accounts. Only permissions
explicitly required for their operation should be used. Whilst each cluster will be different,
Ideally, minimal RBAC rights should be assigned to users and service accounts. Only permissions
explicitly required for their operation should be used. While each cluster will be different,
some general rules that can be applied are:
- Assign permissions at the namespace level where possible. Use RoleBindings as opposed to
ClusterRoleBindings to give users rights only within a specific namespace.
- Avoid providing wildcard permissions when possible, especially to all resources.
As Kubernetes is an extensible system, providing wildcard access gives rights
not just to all object types presently in the cluster, but also to all future object types
not just to all object types that currently exist in the cluster, but also to all object types
which are created in the future.
- Administrators should not use `cluster-admin` accounts except where specifically needed.
Providing a low privileged account with [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
Providing a low privileged account with
[impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
can avoid accidental modification of cluster resources.
- Avoid adding users to the `system:masters` group. Any user who is a member of this group
bypasses all RBAC rights checks and will always have unrestricted superuser access, which cannot be
@ -44,15 +46,17 @@ some general rules that can be applied are :
### Minimize distribution of privileged tokens
Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions (for example, any of the rights listed under
[privilege escalation risks](#privilege-escalation-risks)).
Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions
(for example, any of the rights listed under [privilege escalation risks](#privilege-escalation-risks)).
In cases where a workload requires powerful permissions, consider the following practices:
- Limit the number of nodes running powerful pods. Ensure that any DaemonSets you run
are necessary and are run with least privilege to limit the blast radius of container escapes.
- Avoid running powerful pods alongside untrusted or publicly-exposed ones. Consider using
[Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/), [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) to ensure
pods don't run alongside untrusted or less-trusted Pods. Pay especial attention to
[Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/),
[NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or
[PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
to ensure pods don't run alongside untrusted or less-trusted Pods. Pay special attention to
situations where less-trustworthy Pods are not meeting the **Restricted** Pod Security Standard.
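As a sketch of that pattern, the Pod below tolerates a taint and requires a node label that are both assumptions; a cluster operator would first apply them to the reserved nodes (for example with `kubectl label` and `kubectl taint`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-controller        # assumed name
spec:
  tolerations:
    - key: "workload-class"          # assumed taint key
      operator: "Equal"
      value: "privileged"
      effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: workload-class  # assumed node label
                operator: In
                values:
                  - privileged
  containers:
    - name: controller
      image: registry.example/controller:latest   # assumed image
```

The taint keeps ordinary Pods off the reserved nodes, while the required node affinity keeps this powerful Pod from landing anywhere else.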
### Hardening
@ -62,7 +66,7 @@ the RBAC rights provided by default can provide opportunities for security harde
In general, changes should not be made to rights provided to `system:` accounts. Some options
to harden cluster rights exist:
- Review bindings for the `system:unauthenticated` group and remove where possible, as this gives
- Review bindings for the `system:unauthenticated` group and remove them where possible, as this gives
access to anyone who can contact the API server at a network level.
- Avoid the default auto-mounting of service account tokens by setting
`automountServiceAccountToken: false`. For more details, see
@ -107,7 +111,7 @@ with the ability to create suitably secure and isolated Pods, you should enforce
You can use [Pod Security admission](/docs/concepts/security/pod-security-admission/)
or other (third party) mechanisms to implement that enforcement.
You can also use the deprecated [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) mechanism
You can also use the deprecated [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) mechanism
to restrict users' abilities to create privileged Pods (N.B. PodSecurityPolicy is scheduled for removal
in version 1.25).
@ -117,25 +121,27 @@ Secrets they would not have through RBAC directly.
### Persistent volume creation
As noted in the [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems) documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host. Where access to persistent storage is required trusted administrators should create
As noted in the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/#volumes-and-file-systems)
documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host.
Where access to persistent storage is required, trusted administrators should create
PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage.
### Access to `proxy` subresource of Nodes
Users with access to the proxy sub-resource of node objects have rights to the Kubelet API,
which allows for command execution on every pod on the node(s) which they have rights to.
which allows for command execution on every pod on the node(s) to which they have rights.
This access bypasses audit logging and admission control, so care should be taken before
granting rights to this resource.
### Escalate verb
Generally the RBAC system prevents users from creating clusterroles with more rights than
they possess. The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update),
Generally, the RBAC system prevents users from creating clusterroles with more rights than the user possesses.
The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update),
users with this right can effectively escalate their privileges.
### Bind verb
Similar to the `escalate` verb, granting users this right allows for bypass of Kubernetes
Similar to the `escalate` verb, granting users this right allows for the bypass of Kubernetes
in-built protections against privilege escalation, allowing users to create bindings to
roles with rights they do not already have.
@ -173,8 +179,11 @@ objects to create a denial of service condition either based on the size or numb
specifically relevant in multi-tenant clusters if semi-trusted or untrusted users
are allowed limited access to a system.
One option for mitigation of this issue would be to use [resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota)
One option for mitigation of this issue would be to use
[resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota)
to limit the quantity of objects which can be created.
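For illustration, a minimal sketch of an object-count quota; the namespace and limits are placeholders and would need tuning for a real cluster:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts              # placeholder name
  namespace: team-a                # placeholder namespace
spec:
  hard:
    configmaps: "10"               # cap the number of ConfigMaps in the namespace
    secrets: "20"                  # cap the number of Secrets
    count/deployments.apps: "5"    # the generic count/<resource>.<group> syntax also works
```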
## {{% heading "whatsnext" %}}
* To learn more about RBAC, see the [RBAC documentation](/docs/reference/access-authn-authz/rbac/).
View File
@ -22,34 +22,41 @@ storage (as compared to using tmpfs / in-memory filesystems on Linux). As a clus
operator, you should take both of the following additional measures:
1. Use file ACLs to secure the Secrets' file location.
1. Apply volume-level encryption using [BitLocker](https://docs.microsoft.com/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server).
1. Apply volume-level encryption using
[BitLocker](https://docs.microsoft.com/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server).
## Container users
[RunAsUsername](/docs/tasks/configure-pod-container/configure-runasusername)
can be specified for Windows Pods or containers to execute the container
processes as a specific user. This is roughly equivalent to
[RunAsUser](/docs/concepts/policy/pod-security-policy/#users-and-groups).
[RunAsUser](/docs/concepts/security/pod-security-policy/#users-and-groups).
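As a minimal sketch (the Pod name and image tag are placeholders), setting `runAsUserName` in the Pod's `securityContext` might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-demo                   # placeholder name
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"           # run container processes as this Windows user
  containers:
  - name: run-as-username-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2022   # placeholder image
    command: ["ping", "-t", "localhost"]
  nodeSelector:
    kubernetes.io/os: windows
```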
Windows containers offer two default user accounts, ContainerUser and ContainerAdministrator.
The differences between these two user accounts are covered in
[When to use ContainerAdmin and ContainerUser user accounts](https://docs.microsoft.com/virtualization/windowscontainers/manage-containers/container-security#when-to-use-containeradmin-and-containeruser-user-accounts) within Microsoft's _Secure Windows containers_ documentation.
[When to use ContainerAdmin and ContainerUser user accounts](https://docs.microsoft.com/virtualization/windowscontainers/manage-containers/container-security#when-to-use-containeradmin-and-containeruser-user-accounts)
within Microsoft's _Secure Windows containers_ documentation.
Local users can be added to container images during the container build process.
{{< note >}}
* [Nano Server](https://hub.docker.com/_/microsoft-windows-nanoserver) based images run as `ContainerUser` by default
* [Server Core](https://hub.docker.com/_/microsoft-windows-servercore) based images run as `ContainerAdministrator` by default
* [Nano Server](https://hub.docker.com/_/microsoft-windows-nanoserver) based images run as
`ContainerUser` by default
* [Server Core](https://hub.docker.com/_/microsoft-windows-servercore) based images run as
`ContainerAdministrator` by default
{{< /note >}}
Windows containers can also run as Active Directory identities by utilizing [Group Managed Service Accounts](/docs/tasks/configure-pod-container/configure-gmsa/)
Windows containers can also run as Active Directory identities by utilizing
[Group Managed Service Accounts](/docs/tasks/configure-pod-container/configure-gmsa/).
## Pod-level security isolation
Linux-specific pod security context mechanisms (such as SELinux, AppArmor, Seccomp, or custom
POSIX capabilities) are not supported on Windows nodes.
Privileged containers are [not supported](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext) on Windows.
Instead [HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod) can be used on Windows to perform many of the tasks performed by privileged containers on Linux.
Privileged containers are [not supported](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext)
on Windows.
Instead [HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod)
can be used on Windows to perform many of the tasks performed by privileged containers on Linux.
View File
@ -37,7 +37,7 @@ IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:
The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:
* Kubernetes 1.20 or later
* Kubernetes 1.20 or later
For information about using dual-stack services with earlier
Kubernetes versions, refer to the documentation for that version
@ -95,7 +95,7 @@ set the `.spec.ipFamilyPolicy` field to one of the following values:
If you would like to define which IP family to use for single stack or define the order of IP
families for dual-stack, you can choose the address families by setting an optional field,
`.spec.ipFamilies`, on the Service.
`.spec.ipFamilies`, on the Service.
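For illustration, a sketch of a Service that prefers dual-stack and lists IPv6 first (the selector and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:        # order determines the primary address family
  - IPv6
  - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
```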
{{< note >}}
The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a
@ -133,11 +133,11 @@ These examples demonstrate the behavior of various dual-stack Service configurat
address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned
IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from
`.spec.ClusterIPs`.
* For the `.spec.ClusterIP` field, the control plane records the IP address that is from the
same address family as the first service cluster IP range.
same address family as the first service cluster IP range.
* On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list
one address.
one address.
* On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy`
behaves the same as `PreferDualStack`.
@ -174,7 +174,7 @@ dual-stack.)
kind: Service
metadata:
labels:
app: MyApp
app.kubernetes.io/name: MyApp
name: my-service
spec:
clusterIP: 10.0.197.123
@ -188,7 +188,7 @@ dual-stack.)
protocol: TCP
targetPort: 80
selector:
app: MyApp
app.kubernetes.io/name: MyApp
type: ClusterIP
status:
loadBalancer: {}
@ -214,7 +214,7 @@ dual-stack.)
kind: Service
metadata:
labels:
app: MyApp
app.kubernetes.io/name: MyApp
name: my-service
spec:
clusterIP: None
@ -228,7 +228,7 @@ dual-stack.)
protocol: TCP
targetPort: 80
selector:
app: MyApp
app.kubernetes.io/name: MyApp
```
#### Switching Services between single-stack and dual-stack
View File
@ -46,6 +46,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
is an [Istio](https://istio.io/) based ingress controller.
* The [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme)
is an ingress controller driving [Kong Gateway](https://konghq.com/kong/).
* [Kusk Gateway](https://kusk.kubeshop.io/) is an OpenAPI-driven ingress controller based on [Envoy](https://www.envoyproxy.io).
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy).
* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.
View File
@ -43,7 +43,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
View File
@ -75,7 +75,7 @@ The name of a Service object must be a valid
[RFC 1035 label name](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names).
For example, suppose you have a set of Pods where each listens on TCP port 9376
and contains a label `app=MyApp`:
and contains a label `app.kubernetes.io/name=MyApp`:
```yaml
apiVersion: v1
@ -84,7 +84,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
@ -92,7 +92,7 @@ spec:
```
This specification creates a new Service object named "my-service", which
targets TCP port 9376 on any Pod with the `app=MyApp` label.
targets TCP port 9376 on any Pod with the `app.kubernetes.io/name=MyApp` label.
Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"),
which is used by the Service proxies
@ -126,7 +126,7 @@ spec:
ports:
- containerPort: 80
name: http-web-svc
---
apiVersion: v1
kind: Service
@ -144,9 +144,9 @@ spec:
This works even if there is a mixture of Pods in the Service using a single
configured name, with the same network protocol available via different
port numbers. This offers a lot of flexibility for deploying and evolving
your Services. For example, you can change the port numbers that Pods expose
configured name, with the same network protocol available via different
port numbers. This offers a lot of flexibility for deploying and evolving
your Services. For example, you can change the port numbers that Pods expose
in the next version of your backend software, without breaking clients.
The default protocol for Services is TCP; you can also use any other
@ -159,7 +159,7 @@ Each port definition can have the same `protocol`, or a different one.
### Services without selectors
Services most commonly abstract access to Kubernetes Pods thanks to the selector,
but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends,
but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends,
including ones that run outside the cluster. For example:
* You want to have an external database cluster in production, but in your
@ -222,10 +222,10 @@ In the example above, traffic is routed to the single endpoint defined in
the YAML: `192.0.2.42:9376` (TCP).
{{< note >}}
The Kubernetes API server does not allow proxying to endpoints that are not mapped to
pods. Actions such as `kubectl proxy <service-name>` where the service has no
selector will fail due to this constraint. This prevents the Kubernetes API server
from being used as a proxy to endpoints the caller may not be authorized to access.
The Kubernetes API server does not allow proxying to endpoints that are not mapped to
pods. Actions such as `kubectl proxy <service-name>` where the service has no
selector will fail due to this constraint. This prevents the Kubernetes API server
from being used as a proxy to endpoints the caller may not be authorized to access.
{{< /note >}}
An ExternalName Service is a special case of Service that does not have
@ -289,7 +289,7 @@ There are a few reasons for using proxying for Services:
Later in this page you can read about how the various kube-proxy implementations work. Overall,
you should note that, when running `kube-proxy`, kernel level rules may be
modified (for example, iptables rules might get created), which won't get cleaned up,
modified (for example, iptables rules might get created), which won't get cleaned up,
in some cases until you reboot. Thus, running kube-proxy is something that should
only be done by an administrator who understands the consequences of having a
low level, privileged network proxying service on a computer. Although the `kube-proxy`
@ -299,9 +299,14 @@ thus is only available to use as-is.
### Configuration
Note that the kube-proxy starts up in different modes, which are determined by its configuration.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy effectively deprecates the behaviour for almost all of the flags for the kube-proxy.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy
effectively deprecates the behaviour for almost all of the flags for the kube-proxy.
- The ConfigMap for the kube-proxy does not support live reloading of configuration.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup. For example, if your operating system doesn't allow you to run iptables commands, the standard kernel kube-proxy implementation will not work. Likewise, if you have an operating system which doesn't support `netsh`, it will not run in Windows userspace mode.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
For example, if your operating system doesn't allow you to run iptables commands,
the standard kernel kube-proxy implementation will not work.
Likewise, if you have an operating system which doesn't support `netsh`,
it will not run in Windows userspace mode.
### User space proxy mode {#proxy-mode-userspace}
@ -418,7 +423,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
@ -492,7 +497,11 @@ variables and DNS.
### Environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables
for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature.
for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72))
that are compatible with Docker Engine's
"_[legacy container links](https://docs.docker.com/network/links/)_" feature.
For example, the Service `redis-master` which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11, produces the following environment
@ -604,8 +613,10 @@ The default is `ClusterIP`.
to use the `ExternalName` type.
{{< /note >}}
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules
into a single resource as it can expose multiple services under the same IP address.
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service.
Ingress is not a Service type, but it acts as the entry point for your cluster.
It lets you consolidate your routing rules into a single resource as it can expose multiple
services under the same IP address.
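For illustration, a minimal Ingress that routes all HTTP traffic to a Service named `my-service` on port 80 might look like this (the Ingress name and path are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress              # placeholder name
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service    # the Service to expose
            port:
              number: 80
```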
### Type NodePort {#type-nodeport}
@ -620,9 +631,14 @@ field of the
[kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
to particular IP block(s).
This flag takes a comma-delimited list of IP blocks (e.g. `10.0.0.0/8`, `192.0.2.0/25`) to specify IP address ranges that kube-proxy should consider as local to this node.
This flag takes a comma-delimited list of IP blocks (e.g. `10.0.0.0/8`, `192.0.2.0/25`)
to specify IP address ranges that kube-proxy should consider as local to this node.
For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases).
For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag,
kube-proxy only selects the loopback interface for NodePort Services.
The default for `--nodeport-addresses` is an empty list.
This means that kube-proxy should consider all available network interfaces for NodePort.
(That's also compatible with earlier Kubernetes releases).
If you want a specific port number, you can specify a value in the `nodePort`
field. The control plane will either allocate you that port or report that
@ -650,7 +666,7 @@ metadata:
spec:
type: NodePort
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 80
@ -676,7 +692,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
@ -689,7 +705,8 @@ status:
- ip: 192.0.2.127
```
Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
Traffic from the external load balancer is directed at the backend Pods.
The cloud provider decides how it is load balanced.
Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created
with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
@ -704,7 +721,11 @@ to create a static type public IP address resource. This public IP address resou
be in the same resource group of the other automatically created resources of the cluster.
For example, `MC_myResourceGroup_myAKSCluster_eastus`.
Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the securityGroupName in the cloud provider configuration file. For information about troubleshooting `CreatingLoadBalancerFailed` permission issues see, [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip) or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357).
Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the
`securityGroupName` in the cloud provider configuration file.
For information about troubleshooting `CreatingLoadBalancerFailed` permission issues, see
[Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip)
or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357).
{{< /note >}}
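For illustration, a sketch of a `LoadBalancer` Service that pins the address via `loadBalancerIP` (the IP shown is a placeholder and must already be allocated with your cloud provider):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10     # placeholder; use an address reserved with your cloud provider
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```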
@ -744,13 +765,13 @@ You must explicitly remove the `nodePorts` entry in every Service port to de-all
`spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation if the cluster is configured with
a cloud provider using the `--cloud-provider` component flag.
a cloud provider using the `--cloud-provider` component flag.
If `spec.loadBalancerClass` is specified, it is assumed that a load balancer
implementation that matches the specified class is watching for Services.
Any default load balancer implementation (for example, the one provided by
the cloud provider) will ignore Services that have this field set.
`spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only.
Once set, it cannot be changed.
Once set, it cannot be changed.
The value of `spec.loadBalancerClass` must be a label-style identifier,
with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`".
Unprefixed names are reserved for end-users.
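For illustration, a sketch of selecting a non-default implementation via `spec.loadBalancerClass`; the class name is a placeholder for whatever your load balancer controller watches for:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-vip   # placeholder class name
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
```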
@ -760,7 +781,8 @@ Unprefixed names are reserved for end-users.
In a mixed environment it is sometimes necessary to route traffic from Services inside the same
(virtual) network address block.
In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.
In a split-horizon DNS environment you would need two Services to be able to route both external
and internal traffic to your endpoints.
To set an internal load balancer, add one of the following annotations to your Service
depending on the cloud Service provider you're using.
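For illustration, with the legacy in-tree AWS cloud provider the pattern looks roughly like the following; other providers use their own annotation keys:

```yaml
metadata:
  name: my-service
  annotations:
    # provider-specific key; this one applies to the legacy AWS cloud provider
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```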
@ -925,7 +947,9 @@ you can use the following annotations:
In the above example, if the Service contained three ports, `80`, `443`, and
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
From Kubernetes v1.9 onwards you can use
[predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool:
```bash
@ -981,14 +1005,17 @@ specifies the logical hierarchy you created for your Amazon S3 bucket.
metadata:
name: my-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
# Specifies whether access logs are enabled for the load balancer
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
# The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes).
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
# The name of the Amazon S3 bucket where the access logs are stored
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
# The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod`
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
```
#### Connection Draining on AWS
@ -997,7 +1024,8 @@ Connection draining for Classic ELBs can be managed with the annotation
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set
to the value of `"true"`. The annotation
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances.
also be used to set maximum time, in seconds, to keep the existing connections open before
deregistering the instances.
```yaml
metadata:
@ -1015,50 +1043,56 @@ There are other annotations to manage Classic Elastic Load Balancers that are de
metadata:
name: my-service
annotations:
# The time, in seconds, that the connection is allowed to be idle (no data has been sent
# over the connection) before it is closed by the load balancer
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
# The time, in seconds, that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
# Specifies whether cross-zone load balancing is enabled for the load balancer
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
# A comma-separated list of key-value pairs which will be recorded as
# additional tags in the ELB.
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
# The number of successive successful health checks required for a backend to
# be considered healthy for traffic. Defaults to 2, must be between 2 and 10
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
# The number of unsuccessful health checks required for a backend to be
# considered unhealthy for traffic. Defaults to 6, must be between 2 and 10
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
# The approximate interval, in seconds, between health checks of an
# individual instance. Defaults to 10, must be between 5 and 300
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
# The amount of time, in seconds, during which no response means a failed
# health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
# value. Defaults to 5, must be between 2 and 60
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
# A list of existing security groups to be configured on the ELB created. Unlike the annotation
# service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB and also overrides the creation
# service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other
# security groups previously assigned to the ELB and also overrides the creation
# of a uniquely generated security group for this ELB.
# The first security group ID on this list is used as a source to permit incoming traffic to target worker nodes (service traffic and health checks).
# If multiple ELBs are configured with the same security group ID, only a single permit line will be added to the worker node security groups, that means if you delete any
# The first security group ID on this list is used as a source to permit incoming traffic to
# target worker nodes (service traffic and health checks).
# If multiple ELBs are configured with the same security group ID, only a single permit line
# will be added to the worker node security groups, that means if you delete any
# of those ELBs it will remove the single permit line and block access for all ELBs that shared the same security group ID.
# This can cause a cross-service outage if not used properly
service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
# A list of additional security groups to be added to the created ELB, this leaves the uniquely
# generated security group in place, this ensures that every ELB
# has a unique security group ID and a matching permit line to allow traffic to the target worker nodes
# (service traffic and health checks).
# Security groups defined here can be shared between services.
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
# A list of additional security groups to be added to the created ELB, this leaves the uniquely generated security group in place, this ensures that every ELB
# has a unique security group ID and a matching permit line to allow traffic to the target worker nodes (service traffic and health checks).
# Security groups defined here can be shared between services.
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
# A comma separated list of key-value pairs which are used
# to select the target nodes for the load balancer
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
```
#### Network Load Balancer support on AWS {#aws-nlb-support}
@ -1075,7 +1109,8 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet
```
{{< note >}}
NLB only works with certain instance classes; see the [AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
NLB only works with certain instance classes; see the
[AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
on Elastic Load Balancing for a list of supported instance types.
{{< /note >}}
@ -1182,7 +1217,8 @@ spec:
```
{{< note >}}
ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address.
ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
is intended to specify a canonical DNS name. To hardcode an IP address, consider using
[headless Services](#headless-services).
{{< /note >}}
@ -1196,9 +1232,13 @@ can start its Pods, add appropriate selectors or endpoints, and change the
Service's `type`.
{{< warning >}}
You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references.
You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS.
If you use ExternalName then the hostname used by clients inside your cluster is different from
the name that the ExternalName references.
For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a `Host:` header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
For protocols that use hostnames this difference may lead to errors or unexpected responses.
HTTP requests will have a `Host:` header that the origin server does not recognize;
TLS servers will not be able to provide a certificate matching the hostname that the client connected to.
{{< /warning >}}
{{< note >}}
@ -1223,7 +1263,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
@ -1357,12 +1397,15 @@ through a load-balancer, though in those cases the client IP does get altered.
#### IPVS
iptables operations slow down dramatically in a large-scale cluster, for example one with 10,000 Services.
IPVS is designed for load balancing and based on in-kernel hash tables. So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy. Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms (least conns, locality, weighted, persistence).
IPVS is designed for load balancing and is based on in-kernel hash tables.
As a result, you can achieve consistent performance with a large number of Services when using IPVS-based kube-proxy.
Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms
(least conns, locality, weighted, persistence).
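For illustration, a sketch of selecting IPVS mode in the kube-proxy configuration (the scheduler choice is a placeholder):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"    # placeholder; round-robin is the default IPVS scheduler
```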
## API Object
Service is a top-level resource in the Kubernetes REST API. You can find more details
about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
about the [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core).
## Supported protocols {#protocol-support}
@ -1388,7 +1431,8 @@ provider offering this facility. (Most do not).
##### Support for multihomed SCTP associations {#caveat-sctp-multihomed}
{{< warning >}}
The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod.
The support of multihomed SCTP associations requires that the CNI plugin can support the
assignment of multiple interfaces and IP addresses to a Pod.
NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.
{{< /warning >}}
View File
@ -116,7 +116,7 @@ can enable this behavior by:
is enabled on the API server.
An administrator can mark a specific `StorageClass` as default by adding the
`storageclass.kubernetes.io/is-default-class` annotation to it.
`storageclass.kubernetes.io/is-default-class` [annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it.
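For illustration, a sketch of a StorageClass marked as default; the name and provisioner are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                                   # placeholder name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/external-provisioner      # placeholder provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```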
When a default `StorageClass` exists in a cluster and a user creates a
`PersistentVolumeClaim` with `storageClassName` unspecified, the
`DefaultStorageClass` admission controller automatically adds the
View File
@ -76,8 +76,8 @@ is managed by kubelet, or injecting different data.
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
This feature requires the `CSIInlineVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It
is enabled by default starting with Kubernetes 1.16.
This feature requires the `CSIInlineVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
to be enabled. It is enabled by default starting with Kubernetes 1.16.
{{< note >}}
CSI ephemeral volumes are only supported by a subset of CSI drivers.
@ -136,8 +136,11 @@ should not be exposed to users through the use of inline ephemeral volumes.
Cluster administrators who need to restrict the CSI drivers that are
allowed to be used as inline volumes within a Pod spec may do so by:
- Removing `Ephemeral` from `volumeLifecycleModes` in the CSIDriver spec, which prevents the driver from being used as an inline ephemeral volume.
- Using an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/) to restrict how this driver is used.
- Removing `Ephemeral` from `volumeLifecycleModes` in the CSIDriver spec, which prevents the
  driver from being used as an inline ephemeral volume (see the sketch after this list).
- Using an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
to restrict how this driver is used.
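For illustration, a sketch of the first option; the driver name is a placeholder, and omitting `Ephemeral` from the list prevents inline ephemeral use:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: inline.storage.example.com   # placeholder driver name
spec:
  volumeLifecycleModes:
  - Persistent        # listing only Persistent disallows inline ephemeral volumes
```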
### Generic ephemeral volumes
@ -207,7 +210,7 @@ because then the scheduler is free to choose a suitable node for
the Pod. With immediate binding, the scheduler is forced to select a node that has
access to the volume once it is available.
In terms of [resource ownership](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents),
In terms of [resource ownership](/docs/concepts/architecture/garbage-collection/#owners-dependents),
a Pod that has generic ephemeral storage is the owner of the PersistentVolumeClaim(s)
that provide that ephemeral storage. When the Pod is deleted,
the Kubernetes garbage collector deletes the PVC, which then usually
@ -252,10 +255,11 @@ Enabling the GenericEphemeralVolume feature allows users to create
PVCs indirectly if they can create Pods, even if they do not have
permission to create PVCs directly. Cluster administrators must be
aware of this. If this does not fit their security model, they should
use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/) that rejects objects like Pods that have a generic ephemeral volume.
use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
that rejects objects like Pods that have a generic ephemeral volume.
The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota) still applies, so
even if users are allowed to use this new mechanism, they cannot use
The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota)
still applies, so even if users are allowed to use this new mechanism, they cannot use
it to circumvent other policies.
## {{% heading "whatsnext" %}}
@ -266,11 +270,13 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont
### CSI ephemeral volumes
- For more information on the design, see the [Ephemeral Inline CSI
volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md).
- For more information on further development of this feature, see the [enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596).
- For more information on the design, see the
[Ephemeral Inline CSI volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md).
- For more information on further development of this feature, see the
[enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596).
### Generic ephemeral volumes
- For more information on the design, see the
[Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md).
[Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md).
View File
@ -558,7 +558,7 @@ If the access modes are specified as ReadWriteOncePod, the volume is constrained
| AzureFile | &#x2713; | &#x2713; | &#x2713; | - |
| AzureDisk | &#x2713; | - | - | - |
| CephFS | &#x2713; | &#x2713; | &#x2713; | - |
| Cinder | &#x2713; | - | - | - |
| Cinder | &#x2713; | - | ([if multi-attach volumes are available](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/features.md#multi-attach-volumes)) | - |
| CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
| FC | &#x2713; | &#x2713; | - | - |
| FlexVolume | &#x2713; | &#x2713; | depends on the driver | - |
View File
@ -73,7 +73,7 @@ volume mount will not receive updates for those volume sources.
## SecurityContext interactions
The [proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the the correct owner permissions set.
The [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.
### Linux
@ -99,6 +99,7 @@ into their own volume mount outside of `C:\`.
By default, the projected files will have the following ownership as shown for
an example projected volume file:
```powershell
PS C:\> Get-Acl C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt | Format-List
@ -111,6 +112,7 @@ Access : NT AUTHORITY\SYSTEM Allow FullControl
Audit :
Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)
```
This implies that all administrator users like `ContainerAdministrator` will have
read, write, and execute access, while non-administrator users will have read and
execute access.
View File
@ -121,7 +121,7 @@ section refers to several key workload abstractions and how they map to Windows.
In the above list, wildcards (`*`) indicate all elements in a list.
For example, `spec.containers[*].securityContext` refers to the SecurityContext object
for all containers. If any of these fields is specified, the Pod will
not be admited by the API server.
not be admitted by the API server.
* [Workload resources](/docs/concepts/workloads/controllers/) including:
* ReplicaSet
@ -132,7 +132,7 @@ section refers to several key workload abstractions and how they map to Windows.
* CronJob
* ReplicationController
* {{< glossary_tooltip text="Services" term_id="service" >}}
See [Load balancing and Services](#load-balancing-and-services) for more details.
See [Load balancing and Services](/docs/concepts/services-networking/windows-networking/#load-balancing-and-services) for more details.
Pods, workload resources, and Services are critical elements to managing Windows
workloads on Kubernetes. However, on their own they are not enough to enable
View File
@ -105,12 +105,12 @@ port 80 of the container directly to the Service.
* Node-to-pod communication across the network, `curl` port 80 of your pod IPs from the Linux control plane node
to check for a web server response
* Pod-to-pod communication, ping between pods (and across hosts, if you have more than one Windows node)
using docker exec or kubectl exec
using `docker exec` or `kubectl exec`
* Service-to-pod communication, `curl` the virtual service IP (seen under `kubectl get services`)
from the Linux control plane node and from individual pods
* Service discovery, `curl` the service name with the Kubernetes [default DNS suffix](/docs/concepts/services-networking/dns-pod-service/#services)
* Inbound connectivity, `curl` the NodePort from the Linux control plane node or machines outside of the cluster
* Outbound connectivity, `curl` external IPs from inside the pod using kubectl exec
* Outbound connectivity, `curl` external IPs from inside the pod using `kubectl exec`
{{< note >}}
Windows container hosts are not able to access the IP of services scheduled on them due to current platform limitations of the Windows networking stack.
@ -307,4 +307,4 @@ spec:
app: iis-2019
```
[RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/
[RuntimeClass]: /docs/concepts/containers/runtime-class/
View File
@ -70,7 +70,7 @@ visit [Configuration](/docs/concepts/configuration/).
There are two supporting concepts that provide background about how Kubernetes manages pods
for applications:
* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects
* [Garbage collection](/docs/concepts/architecture/garbage-collection/) tidies up objects
from your cluster after their _owning resource_ has been removed.
* The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/)
removes Jobs once a defined time has passed since they completed.
View File
@ -71,7 +71,7 @@ Pod Template:
job-name=pi
Containers:
pi:
Image: perl
Image: perl:5.34.0
Port: <none>
Host Port: <none>
Command:
@ -125,7 +125,7 @@ spec:
- -Mbignum=bpi
- -wle
- print bpi(2000)
image: perl
image: perl:5.34.0
imagePullPolicy: Always
name: pi
resources: {}
@ -356,7 +356,7 @@ spec:
spec:
containers:
- name: pi
image: perl
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
```
@ -402,7 +402,7 @@ spec:
spec:
containers:
- name: pi
image: perl
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
```
View File
@ -13,9 +13,6 @@ weight: 20
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often
used to guarantee the availability of a specified number of identical Pods.
<!-- body -->
## How a ReplicaSet works
@ -26,14 +23,14 @@ it should create to meet the number of replicas criteria. A ReplicaSet then fulf
and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
template.
A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/architecture/garbage-collection/#owners-and-dependents)
field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning
ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
knows of the state of the Pods it is maintaining and plans accordingly.
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it matches a ReplicaSet's selector, it will be immediately acquired by said
ReplicaSet.
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no
OwnerReference or the OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it
matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet.
## When to use a ReplicaSet
@ -253,7 +250,9 @@ In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`,
be rejected by the API.
{{< note >}}
For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
For 2 ReplicaSets specifying the same `.spec.selector` but different
`.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the
Pods created by the other ReplicaSet.
{{< /note >}}
### Replicas
@ -267,11 +266,14 @@ If you do not specify `.spec.replicas`, then it defaults to 1.
### Deleting a ReplicaSet and its Pods
To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the dependent Pods by default.
To delete a ReplicaSet and all of its Pods, use
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The
[Garbage collector](/docs/concepts/architecture/garbage-collection/) automatically deletes all of
the dependent Pods by default.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to
`Background` or `Foreground` in the `-d` option. For example:
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in
the -d option.
For example:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
@ -281,9 +283,12 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron
### Deleting just a ReplicaSet
You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option.
You can delete a ReplicaSet without affecting any of its Pods using
[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete)
with the `--cascade=orphan` option.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
For example:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
@ -295,7 +300,8 @@ Once the original is deleted, you can create a new ReplicaSet to replace it. As
as the old and new `.spec.selector` are the same, then the new one will adopt the old Pods.
However, it will not make any effort to make existing Pods match a new, different pod template.
To update Pods to a new spec in a controlled way, use a
[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as ReplicaSets do not support a rolling update directly.
[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as
ReplicaSets do not support a rolling update directly.
### Isolating Pods from a ReplicaSet
@ -310,17 +316,19 @@ ensures that a desired number of Pods with a matching label selector are availab
When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to
prioritize scaling down pods based on the following general algorithm:
1. Pending (and unschedulable) pods are scaled down first
2. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
the pod with the lower value will come first.
3. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
4. If the pods' creation times differ, the pod that was created more recently
comes before the older pod (the creation times are bucketed on an integer log scale
when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)
1. Pending (and unschedulable) pods are scaled down first
1. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
the pod with the lower value will come first.
1. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
1. If the pods' creation times differ, the pod that was created more recently
comes before the older pod (the creation times are bucketed on an integer log scale
when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)
If all of the above match, then selection is random.
### Pod deletion cost
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost)
@ -344,6 +352,7 @@ This feature is beta and enabled by default. You can disable it using the
{{< /note >}}
#### Example Use Case
The different pods of an application could have different utilization levels. On scale down, the application
may prefer to remove the pods with lower utilization. To avoid frequently updating the pods, the application
should update `controller.kubernetes.io/pod-deletion-cost` once before issuing a scale down (setting the
@ -387,12 +396,17 @@ As such, it is recommended to use Deployments when you want ReplicaSets.
### Bare Pods
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node such as Kubelet.
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or
terminated for any reason, such as in the case of node failure or disruptive node maintenance,
such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your
application requires only a single Pod. Think of it similarly to a process supervisor, only it
supervises multiple Pods across multiple nodes instead of individual processes on a single node. A
ReplicaSet delegates local container restarts to some agent on the node such as Kubelet.
### Job
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are expected to terminate on their own
(that is, batch jobs).
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are
expected to terminate on their own (that is, batch jobs).
### DaemonSet
@ -402,12 +416,12 @@ to a machine lifetime: the Pod needs to be running on the machine before other P
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
### ReplicationController
ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/).
ReplicaSets are the successors to [ReplicationControllers](/docs/concepts/workloads/controllers/replicationcontroller/).
The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based
selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
As such, ReplicaSets are preferred over ReplicationControllers.
## {{% heading "whatsnext" %}}
* Learn about [Pods](/docs/concepts/workloads/pods).
@ -419,3 +433,4 @@ As such, ReplicaSets are preferred over ReplicationControllers
object definition to understand the API for replica sets.
* Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how
you can use it to manage application availability during disruptions.
View File
@ -39,10 +39,18 @@ that provides a set of stateless replicas.
## Limitations
* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
* StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is possible to scale the StatefulSet down to 0 prior to deletion.
* The storage for a given Pod must either be provisioned by a
[PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/README.md)
based on the requested `storage class`, or pre-provisioned by an admin.
* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the
StatefulSet. This is done to ensure data safety, which is generally more valuable than an
automatic purge of all related StatefulSet resources.
* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services)
to be responsible for the network identity of the Pods. You are responsible for creating this
Service.
* StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is
deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is
possible to scale the StatefulSet down to 0 prior to deletion.
* When using [Rolling Updates](#rolling-updates) with the default
[Pod Management Policy](#pod-management-policies) (`OrderedReady`),
it's possible to get into a broken state that requires
@ -108,18 +116,24 @@ In the above example:
* A Headless Service, named `nginx`, is used to control the network domain.
* The StatefulSet, named `web`, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
* The `volumeClaimTemplates` will provide stable storage using [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a PersistentVolume Provisioner.
* The `volumeClaimTemplates` will provide stable storage using
[PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a
PersistentVolume Provisioner.
The name of a StatefulSet object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
### Pod Selector
You must set the `.spec.selector` field of a StatefulSet to match the labels of its `.spec.template.metadata.labels`. Failing to specify a matching Pod Selector will result in a validation error during StatefulSet creation.
You must set the `.spec.selector` field of a StatefulSet to match the labels of its
`.spec.template.metadata.labels`. Failing to specify a matching Pod Selector will result in a
validation error during StatefulSet creation.
### Volume Claim Templates
You can set the `.spec.volumeClaimTemplates` which can provide stable storage using [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a PersistentVolume Provisioner.
You can set the `.spec.volumeClaimTemplates` which can provide stable storage using
[PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a PersistentVolume
Provisioner.
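A sketch of such a template, mirroring the storage described in the nginx example above (the claim name and StorageClass are illustrative), might look like the following; this fragment sits under the StatefulSet's `.spec`:

```yaml
volumeClaimTemplates:
- metadata:
    name: www
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "my-storage-class"   # assumes this StorageClass exists in the cluster
    resources:
      requests:
        storage: 1Gi
```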
### Minimum ready seconds
@ -128,9 +142,11 @@ You can set the `.spec.volumeClaimTemplates` which can provide stable storage u
`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly
created Pod should be ready without any of its containers crashing, for it to be considered available.
Please note that this feature is beta and enabled by default. Please opt out by unsetting the
StatefulSetMinReadySeconds flag, if you don't
want this feature to be enabled. This field defaults to 0 (the Pod will be considered
available as soon as it is ready). To learn more about when a Pod is considered ready, see
[Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
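For example, a sketch of setting this field on a StatefulSet (the value of 10 seconds is illustrative) might look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  minReadySeconds: 10   # a new Pod must be Ready for 10 seconds before it counts as available
  # ... serviceName, replicas, selector, template, and so on
```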
## Pod Identity
@ -166,8 +182,8 @@ remembered and reused, even after the Pod is running, for at least a few seconds
If you need to discover Pods promptly after they are created, you have a few options:
- Query the Kubernetes API directly (for example, using a watch) rather than relying on DNS lookups.
- Decrease the time of caching in your Kubernetes DNS provider (typically this means editing the
config map for CoreDNS, which currently caches for 30 seconds).
As mentioned in the [limitations](#limitations) section, you are responsible for
creating the [Headless Service](/docs/concepts/services-networking/service/#headless-services)
@ -189,7 +205,9 @@ Cluster Domain will be set to `cluster.local` unless
### Stable Storage
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one
PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume
with a StorageClass of `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that the PersistentVolumes associated with the
@ -210,7 +228,9 @@ the StatefulSet.
* Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
* Before a Pod is terminated, all of its successors must be completely shut down.
The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. This practice
is unsafe and strongly discouraged. For further explanation, please refer to
[force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
When the nginx example above is created, three Pods will be deployed in the order
web-0, web-1, web-2. web-1 will not be deployed before web-0 is
@ -256,7 +276,8 @@ annotations for the Pods in a StatefulSet. There are two possible values:
create new Pods that reflect modifications made to a StatefulSet's `.spec.template`.
`RollingUpdate`
: The `RollingUpdate` update strategy implements automated, rolling update for the Pods in a
StatefulSet. This is the default update strategy.
## Rolling Updates
@ -299,7 +320,7 @@ unavailable Pod in the range `0` to `replicas - 1`, it will be counted towards
{{< note >}}
The `maxUnavailable` field is in Alpha stage and it is honored only by API servers
that are running with the `MaxUnavailableStatefulSet`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled.
{{< /note >}}
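If the feature gate is enabled, a sketch of what setting this field might look like (the value of 2 is illustrative) is:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2   # honored only with the MaxUnavailableStatefulSet feature gate enabled
  # ... other StatefulSet fields
```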
@ -375,8 +396,8 @@ spec:
...
```
The StatefulSet {{<glossary_tooltip text="controller" term_id="controller">}} adds
[owner references](/docs/concepts/overview/working-with-objects/owners-dependents/#owner-references-in-object-specifications)
to its PVCs, which are then deleted by the {{<glossary_tooltip text="garbage collector"
term_id="garbage-collection">}} after the Pod is terminated. This enables the Pod to
cleanly unmount all volumes before the PVCs are deleted (and before the backing PV and

View File

@ -320,12 +320,12 @@ in the Pod Lifecycle documentation.
* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
configure different Pods with different container runtime configurations.
* Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions.
* Pod is a top-level resource in the Kubernetes REST API.
The {{< api-reference page="workload-resources/pod-v1" >}}
object definition describes the object in detail.
* [The Distributed System Toolkit: Patterns for Composite Containers](/blog/2015/06/the-distributed-system-toolkit-patterns/) explains common layouts for Pods with more than one container.
* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}), you can read about the prior art, including:

View File

@ -28,7 +28,7 @@ Init containers are exactly like regular containers, except:
* Init containers always run to completion.
* Each init container must complete successfully before the next one starts.
If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.
To specify an init container for a Pod, add the `initContainers` field into
@ -115,7 +115,7 @@ kind: Pod
metadata:
name: myapp-pod
labels:
app.kubernetes.io/name: MyApp
spec:
containers:
- name: myapp-container
@ -159,7 +159,7 @@ The output is similar to this:
Name: myapp-pod
Namespace: default
[...]
Labels: app.kubernetes.io/name=MyApp
Status: Pending
[...]
Init Containers:

View File

@ -1,421 +0,0 @@
---
title: Pod Topology Spread Constraints
content_type: concept
weight: 40
---
<!-- overview -->
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
<!-- body -->
## Prerequisites
### Node Labels
Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in. For example, a Node might have labels: `node=node1,zone=us-east-1a,region=us-east-1`
Suppose you have a 4-node cluster with the following labels:
```
NAME STATUS ROLES AGE VERSION LABELS
node1 Ready <none> 4m26s v1.16.0 node=node1,zone=zoneA
node2 Ready <none> 3m58s v1.16.0 node=node2,zone=zoneA
node3 Ready <none> 3m17s v1.16.0 node=node3,zone=zoneB
node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB
```
Then the cluster is logically viewed as below:
{{<mermaid>}}
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/labels-annotations-taints/) that are created and populated automatically on most clusters.
## Spread Constraints for Pods
### API
The API field `pod.spec.topologySpreadConstraints` is defined as below:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
topologySpreadConstraints:
- maxSkew: <integer>
minDomains: <integer>
topologyKey: <string>
whenUnsatisfiable: <string>
labelSelector: <object>
```
You can define one or more `topologySpreadConstraints` entries to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
- **maxSkew** describes the degree to which Pods may be unevenly distributed.
It must be greater than zero. Its semantics differ according to the value of `whenUnsatisfiable`:
- when `whenUnsatisfiable` equals "DoNotSchedule", `maxSkew` is the maximum
permitted difference between the number of matching pods in the target
topology and the global minimum
(the minimum number of pods that match the label selector in a topology domain.
For example, if you have 3 zones with 0, 2 and 3 matching pods respectively,
the global minimum is 0).
- when `whenUnsatisfiable` equals "ScheduleAnyway", the scheduler gives higher
precedence to topologies that would help reduce the skew.
- **minDomains** indicates a minimum number of eligible domains.
A domain is a particular instance of a topology. An eligible domain is a domain whose
nodes match the node selector.
- The value of `minDomains` must be greater than 0, when specified.
- When the number of eligible domains with matching topology keys is less than `minDomains`,
Pod topology spread treats "global minimum" as 0, and then the calculation of `skew` is performed.
The "global minimum" is the minimum number of matching Pods in an eligible domain,
or zero if the number of eligible domains is less than `minDomains`.
- When the number of eligible domains with matching topology keys equals or is greater than
`minDomains`, this value has no effect on scheduling.
- When `minDomains` is nil, the constraint behaves as if `minDomains` is 1.
- When `minDomains` is not nil, the value of `whenUnsatisfiable` must be "`DoNotSchedule`".
{{< note >}}
The `minDomains` field is an alpha field added in 1.24. You have to enable the
`MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
in order to use it.
{{< /note >}}
- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
- `DoNotSchedule` (default) tells the scheduler not to schedule it.
- `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
### Example: One TopologySpreadConstraint
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:
{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
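The referenced manifest is not shown here; a sketch of what such a one-constraint Pod spec might contain (the `foo: bar` labels and the container image are illustrative) is:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.1   # illustrative container image
```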
`topologyKey: zone` implies the even distribution will only be applied to nodes that have the label pair "zone:&lt;any value&gt;" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the incoming Pod stay pending if it can't satisfy the constraint.
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed into "zoneB":
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
OR
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n3
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can tweak the Pod spec to meet various kinds of requirements:
- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed into "zoneA" as well.
- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4".
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod is always schedulable (assuming other scheduling APIs are satisfied). However, it is preferred to place it onto a topology domain that has fewer matching Pods. (Be aware that this preference is jointly normalized with other internal scheduling priorities, such as resource usage ratio.)
### Example: Multiple TopologySpreadConstraints
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can use 2 TopologySpreadConstraints to control how the Pods spread across both zone and node:
{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
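A sketch of the two constraints such a manifest might carry in the Pod's `.spec` (the `foo: bar` labels are illustrative) is:

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      foo: bar
- maxSkew: 1
  topologyKey: node
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      foo: bar
```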
In this case, to match the first constraint, the incoming Pod can only be placed into "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". The results of the two constraints are ANDed, so the only viable option is to place it on "node4".
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p4(Pod) --> n3(Node3)
p5(Pod) --> n3
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n1
p3(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be placed into "zoneB"; while in terms of the second constraint, "mypod" can only be placed onto "node2". The joint result of "zoneB" and "node2" returns nothing.
To overcome this situation, you can either increase the `maxSkew` or modify one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`.
### Interaction With Node Affinity and Node Selectors
The scheduler will skip the non-matching nodes from the skew calculations if the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined.
### Example: TopologySpreadConstraints with NodeAffinity
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
{{<mermaid>}}
graph BT
subgraph "zoneC"
n5(Node5)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n5 k8s;
class zoneC cluster;
{{< /mermaid >}}
and you know that "zoneC" must be excluded. In this case, you can compose the YAML as below, so that "mypod" will be placed into "zoneB" instead of "zoneC". Similarly, `spec.nodeSelector` is also respected.
{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
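A sketch of how such a manifest might combine a spread constraint with `nodeAffinity` to exclude "zoneC" (the labels and the container image are illustrative) is:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: NotIn
            values:
            - zoneC
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.1   # illustrative container image
```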
The scheduler doesn't have prior knowledge of all the zones or other topology domains that a cluster has. They are determined from the existing nodes in the cluster. This could lead to a problem in autoscaled clusters, when a node pool (or node group) is scaled to zero nodes and the user is expecting them to scale up, because, in this case, those topology domains won't be considered until there is at least one node in them.
### Other Noticeable Semantics
There are some implicit conventions worth noting here:
- Only Pods in the same namespace as the incoming Pod can be matching candidates.
- The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that:
1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
2. the incoming Pod has no chance of being scheduled onto such nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster; it will be bypassed due to the absence of label key "zone".
- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed into "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - zoneA still has 2 Pods which hold label {foo:bar}, and zoneB has 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend that the workload's `topologySpreadConstraints[*].labelSelector` match its own labels.
### Cluster-level default constraints
It is possible to set default topology spread constraints for a cluster. Default
topology spread constraints are applied to a Pod if, and only if:
- It doesn't define any constraints in its `.spec.topologySpreadConstraints`.
- It belongs to a service, replication controller, replica set or stateful set.
Default constraints can be set as part of the `PodTopologySpread` plugin args
in a [scheduling profile](/docs/reference/scheduling/config/#profiles).
The constraints are specified with the same [API above](#api), except that
`labelSelector` must be empty. The selectors are calculated from the services,
replication controllers, replica sets or stateful sets that the Pod belongs to.
An example configuration might look like follows:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
pluginConfig:
- name: PodTopologySpread
args:
defaultConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
defaultingType: List
```
{{< note >}}
[`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
is disabled by default. It's recommended to use `PodTopologySpread` to achieve similar
behavior.
{{< /note >}}
#### Built-in default constraints {#internal-default-constraints}
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
If you don't configure any cluster-level default constraints for pod topology spreading,
then kube-scheduler acts as if you specified the following default topology constraints:
```yaml
defaultConstraints:
- maxSkew: 3
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
- maxSkew: 5
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: ScheduleAnyway
```
Also, the legacy `SelectorSpread` plugin, which provides an equivalent behavior,
is disabled by default.
{{< note >}}
The `PodTopologySpread` plugin does not score the nodes that don't have
the topology keys specified in the spreading constraints. This might result
in a different default behavior compared to the legacy `SelectorSpread` plugin when
using the default topology constraints.
If your nodes are not expected to have **both** `kubernetes.io/hostname` and
`topology.kubernetes.io/zone` labels set, define your own constraints
instead of using the Kubernetes defaults.
{{< /note >}}
If you don't want to use the default Pod spreading constraints for your cluster,
you can disable those defaults by setting `defaultingType` to `List` and leaving
empty `defaultConstraints` in the `PodTopologySpread` plugin configuration:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
pluginConfig:
- name: PodTopologySpread
args:
defaultConstraints: []
defaultingType: List
```
## Comparison with PodAffinity/PodAntiAffinity
In Kubernetes, directives related to "Affinity" control how Pods are
scheduled - more packed or more scattered.
- For `PodAffinity`, you can try to pack any number of Pods into qualifying
topology domain(s)
- For `PodAntiAffinity`, only one Pod can be scheduled into a
single topology domain.
For finer control, you can specify topology spread constraints to distribute
Pods across different topology domains - to achieve either high availability or
cost-saving. This can also help on rolling update workloads and scaling out
replicas smoothly. See
[Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation)
for more details.
## Known Limitations
- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in an imbalanced Pod distribution.
You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pod distribution.
- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
## {{% heading "whatsnext" %}}
- [Blog: Introducing PodTopologySpread](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/)
explains `maxSkew` in details, as well as bringing up some advanced usage examples.

View File

@ -278,7 +278,7 @@ For an example of adding a new localization, see the PR to enable
To guide other localization contributors, add a new
[`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of
[kubernetes/website](https://github.com/kubernetes/website/), where `**` is the two-letter language code.
For example, a German README file would be `README-de.md`.
Provide guidance to localization contributors in the localized `README-**.md` file.
@ -299,7 +299,7 @@ Once a localization meets requirements for workflow and minimum output, SIG Docs
- Enable language selection on the website
- Publicize the localization's availability through
[Cloud Native Computing Foundation](https://www.cncf.io/about/)(CNCF) channels, including the
[Kubernetes blog](/blog/).
## Translating content
@ -418,7 +418,7 @@ To collaborate on a localization branch:
`dev-<source version>-<language code>.<team milestone>`
For example, an approver on a German localization team opens the localization branch
`dev-1.12-de.1` directly against the `kubernetes/website` repository, based on the source branch for
Kubernetes v1.12.
2. Individual contributors open feature branches based on the localization branch.

View File

@ -37,7 +37,7 @@ the techniques described in
### Find out about upcoming features
To find out about upcoming features, attend the weekly SIG Release meeting (see
the [community](/community/) page for upcoming meetings)
and monitor the release-specific documentation
in the [kubernetes/sig-release](https://github.com/kubernetes/sig-release/)
repository. Each release has a sub-directory in the [/sig-release/tree/master/releases/](https://github.com/kubernetes/sig-release/tree/master/releases)

View File

@ -15,13 +15,15 @@ upcoming Kubernetes release, see
[Document a new feature](/docs/contribute/new-content/new-features/).
{{< /note >}}
To contribute new content pages or improve existing content pages, open a pull request (PR).
Make sure you follow all the requirements in the
[Before you begin](/docs/contribute/new-content/) section.
If your change is small, or you're unfamiliar with git, read
[Changes using GitHub](#changes-using-github) to learn how to edit a page.
If your changes are large, read [Work from a local fork](#fork-the-repo) to learn how to make
changes locally on your computer.
<!-- body -->
@ -63,38 +65,39 @@ class id1 k8s
Figure 1. Steps for opening a PR using GitHub.
1. On the page where you see the issue, select the pencil icon at the top right.
You can also scroll to the bottom of the page and select **Edit this page**.
1. Make your changes in the GitHub markdown editor.
1. Below the editor, fill in the **Propose file change** form.
In the first field, give your commit message a title.
In the second field, provide a description.
{{< note >}}
Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword)
in your commit message. You can add those to the pull request description later.
{{< /note >}}
1. Select **Propose file change**.
1. Select **Create pull request**.
1. The **Open a pull request** screen appears. Fill in the form:
- The **Subject** field of the pull request defaults to the commit summary.
You can change it if needed.
- The **Body** contains your extended commit message, if you have one,
and some template text. Add the
details the template text asks for, then delete the extra template text.
- Leave the **Allow edits from maintainers** checkbox selected.
{{< note >}}
PR descriptions are a great way to help reviewers understand your change.
For more information, see [Opening a PR](#open-a-pr).
{{</ note >}}
1. Select **Create pull request**.
### Addressing feedback in GitHub
@ -106,12 +109,12 @@ leave a comment with their GitHub username in it.
If a reviewer asks you to make changes:
1. Go to the **Files changed** tab.
1. Select the pencil (edit) icon on any files changed by the pull request.
1. Make the changes requested.
1. Commit the changes.
If you are waiting on a reviewer, reach out once every 7 days. You can also post a message in the
`#sig-docs` Slack channel.
When your review is complete, a reviewer merges your PR and your changes go live a few minutes later.
@ -120,7 +123,8 @@ When your review is complete, a reviewer merges your PR and your changes go live
If you're more experienced with git, or if your changes are larger than a few lines,
work from a local fork.
Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed
on your computer. You can also use a git UI application.
Figure 2 shows the steps to follow when you work from a local fork. The details for each step follow.
@ -157,75 +161,80 @@ Figure 2. Working from a local fork to make your changes.
### Fork the kubernetes/website repository
1. Navigate to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository.
1. Select **Fork**.
### Create a local clone and set the upstream
1. In a terminal window, clone your fork and update the [Docsy Hugo theme](https://github.com/google/docsy#readme):
```shell
git clone git@github.com:<github_username>/website.git
cd website
git submodule update --init --recursive --depth 1
```
1. Navigate to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote:
```shell
cd website

git remote add upstream https://github.com/kubernetes/website.git
```
1. Confirm your `origin` and `upstream` repositories:
```shell
git remote -v
```
Output is similar to:
```none
origin git@github.com:<github_username>/website.git (fetch)
origin git@github.com:<github_username>/website.git (push)
upstream https://github.com/kubernetes/website.git (fetch)
upstream https://github.com/kubernetes/website.git (push)
```
1. Fetch commits from your fork's `origin/main` and `kubernetes/website`'s `upstream/main`:
```shell
git fetch origin
git fetch upstream
```
This makes sure your local repository is up to date before you start making changes.
{{< note >}}
This workflow is different than the
[Kubernetes Community GitHub Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md).
You do not need to merge your local copy of `main` with `upstream/main` before pushing updates
to your fork.
{{< /note >}}
### Create a branch
1. Decide which branch to base your work on:
- For improvements to existing content, use `upstream/main`.
- For new content about existing features, use `upstream/main`.
- For localized content, use the localization's conventions. For more information, see
[localizing Kubernetes documentation](/docs/contribute/localization/).
- For new features in an upcoming Kubernetes release, use the feature branch. For more
information, see [documenting for a release](/docs/contribute/new-content/new-features/).
- For long-running efforts that multiple SIG Docs contributors collaborate on,
like content reorganization, use a specific feature branch created for that effort.
If you need help choosing a branch, ask in the `#sig-docs` Slack channel.
1. Create a new branch based on the branch identified in step 1. This example assumes the base
branch is `upstream/main`:
```shell
git checkout -b <my_new_branch> upstream/main
```
1. Make your changes using a text editor.
At any time, use the `git status` command to see what files you've changed.
@ -235,109 +244,116 @@ When you are ready to submit a pull request, commit your changes.
1. In your local repository, check which files you need to commit:
```shell
git status
```
Output is similar to:
```none
On branch <my_new_branch>
Your branch is up to date with 'origin/<my_new_branch>'.

Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: content/en/docs/contribute/new-content/contributing-content.md

no changes added to commit (use "git add" and/or "git commit -a")
```
1. Add the files listed under **Changes not staged for commit** to the commit:
```shell
git add <your_file_name>
```
Repeat this for each file.
1. After adding all the files, create a commit:
```shell
git commit -m "Your commit message"
```
{{< note >}}
Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword)
in your commit message. You can add those to the pull request
description later.
{{< /note >}}
1. Push your local branch and its new commit to your remote fork:
```shell
git push origin <my_new_branch>
```
### Preview your changes locally {#preview-locally}
It's a good idea to preview your changes locally before pushing them or opening a pull request.
A preview lets you catch build errors or markdown formatting problems.
You can either build the website's container image or run Hugo locally. Building the container
image is slower but displays [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/), which can
be useful for debugging.
{{< tabs name="tab_with_hugo" >}}
{{% tab name="Hugo in a container" %}}
{{< note >}}
The commands below use Docker as default container engine. Set the `CONTAINER_ENGINE` environment
variable to override this behaviour.
{{< /note >}}
1. Build the container image locally
_You only need this step if you are testing a change to the Hugo tool itself_
```shell
# Run this in a terminal (if required)
make container-image
```
1. Start Hugo in a container:
```shell
# Run this in a terminal
make container-serve
```
1. In a web browser, navigate to `http://localhost:1313`. Hugo watches the
changes and rebuilds the site as needed.
1. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`,
or close the terminal window.
{{% /tab %}}
{{% tab name="Hugo on the command line" %}}
Alternately, install and use the `hugo` command on your computer:
1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in
[`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/main/netlify.toml).
1. If you have not updated your website repository, the `website/themes/docsy` directory is empty.
The site cannot build without a local copy of the theme. To update the website theme, run:
```shell
git submodule update --init --recursive --depth 1
```
1. In a terminal, go to your Kubernetes website repository and start the Hugo server:
```shell
cd <path_to_your_repo>/website
hugo server --buildFuture
```
1. In a web browser, navigate to `http://localhost:1313`. Hugo watches the
changes and rebuilds the site as needed.
1. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`,
or close the terminal window.
{{% /tab %}}
{{< /tabs >}}
@ -345,6 +361,7 @@ Alternately, install and use the `hugo` command on your computer:
### Open a pull request from your fork to kubernetes/website {#open-a-pr}
Figure 3 shows the steps to open a PR from your fork to the K8s/website. The details follow.
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
@ -374,47 +391,55 @@ class first,second white
Figure 3. Steps to open a PR from your fork to the K8s/website.
1. In a web browser, go to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository.
1. Select **New Pull Request**.
1. Select **compare across forks**.
1. From the **head repository** drop-down menu, select your fork.
1. From the **compare** drop-down menu, select your branch.
1. Select **Create Pull Request**.
1. Add a description for your pull request:
- **Title** (50 characters or less): Summarize the intent of the change.
- **Description**: Describe the change in more detail.
- If there is a related GitHub issue, include `Fixes #12345` or `Closes #12345` in the
description. GitHub's automation closes the mentioned issue after merging the PR if used.
If there are other related PRs, link those as well.
- If you want advice on something specific, include any questions you'd like reviewers to
think about in your description.
1. Select the **Create pull request** button.
Congratulations! Your pull request is available in [Pull requests](https://github.com/kubernetes/website/pulls).
After opening a PR, GitHub runs automated tests and tries to deploy a preview using
[Netlify](https://www.netlify.com/).
- If the Netlify build fails, select **Details** for more information.
- If the Netlify build succeeds, selecting **Details** opens a staged version of the Kubernetes
website with your changes applied. This is how reviewers check your changes.
GitHub also automatically assigns labels to a PR, to help reviewers. You can add them too, if
needed. For more information, see [Adding and removing issue labels](/docs/contribute/review/for-approvers/#adding-and-removing-issue-labels).
### Addressing feedback locally
1. After making your changes, amend your previous commit:
```shell
git commit -a --amend
```
- `-a`: commits all changes
- `--amend`: amends the previous commit, rather than creating a new one
1. Update your commit message if needed.
1. Use `git push origin <my_new_branch>` to push your changes and re-run the Netlify tests.
{{< note >}}
If you use `git commit -m` instead of amending, you must [squash your commits](#squashing-commits)
before merging.
{{< /note >}}
#### Changes from reviewers
@ -422,89 +447,97 @@ Sometimes reviewers commit to your pull request. Before making any other changes
1. Fetch commits from your remote fork and rebase your working branch:
```shell
git fetch origin
git rebase origin/<your-branch-name>
```
1. After rebasing, force-push new changes to your fork:
```shell
git push --force-with-lease origin <your-branch-name>
```
#### Merge conflicts and rebasing
{{< note >}}
For more information, see [Git Branching - Basic Branching and Merging](https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging#_basic_merge_conflicts),
[Advanced Merging](https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging), or ask in the
`#sig-docs` Slack channel for help.
{{< /note >}}
If another contributor commits changes to the same file in another PR, it can create a merge
conflict. You must resolve all merge conflicts in your PR.
1. Update your fork and rebase your local branch:
```shell
git fetch origin
git rebase origin/<your-branch-name>
```
Then force-push the changes to your fork:
```shell
git push --force-with-lease origin <your-branch-name>
```
1. Fetch changes from `kubernetes/website`'s `upstream/main` and rebase your branch:
```shell
git fetch upstream
git rebase upstream/main
```
1. Inspect the results of the rebase:
```shell
git status
```
This results in a number of files marked as conflicted.
1. Open each conflicted file and look for the conflict markers: `>>>`, `<<<`, and `===`.
Resolve the conflict and delete the conflict marker.
{{< note >}}
For more information, see [How conflicts are presented](https://git-scm.com/docs/git-merge#_how_conflicts_are_presented).
{{< /note >}}
1. Add the files to the changeset:
```shell
git add <filename>
```

1. Continue the rebase:

```shell
git rebase --continue
```

1. Repeat steps 2 to 5 as needed.

After applying all commits, the `git status` command shows that the rebase is complete.

1. Force-push the branch to your fork:

```shell
git push --force-with-lease origin <your-branch-name>
```

The pull request no longer shows any conflicts.
### Squashing commits
{{< note >}}
For more information, see [Git Tools - Rewriting History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History),
or ask in the `#sig-docs` Slack channel for help.
{{< /note >}}
If your PR has multiple commits, you must squash them into a single commit before merging your PR.
You can check the number of commits on your PR's **Commits** tab or by running the `git log`
command locally.
{{< note >}}
This topic assumes `vim` as the command line text editor.
@ -512,79 +545,83 @@ This topic assumes `vim` as the command line text editor.
1. Start an interactive rebase:
```shell
git rebase -i HEAD~<number_of_commits_in_branch>
```
Squashing commits is a form of rebasing. The `-i` switch tells git you want to rebase interactively.
`HEAD~<number_of_commits_in_branch>` indicates how many commits to look at for the rebase.
Output is similar to:
```none
pick d875112ca Original commit
pick 4fa167b80 Address feedback 1
pick 7d54e15ee Address feedback 2

# Rebase 3d18sf680..7d54e15ee onto 3d183f680 (3 commands)

...

# These lines can be re-ordered; they are executed from top to bottom.
```
The first section of the output lists the commits in the rebase. The second section lists the
options for each commit. Changing the word `pick` changes the status of the commit once the rebase
is complete.
For the purposes of rebasing, focus on `squash` and `pick`.
{{< note >}}
For more information, see [Interactive Mode](https://git-scm.com/docs/git-rebase#_interactive_mode).
{{< /note >}}
1. Start editing the file.
Change the original text:
```none
pick d875112ca Original commit
pick 4fa167b80 Address feedback 1
pick 7d54e15ee Address feedback 2
```
To:
```none
pick d875112ca Original commit
squash 4fa167b80 Address feedback 1
squash 7d54e15ee Address feedback 2
```
This squashes commits `4fa167b80 Address feedback 1` and `7d54e15ee Address feedback 2` into
`d875112ca Original commit`, leaving only `d875112ca Original commit` as a part of the timeline.
1. Save and exit your file.
1. Push your squashed commit:
```shell
git push --force-with-lease origin <branch_name>
```
## Contribute to other repos
The [Kubernetes project](https://github.com/kubernetes) contains 50+ repositories. Many of these
repositories contain documentation: user-facing help text, error messages, API references or code
comments.
If you see text you'd like to improve, use GitHub to search all repositories in the Kubernetes
organization. This can help you figure out where to submit your issue or PR.
Each repository has its own processes and procedures. Before you file an issue or submit a PR,
read that repository's `README.md`, `CONTRIBUTING.md`, and `code-of-conduct.md`, if they exist.
Most repositories use issue and PR templates. Have a look through some open issues and PRs to get a feel for that team's processes. Make sure to fill out the templates with as much detail as possible when you file issues or PRs.
## {{% heading "whatsnext" %}}
- Read [Reviewing](/docs/contribute/review/reviewing-prs) to learn more about the review process.
@ -141,7 +141,7 @@ To add a label, leave a comment in one of the following formats:
To remove a label, leave a comment in one of the following formats:
- `/remove-<label-to-remove>` (for example, `/remove-help`)
- `/remove-<label-category> <label-to-remove>` (for example, `/remove-triage needs-information`)
In both cases, the label must already exist. If you try to add a label that does not exist, the command is
silently ignored.
@ -181,7 +181,7 @@ If the dead link issue is in the API or `kubectl` documentation, assign them `/p
### Blog issues
We expect [Kubernetes Blog](/blog/) entries to become
outdated over time. Therefore, we only maintain blog entries less than a year old.
If an issue is related to a blog entry that is more than one year old,
close the issue without fixing.
@ -10,9 +10,8 @@ weight: 10
Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls)
section in the Kubernetes website repository to see open pull requests.
Reviewing documentation pull requests is a great way to introduce yourself to the Kubernetes community. It helps you learn the code base and build trust with other contributors.
Before reviewing, it's a good idea to:
@ -28,7 +27,6 @@ Before reviewing, it's a good idea to:
Before you start a review:
- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
and ensure that you abide by it at all times.
- Be polite, considerate, and helpful.
@ -73,6 +71,7 @@ class third,fourth white
Figure 1. Review process steps.
1. Go to [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls).
You see a list of every open pull request against the Kubernetes website and docs.
@ -103,12 +102,20 @@ Figure 1. Review process steps.
4. Go to the **Files changed** tab to start your review.
1. Click on the `+` symbol beside the line you want to comment on.
1. Fill in any comments you have about the line and click either **Add single comment** (if you have only one comment to make) or **Start a review** (if you have multiple comments to make).
1. When finished, click **Review changes** at the top of the page. Here, you can add a summary of your review (and leave some positive comments for the contributor!). Please always use the "Comment" option when finishing your review.
- Avoid clicking the "Request changes" button when finishing your review.
If you want to block a PR from being merged before some further changes are made,
you can leave a "/hold" comment.
Mention why you are setting a hold, and optionally specify the conditions under
which the hold can be removed by you or other reviewers.
- Avoid clicking the "Approve" button when finishing your review.
Leaving a "/approve" comment is recommended most of the time.
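If you review from the command line, the same Prow commands can be left as an ordinary PR comment; a sketch using the GitHub CLI (the PR number and wording are placeholders):

```shell
# Hold the PR and explain why, so other reviewers know when the hold can be lifted
gh pr comment 12345 --repo kubernetes/website --body "/hold
Holding until the command output in the new section is verified; feel free to remove the hold once that is confirmed."
```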
## Reviewing checklist
@ -438,7 +438,7 @@ Note that the live editor doesn't recognize Hugo shortcodes.
### Example 1 - Pod topology spread constraints
Figure 6 shows the diagram appearing in the
[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/#node-labels)
page.
{{< mermaid >}}
@ -46,10 +46,6 @@ When you refer specifically to interacting with an API object, use [UpperCamelCa
When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence.
Don't split an API object name into separate words. For example, use PodTemplateList, not Pod Template List.
The following examples focus on capitalization. For more information about formatting API object names, review the related guidance on [Code Style](#code-style-inline-code).
{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
@ -187,6 +183,36 @@ Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.1
Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
{{< /table >}}
## Referring to Kubernetes API resources
This section talks about how we reference API resources in the documentation.
### Clarification about "resource"
Kubernetes uses the word "resource" to refer to API resources, such as `pod`, `deployment`, and so on. We also use "resource" to talk about CPU and memory requests and limits. Always refer to API resources as "API resources" to avoid confusion with CPU and memory resources.
### When to use Kubernetes API terminologies
The different Kubernetes API terminologies are:
- Resource type: the name used in the API URL (such as `pods`, `namespaces`)
- Resource: a single instance of a resource type (such as `pod`, `secret`)
- Object: a resource that serves as a "record of intent". An object is a desired state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
Always use "resource" or "object" when referring to an API resource in docs. For example, use "a `Secret` object" over just "a `Secret`".
### API resource names
Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as PascalCase, and code formatting.
For inline code in an HTML document, use the `<code>` tag. In a Markdown document, use the backtick (`` ` ``).
Don't split an API object name into separate words. For example, use `PodTemplateList`, not Pod Template List.
For more information about PascalCase and code formatting, please review the related guidance on [Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects) and [Use code style for inline code, commands, and API objects](/docs/contribute/style/style-guide/#code-style-inline-code).
For more information about Kubernetes API terminologies, please review the related guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/#standard-api-terminology).
## Code snippet formatting
### Don't include the command prompt
@ -361,7 +387,7 @@ Beware.
### Katacoda Embedded Live Environment
This button lets users run Minikube in their browser using the Katacoda Terminal.
It lowers the barrier of entry by allowing users to use Minikube with one click instead of going through the complete
Minikube and Kubectl installation process locally.
@ -77,7 +77,7 @@ operator to use or manage a cluster.
* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/)
* [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
* [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/)
* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/)
* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and
[kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/)
* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/)
@ -74,6 +74,10 @@ PUT | update
PATCH | patch
DELETE | delete (for individual resources), deletecollection (for collections)
{{< caution >}}
The `get`, `list` and `watch` verbs can all return the full details of a resource. In terms of the returned data they are equivalent. For example, `list` on `secrets` will still reveal the `data` attributes of any returned resources.
{{< /caution >}}
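As a hedged illustration (the namespace and ServiceAccount name below are placeholders), you can check whether a subject that was only granted `list` can still reach Secret data:

```shell
# If this prints "yes", the ServiceAccount can list Secrets in the namespace,
# and the listed objects include the base64-encoded data fields
kubectl auth can-i list secrets --namespace default \
  --as system:serviceaccount:default:example-sa
```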
Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example:
* [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/)
@ -9,7 +9,7 @@ weight: 95
<!-- overview -->
The tables below enumerate the configuration parameters on
[PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) objects, whether the field mutates
and/or validates pods, and how the configuration values map to the
[Pod Security Standards](/docs/concepts/security/pod-security-standards/).
@ -31,9 +31,9 @@ The fields enumerated in this table are part of the `PodSecurityPolicySpec`, whi
under the `.spec` field path.
<table class="no-word-break">
<caption style="display:none">Mapping PodSecurityPolicySpec fields to Pod Security Standards</caption>
<tbody>
<tr>
<th><code>PodSecurityPolicySpec</code></th>
<th>Type</th>
<th>Pod Security Standards Equivalent</th>
@ -54,19 +54,19 @@ under the `.spec` field path.
<td>
<p><b>Baseline</b>: subset of</p>
<ul>
<li><code>AUDIT_WRITE</code></li>
<li><code>CHOWN</code></li>
<li><code>DAC_OVERRIDE</code></li>
<li><code>FOWNER</code></li>
<li><code>FSETID</code></li>
<li><code>KILL</code></li>
<li><code>MKNOD</code></li>
<li><code>NET_BIND_SERVICE</code></li>
<li><code>SETFCAP</code></li>
<li><code>SETGID</code></li>
<li><code>SETPCAP</code></li>
<li><code>SETUID</code></li>
<li><code>SYS_CHROOT</code></li>
</ul>
<p><b>Restricted</b>: empty / undefined / nil OR a list containing <i>only</i> <code>NET_BIND_SERVICE</code>
</td>
@ -236,9 +236,9 @@ The [annotations](/docs/concepts/overview/working-with-objects/annotations/) enu
table can be specified under `.metadata.annotations` on the PodSecurityPolicy object.
<table class="no-word-break">
<caption style="display:none">Mapping PodSecurityPolicy annotations to Pod Security Standards</caption>
<tbody>
<tr>
<th><code>PSP Annotation</code></th>
<th>Type</th>
<th>Pod Security Standards Equivalent</th>
@ -54,8 +54,8 @@ it can't be both.
ClusterRoles have several uses. You can use a ClusterRole to:
1. define permissions on namespaced resources and be granted access within individual namespace(s)
1. define permissions on namespaced resources and be granted access across all namespaces
1. define permissions on cluster-scoped resources
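As a rough sketch of the first two uses (every name below is a placeholder), the same ClusterRole can be bound inside a single namespace or across all namespaces:

```shell
# Define the permissions once, as a cluster-scoped ClusterRole
kubectl create clusterrole secret-reader --verb=get,list,watch --resource=secrets

# Use 1: grant it only within the "dev" namespace
kubectl create rolebinding read-secrets --clusterrole=secret-reader \
  --serviceaccount=dev:ci-bot --namespace=dev

# Use 2: grant it across all namespaces
kubectl create clusterrolebinding read-secrets-global --clusterrole=secret-reader \
  --serviceaccount=dev:ci-bot
```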
If you want to define a role within a namespace, use a Role; if you want to define
@ -808,7 +808,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
availability during update per node.
See [Perform a Rolling Update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/).
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
[default spreading](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints).
- `DelegateFSGroupToCSIDriver`: If supported by the CSI driver, delegates the
role of applying `fsGroup` from a Pod's `securityContext` to the driver by
passing `fsGroup` through the NodeStageVolume and NodePublishVolume CSI calls.
@ -854,7 +854,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
to running pods.
- `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See
[Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
- `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts.
This feature gate exists in case any of your existing workloads depend on a
now-corrected fault where Kubernetes ignored exec probe timeouts. See
@ -995,7 +995,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `MemoryQoS`: Enable memory protection and usage throttle on pod / container using
cgroup v2 memory controller.
- `MinDomainsInPodTopologySpread`: Enable `minDomains` in Pod
[topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
Service instance.
- `MountContainers`: Enable using utility containers on host as the volume mounter.
@ -27,7 +27,7 @@ each Pod in the scheduling queue according to constraints and available
resources. The scheduler then ranks each valid Node and binds the Pod to a
suitable Node. Multiple different schedulers may be used within a cluster;
kube-scheduler is the reference implementation.
See [scheduling](/docs/concepts/scheduling-eviction/)
for more information about scheduling and the kube-scheduler component.
```
@ -90,7 +90,7 @@ kubelet [flags]
</tr>
<tr>
<td colspan="2">--authorization-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>AlwaysAllow</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
@ -187,27 +187,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--cni-bin-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>/opt/cni/bin</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A comma-separated list of full paths of directories in which to search for CNI plugin binaries. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
<td colspan="2">--cni-cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>/var/lib/cni/cache</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The full path of the directory in which CNI should store cache files. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
<td colspan="2">--cni-conf-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>/etc/cni/net.d</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; The full path of the directory in which to search for CNI config files. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
<td colspan="2">--config string</td>
</tr>
@ -230,20 +209,19 @@ kubelet [flags]
</tr>
<tr>
<td colspan="2">--container-runtime string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>remote</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The container runtime to use. Possible values: <code>docker</code>, <code>remote</code>. (DEPRECATED: will be removed in 1.27 as the only valid value is 'remote')</td>
</tr>
<tr>
<td colspan="2">--container-runtime-endpoint string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The endpoint of remote runtime service. Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows. Examples: <code>unix:///var/run/dockershim.sock</code>, <code>npipe:////./pipe/dockershim</code>.</td>
</tr>
<tr>
<td colspan="2">--contention-profiling</td>
</tr>
@ -276,7 +254,7 @@ kubelet [flags]
<td colspan="2">--cpu-manager-policy-options mapStringString</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of key=value CPU Manager policy options to use, to fine tune their behaviour. If not supplied, keep the default behaviour. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
@ -286,20 +264,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; CPU Manager reconciliation period. Examples: <code>10s</code>, or <code>1m</code>. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--docker-endpoint string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>unix:///var/run/docker.sock</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use this for the <code>docker</code> endpoint to communicate with. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
<td colspan="2">--dynamic-config-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The <code>DynamicKubeletConfig</code> feature gate must be enabled to pass this flag. (DEPRECATED: Feature DynamicKubeletConfig is deprecated in 1.22 and will not move to GA. It is planned to be removed from Kubernetes in the version 1.24 or later. Please use alternative ways to update kubelet configuration.)</td>
</tr>
<tr>
<td colspan="2">--enable-controller-attach-detach&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td>
</tr>
@ -398,13 +362,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">When set to <code>true</code>, hard eviction thresholds will be ignored while calculating node allocatable. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/">here</a> for more details. (DEPRECATED: will be removed in 1.24 or later)</td>
</tr>
<tr>
<td colspan="2">--experimental-check-node-capabilities-before-mount</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] if set to <code>true</code>, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount (DEPRECATED: will be removed in 1.24 or later, in favor of using CSI.)</td>
</tr>
<tr>
<td colspan="2">--experimental-kernel-memcg-notification</td>
</tr>
@ -412,13 +369,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use kernelMemcgNotification configuration, this flag will be removed in 1.24 or later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--experimental-log-sanitization bool</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)
</tr>
<tr>
<td colspan="2">--experimental-mounter-path string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>mount</code></td>
</tr>
@ -445,83 +395,76 @@ APIServerIdentity=true|false (ALPHA - default=false)<br/>
APIServerTracing=true|false (ALPHA - default=false)<br/>
AllAlpha=true|false (ALPHA - default=false)<br/>
AllBeta=true|false (BETA - default=false)<br/>
AnyVolumeDataSource=true|false (BETA - default=true)<br/>
AppArmor=true|false (BETA - default=true)<br/>
CPUManager=true|false (BETA - default=true)<br/>
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br/>
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br/>
CPUManagerPolicyOptions=true|false (BETA - default=true)<br/>
CSIInlineVolume=true|false (BETA - default=true)<br/>
CSIMigration=true|false (BETA - default=true)<br/>
CSIMigrationAWS=true|false (BETA - default=true)<br/>
CSIMigrationAzureDisk=true|false (BETA - default=true)<br/>
CSIMigrationAzureFile=true|false (BETA - default=true)<br/>
CSIMigrationGCE=true|false (BETA - default=true)<br/>
CSIMigrationOpenStack=true|false (BETA - default=true)<br/>
CSIMigrationPortworx=true|false (ALPHA - default=false)<br/>
CSIMigrationRBD=true|false (ALPHA - default=false)<br/>
CSIMigrationvSphere=true|false (BETA - default=false)<br/>
CSIStorageCapacity=true|false (BETA - default=true)<br/>
CSIVolumeHealth=true|false (ALPHA - default=false)<br/>
CSRDuration=true|false (BETA - default=true)<br/>
ControllerManagerLeaderMigration=true|false (BETA - default=true)<br/>
ContextualLogging=true|false (ALPHA - default=false)<br/>
CronJobTimeZone=true|false (ALPHA - default=false)<br/>
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br/>
CustomResourceValidationExpressions=true|false (ALPHA - default=false)<br/>
DaemonSetUpdateSurge=true|false (BETA - default=true)<br/>
DefaultPodTopologySpread=true|false (BETA - default=true)<br/>
DelegateFSGroupToCSIDriver=true|false (BETA - default=true)<br/>
DevicePlugins=true|false (BETA - default=true)<br/>
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)<br/>
DisableCloudProviders=true|false (ALPHA - default=false)<br/>
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)<br/>
DownwardAPIHugePages=true|false (BETA - default=true)<br/>
EfficientWatchResumption=true|false (BETA - default=true)<br/>
EndpointSliceTerminatingCondition=true|false (BETA - default=true)<br/>
EphemeralContainers=true|false (BETA - default=true)<br/>
ExpandCSIVolumes=true|false (BETA - default=true)<br/>
ExpandInUsePersistentVolumes=true|false (BETA - default=true)<br/>
ExpandPersistentVolumes=true|false (BETA - default=true)<br/>
ExpandedDNSConfig=true|false (ALPHA - default=false)<br/>
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)<br/>
GRPCContainerProbe=true|false (BETA - default=true)<br/>
GracefulNodeShutdown=true|false (BETA - default=true)<br/>
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br/>
HPAContainerMetrics=true|false (ALPHA - default=false)<br/>
HPAScaleToZero=true|false (ALPHA - default=false)<br/>
HonorPVReclaimPolicy=true|false (ALPHA - default=false)<br/>
IdentifyPodOS=true|false (BETA - default=true)<br/>
InTreePluginAWSUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginGCEUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginRBDUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)<br/>
IndexedJob=true|false (BETA - default=true)<br/>
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)<br/>
JobReadyPods=true|false (BETA - default=true)<br/>
JobTrackingWithFinalizers=true|false (BETA - default=false)<br/>
KubeletCredentialProviders=true|false (BETA - default=true)<br/>
KubeletInUserNamespace=true|false (ALPHA - default=false)<br/>
KubeletPodResources=true|false (BETA - default=true)<br/>
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)<br/>
LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)<br/>
LocalStorageCapacityIsolation=true|false (BETA - default=true)<br/>
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)<br/>
LogarithmicScaleDown=true|false (BETA - default=true)<br/>
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br/>
MemoryManager=true|false (BETA - default=true)<br/>
MemoryQoS=true|false (ALPHA - default=false)<br/>
MinDomainsInPodTopologySpread=true|false (ALPHA - default=false)<br/>
MixedProtocolLBService=true|false (BETA - default=true)<br/>
NetworkPolicyEndPort=true|false (BETA - default=true)<br/>
NetworkPolicyStatus=true|false (ALPHA - default=false)<br/>
NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)<br/>
NodeSwap=true|false (ALPHA - default=false)<br/>
NonPreemptingPriority=true|false (BETA - default=true)<br/>
OpenAPIEnums=true|false (BETA - default=true)<br/>
OpenAPIV3=true|false (BETA - default=true)<br/>
PodAffinityNamespaceSelector=true|false (BETA - default=true)<br/>
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br/>
PodDeletionCost=true|false (BETA - default=true)<br/>
PodOverhead=true|false (BETA - default=true)<br/>
PodSecurity=true|false (BETA - default=true)<br/>
PreferNominatedNode=true|false (BETA - default=true)<br/>
ProbeTerminationGracePeriod=true|false (BETA - default=false)<br/>
ProcMountType=true|false (ALPHA - default=false)<br/>
ProxyTerminatingEndpoints=true|false (ALPHA - default=false)<br/>
@ -529,25 +472,22 @@ QOSReserved=true|false (ALPHA - default=false)<br/>
ReadWriteOncePod=true|false (ALPHA - default=false)<br/>
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br/>
RemainingItemCount=true|false (BETA - default=true)<br/>
RemoveSelfLink=true|false (BETA - default=true)<br/>
RotateKubeletServerCertificate=true|false (BETA - default=true)<br/>
SeccompDefault=true|false (ALPHA - default=false)<br/>
ServerSideFieldValidation=true|false (ALPHA - default=false)<br/>
ServiceIPStaticSubrange=true|false (ALPHA - default=false)<br/>
ServiceInternalTrafficPolicy=true|false (BETA - default=true)<br/>
ServiceLBNodePortControl=true|false (BETA - default=true)<br/>
ServiceLoadBalancerClass=true|false (BETA - default=true)<br/>
SizeMemoryBackedVolumes=true|false (BETA - default=true)<br/>
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)<br/>
StatefulSetMinReadySeconds=true|false (BETA - default=true)<br/>
StorageVersionAPI=true|false (ALPHA - default=false)<br/>
StorageVersionHash=true|false (BETA - default=true)<br/>
SuspendJob=true|false (BETA - default=true)<br/>
TopologyAwareHints=true|false (BETA - default=true)<br/>
TopologyManager=true|false (BETA - default=true)<br/>
VolumeCapacityPriority=true|false (ALPHA - default=false)<br/>
WinDSR=true|false (ALPHA - default=false)<br/>
WinOverlay=true|false (BETA - default=true)<br/>
WindowsHostProcessContainers=true|false (BETA - default=true)<br/>
csiMigrationRBD=true|false (ALPHA - default=false)<br/>
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
@ -628,18 +568,11 @@ csiMigrationRBD=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of <code>--image-gc-high-threshold</code>. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--image-pull-progress-deadline duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>1m0s</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If no pulling progress is made before this deadline, the image pulling will be cancelled. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
<td colspan="2">--image-service-endpoint string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] The endpoint of remote image service. If not specified, it will be the same with <code>--container-runtime-endpoint</code> by default. Unix Domain Sockets are supported on Linux, while npipe and TCP endpoints are supported on Windows. Examples: <code>unix:///var/run/dockershim.sock</code>, <code>npipe:////./pipe/dockershim</code></td>
</tr>
<tr>
@ -866,20 +799,6 @@ csiMigrationRBD=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Minimum age for an unused image before it is garbage collected. Examples: <code>'300ms'</code>, <code>'10s'</code> or <code>'2h45m'</code>. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--network-plugin string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The name of the network plugin to be invoked for various events in kubelet/pod lifecycle. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
<td colspan="2">--network-plugin-mtu int32</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The MTU to be passed to the network plugin, to override the default. Set to <code>0</code> to use the default 1460 MTU. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
<td colspan="2">--node-ip string</td>
</tr>
@ -908,13 +827,6 @@ csiMigrationRBD=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with <code>nodeMonitorGracePeriod</code> in Node controller. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--non-masquerade-cidr string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>10.0.0.0/8</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Traffic to IPs outside this range will use IP masquerade. Set to <code>'0.0.0.0/0'</code> to never masquerade. (DEPRECATED: will be removed in a future version)</td>
</tr>
<tr>
<td colspan="2">--one-output</td>
</tr>
@ -999,13 +911,6 @@ csiMigrationRBD=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">The read-only port for the kubelet to serve on with no authentication/authorization (set to <code>0</code> to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--really-crash-for-testing</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true, when panics occur crash. Intended for testing. (DEPRECATED: will be removed in a future version.)</td>
</tr>
<tr>
<td colspan="2">--register-node&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td>
</tr>
@ -1105,7 +1010,7 @@ csiMigrationRBD=true|false (ALPHA - default=false)<br/>
</tr>
<tr>
<td colspan="2">--seccomp-default string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; Enable the use of <code>RuntimeDefault</code> as the default seccomp profile for all workloads. The <code>SeccompDefault</code> feature gate must be enabled to allow this flag, which is disabled by default.</td>
@ -1187,10 +1092,10 @@ csiMigrationRBD=true|false (ALPHA - default=false)<br/>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.<br/>
Preferred values:
`TLS_AES_128_GCM_SHA256`, `TLS_AES_256_GCM_SHA384`, `TLS_CHACHA20_POLY1305_SHA256`, `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA`, `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`, `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305`, `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256`, `TLS_RSA_WITH_AES_128_CBC_SHA`, `TLS_RSA_WITH_AES_128_GCM_SHA256`, `TLS_RSA_WITH_AES_256_CBC_SHA`, `TLS_RSA_WITH_AES_256_GCM_SHA384`<br/>
Insecure values:<br/>
`TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_ECDSA_WITH_RC4_128_SHA`, `TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_RSA_WITH_RC4_128_SHA`, `TLS_RSA_WITH_3DES_EDE_CBC_SHA`, `TLS_RSA_WITH_AES_128_CBC_SHA256`, `TLS_RSA_WITH_RC4_128_SHA`.<br/>
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)
</tr>
<tr>
@ -1237,7 +1142,7 @@ TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_E
</tr>
<tr>
<td colspan="2">--vmodule &lt;A list of 'pattern=N' strings&gt;</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of <code>pattern=N</code> settings for file-filtered logging</td>
@ -113,7 +113,7 @@ components by adding customized setting or overriding kubeadm default settings.<
</span></pre><p>The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed
in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.</p>
<p>See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or
https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
for kube proxy official documentation.</p>
<pre style="background-color:#fff"><span style="color:#000;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>kubelet.config.k8s.io/v1beta1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>KubeletConfiguration<span style="color:#bbb">
@ -121,7 +121,7 @@ for kube proxy official documentation.</p>
</span></pre><p>The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances
deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.</p>
<p>See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or
https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
for kubelet official documentation.</p>
<p>Here is a fully populated example of a single YAML file containing multiple
configuration types to be used during a <code>kubeadm init</code> run.</p>
@ -1307,4 +1307,4 @@ current node is registered.</p>
</tr>
</tbody>
</table>
@ -122,7 +122,7 @@ components by adding customized setting or overriding kubeadm default settings.<
</span></pre><p>The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances
deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.</p>
<p>See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or
https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
for kube-proxy official documentation.</p>
<pre style="background-color:#fff"><span style="color:#000;font-weight:bold">apiVersion</span>:<span style="color:#bbb"> </span>kubelet.config.k8s.io/v1beta1<span style="color:#bbb">
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">kind</span>:<span style="color:#bbb"> </span>KubeletConfiguration<span style="color:#bbb">
@ -130,7 +130,7 @@ for kube-proxy official documentation.</p>
</span></pre><p>The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances
deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.</p>
<p>See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or
https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
for kubelet official documentation.</p>
<p>Here is a fully populated example of a single YAML file containing multiple
configuration types to be used during a <code>kubeadm init</code> run.</p>
@ -555,7 +555,7 @@ by kubeadm during <code>kubeadm join</code>.</p>
<code>int32</code>
</td>
<td>
<p><code>bindPort</code> sets the secure port for the API Server to bind to.
Defaults to 6443.</p>
</td>
</tr>
@ -1159,7 +1159,7 @@ This information will be annotated to the Node API object, for later re-use</p>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#taint-v1-core"><code>[]core/v1.Taint</code></a>
</td>
<td>
<p><code>taints</code> specifies the taints the Node API object should be registered with.
If this field is unset, i.e. nil, in the <code>kubeadm init</code> process it will be defaulted
with a control-plane taint for control-plane nodes.
If you don't want to taint your control-plane node, set this field to an empty list,
@ -22,6 +22,6 @@ When an `Eviction` object is created, the API server terminates the Pod.
API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).
API-initiated eviction is not the same as [node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
* See [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/) for more information.
@ -2,9 +2,10 @@
title: Extensions
id: Extensions
date: 2019-02-01
full_link: /docs/concepts/extend-kubernetes/#extensions
short_description: >
Extensions are software components that extend and deeply integrate with Kubernetes to support new types of hardware.
aka:
tags:
@ -15,4 +16,6 @@ tags:
<!--more-->
Many cluster administrators use a hosted or distribution instance of Kubernetes. These clusters come with extensions pre-installed. As a result, most Kubernetes users will not need to install [extensions](/docs/concepts/extend-kubernetes/) and even fewer users will need to author new ones.
@ -2,7 +2,7 @@
title: Garbage Collection
id: garbage-collection
date: 2021-07-07
full_link: /docs/concepts/architecture/garbage-collection/
short_description: >
A collective term for the various mechanisms Kubernetes uses to clean up cluster
resources.
@ -12,13 +12,16 @@ tags:
- fundamental
- operation
---
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up cluster resources.
<!--more-->
Kubernetes uses garbage collection to clean up resources like
[unused containers and images](/docs/concepts/architecture/garbage-collection/#containers-images),
[failed Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection),
[objects owned by the targeted resource](/docs/concepts/overview/working-with-objects/owners-dependents/),
[completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/), and resources
that have expired or failed.
@ -68,6 +68,11 @@ kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
kubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig
# configure the URL to a proxy server to use for requests made by this client in the kubeconfig
kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url
# add a new user to your kubeconf that supports basic auth
kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
@ -182,6 +187,9 @@ kubectl get pods --selector=app=cassandra -o \
kubectl get configmap myconfig \
-o jsonpath='{.data.ca\.crt}'
# Retrieve a base64 encoded value with dashes instead of underscores.
kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'
# Get all worker nodes (use a selector to exclude results that have a label
# named 'node-role.kubernetes.io/control-plane')
kubectl get node --selector='!node-role.kubernetes.io/control-plane'
@ -381,6 +389,9 @@ kubectl cluster-info # Display
kubectl cluster-info dump # Dump current cluster state to stdout
kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state
# View existing taints on current nodes.
kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect
# If a taint with that key and effect already exists, its value is replaced as specified.
kubectl taint nodes foo dedicated=special-user:NoSchedule
```
@ -618,6 +618,16 @@ or updating objects that contain Pod templates, such as Deployments, Jobs, State
See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission)
for more information.
### kubernetes.io/psp (deprecated) {#kubernetes-io-psp}
Example: `kubernetes.io/psp: restricted`
This annotation is only relevant if you are using [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/).
When the PodSecurityPolicy admission controller admits a Pod, the admission controller
modifies the Pod to have this annotation.
The value of the annotation is the name of the PodSecurityPolicy that was used for validation.
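As a quick way to see this in practice (the Pod name is a placeholder), you can read the annotation back from a Pod that was admitted while a PodSecurityPolicy was in effect:

```shell
# Prints the name of the PodSecurityPolicy that validated the Pod, if any
kubectl get pod my-pod -o go-template='{{index .metadata.annotations "kubernetes.io/psp"}}'
```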
### seccomp.security.alpha.kubernetes.io/pod (deprecated) {#seccomp-security-alpha-kubernetes-io-pod}
This annotation has been deprecated since Kubernetes v1.19 and will become non-functional in v1.25.
@ -123,7 +123,7 @@ extension points:
and [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity).
Extension points: `filter`, `score`.
- `PodTopologySpread`: Implements
[Pod topology spread](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
Extension points: `preFilter`, `filter`, `preScore`, `score`.
- `NodeUnschedulable`: Filters out nodes that have `.spec.unschedulable` set to
true.
@ -17,7 +17,7 @@ Generate keys and certificate signing requests
Generates keys and certificate signing requests (CSRs) for all the certificates required to run the control plane. This command also generates partial kubeconfig files with private key data in the "users &gt; user &gt; client-key-data" field, and for each kubeconfig file an accompanying ".csr" file is created.
This command is designed for use in [Kubeadm External CA Mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing.
The PEM encoded signed certificates should then be saved alongside the key files, using ".crt" as the file extension, or in the case of kubeconfig files, the PEM encoded signed certificate should be base64 encoded and added to the kubeconfig file in the "users &gt; user &gt; client-certificate-data" field.
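For example, a typical invocation might look like the following; the paths shown are the kubeadm defaults, so adjust them if your cluster uses different locations:

```shell
# Generate keys, CSRs, and partial kubeconfig files for an external CA to sign
sudo kubeadm certs generate-csr \
  --cert-dir /etc/kubernetes/pki \
  --kubeconfig-dir /etc/kubernetes
```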
@ -6,7 +6,9 @@ title: kubeadm init
content_type: concept
weight: 20
---
<!-- overview -->
This command initializes a Kubernetes control-plane node.
<!-- body -->
@ -26,12 +28,12 @@ following steps:
1. Generates a self-signed CA to set up identities for each component in the cluster. The user can provide their
own CA cert and/or key by dropping it in the cert directory configured via `--cert-dir`
(`/etc/kubernetes/pki` by default).
The APIServer certs will have additional SAN entries for any `--apiserver-cert-extra-sans` arguments, lowercased if necessary.
1. Writes kubeconfig files in `/etc/kubernetes/` for the kubelet, the controller-manager and the scheduler to use to connect to the API server, each with its own identity, as well as an additional kubeconfig file for administration named `admin.conf`.
1. Generates static Pod manifests for the API server,
controller-manager and scheduler. In case an external etcd is not provided,
@ -76,10 +78,12 @@ following steps:
Kubeadm allows you to create a control-plane node in phases using the `kubeadm init phase` command.
To view the ordered list of phases and sub-phases you can call `kubeadm init --help`. The list will be located at the top of the help screen and each phase will have a description next to it.
Note that by calling `kubeadm init` all of the phases and sub-phases will be executed in this exact order.
Some phases have unique flags, so if you want to have a look at the list of available options add `--help`, for example:
```shell
sudo kubeadm init phase control-plane controller-manager --help
```

@ -91,7 +95,8 @@ You can also use `--help` to see the list of sub-phases for a certain parent pha

```shell
sudo kubeadm init phase control-plane --help
```
`kubeadm init` also exposes a flag called `--skip-phases` that can be used to skip certain phases. The flag accepts a list of phase names and the names can be taken from the above ordered list.
An example:
@ -102,7 +107,10 @@

```shell
sudo kubeadm init phase etcd local --config=configfile.yaml
sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml
```
What this example would do is write the manifest files for the control plane and etcd in `/etc/kubernetes/manifests` based on the configuration in `configfile.yaml`. This allows you to modify the files and then skip these phases using `--skip-phases`. By calling the last command you will create a control plane node with the custom manifest files.
What this example would do is write the manifest files for the control plane and etcd in
`/etc/kubernetes/manifests` based on the configuration in `configfile.yaml`. This allows you to
modify the files and then skip these phases using `--skip-phases`. By calling the last command you
will create a control plane node with the custom manifest files.
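
For instance, to skip deploying the kube-proxy addon (say, because a CNI plugin will replace it), a sketch of the invocation could look like the following; the phase name is taken from the list printed by `kubeadm init --help` and may vary between kubeadm releases:

```shell
# Skip only the kube-proxy addon sub-phase; all other phases run as usual.
sudo kubeadm init --skip-phases=addon/kube-proxy
```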
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
@ -249,7 +257,7 @@ To set a custom image for these you need to configure this in your
to use the image.
Consult the documentation for your container runtime to find out how to change this setting;
for selected container runtimes, you can also find advice within the
[Container Runtimes]((/docs/setup/production-environment/container-runtimes/) topic.
[Container Runtimes](/docs/setup/production-environment/container-runtimes/) topic.
### Uploading control-plane certificates to the cluster
@ -284,30 +292,35 @@ and certificate renewal.
### Managing the kubeadm drop-in file for the kubelet {#kubelet-drop-in}
The `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`. Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm DEB/RPM package.
The `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`.
Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm
DEB/RPM package.
For further information, see [Managing the kubeadm drop-in file for systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd).
For further information, see
[Managing the kubeadm drop-in file for systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd).
### Use kubeadm with CRI runtimes
By default kubeadm attempts to detect your container runtime. For more details on this detection, see
the [kubeadm CRI installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
By default kubeadm attempts to detect your container runtime. For more details on this detection,
see the [kubeadm CRI installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
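
If the automatic detection does not select the runtime you intend to use (for example, when several runtimes are installed on the host), you can point kubeadm at a specific CRI endpoint. A minimal sketch, assuming containerd's default socket path on Linux:

```shell
# Bypass runtime auto-detection and use containerd's default CRI socket.
sudo kubeadm init --cri-socket=unix:///run/containerd/containerd.sock
```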
### Setting the node name
By default, `kubeadm` assigns a node name based on a machine's host address. You can override this setting with the `--node-name` flag.
By default, `kubeadm` assigns a node name based on a machine's host address.
You can override this setting with the `--node-name` flag.
The flag passes the appropriate [`--hostname-override`](/docs/reference/command-line-tools-reference/kubelet/#options)
value to the kubelet.
Be aware that overriding the hostname can [interfere with cloud providers](https://github.com/kubernetes/website/pull/8873).
Be aware that overriding the hostname can
[interfere with cloud providers](https://github.com/kubernetes/website/pull/8873).
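
As an illustration, a hypothetical invocation that registers the node under an explicit name could look like this:

```shell
# Register this node as "control-plane-01" instead of using the host name;
# kubeadm passes the value to the kubelet as --hostname-override.
sudo kubeadm init --node-name=control-plane-01
```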
### Automating kubeadm
Rather than copying the token you obtained from `kubeadm init` to each node, as
in the [basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), you can parallelize the
token distribution for easier automation. To implement this automation, you must
know the IP address that the control-plane node will have after it is started,
or use a DNS name or an address of a load balancer.
in the [basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/),
you can parallelize the token distribution for easier automation. To implement this automation,
you must know the IP address that the control-plane node will have after it is started, or use a
DNS name or an address of a load balancer.
1. Generate a token. This token must have the form `<6 character string>.<16
character string>`. More formally, it must match the regex:
@ -341,7 +354,11 @@ provisioned). For details, see the [kubeadm join](/docs/reference/setup-tools/ku
## {{% heading "whatsnext" %}}
* [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) to understand more about
`kubeadm init` phases
* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes worker node and join it to the cluster
* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes cluster to a newer version
* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join`
`kubeadm init` phases
* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes
worker node and join it to the cluster
* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes
cluster to a newer version
* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made
to this host by `kubeadm init` or `kubeadm join`

View File

@ -39,7 +39,7 @@ The JSON and Protobuf serialization schemas follow the same guidelines for
schema changes. The following descriptions cover both formats.
The API versioning and software versioning are indirectly related.
The [API and release versioning proposal](https://git.k8s.io/design-proposals-archive/release/versioning.md)
The [API and release versioning proposal](https://git.k8s.io/sig-release/release-engineering/versioning.md)
describes the relationship between API versioning and software versioning.
Different API versions indicate different levels of stability and support. You

View File

@ -330,7 +330,7 @@ For example:
### Locate use of deprecated APIs
Use [client warnings, metrics, and audit information available in 1.19+](https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings)
Use [client warnings, metrics, and audit information available in 1.19+](/blog/2020/09/03/warnings/#deprecation-warnings)
to locate use of deprecated APIs.
### Migrate to non-deprecated APIs
@ -340,11 +340,11 @@ to locate use of deprecated APIs.
You can use the `kubectl-convert` command (`kubectl convert` prior to v1.20)
to automatically convert an existing object:
`kubectl-convert -f <file> --output-version <group>/<version>`.
For example, to convert an older Deployment to `apps/v1`, you can run:
`kubectl-convert -f ./my-deployment.yaml --output-version apps/v1`
Note that this may use non-ideal default values. To learn more about a specific

View File

@ -63,7 +63,7 @@ These labels can include
If your cluster spans multiple zones or regions, you can use node labels
in conjunction with
[Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
to control how Pods are spread across your cluster among fault domains:
regions, zones, and even specific nodes.
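
As a brief, illustrative sketch (the Pod name, labels, and image are placeholders), a Pod that should be kept balanced across zones could declare a constraint on the well-known `topology.kubernetes.io/zone` label:

```shell
# Illustrative only: ask the scheduler to keep Pods carrying the
# app=zone-spread-example label balanced across zones (maxSkew: 1),
# refusing to schedule when that would increase the imbalance.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: zone-spread-example
  labels:
    app: zone-spread-example
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: zone-spread-example
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
EOF
```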
These hints enable the

View File

@ -55,7 +55,7 @@ are influenced by the following issues:
Before building a Kubernetes production environment on your own, consider
handing off some or all of this job to
[Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/)
providers or other [Kubernetes Partners](https://kubernetes.io/partners/).
providers or other [Kubernetes Partners](/partners/).
Options include:
- *Serverless*: Just run workloads on third-party equipment without managing
@ -288,7 +288,7 @@ needs of your cluster's workloads:
- Decide if you want to build your own production Kubernetes or obtain one from
available [Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/)
or [Kubernetes Partners](https://kubernetes.io/partners/).
or [Kubernetes Partners](/partners/).
- If you choose to build your own cluster, plan how you want to
handle [certificates](/docs/setup/best-practices/certificates/)
and set up high availability for features such as

View File

@ -179,9 +179,9 @@ Follow the instructions for [getting started with containerd](https://github.com
{{% tab name="Linux" %}}
You can find this file under the path `/etc/containerd/config.toml`.
{{% /tab %}}
{{< tab name="Windows" >}}
{{% tab name="Windows" %}}
You can find this file under the path `C:\Program Files\containerd\config.toml`.
{{< /tab >}}
{{% /tab %}}
{{< /tabs >}}
On Linux the default CRI socket for containerd is `/run/containerd/containerd.sock`.
@ -217,7 +217,7 @@ When using kubeadm, manually configure the
#### Overriding the sandbox (pause) image {#override-pause-image-containerd}
In your [containerd config](https://github.com/containerd/cri/blob/master/docs/config.md) you can overwrite the
In your [containerd config](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) you can overwrite the
sandbox image by setting the following config:
```toml

View File

@ -11,7 +11,7 @@ weight: 30
<img src="/images/kubeadm-stacked-color.png" align="right" width="150px"></img>
Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices.
In fact, you can use `kubeadm` to set up a cluster that will pass the
[Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
[Kubernetes Conformance tests](/blog/2017/10/software-conformance-certification/).
`kubeadm` also supports other cluster lifecycle functions, such as
[bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
@ -76,8 +76,9 @@ Install a {{< glossary_tooltip term_id="container-runtime" text="container runti
For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
{{< note >}}
If you have already installed kubeadm, run `apt-get update &&
apt-get upgrade` or `yum update` to get the latest version of kubeadm.
If you have already installed kubeadm, run
`apt-get update && apt-get upgrade` or
`yum update` to get the latest version of kubeadm.
When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
kubeadm to tell it what to do. This crashloop is expected and normal.
@ -582,7 +583,7 @@ Example for `kubeadm upgrade`:
or {{< skew currentVersion >}}
To learn more about the version skew between the different Kubernetes component see
the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).
the [Version Skew Policy](/releases/version-skew-policy/).
## Limitations {#limitations}

View File

@ -8,19 +8,24 @@ weight: 30
This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides:
Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory), provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks.
* a highly available cluster
* composable attributes
* support for most popular Linux distributions
* Ubuntu 16.04, 18.04, 20.04
* CentOS/RHEL/Oracle Linux 7, 8
* Debian Buster, Jessie, Stretch, Wheezy
* Fedora 31, 32
* Fedora CoreOS
* openSUSE Leap 15
* Flatcar Container Linux by Kinvolk
* continuous integration tests
Kubespray provides:
* Highly available cluster.
* Composable (choice of the network plugin, for instance).
* Supports most popular Linux distributions:
- Flatcar Container Linux by Kinvolk
- Debian Bullseye, Buster, Jessie, Stretch
- Ubuntu 16.04, 18.04, 20.04, 22.04
- CentOS/RHEL 7, 8
- Fedora 34, 35
- Fedora CoreOS
- openSUSE Leap 15.x/Tumbleweed
- Oracle Linux 7, 8
- Alma Linux 8
- Rocky Linux 8
- Amazon Linux 2
* Continuous integration tests.
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).
@ -33,13 +38,13 @@ To choose a tool which best fits your use case, read [this comparison](https://g
Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):
* **Ansible v2.9 and python-netaddr are installed on the machine that will run Ansible commands**
* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**
* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* The target servers are configured to allow **IPv4 forwarding**
* **Your ssh key must be copied** to all the servers in your inventory
* **Firewalls are not managed by kubespray**. You'll need to implement appropriate rules as needed. You should disable your firewall in order to avoid any issues during deployment
* If kubespray is run from a non-root user account, correct privilege escalation method should be configured in the target servers and the `ansible_become` flag or command parameters `--become` or `-b` should be specified
* **Minimum required version of Kubernetes is v1.22**
* **Ansible v2.11+, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands**
* The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (see [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)).
* The target servers are configured to allow **IPv4 forwarding**.
* If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
* The **firewalls are not managed** by Kubespray. You'll need to implement your own rules as you normally would. In order to avoid any issues during deployment, you should disable your firewall.
* If kubespray is run from a non-root user account, a correct privilege escalation method should be configured on the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified.
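
The sketch below shows what a typical run can look like once those requirements are met; the inventory layout and file names are illustrative, so follow the Kubespray documentation for the authoritative steps:

```shell
# Illustrative flow only; adjust paths and host definitions for your setup.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt           # Ansible and other Python deps

# Start from the sample inventory and describe your own servers in it.
cp -r inventory/sample inventory/mycluster
# ...edit inventory/mycluster/hosts.yaml to list your nodes...

# Run the cluster playbook against all nodes, escalating privileges remotely.
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```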
Kubespray provides the following utilities to help provision your environment:
@ -110,11 +115,10 @@ When running the reset playbook, be sure not to accidentally target your product
## Feedback
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/))
* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues)
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)).
* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues).
## {{% heading "whatsnext" %}}
Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md).
* Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md).
* Learn more about [Kubespray](https://github.com/kubernetes-sigs/kubespray).

View File

@ -127,7 +127,7 @@ the shared Volume is lost.
## {{% heading "whatsnext" %}}
* Learn more about [patterns for composite containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns).
* Learn more about [patterns for composite containers](/blog/2015/06/the-distributed-system-toolkit-patterns/).
* Learn about [composite containers for modular architecture](https://www.slideshare.net/Docker/slideshare-burns).

View File

@ -226,7 +226,7 @@ mvn install
See [https://github.com/kubernetes-client/java/releases](https://github.com/kubernetes-client/java/releases) to see which versions are supported.
The Java client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java):
as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java):
```java
package io.kubernetes.client.examples;

View File

@ -4,124 +4,156 @@ content_type: task
weight: 20
---
<!-- overview -->
When using client certificate authentication, you can generate certificates
manually through `easyrsa`, `openssl` or `cfssl`.
<!-- body -->
### easyrsa
**easyrsa** can manually generate certificates for your cluster.
1. Download, unpack, and initialize the patched version of easyrsa3.
1. Download, unpack, and initialize the patched version of `easyrsa3`.
curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
tar xzf easy-rsa.tar.gz
cd easy-rsa-master/easyrsa3
./easyrsa init-pki
1. Generate a new certificate authority (CA). `--batch` sets automatic mode;
`--req-cn` specifies the Common Name (CN) for the CA's new root certificate.
```shell
curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
tar xzf easy-rsa.tar.gz
cd easy-rsa-master/easyrsa3
./easyrsa init-pki
```
1. Generate a new certificate authority (CA). `--batch` sets automatic mode;
`--req-cn` specifies the Common Name (CN) for the CA's new root certificate.
./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
1. Generate server certificate and key.
The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will
be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR
that is specified as the `--service-cluster-ip-range` argument for both the API server and
the controller manager component. The argument `--days` is used to set the number of days
after which the certificate expires.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
```shell
./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass
```
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
"IP:${MASTER_CLUSTER_IP},"\
"DNS:kubernetes,"\
"DNS:kubernetes.default,"\
"DNS:kubernetes.default.svc,"\
"DNS:kubernetes.default.svc.cluster,"\
"DNS:kubernetes.default.svc.cluster.local" \
--days=10000 \
build-server-full server nopass
1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory.
1. Fill in and add the following parameters into the API server start parameters:
1. Generate server certificate and key.
--client-ca-file=/yourdirectory/ca.crt
--tls-cert-file=/yourdirectory/server.crt
--tls-private-key-file=/yourdirectory/server.key
The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will
be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR
that is specified as the `--service-cluster-ip-range` argument for both the API server and
the controller manager component. The argument `--days` is used to set the number of days
after which the certificate expires.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
```shell
./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
"IP:${MASTER_CLUSTER_IP},"\
"DNS:kubernetes,"\
"DNS:kubernetes.default,"\
"DNS:kubernetes.default.svc,"\
"DNS:kubernetes.default.svc.cluster,"\
"DNS:kubernetes.default.svc.cluster.local" \
--days=10000 \
build-server-full server nopass
```
1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory.
1. Fill in and add the following parameters into the API server start parameters:
```shell
--client-ca-file=/yourdirectory/ca.crt
--tls-cert-file=/yourdirectory/server.crt
--tls-private-key-file=/yourdirectory/server.key
```
### openssl
**openssl** can manually generate certificates for your cluster.
1. Generate a ca.key with 2048bit:
1. Generate a ca.key with 2048bit:
openssl genrsa -out ca.key 2048
1. According to the ca.key generate a ca.crt (use -days to set the certificate effective time):
```shell
openssl genrsa -out ca.key 2048
```
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
1. Generate a server.key with 2048bit:
1. Based on the ca.key, generate a ca.crt (use `-days` to set the certificate validity period):
openssl genrsa -out server.key 2048
1. Create a config file for generating a Certificate Signing Request (CSR).
Be sure to substitute the values marked with angle brackets (e.g. `<MASTER_IP>`)
with real values before saving this to a file (e.g. `csr.conf`).
Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the
API server as described in previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
```shell
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
```
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
1. Generate a server.key with 2048bit:
[ dn ]
C = <country>
ST = <state>
L = <city>
O = <organization>
OU = <organization unit>
CN = <MASTER_IP>
```shell
openssl genrsa -out server.key 2048
```
[ req_ext ]
subjectAltName = @alt_names
1. Create a config file for generating a Certificate Signing Request (CSR).
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = <MASTER_IP>
IP.2 = <MASTER_CLUSTER_IP>
Be sure to substitute the values marked with angle brackets (e.g. `<MASTER_IP>`)
with real values before saving this to a file (e.g. `csr.conf`).
Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the
API server as described in previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
1. Generate the certificate signing request based on the config file:
```ini
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
openssl req -new -key server.key -out server.csr -config csr.conf
1. Generate the server certificate using the ca.key, ca.crt and server.csr:
[ dn ]
C = <country>
ST = <state>
L = <city>
O = <organization>
OU = <organization unit>
CN = <MASTER_IP>
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 10000 \
-extensions v3_ext -extfile csr.conf
1. View the certificate signing request:
[ req_ext ]
subjectAltName = @alt_names
openssl req -noout -text -in ./server.csr
1. View the certificate:
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = <MASTER_IP>
IP.2 = <MASTER_CLUSTER_IP>
openssl x509 -noout -text -in ./server.crt
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
```
1. Generate the certificate signing request based on the config file:
```shell
openssl req -new -key server.key -out server.csr -config csr.conf
```
1. Generate the server certificate using the ca.key, ca.crt and server.csr:
```shell
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
-CAcreateserial -out server.crt -days 10000 \
-extensions v3_ext -extfile csr.conf
```
1. View the certificate signing request:
```shell
openssl req -noout -text -in ./server.csr
```
1. View the certificate:
```shell
openssl x509 -noout -text -in ./server.crt
```
Finally, add the same parameters into the API server start parameters.
@ -129,101 +161,121 @@ Finally, add the same parameters into the API server start parameters.
**cfssl** is another tool for certificate generation.
1. Download, unpack and prepare the command line tools as shown below.
Note that you may need to adapt the sample commands based on the hardware
architecture and cfssl version you are using.
1. Download, unpack and prepare the command line tools as shown below.
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
chmod +x cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
chmod +x cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
chmod +x cfssl-certinfo
1. Create a directory to hold the artifacts and initialize cfssl:
Note that you may need to adapt the sample commands based on the hardware
architecture and cfssl version you are using.
mkdir cert
cd cert
../cfssl print-defaults config > config.json
../cfssl print-defaults csr > csr.json
1. Create a JSON config file for generating the CA file, for example, `ca-config.json`:
```shell
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl
chmod +x cfssl
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson
chmod +x cfssljson
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo
chmod +x cfssl-certinfo
```
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
1. Create a JSON config file for CA certificate signing request (CSR), for example,
`ca-csr.json`. Be sure to replace the values marked with angle brackets with
real values you want to use.
1. Create a directory to hold the artifacts and initialize cfssl:
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names":[{
"C": "<country>",
"ST": "<state>",
"L": "<city>",
"O": "<organization>",
"OU": "<organization unit>"
}]
}
1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`):
```shell
mkdir cert
cd cert
../cfssl print-defaults config > config.json
../cfssl print-defaults csr > csr.json
```
../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca
1. Create a JSON config file for generating keys and certificates for the API
server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with
real values you want to use. The `MASTER_CLUSTER_IP` is the service cluster
IP for the API server as described in previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
1. Create a JSON config file for generating the CA file, for example, `ca-config.json`:
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"<MASTER_IP>",
"<MASTER_CLUSTER_IP>",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "<country>",
"ST": "<state>",
"L": "<city>",
"O": "<organization>",
"OU": "<organization unit>"
}]
}
1. Generate the key and certificate for the API server, which are by default
saved into file `server-key.pem` and `server.pem` respectively:
```json
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
```
../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
1. Create a JSON config file for CA certificate signing request (CSR), for example,
`ca-csr.json`. Be sure to replace the values marked with angle brackets with
real values you want to use.
```json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names":[{
"C": "<country>",
"ST": "<state>",
"L": "<city>",
"O": "<organization>",
"OU": "<organization unit>"
}]
}
```
1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`):
```shell
../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca
```
1. Create a JSON config file for generating keys and certificates for the API
server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with
real values you want to use. The `<MASTER_CLUSTER_IP>` is the service cluster
IP for the API server as described in previous subsection.
The sample below also assumes that you are using `cluster.local` as the default
DNS domain name.
```json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"<MASTER_IP>",
"<MASTER_CLUSTER_IP>",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [{
"C": "<country>",
"ST": "<state>",
"L": "<city>",
"O": "<organization>",
"OU": "<organization unit>"
}]
}
```
1. Generate the key and certificate for the API server, which are by default
saved into file `server-key.pem` and `server.pem` respectively:
```shell
../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
--config=ca-config.json -profile=kubernetes \
server-csr.json | ../cfssljson -bare server
```
## Distributing Self-Signed CA Certificate
@ -234,12 +286,12 @@ refresh the local list for valid certificates.
On each client, perform the following operations:
```bash
```shell
sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt
sudo update-ca-certificates
```
```
```none
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
@ -250,6 +302,6 @@ done.
You can use the `certificates.k8s.io` API to provision
x509 certificates to use for authentication as documented
[here](/docs/tasks/tls/managing-tls-in-a-cluster).
in the [Managing TLS in a cluster](/docs/tasks/tls/managing-tls-in-a-cluster)
task page.
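
A rough sketch of that flow (the user name is a placeholder, and `openssl` is assumed to be available) could look like the following; the CSR is generated locally, submitted as a `CertificateSigningRequest` object, and then approved by an administrator:

```shell
# Sketch only: request a client certificate for a hypothetical "example-user".
openssl genrsa -out example-user.key 2048
openssl req -new -key example-user.key -subj "/CN=example-user" -out example-user.csr

# Submit the CSR through the certificates.k8s.io API.
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-user
spec:
  request: $(base64 < example-user.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

# An administrator approves the request; the issued certificate can then be
# read back from the object's status.
kubectl certificate approve example-user
kubectl get csr example-user -o jsonpath='{.status.certificate}' | base64 --decode > example-user.crt
```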

View File

@ -15,7 +15,7 @@ content_type: task
## Background
As part of the [cloud provider extraction effort](https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/), all cloud specific controllers must be moved out of the `kube-controller-manager`. All existing clusters that run cloud controllers in the `kube-controller-manager` must migrate to instead run the controllers in a cloud provider specific `cloud-controller-manager`.
As part of the [cloud provider extraction effort](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/), all cloud specific controllers must be moved out of the `kube-controller-manager`. All existing clusters that run cloud controllers in the `kube-controller-manager` must migrate to instead run the controllers in a cloud provider specific `cloud-controller-manager`.
Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud specific" controllers between the `kube-controller-manager` and the `cloud-controller-manager` via a shared resource lock between the two components while upgrading the replicated control plane. For a single-node control plane, or if unavailability of controller managers can be tolerated during the upgrade, Leader Migration is not needed and this guide can be ignored.

View File

@ -11,7 +11,7 @@ weight: 20
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
{{< skew currentVersionAddMinor -1 >}}.x to version {{< skew currentVersion >}}.x, and from version
{{< skew currentVersion >}}.x to {{< skew currentVersion >}}.y (where `y > x`). Skipping MINOR versions
when upgrading is unsupported. For more details, please visit [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).
when upgrading is unsupported. For more details, please visit [Version Skew Policy](/releases/version-skew-policy/).
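
At a very high level (the placeholder version must be replaced with the exact release you are upgrading to, and the full procedure on this page contains more steps), the upgrade of the first control plane node revolves around two commands:

```shell
# Show the available upgrade targets, then apply the chosen version.
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v{{< skew currentVersion >}}.x
```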
To see information about upgrading clusters created using older versions of kubeadm,
please refer to following pages instead:

View File

@ -27,12 +27,12 @@ The configuration file must be a JSON or YAML representation of the parameters
in this struct. Make sure the Kubelet has read permissions on the file.
Here is an example of what this file might look like:
```
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: "192.168.0.8",
port: 20250,
serializeImagePulls: false,
address: "192.168.0.8"
port: 20250
serializeImagePulls: false
evictionHard:
memory.available: "200Mi"
```
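
Assuming you saved the file as `/var/lib/kubelet/config.yaml` (the path is only an example), you would then point the kubelet at it with the `--config` flag:

```shell
# Start the kubelet using the parameters from the configuration file above.
kubelet --config=/var/lib/kubelet/config.yaml
```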

View File

@ -41,13 +41,12 @@ See [Running kind with Rootless Docker](https://kind.sigs.k8s.io/docs/user/rootl
### minikube
[minikube](https://minikube.sigs.k8s.io/) also supports running Kubernetes inside Rootless Docker.
[minikube](https://minikube.sigs.k8s.io/) also supports running Kubernetes inside Rootless Docker or Rootless Podman.
See the page about the [docker](https://minikube.sigs.k8s.io/docs/drivers/docker/) driver in the Minikube documentation.
See the Minikube documentation:
Rootless Podman is not supported.
<!-- Supporting rootless podman is discussed in https://github.com/kubernetes/minikube/issues/8719 -->
* [Rootless Docker](https://minikube.sigs.k8s.io/docs/drivers/docker/)
* [Rootless Podman](https://minikube.sigs.k8s.io/docs/drivers/podman/)
## Running Kubernetes inside Unprivileged Containers

View File

@ -5,8 +5,8 @@ content_type: task
---
This task outlines the steps needed to update your container runtime to containerd from Docker. It
is applicable for cluster operators running Kubernetes 1.23 or earlier. Also this covers an
example scenario for migrating from dockershim to containerd and alternative container runtimes
is applicable for cluster operators running Kubernetes 1.23 or earlier. This also covers an
example scenario for migrating from dockershim to containerd. Alternative container runtimes
can be picked from this [page](/docs/setup/production-environment/container-runtimes/).
## {{% heading "prerequisites" %}}
@ -100,7 +100,7 @@ then run the following commands:
Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtime to the flags.
`--container-runtime=remote` and
`--container-runtime-endpoint=unix:///run/containerd/containerd.sock"`.
`--container-runtime-endpoint=unix:///run/containerd/containerd.sock`.
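
A sketch of what the edited line in that file could end up looking like is shown below; keep any flags that are already present on your node and only append the two runtime flags:

```shell
# /var/lib/kubelet/kubeadm-flags.env (illustrative contents)
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

After editing the file, restart the kubelet so it picks up the new flags.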
Users using kubeadm should be aware that the `kubeadm` tool stores the CRI socket for each host as
an annotation in the Node object for that host. To change it you can execute the following command

View File

@ -3,7 +3,7 @@ reviewers:
- bowei
- zihongz
- sftim
title: Using NodeLocal DNSCache in Kubernetes clusters
title: Using NodeLocal DNSCache in Kubernetes Clusters
content_type: task
---
@ -40,7 +40,7 @@ hostnames ("`cluster.local`" suffix by default).
[conntrack races](https://github.com/kubernetes/kubernetes/issues/56903)
and avoid UDP DNS entries filling up conntrack table.
* Connections from local caching agent to kube-dns service can be upgraded to TCP.
* Connections from the local caching agent to kube-dns service can be upgraded to TCP.
TCP conntrack entries will be removed on connection close in contrast with
UDP entries that have to timeout
([default](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt)
@ -52,7 +52,7 @@ hostnames ("`cluster.local`" suffix by default).
* Metrics & visibility into DNS requests at a node level.
* Negative caching can be re-enabled, thereby reducing number of queries to kube-dns service.
* Negative caching can be re-enabled, thereby reducing the number of queries for the kube-dns service.
## Architecture Diagram
@ -66,7 +66,7 @@ This is the path followed by DNS Queries after NodeLocal DNSCache is enabled:
{{< note >}}
The local listen IP address for NodeLocal DNSCache can be any address that
can be guaranteed to not collide with any existing IP in your cluster.
It's recommended to use an address with a local scope, per example,
It's recommended to use an address with a local scope, for example,
from the 'link-local' range '169.254.0.0/16' for IPv4 or from the
'Unique Local Address' range in IPv6 'fd00::/8'.
{{< /note >}}
@ -77,9 +77,9 @@ This feature can be enabled using the following steps:
[`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml)
and save it as `nodelocaldns.yaml`.
* If using IPv6, the CoreDNS configuration file need to enclose all the IPv6 addresses
* If using IPv6, the CoreDNS configuration file needs to enclose all the IPv6 addresses
in square brackets if used in 'IP:Port' format.
If you are using the sample manifest from the previous point, this will require to modify
If you are using the sample manifest from the previous point, this will require you to modify
[the configuration line L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70)
like this: "`health [__PILLAR__LOCAL__DNS__]:8080`"
@ -103,7 +103,7 @@ This feature can be enabled using the following steps:
`__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` will be populated by
the `node-local-dns` pods.
In this mode, the `node-local-dns` pods listen on both the kube-dns service IP
as well as `<node-local-address>`, so pods can lookup DNS records using either IP address.
as well as `<node-local-address>`, so pods can look up DNS records using either IP address.
* If kube-proxy is running in IPVS mode:

View File

@ -68,5 +68,5 @@ e.g. [conformance image](https://github.com/kubernetes/kubernetes/blob/master/te
admission controller. To get started with `cosigned` here are a few helpful
resources:
* [Installation](https://github.com/sigstore/helm-charts/tree/main/charts/cosigned)
* [Configuration Options](https://github.com/sigstore/cosign/tree/main/config)
* [Installation](https://github.com/sigstore/cosign#installation)
* [Configuration Options](https://github.com/sigstore/cosign/blob/main/USAGE.md#detailed-usage)

View File

@ -13,65 +13,70 @@ description: Creating Secret objects using resource configuration file.
<!-- steps -->
## Create the Config file
## Create the Secret {#create-the-config-file}
You can create a Secret in a file first, in JSON or YAML format, and then
create that object. The
You can define the `Secret` object in a manifest first, in JSON or YAML format,
and then create that object. The
[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
resource contains two maps: `data` and `stringData`.
The `data` field is used to store arbitrary data, encoded using base64. The
`stringData` field is provided for convenience, and it allows you to provide
Secret data as unencoded strings.
the same data as unencoded strings.
The keys of `data` and `stringData` must consist of alphanumeric characters,
`-`, `_` or `.`.
For example, to store two strings in a Secret using the `data` field, convert
the strings to base64 as follows:
The following example stores two strings in a Secret using the `data` field.
```shell
echo -n 'admin' | base64
```
1. Convert the strings to base64:
The output is similar to:
```shell
echo -n 'admin' | base64
echo -n '1f2d1e2e67df' | base64
```
```
YWRtaW4=
```
{{< note >}}
The serialized JSON and YAML values of Secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/macOS, users should avoid using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` option is not available.
{{< /note >}}
```shell
echo -n '1f2d1e2e67df' | base64
```
The output is similar to:
The output is similar to:
```
YWRtaW4=
MWYyZDFlMmU2N2Rm
```
```
MWYyZDFlMmU2N2Rm
```
1. Create the manifest:
Write a Secret config file that looks like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
```
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
```
Note that the name of a Secret object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
Note that the name of a Secret object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
1. Create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply):
{{< note >}}
The serialized JSON and YAML values of Secret data are encoded as base64
strings. Newlines are not valid within these strings and must be omitted. When
using the `base64` utility on Darwin/macOS, users should avoid using the `-b`
option to split long lines. Conversely, Linux users *should* add the option
`-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w`
option is not available.
{{< /note >}}
```shell
kubectl apply -f ./secret.yaml
```
The output is similar to:
```
secret/mysecret created
```
To verify that the Secret was created and to decode the Secret data, refer to
[Managing Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret).
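
As a quick check (the output below is representative), you can also list the Secret directly:

```shell
kubectl get secret mysecret
```

The output is similar to:

```
NAME       TYPE     DATA   AGE
mysecret   Opaque   2      10s
```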
### Specify unencoded data when creating a Secret
For certain scenarios, you may wish to use the `stringData` field instead. This
field allows you to put a non-base64 encoded string directly into the Secret,
@ -103,25 +108,10 @@ stringData:
username: <user>
password: <password>
```
When you retrieve the Secret data, the command returns the encoded values,
and not the plaintext values you provided in `stringData`.
## Create the Secret object
Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply):
```shell
kubectl apply -f ./secret.yaml
```
The output is similar to:
```
secret/mysecret created
```
## Check the Secret
The `stringData` field is a write-only convenience field. It is never output when
retrieving Secrets. For example, if you run the following command:
For example, if you run the following command:
```shell
kubectl get secret mysecret -o yaml
@ -143,14 +133,11 @@ metadata:
type: Opaque
```
The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by
default. This is to protect the `Secret` from being exposed accidentally to an onlooker,
or from being stored in a terminal log.
To check the actual content of the encoded data, please refer to
[decoding secret](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret).
### Specifying both `data` and `stringData`
If a field, such as `username`, is specified in both `data` and `stringData`,
the value from `stringData` is used. For example, the following Secret definition:
If you specify a field in both `data` and `stringData`, the value from `stringData` is used.
For example, if you define the following Secret:
```yaml
apiVersion: v1
@ -164,7 +151,7 @@ stringData:
username: administrator
```
Results in the following Secret:
The `Secret` object is created as follows:
```yaml
apiVersion: v1
@ -180,7 +167,7 @@ metadata:
type: Opaque
```
Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
`YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
## Clean Up

View File

@ -113,6 +113,8 @@ The commands `kubectl get` and `kubectl describe` avoid showing the contents
of a `Secret` by default. This is to protect the `Secret` from being exposed
accidentally, or from being stored in a terminal log.
To check the actual content of the encoded data, refer to [Decoding the Secret](#decoding-secret).
## Decoding the Secret {#decoding-secret}
To view the contents of the Secret you created, run the following command:

Some files were not shown because too many files have changed in this diff.