Merge branch 'kubernetes:main' into add-csimigration-instruction
commit adabfd12c0

OWNERS
@@ -13,6 +13,7 @@ emeritus_approvers:
# - jaredbhatti, commented out to disable PR assignments
# - jimangel, commented out to disable PR assignments
# - kbarnard10, commented out to disable PR assignments
# - kbhawkey, commented out to disable PR assignments
# - steveperry-53, commented out to disable PR assignments
- stewart-yu
# - zacharysarah, commented out to disable PR assignments
@@ -5,6 +5,7 @@ aliases:
- onlydole
- sftim
sig-docs-blog-reviewers: # Reviewers for blog content
- Gauravpadam
- mrbobbytables
- nate-double-u
- onlydole
@@ -12,8 +13,6 @@ aliases:
sig-docs-localization-owners: # Admins for localization content
- a-mccarthy
- divya-mohan0209
- jimangel
- kbhawkey
- natalisucks
- onlydole
- reylejano
@@ -22,18 +21,14 @@ aliases:
- tengqm
sig-docs-de-owners: # Admins for German content
- bene2k1
- mkorbi
- rlenferink
sig-docs-de-reviews: # PR reviews for German content
- bene2k1
- mkorbi
- rlenferink
sig-docs-en-owners: # Admins for English content
- annajung
- bradtopol
- divya-mohan0209
- katcosgrove # RT 1.29 Docs Lead
- kbhawkey
- katcosgrove # RT 1.30 Lead
- drewhagen # RT 1.30 Docs Lead
- natalisucks
- nate-double-u
- onlydole
@@ -112,18 +107,15 @@ aliases:
- atoato88
- bells17
- kakts
- ptux
- t-inu
sig-docs-ko-owners: # Admins for Korean content
- gochist
- ianychoi
- jihoon-seo
- seokho-son
- yoonian
- ysyukr
sig-docs-ko-reviews: # PR reviews for Korean content
- gochist
- ianychoi
- jihoon-seo
- jmyung
- jongwooo
@@ -132,7 +124,6 @@ aliases:
- ysyukr
sig-docs-leads: # Website chairs and tech leads
- divya-mohan0209
- kbhawkey
- natalisucks
- onlydole
- reylejano
@@ -153,7 +144,6 @@ aliases:
sig-docs-zh-reviews: # PR reviews for Chinese content
- asa3311
- chenrui333
- chenxuc
- howieyuen
# idealhack
- kinzhi
@@ -208,7 +198,6 @@ aliases:
- Arhell
- idvoretskyi
- MaxymVlasov
- Potapy4
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- bentheelder
@@ -3,7 +3,7 @@

[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)

Welcome! This repository contains all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We are very glad that you want to contribute!

## Contributing to the docs

@@ -37,8 +37,6 @@

> If you are on Windows, you will need a few more tools, which you can install with [Chocolatey](https://chocolatey.org).

> If you prefer to run the website locally without Docker, see running the site locally with Hugo below.

If you prefer to run the website locally without Docker, see [how to run the site locally](#hugo-का-उपयोग-करते-हुए-स्थानीय-रूप-से-साइट-चलाना) with Hugo below.

If you are running [Docker](https://www.docker.com/get-started), build the `कुबेरनेट्स-ह्यूगो` Docker image locally:
README-pl.md
@@ -9,7 +9,7 @@ W tym repozytorium znajdziesz wszystko, czego potrzebujesz do zbudowania [strony

## How to use this repository

You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, because it keeps your local version consistent with the official website.
You can run the website locally using [Hugo (Extended version)](https://gohugo.io/), or you can run it in a container runtime. We strongly recommend using the container runtime, because it keeps your local version consistent with the official website.

## Prerequisites

@@ -29,17 +29,24 @@ cd website

The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodules and other dependencies with the following command:

### Windows
```powershell
# update the submodules
git submodule update --init --recursive --depth 1
```

### Linux / other Unix
```bash
# update the submodules
make module-init
```

## Running the website using a container

To build and run the website inside a container, run the following commands:

```bash
make container-image
# You can set $CONTAINER_ENGINE to point at any Docker-like container tool
make container-serve
```

@@ -53,11 +60,16 @@ Upewnij się, że zainstalowałeś odpowiednią wersję Hugo "extended", określ

To build and test the website locally, run:

```bash
# install dependencies
npm ci
make serve
```
- macOS and Linux
  ```bash
  npm ci
  make serve
  ```
- Windows (PowerShell)
  ```powershell
  npm ci
  hugo.exe server --buildFuture --environment development
  ```

This starts the local Hugo server on port 1313. Open your browser to <http://localhost:1313> to view the website. As you save changes to the source files, Hugo updates the website and forces a browser refresh.
@@ -178,6 +178,7 @@ For more information about contributing to the Kubernetes documentation, see:

- [Page Content Types](https://kubernetes.io/docs/contribute/style/page-content-types/)
- [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/)
- [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)
- [Introduction to Kubernetes Docs](https://www.youtube.com/watch?v=pprMgmNzDcw)

### New contributor ambassadors
@@ -1003,3 +1003,32 @@ div.alert > em.javascript-required {
    margin: 0.25em;
  }
}

// Adjust search-bar search icon
.search-bar {
  display: flex;
  align-items: center;
  background-color: #fff;
  border: 1px solid #4c4c4c;
  border-radius: 20px;
  vertical-align: middle;
  flex-grow: 1;
  overflow-x: hidden;
  width: auto;
}

.search-bar:focus-within {
  border: 2.5px solid rgba(47, 135, 223, 0.7);
}

.search-bar i.search-icon {
  padding: .5em .5em .5em .75em;
  opacity: .75;
}

.search-input {
  flex: 1;
  border: none;
  outline: none;
  padding: .5em 0 .5em 0;
}
@@ -43,12 +43,12 @@ Kubernetes ist Open Source und bietet Dir die Freiheit, die Infrastruktur vor Or

<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 6-9, 2023</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 12-15, 2024</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@@ -0,0 +1,105 @@

---
title: About cgroup v2
content_type: concept
weight: 50
---

<!-- overview -->

On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}} constrain the resources that are allocated to processes.

The {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the underlying container runtime need to interact with cgroups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/). This includes CPU/memory requests and limits for containerized workloads.

There are two versions of cgroups in Linux: cgroup v1 and cgroup v2. cgroup v2 is the new generation of the `cgroup` API.

<!-- body -->

## What is cgroup v2? {#cgroup-v2}
{{< feature-state for_k8s_version="v1.25" state="stable" >}}

cgroup v2 is the next version of the Linux `cgroup` API. cgroup v2 provides a unified control system with enhanced resource management capabilities.

cgroup v2 offers several improvements over cgroup v1, such as the following:

- Single unified hierarchy design in the API
- Safer sub-tree delegation to containers
- Newer features such as [Pressure Stall Information](https://www.kernel.org/doc/html/latest/accounting/psi.html)
- Enhanced resource allocation management and isolation across multiple resources
- Unified accounting for different types of memory allocations (network memory, kernel memory, etc.)
- Accounting for non-immediate resource changes such as page cache write backs

Some Kubernetes features rely exclusively on cgroup v2 for enhanced resource management and isolation. For example, the [MemoryQoS](/blog/2021/11/26/qos-memory-resources/) feature improves memory QoS and relies on cgroup v2 primitives.

## Using cgroup v2 {#cgroupv2-verwenden}

The recommended way to use cgroup v2 is to use a Linux distribution that enables and uses cgroup v2 by default.

To check whether your distribution uses cgroup v2, see [Identify the cgroup version on Linux nodes](#cgroup-version-identifizieren).

### Requirements {#Voraussetzungen}

cgroup v2 has the following requirements:

* The OS distribution enables cgroup v2
* The Linux kernel version is 5.8 or later
* The container runtime supports cgroup v2. For example:
  * [containerd](https://containerd.io/) v1.4 and later
  * [cri-o](https://cri-o.io/) v1.20 and later
* The kubelet and the container runtime are configured to use the [systemd cgroup driver](/docs/setup/production-environment/container-runtimes#systemd-cgroup-driver)

### Linux distribution cgroup v2 support

For a list of Linux distributions that use cgroup v2, see the [cgroup v2 documentation](https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md)

<!-- the list should be kept in sync with https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md -->
* Container Optimized OS (since M97)
* Ubuntu (since 21.10, 22.04+ recommended)
* Debian GNU/Linux (since Debian 11 bullseye)
* Fedora (since 31)
* Arch Linux (since April 2021)
* RHEL and RHEL-like distributions (since 9)

To check whether your distribution uses cgroup v2, refer to your distribution's documentation or follow the instructions in [Identify the cgroup version on Linux nodes](#cgroup-version-identifizieren).

You can also enable cgroup v2 manually by adjusting the kernel boot arguments. If your distribution uses GRUB, add `systemd.unified_cgroup_hierarchy=1` to `GRUB_CMDLINE_LINUX` in `/etc/default/grub` and then run `sudo update-grub`. However, the recommended approach is to use a distribution that already enables cgroup v2 by default.
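
As a minimal sketch of that manual switch (assuming a GRUB-based distribution and the file paths named above; review `/etc/default/grub` before editing it):

```shell
# Append the cgroup v2 kernel parameter to the GRUB kernel command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub

# Regenerate the GRUB configuration and reboot for the change to take effect
sudo update-grub
sudo reboot
```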

### Migrating to cgroup v2 {#cgroupv2-migrieren}

To migrate to cgroup v2, first make sure that the [requirements](#Voraussetzungen) are met, then upgrade to a kernel version that enables cgroup v2 by default.

The kubelet automatically detects that the OS is running on cgroup v2 and behaves accordingly, with no additional configuration required.

After switching to cgroup v2, there should be no noticeable difference in the user experience, unless users access the cgroup filesystem directly, either on the node or from within the containers.

cgroup v2 uses a different API than cgroup v1, so if any applications access the cgroup filesystem directly, they need to be updated to support cgroup v2. For example:

* Some third-party monitoring and security agents may depend on the cgroup filesystem.
  Update these agents to versions that support cgroup v2.
* If you run [cAdvisor](https://github.com/google/cadvisor) as a standalone DaemonSet for monitoring pods and containers, update it to v0.43.0 or later.
* If you deploy Java applications, prefer versions that fully support cgroup v2:
  * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later
  * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later
  * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later
* If you use the [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs) package, make sure you use v1.5.1 or higher.

## Identify the cgroup version on Linux nodes {#cgroup-version-identifizieren}

The cgroup version depends on the Linux distribution being used and the default cgroup version configured on the OS. To check which cgroup version your distribution uses, run the `stat -fc %T /sys/fs/cgroup/` command on the node:

```shell
stat -fc %T /sys/fs/cgroup/
```

For cgroup v2, the output is `cgroup2fs`.

For cgroup v1, the output is `tmpfs`.

## {{% heading "whatsnext" %}}

- Learn more about [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html)
- Learn more about the [container runtime](/docs/concepts/architecture/cri)
- Learn more about [cgroup drivers](/docs/setup/production-environment/container-runtimes#cgroup-drivers)
@@ -1,12 +1,12 @@

---
title: Master-Node Communication
title: Control-Plane-Node Communication
content_type: concept
weight: 20
---

<!-- overview -->

This document catalogs the communication paths between the master (really the apiserver) and the Kubernetes cluster.
This document catalogs the communication paths between the control plane (really the apiserver) and the Kubernetes cluster.
The intent is to allow users to customize their installation so that the network configuration can be hardened, allowing the cluster to run on an untrusted network (or on fully public IP addresses from a cloud provider).

@@ -14,28 +14,28 @@ Die Absicht besteht darin, Benutzern die Möglichkeit zu geben, ihre Installatio

<!-- body -->

## Cluster to master
## Cluster to control plane

All communication paths from the cluster to the master terminate at the apiserver (none of the other master components are designed to expose remote services).
All communication paths from the cluster to the control plane terminate at the apiserver (none of the other control plane components are designed to expose remote services).
In a typical setup, the apiserver is configured to listen for remote connections on a secure HTTPS port (443) with one or more forms of [client authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed.

Nodes should be provisioned with the public root certificate for the cluster so that they can connect securely to the apiserver with valid client credentials.
For example, in a default GKE deployment, the client credentials provided to the kubelet are in the form of a client certificate.
See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.

Pods that wish to connect to the apiserver can do so securely by using a service account, so that Kubernetes automatically injects the public root certificate and a valid bearer token into the pod when it is instantiated.
The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.

The master components also communicate with the cluster apiserver over the secure port.
The control plane components also communicate with the cluster apiserver over the secure port.

As a result, the default operating mode for connections from the cluster (nodes, and pods running on the nodes) to the master is secured by default and can run over untrusted and/or public networks.
As a result, the default operating mode for connections from the cluster (nodes, and pods running on the nodes) to the control plane is secured by default and can run over untrusted and/or public networks.

## Master to cluster
## Control plane to cluster

There are two primary communication paths from the master (the apiserver) to the cluster.
There are two primary communication paths from the control plane (the apiserver) to the cluster.
The first is from the apiserver to the kubelet process that runs on each node in the cluster.
The second is from the apiserver to any node, pod, or service through the apiserver's proxy functionality.

### Apiserver to kubelet

@@ -55,16 +55,16 @@ zwischen dem Apiserver und dem kubelet, falls es erforderlich ist eine Verbindun

In addition, [kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/) should be enabled to secure the kubelet API.

### Apiserver to nodes, pods, and services

The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted.
They can be run over a secure HTTPS connection by prefixing the node, pod, or service name in the API URL with "https:", but the certificate provided by the HTTPS endpoint is not validated and no client credentials are supplied. The connection is encrypted, but it provides no guarantee of integrity.
These connections **are currently not safe** to run over untrusted and/or public networks.

### SSH tunnels

Kubernetes supports SSH tunnels to protect the master -> cluster communication paths.
Kubernetes supports SSH tunnels to protect the control plane -> cluster communication paths.
In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the SSH server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the network in which the nodes are running.

SSH tunnels are currently not supported, so you should not use them unless you know what you are doing. A replacement for this communication channel is being developed.
@@ -0,0 +1,103 @@

---
title: Controller
content_type: concept
weight: 30
---

<!-- overview -->

In robotics and automation, a _control loop_ is a non-terminating loop that regulates the state of a system.

Here is one example of a control loop: a thermostat in a room.

When you set the temperature, you are telling the thermostat what the *desired state* is. The actual room temperature is the *current state*. The thermostat acts to bring the current state closer to the desired state by turning equipment on or off.

{{< glossary_definition text="Controller" term_id="controller" length="short">}}

<!-- body -->

## Controller pattern

A controller tracks at least one Kubernetes resource type.
These {{< glossary_tooltip text="objects" term_id="object" >}}
have a spec field that represents the desired state. The controller(s) for that resource are responsible for making the current state come closer to that desired state.

The controller might carry the action out itself; more commonly, the controller sends messages to the {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} that have useful side effects. You'll see examples of this below.

{{< comment >}}
Some built-in controllers, such as the namespace controller, act on objects that do not have a spec. For simplicity, this page leaves that detail out.
{{< /comment >}}

### Control via API server

The {{< glossary_tooltip text="Job" term_id="job" >}} controller is an example of a Kubernetes built-in controller. Built-in controllers manage state by interacting with the cluster API server.

A Job is a Kubernetes resource that creates a {{< glossary_tooltip text="Pod" term_id="pod" >}}, or perhaps several Pods, to carry out a task and then stop.

(Once [scheduled](/docs/concepts/scheduling-eviction/), Pod objects become part of the desired state for a kubelet.)

When the Job controller sees a new task, it makes sure that, somewhere in your cluster, the kubelets on a set of nodes are running the right number of Pods to get the work done. The Job controller does not run any Pods or containers itself. Instead, the Job controller tells the API server to create or remove Pods.
Other components in the {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}} act on the new information (there are new Pods to schedule and run), and eventually the work is done.

After you create a new Job, the desired state is for that Job to be completed. The Job controller makes the current state come closer to that desired state: Pods that do the work are created, so that the Job moves closer to completion.

Controllers also update the objects that configure them. For example: once the work for a Job is done, the Job controller updates the Job object to mark it as `Finished`.

(This is a bit like how some thermostats turn a light off to indicate that your room is now at the temperature you set.)
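
To make that loop concrete, here is a small illustrative sketch (the Job name and image are placeholders, not taken from the page) of watching the Job controller drive a Job to completion through the API server:

```shell
# Create a Job; the Job controller asks the API server to create a Pod for it
kubectl create job demo-task --image=busybox -- echo "work done"

# Wait until the Job controller has marked the Job as complete
kubectl wait --for=condition=complete job/demo-task --timeout=120s

# Inspect the status that the controller wrote back to the Job object
kubectl get job demo-task -o jsonpath='{.status.conditions}'
```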

### Direct control

In contrast with the Job controller, some controllers also need to make changes to things outside of your cluster.

For example, if you use a control loop to make sure there are enough {{< glossary_tooltip text="nodes" term_id="node" >}} in your cluster, then that controller needs something outside the current cluster to set up new nodes when needed.

Controllers that interact with external state find their desired state from the API server, then communicate directly with an external system to bring the current state closer to the desired state.

(There actually is a [controller](https://github.com/kubernetes/autoscaler/) that horizontally scales the nodes in your cluster.)

The important point here is that the controller makes changes to bring about your desired state, and then reports the current state back to your cluster's API server. Other control loops can observe that reported data and take their own actions.

In the thermostat example, if the room is very cold then a different controller might also turn on a frost-protection heater. With Kubernetes clusters, the control plane indirectly works with IP address management tools, storage services, cloud provider APIs, and other services to implement this by [extending Kubernetes](/docs/concepts/extend-kubernetes/).

## Desired versus current state {#desired-vs-current}

Kubernetes takes a cloud-native view of systems and is able to handle constant change.

Your cluster could be changing at any point as work happens and control loops automatically fix failures. This means that, potentially, your cluster never reaches a stable state.

As long as the controllers for your cluster are running and able to make useful changes, it doesn't matter if the overall state is stable or not.

## Design

As a tenet of its design, Kubernetes uses lots of controllers that each manage a particular aspect of cluster state. Most commonly, a particular control loop (controller) uses one kind of resource as its desired state, and manages a different kind of resource to make that desired state happen. For example, a controller for Jobs tracks Job objects (to discover new work) and Pod objects (to run the work, and then to see when the work is finished). In this case something else creates the Jobs, whereas the Job controller creates Pods.

It's useful to have simple controllers rather than one monolithic set of interlinked control loops. Controllers can fail, so Kubernetes is designed to allow for that.

{{< note >}}
There can be several controllers that create or update the same kind of object. Behind the scenes, Kubernetes controllers make sure that they only pay attention to the resources linked to their controlling resource.

For example, you can have Deployments and Jobs; these both create Pods.
The Job controller does not delete the Pods that your Deployment created, because there is information ({{< glossary_tooltip term_id="Label" text="labels" >}}) the controllers can use to tell those Pods apart.
{{< /note >}}

## Ways of running controllers {#running-controllers}

Kubernetes comes with built-in controllers that run inside the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}. These built-in controllers provide important core behaviors.

The Deployment controller and Job controller are examples of controllers that come as part of Kubernetes itself ("built-in" controllers).
Kubernetes lets you run a resilient control plane, so that if any of the built-in controllers were to fail, another part of the control plane takes over the work.

You can also find controllers that run outside the control plane, to extend Kubernetes. Or, if you want, you can write a new controller yourself.
You can run your own controller as a set of Pods, or externally to Kubernetes. What fits best depends on what that particular controller does.

## {{% heading "whatsnext" %}}

* Read about the [Kubernetes control plane](/docs/concepts/overview/components/#control-plane-components)
* Discover some of the basic [Kubernetes objects](/docs/concepts/overview/working-with-objects/)
* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
* If you want to write your own controller, see [Kubernetes extension patterns](/docs/concepts/extend-kubernetes/#extension-patterns)
  and the [sample-controller](https://github.com/kubernetes/sample-controller) repository.
@@ -0,0 +1,17 @@

---
title: cgroup (control group)
id: cgroup
date: 2019-06-25
full_link:
short_description: >
  A group of Linux processes with optional resource isolation, accounting, and limits.

aka:
tags:
- fundamental
---
A group of Linux processes with optional resource isolation, accounting, and limits.

<!--more-->

cgroup is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network) of a collection of processes.
@@ -0,0 +1,21 @@

---
title: Controller
id: controller
date: 2018-04-12
full_link: /docs/concepts/architecture/controller/
short_description: >
  A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.

aka:
tags:
- architecture
- fundamental
---
In Kubernetes, controllers are control loops that watch the state of your {{< glossary_tooltip term_id="cluster" text="cluster">}}, then make or request changes where needed.
Each controller tries to move the current cluster state closer to the desired state.

<!--more-->

Controllers watch the shared state of your cluster through the {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} (part of the {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}}).

Some controllers also run inside the control plane, providing control loops that are essential to core Kubernetes functionality. For example: the Deployment controller, the DaemonSet controller, the namespace controller, and the persistent volume controller (among others) all run within the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}.
@@ -4,16 +4,16 @@ id: kube-apiserver

date: 2018-04-12
full_link: /docs/reference/generated/kube-apiserver/
short_description: >
  Component on the master that exposes the Kubernetes API. It is the front end for the Kubernetes control plane.
  Component on the control plane that exposes the Kubernetes API. It is the front end for the Kubernetes control plane.

aka:
tags:
- architecture
- fundamental
---
Component on the master that exposes the Kubernetes API. It is the front end for the Kubernetes control plane.
Component on the control plane that exposes the Kubernetes API. It is the front end for the Kubernetes control plane.

<!--more-->

It is designed to scale horizontally, that is, it scales by deploying more instances. For more information, see [Building High-Availability Clusters](/docs/admin/high-availability/).
@@ -4,16 +4,16 @@ id: kube-controller-manager

date: 2018-04-12
full_link: /docs/reference/generated/kube-controller-manager/
short_description: >
  Component on the master that runs controllers.
  Component on the control plane that runs controllers.

aka:
tags:
- architecture
- fundamental
---
Component on the master that runs {{< glossary_tooltip text="controllers" term_id="controller" >}}.
Component on the control plane that runs {{< glossary_tooltip text="Controller" term_id="controller" >}}.

<!--more-->

Logically, each {{< glossary_tooltip text="controller" term_id="controller" >}} is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Logically, each {{< glossary_tooltip text="Controller" term_id="controller" >}} is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
@@ -4,13 +4,13 @@ id: kube-scheduler

date: 2018-04-12
full_link: /docs/reference/generated/kube-scheduler/
short_description: >
  Component on the master that watches newly created Pods that have no node assigned, and selects a node for them to run on.
  Component on the control plane that watches newly created Pods that have no node assigned, and selects a node for them to run on.

aka:
tags:
- architecture
---
Component on the master that watches newly created Pods that have no node assigned, and selects a node for them to run on.
Component on the control plane that watches newly created Pods that have no node assigned, and selects a node for them to run on.

<!--more-->
@@ -4,14 +4,14 @@ id: kubelet

date: 2018-04-12
full_link: /docs/reference/generated/kubelet
short_description: >
  An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

aka:
tags:
- fundamental
- core-object
---
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

<!--more-->
@@ -47,12 +47,12 @@ To download Kubernetes, visit the [download](/releases/download/) section.

<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 6-9, 2023</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 12-15, 2024</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@@ -17,7 +17,7 @@ and are in no way a direct recommendation from the Kubernetes community or autho

USA's National Security Agency (NSA) and the Cybersecurity and Infrastructure
Security Agency (CISA)
released, "[Kubernetes Hardening Guidance](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF)"
released Kubernetes Hardening Guidance
on August 3rd, 2021. The guidance details threats to Kubernetes environments
and provides secure configuration guidance to minimize risk.

@@ -29,6 +29,14 @@ _Note_: This blog post is not a substitute for reading the guide. Reading the pu

guidance is recommended before proceeding as the following content is
complementary.

{{% pageinfo color="primary" %}}
**Update, November 2023:**

The National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) released the 1.0 version of the Kubernetes hardening guide in August 2021 and updated it based on industry feedback in March 2022 (version 1.1).

The most recent version of the Kubernetes hardening guidance was released in August 2022 with corrections and clarifications. Version 1.2 outlines a number of recommendations for [hardening Kubernetes clusters](https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR_KUBERNETES_HARDENING_GUIDANCE_1.2_20220829.PDF).
{{% /pageinfo %}}

## Introduction and Threat Model

Note that the threats identified as important by the NSA/CISA, or the intended audience of this guidance, may be different from the threats that other enterprise users of Kubernetes consider important. This section
@@ -18,38 +18,38 @@ This blog post contains information about these new package repositories,

what it means to you as an end user, and how to migrate to the new
repositories.

**ℹ️ Update (August 31, 2023):** the _**legacy Google-hosted repositories are deprecated
and will be frozen starting with September 13, 2023.**_
**ℹ️ Update (January 12, 2024):** the _**legacy Google-hosted repositories are going
away in January 2024.**_
Check out [the deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/)
for more details about this change.

## What you need to know about the new package repositories?

_(updated on August 31, 2023)_
_(updated on January 12, 2024)_

- This is an **opt-in change**; you're required to manually migrate from the
  Google-hosted repository to the Kubernetes community-owned repositories.
  See [how to migrate](#how-to-migrate) later in this announcement for migration information
  and instructions.
- The legacy Google-hosted repositories are **deprecated as of August 31, 2023**,
  and will be **frozen approximately as of September 13, 2023**. The freeze will happen
  immediately following the patch releases that are scheduled for September 2023.
  Freezing the legacy repositories means that we will publish packages for the Kubernetes
  project only to the community-owned repositories as of the September 13, 2023 cut-off point.
- **The legacy Google-hosted package repositories are going away in January 2024.** These repositories
  have been **deprecated as of August 31, 2023**, and **frozen as of September 13, 2023**.
  Check out the [deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/)
  for more details about this change.
- The existing packages in the legacy repositories will be available for the foreseeable future.
- ~~The existing packages in the legacy repositories will be available for the foreseeable future.
  However, the Kubernetes project can't provide any guarantees on how long is that going to be.
  The deprecated legacy repositories, and their contents, might be removed at any time in the future
  and without a further notice period.
  and without a further notice period.~~ **The legacy package repositories are going away in
  January 2024.**
- Given that no new releases will be published to the legacy repositories after
  the September 13, 2023 cut-off point, you will not be able to upgrade to any patch or minor
  release made from that date onwards if you don't migrate to the new Kubernetes package repositories.
  That said, we recommend migrating to the new Kubernetes package repositories **as soon as possible**.
- The new Kubernetes package repositories contain packages beginning with those
  Kubernetes versions that were still under support when the community took
  over the package builds. This means that anything before v1.24.0 will only be
  available in the Google-hosted repository.
  over the package builds. This means that the new package repositories have Linux packages for all
  Kubernetes releases starting with v1.24.0.
- Kubernetes does not have official Linux packages available for earlier releases of Kubernetes;
  however, your Linux distribution may provide its own packages.
- There's a dedicated package repository for each Kubernetes minor version.
  When upgrading to a different minor release, you must bear in mind that
  the package repository details also change, as the example below shows.
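
For example, on a Debian-based system only the minor-version segment of the repository URL changes between releases (a sketch of the repository definition, not the full migration procedure):

```shell
# Kubernetes v1.29 packages
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# When moving to v1.30, the repository definition changes as well
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```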

@@ -98,12 +98,21 @@ we'll only publish packages to the new package repositories (`pkgs.k8s.io`).

Kubernetes 1.29 and onwards will have packages published **only** to the
community-owned repositories (`pkgs.k8s.io`).

### What releases are available in the new community-owned package repositories?

Linux packages for releases starting from Kubernetes v1.24.0 are available in the
Kubernetes package repositories (`pkgs.k8s.io`). Kubernetes does not have official
Linux packages available for earlier releases of Kubernetes; however, your Linux
distribution may provide its own packages.

## Can I continue to use the legacy package repositories?

The existing packages in the legacy repositories will be available for the foreseeable
~~The existing packages in the legacy repositories will be available for the foreseeable
future. However, the Kubernetes project can't provide _any_ guarantees on how long
is that going to be. The deprecated legacy repositories, and their contents, might
be removed at any time in the future and without a further notice period.
be removed at any time in the future and without a further notice period.~~

**UPDATE**: The legacy packages are expected to go away in January 2024.

The Kubernetes project **strongly recommends** migrating to the new community-owned
repositories **as soon as possible**.
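
As a rough sketch of the Debian/Ubuntu migration path (the keyring path and the pinned minor version are assumptions; see the linked announcement and the official install documentation for the authoritative steps):

```shell
# Download the public signing key for the community-owned repositories
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key |
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Replace the legacy apt.kubernetes.io entry with the new repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' |
  sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
```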

@@ -16,7 +16,7 @@ Architecture: Conformance subproject_

In this [SIG
Architecture](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md)
spotlight, we talked with [Riaan
Kleinhans](https://github.com/Riaankl) (ii-Team), Lead for the
Kleinhans](https://github.com/Riaankl) (ii.nz), Lead for the
[Conformance
sub-project](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1).

@@ -26,7 +26,7 @@ sub-project](https://github.com/kubernetes/community/blob/master/sig-architectur

bit about yourself, your role and how you got involved in Kubernetes.

**Riaan Kleinhans (RK)**: Hi! My name is Riaan Kleinhans and I live in
South Africa. I am the Project manager for the [ii-Team](ii.nz) in New
South Africa. I am the Project manager for the [ii.nz](https://ii.nz) team in New
Zealand. When I joined ii the plan was to move to New Zealand in April
2020 and then Covid happened. Fortunately, being a flexible and
dynamic team we were able to make it work remotely and in very
@@ -0,0 +1,236 @@

---
layout: blog
title: 'Kubernetes v1.29: Mandala'
date: 2023-12-13
slug: kubernetes-v1-29-release
---

**Authors:** [Kubernetes v1.29 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.29/release-team.md)

**Editors:** Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley

Announcing the release of Kubernetes v1.29: Mandala (The Universe), the last release of 2023!

Similar to previous releases, the release of Kubernetes v1.29 introduces new stable, beta, and alpha features. The consistent delivery of top-notch releases underscores the strength of our development cycle and the vibrant support from our community.

This release consists of 49 enhancements. Of those enhancements, 11 have graduated to Stable, 19 are entering Beta and 19 have graduated to Alpha.

## Release theme and logo

Kubernetes v1.29: *Mandala (The Universe)* ✨🌌

{{< figure src="/images/blog/2023-12-13-kubernetes-1.29-release/k8s-1.29.png" alt="Kubernetes 1.29 Mandala logo" class="release-logo" >}}

Join us on a cosmic journey with Kubernetes v1.29!

This release is inspired by the beautiful art form that is Mandala—a symbol of the universe in its perfection. Our tight-knit universe of around 40 Release Team members, backed by hundreds of community contributors, has worked tirelessly to turn challenges into joy for millions worldwide.

The Mandala theme reflects our community’s interconnectedness—a vibrant tapestry woven by enthusiasts and experts alike. Each contributor is a crucial part, adding their unique energy, much like the diverse patterns in Mandala art. Kubernetes thrives on collaboration, echoing the harmony in Mandala creations.

The release logo, made by [Mario Jason Braganza](https://janusworx.com) (base Mandala art, courtesy - [Fibrel Ojalá](https://pixabay.com/users/fibrel-3502541/)), symbolizes the little universe that is the Kubernetes project and all its people.

In the spirit of Mandala’s transformative symbolism, Kubernetes v1.29 celebrates our project’s evolution. Like stars in the Kubernetes universe, each contributor, user, and supporter lights the way. Together, we create a universe of possibilities—one release at a time.

## Improvements that graduated to stable in Kubernetes v1.29 {#graduations-to-stable}

_This is a selection of some of the improvements that are now stable following the v1.29 release._

### ReadWriteOncePod PersistentVolume access mode ([SIG Storage](https://github.com/kubernetes/community/tree/master/sig-storage)) {#readwriteoncepod-pv-access-mode}

In Kubernetes, volume [access modes](/docs/concepts/storage/persistent-volumes/#access-modes)
are the way you can define how durable storage is consumed. These access modes are a part of the spec for PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). When using storage, there are different ways to model how that storage is consumed. For example, a storage system like a network file share can have many users all reading and writing data simultaneously. In other cases maybe everyone is allowed to read data but not write it. For highly sensitive data, maybe only one user is allowed to read and write data but nobody else.

Before v1.22, Kubernetes offered three access modes for PVs and PVCs:
* ReadWriteOnce – the volume can be mounted as read-write by a single node
* ReadOnlyMany – the volume can be mounted read-only by many nodes
* ReadWriteMany – the volume can be mounted as read-write by many nodes

The ReadWriteOnce access mode restricts volume access to a single node, which means it is possible for multiple pods on the same node to read from and write to the same volume. This could potentially be a major problem for some applications, especially if they require at most one writer for data safety guarantees.

To address this problem, a fourth access mode ReadWriteOncePod was introduced as an Alpha feature in v1.22 for CSI volumes. If you create a pod with a PVC that uses the ReadWriteOncePod access mode, Kubernetes ensures that pod is the only pod across your whole cluster that can read that PVC or write to it. In v1.29, this feature became Generally Available.
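
For illustration, a claim requesting the new access mode might look like the following (the claim name, size, and StorageClass are placeholders, and the volume must be backed by a CSI driver):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim            # placeholder name
spec:
  accessModes:
    - ReadWriteOncePod                 # only one Pod in the whole cluster may use this volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: my-csi-storageclass   # placeholder; must map to a CSI driver
EOF
```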

### Node volume expansion Secret support for CSI drivers ([SIG Storage](https://github.com/kubernetes/community/tree/master/sig-storage)) {#csi-node-volume-expansion-secrets}

In Kubernetes, a volume expansion operation may include the expansion of the volume on the node, which involves filesystem resize. Some CSI drivers require secrets, for example a credential for accessing a SAN fabric, during the node expansion for the following use cases:
* When a PersistentVolume represents encrypted block storage, for example using LUKS, you may need to provide a passphrase in order to expand the device.
* For various validations, the CSI driver needs to have credentials to communicate with the backend storage system at time of node expansion.

To meet this requirement, the CSI Node Expand Secret feature was introduced in Kubernetes v1.25. This allows an optional secret field to be sent as part of the NodeExpandVolumeRequest by the CSI drivers so that node volume expansion operation can be performed with the underlying storage system. In Kubernetes v1.29, this feature became generally available.
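
A hedged sketch of how an operator might wire this up in a StorageClass (the driver name and Secret references are placeholders; the `csi.storage.k8s.io/node-expand-secret-*` parameters are the ones this feature reads):

```shell
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-expandable-sc          # placeholder name
provisioner: example.csi.vendor.com    # placeholder CSI driver
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/node-expand-secret-name: storage-credentials   # placeholder Secret
  csi.storage.k8s.io/node-expand-secret-namespace: kube-system
EOF
```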

### KMS v2 encryption at rest generally available ([SIG Auth](https://github.com/kubernetes/community/tree/master/sig-auth)) {#kms-v2-api-encryption}

One of the first things to consider when securing a Kubernetes cluster is encrypting persisted
API data at rest. KMS provides an interface for a provider to utilize a key stored in an external
key service to perform this encryption. With Kubernetes v1.29, KMS v2 has become
a stable feature, bringing numerous improvements in performance, key rotation,
health check & status, and observability.
These enhancements provide users with a reliable solution to encrypt all resources in their Kubernetes clusters. You can read more about this in [KEP-3299](https://kep.k8s.io/3299).

It is recommended to use KMS v2. The KMS v1 feature gate is disabled by default; you will have to opt in to continue to use it.
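
As a rough sketch, a KMS v2 provider is wired into the API server through an encryption configuration file like the one below (the plugin name and socket path are placeholders; the file is then referenced via the kube-apiserver `--encryption-provider-config` flag):

```shell
cat <<EOF > /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2                                   # use the KMS v2 API
          name: example-kms-plugin                         # placeholder plugin name
          endpoint: unix:///var/run/kmsplugin/socket.sock  # placeholder socket path
      - identity: {}                                       # fallback for reading unencrypted data
EOF
```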

## Improvements that graduated to beta in Kubernetes v1.29 {#graduations-to-beta}

_This is a selection of some of the improvements that are now beta following the v1.29 release._

The throughput of the scheduler is our eternal challenge. The QueueingHint feature brings a new possibility to optimize the efficiency of requeueing, which could significantly reduce useless scheduling retries.

### Node lifecycle separated from taint management ([SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling))

As the title describes, this change decouples the `TaintManager` that performs taint-based pod eviction from the `NodeLifecycleController`, making them two separate controllers: `NodeLifecycleController` adds taints to unhealthy nodes, and `TaintManager` performs pod deletion on nodes tainted with the NoExecute effect.

### Clean up for legacy Secret-based ServiceAccount tokens ([SIG Auth](https://github.com/kubernetes/community/tree/master/sig-auth)) {#serviceaccount-token-clean-up}

Kubernetes switched to using more secure service account tokens, which are time-limited and bound to specific pods, by 1.22. It stopped auto-generating legacy secret-based service account tokens in 1.24, and then started labeling remaining auto-generated secret-based tokens still in use with their last-used date in 1.27.

In v1.29, to reduce potential attack surface, the LegacyServiceAccountTokenCleanUp feature labels legacy auto-generated secret-based tokens as invalid if they have not been used for a long time (1 year by default), and automatically removes them if use is not attempted for a long time after being marked as invalid (1 additional year by default). [KEP-2799](https://kep.k8s.io/2799)
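
To see which Secret-based tokens remain in a cluster, one possible check (assuming the `kubernetes.io/legacy-token-last-used` label that the earlier labeling step applies) is:

```shell
# List remaining Secret-based service account tokens and when they were last used
kubectl get secrets --all-namespaces \
  --field-selector type=kubernetes.io/service-account-token \
  -L kubernetes.io/legacy-token-last-used
```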
|
||||
## New alpha features
|
||||
|
||||
### Define Pod affinity or anti-affinity using `matchLabelKeys` ([SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling)) {#match-label-keys-pod-affinity}
|
||||
|
||||
One enhancement will be introduced in PodAffinity/PodAntiAffinity as alpha. It will increase the accuracy of calculation during rolling updates.
|
||||
|
||||
### nftables backend for kube-proxy ([SIG Network](https://github.com/kubernetes/community/tree/master/sig-network)) {#kube-proxy-nftables}
|
||||
|
||||
The default kube-proxy implementation on Linux is currently based on iptables. This was the preferred packet filtering and processing system in the Linux kernel for many years (starting with the 2.4 kernel in 2001). However, unsolvable problems with iptables led to the development of a successor, nftables. Development on iptables has mostly stopped, with new features and performance improvements primarily going into nftables instead.
|
||||
|
||||
This feature adds a new nftables-based backend to kube-proxy, since some Linux distributions have already started to deprecate and remove iptables, and nftables claims to solve the main performance problems of iptables.
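If you want to experiment with the alpha backend, the rough shape of the configuration (assuming a v1.29 kube-proxy) is to enable the feature gate and select the new proxy mode:

```
kube-proxy --feature-gates=NFTablesProxyMode=true --proxy-mode=nftables
```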
|
||||
|
||||
### APIs to manage IP address ranges for Services ([SIG Network](https://github.com/kubernetes/community/tree/master/sig-network)) {#ip-address-range-apis}
|
||||
|
||||
Services are an abstract way to expose an application running on a set of Pods. Services can have a cluster-scoped virtual IP address that is allocated from a predefined CIDR configured via kube-apiserver flags. However, users may want to add, remove, or resize existing IP ranges allocated for Services without having to restart the kube-apiserver.
|
||||
|
||||
This feature implements new allocator logic that uses two new API objects, ServiceCIDR and IPAddress, allowing users to dynamically increase the number of Service IPs available by creating new ServiceCIDRs. This helps to resolve problems like IP exhaustion or IP renumbering.
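For illustration, adding an extra range could look roughly like this sketch (the alpha API group and the CIDR value shown here are example assumptions):

```yaml
apiVersion: networking.k8s.io/v1alpha1
kind: ServiceCIDR
metadata:
  name: newservicecidr
spec:
  cidrs:
    - 10.96.1.0/24   # example: an additional range for Service ClusterIPs
```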
|
||||
|
||||
### Add support to containerd/kubelet/CRI to support image pull per runtime class ([SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows)) {#image-pull-per-runtimeclass}
|
||||
|
||||
Kubernetes v1.29 adds support to pull container images based on the RuntimeClass of the Pod that uses them.
|
||||
This feature is off by default in v1.29 under a feature gate called `RuntimeClassInImageCriApi`.
|
||||
|
||||
A container image can be either a manifest or an index. When the image being pulled is an index (an image index contains a list of image manifests ordered by platform), platform matching logic in the container runtime is used to pull an appropriate image manifest from the index. By default, the platform matching logic picks a manifest that matches the host that the image pull is being executed from. This can be limiting for VM-based containers where a user could pull an image with the intention of running it as a VM-based container, for example, Windows Hyper-V containers.
|
||||
|
||||
The image pull per runtime class feature adds support to pull different images based on the runtime class specified. This is achieved by referencing an image by a tuple of (`imageID`, `runtimeClass`), instead of just the `imageName` or `imageID`. Container runtimes can choose to add support for this feature if they'd like. If they do not, the default behavior of kubelet that existed prior to Kubernetes v1.29 will be retained.
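As a sketch of how you might try this out, enable the feature gate on the kubelet (`--feature-gates=RuntimeClassInImageCriApi=true`) and reference a RuntimeClass from the Pod; the RuntimeClass name and image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-hyperv-pod
spec:
  runtimeClassName: windows-hyperv        # placeholder RuntimeClass name
  containers:
    - name: app
      image: registry.example/app:1.0     # placeholder image; pulled per (imageID, runtimeClass)
```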
|
||||
|
||||
### In-place updates for Pod resources, for Windows Pods ([SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows))
|
||||
|
||||
As an alpha feature, Kubernetes Pods can be mutable with respect to their `resources`, allowing users to change the _desired_ resource requests and limits for a Pod without the need to restart the Pod. With v1.29, this feature is now supported for Windows containers.
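As a sketch (assuming the `InPlacePodVerticalScaling` feature gate is enabled; the pod and container names are placeholders), changing the desired CPU request without restarting the Pod can be done with a patch such as:

```
kubectl patch pod sample-windows-pod --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"}}}]}}'
```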
|
||||
|
||||
## Graduations, deprecations and removals for Kubernetes v1.29
|
||||
|
||||
### Graduated to stable
|
||||
|
||||
This lists all the features that graduated to stable (also known as _general availability_).
|
||||
For a full list of updates including new features and graduations from alpha to beta, see the
|
||||
[release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md).
|
||||
|
||||
This release includes a total of 11 enhancements promoted to Stable:
|
||||
|
||||
- [Remove transient node predicates from KCCM's service controller](https://kep.k8s.io/3458)
|
||||
- [Reserve nodeport ranges for dynamic and static allocation](https://kep.k8s.io/3668)
|
||||
- [Priority and Fairness for API Server Requests](https://kep.k8s.io/1040)
|
||||
- [KMS v2 Improvements](https://kep.k8s.io/3299)
|
||||
- [Support paged LIST queries from the Kubernetes API](https://kep.k8s.io/365)
|
||||
- [ReadWriteOncePod PersistentVolume Access Mode](https://kep.k8s.io/2485)
|
||||
- [Kubernetes Component Health SLIs](https://kep.k8s.io/3466)
|
||||
- [CRD Validation Expression Language](https://kep.k8s.io/2876)
|
||||
- [Introduce nodeExpandSecret in CSI PV source](https://kep.k8s.io/3107)
|
||||
- [Track Ready Pods in Job status](https://kep.k8s.io/2879)
|
||||
- [Kubelet Resource Metrics Endpoint](https://kep.k8s.io/727)
|
||||
|
||||
### Deprecations and removals
|
||||
|
||||
#### Removal of in-tree integrations with cloud providers ([SIG Cloud Provider](https://github.com/kubernetes/community/tree/master/sig-cloud-provider)) {#in-tree-cloud-provider-integration-removal}
|
||||
|
||||
Kubernetes v1.29 defaults to operating _without_ a built-in integration to any cloud provider.
|
||||
If you have previously been relying on in-tree cloud provider integrations (with Azure, GCE, or vSphere) then you can either:
|
||||
- enable an equivalent external [cloud controller manager](/docs/concepts/architecture/cloud-controller/)
|
||||
integration _(recommended)_
|
||||
- opt back in to the legacy integration by setting the associated feature gates to `false`; the feature
|
||||
gates to change are `DisableCloudProviders` and `DisableKubeletCloudCredentialProviders`
|
||||
|
||||
Enabling external cloud controller managers means you must run a suitable cloud controller manager within your cluster's control plane; it also requires setting the command line argument `--cloud-provider=external` for the kubelet (on every relevant node), and across the control plane (kube-apiserver and kube-controller-manager).
|
||||
|
||||
For more information about how to enable and run external cloud controller managers, read [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) and [Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/).
|
||||
|
||||
If you need a cloud controller manager for one of the legacy in-tree providers, please see the following links:
|
||||
* [Cloud provider AWS](https://github.com/kubernetes/cloud-provider-aws)
|
||||
* [Cloud provider Azure](https://github.com/kubernetes-sigs/cloud-provider-azure)
|
||||
* [Cloud provider GCE](https://github.com/kubernetes/cloud-provider-gcp)
|
||||
* [Cloud provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack)
|
||||
* [Cloud provider vSphere](https://github.com/kubernetes/cloud-provider-vsphere)
|
||||
|
||||
There are more details in [KEP-2395](https://kep.k8s.io/2395).
|
||||
|
||||
#### Removal of the `v1beta2` flow control API group
|
||||
|
||||
The deprecated _flowcontrol.apiserver.k8s.io/v1beta2_ API version of FlowSchema and
|
||||
PriorityLevelConfiguration is no longer served in Kubernetes v1.29.
|
||||
|
||||
If you have manifests or client software that uses the deprecated beta API group, you should change
|
||||
these before you upgrade to v1.29.
|
||||
See the [deprecated API migration guide](/docs/reference/using-api/deprecation-guide/#v1-29)
|
||||
for details and advice.
|
||||
|
||||
#### Deprecation of the `status.nodeInfo.kubeProxyVersion` field for Node
|
||||
|
||||
The `.status.nodeInfo.kubeProxyVersion` field of Node objects is now deprecated, and the Kubernetes project
is proposing to remove that field in a future release. The deprecated field is not accurate and has historically
been managed by the kubelet, which does not actually know the kube-proxy version, or even whether kube-proxy
is running.
|
||||
|
||||
If you've been using this field in client software, stop - the information isn't reliable and the field is now
|
||||
deprecated.
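If you want to audit what the (unreliable) field currently reports in your cluster, one way to inspect it is:

```
kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBE_PROXY_VERSION:.status.nodeInfo.kubeProxyVersion
```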
|
||||
|
||||
|
||||
#### Legacy Linux package repositories
|
||||
|
||||
Please note that in August of 2023, the legacy package repositories (`apt.kubernetes.io` and
|
||||
`yum.kubernetes.io`) were formally deprecated and the Kubernetes project announced the
|
||||
general availability of the community-owned package repositories for Debian and RPM packages,
|
||||
available at `https://pkgs.k8s.io`.
|
||||
|
||||
These legacy repositories were frozen in September of 2023, and
|
||||
will go away entirely in January of 2024. If you are currently relying on them, you **must** migrate.
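For example, on Debian-based systems a source entry for the community-owned repositories looks roughly like the following sketch (adjust the minor version in the URL to the release stream you need):

```
# /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /
```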
|
||||
|
||||
_This deprecation is not directly related to the v1.29 release._ For more details, including how these changes may affect you and what to do if you are affected, please read the [legacy package repository deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/).
|
||||
|
||||
## Release notes
|
||||
|
||||
Check out the full details of the Kubernetes v1.29 release in our [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md).
|
||||
|
||||
## Availability
|
||||
|
||||
Kubernetes v1.29 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.29.0). To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials) or run local Kubernetes clusters using [minikube](https://minikube.sigs.k8s.io/). You can also easily install v1.29 using [kubeadm](/docs/setup/independent/create-cluster-kubeadm).
|
||||
|
||||
## Release team
|
||||
|
||||
Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.
|
||||
|
||||
We would like to thank the entire [release team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.29/release-team.md) for the hours spent hard at work to deliver the Kubernetes v1.29 release for our community. A very special thanks is in order for our release lead, [Priyanka Saggu](https://github.com/Priyankasaggu11929), for supporting and guiding us through a successful release cycle, making sure that we could all contribute in the best way possible, and challenging us to improve the release process.
|
||||
|
||||
## Project velocity
|
||||
The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.
|
||||
|
||||
In the v1.29 release cycle, which [ran for 14 weeks](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.29) (September 6 to December 13), we saw contributions from [888 companies](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.28.0%20-%20now&var-metric=contributions) and [1422 individuals](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.28.0%20-%20now&var-metric=contributions&var-repogroup_name=Kubernetes&var-repo_name=kubernetes%2Fkubernetes&var-country_name=All&var-companies=All).
|
||||
|
||||
|
||||
## Ecosystem updates
|
||||
- KubeCon + CloudNativeCon Europe 2024 will take place in Paris, France, from **19 – 22 March 2024**! You can find more information about the conference and registration on the [event site](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/).
|
||||
|
||||
## Upcoming release webinar {#release-webinar}
|
||||
|
||||
Join members of the Kubernetes v1.29 release team on Friday, December 15th, 2023, at 11am PT (2pm eastern) to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the [event page](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-129-release/) on the CNCF Online Programs site.
|
||||
|
||||
### Get involved
|
||||
|
||||
The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication), and through the channels below. Thank you for your continued feedback and support.
|
||||
|
||||
- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for latest updates
|
||||
- Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
|
||||
- Join the community on [Slack](http://slack.k8s.io/)
|
||||
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
|
||||
- Share your Kubernetes [story](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
|
||||
- Read more about what’s happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
|
||||
- Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team)
|
|
@ -0,0 +1,218 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.29: Cloud Provider Integrations Are Now Separate Components"
|
||||
date: 2023-12-14T09:30:00-08:00
|
||||
slug: cloud-provider-integration-changes
|
||||
---
|
||||
|
||||
**Authors:** Michael McCune (Red Hat), Andrew Sy Kim (Google)
|
||||
|
||||
For Kubernetes v1.29, you need to use additional components to integrate your
|
||||
Kubernetes cluster with a cloud infrastructure provider. By default, Kubernetes
|
||||
v1.29 components **abort** if you try to specify integration with any cloud provider using
|
||||
one of the legacy compiled-in cloud provider integrations. If you want to use a legacy
|
||||
integration, you have to opt back in - and a future release will remove even that option.
|
||||
|
||||
In 2018, the [Kubernetes community agreed to form the Cloud Provider Special
|
||||
Interest Group (SIG)][oldblog], with a mission to externalize all cloud provider
|
||||
integrations and remove all the existing in-tree cloud provider integrations.
|
||||
In January 2019, the Kubernetes community approved the initial draft of
|
||||
[KEP-2395: Removing In-Tree Cloud Provider Code][kep2395]. This KEP defines a
|
||||
process by which we can remove cloud provider specific code from the core
|
||||
Kubernetes source tree. From the KEP:
|
||||
|
||||
> Motiviation [sic] behind this effort is to allow cloud providers to develop and
|
||||
> make releases independent from the core Kubernetes release cycle. The
|
||||
> de-coupling of cloud provider code allows for separation of concern between
|
||||
> "Kubernetes core" and the cloud providers within the ecosystem. In addition,
|
||||
> this ensures all cloud providers in the ecosystem are integrating with
|
||||
> Kubernetes in a consistent and extendable way.
|
||||
|
||||
After many years of development and collaboration across many contributors,
|
||||
the default behavior for legacy cloud provider integrations is changing.
|
||||
This means that users will need to confirm their Kubernetes configurations,
|
||||
and in some cases run external cloud controller managers. These changes are
|
||||
taking effect in Kubernetes version 1.29; read on to learn if you are affected
|
||||
and what changes you will need to make.
|
||||
|
||||
These updated default settings affect a large proportion of Kubernetes users,
|
||||
and **will require changes** for users who were previously using the in-tree
|
||||
provider integrations. The legacy integrations offered compatibility with
|
||||
Azure, AWS, GCE, OpenStack, and vSphere; however, for OpenStack and AWS the
compiled-in integrations were removed in Kubernetes versions 1.26 and 1.27,
respectively.
|
||||
|
||||
## What has changed?
|
||||
|
||||
At the most basic level, two [feature gates][fg] are changing their default
|
||||
value from false to true. Those feature gates, `DisableCloudProviders` and
|
||||
`DisableKubeletCloudCredentialProviders`, control the way that the
|
||||
[kube-apiserver][kapi], [kube-controller-manager][kcm], and [kubelet][kubelet]
|
||||
invoke the cloud provider related code that is included in those components.
|
||||
When these feature gates are true (the default), the only recognized value for
|
||||
the `--cloud-provider` command line argument is `external`.
|
||||
|
||||
Let's see what the [official Kubernetes documentation][fg] says about these
|
||||
feature gates:
|
||||
|
||||
> `DisableCloudProviders`: Disables any functionality in `kube-apiserver`,
|
||||
> `kube-controller-manager` and `kubelet` related to the `--cloud-provider`
|
||||
> component flag.
|
||||
|
||||
> `DisableKubeletCloudCredentialProviders`: Disable the in-tree functionality
|
||||
> in kubelet to authenticate to a cloud provider container registry for image
|
||||
> pull credentials.
|
||||
|
||||
The next stage beyond beta will be full removal; for that release onwards, you
|
||||
won't be able to override those feature gates back to false.
|
||||
|
||||
## What do you need to do?
|
||||
|
||||
If you are upgrading from Kubernetes 1.28+ and are not on Azure, GCE, or
|
||||
vSphere then there are no changes you will need to make. If
|
||||
you **are** on Azure, GCE, or vSphere, or you are upgrading from a version
|
||||
older than 1.28, then read on.
|
||||
|
||||
Historically, Kubernetes has included code for a set of cloud providers that
|
||||
included AWS, Azure, GCE, OpenStack, and vSphere. Since the inception of
|
||||
[KEP-2395][kep2395] the community has been moving towards removal of that
|
||||
cloud provider code. The OpenStack provider code was removed in version 1.26,
|
||||
and the AWS provider code was removed in version 1.27. This means that users
|
||||
who are upgrading from one of the affected cloud providers and versions will
|
||||
need to modify their deployments.
|
||||
|
||||
### Upgrading on Azure, GCE, or vSphere
|
||||
|
||||
There are two options for upgrading in this configuration: migrate to external
|
||||
cloud controller managers, or continue using the in-tree provider code.
|
||||
Although migrating to external cloud controller managers is recommended,
|
||||
there are scenarios where continuing with the current behavior is desired.
|
||||
Please choose the best option for your needs.
|
||||
|
||||
#### Migrate to external cloud controller managers
|
||||
|
||||
Migrating to use external cloud controller managers is the recommended upgrade
|
||||
path, when possible in your situation. To do this you will need to
|
||||
enable the `--cloud-provider=external` command line flag for the
|
||||
`kube-apiserver`, `kube-controller-manager`, and `kubelet` components. In
|
||||
addition you will need to deploy a cloud controller manager for your provider.
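In terms of flags alone, the change described above amounts to something like the following sketch on each component (the rest of each command line is omitted):

```
kube-apiserver          --cloud-provider=external ...
kube-controller-manager --cloud-provider=external ...
kubelet                 --cloud-provider=external ...
```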
|
||||
|
||||
Installing and running cloud controller managers is a larger topic than this
|
||||
post can address; if you would like more information on this process please
|
||||
read the documentation for [Cloud Controller Manager Administration][ccmadmin]
|
||||
and [Migrate Replicated Control Plane To Use Cloud Controller Manager][ccmha].
|
||||
See [below](#cloud-provider-integrations) for links to specific cloud provider
|
||||
implementations.
|
||||
|
||||
#### Continue using in-tree provider code
|
||||
|
||||
If you wish to continue using Kubernetes with the in-tree cloud provider code,
|
||||
you will need to modify the command line parameters for `kube-apiserver`,
|
||||
`kube-controller-manager`, and `kubelet` to disable the feature gates for
|
||||
`DisableCloudProviders` and `DisableKubeletCloudCredentialProviders`. To do
|
||||
this, add the following command line flag to the arguments for the previously
|
||||
listed commands:
|
||||
|
||||
```
|
||||
--feature-gates=DisableCloudProviders=false,DisableKubeletCloudCredentialProviders=false
|
||||
```
|
||||
|
||||
_Please note that if you have other feature gate modifications on the command
|
||||
line, they will need to include these 2 feature gates._
|
||||
|
||||
**Note**: These feature gates will be locked to `true` in an upcoming
|
||||
release. Setting these feature gates to `false` should be used as a last
|
||||
resort. It is highly recommended to migrate to an external cloud controller
|
||||
manager as the in-tree providers are planned for removal as early as Kubernetes
|
||||
version 1.31.
|
||||
|
||||
### Upgrading on other providers
|
||||
|
||||
For providers other than Azure, GCE, or vSphere there is good news: the external cloud
controller manager should already be in use. You can confirm this by inspecting
the `--cloud-provider` flag for the kubelets in your cluster; it will have
the value `external` if you are using an external provider. The code for the AWS and OpenStack
|
||||
providers was removed from Kubernetes before version 1.27 was released.
|
||||
Other providers beyond AWS, Azure, GCE, OpenStack, and vSphere were never
|
||||
included in Kubernetes and as such they began their life as external cloud
|
||||
controller managers.
|
||||
|
||||
### Upgrading from older Kubernetes versions
|
||||
|
||||
If you are upgrading from a Kubernetes release older than 1.26, and you are on
|
||||
AWS, Azure, GCE, OpenStack, or vSphere then you will need to enable the
|
||||
`--cloud-provider=external` flag, and follow the advice for installing and
|
||||
running a cloud controller manager for your provider.
|
||||
|
||||
Please read the documentation for
|
||||
[Cloud Controller Manager Administration][ccmadmin] and
|
||||
[Migrate Replicated Control Plane To Use Cloud Controller Manager][ccmha]. See
|
||||
below for links to specific cloud provider implementations.
|
||||
|
||||
## Where to find a cloud controller manager?
|
||||
|
||||
At its core, this announcement is about the cloud provider integrations that
|
||||
were previously included in Kubernetes. As these components move out of the
|
||||
core Kubernetes code and into their own repositories, it is important to note
|
||||
a few things:
|
||||
|
||||
First, SIG Cloud Provider offers a reference framework for developers who
|
||||
wish to create cloud controller managers for any provider. See the
|
||||
[cloud-provider repository][cloud-provider] for more information about how
|
||||
these controllers work and how to get started creating your own.
|
||||
|
||||
Second, there are many cloud controller managers available for Kubernetes.
|
||||
This post is addressing the provider integrations that have been historically
|
||||
included with Kubernetes but are now in the process of being removed. If you
|
||||
need a cloud controller manager for your provider and do not see it listed here,
|
||||
please reach out to the cloud provider you are integrating with or the
|
||||
[Kubernetes SIG Cloud Provider community][sig] for help and advice. It is
|
||||
worth noting that while most cloud controller managers are open source today,
|
||||
this may not always be the case. Users should always contact their cloud
|
||||
provider to learn if there are preferred solutions to utilize on their
|
||||
infrastructure.
|
||||
|
||||
### Cloud provider integrations provided by the Kubernetes project {#cloud-provider-integrations}
|
||||
|
||||
* AWS - https://github.com/kubernetes/cloud-provider-aws
|
||||
* Azure - https://github.com/kubernetes-sigs/cloud-provider-azure
|
||||
* GCE - https://github.com/kubernetes/cloud-provider-gcp
|
||||
* OpenStack - https://github.com/kubernetes/cloud-provider-openstack
|
||||
* vSphere - https://github.com/kubernetes/cloud-provider-vsphere
|
||||
|
||||
If you are looking for an automated approach to installing cloud controller
|
||||
managers in your clusters, the [kOps][kops] project provides a convenient
|
||||
solution for managing production-ready clusters.
|
||||
|
||||
## Want to learn more?
|
||||
|
||||
Cloud providers and cloud controller managers serve a core function in
|
||||
Kubernetes. Cloud providers are often the substrate upon which Kubernetes is
|
||||
operated, and the cloud controller managers supply the essential lifeline
|
||||
between Kubernetes clusters and their physical infrastructure.
|
||||
|
||||
This post covers one aspect of how the Kubernetes community interacts with
|
||||
the world of cloud infrastructure providers. If you are curious about this
|
||||
topic and want to learn more, the Cloud Provider Special Interest Group (SIG)
|
||||
is the place to go. SIG Cloud Provider hosts bi-weekly meetings to discuss all
|
||||
manner of topics related to cloud providers and cloud controller managers in
|
||||
Kubernetes.
|
||||
|
||||
### SIG Cloud Provider
|
||||
|
||||
* Regular SIG Meeting: [Wednesdays at 9:00 PT (Pacific Time)](https://zoom.us/j/508079177?pwd=ZmEvMksxdTFTc0N1eXFLRm91QUlyUT09) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:00&tz=PT%20%28Pacific%20Time%29).
|
||||
* [Kubernetes slack][kslack] channel `#sig-cloud-provider`
|
||||
* [SIG Community page][sig]
|
||||
|
||||
[kep2395]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers
|
||||
[fg]: https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
|
||||
[kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
|
||||
[kcm]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
|
||||
[kapi]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
|
||||
[ccmadmin]: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/
|
||||
[ccmha]: https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/
|
||||
[kslack]: https://kubernetes.slack.com
|
||||
[sig]: https://github.com/kubernetes/community/tree/master/sig-cloud-provider
|
||||
[cloud-provider]: https://github.com/kubernetes/cloud-provider
|
||||
[oldblog]: https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/
|
||||
[kops]: https://github.com/kubernetes/kops
|
|
@ -0,0 +1,92 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.29: CSI Storage Resizing Authenticated and Generally Available in v1.29"
|
||||
date: 2023-12-15
|
||||
slug: csi-node-expand-secret-support-ga
|
||||
---
|
||||
**Authors:** Humble Chirammal (VMware), Louis Koo (deeproute.ai)
|
||||
|
||||
Kubernetes version v1.29 brings generally available support for authentication
|
||||
during CSI (Container Storage Interface) storage resize operations.
|
||||
|
||||
Let's embark on the evolution of this feature, initially introduced in alpha in
|
||||
Kubernetes v1.25, and unravel the changes accompanying its transition to GA.
|
||||
|
||||
## Authenticated CSI storage resizing unveiled
|
||||
|
||||
Kubernetes harnesses the capabilities of CSI to seamlessly integrate with third-party
|
||||
storage systems, empowering your cluster to seamlessly expand storage volumes
|
||||
managed by the CSI driver. The recent elevation of authentication secret support
|
||||
for resizes from Beta to GA ushers in new horizons, enabling volume expansion in
|
||||
scenarios where the underlying storage operation demands credentials for backend
|
||||
cluster operations – such as accessing a SAN/NAS fabric. This enhancement addresses
|
||||
a critical limitation for CSI drivers, allowing volume expansion at the node level,
|
||||
especially in cases necessitating authentication for resize operations.
|
||||
|
||||
The challenges extend beyond node-level expansion. Within the Special Interest
|
||||
Group (SIG) Storage, use cases have surfaced, including scenarios where the
|
||||
CSI driver needs to validate the actual size of backend block storage before
|
||||
initiating a node-level filesystem expand operation. This validation prevents
|
||||
false positive returns from the backend storage cluster during file system expansion.
|
||||
Additionally, for PersistentVolumes representing encrypted block storage (e.g., using LUKS),
|
||||
a passphrase is mandated to expand the device and grow the filesystem, underscoring
|
||||
the necessity for authenticated resizing.
|
||||
|
||||
## What's new for Kubernetes v1.29
|
||||
With the graduation to GA, the feature remains enabled by default. Support for
|
||||
node-level volume expansion secrets has been seamlessly integrated into the CSI
|
||||
external-provisioner sidecar controller. To take advantage, ensure your external
|
||||
CSI storage provisioner sidecar controller is operating at v3.3.0 or above.
|
||||
|
||||
## Navigating Authenticated CSI Storage Resizing
|
||||
Assuming all requisite components, including the CSI driver, are deployed and operational
|
||||
on your cluster, and you have a CSI driver supporting resizing, you can initiate a
|
||||
`NodeExpand` operation on a CSI volume. Credentials for the CSI `NodeExpand` operation
|
||||
can be conveniently provided as a Kubernetes Secret, specifying the Secret via the
|
||||
StorageClass. Here's an illustrative manifest for a Secret holding credentials:
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: test-secret
|
||||
namespace: default
|
||||
stringData:
|
||||
username: admin
|
||||
password: t0p-Secret
|
||||
```
|
||||
And here's an example manifest for a StorageClass referencing those credentials:
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: csi-blockstorage-sc
|
||||
parameters:
|
||||
csi.storage.k8s.io/node-expand-secret-name: test-secret
|
||||
csi.storage.k8s.io/node-expand-secret-namespace: default
|
||||
provisioner: blockstorage.cloudprovider.example
|
||||
reclaimPolicy: Delete
|
||||
volumeBindingMode: Immediate
|
||||
allowVolumeExpansion: true
|
||||
```
|
||||
|
||||
Upon successful creation of the PersistentVolumeClaim (PVC), you can verify the
|
||||
configuration within the .spec.csi field of the PersistentVolume. To confirm,
|
||||
execute `kubectl get persistentvolume <pv_name> -o yaml`.
|
||||
|
||||
## Engage with the Evolution!
|
||||
For those enthusiastic about contributing or delving deeper into the technical
|
||||
intricacies, the enhancement proposal comprises exhaustive details about the
|
||||
feature's history and implementation. Explore the realms of StorageClass-based
|
||||
dynamic provisioning in Kubernetes by referring to the [storage class documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class)
|
||||
and the overarching [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) documentation.
|
||||
|
||||
Join the Kubernetes Storage SIG (Special Interest Group) to actively participate
|
||||
in elevating this feature. Your insights are invaluable, and we eagerly anticipate
|
||||
welcoming more contributors to shape the future of Kubernetes storage!
|
||||
|
|
@ -0,0 +1,177 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.29: VolumeAttributesClass for Volume Modification"
|
||||
date: 2023-12-15
|
||||
slug: kubernetes-1-29-volume-attributes-class
|
||||
---
|
||||
|
||||
**Author**: Sunny Song (Google)
|
||||
|
||||
The v1.29 release of Kubernetes introduced an alpha feature to support modifying a volume
|
||||
by changing the `volumeAttributesClassName` that was specified for a PersistentVolumeClaim (PVC).
|
||||
With the feature enabled, Kubernetes can handle updates of volume attributes other than capacity.
|
||||
Allowing volume attributes to be changed without managing them directly through different
providers' APIs simplifies the current flow.
|
||||
|
||||
You can read about VolumeAttributesClass usage details in the Kubernetes documentation
|
||||
or you can read on to learn about why the Kubernetes project is supporting this feature.
|
||||
|
||||
|
||||
## VolumeAttributesClass
|
||||
|
||||
The new `storage.k8s.io/v1alpha1` API group provides two new types:
|
||||
|
||||
**VolumeAttributesClass**
|
||||
|
||||
Represents a specification of mutable volume attributes defined by the CSI driver.
|
||||
The class can be specified during dynamic provisioning of PersistentVolumeClaims,
|
||||
and changed in the PersistentVolumeClaim spec after provisioning.
|
||||
|
||||
**ModifyVolumeStatus**
|
||||
|
||||
Represents the status object of `ControllerModifyVolume` operation.
|
||||
|
||||
With this alpha feature enabled, the spec of PersistentVolumeClaim defines VolumeAttributesClassName
|
||||
that is used in the PVC. At volume provisioning, the `CreateVolume` operation will apply the parameters in the
|
||||
VolumeAttributesClass along with the parameters in the StorageClass.
|
||||
|
||||
When there is a change of volumeAttributesClassName in the PVC spec,
|
||||
the external-resizer sidecar will get an informer event. Based on the current state of the configuration,
|
||||
the resizer will trigger a CSI ControllerModifyVolume.
|
||||
More details can be found in [KEP-3751](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/3751-volume-attributes-class/README.md).
|
||||
|
||||
## How to use it
|
||||
|
||||
If you want to test the feature whilst it's alpha, you need to enable the relevant feature gate
|
||||
in the `kube-controller-manager` and the `kube-apiserver`. Use the `--feature-gates` command line argument:
|
||||
|
||||
|
||||
```
|
||||
--feature-gates="...,VolumeAttributesClass=true"
|
||||
```
|
||||
|
||||
|
||||
It also requires that the CSI driver has implemented the ModifyVolume API.
|
||||
|
||||
|
||||
### User flow
|
||||
|
||||
If you would like to see the feature in action and verify it works fine in your cluster, here's what you can try:
|
||||
|
||||
|
||||
1. Define a StorageClass and VolumeAttributesClass
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: csi-sc-example
|
||||
provisioner: pd.csi.storage.gke.io
|
||||
parameters:
|
||||
disk-type: "hyperdisk-balanced"
|
||||
volumeBindingMode: WaitForFirstConsumer
|
||||
```
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1alpha1
|
||||
kind: VolumeAttributesClass
|
||||
metadata:
|
||||
name: silver
|
||||
driverName: pd.csi.storage.gke.io
|
||||
parameters:
|
||||
provisioned-iops: "3000"
|
||||
provisioned-throughput: "50"
|
||||
```
|
||||
|
||||
|
||||
2. Define and create the PersistentVolumeClaim
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: test-pv-claim
|
||||
spec:
|
||||
storageClassName: csi-sc-example
|
||||
volumeAttributesClassName: silver
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 64Gi
|
||||
```
|
||||
|
||||
|
||||
3. Verify that the PersistentVolumeClaim is now provisioned correctly with:
|
||||
|
||||
```
|
||||
kubectl get pvc
|
||||
```
|
||||
|
||||
|
||||
4. Create a new VolumeAttributesClass gold:
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1alpha1
|
||||
kind: VolumeAttributesClass
|
||||
metadata:
|
||||
name: gold
|
||||
driverName: pd.csi.storage.gke.io
|
||||
parameters:
|
||||
  provisioned-iops: "4000"
|
||||
  provisioned-throughput: "60"
|
||||
```
|
||||
|
||||
|
||||
5. Update the PVC with the new VolumeAttributesClass and apply:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: test-pv-claim
|
||||
spec:
|
||||
storageClassName: csi-sc-example
|
||||
volumeAttributesClassName: gold
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 64Gi
|
||||
```
|
||||
|
||||
|
||||
6. Verify that PersistentVolumeClaims has the updated VolumeAttributesClass parameters with:
|
||||
|
||||
```
|
||||
kubectl describe pvc <PVC_NAME>
|
||||
```
|
||||
|
||||
## Next steps
|
||||
|
||||
* See the [VolumeAttributesClass KEP](https://kep.k8s.io/3751) for more information on the design
|
||||
* You can view or comment on the [project board](https://github.com/orgs/kubernetes-csi/projects/72) for VolumeAttributesClass
|
||||
* In order to move this feature towards beta, we need feedback from the community,
|
||||
so here's a call to action: add support to the CSI drivers, try out this feature,
|
||||
consider how it can help with problems that your users are having…
|
||||
|
||||
|
||||
## Getting involved
|
||||
|
||||
We always welcome new contributors. So, if you would like to get involved, you can join our [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
|
||||
|
||||
If you would like to share feedback, you can do so on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
|
||||
|
||||
Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
|
||||
|
||||
* Baofa Fan (calory)
|
||||
* Ben Swartzlander (bswartz)
|
||||
* Connor Catlett (ConnorJC3)
|
||||
* Hemant Kumar (gnufied)
|
||||
* Jan Šafránek (jsafrane)
|
||||
* Joe Betz (jpbetz)
|
||||
* Jordan Liggitt (liggitt)
|
||||
* Matthew Cary (mattcary)
|
||||
* Michelle Au (msau42)
|
||||
* Xing Yang (xing-yang)
|
|
@ -0,0 +1,110 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.29: New (alpha) Feature, Load Balancer IP Mode for Services"
|
||||
date: 2023-12-18
|
||||
slug: kubernetes-1-29-feature-loadbalancer-ip-mode-alpha
|
||||
---
|
||||
|
||||
**Author:** [Aohan Yang](https://github.com/RyanAoh)
|
||||
|
||||
This blog introduces a new alpha feature in Kubernetes 1.29.
|
||||
It provides a configurable approach to define how Service implementations,
|
||||
exemplified in this blog by kube-proxy,
|
||||
handle traffic from pods to the Service, within the cluster.
|
||||
|
||||
## Background
|
||||
|
||||
In older Kubernetes releases, the kube-proxy would intercept traffic that was destined for the IP
|
||||
address associated with a Service of `type: LoadBalancer`. This happened whatever mode you used
|
||||
for `kube-proxy`.
|
||||
The interception implemented the expected behavior (traffic eventually reaching the expected
|
||||
endpoints behind the Service). The mechanism to make that work depended on the mode for kube-proxy;
|
||||
on Linux, kube-proxy in iptables mode would redirect packets directly to the endpoint; in ipvs mode,
kube-proxy would bind the load balancer's IP address to an interface on the node.
|
||||
The motivation for implementing that interception was for two reasons:
|
||||
|
||||
1. **Traffic path optimization:** Efficiently redirecting pod traffic - when a container in a pod sends an outbound
|
||||
packet that is destined for the load balancer's IP address -
|
||||
directly to the backend service by bypassing the load balancer.
|
||||
|
||||
2. **Handling load balancer packets:** Some load balancers send packets with the destination IP set to
|
||||
the load balancer's IP address. As a result, these packets need to be routed directly to the correct backend (which
|
||||
might not be local to that node), in order to avoid loops.
|
||||
|
||||
## Problems
|
||||
|
||||
However, there are several problems with the aforementioned behavior:
|
||||
|
||||
1. **[Source IP](https://github.com/kubernetes/kubernetes/issues/79783):**
|
||||
Some cloud providers use the load balancer's IP as the source IP when
|
||||
transmitting packets to the node. In the ipvs mode of kube-proxy,
|
||||
there is a problem where health checks from the load balancer never return. This occurs because the reply packets
would be forwarded to the local interface `kube-ipvs0` (where the load balancer's IP is bound)
and subsequently ignored.
|
||||
|
||||
2. **[Feature loss at load balancer level](https://github.com/kubernetes/kubernetes/issues/66607):**
|
||||
Certain cloud providers offer features (such as TLS termination, proxy protocol, etc.) at the
|
||||
load balancer level.
|
||||
Bypassing the load balancer results in the loss of these features when the packet reaches the service
|
||||
(leading to protocol errors).
|
||||
|
||||
|
||||
Even with the new alpha behaviour disabled (the default), there is a
|
||||
[workaround](https://github.com/kubernetes/kubernetes/issues/66607#issuecomment-474513060)
|
||||
that involves setting `.status.loadBalancer.ingress.hostname` for the Service, in order
|
||||
to bypass kube-proxy binding.
|
||||
But this is just a makeshift solution.
|
||||
|
||||
## Solution
|
||||
|
||||
In summary, providing an option for cloud providers to disable the current behavior would be highly beneficial.
|
||||
|
||||
To address this, Kubernetes v1.29 introduces a new (alpha) `.status.loadBalancer.ingress.ipMode`
|
||||
field for a Service.
|
||||
This field specifies how the load balancer IP behaves and can be specified only when
|
||||
the `.status.loadBalancer.ingress.ip` field is also specified.
|
||||
|
||||
Two values are possible for `.status.loadBalancer.ingress.ipMode`: `"VIP"` and `"Proxy"`.
|
||||
The default value is "VIP", meaning that traffic delivered to the node
|
||||
with the destination set to the load balancer's IP and port will be redirected to the backend service by kube-proxy.
|
||||
This preserves the existing behavior of kube-proxy.
|
||||
The "Proxy" value is intended to prevent kube-proxy from binding the load balancer's IP address
|
||||
to the node in both ipvs and iptables modes.
|
||||
Consequently, traffic is sent directly to the load balancer and then forwarded to the destination node.
|
||||
The destination setting for forwarded packets varies depending on how the cloud provider's load balancer delivers traffic:
|
||||
|
||||
- If the traffic is delivered to the node then DNATed to the pod, the destination would be set to the node's IP and node port;
|
||||
- If the traffic is delivered directly to the pod, the destination would be set to the pod's IP and port.
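For illustration, a Service of `type: LoadBalancer` whose cloud controller opts out of node binding would carry a status along these lines (the address below is a placeholder):

```yaml
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10   # placeholder load balancer address
        ipMode: Proxy      # kube-proxy will not bind this IP on the node
```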
|
||||
|
||||
## Usage
|
||||
|
||||
Here are the necessary steps to enable this feature:
|
||||
|
||||
- Download the [latest Kubernetes project](https://kubernetes.io/releases/download/) (version `v1.29.0` or later).
|
||||
- Enable the feature gate with the command line flag `--feature-gates=LoadBalancerIPMode=true`
|
||||
on kube-proxy, kube-apiserver, and cloud-controller-manager.
|
||||
- For Services with `type: LoadBalancer`, set `ipMode` to the appropriate value.
|
||||
This step is likely handled by your chosen cloud-controller-manager during the `EnsureLoadBalancer` process.
|
||||
|
||||
## More information
|
||||
|
||||
- Read [Specifying IPMode of load balancer status](/docs/concepts/services-networking/service/#load-balancer-ip-mode).
|
||||
- Read [KEP-1860](https://kep.k8s.io/1860) - [Make Kubernetes aware of the LoadBalancer behaviour](https://github.com/kubernetes/enhancements/tree/b103a6b0992439f996be4314caf3bf7b75652366/keps/sig-network/1860-kube-proxy-IP-node-binding#kep-1860-make-kubernetes-aware-of-the-loadbalancer-behaviour) _(sic)_.
|
||||
|
||||
## Getting involved
|
||||
|
||||
Reach us on [Slack](https://slack.k8s.io/): [#sig-network](https://kubernetes.slack.com/messages/sig-network),
|
||||
or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network).
|
||||
|
||||
## Acknowledgments
|
||||
|
||||
Huge thanks to [@Sh4d1](https://github.com/Sh4d1) for the original KEP and initial implementation code.
|
||||
I took over midway and completed the work. Similarly, immense gratitude to other contributors
|
||||
who have assisted in the design, implementation, and review of this feature (alphabetical order):
|
||||
|
||||
- [@aojea](https://github.com/aojea)
|
||||
- [@danwinship](https://github.com/danwinship)
|
||||
- [@sftim](https://github.com/sftim)
|
||||
- [@tengqm](https://github.com/tengqm)
|
||||
- [@thockin](https://github.com/thockin)
|
||||
- [@wojtek-t](https://github.com/wojtek-t)
|
|
@ -0,0 +1,99 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.29: Single Pod Access Mode for PersistentVolumes Graduates to Stable"
|
||||
date: 2023-12-18
|
||||
slug: read-write-once-pod-access-mode-ga
|
||||
---
|
||||
|
||||
**Author:** Chris Henzie (Google)
|
||||
|
||||
With the release of Kubernetes v1.29, the `ReadWriteOncePod` volume access mode
|
||||
has graduated to general availability: it's part of Kubernetes' stable API. In
|
||||
this blog post, I'll take a closer look at this access mode and what it does.
|
||||
|
||||
## What is `ReadWriteOncePod`?
|
||||
|
||||
`ReadWriteOncePod` is an access mode for
|
||||
[PersistentVolumes](/docs/concepts/storage/persistent-volumes/#persistent-volumes) (PVs)
|
||||
and [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVCs)
|
||||
introduced in Kubernetes v1.22. This access mode enables you to restrict volume
|
||||
access to a single pod in the cluster, ensuring that only one pod can write to
|
||||
the volume at a time. This can be particularly useful for stateful workloads
|
||||
that require single-writer access to storage.
|
||||
|
||||
For more context on access modes and how `ReadWriteOncePod` works read
|
||||
[What are access modes and why are they important?](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#what-are-access-modes-and-why-are-they-important)
|
||||
in the _Introducing Single Pod Access Mode for PersistentVolumes_ article from 2021.
|
||||
|
||||
## How can I start using `ReadWriteOncePod`?
|
||||
|
||||
The `ReadWriteOncePod` volume access mode is available by default in Kubernetes
|
||||
versions v1.27 and beyond. In Kubernetes v1.29 and later, the Kubernetes API
|
||||
always recognizes this access mode.
|
||||
|
||||
Note that `ReadWriteOncePod` is
|
||||
[only supported for CSI volumes](/docs/concepts/storage/persistent-volumes/#access-modes),
|
||||
and before using this feature, you will need to update the following
|
||||
[CSI sidecars](https://kubernetes-csi.github.io/docs/sidecar-containers.html)
|
||||
to these versions or greater:
|
||||
|
||||
- [csi-provisioner:v3.0.0+](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0)
|
||||
- [csi-attacher:v3.3.0+](https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0)
|
||||
- [csi-resizer:v1.3.0+](https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0)
|
||||
|
||||
To start using `ReadWriteOncePod`, you need to create a PVC with the
|
||||
`ReadWriteOncePod` access mode:
|
||||
|
||||
```yaml
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: single-writer-only
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOncePod # Allows only a single pod to access single-writer-only.
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
```
|
||||
|
||||
If your storage plugin supports
|
||||
[Dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/), then
|
||||
new PersistentVolumes will be created with the `ReadWriteOncePod` access mode
|
||||
applied.
|
||||
|
||||
Read [Migrating existing PersistentVolumes](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes)
|
||||
for details on migrating existing volumes to use `ReadWriteOncePod`.
|
||||
|
||||
## How can I learn more?
|
||||
|
||||
Please see the blog posts [alpha](/blog/2021/09/13/read-write-once-pod-access-mode-alpha),
|
||||
[beta](/blog/2023/04/20/read-write-once-pod-access-mode-beta), and
|
||||
[KEP-2485](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode/README.md)
|
||||
for more details on the `ReadWriteOncePod` access mode and motivations for CSI
|
||||
spec changes.
|
||||
|
||||
## How do I get involved?
|
||||
|
||||
The [Kubernetes #csi Slack channel](https://kubernetes.slack.com/messages/csi)
|
||||
and any of the standard
|
||||
[SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact)
|
||||
are great methods to reach out to the SIG Storage and the CSI teams.
|
||||
|
||||
Special thanks to the following people whose thoughtful reviews and feedback helped shape this feature:
|
||||
|
||||
* Abdullah Gharaibeh (ahg-g)
|
||||
* Aldo Culquicondor (alculquicondor)
|
||||
* Antonio Ojea (aojea)
|
||||
* David Eads (deads2k)
|
||||
* Jan Šafránek (jsafrane)
|
||||
* Joe Betz (jpbetz)
|
||||
* Kante Yin (kerthcet)
|
||||
* Michelle Au (msau42)
|
||||
* Tim Bannister (sftim)
|
||||
* Xing Yang (xing-yang)
|
||||
|
||||
If you’re interested in getting involved with the design and development of CSI
|
||||
or any part of the Kubernetes storage system, join the
|
||||
[Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
|
||||
We’re rapidly growing and always welcome new contributors.
|
|
@ -0,0 +1,62 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.29: PodReadyToStartContainers Condition Moves to Beta"
|
||||
date: 2023-12-19
|
||||
slug: pod-ready-to-start-containers-condition-now-in-beta
|
||||
---
|
||||
|
||||
**Authors**: Zefeng Chen (independent), Kevin Hannon (Red Hat)
|
||||
|
||||
|
||||
With the recent release of Kubernetes 1.29, the `PodReadyToStartContainers`
|
||||
[condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions) is
|
||||
available by default.
|
||||
The kubelet manages the value for that condition throughout a Pod's lifecycle,
|
||||
in the status field of a Pod. The kubelet will use the `PodReadyToStartContainers`
|
||||
condition to accurately surface the initialization state of a Pod,
|
||||
from the perspective of Pod sandbox creation and network configuration by a container runtime.
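In practice, once the Pod sandbox is ready you can expect to see a condition roughly like the following in `kubectl get pod <name> -o yaml` output (timestamps elided):

```yaml
status:
  conditions:
    - type: PodReadyToStartContainers
      status: "True"
```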
|
||||
|
||||
## What's the motivation for this feature?
|
||||
|
||||
Cluster administrators did not have a clear and easily accessible way to view the completion of a Pod's sandbox creation
|
||||
and initialization. As of 1.28, the `Initialized` condition in Pods tracks the execution of init containers.
|
||||
However, it has limitations in accurately reflecting the completion of sandbox creation and readiness to start containers for all Pods in a cluster.
|
||||
This distinction is particularly important in multi-tenant clusters where tenants own the Pod specifications, including the set of init containers,
|
||||
while cluster administrators manage storage plugins, networking plugins, and container runtime handlers.
|
||||
Therefore, there is a need for an improved mechanism to provide cluster administrators with a clear and
|
||||
comprehensive view of Pod sandbox creation completion and container readiness.
|
||||
|
||||
## What's the benefit?
|
||||
|
||||
1. Improved Visibility: Cluster administrators gain a clearer and more comprehensive view of Pod sandbox
|
||||
creation completion and container readiness.
|
||||
This enhanced visibility allows them to make better-informed decisions and troubleshoot issues more effectively.
|
||||
2. Metric Collection and Monitoring: Monitoring services can leverage the fields associated with
|
||||
the `PodReadyToStartContainers` condition to report sandbox creation state and latency.
|
||||
Metrics can be collected at per-Pod cardinality or aggregated based on various
|
||||
properties of the Pod, such as `volumes`, `runtimeClassName`, custom annotations for CNI
|
||||
and IPAM plugins or arbitrary labels and annotations, and `storageClassName` of
|
||||
PersistentVolumeClaims.
|
||||
This enables comprehensive monitoring and analysis of Pod readiness across the cluster.
|
||||
3. Enhanced Troubleshooting: With a more accurate representation of Pod sandbox creation and container readiness,
|
||||
cluster administrators can quickly identify and address any issues that may arise during the initialization process.
|
||||
This leads to improved troubleshooting capabilities and reduced downtime.
|
||||
|
||||
### What’s next?
|
||||
|
||||
Due to feedback and adoption, the Kubernetes team promoted `PodReadyToStartContainersCondition` to Beta in 1.29.
|
||||
Your comments will help determine if this condition continues forward to get promoted to GA,
|
||||
so please submit additional feedback on this feature!
|
||||
|
||||
### How can I learn more?
|
||||
|
||||
Please check out the
|
||||
[documentation](/docs/concepts/workloads/pods/pod-lifecycle/) for the
|
||||
`PodReadyToStartContainersCondition` to learn more about it and how it fits in relation to
|
||||
other Pod conditions.
|
||||
|
||||
### How to get involved?
|
||||
|
||||
This feature is driven by the SIG Node community. Please join us to connect with
|
||||
the community and share your ideas and feedback around the above feature and
|
||||
beyond. We look forward to hearing from you!
|
|
@ -0,0 +1,85 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.29: Decoupling taint-manager from node-lifecycle-controller"
|
||||
date: 2023-12-19
|
||||
slug: kubernetes-1-29-taint-eviction-controller
|
||||
---
|
||||
|
||||
**Authors:** Yuan Chen (Apple), Andrea Tosatto (Apple)
|
||||
|
||||
This blog discusses a new feature in Kubernetes 1.29 to improve the handling of taint-based pod eviction.
|
||||
|
||||
## Background
|
||||
|
||||
In Kubernetes 1.29, an improvement has been introduced to enhance the taint-based pod eviction handling on nodes.
|
||||
This blog discusses the changes made to node-lifecycle-controller
|
||||
to separate its responsibilities and improve overall code maintainability.
|
||||
|
||||
## Summary of changes
|
||||
|
||||
node-lifecycle-controller previously combined two independent functions:
|
||||
|
||||
- Adding a pre-defined set of `NoExecute` taints to Nodes, based on the Nodes' conditions.
- Performing pod eviction on Pods subject to `NoExecute` taints.
|
||||
|
||||
With the Kubernetes 1.29 release, the taint-based eviction implementation has been
|
||||
moved out of node-lifecycle-controller into a separate and independent component called taint-eviction-controller.
|
||||
This separation aims to disentangle code, enhance code maintainability,
|
||||
and facilitate future extensions to either component.
|
||||
|
||||
As part of the change, additional metrics were introduced to help you monitor taint-based pod evictions:
|
||||
|
||||
- `pod_deletion_duration_seconds` measures the latency between the time when a taint effect
|
||||
has been activated for the Pod and its deletion via taint-eviction-controller.
|
||||
- `pod_deletions_total` reports the total number of Pods deleted by taint-eviction-controller since its start.
|
||||
|
||||
## How to use the new feature?
|
||||
|
||||
A new feature gate, `SeparateTaintEvictionController`, has been added. The feature is enabled by default as Beta in Kubernetes 1.29.
|
||||
Please refer to the [feature gate document](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
|
||||
|
||||
When this feature is enabled, users can optionally disable taint-based eviction by setting `--controllers=-taint-eviction-controller`
|
||||
in kube-controller-manager.
|
||||
|
||||
To disable the new feature and use the old taint-manager within node-lifecycle-controller, users can set the feature gate `SeparateTaintEvictionController=false`.
|
||||
|
||||
## Use cases
|
||||
|
||||
This new feature will allow cluster administrators to extend and enhance the default
|
||||
taint-eviction-controller and even replace the default taint-eviction-controller with a
|
||||
custom implementation to meet different needs. An example is to better support
|
||||
stateful workloads that use PersistentVolume on local disks.
|
||||
|
||||
## FAQ
|
||||
|
||||
**Does this feature change the existing behavior of taint-based pod evictions?**
|
||||
|
||||
No, the taint-based pod eviction behavior remains unchanged. If the feature gate
|
||||
`SeparateTaintEvictionController` is turned off, the legacy node-lifecycle-controller with taint-manager will continue to be used.
|
||||
|
||||
**Will enabling/using this feature result in an increase in the time taken by any operations covered by existing SLIs/SLOs?**
|
||||
|
||||
No.
|
||||
|
||||
**Will enabling/using this feature result in an increase in resource usage (CPU, RAM, disk, IO, ...)?**
|
||||
|
||||
The increase in resource usage by running a separate `taint-eviction-controller` will be negligible.
|
||||
|
||||
## Learn more
|
||||
|
||||
For more details, refer to the [KEP](http://kep.k8s.io/3902).
|
||||
|
||||
## Acknowledgments
|
||||
|
||||
As with any Kubernetes feature, multiple community members have contributed, from
|
||||
writing the KEP to implementing the new controller and reviewing the KEP and code. Special thanks to:
|
||||
|
||||
- Aldo Culquicondor (@alculquicondor)
|
||||
- Maciej Szulik (@soltysh)
|
||||
- Filip Křepinský (@atiratree)
|
||||
- Han Kang (@logicalhan)
|
||||
- Wei Huang (@Huang-Wei)
|
||||
- Sergey Kanzhelev (@SergeyKanzhelev)
|
||||
- Ravi Gudimetla (@ravisantoshgudimetla)
|
||||
- Deep Debroy (@ddebroy)
|
|
@ -0,0 +1,147 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Contextual logging in Kubernetes 1.29: Better troubleshooting and enhanced logging"
|
||||
slug: contextual-logging-in-kubernetes-1-29
|
||||
date: 2023-12-20T09:30:00-08:00
|
||||
canonicalUrl: https://www.kubernetes.dev/blog/2023/12/20/contextual-logging/
|
||||
---
|
||||
|
||||
**Authors**: [Mengjiao Liu](https://github.com/mengjiao-liu/) (DaoCloud), [Patrick Ohly](https://github.com/pohly) (Intel)
|
||||
|
||||
On behalf of the [Structured Logging Working Group](https://github.com/kubernetes/community/blob/master/wg-structured-logging/README.md)
|
||||
and [SIG Instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation#readme),
|
||||
we are pleased to announce that the contextual logging feature
|
||||
introduced in Kubernetes v1.24 has now been successfully migrated to
|
||||
two components (kube-scheduler and kube-controller-manager)
|
||||
as well as some directories. This feature aims to provide more useful logs
|
||||
for better troubleshooting of Kubernetes and to empower developers to enhance Kubernetes.
|
||||
|
||||
## What is contextual logging?
|
||||
|
||||
[Contextual logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging)
|
||||
is based on the [go-logr](https://github.com/go-logr/logr#a-minimal-logging-api-for-go) API.
|
||||
The key idea is that libraries are passed a logger instance by their caller
|
||||
and use that for logging instead of accessing a global logger.
|
||||
The binary decides the logging implementation, not the libraries.
|
||||
The go-logr API is designed around structured logging and supports attaching
|
||||
additional information to a logger.
|
||||
|
||||
This enables additional use cases:
|
||||
|
||||
- The caller can attach additional information to a logger:
|
||||
- [WithName](<https://pkg.go.dev/github.com/go-logr/logr#Logger.WithName>) adds a "logger" key with the names concatenated by a dot as value
|
||||
- [WithValues](<https://pkg.go.dev/github.com/go-logr/logr#Logger.WithValues>) adds key/value pairs
|
||||
|
||||
When passing this extended logger into a function, and the function uses it
|
||||
instead of the global logger, the additional information is then included
|
||||
in all log entries, without having to modify the code that generates the log entries.
|
||||
This is useful in highly parallel applications where it can become hard to identify
|
||||
all log entries for a certain operation, because the output from different operations gets interleaved.
|
||||
|
||||
- When running unit tests, log output can be associated with the current test.
|
||||
Then, when a test fails, only the log output of the failed test gets shown by go test.
|
||||
That output can also be more verbose by default because it will not get shown for successful tests.
|
||||
Tests can be run in parallel without interleaving their output.
|
||||
|
||||
One of the design decisions for contextual logging was to allow attaching a logger as value to a `context.Context`.
|
||||
Since the logger encapsulates all aspects of the intended logging for the call,
|
||||
it is *part* of the context, and not just *using* it. A practical advantage is that many APIs
|
||||
already have a `ctx` parameter or can add one. This provides additional advantages, like being able to
|
||||
get rid of `context.TODO()` calls inside the functions.
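
As a minimal, self-contained sketch of the API described above (outside of Kubernetes code, using the go-logr reference backend `stdr`; module paths, logger names, and the pod/node values are illustrative):

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/go-logr/logr"
	"github.com/go-logr/stdr"
)

// processPod logs through whatever logger its caller put into the context,
// instead of relying on a global logger.
func processPod(ctx context.Context, podName string) {
	logger := logr.FromContextOrDiscard(ctx)
	logger.Info("processing pod", "pod", podName)
}

func main() {
	// The binary, not the library, decides the logging implementation.
	logger := stdr.New(log.New(os.Stderr, "", log.LstdFlags))

	// Attach extra information once; it is included in every entry
	// written through this logger further down the call chain.
	logger = logger.WithName("example").WithValues("node", "node-1")

	// Store the logger as part of the context and pass it on.
	ctx := logr.NewContext(context.Background(), logger)
	processPod(ctx, "kube-system/coredns-12345")
}
```

With this pattern, the `example` logger name and the `node` key/value appear in the log entry written by `processPod`, even though `processPod` itself knows nothing about them.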
|
||||
|
||||
## How to use it
|
||||
|
||||
The contextual logging feature is alpha starting from Kubernetes v1.24,
|
||||
so it requires the `ContextualLogging` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
|
||||
If you want to test the feature while it is alpha, you need to enable this feature gate
|
||||
on the `kube-controller-manager` and the `kube-scheduler`.
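
As a rough sketch (the exact mechanism depends on how your cluster components are deployed), that means adding the feature gate to both invocations:

```shell
kube-scheduler --feature-gates=ContextualLogging=true ...
kube-controller-manager --feature-gates=ContextualLogging=true ...
```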
|
||||
|
||||
For the `kube-scheduler`, there is one thing to note: in addition to enabling
|
||||
the `ContextualLogging` feature gate, instrumentation also depends on log verbosity.
|
||||
To avoid slowing down the scheduler with the contextual logging instrumentation added for 1.29,
|
||||
it is important to choose carefully when to add additional information:
|
||||
- At `-v3` or lower, only `WithValues("pod")` is used once per scheduling cycle.
|
||||
This has the intended effect that all log messages for the cycle include the pod information.
|
||||
Once contextual logging is GA, "pod" key/value pairs can be removed from all log calls.
|
||||
- At `-v4` or higher, richer log entries get produced where `WithValues` is also used for the node (when applicable)
|
||||
and `WithName` is used for the current operation and plugin.
|
||||
|
||||
Here is an example that demonstrates the effect:
|
||||
> I1113 08:43:37.029524 87144 default_binder.go:53] "Attempting to bind pod to node" **logger="Bind.DefaultBinder"** **pod**="kube-system/coredns-69cbfb9798-ms4pq" **node**="127.0.0.1"
|
||||
|
||||
The immediate benefit is that the operation and plugin name are visible in `logger`.
|
||||
`pod` and `node` are already logged as parameters in individual log calls in `kube-scheduler` code.
|
||||
Once contextual logging is supported by more packages outside of `kube-scheduler`,
|
||||
they will also be visible there (for example, client-go). Once it is GA,
|
||||
log calls can be simplified to avoid repeating those values.
|
||||
|
||||
In `kube-controller-manager`, `WithName` is used to add the user-visible controller name to log output,
|
||||
for example:
|
||||
|
||||
> I1113 08:43:29.284360 87141 graph_builder.go:285] "garbage controller monitor not synced: no monitors" **logger="garbage-collector-controller"**
|
||||
|
||||
The `logger="garbage-collector-controller"` was added by the `kube-controller-manager` core
|
||||
when instantiating that controller and appears in all of its log entries - at least as long as the code
|
||||
that it calls supports contextual logging. Further work is needed to convert shared packages like client-go.
|
||||
|
||||
## Performance impact
|
||||
|
||||
Supporting contextual logging in a package, i.e. accepting a logger from a caller, is cheap.
|
||||
No performance impact was observed for the `kube-scheduler`. As noted above,
|
||||
adding `WithName` and `WithValues` needs to be done more carefully.
|
||||
|
||||
In Kubernetes 1.29, enabling contextual logging at production verbosity (`-v3` or lower)
|
||||
caused no measurable slowdown for the `kube-scheduler` and is not expected for the `kube-controller-manager` either.
|
||||
At debug levels, a 28% slowdown for some test cases is still reasonable given that the resulting logs make debugging easier.
|
||||
For details, see the [discussion around promoting the feature to beta](https://github.com/kubernetes/enhancements/pull/4219#issuecomment-1807811995).
|
||||
|
||||
## Impact on downstream users
|
||||
Log output is not part of the Kubernetes API and changes regularly in each release,
|
||||
whether it is because developers work on the code or because of the ongoing conversion
|
||||
to structured and contextual logging.
|
||||
|
||||
If downstream users have dependencies on specific logs,
|
||||
they need to be aware of how this change affects them.
|
||||
|
||||
## Further reading
|
||||
|
||||
- Read the [Contextual Logging in Kubernetes 1.24](https://www.kubernetes.dev/blog/2022/05/25/contextual-logging/) article.
|
||||
- Read the [KEP-3077: contextual logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging).
|
||||
|
||||
## Get involved
|
||||
|
||||
If you're interested in getting involved, we always welcome new contributors to join us.
|
||||
Contextual logging provides a fantastic opportunity for you to contribute to Kubernetes development and make a meaningful impact.
|
||||
By joining [Structured Logging WG](https://github.com/kubernetes/community/tree/master/wg-structured-logging),
|
||||
you can actively participate in the development of Kubernetes and make your first contribution.
|
||||
It's a great way to learn and engage with the community while gaining valuable experience.
|
||||
|
||||
We encourage you to explore the repository and familiarize yourself with the ongoing discussions and projects.
|
||||
It's a collaborative environment where you can exchange ideas, ask questions, and work together with other contributors.
|
||||
|
||||
If you have any questions or need guidance, don't hesitate to reach out to us
|
||||
and you can do so on our [public Slack channel](https://kubernetes.slack.com/messages/wg-structured-logging).
|
||||
If you're not already part of that Slack workspace, you can visit [https://slack.k8s.io/](https://slack.k8s.io/)
|
||||
for an invitation.
|
||||
|
||||
We would like to express our gratitude to all the contributors who provided excellent reviews,
|
||||
shared valuable insights, and assisted in the implementation of this feature (in alphabetical order):
|
||||
|
||||
- Aldo Culquicondor ([alculquicondor](https://github.com/alculquicondor))
|
||||
- Andy Goldstein ([ncdc](https://github.com/ncdc))
|
||||
- Feruzjon Muyassarov ([fmuyassarov](https://github.com/fmuyassarov))
|
||||
- Freddie ([freddie400](https://github.com/freddie400))
|
||||
- JUN YANG ([yangjunmyfm192085](https://github.com/yangjunmyfm192085))
|
||||
- Kante Yin ([kerthcet](https://github.com/kerthcet))
|
||||
- Kiki ([carlory](https://github.com/carlory))
|
||||
- Lucas Severo Alves ([knelasevero](https://github.com/knelasevero))
|
||||
- Maciej Szulik ([soltysh](https://github.com/soltysh))
|
||||
- Mengjiao Liu ([mengjiao-liu](https://github.com/mengjiao-liu))
|
||||
- Naman Lakhwani ([Namanl2001](https://github.com/Namanl2001))
|
||||
- Oksana Baranova ([oxxenix](https://github.com/oxxenix))
|
||||
- Patrick Ohly ([pohly](https://github.com/pohly))
|
||||
- songxiao-wang87 ([songxiao-wang87](https://github.com/songxiao-wang87))
|
||||
- Tim Allclair ([tallclair](https://github.com/tallclair))
|
||||
- ZhangYu ([Octopusjust](https://github.com/Octopusjust))
|
||||
- Ziqi Zhao ([fatsheep9146](https://github.com/fatsheep9146))
|
||||
- Zac ([249043822](https://github.com/249043822))
|
|
@ -0,0 +1,173 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Spotlight on SIG Release (Release Team Subproject)"
|
||||
date: 2024-01-15
|
||||
slug: sig-release-spotlight-2023
|
||||
canonicalUrl: https://www.kubernetes.dev/blog/2024/01/15/sig-release-spotlight-2023/
|
||||
---
|
||||
|
||||
**Author:** Nitish Kumar
|
||||
|
||||
The Release Special Interest Group (SIG Release) is where Kubernetes sharpens its blade,
|
||||
shipping cutting-edge features and bug fixes every 4 months. Have you ever considered how a project
|
||||
as big as Kubernetes manages its release timeline so efficiently, or what
|
||||
the internal workings of the Release Team look like? If you're curious about these questions or
|
||||
want to know more and get involved with the work SIG Release does, read on!
|
||||
|
||||
SIG Release plays a crucial role in the development and evolution of Kubernetes.
|
||||
Its primary responsibility is to manage the release process of new versions of Kubernetes.
|
||||
It operates on a regular release cycle, [typically every three to four months](https://www.kubernetes.dev/resources/release/).
|
||||
During this cycle, the Kubernetes Release Team works closely with other SIGs and contributors
|
||||
to ensure a smooth and well-coordinated release. This includes planning the release schedule, setting deadlines for code freeze and testing
|
||||
phases, as well as creating release artefacts like binaries, documentation, and release notes.
|
||||
|
||||
Before you read further, it is important to note that there are two subprojects under SIG
|
||||
Release - _Release Engineering_ and _Release Team_.
|
||||
|
||||
In this blog post, [Nitish Kumar](https://twitter.com/nitishfy) interviews Verónica
|
||||
López (PlanetScale), Technical Lead of SIG Release, with the spotlight on the Release Team
|
||||
subproject, what the release process looks like, and ways to get involved.
|
||||
|
||||
1. **What is the typical release process for a new version of Kubernetes, from initial planning
|
||||
to the final release? Are there any specific methodologies and tools that you use to ensure a smooth release?**
|
||||
|
||||
The release process for a new Kubernetes version is a well-structured and community-driven
|
||||
effort. There are no specific methodologies or
|
||||
tools as such that we follow, except a calendar with a series of steps to keep things organised.
|
||||
The complete release process looks like this:
|
||||
|
||||
- **Release Team Onboarding:** We start with the formation of a Release Team, which includes
|
||||
volunteers from the Kubernetes community who will be responsible for managing different
|
||||
components of the new release. This is typically done before the previous release is about to
|
||||
wrap up. Once the team is formed, new members are onboarded while the Release Team Lead and
|
||||
the Branch Manager propose a calendar for the usual deliverables. As an example, you can take a look
|
||||
at [the v1.29 team formation issue](https://github.com/kubernetes/sig-release/issues/2307) created in the SIG Release
|
||||
repository. For a contributor to be part of the Release Team, they typically go through the
|
||||
Release Shadow program, but that's not the only way to get involved with SIG Release.
|
||||
|
||||
- **Beginning Phase:** In the initial weeks of each release cycle, SIG Release diligently
|
||||
tracks the progress of new features and enhancements outlined in Kubernetes Enhancement
|
||||
Proposals (KEPs). While not all of these features are entirely new, they often commence
|
||||
their journey in the alpha phase, subsequently advancing to the beta stage, and ultimately
|
||||
attaining the status of stability.
|
||||
|
||||
- **Feature Maturation Phase:** We usually cut a couple of Alpha releases, containing new
|
||||
features in an experimental state, to gather feedback from the community, followed by a
|
||||
couple of Beta releases, where features are more stable and the focus is on fixing bugs. Feedback
|
||||
from users is critical at this stage, to the point where sometimes we need to cut an
|
||||
additional Beta release to address bugs or other concerns that may arise during this phase. Once
|
||||
this is cleared, we cut a _release candidate_ (RC) before the actual release. Throughout
|
||||
the cycle, efforts are made to update and improve documentation, including release notes
|
||||
and user guides, a process that, in my opinion, deserves its own post.
|
||||
|
||||
- **Stabilisation Phase:** A few weeks before the new release, we implement a _code freeze_, and
|
||||
no new features are allowed after this point: this allows the focus to shift towards testing
|
||||
and stabilisation. In parallel to the main release, we keep cutting monthly patches of old,
|
||||
officially supported versions of Kubernetes, so you could say that the lifecycle of a Kubernetes
|
||||
version extends for several months afterwards. Throughout the complete release cycle, efforts
|
||||
are made to update and improve documentation, including release notes and user guides, a
|
||||
process that, in our opinion, deserves its own post.
|
||||
|
||||
![Release team onboarding; beginning phase; stabilization phase; feature maturation phase](sig-release-overview.png)
|
||||
|
||||
|
||||
2. **How do you handle the balance between stability and introducing new features in each
|
||||
release? What criteria are used to determine which features make it into a release?**
|
||||
|
||||
It’s a never-ending mission; however, we think
|
||||
that the key is in respecting our process and guidelines. Our guidelines are the result of
|
||||
hours of discussions and feedback from dozens of members of the community who bring a wealth of knowledge and experience to the project. If we
|
||||
didn’t have strict guidelines, we would keep having the same discussions over and over again,
|
||||
instead of using our time for more productive topics that need our attention. All the
|
||||
critical exceptions require consensus from most of the team members, so we can ensure quality.
|
||||
|
||||
The process of deciding what makes it into a release starts way before the Release Team
|
||||
takes over the workflows. Each individual SIG along with the most experienced contributors
|
||||
gets to decide whether they’d like to include a feature or change, so the planning and ultimate
|
||||
approval usually belongs to them. Then, the Release Team makes sure those contributions meet
|
||||
the requirements of documentation, testing, backwards compatibility, among others, before
|
||||
officially allowing them in. A similar process happens with cherry-picks for the monthly patch
|
||||
releases, where we have strict policies about not accepting PRs that would require a full KEP,
|
||||
or fixes that don’t include all the affected branches.
|
||||
|
||||
3. **What are some of the most significant challenges you’ve encountered while developing
|
||||
and releasing Kubernetes? How have you overcome these challenges?**
|
||||
|
||||
Every release cycle brings its own array of
|
||||
challenges. It might involve tackling last-minute concerns like newly discovered Common Vulnerabilities and Exposures (CVEs),
|
||||
resolving bugs within our internal tools, or addressing unexpected regressions caused by
|
||||
features from previous releases. Another obstacle we often face is that, although our
|
||||
team is substantial, most of us contribute on a volunteer basis. Sometimes it can feel like
|
||||
we’re a bit understaffed; however, we always manage to get organised and make it work.
|
||||
|
||||
4. **As a new contributor, what should be my ideal path to get involved with SIG Release? In
|
||||
a community where everyone is busy with their own tasks, how can I find the right set of tasks to contribute effectively to it?**
|
||||
|
||||
Everyone's way of getting involved within the Open Source community is different. SIG Release
|
||||
is a self-serving team, meaning that we write our own tools to be able to ship releases. We
|
||||
collaborate a lot with other SIGs, such as [SIG K8s Infra](https://github.com/kubernetes/community/blob/master/sig-k8s-infra/README.md), but all the tools that we use need to be
|
||||
tailor-made for our massive technical needs, while reducing costs. This means that we are
|
||||
constantly looking for volunteers who’d like to help with different types of projects, beyond “just” cutting a release.
|
||||
|
||||
Our current project requires a mix of skills like [Go](https://go.dev/) programming,
|
||||
understanding Kubernetes internals, Linux packaging, supply chain security, technical
|
||||
writing, and general open-source project maintenance. This skill set is always evolving as our project grows.
|
||||
|
||||
For an ideal path, this is what we suggest:
|
||||
|
||||
- Get yourself familiar with the code, including how features are managed, the release calendar, and the overall structure of the Release Team.
|
||||
- Join the Kubernetes community communication channels, such as [Slack](https://communityinviter.com/apps/kubernetes/community) (#sig-release), where we are particularly active.
|
||||
- Join the [SIG Release weekly meetings](https://github.com/kubernetes/community/tree/master/sig-release#meetings)
|
||||
which are open to all in the community. Participating in these meetings is a great way to learn about ongoing and future projects that
|
||||
you might find relevant for your skillset and interests.
|
||||
|
||||
Remember, every experienced contributor was once in your shoes, and the community is often more than willing to guide and support newcomers.
|
||||
Don't hesitate to ask questions, engage in discussions, and take small steps to contribute.
|
||||
![sig-release-questions](sig-release-meetings.png)
|
||||
|
||||
5. **What is the Release Shadow Program and how is it different from other shadow programs included in various other SIGs?**
|
||||
|
||||
The Release Shadow Program offers a chance for interested individuals to shadow experienced
|
||||
members of the Release Team throughout a Kubernetes release cycle. This is a unique chance to see all the hard work that a
|
||||
Kubernetes release requires across sub-teams. A lot of people think that all we do is cut a release every three months, but that’s just the
|
||||
tip of the iceberg.
|
||||
|
||||
Our program typically aligns with a specific Kubernetes release cycle, which has a
|
||||
predictable timeline of approximately three months. While this program doesn’t involve writing new Kubernetes features, it still
|
||||
requires a high sense of responsibility since the Release Team is the last step between a new release and thousands of contributors, so it’s a
|
||||
great opportunity to learn a lot about modern software development cycles at an accelerated pace.
|
||||
|
||||
6. **What are the qualifications that you generally look for in a person to volunteer as a release shadow/release lead for the next Kubernetes release?**
|
||||
|
||||
While all the roles require some degree of technical ability, some require more hands-on
|
||||
experience with Go and familiarity with the Kubernetes API, while others require people who
|
||||
are good at communicating technical content in a clear and concise way. It’s important to mention that we value enthusiasm and commitment over
|
||||
technical expertise from day 1. If you have the right attitude and show us that you enjoy working with Kubernetes and/or release
|
||||
engineering, even if it’s only through a personal project that you put together in your spare time, the team will make sure to guide
|
||||
you. Being a self-starter and not being afraid to ask questions can take you a long way in our team.
|
||||
|
||||
7. **What will you suggest to someone who has got rejected from being a part of the Release Shadow Program several times?**
|
||||
|
||||
Keep applying.
|
||||
|
||||
With every release cycle we have had an exponential growth in the number of applicants,
|
||||
so it gets harder to be selected, which can be discouraging, but please know that getting rejected doesn’t mean you’re not talented. It’s
|
||||
just practically impossible to accept every applicant; however, here's an alternative that we suggest:
|
||||
|
||||
*Start attending our weekly Kubernetes SIG Release meetings to introduce yourself and get familiar with the team and the projects we are working on.*
|
||||
|
||||
The Release Team is one of the ways to join SIG Release, but we are always looking for more hands to help. Again, in addition to certain
|
||||
technical ability, the most sought-after trait that we look for is people we can trust, and that requires time.
|
||||
![sig-release-motivation](sig-release-motivation.png)
|
||||
|
||||
8. **Can you discuss any ongoing initiatives or upcoming features that the release team is particularly excited about for Kubernetes v1.28? How do these advancements align with the long-term vision of Kubernetes?**
|
||||
|
||||
We are excited about finally publishing Kubernetes packages on community infrastructure. It has been something that we have been wanting to do for a few years now, but it’s a project
|
||||
with many technical implications that must be in place before doing the transition. Once that’s done, we’ll be able to increase our productivity and take control of the entire workflow.
|
||||
|
||||
## Final thoughts
|
||||
|
||||
Well, this conversation ends here but not the learning. I hope this interview has given you some idea about what SIG Release does and how to
|
||||
get started in helping out. It is important to mention again that this article covers the first subproject under SIG Release, the Release Team.
|
||||
In the next Spotlight blog on SIG Release, we will provide a spotlight on the Release Engineering subproject, what it does and how to
|
||||
get involved. Finally, you can go through the [SIG Release charter](https://github.com/kubernetes/community/tree/master/sig-release) to get a more in-depth understanding of how SIG Release operates.
|
Binary file not shown.
After Width: | Height: | Size: 208 KiB |
Binary file not shown.
After Width: | Height: | Size: 175 KiB |
Binary file not shown.
After Width: | Height: | Size: 259 KiB |
|
@ -18,7 +18,7 @@ case_study_details:
|
|||
|
||||
<h2>Challenge</h2>
|
||||
|
||||
<p>After moving from <a href="https://www.rackspace.com/">Rackspace</a> to <a href="https://aws.amazon.com/">AWS</a> in 2015, <a href="https://vsco.co/">VSCO</a> began building <a href="https://nodejs.org/en/">Node.js</a> and <a href="https://golang.org/">Go</a> microservices in addition to running its <a href="http://php.net/">PHP monolith</a>. The team containerized the microservices using <a href="https://www.docker.com/">Docker</a>, but "they were all in separate groups of <a href="https://aws.amazon.com/ec2/">EC2</a> instances that were dedicated per service," says Melinda Lu, Engineering Manager for the Machine Learning Team. Adds Naveen Gattu, Senior Software Engineer on the Community Team: "That yielded a lot of wasted resources. We started looking for a way to consolidate and be more efficient in the AWS EC2 instances."</p>
|
||||
<p>After moving from <a href="https://www.rackspace.com/">Rackspace</a> to <a href="https://aws.amazon.com/">AWS</a> in 2015, <a href="https://vsco.co/">VSCO</a> began building <a href="https://nodejs.org/en/">Node.js</a> and <a href="https://go.dev/">Go</a> microservices in addition to running its <a href="http://php.net/">PHP monolith</a>. The team containerized the microservices using <a href="https://www.docker.com/">Docker</a>, but "they were all in separate groups of <a href="https://aws.amazon.com/ec2/">EC2</a> instances that were dedicated per service," says Melinda Lu, Engineering Manager for the Machine Learning Team. Adds Naveen Gattu, Senior Software Engineer on the Community Team: "That yielded a lot of wasted resources. We started looking for a way to consolidate and be more efficient in the AWS EC2 instances."</p>
|
||||
|
||||
<h2>Solution</h2>
|
||||
|
||||
|
|
|
@ -122,9 +122,9 @@ community_styles_migrated: true
|
|||
|
||||
<div id="twitter" class="community-resource">
|
||||
<a href="https://twitter.com/kubernetesio">
|
||||
<img src="/images/community/twitter.png" alt="Twitter">
|
||||
<img src="/images/community/x-org.png" alt="𝕏.org">
|
||||
</a>
|
||||
<a href="https://twitter.com/kubernetesio">Twitter ▶</a>
|
||||
<a href="https://twitter.com/kubernetesio">𝕏 ▶</a>
|
||||
<p><em>#kubernetesio</em></p>
|
||||
<p>Real-time announcements of blog posts, events, news, ideas.</p>
|
||||
</div>
|
||||
|
|
|
@ -165,6 +165,6 @@ controller does.
|
|||
* Discover some of the basic [Kubernetes objects](/docs/concepts/overview/working-with-objects/)
|
||||
* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
|
||||
* If you want to write your own controller, see
|
||||
[Extension Patterns](/docs/concepts/extend-kubernetes/#extension-patterns)
|
||||
in Extending Kubernetes.
|
||||
[Kubernetes extension patterns](/docs/concepts/extend-kubernetes/#extension-patterns)
|
||||
and the [sample-controller](https://github.com/kubernetes/sample-controller) repository.
|
||||
|
||||
|
|
|
@ -111,7 +111,7 @@ to override this behaviour, see [Delete owner objects and orphan dependents](/do
|
|||
## Garbage collection of unused containers and images {#containers-images}
|
||||
|
||||
The {{<glossary_tooltip text="kubelet" term_id="kubelet">}} performs garbage
|
||||
collection on unused images every five minutes and on unused containers every
|
||||
collection on unused images every two minutes and on unused containers every
|
||||
minute. You should avoid using external garbage collection tools, as these can
|
||||
break the kubelet behavior and remove containers that should exist.
|
||||
|
||||
|
@ -137,6 +137,20 @@ collection, which deletes images in order based on the last time they were used,
|
|||
starting with the oldest first. The kubelet deletes images
|
||||
until disk usage reaches the `LowThresholdPercent` value.
|
||||
|
||||
#### Garbage collection for unused container images {#image-maximum-age-gc}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
As an alpha feature, you can specify the maximum time a local image can be unused for,
|
||||
regardless of disk usage. This is a kubelet setting that you configure for each node.
|
||||
|
||||
To configure the setting, enable the `ImageMaximumGCAge`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the kubelet,
|
||||
and also set a value for the `ImageMaximumGCAge` field in the kubelet configuration file.
|
||||
|
||||
The value is specified as a Kubernetes _duration_; for example, you can set the configuration
|
||||
field to `3d12h`, which means 3 days and 12 hours.
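
For illustration, a kubelet configuration file using this could look roughly like the following; treat the field names as a sketch and check the `KubeletConfiguration` reference for your Kubernetes version:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ImageMaximumGCAge: true
# Unused images older than this may be garbage collected regardless of disk usage.
imageMaximumGCAge: "3d12h"
```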
|
||||
|
||||
### Container garbage collection {#container-image-garbage-collection}
|
||||
|
||||
The kubelet garbage collects unused containers based on the following variables,
|
||||
|
@ -178,4 +192,4 @@ configure garbage collection:
|
|||
|
||||
* Learn more about [ownership of Kubernetes objects](/docs/concepts/overview/working-with-objects/owners-dependents/).
|
||||
* Learn more about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
|
||||
* Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) that cleans up finished Jobs.
|
||||
* Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) that cleans up finished Jobs.
|
|
@ -280,7 +280,7 @@ If you want to explicitly reserve resources for non-Pod processes, see
|
|||
|
||||
## Node topology
|
||||
|
||||
{{< feature-state state="beta" for_k8s_version="v1.18" >}}
|
||||
{{< feature-state state="stable" for_k8s_version="v1.27" >}}
|
||||
|
||||
If you have enabled the `TopologyManager`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then
|
||||
|
|
|
@ -79,6 +79,9 @@ installation instructions. The list does not try to be exhaustive.
|
|||
Pods and non-Kubernetes environments with visibility and security monitoring.
|
||||
* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod
|
||||
networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API.
|
||||
* [Spiderpool](https://github.com/spidernet-io/spiderpool) is an underlay and RDMA
|
||||
networking solution for Kubernetes. Spiderpool is supported on bare metal, virtual machines,
|
||||
and public cloud environments.
|
||||
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)
|
||||
provides networking and network policy, will carry on working on both sides
|
||||
of a network partition, and does not require an external database.
|
||||
|
|
|
@ -7,7 +7,7 @@ weight: 110
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state state="beta" for_k8s_version="v1.20" >}}
|
||||
{{< feature-state state="stable" for_k8s_version="v1.29" >}}
|
||||
|
||||
Controlling the behavior of the Kubernetes API server in an overload situation
|
||||
is a key task for cluster administrators. The {{< glossary_tooltip
|
||||
|
@ -45,30 +45,27 @@ are not subject to the `--max-requests-inflight` limit.
|
|||
|
||||
## Enabling/Disabling API Priority and Fairness
|
||||
|
||||
The API Priority and Fairness feature is controlled by a feature gate
|
||||
and is enabled by default. See [Feature
|
||||
Gates](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
for a general explanation of feature gates and how to enable and
|
||||
disable them. The name of the feature gate for APF is
|
||||
"APIPriorityAndFairness". This feature also involves an {{<
|
||||
glossary_tooltip term_id="api-group" text="API Group" >}} with: (a) a
|
||||
`v1alpha1` version and a `v1beta1` version, disabled by default, and
|
||||
(b) `v1beta2` and `v1beta3` versions, enabled by default. You can
|
||||
disable the feature gate and API group beta versions by adding the
|
||||
The API Priority and Fairness feature is controlled by a command-line flag
|
||||
and is enabled by default. See
|
||||
[Options](/docs/reference/command-line-tools-reference/kube-apiserver/#options)
|
||||
for a general explanation of the available kube-apiserver command-line
|
||||
options and how to enable and disable them. The name of the
|
||||
command-line option for APF is "--enable-priority-and-fairness". This feature
|
||||
also involves an {{<glossary_tooltip term_id="api-group" text="API Group" >}}
|
||||
with: (a) a stable `v1` version, introduced in 1.29, and
|
||||
enabled by default (b) a `v1beta3` version, enabled by default, and
|
||||
deprecated in v1.29. You can
|
||||
disable the API group beta version `v1beta3` by adding the
|
||||
following command-line flags to your `kube-apiserver` invocation:
|
||||
|
||||
```shell
|
||||
kube-apiserver \
|
||||
--feature-gates=APIPriorityAndFairness=false \
|
||||
--runtime-config=flowcontrol.apiserver.k8s.io/v1beta2=false,flowcontrol.apiserver.k8s.io/v1beta3=false \
|
||||
--runtime-config=flowcontrol.apiserver.k8s.io/v1beta3=false \
|
||||
# …and other flags as usual
|
||||
```
|
||||
|
||||
Alternatively, you can enable the v1alpha1 and v1beta1 versions of the API group
|
||||
with `--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true,flowcontrol.apiserver.k8s.io/v1beta1=true`.
|
||||
|
||||
The command-line flag `--enable-priority-and-fairness=false` will disable the
|
||||
API Priority and Fairness feature, even if other flags have enabled it.
|
||||
API Priority and Fairness feature.
|
||||
|
||||
## Concepts
|
||||
|
||||
|
@ -178,14 +175,12 @@ server.
|
|||
## Resources
|
||||
|
||||
The flow control API involves two kinds of resources.
|
||||
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1beta2-flowcontrol-apiserver-k8s-io)
|
||||
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1-flowcontrol-apiserver-k8s-io)
|
||||
define the available priority levels, the share of the available concurrency
|
||||
budget that each can handle, and allow for fine-tuning queuing behavior.
|
||||
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1beta2-flowcontrol-apiserver-k8s-io)
|
||||
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1-flowcontrol-apiserver-k8s-io)
|
||||
are used to classify individual inbound requests, matching each to a
|
||||
single PriorityLevelConfiguration. There is also a `v1alpha1` version
|
||||
of the same API group, and it has the same Kinds with the same syntax and
|
||||
semantics.
|
||||
single PriorityLevelConfiguration.
|
||||
|
||||
### PriorityLevelConfiguration
|
||||
|
||||
|
|
|
@ -32,6 +32,34 @@ different approach.
|
|||
|
||||
To learn about the Kubernetes networking model, see [here](/docs/concepts/services-networking/).
|
||||
|
||||
## Kubernetes IP address ranges
|
||||
|
||||
Kubernetes clusters need to allocate non-overlapping IP addresses for Pods, Services and Nodes,
|
||||
from a range of available addresses configured in the following components:
|
||||
|
||||
- The network plugin is configured to assign IP addresses to Pods.
|
||||
- The kube-apiserver is configured to assign IP addresses to Services.
|
||||
- The kubelet or the cloud-controller-manager is configured to assign IP addresses to Nodes.
|
||||
|
||||
{{< figure src="/docs/images/kubernetes-cluster-network.svg" alt="A figure illustrating the different network ranges in a kubernetes cluster" class="diagram-medium" >}}
|
||||
|
||||
## Cluster networking types {#cluster-network-ipfamilies}
|
||||
|
||||
Depending on the IP families configured, Kubernetes clusters can be categorized into:
|
||||
|
||||
- IPv4 only: The network plugin, kube-apiserver and kubelet/cloud-controller-manager are configured to assign only IPv4 addresses.
|
||||
- IPv6 only: The network plugin, kube-apiserver and kubelet/cloud-controller-manager are configured to assign only IPv6 addresses.
|
||||
- IPv4/IPv6 or IPv6/IPv4 [dual-stack](/docs/concepts/services-networking/dual-stack/):
|
||||
- The network plugin is configured to assign IPv4 and IPv6 addresses.
|
||||
- The kube-apiserver is configured to assign IPv4 and IPv6 addresses.
|
||||
- The kubelet or cloud-controller-manager is configured to assign IPv4 and IPv6 addresses.
|
||||
- All components must agree on the configured primary IP family.
|
||||
|
||||
Kubernetes clusters only consider the IP families present on the Pods, Services and Nodes objects,
|
||||
independently of the existing IPs of the represented objects. For example, a server or a pod can have multiple
|
||||
IP addresses on its interfaces, but only the IP addresses in `node.status.addresses` or `pod.status.ips` are
|
||||
considered for implementing the Kubernetes network model and defining the type of the cluster.
|
||||
|
||||
## How to implement the Kubernetes network model
|
||||
|
||||
The network model is implemented by the container runtime on each node. The most common container
|
||||
|
|
|
@ -202,10 +202,23 @@ Here is an example:
|
|||
--allow-label-value number_count_metric,odd_number='1,3,5', number_count_metric,even_number='2,4,6', date_gauge_metric,weekend='Saturday,Sunday'
|
||||
```
|
||||
|
||||
In addition to specifying this from the CLI, this can also be done within a configuration file. You
|
||||
can specify the path to that configuration file using the `--allow-metric-labels-manifest` command
|
||||
line argument to a component. Here's an example of the contents of that configuration file:
|
||||
|
||||
```yaml
|
||||
allow-list:
|
||||
- "metric1,label2": "v1,v2,v3"
|
||||
- "metric2,label1": "v1,v2,v3"
|
||||
```
|
||||
|
||||
Additionally, the `cardinality_enforcement_unexpected_categorizations_total` meta-metric records the
|
||||
count of unexpected categorizations during cardinality enforcement, that is, whenever a label value
|
||||
is encountered that is not allowed with respect to the allow-list constraints.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
|
||||
for metrics
|
||||
* See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
|
||||
* Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
|
||||
|
||||
* Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
|
|
@ -116,8 +116,13 @@ runs on a single-core, dual-core, or 48-core machine.
|
|||
|
||||
{{< note >}}
|
||||
Kubernetes doesn't allow you to specify CPU resources with a precision finer than
|
||||
`1m`. Because of this, it's useful to specify CPU units less than `1.0` or `1000m` using
|
||||
the milliCPU form; for example, `5m` rather than `0.005`.
|
||||
`1m` or `0.001` CPU. To avoid accidentally using an invalid CPU quantity, it's useful to specify CPU units using the milliCPU form
|
||||
instead of the decimal form when using less than 1 CPU unit.
|
||||
|
||||
For example, suppose you have a Pod that uses `5m` or `0.005` CPU and would like to decrease
|
||||
its CPU resources. By using the decimal form, it's harder to spot that `0.0005` CPU
|
||||
is an invalid value, while by using the milliCPU form, it's easier to spot that
|
||||
`0.5m` is an invalid value.
|
||||
{{< /note >}}
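
As a small illustration of the milliCPU form, a container spec fragment might look like this (the container name and image are placeholders):

```yaml
containers:
- name: app                             # placeholder name
  image: registry.example/app:latest    # placeholder image
  resources:
    requests:
      cpu: 5m      # easier to sanity-check than the decimal form 0.005
    limits:
      cpu: 100m
```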
|
||||
|
||||
### Memory resource units {#meaning-of-memory}
|
||||
|
@ -571,7 +576,7 @@ Cluster-level extended resources are not tied to nodes. They are usually managed
|
|||
by scheduler extenders, which handle the resource consumption and resource quota.
|
||||
|
||||
You can specify the extended resources that are handled by scheduler extenders
|
||||
in [scheduler configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
in [scheduler configuration](/docs/reference/config-api/kube-scheduler-config.v1/)
|
||||
|
||||
**Example:**
|
||||
|
||||
|
@ -817,6 +822,6 @@ memory limit (and possibly request) for that container.
|
|||
* Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
|
||||
and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
|
||||
* Read about [project quotas](https://www.linux.org/docs/man8/xfs_quota.html) in XFS
|
||||
* Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
* Read more about the [kube-scheduler configuration reference (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
|
||||
* Read more about [Quality of Service classes for Pods](/docs/concepts/workloads/pods/pod-qos/)
|
||||
|
||||
|
|
|
@ -209,7 +209,7 @@ You should only create a ServiceAccount token Secret
|
|||
if you can't use the `TokenRequest` API to obtain a token,
|
||||
and the security exposure of persisting a non-expiring token credential
|
||||
in a readable API object is acceptable to you. For instructions, see
|
||||
[Manually create a long-lived API token for a ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token).
|
||||
[Manually create a long-lived API token for a ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount).
|
||||
{{< /note >}}
|
||||
|
||||
When using this Secret type, you need to ensure that the
|
||||
|
@ -381,7 +381,7 @@ The following YAML contains an example config for a TLS Secret:
|
|||
|
||||
The TLS Secret type is provided only for convenience.
|
||||
You can create an `Opaque` type for credentials used for TLS authentication.
|
||||
However, using the defined and public Secret type (`kubernetes.io/ssh-auth`)
|
||||
However, using the defined and public Secret type (`kubernetes.io/tls`)
|
||||
helps ensure the consistency of Secret format in your project. The API server
|
||||
verifies if the required keys are set for a Secret of this type.
|
||||
|
||||
|
|
|
@ -50,17 +50,20 @@ A more detailed description of the termination behavior can be found in
|
|||
### Hook handler implementations
|
||||
|
||||
Containers can access a hook by implementing and registering a handler for that hook.
|
||||
There are two types of hook handlers that can be implemented for Containers:
|
||||
There are three types of hook handlers that can be implemented for Containers:
|
||||
|
||||
* Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container.
|
||||
Resources consumed by the command are counted against the Container.
|
||||
* HTTP - Executes an HTTP request against a specific endpoint on the Container.
|
||||
* Sleep - Pauses the container for a specified duration.
|
||||
The "Sleep" action is available when the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
`PodLifecycleSleepAction` is enabled; a minimal manifest sketch is shown after this list.
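
As a hedged sketch (assuming the `PodLifecycleSleepAction` feature gate is enabled; the Pod name, container name and image are placeholders), a Sleep `preStop` hook can be declared like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleep-hook-demo                  # placeholder name
spec:
  containers:
  - name: app
    image: registry.example/app:latest   # placeholder image
    lifecycle:
      preStop:
        sleep:
          seconds: 5   # pause 5 seconds before the container is sent the TERM signal
```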
|
||||
|
||||
### Hook handler execution
|
||||
|
||||
When a Container lifecycle management hook is called,
|
||||
the Kubernetes management system executes the handler according to the hook action,
|
||||
`httpGet` and `tcpSocket` are executed by the kubelet process, and `exec` is executed in the container.
|
||||
`httpGet` , `tcpSocket` and `sleep` are executed by the kubelet process, and `exec` is executed in the container.
|
||||
|
||||
Hook handler calls are synchronous within the context of the Pod containing the Container.
|
||||
This means that for a `PostStart` hook,
|
||||
|
|
|
@ -159,6 +159,17 @@ that Kubernetes will keep trying to pull the image, with an increasing back-off
|
|||
Kubernetes raises the delay between each attempt until it reaches a compiled-in limit,
|
||||
which is 300 seconds (5 minutes).
|
||||
|
||||
### Image pull per runtime class
|
||||
|
||||
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
Kubernetes includes alpha support for performing image pulls based on the RuntimeClass of a Pod.
|
||||
|
||||
If you enable the `RuntimeClassInImageCriApi` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/),
|
||||
the kubelet references container images by a tuple of (image name, runtime handler) rather than just the
|
||||
image name or digest. Your {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
|
||||
may adapt its behavior based on the selected runtime handler.
|
||||
Pulling images based on runtime class is helpful for VM-based containers, such as Windows Hyper-V containers.
|
||||
|
||||
## Serial and parallel image pulls
|
||||
|
||||
By default, kubelet pulls images serially. In other words, kubelet sends only
|
||||
|
|
|
@ -264,7 +264,7 @@ a way to extend Kubernetes with supports for new kinds of volumes. The volumes c
|
|||
durable external storage, or provide ephemeral storage, or they might offer a read-only interface
|
||||
to information using a filesystem paradigm.
|
||||
|
||||
Kubernetes also includes support for [FlexVolume](/docs/concepts/storage/volumes/#flexvolume-deprecated) plugins,
|
||||
Kubernetes also includes support for [FlexVolume](/docs/concepts/storage/volumes/#flexvolume) plugins,
|
||||
which are deprecated since Kubernetes v1.23 (in favour of CSI).
|
||||
|
||||
FlexVolume plugins allow users to mount volume types that aren't natively supported by Kubernetes. When
|
||||
|
|
|
@ -114,7 +114,7 @@ The general workflow of a device plugin includes the following steps:
|
|||
// informed allocation decision when possible.
|
||||
rpc GetPreferredAllocation(PreferredAllocationRequest) returns (PreferredAllocationResponse) {}
|
||||
|
||||
// PreStartContainer is called, if indicated by Device Plugin during registeration phase,
|
||||
// PreStartContainer is called, if indicated by Device Plugin during registration phase,
|
||||
// before each container start. Device plugin can run device specific operations
|
||||
// such as resetting the device before making devices available to the container.
|
||||
rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {}
|
||||
|
@ -159,8 +159,8 @@ The general workflow of a device plugin includes the following steps:
|
|||
{{< note >}}
|
||||
The processing of the fully-qualified CDI device names by the Device Manager requires
|
||||
that the `DevicePluginCDIDevices` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
is enabled for the kubelet and the kube-apiserver. This was added as an alpha feature in Kubernetes
|
||||
v1.28.
|
||||
is enabled for both the kubelet and the kube-apiserver. This was added as an alpha feature in Kubernetes
|
||||
v1.28 and graduated to beta in v1.29.
|
||||
{{< /note >}}
|
||||
|
||||
### Handling kubelet restarts
|
||||
|
@ -346,7 +346,7 @@ update and Kubelet needs to be restarted to reflect the correct resource capacit
|
|||
{{< /note >}}
|
||||
|
||||
```gRPC
|
||||
// AllocatableResourcesResponses contains informations about all the devices known by the kubelet
|
||||
// AllocatableResourcesResponses contains information about all the devices known by the kubelet
|
||||
message AllocatableResourcesResponse {
|
||||
repeated ContainerDevices devices = 1;
|
||||
repeated int64 cpu_ids = 2;
|
||||
|
|
|
@ -50,7 +50,7 @@ documentation for that Container Runtime, for example:
|
|||
- [CRI-O](https://github.com/cri-o/cri-o/blob/main/contrib/cni/README.md)
|
||||
|
||||
For specific information about how to install and manage a CNI plugin, see the documentation for
|
||||
that plugin or [networking provider](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model).
|
||||
that plugin or [networking provider](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model).
|
||||
|
||||
## Network Plugin Requirements
|
||||
|
||||
|
|
File diff suppressed because one or more lines are too long
Before Width: | Height: | Size: 23 KiB After Width: | Height: | Size: 23 KiB |
|
@ -25,7 +25,10 @@ a complete and working Kubernetes cluster.
|
|||
<!-- body -->
|
||||
## Control Plane Components
|
||||
|
||||
The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new {{< glossary_tooltip text="pod" term_id="pod">}} when a deployment's `replicas` field is unsatisfied).
|
||||
The control plane's components make global decisions about the cluster (for example, scheduling),
|
||||
as well as detecting and responding to cluster events (for example, starting up a new
|
||||
{{< glossary_tooltip text="pod" term_id="pod">}} when a Deployment's
|
||||
`{{< glossary_tooltip text="replicas" term_id="replica" >}}` field is unsatisfied).
|
||||
|
||||
Control plane components can be run on any machine in the cluster. However,
|
||||
for simplicity, set up scripts typically start all control plane components on
|
||||
|
@ -105,19 +108,24 @@ see [Addons](/docs/concepts/cluster-administration/addons/).
|
|||
|
||||
### DNS
|
||||
|
||||
While the other addons are not strictly required, all Kubernetes clusters should have [cluster DNS](/docs/concepts/services-networking/dns-pod-service/), as many examples rely on it.
|
||||
While the other addons are not strictly required, all Kubernetes clusters should have
|
||||
[cluster DNS](/docs/concepts/services-networking/dns-pod-service/), as many examples rely on it.
|
||||
|
||||
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
|
||||
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment,
|
||||
which serves DNS records for Kubernetes services.
|
||||
|
||||
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
|
||||
|
||||
### Web UI (Dashboard)
|
||||
|
||||
[Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/) is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
|
||||
[Dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard/) is a general purpose,
|
||||
web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications
|
||||
running in the cluster, as well as the cluster itself.
|
||||
|
||||
### Container Resource Monitoring
|
||||
|
||||
[Container Resource Monitoring](/docs/tasks/debug/debug-cluster/resource-usage-monitoring/) records generic time-series metrics
|
||||
[Container Resource Monitoring](/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)
|
||||
records generic time-series metrics
|
||||
about containers in a central database, and provides a UI for browsing that data.
|
||||
|
||||
### Cluster-level Logging
|
||||
|
@ -135,7 +143,8 @@ allocating IP addresses to pods and enabling them to communicate with each other
|
|||
## {{% heading "whatsnext" %}}
|
||||
|
||||
Learn more about the following:
|
||||
* [Nodes](/docs/concepts/architecture/nodes/) and [their communication](/docs/concepts/architecture/control-plane-node-communication/) with the control plane.
|
||||
* [Nodes](/docs/concepts/architecture/nodes/) and [their communication](/docs/concepts/architecture/control-plane-node-communication/)
|
||||
with the control plane.
|
||||
* Kubernetes [controllers](/docs/concepts/architecture/controller/).
|
||||
* [kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) which is the default scheduler for Kubernetes.
|
||||
* Etcd's official [documentation](https://etcd.io/docs/).
|
||||
|
|
|
@ -67,6 +67,13 @@ This means the name must:
|
|||
- start with an alphabetic character
|
||||
- end with an alphanumeric character
|
||||
|
||||
{{< note >}}
|
||||
The only difference between the RFC 1035 and RFC 1123
|
||||
label standards is that RFC 1123 labels are allowed to
|
||||
start with a digit, whereas RFC 1035 labels can start
|
||||
with a lowercase alphabetic character only.
|
||||
{{< /note >}}
|
||||
|
||||
### Path Segment Names
|
||||
|
||||
Some resource types require their names to be able to be safely encoded as a
|
||||
|
|
|
@ -64,5 +64,5 @@ Dynamic Admission Controllers that act as flexible policy engines are being deve
|
|||
## Apply policies using Kubelet configurations
|
||||
|
||||
Kubernetes allows configuring the Kubelet on each worker node. Some Kubelet configurations act as policies:
|
||||
* [Process ID limts and reservations](/docs/concepts/policy/pid-limiting/) are used to limit and reserve allocatable PIDs.
|
||||
* [Process ID limits and reservations](/docs/concepts/policy/pid-limiting/) are used to limit and reserve allocatable PIDs.
|
||||
* [Node Resource Managers](/docs/concepts/policy/node-resource-managers/) can manage compute, memory, and device resources for latency-critical and high-throughput workloads.
|
||||
|
|
|
@ -306,7 +306,7 @@ Pod affinity rule uses the "hard"
|
|||
uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
|
||||
|
||||
The affinity rule specifies that the scheduler is allowed to place the example Pod
|
||||
on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/)
|
||||
on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
|
||||
where other Pods have been labeled with `security=S1`.
|
||||
For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
|
||||
consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
|
||||
|
@ -315,7 +315,7 @@ Zone V already labeled with `security=S1`. Conversely, if there are no Pods with
|
|||
labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
|
||||
|
||||
The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
|
||||
on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/)
|
||||
on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
|
||||
where other Pods have been labeled with `security=S2`.
|
||||
For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
|
||||
consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
|
||||
|
@ -358,6 +358,108 @@ The affinity term is applied to namespaces selected by both `namespaceSelector`
|
|||
Note that an empty `namespaceSelector` ({}) matches all namespaces, while a null or empty `namespaces` list and
|
||||
null `namespaceSelector` matches the namespace of the Pod where the rule is defined.
|
||||
|
||||
#### matchLabelKeys
|
||||
|
||||
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
{{< note >}}
|
||||
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
|
||||
The `matchLabelKeys` field is an alpha-level field and is disabled by default in
|
||||
Kubernetes {{< skew currentVersion >}}.
|
||||
When you want to use it, you have to enable it via the
|
||||
`MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
{{< /note >}}
|
||||
|
||||
Kubernetes includes an optional `matchLabelKeys` field for Pod affinity
|
||||
or anti-affinity. The field specifies keys for the labels that should match with the incoming Pod's labels,
|
||||
when satisfying the Pod (anti)affinity.
|
||||
|
||||
The keys are used to look up values from the pod labels; those key-value labels are combined
|
||||
(using `AND`) with the match restrictions defined using the `labelSelector` field. The combined
|
||||
filtering selects the set of existing pods that will be taken into Pod (anti)affinity calculation.
|
||||
|
||||
A common use case is to use `matchLabelKeys` with `pod-template-hash` (set on Pods
|
||||
managed as part of a Deployment, where the value is unique for each revision).
|
||||
Using `pod-template-hash` in `matchLabelKeys` allows you to target the Pods that belong
|
||||
to the same revision as the incoming Pod, so that a rolling upgrade won't break affinity.
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: application-server
|
||||
...
|
||||
spec:
|
||||
template:
|
||||
affinity:
|
||||
podAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- labelSelector:
|
||||
matchExpressions:
|
||||
- key: app
|
||||
operator: In
|
||||
values:
|
||||
- database
|
||||
topologyKey: topology.kubernetes.io/zone
|
||||
# Only Pods from a given rollout are taken into consideration when calculating pod affinity.
|
||||
# If you update the Deployment, the replacement Pods follow their own affinity rules
|
||||
# (if there are any defined in the new Pod template)
|
||||
matchLabelKeys:
|
||||
- pod-template-hash
|
||||
```
|
||||
|
||||
#### mismatchLabelKeys
|
||||
|
||||
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
{{< note >}}
|
||||
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
|
||||
The `mismatchLabelKeys` field is an alpha-level field and is disabled by default in
|
||||
Kubernetes {{< skew currentVersion >}}.
|
||||
When you want to use it, you have to enable it via the
|
||||
`MatchLabelKeysInPodAffinity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
{{< /note >}}
|
||||
|
||||
Kubernetes includes an optional `mismatchLabelKeys` field for Pod affinity
|
||||
or anti-affinity. The field specifies keys for the labels that should **not** match with the incoming Pod's labels,
|
||||
when satisfying the Pod (anti)affinity.
|
||||
|
||||
One example use case is to ensure that Pods land in a topology domain (node, zone, etc.) where only Pods from the same tenant or team are scheduled.
|
||||
In other words, you want to avoid running Pods from two different tenants on the same topology domain at the same time.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
labels:
|
||||
# Assume that all relevant Pods have a "tenant" label set
|
||||
tenant: tenant-a
|
||||
...
|
||||
spec:
|
||||
affinity:
|
||||
podAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
# ensure that pods associated with this tenant land on the correct node pool
|
||||
- matchLabelKeys:
|
||||
- tenant
|
||||
topologyKey: node-pool
|
||||
podAntiAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
# ensure that pods associated with this tenant can't schedule to nodes used for another tenant
|
||||
- mismatchLabelKeys:
|
||||
- tenant # whatever the value of the "tenant" label for this Pod, prevent
|
||||
# scheduling to nodes in any pool where any Pod from a different
|
||||
# tenant is running.
|
||||
labelSelector:
|
||||
# We have to have the labelSelector which selects only Pods with the tenant label,
|
||||
# otherwise this Pod would hate Pods from daemonsets as well, for example,
|
||||
# which aren't supposed to have the tenant label.
|
||||
matchExpressions:
|
||||
- key: tenant
|
||||
operator: Exists
|
||||
topologyKey: node-pool
|
||||
```
|
||||
|
||||
#### More practical use-cases
|
||||
|
||||
Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
|
||||
|
|
|
@ -162,6 +162,17 @@ gets scheduled onto one node and then cannot run there, which is bad because
|
|||
such a pending Pod also blocks all other resources like RAM or CPU that were
|
||||
set aside for it.
|
||||
|
||||
{{< note >}}
|
||||
|
||||
Scheduling of pods which use ResourceClaims is going to be slower because of
the additional communication that is required. Be aware that this can also affect
pods that don't use ResourceClaims: only one pod at a time gets scheduled,
blocking API calls are made while handling a pod with ResourceClaims, and so
scheduling of the next pod is delayed.
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
## Monitoring resources
|
||||
|
||||
The kubelet provides a gRPC service to enable discovery of dynamic resources of
|
||||
|
|
|
@ -86,7 +86,7 @@ of the scheduler:
|
|||
* Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
|
||||
* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
|
||||
* Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
|
||||
* Read the [kube-scheduler config (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) reference
|
||||
* Read the [kube-scheduler config (v1)](/docs/reference/config-api/kube-scheduler-config.v1/) reference
|
||||
* Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
|
||||
* Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/)
|
||||
* Learn about [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
|
||||
|
|
|
@ -23,7 +23,7 @@ To set the `MostAllocated` strategy for the `NodeResourcesFit` plugin, use a
|
|||
[scheduler configuration](/docs/reference/scheduling/config) similar to the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||
apiVersion: kubescheduler.config.k8s.io/v1
|
||||
kind: KubeSchedulerConfiguration
|
||||
profiles:
|
||||
- pluginConfig:
|
||||
|
@ -43,7 +43,7 @@ profiles:
|
|||
```
|
||||
|
||||
To learn more about other parameters and their default configuration, see the API documentation for
|
||||
[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs).
|
||||
[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1/#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs).
|
||||
|
||||
## Enabling bin packing using RequestedToCapacityRatio
|
||||
|
||||
|
@ -53,7 +53,7 @@ allows users to bin pack extended resources by using appropriate parameters
|
|||
to improve the utilization of scarce resources in large clusters. It favors nodes according to a
|
||||
configured function of the allocated resources. The behavior of the `RequestedToCapacityRatio` in
|
||||
the `NodeResourcesFit` score function can be controlled by the
|
||||
[scoringStrategy](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy) field.
|
||||
[scoringStrategy](/docs/reference/config-api/kube-scheduler-config.v1/#kubescheduler-config-k8s-io-v1-ScoringStrategy) field.
|
||||
Within the `scoringStrategy` field, you can configure two parameters: `requestedToCapacityRatio` and
|
||||
`resources`. The `shape` in the `requestedToCapacityRatio`
|
||||
parameter allows the user to tune the function as least requested or most
|
||||
|
@ -66,7 +66,7 @@ the bin packing behavior for extended resources `intel.com/foo` and `intel.com/b
|
|||
using the `requestedToCapacityRatio` field.
|
||||
|
||||
```yaml
|
||||
apiVersion: kubescheduler.config.k8s.io/v1beta3
|
||||
apiVersion: kubescheduler.config.k8s.io/v1
|
||||
kind: KubeSchedulerConfiguration
|
||||
profiles:
|
||||
- pluginConfig:
|
||||
|
@ -92,7 +92,7 @@ flag `--config=/path/to/config/file` will pass the configuration to the
|
|||
scheduler.
|
||||
|
||||
To learn more about other parameters and their default configuration, see the API documentation for
|
||||
[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs).
|
||||
[`NodeResourcesFitArgs`](/docs/reference/config-api/kube-scheduler-config.v1/#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs).
|
||||
|
||||
### Tuning the score function
|
||||
|
||||
|
|
|
@ -43,7 +43,7 @@ If you set `percentageOfNodesToScore` above 100, kube-scheduler acts as if you
|
|||
had set a value of 100.
|
||||
|
||||
To change the value, edit the
|
||||
[kube-scheduler configuration file](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
[kube-scheduler configuration file](/docs/reference/config-api/kube-scheduler-config.v1/)
|
||||
and then restart the scheduler.
|
||||
In many cases, the configuration file can be found at `/etc/kubernetes/config/kube-scheduler.yaml`.
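For example, a minimal sketch of such a configuration file, setting the value to 50, could look like this (whether `v1` is the right API version depends on your Kubernetes release):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# Stop searching for more feasible Nodes once 50% of the cluster's Nodes
# have been found feasible for the Pod being scheduled.
percentageOfNodesToScore: 50
```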
@ -161,5 +161,5 @@ After going over all the Nodes, it goes back to Node 1.
|
|||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Check the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
|
||||
* Check the [kube-scheduler configuration reference (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
|
||||
|
||||
|
|
|
@ -10,28 +10,27 @@ weight: 60
|
|||
|
||||
{{< feature-state for_k8s_version="v1.19" state="stable" >}}
|
||||
|
||||
The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
|
||||
It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler. The APIs allow most scheduling features to be implemented as plugins, while keeping the
|
||||
scheduling "core" lightweight and maintainable. Refer to the [design proposal of the
|
||||
scheduling framework][kep] for more technical information on the design of the
|
||||
framework.
|
||||
The _scheduling framework_ is a pluggable architecture for the Kubernetes scheduler.
|
||||
It consists of a set of "plugin" APIs that are compiled directly into the scheduler.
|
||||
These APIs allow most scheduling features to be implemented as plugins,
|
||||
while keeping the scheduling "core" lightweight and maintainable. Refer to the
|
||||
[design proposal of the scheduling framework][kep] for more technical information on
|
||||
the design of the framework.
|
||||
|
||||
[kep]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/624-scheduling-framework/README.md
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
# Framework workflow
|
||||
## Framework workflow
|
||||
|
||||
The Scheduling Framework defines a few extension points. Scheduler plugins
|
||||
register to be invoked at one or more extension points. Some of these plugins
|
||||
can change the scheduling decisions and some are informational only.
|
||||
|
||||
Each attempt to schedule one Pod is split into two phases, the **scheduling
|
||||
cycle** and the **binding cycle**.
|
||||
Each attempt to schedule one Pod is split into two phases, the
|
||||
**scheduling cycle** and the **binding cycle**.
|
||||
|
||||
## Scheduling Cycle & Binding Cycle
|
||||
### Scheduling cycle & binding cycle
|
||||
|
||||
The scheduling cycle selects a node for the Pod, and the binding cycle applies
|
||||
that decision to the cluster. Together, a scheduling cycle and binding cycle are
|
||||
|
@ -51,7 +50,7 @@ that the scheduling framework exposes.
|
|||
One plugin may implement multiple interfaces to perform more complex or
|
||||
stateful tasks.
|
||||
|
||||
Some interfaces match the scheduler extension points which can be configured through
|
||||
Some interfaces match the scheduler extension points which can be configured through
|
||||
[Scheduler Configuration](/docs/reference/scheduling/config/#extension-points).
|
||||
|
||||
{{< figure src="/images/docs/scheduling-framework-extensions.png" title="Scheduling framework extension points" class="diagram-large">}}
|
||||
|
@ -69,23 +68,27 @@ For more details about how internal scheduler queues work, read
|
|||
|
||||
### EnqueueExtension
|
||||
|
||||
EnqueueExtension is the interface where the plugin can control
|
||||
EnqueueExtension is the interface where the plugin can control
|
||||
whether to retry scheduling of Pods rejected by this plugin, based on changes in the cluster.
|
||||
Plugins that implement PreEnqueue, PreFilter, Filter, Reserve or Permit should implement this interface.
|
||||
|
||||
#### QueueingHint
|
||||
### QueueingHint
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="beta" >}}
|
||||
|
||||
QueueingHint is a callback function for deciding whether a Pod can be requeued to the active queue or backoff queue.
|
||||
QueueingHint is a callback function for deciding whether a Pod can be requeued to the active queue or backoff queue.
|
||||
It's executed every time a certain kind of event or change happens in the cluster.
|
||||
When the QueueingHint finds that the event might make the Pod schedulable,
|
||||
When the QueueingHint finds that the event might make the Pod schedulable,
|
||||
the Pod is put into the active queue or the backoff queue
|
||||
so that the scheduler will retry the scheduling of the Pod.
|
||||
|
||||
{{< note >}}
|
||||
QueueingHint evaluation during scheduling is a beta-level feature and is enabled by default in 1.28.
|
||||
You can disable it via the
|
||||
QueueingHint evaluation during scheduling is a beta-level feature.
|
||||
The v1.28 release series initially enabled the associated feature gate; however, after the
|
||||
discovery of an excessive memory footprint, the Kubernetes project set that feature gate
|
||||
to be disabled by default. In Kubernetes {{< skew currentVersion >}}, this feature gate is
|
||||
disabled and you need to enable it manually.
|
||||
You can enable it via the
|
||||
`SchedulerQueueingHints` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -218,9 +221,9 @@ the three things:
|
|||
|
||||
{{< note >}}
|
||||
While any plugin can access the list of "waiting" Pods and approve them
|
||||
(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit
|
||||
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
|
||||
is approved, it is sent to the [PreBind](#pre-bind) phase.
|
||||
(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)),
|
||||
we expect only the permit plugins to approve binding of reserved Pods that are in "waiting" state.
|
||||
Once a Pod is approved, it is sent to the [PreBind](#pre-bind) phase.
|
||||
{{< /note >}}
|
||||
|
||||
### PreBind {#pre-bind}
|
||||
|
@ -284,4 +287,3 @@ plugins and get them configured along with default plugins. You can visit
|
|||
If you are using Kubernetes v1.18 or later, you can configure a set of plugins as
|
||||
a scheduler profile and then define multiple profiles to fit various kinds of workload.
|
||||
Learn more at [multiple profiles](/docs/reference/scheduling/config/#multiple-profiles).
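As a rough sketch (the profile names and the choice of disabled plugins here are illustrative assumptions, not a recommended setup), a configuration with two profiles might look like:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  # Default behaviour for Pods that do not request a specific scheduler.
  - schedulerName: default-scheduler
  # A second profile that skips scoring entirely; Pods opt in by setting
  # spec.schedulerName: no-scoring-scheduler
  - schedulerName: no-scoring-scheduler
    plugins:
      preScore:
        disabled:
          - name: '*'
      score:
        disabled:
          - name: '*'
```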
@ -71,7 +71,7 @@ The default value for `operator` is `Equal`.
|
|||
A toleration "matches" a taint if the keys are the same and the effects are the same, and:
|
||||
|
||||
* the `operator` is `Exists` (in which case no `value` should be specified), or
|
||||
* the `operator` is `Equal` and the `value`s are equal.
|
||||
* the `operator` is `Equal` and the values should be equal.
|
||||
|
||||
{{< note >}}
|
||||
|
||||
|
@ -97,7 +97,7 @@ The allowed values for the `effect` field are:
|
|||
* Pods that tolerate the taint with a specified `tolerationSeconds` remain
|
||||
bound for the specified amount of time. After that time elapses, the node
|
||||
lifecycle controller evicts the Pods from the node.
|
||||
|
||||
|
||||
`NoSchedule`
|
||||
: No new Pods will be scheduled on the tainted node unless they have a matching
|
||||
toleration. Pods currently running on the node are **not** evicted.
|
||||
|
@ -105,7 +105,7 @@ The allowed values for the `effect` field are:
|
|||
`PreferNoSchedule`
|
||||
: `PreferNoSchedule` is a "preference" or "soft" version of `NoSchedule`.
|
||||
The control plane will *try* to avoid placing a Pod that does not tolerate
|
||||
the taint on the node, but it is not guaranteed.
|
||||
the taint on the node, but it is not guaranteed.
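For example, a Pod that should tolerate a `NoExecute` taint for a limited time could declare a toleration like the following sketch (the key and value are placeholders):

```yaml
tolerations:
  - key: "example-key"
    operator: "Equal"
    value: "example-value"
    effect: "NoExecute"
    # Stay bound for up to one hour after the taint is added, then get evicted.
    tolerationSeconds: 3600
```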
You can put multiple taints on the same node and multiple tolerations on the same pod.
|
||||
The way Kubernetes processes multiple taints and tolerations is like a filter: start
|
||||
|
@ -293,15 +293,15 @@ decisions. This ensures that node conditions don't directly affect scheduling.
|
|||
For example, if the `DiskPressure` node condition is active, the control plane
|
||||
adds the `node.kubernetes.io/disk-pressure` taint and does not schedule new pods
|
||||
onto the affected node. If the `MemoryPressure` node condition is active, the
|
||||
control plane adds the `node.kubernetes.io/memory-pressure` taint.
|
||||
control plane adds the `node.kubernetes.io/memory-pressure` taint.
|
||||
|
||||
You can ignore node conditions for newly created pods by adding the corresponding
|
||||
Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
|
||||
toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
|
||||
other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
|
||||
Pod tolerations. The control plane also adds the `node.kubernetes.io/memory-pressure`
|
||||
toleration on pods that have a {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}
|
||||
other than `BestEffort`. This is because Kubernetes treats pods in the `Guaranteed`
|
||||
or `Burstable` QoS classes (even pods with no memory request set) as if they are
|
||||
able to cope with memory pressure, while new `BestEffort` pods are not scheduled
|
||||
onto the affected node.
|
||||
onto the affected node.
|
||||
|
||||
The DaemonSet controller automatically adds the following `NoSchedule`
|
||||
tolerations to all daemons, to prevent DaemonSets from breaking.
|
||||
|
|
|
@ -29,7 +29,7 @@ suitable for this use-case.
|
|||
|
||||
## X.509 client certificate authentication {#x509-client-certificate-authentication}
|
||||
|
||||
Kubernetes leverages [X.509 client certificate](/docs/reference/access-authn-authz/authentication/#x509-client-certs)
|
||||
Kubernetes leverages [X.509 client certificate](/docs/reference/access-authn-authz/authentication/#x509-client-certificates)
|
||||
authentication for system components, such as when the Kubelet authenticates to the API Server.
|
||||
While this mechanism can also be used for user authentication, it might not be suitable for
|
||||
production use due to several restrictions:
|
||||
|
|
|
@ -271,6 +271,11 @@ fail validation.
|
|||
<li><code>net.ipv4.ip_unprivileged_port_start</code></li>
|
||||
<li><code>net.ipv4.tcp_syncookies</code></li>
|
||||
<li><code>net.ipv4.ping_group_range</code></li>
|
||||
<li><code>net.ipv4.ip_local_reserved_ports</code> (since Kubernetes 1.27)</li>
|
||||
<li><code>net.ipv4.tcp_keepalive_time</code> (since Kubernetes 1.29)</li>
|
||||
<li><code>net.ipv4.tcp_fin_timeout</code> (since Kubernetes 1.29)</li>
|
||||
<li><code>net.ipv4.tcp_keepalive_intvl</code> (since Kubernetes 1.29)</li>
|
||||
<li><code>net.ipv4.tcp_keepalive_probes</code> (since Kubernetes 1.29)</li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -485,6 +490,12 @@ Restrictions on the following controls are only required if `.spec.os.name` is n
|
|||
- Seccomp
|
||||
- Linux Capabilities
|
||||
|
||||
## User namespaces
|
||||
|
||||
User Namespaces are a Linux-only feature to run workloads with increased
|
||||
isolation. How they work together with Pod Security Standards is described in
|
||||
the [documentation](/docs/concepts/workloads/pods/user-namespaces#integration-with-pod-security-admission-checks) for Pods that use user namespaces.
|
||||
|
||||
## FAQ
|
||||
|
||||
### Why isn't there a profile between privileged and baseline?
|
||||
|
|
|
@ -207,21 +207,7 @@ SELinux is only available on Linux nodes, and enabled in
|
|||
## Logs and auditing
|
||||
|
||||
- [ ] Audit logs, if enabled, are protected from general access.
|
||||
- [ ] The `/logs` API is disabled (you are running kube-apiserver with
|
||||
`--enable-logs-handler=false`).
|
||||
|
||||
Kubernetes includes a `/logs` API endpoint, enabled by default,
|
||||
that lets users request the contents of the API server's `/var/log` directory over HTTP. Accessing
|
||||
that endpoint requires authentication.
|
||||
|
||||
Allowing broad access to Kubernetes logs can make security information
|
||||
available to a potential attacker.
|
||||
|
||||
As a good practice, set up a separate means to collect and aggregate
|
||||
control plane logs, and do not use the `/logs` API endpoint.
|
||||
Alternatively, if you do run your control plane with the `/logs` API endpoint,
limit the content of `/var/log` (within the host or container where the API server is running) to
Kubernetes API server logs only.
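As a sketch only: on a kubeadm-style control plane the flag is typically added to the API server's static Pod manifest; the file path and surrounding fields below are assumptions about such a layout, not a fixed requirement:

```yaml
# Excerpt of a static Pod manifest such as /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        # Disable the /logs API endpoint; keep all other existing flags unchanged.
        - --enable-logs-handler=false
```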
## Pod placement
|
||||
|
||||
|
@ -390,7 +376,7 @@ availability state and recommended to improve your security posture:
|
|||
|
||||
[`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
|
||||
: Restricts kubelet's permissions to only modify the pods API resources they own
|
||||
or the node API ressource that represent themselves. It also prevents kubelet
|
||||
or the node API resource that represent themselves. It also prevents kubelet
|
||||
from using the `node-restriction.kubernetes.io/` annotation, which can be used
|
||||
by an attacker with access to the kubelet's credentials to influence pod
|
||||
placement to the controlled node.
|
||||
|
|
|
@ -247,7 +247,8 @@ request. The API server checks the validity of that bearer token as follows:
|
|||
|
||||
The TokenRequest API produces _bound tokens_ for a ServiceAccount. This
|
||||
binding is linked to the lifetime of the client, such as a Pod, that is acting
|
||||
as that ServiceAccount.
|
||||
as that ServiceAccount. See [Token Volume Projection](/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection)
|
||||
for an example of a bound pod service account token's JWT schema and payload.
|
||||
|
||||
For tokens issued using the `TokenRequest` API, the API server also checks that
|
||||
the specific object reference that is using the ServiceAccount still exists,
|
||||
|
@ -269,7 +270,7 @@ account credentials, you can use the following methods:
|
|||
|
||||
The Kubernetes project recommends that you use the TokenReview API, because
|
||||
this method invalidates tokens that are bound to API objects such as Secrets,
|
||||
ServiceAccounts, and Pods when those objects are deleted. For example, if you
|
||||
ServiceAccounts, Pods or Nodes when those objects are deleted. For example, if you
|
||||
delete the Pod that contains a projected ServiceAccount token, the cluster
|
||||
invalidates that token immediately and a TokenReview immediately fails.
|
||||
If you use OIDC validation instead, your clients continue to treat the token
|
||||
|
|
|
@ -98,9 +98,9 @@ of the form `hostname.my-svc.my-namespace.svc.cluster-domain.example`.
|
|||
|
||||
### A/AAAA records
|
||||
|
||||
In general a Pod has the following DNS resolution:
|
||||
Kube-DNS versions, prior to the implementation of the [DNS specification](https://github.com/kubernetes/dns/blob/master/docs/specification.md), had the following DNS resolution:
|
||||
|
||||
`pod-ip-address.my-namespace.pod.cluster-domain.example`.
|
||||
`pod-ipv4-address.my-namespace.pod.cluster-domain.example`.
|
||||
|
||||
For example, if a Pod in the `default` namespace has the IP address 172.17.0.3,
|
||||
and the domain name for your cluster is `cluster.local`, then the Pod has a DNS name:
|
||||
|
@ -109,7 +109,7 @@ and the domain name for your cluster is `cluster.local`, then the Pod has a DNS
|
|||
|
||||
Any Pods exposed by a Service have the following DNS resolution available:
|
||||
|
||||
`pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`.
|
||||
`pod-ipv4-address.service-name.my-namespace.svc.cluster-domain.example`.
|
||||
|
||||
### Pod's hostname and subdomain fields
|
||||
|
||||
|
|
|
@ -65,12 +65,12 @@ To configure IPv4/IPv6 dual-stack, set dual-stack cluster network assignments:
|
|||
* kube-proxy:
|
||||
* `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* kubelet:
|
||||
* when there is no `--cloud-provider` the administrator can pass a comma-separated pair of IP
|
||||
addresses via `--node-ip` to manually configure dual-stack `.status.addresses` for that Node.
|
||||
If a Pod runs on that node in HostNetwork mode, the Pod reports these IP addresses in its
|
||||
`.status.podIPs` field.
|
||||
All `podIPs` in a node match the IP family preference defined by the `.status.addresses`
|
||||
field for that Node.
|
||||
* `--node-ip=<IPv4 IP>,<IPv6 IP>`
|
||||
* This option is required for bare metal dual-stack nodes (nodes that do not define a
|
||||
cloud provider with the `--cloud-provider` flag). If you are using a cloud provider
|
||||
and choose to override the node IPs chosen by the cloud provider, set the
|
||||
`--node-ip` option.
|
||||
* (The legacy built-in cloud providers do not support dual-stack `--node-ip`.)
|
||||
|
||||
{{< note >}}
|
||||
An example of an IPv4 CIDR: `10.244.0.0/16` (though you would supply your own address range)
|
||||
|
@ -79,13 +79,6 @@ An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but
|
|||
address - see [RFC 4193](https://tools.ietf.org/html/rfc4193))
|
||||
{{< /note >}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
|
||||
|
||||
When using an external cloud provider, you can pass a dual-stack `--node-ip` value to
|
||||
kubelet if you enable the `CloudDualStackNodeIPs` feature gate in both kubelet and the
|
||||
external cloud provider. This is only supported for cloud providers that support dual
|
||||
stack clusters.
|
||||
|
||||
## Services
|
||||
|
||||
You can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both.
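For example, a minimal sketch of a Service that asks for dual-stack addressing when the cluster supports it (the name and selector are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-service
spec:
  # Request both IP families where available, falling back to single-stack otherwise.
  ipFamilyPolicy: PreferDualStack
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
```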
@ -210,7 +210,7 @@ perfectly full distribution of EndpointSlices. As an example, if there are 10
|
|||
new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each,
|
||||
this approach will create a new EndpointSlice instead of filling up the 2
|
||||
existing EndpointSlices. In other words, a single EndpointSlice creation is
|
||||
preferrable to multiple EndpointSlice updates.
|
||||
preferable to multiple EndpointSlice updates.
|
||||
|
||||
With kube-proxy running on each Node and watching EndpointSlices, every change
|
||||
to an EndpointSlice becomes relatively expensive since it will be transmitted to
|
||||
|
|
|
@ -191,7 +191,7 @@ guide for details on migrating Ingress resources to Gateway API resources.
|
|||
## {{% heading "whatsnext" %}}
|
||||
|
||||
Instead of Gateway API resources being natively implemented by Kubernetes, the specifications
|
||||
are defined as [Custom Resources](docs/concepts/extend-kubernetes/api-extension/custom-resources)
|
||||
are defined as [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
supported by a wide range of [implementations](https://gateway-api.sigs.k8s.io/implementations/).
|
||||
[Install](https://gateway-api.sigs.k8s.io/guides/#installing-gateway-api) the Gateway API CRDs or
|
||||
follow the installation instructions of your selected implementation. After installing an
|
||||
|
|
|
@ -64,7 +64,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
|
|||
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
|
||||
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
|
||||
* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane.
|
||||
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
|
||||
* [Voyager](https://voyagermesh.com) is an ingress controller for
|
||||
[HAProxy](https://www.haproxy.org/#desc).
|
||||
* [Wallarm Ingress Controller](https://www.wallarm.com/solutions/waf-for-kubernetes) is an Ingress Controller that provides WAAP (WAF) and API Security capabilities.
|
||||
|
||||
|
|
|
@ -58,7 +58,8 @@ By default, a pod is non-isolated for egress; all outbound connections are allow
|
|||
A pod is isolated for egress if there is any NetworkPolicy that both selects the pod and has
|
||||
"Egress" in its `policyTypes`; we say that such a policy applies to the pod for egress.
|
||||
When a pod is isolated for egress, the only allowed connections from the pod are those allowed by
|
||||
the `egress` list of some NetworkPolicy that applies to the pod for egress.
|
||||
the `egress` list of some NetworkPolicy that applies to the pod for egress. Reply traffic for those
|
||||
allowed connections will also be implicitly allowed.
|
||||
The effects of those `egress` lists combine additively.
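For example, the following sketch isolates Pods labelled `app: web` for egress and allows only outbound TCP traffic to port 443 of one address range (the label and CIDR are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-egress-443
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 443
```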
By default, a pod is non-isolated for ingress; all inbound connections are allowed.
|
||||
|
@ -66,7 +67,8 @@ A pod is isolated for ingress if there is any NetworkPolicy that both selects th
|
|||
has "Ingress" in its `policyTypes`; we say that such a policy applies to the pod for ingress.
|
||||
When a pod is isolated for ingress, the only allowed connections into the pod are those from
|
||||
the pod's node and those allowed by the `ingress` list of some NetworkPolicy that applies to
|
||||
the pod for ingress. The effects of those `ingress` lists combine additively.
|
||||
the pod for ingress. Reply traffic for those allowed connections will also be implicitly allowed.
|
||||
The effects of those `ingress` lists combine additively.
|
||||
|
||||
Network policies do not conflict; they are additive. If any policy or policies apply to a given
|
||||
pod for a given direction, the connections allowed in that direction from that pod is the union of
|
||||
|
@ -456,6 +458,16 @@ implemented using the NetworkPolicy API.
|
|||
- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost
|
||||
access, nor do they have the ability to block access from their resident node).
|
||||
|
||||
## NetworkPolicy's impact on existing connections
|
||||
|
||||
When the set of NetworkPolicies that applies to an existing connection changes - this could happen
|
||||
either due to a change in NetworkPolicies or if the relevant labels of the namespaces/pods selected by the
|
||||
policy (both subject and peers) are changed in the middle of an existing connection - it is
|
||||
implementation defined as to whether the change will take effect for that existing connection or not.
|
||||
For example, if a policy is created that leads to denying a previously allowed connection, the underlying network plugin
implementation is responsible for defining whether that new policy will close the existing connections or not.
|
||||
It is recommended not to modify policies/pods/namespaces in ways that might affect existing connections.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
- See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
|
||||
|
|
|
@ -520,16 +520,15 @@ spec:
|
|||
|
||||
#### Reserve Nodeport ranges to avoid collisions {#avoid-nodeport-collisions}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.29" state="stable" >}}
|
||||
|
||||
The policy for assigning ports to NodePort services applies to both the auto-assignment and
|
||||
the manual assignment scenarios. When a user wants to create a NodePort service that
|
||||
uses a specific port, the target port may conflict with another port that has already been assigned.
|
||||
In this case, you can enable the feature gate `ServiceNodePortStaticSubrange`, which allows you
|
||||
to use a different port allocation strategy for NodePort Services. The port range for NodePort services
|
||||
is divided into two bands. Dynamic port assignment uses the upper band by default, and it may use
|
||||
the lower band once the upper band has been exhausted. Users can then allocate from the lower band
|
||||
with a lower risk of port collision.
|
||||
|
||||
To avoid this problem, the port range for NodePort services is divided into two bands.
|
||||
Dynamic port assignment uses the upper band by default, and it may use the lower band once the
|
||||
upper band has been exhausted. Users can then allocate from the lower band with a lower risk of port collision.
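For example, a Service that pins a port near the bottom of the default 30000-32767 NodePort range might look like the sketch below; the exact size of the statically reserved lower band depends on the size of the configured port range, so treat the specific port as illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - port: 80
      targetPort: 8080
      # Manually chosen from the lower band to reduce the chance of colliding
      # with dynamically assigned ports from the upper band.
      nodePort: 30007
```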
#### Custom IP address configuration for `type: NodePort` Services {#service-nodeport-custom-listen-address}
|
||||
|
||||
|
@ -669,6 +668,28 @@ The value of `spec.loadBalancerClass` must be a label-style identifier,
|
|||
with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`".
|
||||
Unprefixed names are reserved for end-users.
|
||||
|
||||
#### Specifying IPMode of load balancer status {#load-balancer-ip-mode}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
Starting as Alpha in Kubernetes 1.29,
|
||||
a [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
named `LoadBalancerIPMode` allows you to set the `.status.loadBalancer.ingress.ipMode`
|
||||
for a Service with `type` set to `LoadBalancer`.
|
||||
The `.status.loadBalancer.ingress.ipMode` specifies how the load-balancer IP behaves.
|
||||
It may be specified only when the `.status.loadBalancer.ingress.ip` field is also specified.
|
||||
|
||||
There are two possible values for `.status.loadBalancer.ingress.ipMode`: "VIP" and "Proxy".
|
||||
The default value is "VIP" meaning that traffic is delivered to the node
|
||||
with the destination set to the load-balancer's IP and port.
|
||||
There are two cases when setting this to "Proxy", depending on how the load-balancer
|
||||
from the cloud provider delivers the traffic:
|
||||
|
||||
- If the traffic is delivered to the node then DNATed to the pod, the destination would be set to the node's IP and node port;
|
||||
- If the traffic is delivered directly to the pod, the destination would be set to the pod's IP and port.
|
||||
|
||||
Service implementations may use this information to adjust traffic routing.
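As an illustration only (this status stanza is written by the cloud provider's controller rather than by you), a Service whose load balancer proxies traffic directly might report something like:

```yaml
status:
  loadBalancer:
    ingress:
      - ip: 192.0.2.127
        # "Proxy": traffic arrives with a node or Pod address as the destination,
        # rather than being addressed to the load balancer's VIP.
        ipMode: Proxy
```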
#### Internal load balancer
|
||||
|
||||
In a mixed environment it is sometimes necessary to route traffic from Services inside the same
|
||||
|
@ -866,10 +887,7 @@ finding a Service: environment variables and DNS.
|
|||
When a Pod is run on a Node, the kubelet adds a set of environment variables
|
||||
for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
|
||||
where the Service name is upper-cased and dashes are converted to underscores.
|
||||
It also supports variables
|
||||
(see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72))
|
||||
that are compatible with Docker Engine's
|
||||
"_[legacy container links](https://docs.docker.com/network/links/)_" feature.
|
||||
|
||||
|
||||
For example, the Service `redis-primary` which exposes TCP port 6379 and has been
|
||||
allocated cluster IP address 10.0.0.11, produces the following environment
|
||||
|
|
|
@ -17,7 +17,8 @@ weight: 20
|
|||
<!-- overview -->
|
||||
|
||||
This document describes _persistent volumes_ in Kubernetes. Familiarity with
|
||||
[volumes](/docs/concepts/storage/volumes/) is suggested.
|
||||
[volumes](/docs/concepts/storage/volumes/), [StorageClasses](/docs/concepts/storage/storage-classes/)
|
||||
and [VolumeAttributesClasses](/docs/concepts/storage/volume-attributes-classes/) is suggested.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -39,8 +40,8 @@ NFS, iSCSI, or a cloud-provider-specific storage system.
|
|||
A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar
|
||||
to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can
|
||||
request specific levels of resources (CPU and Memory). Claims can request specific
|
||||
size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or
|
||||
ReadWriteMany, see [AccessModes](#access-modes)).
|
||||
size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany,
|
||||
ReadWriteMany, or ReadWriteOncePod, see [AccessModes](#access-modes)).
|
||||
|
||||
While PersistentVolumeClaims allow a user to consume abstract storage resources,
|
||||
it is common that users need PersistentVolumes with varying properties, such as
|
||||
|
@ -184,7 +185,7 @@ and the volume is considered "released". But it is not yet available for
|
|||
another claim because the previous claimant's data remains on the volume.
|
||||
An administrator can manually reclaim the volume with the following steps.
|
||||
|
||||
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
|
||||
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
|
||||
still exists after the PV is deleted.
|
||||
1. Manually clean up the data on the associated storage asset accordingly.
|
||||
1. Manually delete the associated storage asset.
|
||||
|
@ -272,7 +273,7 @@ Access Modes: RWO
|
|||
VolumeMode: Filesystem
|
||||
Capacity: 1Gi
|
||||
Node Affinity: <none>
|
||||
Message:
|
||||
Message:
|
||||
Source:
|
||||
Type: vSphereVolume (a Persistent Disk resource in vSphere)
|
||||
VolumePath: [vsanDatastore] d49c4a62-166f-ce12-c464-020077ba5d46/kubernetes-dynamic-pvc-74a498d6-3929-47e8-8c02-078c1ece4d78.vmdk
|
||||
|
@ -297,7 +298,7 @@ Access Modes: RWO
|
|||
VolumeMode: Filesystem
|
||||
Capacity: 200Mi
|
||||
Node Affinity: <none>
|
||||
Message:
|
||||
Message:
|
||||
Source:
|
||||
Type: CSI (a Container Storage Interface (CSI) volume source)
|
||||
Driver: csi.vsphere.vmware.com
|
||||
|
@ -577,9 +578,7 @@ mounting of NFS filesystems.
|
|||
### Capacity
|
||||
|
||||
Generally, a PV will have a specific storage capacity. This is set using the PV's
|
||||
`capacity` attribute. Read the glossary term
|
||||
[Quantity](/docs/reference/glossary/?all=true#term-quantity) to understand the units
|
||||
expected by `capacity`.
|
||||
`capacity` attribute which is a {{< glossary_tooltip term_id="quantity" >}} value.
|
||||
|
||||
Currently, storage size is the only resource that can be set or requested.
|
||||
Future attributes may include IOPS, throughput, etc.
|
||||
|
@ -618,7 +617,8 @@ The access modes are:
|
|||
|
||||
`ReadWriteOnce`
|
||||
: the volume can be mounted as read-write by a single node. ReadWriteOnce access
|
||||
mode still can allow multiple pods to access the volume when the pods are running on the same node.
|
||||
mode still can allow multiple pods to access the volume when the pods are
|
||||
running on the same node. For single pod access, please see ReadWriteOncePod.
|
||||
|
||||
`ReadOnlyMany`
|
||||
: the volume can be mounted as read-only by many nodes.
|
||||
|
@ -627,15 +627,22 @@ The access modes are:
|
|||
: the volume can be mounted as read-write by many nodes.
|
||||
|
||||
`ReadWriteOncePod`
|
||||
: {{< feature-state for_k8s_version="v1.27" state="beta" >}}
|
||||
: {{< feature-state for_k8s_version="v1.29" state="stable" >}}
|
||||
the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod
|
||||
access mode if you want to ensure that only one pod across the whole cluster can
|
||||
read that PVC or write to it. This is only supported for CSI volumes and
|
||||
Kubernetes version 1.22+.
|
||||
read that PVC or write to it.
|
||||
|
||||
The blog article
|
||||
[Introducing Single Pod Access Mode for PersistentVolumes](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/)
|
||||
covers this in more detail.
|
||||
{{< note >}}
|
||||
The `ReadWriteOncePod` access mode is only supported for
|
||||
{{< glossary_tooltip text="CSI" term_id="csi" >}} volumes and Kubernetes version
|
||||
1.22+. To use this feature you will need to update the following
|
||||
[CSI sidecars](https://kubernetes-csi.github.io/docs/sidecar-containers.html)
|
||||
to these versions or greater:
|
||||
|
||||
* [csi-provisioner:v3.0.0+](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0)
|
||||
* [csi-attacher:v3.3.0+](https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0)
|
||||
* [csi-resizer:v1.3.0+](https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0)
|
||||
{{< /note >}}
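For example, a claim that requests the `ReadWriteOncePod` access mode could look like this sketch (the name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
    # Only one Pod in the whole cluster may mount this volume read-write.
    - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi
```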
In the CLI, the access modes are abbreviated to:
|
||||
|
||||
|
@ -655,7 +662,7 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
|
|||
{{< /note >}}
|
||||
|
||||
> __Important!__ A volume can only be mounted using one access mode at a time,
|
||||
> even if it supports many.
|
||||
> even if it supports many.
|
||||
|
||||
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
|
||||
| :--- | :---: | :---: | :---: | - |
|
||||
|
@ -690,7 +697,7 @@ Current reclaim policies are:
|
|||
|
||||
* Retain -- manual reclamation
|
||||
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
|
||||
* Delete -- associated storage asset
|
||||
* Delete -- delete the volume
|
||||
|
||||
For Kubernetes {{< skew currentVersion >}}, only `nfs` and `hostPath` volume types support recycling.
|
||||
|
||||
|
@ -722,7 +729,7 @@ it will become fully deprecated in a future Kubernetes release.
|
|||
### Node Affinity
|
||||
|
||||
{{< note >}}
|
||||
For most volume types, you do not need to set this field.
|
||||
For most volume types, you do not need to set this field.
|
||||
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -753,7 +760,7 @@ You can see the name of the PVC bound to the PV using `kubectl describe persiste
|
|||
|
||||
#### Phase transition timestamp
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
The `.status` field for a PersistentVolume can include an alpha `lastPhaseTransitionTime` field. This field records
|
||||
the timestamp of when the volume last transitioned its phase. For newly created
|
||||
|
@ -1152,7 +1159,7 @@ users should be aware of:
|
|||
When the `CrossNamespaceVolumeDataSource` feature is enabled, there are additional differences:
|
||||
|
||||
* The `dataSource` field only allows local objects, while the `dataSourceRef` field allows
|
||||
objects in any namespaces.
|
||||
objects in any namespaces.
|
||||
* When namespace is specified, `dataSource` and `dataSourceRef` are not synced.
|
||||
|
||||
Users should always use `dataSourceRef` on clusters that have the feature gate enabled, and
|
||||
|
|
|
@ -24,6 +24,7 @@ Currently, the following types of volume sources can be projected:
|
|||
* [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi)
|
||||
* [`configMap`](/docs/concepts/storage/volumes/#configmap)
|
||||
* [`serviceAccountToken`](#serviceaccounttoken)
|
||||
* [`clusterTrustBundle`](#clustertrustbundle)
|
||||
|
||||
All sources are required to be in the same namespace as the Pod. For more details,
|
||||
see the [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) design document.
|
||||
|
@ -70,6 +71,31 @@ A container using a projected volume source as a [`subPath`](/docs/concepts/stor
|
|||
volume mount will not receive updates for those volume sources.
|
||||
{{< /note >}}
|
||||
|
||||
## clusterTrustBundle projected volumes {#clustertrustbundle}
|
||||
|
||||
{{<feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
{{< note >}}
|
||||
To use this feature in Kubernetes {{< skew currentVersion >}}, you must enable support for ClusterTrustBundle objects with the `ClusterTrustBundle` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and `--runtime-config=certificates.k8s.io/v1alpha1/clustertrustbundles=true` kube-apiserver flag, then enable the `ClusterTrustBundleProjection` feature gate.
|
||||
{{< /note >}}
|
||||
|
||||
The `clusterTrustBundle` projected volume source injects the contents of one or more [ClusterTrustBundle](/docs/reference/access-authn-authz/certificate-signing-requests#cluster-trust-bundles) objects as an automatically-updating file in the container filesystem.
|
||||
|
||||
ClusterTrustBundles can be selected either by [name](/docs/reference/access-authn-authz/certificate-signing-requests#ctb-signer-unlinked) or by [signer name](/docs/reference/access-authn-authz/certificate-signing-requests#ctb-signer-linked).
|
||||
|
||||
To select by name, use the `name` field to designate a single ClusterTrustBundle object.
|
||||
|
||||
To select by signer name, use the `signerName` field (and optionally the
|
||||
`labelSelector` field) to designate a set of ClusterTrustBundle objects that use
|
||||
the given signer name. If `labelSelector` is not present, then all
|
||||
ClusterTrustBundles for that signer are selected.
|
||||
|
||||
The kubelet deduplicates the certificates in the selected ClusterTrustBundle objects, normalizes the PEM representations (discarding comments and headers), reorders the certificates, and writes them into the file named by `path`. As the set of selected ClusterTrustBundles or their content changes, kubelet keeps the file up-to-date.
|
||||
|
||||
By default, the kubelet will prevent the pod from starting if the named ClusterTrustBundle is not found, or if `signerName` / `labelSelector` do not match any ClusterTrustBundles. If this behavior is not what you want, then set the `optional` field to `true`, and the pod will start up with an empty file at `path`.
|
||||
|
||||
{{% code_sample file="pods/storage/projected-clustertrustbundle.yaml" %}}
|
||||
|
||||
## SecurityContext interactions
|
||||
|
||||
The [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.
|
||||
|
|
|
@ -15,59 +15,78 @@ This document describes the concept of a StorageClass in Kubernetes. Familiarity
|
|||
with [volumes](/docs/concepts/storage/volumes/) and
|
||||
[persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Introduction
|
||||
|
||||
A StorageClass provides a way for administrators to describe the "classes" of
|
||||
A StorageClass provides a way for administrators to describe the _classes_ of
|
||||
storage they offer. Different classes might map to quality-of-service levels,
|
||||
or to backup policies, or to arbitrary policies determined by the cluster
|
||||
administrators. Kubernetes itself is unopinionated about what classes
|
||||
represent. This concept is sometimes called "profiles" in other storage
|
||||
systems.
|
||||
represent.
|
||||
|
||||
## The StorageClass Resource
|
||||
The Kubernetes concept of a storage class is similar to “profiles” in some other
|
||||
storage system designs.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## StorageClass objects
|
||||
|
||||
Each StorageClass contains the fields `provisioner`, `parameters`, and
|
||||
`reclaimPolicy`, which are used when a PersistentVolume belonging to the
|
||||
class needs to be dynamically provisioned.
|
||||
class needs to be dynamically provisioned to satisfy a PersistentVolumeClaim (PVC).
|
||||
|
||||
The name of a StorageClass object is significant, and is how users can
|
||||
request a particular class. Administrators set the name and other parameters
|
||||
of a class when first creating StorageClass objects.
|
||||
|
||||
Administrators can specify a default StorageClass only for PVCs that don't
|
||||
request any particular class to bind to: see the
|
||||
[PersistentVolumeClaim section](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
|
||||
for details.
|
||||
As an administrator, you can specify a default StorageClass that applies to any PVCs that
|
||||
don't request a specific class. For more details, see the
|
||||
[PersistentVolumeClaim concept](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: standard
|
||||
provisioner: kubernetes.io/aws-ebs
|
||||
parameters:
|
||||
type: gp2
|
||||
reclaimPolicy: Retain
|
||||
allowVolumeExpansion: true
|
||||
mountOptions:
|
||||
- debug
|
||||
volumeBindingMode: Immediate
|
||||
```
|
||||
Here's an example of a StorageClass:
|
||||
|
||||
### Default StorageClass
|
||||
{{% code_sample file="storage/storageclass-low-latency.yaml" %}}
|
||||
|
||||
When a PVC does not specify a `storageClassName`, the default StorageClass is
|
||||
used. The cluster can only have one default StorageClass. If more than one
|
||||
default StorageClass is accidentally set, the newest default is used when the
|
||||
PVC is dynamically provisioned.
|
||||
## Default StorageClass
|
||||
|
||||
You can mark a StorageClass as the default for your cluster.
|
||||
For instructions on setting the default StorageClass, see
|
||||
[Change the default StorageClass](/docs/tasks/administer-cluster/change-default-storage-class/).
|
||||
Note that certain cloud providers may already define a default StorageClass.
|
||||
|
||||
### Provisioner
|
||||
When a PVC does not specify a `storageClassName`, the default StorageClass is
|
||||
used.
|
||||
|
||||
If you set the
|
||||
[`storageclass.kubernetes.io/is-default-class`](/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class)
|
||||
annotation to true on more than one StorageClass in your cluster, and you then
|
||||
create a PersistentVolumeClaim with no `storageClassName` set, Kubernetes
|
||||
uses the most recently created default StorageClass.
|
||||
|
||||
{{< note >}}
|
||||
You should try to only have one StorageClass in your cluster that is
|
||||
marked as the default. The reason that Kubernetes allows you to have
|
||||
multiple default StorageClasses is to allow for seamless migration.
|
||||
{{< /note >}}
|
||||
|
||||
You can create a PersistentVolumeClaim without specifying a `storageClassName`
|
||||
for the new PVC, and you can do so even when no default StorageClass exists
|
||||
in your cluster. In this case, the new PVC is created as you defined it, and the
|
||||
`storageClassName` of that PVC remains unset until a default becomes available.
|
||||
|
||||
You can have a cluster without any default StorageClass. If you don't mark any
|
||||
StorageClass as default (and one hasn't been set for you by, for example, a cloud provider),
|
||||
then Kubernetes cannot apply that defaulting for PersistentVolumeClaims that need
|
||||
it.
|
||||
|
||||
If or when a default StorageClass becomes available, the control plane identifies any
|
||||
existing PVCs without `storageClassName`. For the PVCs that either have an empty
|
||||
value for `storageClassName` or do not have this key, the control plane then
|
||||
updates those PVCs to set `storageClassName` to match the new default StorageClass.
|
||||
If you have an existing PVC where the `storageClassName` is `""`, and you configure
|
||||
a default StorageClass, then this PVC will not get updated.
|
||||
|
||||
In order to keep binding to PVs with `storageClassName` set to `""`
|
||||
(while a default StorageClass is present), you need to set the `storageClassName`
|
||||
of the associated PVC to `""`.
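A minimal sketch of a StorageClass marked as the cluster default follows; the provisioner shown is a placeholder, so substitute the provisioner for your storage backend:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cluster-default
  annotations:
    # Marks this class as the cluster's default StorageClass.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/external-provisioner
```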
## Provisioner
|
||||
|
||||
Each StorageClass has a provisioner that determines what volume plugin is used
|
||||
for provisioning PVs. This field must be specified.
|
||||
|
@ -79,11 +98,11 @@ for provisioning PVs. This field must be specified.
|
|||
| FC | - | - |
|
||||
| FlexVolume | - | - |
|
||||
| iSCSI | - | - |
|
||||
| NFS | - | [NFS](#nfs) |
|
||||
| RBD | ✓ | [Ceph RBD](#ceph-rbd) |
|
||||
| VsphereVolume | ✓ | [vSphere](#vsphere) |
|
||||
| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) |
|
||||
| Local | - | [Local](#local) |
|
||||
| NFS | - | [NFS](#nfs) |
|
||||
| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) |
|
||||
| RBD | - | [Ceph RBD](#ceph-rbd) |
|
||||
| VsphereVolume | ✓ | [vSphere](#vsphere) |
|
||||
|
||||
You are not restricted to specifying the "internal" provisioners
|
||||
listed here (whose names are prefixed with "kubernetes.io" and shipped
|
||||
|
@ -101,7 +120,7 @@ For example, NFS doesn't provide an internal provisioner, but an external
|
|||
provisioner can be used. There are also cases when 3rd party storage
|
||||
vendors provide their own external provisioner.
|
||||
|
||||
### Reclaim Policy
|
||||
## Reclaim policy
|
||||
|
||||
PersistentVolumes that are dynamically created by a StorageClass will have the
|
||||
[reclaim policy](/docs/concepts/storage/persistent-volumes/#reclaiming)
|
||||
|
@ -112,23 +131,24 @@ StorageClass object is created, it will default to `Delete`.
|
|||
PersistentVolumes that are created manually and managed via a StorageClass will have
|
||||
whatever reclaim policy they were assigned at creation.
|
||||
|
||||
### Allow Volume Expansion
|
||||
## Volume expansion {#allow-volume-expansion}
|
||||
|
||||
PersistentVolumes can be configured to be expandable. This feature when set to `true`,
|
||||
allows the users to resize the volume by editing the corresponding PVC object.
|
||||
PersistentVolumes can be configured to be expandable. This allows you to resize the
|
||||
volume by editing the corresponding PVC object, requesting a new larger amount of
|
||||
storage.
|
||||
|
||||
The following types of volumes support volume expansion, when the underlying
|
||||
StorageClass has the field `allowVolumeExpansion` set to true.
|
||||
|
||||
{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}}
|
||||
|
||||
| Volume type | Required Kubernetes version |
|
||||
| :------------------- | :-------------------------- |
|
||||
| rbd | 1.11 |
|
||||
| Azure File | 1.11 |
|
||||
| Portworx | 1.11 |
|
||||
| FlexVolume | 1.13 |
|
||||
| CSI | 1.14 (alpha), 1.16 (beta) |
|
||||
| Volume type | Required Kubernetes version for volume expansion |
|
||||
| :------------------- | :----------------------------------------------- |
|
||||
| Azure File | 1.11 |
|
||||
| CSI | 1.24 |
|
||||
| FlexVolume | 1.13 |
|
||||
| Portworx | 1.11 |
|
||||
| rbd | 1.11 |
|
||||
|
||||
{{< /table >}}
|
||||
|
||||
|
@ -136,20 +156,20 @@ StorageClass has the field `allowVolumeExpansion` set to true.
|
|||
You can only use the volume expansion feature to grow a Volume, not to shrink it.
|
||||
{{< /note >}}
|
||||
|
||||
### Mount Options
|
||||
## Mount options
|
||||
|
||||
PersistentVolumes that are dynamically created by a StorageClass will have the
|
||||
mount options specified in the `mountOptions` field of the class.
|
||||
|
||||
If the volume plugin does not support mount options but mount options are
|
||||
specified, provisioning will fail. Mount options are not validated on either
|
||||
specified, provisioning will fail. Mount options are **not** validated on either
|
||||
the class or PV. If a mount option is invalid, the PV mount fails.
|
||||
|
||||
### Volume Binding Mode
|
||||
## Volume binding mode
|
||||
|
||||
The `volumeBindingMode` field controls when
|
||||
[volume binding and dynamic provisioning](/docs/concepts/storage/persistent-volumes/#provisioning)
|
||||
should occur. When unset, "Immediate" mode is used by default.
|
||||
should occur. When unset, `Immediate` mode is used by default.
|
||||
|
||||
The `Immediate` mode indicates that volume binding and dynamic
|
||||
provisioning occurs once the PersistentVolumeClaim is created. For storage
|
||||
|
@ -167,19 +187,21 @@ requirements](/docs/concepts/configuration/manage-resources-containers/),
|
|||
anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
|
||||
and [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration).
|
||||
|
||||
The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:
|
||||
- [Local](#local)
|
||||
The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
|
||||
|
||||
[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning
|
||||
and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver
|
||||
to see its supported topology keys and examples.
|
||||
- CSI volumes, provided that the specific CSI driver supports this
|
||||
|
||||
The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:
|
||||
|
||||
- CSI volumes, provided that the specific CSI driver supports this
|
||||
- [`local`](#local)
|
||||
|
||||
{{< note >}}
|
||||
If you choose to use `WaitForFirstConsumer`, do not use `nodeName` in the Pod spec
|
||||
to specify node affinity.
|
||||
If `nodeName` is used in this case, the scheduler will be bypassed and the PVC will remain in a `Pending` state.
|
||||
|
||||
Instead, you can use node selector for hostname in this case as shown below.
|
||||
Instead, you can use node selector for `kubernetes.io/hostname`:
|
||||
{{< /note >}}
|
||||
|
||||
```yaml
|
||||
|
@ -205,7 +227,7 @@ spec:
|
|||
name: task-pv-storage
|
||||
```
|
||||
|
||||
### Allowed Topologies
|
||||
## Allowed topologies
|
||||
|
||||
When a cluster operator specifies the `WaitForFirstConsumer` volume binding mode, it is no longer necessary
|
||||
to restrict provisioning to specific topologies in most situations. However,
|
||||
|
@ -220,7 +242,7 @@ apiVersion: storage.k8s.io/v1
|
|||
kind: StorageClass
|
||||
metadata:
|
||||
name: standard
|
||||
provisioner: kubernetes.io/gce-pd
|
||||
provisioner: kubernetes.io/example
|
||||
parameters:
|
||||
type: pd-standard
|
||||
volumeBindingMode: WaitForFirstConsumer
|
||||
|
@ -234,11 +256,9 @@ allowedTopologies:
|
|||
|
||||
## Parameters
|
||||
|
||||
Storage Classes have parameters that describe volumes belonging to the storage
|
||||
class. Different parameters may be accepted depending on the `provisioner`. For
|
||||
example, the value `io1`, for the parameter `type`, and the parameter
|
||||
`iopsPerGB` are specific to EBS. When a parameter is omitted, some default is
|
||||
used.
|
||||
StorageClasses have parameters that describe volumes belonging to the storage
|
||||
class. Different parameters may be accepted depending on the `provisioner`.
|
||||
When a parameter is omitted, some default is used.
|
||||
|
||||
There can be at most 512 parameters defined for a StorageClass.
|
||||
The total length of the parameters object including its keys and values cannot
|
||||
|
@ -246,48 +266,43 @@ exceed 256 KiB.
|
|||
|
||||
### AWS EBS
|
||||
|
||||
<!-- maintenance note: OK to remove all mention of awsElasticBlockStore once the v1.27 release of
|
||||
Kubernetes has gone out of support -->
|
||||
|
||||
Kubernetes {{< skew currentVersion >}} does not include an `awsElasticBlockStore` volume type.
|
||||
|
||||
The AWSElasticBlockStore in-tree storage driver was deprecated in the Kubernetes v1.19 release
|
||||
and then removed entirely in the v1.27 release.
|
||||
|
||||
The Kubernetes project suggests that you use the [AWS EBS](https://github.com/kubernetes-sigs/aws-ebs-csi-driver)
|
||||
out-of-tree storage driver instead.
|
||||
|
||||
Here is an example StorageClass for the AWS EBS CSI driver:
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/aws-ebs
|
||||
name: ebs-sc
|
||||
provisioner: ebs.csi.aws.com
|
||||
volumeBindingMode: WaitForFirstConsumer
|
||||
parameters:
|
||||
csi.storage.k8s.io/fstype: xfs
|
||||
type: io1
|
||||
iopsPerGB: "10"
|
||||
fsType: ext4
|
||||
iopsPerGB: "50"
|
||||
encrypted: "true"
|
||||
allowedTopologies:
|
||||
- matchLabelExpressions:
|
||||
- key: topology.ebs.csi.aws.com/zone
|
||||
values:
|
||||
- us-east-2c
|
||||
```
|
||||
|
||||
- `type`: `io1`, `gp2`, `sc1`, `st1`. See
|
||||
[AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
|
||||
for details. Default: `gp2`.
|
||||
- `zone` (Deprecated): AWS zone. If neither `zone` nor `zones` is specified, volumes are
|
||||
generally round-robin-ed across all active zones where Kubernetes cluster
|
||||
has a node. `zone` and `zones` parameters must not be used at the same time.
|
||||
- `zones` (Deprecated): A comma separated list of AWS zone(s). If neither `zone` nor `zones`
|
||||
is specified, volumes are generally round-robin-ed across all active zones
|
||||
where Kubernetes cluster has a node. `zone` and `zones` parameters must not
|
||||
be used at the same time.
|
||||
- `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. AWS
|
||||
volume plugin multiplies this with size of requested volume to compute IOPS
|
||||
of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see
|
||||
[AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)).
|
||||
A string is expected here, i.e. `"10"`, not `10`.
|
||||
- `fsType`: fsType that is supported by kubernetes. Default: `"ext4"`.
|
||||
- `encrypted`: denotes whether the EBS volume should be encrypted or not.
|
||||
Valid values are `"true"` or `"false"`. A string is expected here,
|
||||
i.e. `"true"`, not `true`.
|
||||
- `kmsKeyId`: optional. The full Amazon Resource Name of the key to use when
|
||||
encrypting the volume. If none is supplied but `encrypted` is true, a key is
|
||||
generated by AWS. See AWS docs for valid ARN value.
|
||||
|
||||
{{< note >}}
|
||||
`zone` and `zones` parameters are deprecated and replaced with
|
||||
[allowedTopologies](#allowed-topologies)
|
||||
{{< /note >}}
|
||||
|
||||
### NFS
|
||||
|
||||
To configure NFS storage, you can use the in-tree driver or the
|
||||
[NFS CSI driver for Kubernetes](https://github.com/kubernetes-csi/csi-driver-nfs#readme)
|
||||
(recommended).
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
|
@ -400,7 +415,8 @@ There are few
|
|||
[vSphere examples](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere)
|
||||
which you can try out for persistent volume management inside Kubernetes for vSphere.
|
||||
|
||||
### Ceph RBD
|
||||
### Ceph RBD (deprecated) {#ceph-rbd}
|
||||
|
||||
{{< note >}}
|
||||
{{< feature-state state="deprecated" for_k8s_version="v1.28" >}}
|
||||
This internal provisioner of Ceph RBD is deprecated. Please use
|
||||
|
@ -456,58 +472,18 @@ parameters:
|
|||
|
||||
### Azure Disk
|
||||
|
||||
#### Azure Unmanaged Disk storage class {#azure-unmanaged-disk-storage-class}
|
||||
<!-- maintenance note: OK to remove all mention of azureDisk once the v1.27 release of
|
||||
Kubernetes has gone out of support -->
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/azure-disk
|
||||
parameters:
|
||||
skuName: Standard_LRS
|
||||
location: eastus
|
||||
storageAccount: azure_storage_account_name
|
||||
```
|
||||
Kubernetes {{< skew currentVersion >}} does not include an `azureDisk` volume type.
|
||||
|
||||
- `skuName`: Azure storage account Sku tier. Default is empty.
|
||||
- `location`: Azure storage account location. Default is empty.
|
||||
- `storageAccount`: Azure storage account name. If a storage account is provided,
|
||||
it must reside in the same resource group as the cluster, and `location` is
|
||||
ignored. If a storage account is not provided, a new storage account will be
|
||||
created in the same resource group as the cluster.
|
||||
The `azureDisk` in-tree storage driver was deprecated in the Kubernetes v1.19 release
|
||||
and then removed entirely in the v1.27 release.
|
||||
|
||||
#### Azure Disk storage class (starting from v1.7.2) {#azure-disk-storage-class}
|
||||
The Kubernetes project suggests that you use the [Azure Disk](https://github.com/kubernetes-sigs/azuredisk-csi-driver) third party
|
||||
storage driver instead.
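As a rough sketch, a StorageClass for the Azure Disk CSI driver could look like the following; the SKU and other parameters are illustrative, so check the driver's own documentation for the parameters it actually supports:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-example       # illustrative name
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS        # illustrative SKU; see the driver documentation
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```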
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/azure-disk
|
||||
parameters:
|
||||
storageaccounttype: Standard_LRS
|
||||
kind: managed
|
||||
```
|
||||
|
||||
- `storageaccounttype`: Azure storage account Sku tier. Default is empty.
|
||||
- `kind`: Possible values are `shared`, `dedicated`, and `managed` (default).
|
||||
When `kind` is `shared`, all unmanaged disks are created in a few shared
|
||||
storage accounts in the same resource group as the cluster. When `kind` is
|
||||
`dedicated`, a new dedicated storage account will be created for the new
|
||||
unmanaged disk in the same resource group as the cluster. When `kind` is
|
||||
`managed`, all managed disks are created in the same resource group as
|
||||
the cluster.
|
||||
- `resourceGroup`: Specify the resource group in which the Azure disk will be created.
|
||||
It must be an existing resource group name. If it is unspecified, the disk will be
|
||||
placed in the same resource group as the current Kubernetes cluster.
|
||||
|
||||
* Premium VM can attach both Standard_LRS and Premium_LRS disks, while Standard
|
||||
VM can only attach Standard_LRS disks.
|
||||
* Managed VM can only attach managed disks and unmanaged VM can only attach
|
||||
unmanaged disks.
|
||||
|
||||
### Azure File
|
||||
### Azure File (deprecated) {#azure-file}
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
|
@ -521,7 +497,7 @@ parameters:
|
|||
storageAccount: azure_storage_account_name
|
||||
```
|
||||
|
||||
- `skuName`: Azure storage account Sku tier. Default is empty.
|
||||
- `skuName`: Azure storage account SKU tier. Default is empty.
|
||||
- `location`: Azure storage account location. Default is empty.
|
||||
- `storageAccount`: Azure storage account name. Default is empty. If a storage
|
||||
account is not provided, all storage accounts associated with the resource
|
||||
|
@ -547,7 +523,7 @@ In a multi-tenancy context, it is strongly recommended to set the value for
|
|||
`secretNamespace` explicitly, otherwise the storage account credentials may
|
||||
be read by other users.
|
||||
|
||||
### Portworx Volume
|
||||
### Portworx volume (deprecated) {#portworx-volume}
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
|
@ -592,9 +568,10 @@ provisioner: kubernetes.io/no-provisioner
|
|||
volumeBindingMode: WaitForFirstConsumer
|
||||
```
|
||||
|
||||
Local volumes do not currently support dynamic provisioning, however a StorageClass
|
||||
should still be created to delay volume binding until Pod scheduling. This is
|
||||
specified by the `WaitForFirstConsumer` volume binding mode.
|
||||
Local volumes do not support dynamic provisioning in Kubernetes {{< skew currentVersion >}};
|
||||
however a StorageClass should still be created to delay volume binding until a Pod is actually
|
||||
scheduled to the appropriate node. This is specified by the `WaitForFirstConsumer` volume
|
||||
binding mode.
|
||||
|
||||
Delaying volume binding allows the scheduler to consider all of a Pod's
|
||||
scheduling constraints when choosing an appropriate PersistentVolume for a
|
||||
|
|
|
@ -0,0 +1,131 @@
|
|||
---
|
||||
reviewers:
|
||||
- msau42
|
||||
- xing-yang
|
||||
title: Volume Attributes Classes
|
||||
content_type: concept
|
||||
weight: 40
|
||||
---
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
This page assumes that you are familiar with [StorageClasses](/docs/concepts/storage/storage-classes/),
|
||||
[volumes](/docs/concepts/storage/volumes/) and [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
|
||||
in Kubernetes.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
A VolumeAttributesClass provides a way for administrators to describe the mutable
|
||||
"classes" of storage they offer. Different classes might map to different quality-of-service levels.
|
||||
Kubernetes itself is unopinionated about what these classes represent.
|
||||
|
||||
This is an alpha feature and disabled by default.
|
||||
|
||||
If you want to test the feature whilst it's alpha, you need to enable the `VolumeAttributesClass`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the kube-controller-manager and the kube-apiserver. You use the `--feature-gates` command line argument:
|
||||
|
||||
```
|
||||
--feature-gates="...,VolumeAttributesClass=true"
|
||||
```
|
||||
|
||||
Additionally, you can only use VolumeAttributesClasses with storage backed by
|
||||
{{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}}, and only where the
|
||||
relevant CSI driver implements the `ModifyVolume` API.
|
||||
|
||||
## The VolumeAttributesClass API
|
||||
|
||||
Each VolumeAttributesClass contains the `driverName` and `parameters`, which are
|
||||
used when a PersistentVolume (PV) belonging to the class needs to be dynamically provisioned
|
||||
or modified.
|
||||
|
||||
The name of a VolumeAttributesClass object is significant and is how users can request a particular class.
|
||||
Administrators set the name and other parameters of a class when first creating VolumeAttributesClass objects.
|
||||
While the name of a VolumeAttributesClass object in a `PersistentVolumeClaim` is mutable, the parameters in an existing class are immutable.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1alpha1
|
||||
kind: VolumeAttributesClass
|
||||
metadata:
|
||||
name: silver
|
||||
driverName: pd.csi.storage.gke.io
|
||||
parameters:
|
||||
provisioned-iops: "3000"
|
||||
provisioned-throughput: "50"
|
||||
```
|
||||
|
||||
|
||||
### Provisioner
|
||||
|
||||
Each VolumeAttributesClass has a provisioner that determines what volume plugin is used for provisioning PVs. The field `driverName` must be specified.
|
||||
|
||||
The feature support for VolumeAttributesClass is implemented in [kubernetes-csi/external-provisioner](https://github.com/kubernetes-csi/external-provisioner).
|
||||
|
||||
You are not restricted to specifying the [kubernetes-csi/external-provisioner](https://github.com/kubernetes-csi/external-provisioner). You can also run and specify external provisioners,
|
||||
which are independent programs that follow a specification defined by Kubernetes.
|
||||
Authors of external provisioners have full discretion over where their code lives, how
|
||||
the provisioner is shipped, how it needs to be run, what volume plugin it uses, etc.
|
||||
|
||||
|
||||
### Resizer
|
||||
|
||||
Each VolumeAttributesClass has a resizer that determines what volume plugin is used for modifying PVs. The field `driverName` must be specified.
|
||||
|
||||
The modifying volume feature support for VolumeAttributesClass is implemented in [kubernetes-csi/external-resizer](https://github.com/kubernetes-csi/external-resizer).
|
||||
|
||||
For example, suppose an existing PersistentVolumeClaim is using a VolumeAttributesClass named `silver`:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: test-pv-claim
|
||||
spec:
|
||||
…
|
||||
volumeAttributesClassName: silver
|
||||
…
|
||||
```
|
||||
|
||||
A new VolumeAttributesClass named `gold` is available in the cluster:
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1alpha1
|
||||
kind: VolumeAttributesClass
|
||||
metadata:
|
||||
name: gold
|
||||
driverName: pd.csi.storage.gke.io
|
||||
parameters:
|
||||
iops: "4000"
|
||||
throughput: "60"
|
||||
```
|
||||
|
||||
|
||||
The end user can update the PVC to use the new VolumeAttributesClass `gold` and apply the change:
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: test-pv-claim
|
||||
spec:
|
||||
…
|
||||
volumeAttributesClassName: gold
|
||||
…
|
||||
```
|
||||
|
||||
|
||||
## Parameters
|
||||
|
||||
VolumeAttributesClasses have parameters that describe volumes belonging to them. Different parameters may be accepted
|
||||
depending on the provisioner or the resizer. For example, the value `4000`, for the parameter `iops`,
|
||||
and the parameter `throughput` are specific to GCE PD.
|
||||
When a parameter is omitted, the default is used at volume provisioning.
|
||||
If a user applies the PVC with a different VolumeAttributesClass that omits some parameters, the default values of
|
||||
those parameters may be used, depending on the CSI driver implementation.
|
||||
Please refer to the related CSI driver documentation for more details.
|
||||
|
||||
There can be at most 512 parameters defined for a VolumeAttributesClass.
|
||||
The total length of the parameters object including its keys and values cannot exceed 256 KiB.
|
|
@ -349,92 +349,161 @@ and then removed entirely in the v1.26 release.
|
|||
|
||||
### hostPath {#hostpath}
|
||||
|
||||
{{< warning >}}
|
||||
HostPath volumes present many security risks, and it is a best practice to avoid the use of
|
||||
HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the
|
||||
required file or directory, and mounted as ReadOnly.
|
||||
|
||||
If restricting HostPath access to specific directories through AdmissionPolicy, `volumeMounts` MUST
|
||||
be required to use `readOnly` mounts for the policy to be effective.
|
||||
{{< /warning >}}
|
||||
|
||||
A `hostPath` volume mounts a file or directory from the host node's filesystem
|
||||
into your Pod. This is not something that most Pods will need, but it offers a
|
||||
powerful escape hatch for some applications.
|
||||
|
||||
For example, some uses for a `hostPath` are:
|
||||
{{< warning >}}
|
||||
Using the `hostPath` volume type presents many security risks.
|
||||
If you can avoid using a `hostPath` volume, you should. For example,
|
||||
define a [`local` PersistentVolume](#local), and use that instead.
|
||||
|
||||
* running a container that needs access to Docker internals; use a `hostPath`
|
||||
of `/var/lib/docker`
|
||||
* running cAdvisor in a container; use a `hostPath` of `/sys`
|
||||
* allowing a Pod to specify whether a given `hostPath` should exist prior to the
|
||||
Pod running, whether it should be created, and what it should exist as
|
||||
If you are restricting access to specific directories on the node using
|
||||
admission-time validation, that restriction is only effective when you
|
||||
additionally require that any mounts of that `hostPath` volume are
|
||||
**read only**. If you allow a read-write mount of any host path by an
|
||||
untrusted Pod, the containers in that Pod may be able to subvert the
|
||||
read-write host mount.
|
||||
|
||||
In addition to the required `path` property, you can optionally specify a `type` for a `hostPath` volume.
|
||||
---
|
||||
|
||||
The supported values for field `type` are:
|
||||
Take care when using `hostPath` volumes, whether these are mounted as read-only
|
||||
or as read-write, because:
|
||||
|
||||
* Access to the host filesystem can expose privileged system credentials (such as for the kubelet) or privileged APIs
|
||||
(such as the container runtime socket), that can be used for container escape or to attack other
|
||||
parts of the cluster.
|
||||
* Pods with identical configuration (such as created from a PodTemplate) may
|
||||
behave differently on different nodes due to different files on the nodes.
|
||||
{{< /warning >}}
|
||||
|
||||
Some uses for a `hostPath` are:
|
||||
|
||||
* running a container that needs access to node-level system components
|
||||
(such as a container that transfers system logs to a central location,
|
||||
accessing those logs using a read-only mount of `/var/log`)
|
||||
* making a configuration file stored on the host system available read-only
|
||||
to a {{< glossary_tooltip text="static pod" term_id="static-pod" >}};
|
||||
unlike normal Pods, static Pods cannot access ConfigMaps
|
||||
|
||||
#### `hostPath` volume types
|
||||
|
||||
In addition to the required `path` property, you can optionally specify a
|
||||
`type` for a `hostPath` volume.
|
||||
|
||||
The available values for `type` are:
|
||||
|
||||
<!-- empty string represented using U+200C ZERO WIDTH NON-JOINER -->
|
||||
|
||||
| Value | Behavior |
|
||||
|:------|:---------|
|
||||
| | Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the hostPath volume. |
|
||||
| `""` | Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the `hostPath` volume. |
|
||||
| `DirectoryOrCreate` | If nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with Kubelet. |
|
||||
| `Directory` | A directory must exist at the given path |
|
||||
| `FileOrCreate` | If nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, having the same group and ownership with Kubelet. |
|
||||
| `File` | A file must exist at the given path |
|
||||
| `Socket` | A UNIX socket must exist at the given path |
|
||||
| `CharDevice` | A character device must exist at the given path |
|
||||
| `BlockDevice` | A block device must exist at the given path |
|
||||
| `CharDevice` | _(Linux nodes only)_ A character device must exist at the given path |
|
||||
| `BlockDevice` | _(Linux nodes only)_ A block device must exist at the given path |
|
||||
|
||||
Watch out when using this type of volume, because:
|
||||
{{< caution >}}
|
||||
The `FileOrCreate` mode does **not** create the parent directory of the file. If the parent directory
|
||||
of the mounted file does not exist, the pod fails to start. To ensure that this mode works,
|
||||
you can try to mount directories and files separately, as shown in the
|
||||
[`FileOrCreate` example](#hostpath-fileorcreate-example) for `hostPath`.
|
||||
{{< /caution >}}
|
||||
|
||||
* HostPaths can expose privileged system credentials (such as for the Kubelet) or privileged APIs
|
||||
(such as container runtime socket), which can be used for container escape or to attack other
|
||||
parts of the cluster.
|
||||
* Pods with identical configuration (such as created from a PodTemplate) may
|
||||
behave differently on different nodes due to different files on the nodes
|
||||
* The files or directories created on the underlying hosts are only writable by root. You
|
||||
either need to run your process as root in a
|
||||
[privileged Container](/docs/tasks/configure-pod-container/security-context/) or modify the file
|
||||
permissions on the host to be able to write to a `hostPath` volume
|
||||
Some files or directories created on the underlying hosts might only be
|
||||
accessible by root. You then either need to run your process as root in a
|
||||
[privileged container](/docs/tasks/configure-pod-container/security-context/)
|
||||
or modify the file permissions on the host to be able to read from
|
||||
(or write to) a `hostPath` volume.
|
||||
|
||||
#### hostPath configuration example
|
||||
|
||||
```yaml
|
||||
{{< tabs name="hostpath_examples" >}}
|
||||
{{< tab name="Linux node" codelang="yaml" >}}
|
||||
---
|
||||
# This manifest mounts /data/foo on the host as /foo inside the
|
||||
# single container that runs within the hostpath-example-linux Pod.
|
||||
#
|
||||
# The mount into the container is read-only.
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pd
|
||||
name: hostpath-example-linux
|
||||
spec:
|
||||
os: { name: linux }
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
containers:
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
- name: example-container
|
||||
image: registry.k8s.io/test-webserver
|
||||
volumeMounts:
|
||||
- mountPath: /test-pd
|
||||
name: test-volume
|
||||
- mountPath: /foo
|
||||
name: example-volume
|
||||
readOnly: true
|
||||
volumes:
|
||||
- name: test-volume
|
||||
- name: example-volume
|
||||
# mount /data/foo, but only if that directory already exists
|
||||
hostPath:
|
||||
# directory location on host
|
||||
path: /data
|
||||
# this field is optional
|
||||
type: Directory
|
||||
```
|
||||
|
||||
{{< caution >}}
|
||||
The `FileOrCreate` mode does not create the parent directory of the file. If the parent directory
|
||||
of the mounted file does not exist, the pod fails to start. To ensure that this mode works,
|
||||
you can try to mount directories and files separately, as shown in the
|
||||
[`FileOrCreate`configuration](#hostpath-fileorcreate-example).
|
||||
{{< /caution >}}
|
||||
path: /data/foo # directory location on host
|
||||
type: Directory # this field is optional
|
||||
{{< /tab >}}
|
||||
{{< tab name="Windows node" codelang="yaml" >}}
|
||||
---
|
||||
# This manifest mounts C:\Data\foo on the host as C:\foo, inside the
|
||||
# single container that runs within the hostpath-example-windows Pod.
|
||||
#
|
||||
# The mount into the container is read-only.
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: hostpath-example-windows
|
||||
spec:
|
||||
os: { name: windows }
|
||||
nodeSelector:
|
||||
kubernetes.io/os: windows
|
||||
containers:
|
||||
- name: example-container
|
||||
image: microsoft/windowsservercore:1709
|
||||
volumeMounts:
|
||||
- name: example-volume
|
||||
mountPath: "C:\\foo"
|
||||
readOnly: true
|
||||
volumes:
|
||||
# mount C:\Data\foo from the host, but only if that directory already exists
|
||||
- name: example-volume
|
||||
hostPath:
|
||||
path: "C:\\Data\\foo" # directory location on host
|
||||
type: Directory # this field is optional
|
||||
{{< /tab >}}
|
||||
{{< /tabs >}}
|
||||
|
||||
#### hostPath FileOrCreate configuration example {#hostpath-fileorcreate-example}
|
||||
|
||||
The following manifest defines a Pod that mounts `/var/local/aaa`
|
||||
inside the single container in the Pod. If the node does not
|
||||
already have a path `/var/local/aaa`, the kubelet creates
|
||||
it as a directory and then mounts it into the Pod.
|
||||
|
||||
If `/var/local/aaa` already exists but is not a directory,
|
||||
the Pod fails. Additionally, the kubelet attempts to make
|
||||
a file named `/var/local/aaa/1.txt` inside that directory
|
||||
(as seen from the host); if something already exists at
|
||||
that path and isn't a regular file, the Pod fails.
|
||||
|
||||
Here's the example manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-webserver
|
||||
spec:
|
||||
os: { name: linux }
|
||||
nodeSelector:
|
||||
kubernetes.io/os: linux
|
||||
containers:
|
||||
- name: test-webserver
|
||||
image: registry.k8s.io/test-webserver:latest
|
||||
|
|
|
@ -37,7 +37,7 @@ you can deploy worker nodes running either Windows or Linux.
|
|||
|
||||
Windows {{< glossary_tooltip text="nodes" term_id="node" >}} are
|
||||
[supported](#windows-os-version-support) provided that the operating system is
|
||||
Windows Server 2019.
|
||||
Windows Server 2019 or Windows Server 2022.
|
||||
|
||||
This document uses the term *Windows containers* to mean Windows containers with
|
||||
process isolation. Kubernetes does not support running Windows containers with
|
||||
|
@ -320,8 +320,7 @@ The following container runtimes work with Windows:
|
|||
You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+
|
||||
as the container runtime for Kubernetes nodes that run Windows.
|
||||
|
||||
Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd).
|
||||
|
||||
Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd).
|
||||
{{< note >}}
|
||||
There is a [known limitation](/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations)
|
||||
when using GMSA with containerd to access Windows network shares, which requires a
|
||||
|
|
|
@ -55,9 +55,9 @@ The `.spec.schedule` field is required. The value of that field follows the [Cro
|
|||
# │ ┌───────────── hour (0 - 23)
|
||||
# │ │ ┌───────────── day of the month (1 - 31)
|
||||
# │ │ │ ┌───────────── month (1 - 12)
|
||||
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
|
||||
# │ │ │ │ │ 7 is also Sunday on some systems)
|
||||
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
|
||||
# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat
|
||||
# │ │ │ │ │
|
||||
# │ │ │ │ │
|
||||
# * * * * *
|
||||
```
|
||||
|
@ -181,15 +181,14 @@ A time zone database from the Go standard library is included in the binaries an
|
|||
|
||||
### Unsupported TimeZone specification
|
||||
|
||||
The implementation of the CronJob API in Kubernetes {{< skew currentVersion >}} lets you set
|
||||
the `.spec.schedule` field to include a timezone; for example: `CRON_TZ=UTC * * * * *`
|
||||
or `TZ=UTC * * * * *`.
|
||||
Specifying a timezone using `CRON_TZ` or `TZ` variables inside `.spec.schedule`
|
||||
is **not officially supported** (and never has been).
|
||||
|
||||
Specifying a timezone that way is **not officially supported** (and never has been).
|
||||
|
||||
If you try to set a schedule that includes `TZ` or `CRON_TZ` timezone specification,
|
||||
Kubernetes reports a [warning](/blog/2020/09/03/warnings/) to the client.
|
||||
Future versions of Kubernetes will prevent setting the unofficial timezone mechanism entirely.
|
||||
Starting with Kubernetes 1.29, if you try to set a schedule that includes `TZ` or `CRON_TZ`
|
||||
timezone specification, Kubernetes will fail to create the resource with a validation
|
||||
error.
|
||||
Updates to CronJobs already using `TZ` or `CRON_TZ` will continue to report a
|
||||
[warning](/blog/2020/09/03/warnings/) to the client.
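If you need a schedule evaluated in a specific time zone, use the supported `.spec.timeZone` field instead of embedding `TZ` or `CRON_TZ` in the schedule. The following is a minimal sketch with an illustrative name, image, and schedule:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob           # illustrative name
spec:
  timeZone: "Etc/UTC"             # supported way to specify a time zone
  schedule: "0 3 * * *"           # 03:00 every day, in the time zone above
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: example
            image: busybox:1.36
            command: ["sh", "-c", "date"]
```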
|
||||
|
||||
### Modifying a CronJob
|
||||
|
||||
|
|
|
@ -382,7 +382,7 @@ from failed Jobs is not lost inadvertently.
|
|||
|
||||
### Backoff limit per index {#backoff-limit-per-index}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
{{< note >}}
|
||||
You can only configure the backoff limit per index for an [Indexed](#completion-mode) Job, if you
|
||||
|
@ -395,7 +395,7 @@ for pod failures independently for each index. To do so, set the
|
|||
`.spec.backoffLimitPerIndex` to specify the maximal number of pod failures
|
||||
per index.
|
||||
|
||||
When the per-index backoff limit is exceeded for an index, Kuberentes considers the index as failed and adds it to the
|
||||
When the per-index backoff limit is exceeded for an index, Kubernetes considers the index as failed and adds it to the
|
||||
`.status.failedIndexes` field. The succeeded indexes, those with successfully
|
||||
executed pods, are recorded in the `.status.completedIndexes` field, regardless of whether you set
|
||||
the `backoffLimitPerIndex` field.
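For illustration, here is a sketch of an Indexed Job that tolerates at most one Pod failure per index; the name, image, and the optional `maxFailedIndexes` value are assumptions for the example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: backoff-per-index-example   # illustrative name
spec:
  completions: 4
  parallelism: 2
  completionMode: Indexed           # backoffLimitPerIndex requires an Indexed Job
  backoffLimitPerIndex: 1           # each index tolerates at most 1 Pod failure
  maxFailedIndexes: 2               # optional: mark the Job failed once 2 indexes fail
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: example
        image: busybox:1.36
        command: ["sh", "-c", "echo processing index $JOB_COMPLETION_INDEX"]
```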
|
||||
|
@ -940,7 +940,7 @@ the Job status, allowing the Pod to be removed by other controllers or users.
|
|||
|
||||
{{< note >}}
|
||||
See [My pod stays terminating](/docs/tasks/debug/debug-application/debug-pods/) if you
|
||||
observe that pods from a Job are stucked with the tracking finalizer.
|
||||
observe that pods from a Job are stuck with the tracking finalizer.
|
||||
{{< /note >}}
|
||||
|
||||
### Elastic Indexed Jobs
|
||||
|
@ -958,11 +958,12 @@ scaling an indexed Job, such as MPI, Horovord, Ray, and PyTorch training jobs.
|
|||
|
||||
### Delayed creation of replacement pods {#pod-replacement-policy}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
{{< note >}}
|
||||
You can only set `podReplacementPolicy` on Jobs if you enable the `JobPodReplacementPolicy`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
(enabled by default).
|
||||
{{< /note >}}
|
||||
|
||||
By default, the Job controller recreates Pods as soon as they either fail or are terminating (have a deletion timestamp).
|
||||
|
|
|
@ -225,7 +225,7 @@ pod1 1/1 Running 0 36s
|
|||
pod2 1/1 Running 0 36s
|
||||
```
|
||||
|
||||
In this manner, a ReplicaSet can own a non-homogenous set of Pods
|
||||
In this manner, a ReplicaSet can own a non-homogeneous set of Pods
|
||||
|
||||
## Writing a ReplicaSet manifest
|
||||
|
||||
|
|
|
@ -116,6 +116,12 @@ spec:
|
|||
storage: 1Gi
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
This example uses the `ReadWriteOnce` access mode, for simplicity. For
|
||||
production use, the Kubernetes project recommends using the `ReadWriteOncePod`
|
||||
access mode instead.
|
||||
{{< /note >}}
|
||||
|
||||
In the above example:
|
||||
|
||||
* A Headless Service, named `nginx`, is used to control the network domain.
|
||||
|
|
|
@ -111,9 +111,9 @@ Some Pods have {{< glossary_tooltip text="init containers" term_id="init-contain
|
|||
as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}.
|
||||
By default, init containers run and complete before the app containers are started.
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
Enabling the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
Enabled by default, the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
allows you to specify `restartPolicy: Always` for init containers.
|
||||
Setting the `Always` restart policy ensures that the init containers where you set it are
|
||||
kept running during the entire lifetime of the Pod.
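Here is a minimal sketch of what that looks like in practice; the container names, image, and log path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-sidecar-example   # illustrative name
spec:
  initContainers:
  - name: log-shipper              # sidecar-style init container
    image: busybox:1.36
    restartPolicy: Always          # keeps this init container running for the Pod's lifetime
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
```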
|
||||
|
|
|
@ -14,6 +14,9 @@ Init containers can contain utilities or setup scripts not present in an app ima
|
|||
You can specify init containers in the Pod specification alongside the `containers`
|
||||
array (which describes app containers).
|
||||
|
||||
In Kubernetes, a [sidecar container](/docs/concepts/workloads/pods/sidecar-containers/) is a container that
|
||||
starts before the main application container and _continues to run_. This document is about init containers:
|
||||
containers that run to completion during Pod initialization.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -48,14 +51,33 @@ including resource limits, [volumes](/docs/concepts/storage/volumes/), and secur
|
|||
resource requests and limits for an init container are handled differently,
|
||||
as documented in [Resource sharing within containers](#resource-sharing-within-containers).
|
||||
|
||||
Also, init containers do not support `lifecycle`, `livenessProbe`, `readinessProbe`, or
|
||||
`startupProbe` because they must run to completion before the Pod can be ready.
|
||||
Regular init containers (in other words: excluding sidecar containers) do not support the
|
||||
`lifecycle`, `livenessProbe`, `readinessProbe`, or `startupProbe` fields. Init containers
|
||||
must run to completion before the Pod can be ready; sidecar containers continue running
|
||||
during a Pod's lifetime, and _do_ support some probes. See [sidecar container](/docs/concepts/workloads/pods/sidecar-containers/)
|
||||
for further details about sidecar containers.
|
||||
|
||||
If you specify multiple init containers for a Pod, kubelet runs each init
|
||||
container sequentially. Each init container must succeed before the next can run.
|
||||
When all of the init containers have run to completion, kubelet initializes
|
||||
the application containers for the Pod and runs them as usual.
|
||||
|
||||
### Differences from sidecar containers
|
||||
|
||||
Init containers run and complete their tasks before the main application container starts.
|
||||
Unlike [sidecar containers](/docs/concepts/workloads/pods/sidecar-containers),
|
||||
init containers are not continuously running alongside the main containers.
|
||||
|
||||
Init containers run to completion sequentially, and the main container does not start
|
||||
until all the init containers have successfully completed.
|
||||
|
||||
Init containers do not support `lifecycle`, `livenessProbe`, `readinessProbe`, or
|
||||
`startupProbe` whereas sidecar containers support all these [probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe) to control their lifecycle.
|
||||
|
||||
Init containers share the same resources (CPU, memory, network) with the main application
|
||||
containers but do not interact directly with them. They can, however, use shared volumes
|
||||
for data exchange.
|
||||
|
||||
## Using init containers
|
||||
|
||||
Because init containers have separate images from app containers, they
|
||||
|
@ -289,51 +311,9 @@ The Pod which is already running correctly would be killed by `activeDeadlineSec
|
|||
The name of each app and init container in a Pod must be unique; a
|
||||
validation error is thrown for any container sharing a name with another.
|
||||
|
||||
#### API for sidecar containers
|
||||
### Resource sharing within containers
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="alpha" >}}
|
||||
|
||||
Starting with Kubernetes 1.28 in alpha, a feature gate named `SidecarContainers`
|
||||
allows you to specify a `restartPolicy` for init containers which is independent of
|
||||
the Pod and other init containers. Container [probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe)
|
||||
can also be added to control their lifecycle.
|
||||
|
||||
If an init container is created with its `restartPolicy` set to `Always`, it will
|
||||
start and remain running during the entire life of the Pod, which is useful for
|
||||
running supporting services separated from the main application containers.
|
||||
|
||||
If a `readinessProbe` is specified for this init container, its result will be used
|
||||
to determine the `ready` state of the Pod.
|
||||
|
||||
Since these containers are defined as init containers, they benefit from the same
|
||||
ordering and sequential guarantees as other init containers, allowing them to
|
||||
be mixed with other init containers into complex Pod initialization flows.
|
||||
|
||||
Compared to regular init containers, sidecar-style init containers continue to
|
||||
run and the next init container can begin starting once the kubelet has set
|
||||
the `started` container status for the sidecar-style init container to true.
|
||||
That status either becomes true because there is a process running in the
|
||||
container and no startup probe defined, or
|
||||
as a result of its `startupProbe` succeeding.
|
||||
|
||||
This feature can be used to implement the sidecar container pattern in a more
|
||||
robust way, as the kubelet always restarts a sidecar container if it fails.
|
||||
|
||||
Here's an example of a Deployment with two containers, one of which is a sidecar:
|
||||
|
||||
{{% code_sample language="yaml" file="application/deployment-sidecar.yaml" %}}
|
||||
|
||||
This feature is also useful for running Jobs with sidecars, as the sidecar
|
||||
container will not prevent the Job from completing after the main container
|
||||
has finished.
|
||||
|
||||
Here's an example of a Job with two containers, one of which is a sidecar:
|
||||
|
||||
{{% code_sample language="yaml" file="application/job/job-sidecar.yaml" %}}
|
||||
|
||||
#### Resource sharing within containers
|
||||
|
||||
Given the ordering and execution for init containers, the following rules
|
||||
Given the order of execution for init, sidecar and app containers, the following rules
|
||||
for resource usage apply:
|
||||
|
||||
* The highest of any particular resource request or limit defined on all init
|
||||
|
@ -354,6 +334,10 @@ limit.
|
|||
Pod level control groups (cgroups) are based on the effective Pod request and
|
||||
limit, the same as the scheduler.
|
||||
|
||||
{{< comment >}}
|
||||
This section also present under [sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/) page.
|
||||
If you're editing this section, change both places.
|
||||
{{< /comment >}}
|
||||
|
||||
### Pod restart reasons
|
||||
|
||||
|
@ -373,7 +357,9 @@ Kubernetes, consult the documentation for the version you are using.
|
|||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
|
||||
* Learn how to [debug init containers](/docs/tasks/debug/debug-application/debug-init-containers/)
|
||||
* Read about an overview of [kubelet](/docs/reference/command-line-tools-reference/kubelet/) and [kubectl](/docs/reference/kubectl/)
|
||||
* Learn about the [types of probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe): liveness, readiness, startup probe.
|
||||
Learn more about the following:
|
||||
* [Creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container).
|
||||
* [Debug init containers](/docs/tasks/debug/debug-application/debug-init-containers/).
|
||||
* Overview of [kubelet](/docs/reference/command-line-tools-reference/kubelet/) and [kubectl](/docs/reference/kubectl/).
|
||||
* [Types of probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe): liveness, readiness, startup probe.
|
||||
* [Sidecar containers](/docs/concepts/workloads/pods/sidecar-containers).
|
||||
|
|
|
@ -150,11 +150,22 @@ the `Terminated` state.
|
|||
The `spec` of a Pod has a `restartPolicy` field with possible values Always, OnFailure,
|
||||
and Never. The default value is Always.
|
||||
|
||||
The `restartPolicy` applies to all containers in the Pod. `restartPolicy` only
|
||||
refers to restarts of the containers by the kubelet on the same node. After containers
|
||||
in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s,
|
||||
40s, …), that is capped at five minutes. Once a container has executed for 10 minutes
|
||||
without any problems, the kubelet resets the restart backoff timer for that container.
|
||||
The `restartPolicy` for a Pod applies to {{< glossary_tooltip text="app containers" term_id="app-container" >}}
|
||||
in the Pod and to regular [init containers](/docs/concepts/workloads/pods/init-containers/).
|
||||
[Sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/)
|
||||
ignore the Pod-level `restartPolicy` field: in Kubernetes, a sidecar is defined as an
|
||||
entry inside `initContainers` that has its container-level `restartPolicy` set to `Always`.
|
||||
For init containers that exit with an error, the kubelet restarts the init container if
|
||||
the Pod level `restartPolicy` is either `OnFailure` or `Always`.
|
||||
|
||||
When the kubelet is handling container restarts according to the configured restart
|
||||
policy, that only applies to restarts that make replacement containers inside the
|
||||
same Pod and running on the same node. After containers in a Pod exit, the kubelet
|
||||
restarts them with an exponential back-off delay (10s, 20s, 40s, …), which is capped at
|
||||
five minutes. Once a container has executed for 10 minutes without any problems, the
|
||||
kubelet resets the restart backoff timer for that container.
|
||||
[Sidecar containers and Pod lifecycle](/docs/concepts/workloads/pods/sidecar-containers/#sidecar-containers-and-pod-lifecycle)
|
||||
explains the behaviour of init containers when a `restartPolicy` field is specified on them.
|
||||
|
||||
## Pod conditions
|
||||
|
||||
|
@ -164,7 +175,7 @@ through which the Pod has or has not passed. Kubelet manages the following
|
|||
PodConditions:
|
||||
|
||||
* `PodScheduled`: the Pod has been scheduled to a node.
|
||||
* `PodReadyToStartContainers`: (alpha feature; must be [enabled explicitly](#pod-has-network)) the
|
||||
* `PodReadyToStartContainers`: (beta feature; enabled by [default](#pod-has-network)) the
|
||||
Pod sandbox has been successfully created and networking configured.
|
||||
* `ContainersReady`: all containers in the Pod are ready.
|
||||
* `Initialized`: all [init containers](/docs/concepts/workloads/pods/init-containers/)
|
||||
|
@ -242,19 +253,21 @@ When a Pod's containers are Ready but at least one custom condition is missing o
|
|||
|
||||
### Pod network readiness {#pod-has-network}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.25" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
{{< note >}}
|
||||
This condition was renamed from PodHasNetwork to PodReadyToStartContainers.
|
||||
During its early development, this condition was named `PodHasNetwork`.
|
||||
{{< /note >}}
|
||||
|
||||
After a Pod gets scheduled on a node, it needs to be admitted by the Kubelet and
|
||||
have any volumes mounted. Once these phases are complete, the Kubelet works with
|
||||
After a Pod gets scheduled on a node, it needs to be admitted by the kubelet and
|
||||
to have any required storage volumes mounted. Once these phases are complete,
|
||||
the kubelet works with
|
||||
a container runtime (using {{< glossary_tooltip term_id="cri" >}}) to set up a
|
||||
runtime sandbox and configure networking for the Pod. If the
|
||||
`PodReadyToStartContainersCondition` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled,
|
||||
Kubelet reports whether a pod has reached this initialization milestone through
|
||||
the `PodReadyToStartContainers` condition in the `status.conditions` field of a Pod.
|
||||
`PodReadyToStartContainersCondition`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled
|
||||
(it is enabled by default for Kubernetes {{< skew currentVersion >}}), the
|
||||
`PodReadyToStartContainers` condition will be added to the `status.conditions` field of a Pod.
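For example, once the runtime sandbox and Pod networking are ready, the Pod status might contain an entry along these lines (a hypothetical excerpt; timestamps and ordering will vary):

```yaml
status:
  conditions:
  - type: PodReadyToStartContainers
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2024-01-01T00:00:00Z"   # illustrative timestamp
```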
|
||||
|
||||
The `PodReadyToStartContainers` condition is set to `False` by the Kubelet when it detects a
|
||||
Pod does not have a runtime sandbox with networking configured. This occurs in
|
||||
|
@ -504,6 +517,22 @@ termination grace period _begins_. The behavior above is described when the
|
|||
feature gate `EndpointSliceTerminatingCondition` is enabled.
|
||||
{{</note>}}
|
||||
|
||||
{{<note>}}
|
||||
Beginning with Kubernetes 1.29, if your Pod includes one or more sidecar containers
|
||||
(init containers with an Always restart policy), the kubelet will delay sending
|
||||
the TERM signal to these sidecar containers until the last main container has fully terminated.
|
||||
The sidecar containers will be terminated in the reverse order they are defined in the Pod spec.
|
||||
This ensures that sidecar containers continue serving the other containers in the Pod until they are no longer needed.
|
||||
|
||||
Note that slow termination of a main container will also delay the termination of the sidecar containers.
|
||||
If the grace period expires before the termination process is complete, the Pod may enter emergency termination.
|
||||
In this case, all remaining containers in the Pod will be terminated simultaneously with a short grace period.
|
||||
|
||||
Similarly, if the Pod has a preStop hook that exceeds the termination grace period, emergency termination may occur.
|
||||
In general, if you have used preStop hooks to control the termination order without sidecar containers, you can now
|
||||
remove them and allow the kubelet to manage sidecar termination automatically.
|
||||
{{</note>}}
|
||||
|
||||
1. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends
|
||||
`SIGKILL` to any processes still running in any container in the Pod.
|
||||
The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
|
||||
|
@ -582,6 +611,8 @@ for more details.
|
|||
|
||||
* Learn more about [container lifecycle hooks](/docs/concepts/containers/container-lifecycle-hooks/).
|
||||
|
||||
* Learn more about [sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/).
|
||||
|
||||
* For detailed information about Pod and container status in the API, see
|
||||
the API reference documentation covering
|
||||
[`status`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus) for Pod.
|
||||
[`status`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus) for Pod.
|
|
@ -0,0 +1,123 @@
|
|||
---
|
||||
title: Sidecar Containers
|
||||
content_type: concept
|
||||
weight: 50
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
|
||||
|
||||
Sidecar containers are the secondary containers that run along with the main
|
||||
application container within the same {{< glossary_tooltip text="Pod" term_id="pod" >}}.
|
||||
These containers are used to enhance or to extend the functionality of the main application
|
||||
container by providing additional services, or functionality such as logging, monitoring,
|
||||
security, or data synchronization, without directly altering the primary application code.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Enabling sidecar containers
|
||||
|
||||
Enabled by default with Kubernetes 1.29, a
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) named
|
||||
`SidecarContainers` allows you to specify a `restartPolicy` for containers listed in a
|
||||
Pod's `initContainers` field. These restartable _sidecar_ containers are independent of the
|
||||
other [init containers](/docs/concepts/workloads/pods/init-containers/) and the main
|
||||
application container within the same pod. These can be started, stopped, or restarted
|
||||
without affecting the main application container and other init containers.
|
||||
|
||||
## Sidecar containers and Pod lifecycle
|
||||
|
||||
If an init container is created with its `restartPolicy` set to `Always`, it will
|
||||
start and remain running during the entire life of the Pod. This can be helpful for
|
||||
running supporting services separated from the main application containers.
|
||||
|
||||
If a `readinessProbe` is specified for this init container, its result will be used
|
||||
to determine the `ready` state of the Pod.
|
||||
|
||||
Since these containers are defined as init containers, they benefit from the same
|
||||
ordering and sequential guarantees as other init containers, allowing them to
|
||||
be mixed with other init containers into complex Pod initialization flows.
|
||||
|
||||
Compared to regular init containers, sidecars defined within `initContainers` continue to
|
||||
run after they have started. This is important when there is more than one entry inside
|
||||
`.spec.initContainers` for a Pod. After a sidecar-style init container is running (the kubelet
|
||||
has set the `started` status for that init container to true), the kubelet then starts the
|
||||
next init container from the ordered `.spec.initContainers` list.
|
||||
That status either becomes true because there is a process running in the
|
||||
container and no startup probe defined, or as a result of its `startupProbe` succeeding.
|
||||
|
||||
Here's an example of a Deployment with two containers, one of which is a sidecar:
|
||||
|
||||
{{% code_sample language="yaml" file="application/deployment-sidecar.yaml" %}}
|
||||
|
||||
This feature is also useful for running Jobs with sidecars, as the sidecar
|
||||
container will not prevent the Job from completing after the main container
|
||||
has finished.
|
||||
|
||||
Here's an example of a Job with two containers, one of which is a sidecar:
|
||||
|
||||
{{% code_sample language="yaml" file="application/job/job-sidecar.yaml" %}}
|
||||
|
||||
## Differences from regular containers
|
||||
|
||||
Sidecar containers run alongside regular containers in the same pod. However, they do not
|
||||
execute the primary application logic; instead, they provide supporting functionality to
|
||||
the main application.
|
||||
|
||||
Sidecar containers have their own independent lifecycles. They can be started, stopped,
|
||||
and restarted independently of regular containers. This means you can update, scale, or
|
||||
maintain sidecar containers without affecting the primary application.
|
||||
|
||||
Sidecar containers share the same network and storage namespaces with the primary
|
||||
container. This co-location allows them to interact closely and share resources.
|
||||
|
||||
## Differences from init containers
|
||||
|
||||
Sidecar containers work alongside the main container, extending its functionality and
|
||||
providing additional services.
|
||||
|
||||
Sidecar containers run concurrently with the main application container. They are active
|
||||
throughout the lifecycle of the pod and can be started and stopped independently of the
|
||||
main container. Unlike [init containers](/docs/concepts/workloads/pods/init-containers/),
|
||||
sidecar containers support [probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe) to control their lifecycle.
|
||||
|
||||
These containers can interact directly with the main application containers, sharing
|
||||
the same network namespace, filesystem, and environment variables. They work closely
|
||||
together to provide additional functionality.
|
||||
|
||||
## Resource sharing within containers
|
||||
|
||||
{{< comment >}}
|
||||
This section is also present in the [init containers](/docs/concepts/workloads/pods/init-containers/) page.
|
||||
If you're editing this section, change both places.
|
||||
{{< /comment >}}
|
||||
|
||||
Given the order of execution for init, sidecar and app containers, the following rules
|
||||
for resource usage apply:
|
||||
|
||||
* The highest of any particular resource request or limit defined on all init
|
||||
containers is the *effective init request/limit*. If any resource has no
|
||||
resource limit specified this is considered as the highest limit.
|
||||
* The Pod's *effective request/limit* for a resource is the sum of
|
||||
[pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/) and the higher of:
|
||||
  * the sum of all non-init containers (app and sidecar containers) request/limit for a
|
||||
resource
|
||||
* the effective init request/limit for a resource
|
||||
* Scheduling is done based on effective requests/limits, which means
|
||||
init containers can reserve resources for initialization that are not used
|
||||
during the life of the Pod.
|
||||
* The Pod's *effective QoS (quality of service) tier* is the
|
||||
QoS tier for all init, sidecar and app containers alike.
|
||||
|
||||
Quota and limits are applied based on the effective Pod request and
|
||||
limit.
|
||||
|
||||
Pod level control groups (cgroups) are based on the effective Pod request and
|
||||
limit, the same as the scheduler.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read a blog post on [native sidecar containers](/blog/2023/08/25/native-sidecar-containers/).
|
||||
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container).
|
||||
* Learn about the [types of probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe): liveness, readiness, startup probe.
|
||||
* Learn about [pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/).
|
|
@ -152,6 +152,35 @@ host's file owner/group.
|
|||
|
||||
[CVE-2021-25741]: https://github.com/kubernetes/kubernetes/issues/104980
|
||||
|
||||
## Integration with Pod security admission checks
|
||||
|
||||
{{< feature-state state="alpha" for_k8s_version="v1.29" >}}
|
||||
|
||||
For Linux Pods that enable user namespaces, Kubernetes relaxes the application of
|
||||
[Pod Security Standards](/docs/concepts/security/pod-security-standards) in a controlled way.
|
||||
This behavior can be controlled by the [feature
|
||||
gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
`UserNamespacesPodSecurityStandards`, which allows an early opt-in for end
|
||||
users. Admins have to ensure that user namespaces are enabled by all nodes
|
||||
within the cluster if using the feature gate.
|
||||
|
||||
If you enable the associated feature gate and create a Pod that uses user
namespaces, the following fields won't be constrained even in contexts that enforce the
_Baseline_ or _Restricted_ pod security standard. This behavior does not
present a security concern because `root` inside a Pod with user namespaces
actually refers to the user inside the container, which is never mapped to a
privileged user on the host. Here's the list of fields that are **not** checked for Pods in those
circumstances (an illustrative Pod manifest follows the list):
|
||||
|
||||
- `spec.securityContext.runAsNonRoot`
- `spec.containers[*].securityContext.runAsNonRoot`
- `spec.initContainers[*].securityContext.runAsNonRoot`
- `spec.ephemeralContainers[*].securityContext.runAsNonRoot`
- `spec.securityContext.runAsUser`
- `spec.containers[*].securityContext.runAsUser`
- `spec.initContainers[*].securityContext.runAsUser`
- `spec.ephemeralContainers[*].securityContext.runAsUser`
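
As an illustrative sketch (the name and image are placeholders), the `runAsUser` and
`runAsNonRoot` settings in the hypothetical Pod below would not be rejected under the
_Baseline_ or _Restricted_ standard, because the Pod opts into a user namespace with
`hostUsers: false`. Other _Restricted_ requirements, such as seccomp and capability settings,
still apply and are omitted here for brevity.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo                 # hypothetical example name
spec:
  hostUsers: false                  # run this Pod inside a user namespace
  containers:
  - name: app
    image: debian:stable-slim       # placeholder image
    command: ['sleep', 'infinity']
    securityContext:
      runAsUser: 0                  # UID 0 only inside the user namespace; not constrained here
      runAsNonRoot: false           # likewise exempt from the Baseline/Restricted checks
```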

## Limitations

When using a user namespace for the pod, it is disallowed to use other host
|
||||
|
|
|
@ -1,197 +1,31 @@
|
|||
---
|
||||
content_type: concept
|
||||
title: Contribute to K8s docs
|
||||
linktitle: Contribute
|
||||
title: Contribute to Kubernetes
|
||||
linkTitle: Contribute
|
||||
main_menu: true
|
||||
no_list: true
|
||||
weight: 80
|
||||
card:
|
||||
name: contribute
|
||||
weight: 10
|
||||
title: Start contributing to K8s
|
||||
title: Contribute to Kubernetes
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
*Kubernetes welcomes improvements from all contributors, new and experienced!*
|
||||
There are lots of ways to contribute to Kubernetes. You can work on designs for new features,
|
||||
you can document the code we already have, you can write for our [blog](/blog). There's more:
|
||||
you can implement those new features or fix bugs. You can help people join our contributor
|
||||
community, or support existing contributors.
|
||||
|
||||
{{< note >}}
|
||||
To learn more about contributing to Kubernetes in general, see the
|
||||
[contributor documentation](https://www.kubernetes.dev/docs/).
|
||||
With all these different ways to make a difference to the project, we (the Kubernetes
project) have made a dedicated website: https://k8s.dev/. You can go there to learn more about
contributing to Kubernetes.
|
||||
|
||||
If you specifically want to learn about contributing to _this_ documentation, read
|
||||
[Contribute to Kubernetes documentation](/docs/contribute/docs/).
|
||||
|
||||
You can also read the
|
||||
{{< glossary_tooltip text="CNCF" term_id="cncf" >}}
|
||||
[page](https://contribute.cncf.io/contributors/projects/#kubernetes)
|
||||
about contributing to Kubernetes.
|
||||
{{< /note >}}
|
||||
|
||||
---
|
||||
|
||||
This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs).
|
||||
|
||||
Kubernetes documentation contributors:
|
||||
|
||||
- Improve existing content
|
||||
- Create new content
|
||||
- Translate the documentation
|
||||
- Manage and publish the documentation parts of the Kubernetes release cycle
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Getting started
|
||||
|
||||
Anyone can open an issue about documentation, or contribute a change with a
|
||||
pull request (PR) to the
|
||||
[`kubernetes/website` GitHub repository](https://github.com/kubernetes/website).
|
||||
You need to be comfortable with
|
||||
[git](https://git-scm.com/) and
|
||||
[GitHub](https://lab.github.com/)
|
||||
to work effectively in the Kubernetes community.
|
||||
|
||||
To get involved with documentation:
|
||||
|
||||
1. Sign the CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md).
|
||||
2. Familiarize yourself with the [documentation repository](https://github.com/kubernetes/website)
|
||||
and the website's [static site generator](https://gohugo.io).
|
||||
3. Make sure you understand the basic processes for
|
||||
[opening a pull request](/docs/contribute/new-content/open-a-pr/) and
|
||||
[reviewing changes](/docs/contribute/review/reviewing-prs/).
|
||||
|
||||
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
|
||||
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
|
||||
|
||||
{{< mermaid >}}
|
||||
flowchart TB
|
||||
subgraph third[Open PR]
|
||||
direction TB
|
||||
U[ ] -.-
|
||||
Q[Improve content] --- N[Create content]
|
||||
N --- O[Translate docs]
|
||||
O --- P[Manage/publish docs parts<br>of K8s release cycle]
|
||||
|
||||
end
|
||||
|
||||
subgraph second[Review]
|
||||
direction TB
|
||||
T[ ] -.-
|
||||
D[Look over the<br>kubernetes/website<br>repository] --- E[Check out the<br>Hugo static site<br>generator]
|
||||
E --- F[Understand basic<br>GitHub commands]
|
||||
F --- G[Review open PR<br>and change review <br>processes]
|
||||
end
|
||||
|
||||
subgraph first[Sign up]
|
||||
direction TB
|
||||
S[ ] -.-
|
||||
B[Sign the CNCF<br>Contributor<br>License Agreement] --- C[Join sig-docs<br>Slack channel]
|
||||
C --- V[Join kubernetes-sig-docs<br>mailing list]
|
||||
V --- M[Attend weekly<br>sig-docs calls<br>or slack meetings]
|
||||
end
|
||||
|
||||
A([fa:fa-user New<br>Contributor]) --> first
|
||||
A --> second
|
||||
A --> third
|
||||
A --> H[Ask Questions!!!]
|
||||
|
||||
|
||||
classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
|
||||
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
|
||||
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
|
||||
class A,B,C,D,E,F,G,H,M,Q,N,O,P,V grey
|
||||
class S,T,U spacewhite
|
||||
class first,second,third white
|
||||
{{</ mermaid >}}
|
||||
Figure 1. Getting started for a new contributor.
|
||||
|
||||
Figure 1 outlines a roadmap for new contributors. You can follow some or all of the steps for `Sign up` and `Review`. Now you are ready to open PRs that achieve your contribution objectives with some listed under `Open PR`. Again, questions are always welcome!
|
||||
|
||||
Some tasks require more trust and more access in the Kubernetes organization.
|
||||
See [Participating in SIG Docs](/docs/contribute/participate/) for more details about
|
||||
roles and permissions.
|
||||
|
||||
## Your first contribution
|
||||
|
||||
You can prepare for your first contribution by reviewing several steps beforehand. Figure 2 outlines the steps and the details follow.
|
||||
|
||||
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
|
||||
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
|
||||
|
||||
{{< mermaid >}}
|
||||
flowchart LR
|
||||
subgraph second[First Contribution]
|
||||
direction TB
|
||||
S[ ] -.-
|
||||
G[Review PRs from other<br>K8s members] -->
|
||||
A[Check kubernetes/website<br>issues list for<br>good first PRs] --> B[Open a PR!!]
|
||||
end
|
||||
subgraph first[Suggested Prep]
|
||||
direction TB
|
||||
T[ ] -.-
|
||||
D[Read contribution overview] -->E[Read K8s content<br>and style guides]
|
||||
E --> F[Learn about Hugo page<br>content types<br>and shortcodes]
|
||||
end
|
||||
|
||||
|
||||
first ----> second
|
||||
|
||||
|
||||
classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
|
||||
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
|
||||
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
|
||||
class A,B,D,E,F,G grey
|
||||
class S,T spacewhite
|
||||
class first,second white
|
||||
{{</ mermaid >}}
|
||||
Figure 2. Preparation for your first contribution.
|
||||
|
||||
- Read the [Contribution overview](/docs/contribute/new-content/) to
|
||||
learn about the different ways you can contribute.
|
||||
- Check [`kubernetes/website` issues list](https://github.com/kubernetes/website/issues/)
|
||||
for issues that make good entry points.
|
||||
- [Open a pull request using GitHub](/docs/contribute/new-content/open-a-pr/#changes-using-github)
|
||||
to existing documentation and learn more about filing issues in GitHub.
|
||||
- [Review pull requests](/docs/contribute/review/reviewing-prs/) from other
|
||||
Kubernetes community members for accuracy and language.
|
||||
- Read the Kubernetes [content](/docs/contribute/style/content-guide/) and
|
||||
[style guides](/docs/contribute/style/style-guide/) so you can leave informed comments.
|
||||
- Learn about [page content types](/docs/contribute/style/page-content-types/)
|
||||
and [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/).
|
||||
|
||||
## Getting help when contributing
|
||||
|
||||
Making your first contribution can be overwhelming. The [New Contributor Ambassadors](https://github.com/kubernetes/website#new-contributor-ambassadors) are there to walk you through making your first few contributions. You can reach out to them in the [Kubernetes Slack](https://slack.k8s.io/) preferably in the `#sig-docs` channel. There is also the [New Contributors Meet and Greet call](https://www.kubernetes.dev/resources/calendar/) that happens on the first Tuesday of every month. You can interact with the New Contributor Ambassadors and get your queries resolved here.
|
||||
|
||||
## Next steps
|
||||
|
||||
- Learn to [work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo)
|
||||
of the repository.
|
||||
- Document [features in a release](/docs/contribute/new-content/new-features/).
|
||||
- Participate in [SIG Docs](/docs/contribute/participate/), and become a
|
||||
[member or reviewer](/docs/contribute/participate/roles-and-responsibilities/).
|
||||
|
||||
- Start or help with a [localization](/docs/contribute/localization/).
|
||||
|
||||
## Get involved with SIG Docs
|
||||
|
||||
[SIG Docs](/docs/contribute/participate/) is the group of contributors who
|
||||
publish and maintain Kubernetes documentation and the website. Getting
|
||||
involved with SIG Docs is a great way for Kubernetes contributors (feature
|
||||
development or otherwise) to have a large impact on the Kubernetes project.
|
||||
|
||||
SIG Docs communicates with different methods:
|
||||
|
||||
- [Join `#sig-docs` on the Kubernetes Slack instance](https://slack.k8s.io/). Make sure to
|
||||
introduce yourself!
|
||||
- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs),
|
||||
where broader discussions take place and official decisions are recorded.
|
||||
- Join the [SIG Docs video meeting](https://github.com/kubernetes/community/tree/master/sig-docs) held every two weeks. Meetings are always announced on `#sig-docs` and added to the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). You'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone.
|
||||
- Join the SIG Docs async Slack standup meeting on those weeks when the in-person Zoom video meeting does not take place. Meetings are always announced on `#sig-docs`. You can contribute to any one of the threads up to 24 hours after meeting announcement.
|
||||
|
||||
## Other ways to contribute
|
||||
|
||||
- Visit the [Kubernetes community site](/community/). Participate on Twitter or Stack Overflow, learn about local Kubernetes meetups and events, and more.
|
||||
- Read the [contributor cheatsheet](https://www.kubernetes.dev/docs/contributor-cheatsheet/) to get involved with Kubernetes feature development.
|
||||
- Visit the contributor site to learn more about [Kubernetes Contributors](https://www.kubernetes.dev/) and [additional contributor resources](https://www.kubernetes.dev/resources/).
|
||||
- Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/).
|
||||
|
||||
|
|
|
@ -0,0 +1,203 @@
|
|||
---
|
||||
content_type: concept
|
||||
title: Contribute to Kubernetes Documentation
|
||||
weight: 09
|
||||
card:
|
||||
name: contribute
|
||||
weight: 11
|
||||
title: Contribute to documentation
|
||||
---
|
||||
|
||||
|
||||
This website is maintained by [Kubernetes SIG Docs](/docs/contribute/#get-involved-with-sig-docs).
|
||||
The Kubernetes project welcomes help from all contributors, new or experienced!
|
||||
|
||||
Kubernetes documentation contributors:
|
||||
|
||||
- Improve existing content
|
||||
- Create new content
|
||||
- Translate the documentation
|
||||
- Manage and publish the documentation parts of the Kubernetes release cycle
|
||||
|
||||
---
|
||||
|
||||
{{< note >}}
|
||||
To learn more about contributing to Kubernetes in general, see the
[contributor documentation](https://www.kubernetes.dev/docs/) site.
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Getting started
|
||||
|
||||
Anyone can open an issue about documentation, or contribute a change with a
|
||||
pull request (PR) to the
|
||||
[`kubernetes/website` GitHub repository](https://github.com/kubernetes/website).
|
||||
You need to be comfortable with
|
||||
[git](https://git-scm.com/) and
|
||||
[GitHub](https://skills.github.com/)
|
||||
to work effectively in the Kubernetes community.
|
||||
|
||||
To get involved with documentation:
|
||||
|
||||
1. Sign the CNCF [Contributor License Agreement](https://github.com/kubernetes/community/blob/master/CLA.md).
|
||||
2. Familiarize yourself with the [documentation repository](https://github.com/kubernetes/website)
|
||||
and the website's [static site generator](https://gohugo.io).
|
||||
3. Make sure you understand the basic processes for
|
||||
[opening a pull request](/docs/contribute/new-content/open-a-pr/) and
|
||||
[reviewing changes](/docs/contribute/review/reviewing-prs/).
|
||||
|
||||
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
|
||||
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
|
||||
|
||||
{{< mermaid >}}
|
||||
flowchart TB
|
||||
subgraph third[Open PR]
|
||||
direction TB
|
||||
U[ ] -.-
|
||||
Q[Improve content] --- N[Create content]
|
||||
N --- O[Translate docs]
|
||||
O --- P[Manage/publish docs parts<br>of K8s release cycle]
|
||||
|
||||
end
|
||||
|
||||
subgraph second[Review]
|
||||
direction TB
|
||||
T[ ] -.-
|
||||
D[Look over the<br>kubernetes/website<br>repository] --- E[Check out the<br>Hugo static site<br>generator]
|
||||
E --- F[Understand basic<br>GitHub commands]
|
||||
F --- G[Review open PR<br>and change review <br>processes]
|
||||
end
|
||||
|
||||
subgraph first[Sign up]
|
||||
direction TB
|
||||
S[ ] -.-
|
||||
B[Sign the CNCF<br>Contributor<br>License Agreement] --- C[Join sig-docs<br>Slack channel]
|
||||
C --- V[Join kubernetes-sig-docs<br>mailing list]
|
||||
V --- M[Attend weekly<br>sig-docs calls<br>or slack meetings]
|
||||
end
|
||||
|
||||
A([fa:fa-user New<br>Contributor]) --> first
|
||||
A --> second
|
||||
A --> third
|
||||
A --> H[Ask Questions!!!]
|
||||
|
||||
|
||||
classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
|
||||
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
|
||||
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
|
||||
class A,B,C,D,E,F,G,H,M,Q,N,O,P,V grey
|
||||
class S,T,U spacewhite
|
||||
class first,second,third white
|
||||
{{</ mermaid >}}
|
||||
Figure 1. Getting started for a new contributor.
|
||||
|
||||
Figure 1 outlines a roadmap for new contributors. You can follow some or all of
the steps for `Sign up` and `Review`. Now you are ready to open PRs that achieve
your contribution objectives, such as those listed under `Open PR`. Again, questions
are always welcome!
|
||||
|
||||
Some tasks require more trust and more access in the Kubernetes organization.
|
||||
See [Participating in SIG Docs](/docs/contribute/participate/) for more details about
|
||||
roles and permissions.
|
||||
|
||||
## Your first contribution
|
||||
|
||||
You can prepare for your first contribution by reviewing several steps beforehand.
|
||||
Figure 2 outlines the steps and the details follow.
|
||||
|
||||
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
|
||||
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
|
||||
|
||||
{{< mermaid >}}
|
||||
flowchart LR
|
||||
subgraph second[First Contribution]
|
||||
direction TB
|
||||
S[ ] -.-
|
||||
G[Review PRs from other<br>K8s members] -->
|
||||
A[Check kubernetes/website<br>issues list for<br>good first PRs] --> B[Open a PR!!]
|
||||
end
|
||||
subgraph first[Suggested Prep]
|
||||
direction TB
|
||||
T[ ] -.-
|
||||
D[Read contribution overview] -->E[Read K8s content<br>and style guides]
|
||||
E --> F[Learn about Hugo page<br>content types<br>and shortcodes]
|
||||
end
|
||||
|
||||
|
||||
first ----> second
|
||||
|
||||
|
||||
classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
|
||||
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
|
||||
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
|
||||
class A,B,D,E,F,G grey
|
||||
class S,T spacewhite
|
||||
class first,second white
|
||||
{{</ mermaid >}}
|
||||
Figure 2. Preparation for your first contribution.
|
||||
|
||||
- Read the [Contribution overview](/docs/contribute/new-content/) to
|
||||
learn about the different ways you can contribute.
|
||||
- Check [`kubernetes/website` issues list](https://github.com/kubernetes/website/issues/)
|
||||
for issues that make good entry points.
|
||||
- [Open a pull request using GitHub](/docs/contribute/new-content/open-a-pr/#changes-using-github)
|
||||
to existing documentation and learn more about filing issues in GitHub.
|
||||
- [Review pull requests](/docs/contribute/review/reviewing-prs/) from other
|
||||
Kubernetes community members for accuracy and language.
|
||||
- Read the Kubernetes [content](/docs/contribute/style/content-guide/) and
|
||||
[style guides](/docs/contribute/style/style-guide/) so you can leave informed comments.
|
||||
- Learn about [page content types](/docs/contribute/style/page-content-types/)
|
||||
and [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/).
|
||||
|
||||
## Getting help when contributing
|
||||
|
||||
Making your first contribution can be overwhelming. The
|
||||
[New Contributor Ambassadors](https://github.com/kubernetes/website#new-contributor-ambassadors)
|
||||
are there to walk you through making your first few contributions. You can reach out to them in the
[Kubernetes Slack](https://slack.k8s.io/), preferably in the `#sig-docs` channel. There is also the
|
||||
[New Contributors Meet and Greet call](https://www.kubernetes.dev/resources/calendar/)
|
||||
that happens on the first Tuesday of every month. You can interact with the New Contributor Ambassadors
|
||||
and get your queries resolved here.
|
||||
|
||||
## Next steps
|
||||
|
||||
- Learn to [work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo)
|
||||
of the repository.
|
||||
- Document [features in a release](/docs/contribute/new-content/new-features/).
|
||||
- Participate in [SIG Docs](/docs/contribute/participate/), and become a
|
||||
[member or reviewer](/docs/contribute/participate/roles-and-responsibilities/).
|
||||
|
||||
- Start or help with a [localization](/docs/contribute/localization/).
|
||||
|
||||
## Get involved with SIG Docs
|
||||
|
||||
[SIG Docs](/docs/contribute/participate/) is the group of contributors who
|
||||
publish and maintain Kubernetes documentation and the website. Getting
|
||||
involved with SIG Docs is a great way for Kubernetes contributors (feature
|
||||
development or otherwise) to have a large impact on the Kubernetes project.
|
||||
|
||||
SIG Docs communicates through different methods:
|
||||
|
||||
- [Join `#sig-docs` on the Kubernetes Slack instance](https://slack.k8s.io/). Make sure to
|
||||
introduce yourself!
|
||||
- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs),
|
||||
where broader discussions take place and official decisions are recorded.
|
||||
- Join the [SIG Docs video meeting](https://github.com/kubernetes/community/tree/master/sig-docs)
|
||||
held every two weeks. Meetings are always announced on `#sig-docs` and added to the
|
||||
[Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles).
|
||||
You'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone.
|
||||
- Join the SIG Docs async Slack standup meeting on those weeks when the in-person Zoom
|
||||
video meeting does not take place. Meetings are always announced on `#sig-docs`.
|
||||
You can contribute to any one of the threads up to 24 hours after meeting announcement.
|
||||
|
||||
## Other ways to contribute
|
||||
|
||||
- Visit the [Kubernetes community site](/community/). Participate on Twitter or Stack Overflow,
|
||||
learn about local Kubernetes meetups and events, and more.
|
||||
- Read the [contributor cheatsheet](https://www.kubernetes.dev/docs/contributor-cheatsheet/)
|
||||
to get involved with Kubernetes feature development.
|
||||
- Visit the contributor site to learn more about [Kubernetes Contributors](https://www.kubernetes.dev/)
|
||||
and [additional contributor resources](https://www.kubernetes.dev/resources/).
|
||||
- Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/).
|
|
@ -16,11 +16,8 @@ API or the `kube-*` components from the upstream code, see the following instruc
|
|||
- [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
|
||||
- [Generating Reference Documentation for the Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
- You need to have these tools installed:
|
||||
|
||||
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
|
||||
|
@ -38,8 +35,6 @@ API or the `kube-*` components from the upstream code, see the following instruc
|
|||
For more information, see [Creating a Pull Request](https://help.github.com/articles/creating-a-pull-request/)
|
||||
and [GitHub Standard Fork & Pull Request Workflow](https://gist.github.com/Chaser324/ce0505fbed06b947d962).
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## The big picture
|
||||
|
@ -51,7 +46,7 @@ from the source code in the [upstream Kubernetes](https://github.com/kubernetes/
|
|||
When you see bugs in the generated documentation, you may want to consider
creating a patch to fix them in the upstream project.
|
||||
|
||||
## Cloning the Kubernetes repository
|
||||
## Clone the Kubernetes repository
|
||||
|
||||
If you don't already have the kubernetes/kubernetes repository, get it now:
|
||||
|
||||
|
@ -64,16 +59,16 @@ go get github.com/kubernetes/kubernetes
|
|||
Determine the base directory of your clone of the
|
||||
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/github.com/kubernetes/kubernetes.`
|
||||
base directory is `$GOPATH/src/github.com/kubernetes/kubernetes`.
|
||||
The remaining steps refer to your base directory as `<k8s-base>`.
|
||||
|
||||
Determine the base directory of your clone of the
|
||||
[kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs.`
|
||||
base directory is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs`.
|
||||
The remaining steps refer to your base directory as `<rdocs-base>`.
|
||||
|
||||
## Editing the Kubernetes source code
|
||||
## Edit the Kubernetes source code
|
||||
|
||||
The Kubernetes API reference documentation is automatically generated from
|
||||
an OpenAPI spec, which is generated from the Kubernetes source code. If you
|
||||
|
@ -84,10 +79,10 @@ The documentation for the `kube-*` components is also generated from the upstrea
|
|||
source code. You must change the code related to the component
|
||||
you want to fix in order to fix the generated documentation.
|
||||
|
||||
### Making changes to the upstream source code
|
||||
### Make changes to the upstream source code
|
||||
|
||||
{{< note >}}
|
||||
The following steps are an example, not a general procedure. Details
|
||||
The following steps are an example, not a general procedure. Details
|
||||
will be different in your situation.
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -123,12 +118,12 @@ On branch master
|
|||
modified: staging/src/k8s.io/api/apps/v1/types.go
|
||||
```
|
||||
|
||||
### Committing your edited file
|
||||
### Commit your edited file
|
||||
|
||||
Run `git add` and `git commit` to commit the changes you have made so far. In the next step,
|
||||
you will do a second commit. It is important to keep your changes separated into two commits.
|
||||
|
||||
### Generating the OpenAPI spec and related files
|
||||
### Generate the OpenAPI spec and related files
|
||||
|
||||
Go to `<k8s-base>` and run these scripts:
|
||||
|
||||
|
@ -179,7 +174,7 @@ repository and in related repositories, such as
|
|||
[kubernetes/apiserver](https://github.com/kubernetes/apiserver/blob/master/README.md).
|
||||
{{< /note >}}
|
||||
|
||||
### Cherry picking your commit into a release branch
|
||||
### Cherry pick your commit into a release branch
|
||||
|
||||
In the preceding section, you edited a file in the master branch and then ran scripts
|
||||
to generate an OpenAPI spec and related files. Then you submitted your changes in a pull request
|
||||
|
@ -189,7 +184,7 @@ Kubernetes version {{< skew latestVersion >}}, and you want to backport your cha
|
|||
release-{{< skew prevMinorVersion >}} branch.
|
||||
|
||||
Recall that your pull request has two commits: one for editing `types.go`
|
||||
and one for the files generated by scripts. The next step is to propose a cherry pick of your first
|
||||
and one for the files generated by scripts. The next step is to propose a cherry pick of your first
|
||||
commit into the release-{{< skew prevMinorVersion >}} branch. The idea is to cherry pick the commit
|
||||
that edited `types.go`, but not the commit that has the results of running the scripts. For instructions, see
|
||||
[Propose a Cherry Pick](https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md).
|
||||
|
@ -220,7 +215,7 @@ the same as the generated files in the master branch. The generated files in the
|
|||
contain API elements only from Kubernetes {{< skew prevMinorVersion >}}. The generated files in the master branch might contain
|
||||
API elements that are not in {{< skew prevMinorVersion >}}, but are under development for {{< skew latestVersion >}}.
|
||||
|
||||
## Generating the published reference docs
|
||||
## Generate the published reference docs
|
||||
|
||||
The preceding section showed how to edit a source file and then generate
|
||||
several files, including `api/openapi-spec/swagger.json` in the
|
||||
|
@ -228,16 +223,13 @@ several files, including `api/openapi-spec/swagger.json` in the
|
|||
The `swagger.json` file is the OpenAPI definition file to use for generating
|
||||
the API reference documentation.
|
||||
|
||||
You are now ready to follow the [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/) guide to generate the
|
||||
You are now ready to follow the
|
||||
[Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
|
||||
guide to generate the
|
||||
[published Kubernetes API reference documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/).
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
|
||||
* [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
|
||||
* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/)
|
||||
|
||||
|
||||
|
|
|
@ -10,8 +10,7 @@ This page shows how to generate the `kubectl` command reference.
|
|||
|
||||
{{< note >}}
|
||||
This topic shows how to generate reference documentation for
|
||||
[kubectl commands](/docs/reference/generated/kubectl/kubectl-commands)
|
||||
like
|
||||
[kubectl commands](/docs/reference/generated/kubectl/kubectl-commands) like
|
||||
[kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) and
|
||||
[kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint).
|
||||
This topic does not show how to generate the
|
||||
|
@ -27,9 +26,9 @@ reference page, see
|
|||
|
||||
<!-- steps -->
|
||||
|
||||
## Setting up the local repositories
|
||||
## Set up the local repositories
|
||||
|
||||
Create a local workspace and set your `GOPATH`.
|
||||
Create a local workspace and set your `GOPATH`:
|
||||
|
||||
```shell
|
||||
mkdir -p $HOME/<workspace>
|
||||
|
@ -58,7 +57,7 @@ Get a clone of the kubernetes/kubernetes repository as k8s.io/kubernetes:
|
|||
git clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes
|
||||
```
|
||||
|
||||
Remove the spf13 package from `$GOPATH/src/k8s.io/kubernetes/vendor/github.com`.
|
||||
Remove the spf13 package from `$GOPATH/src/k8s.io/kubernetes/vendor/github.com`:
|
||||
|
||||
```shell
|
||||
rm -rf $GOPATH/src/k8s.io/kubernetes/vendor/github.com/spf13
|
||||
|
@ -67,22 +66,22 @@ rm -rf $GOPATH/src/k8s.io/kubernetes/vendor/github.com/spf13
|
|||
The kubernetes/kubernetes repository provides the `kubectl` and `kustomize` source code.
|
||||
|
||||
* Determine the base directory of your clone of the
|
||||
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/k8s.io/kubernetes.`
|
||||
The remaining steps refer to your base directory as `<k8s-base>`.
|
||||
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/k8s.io/kubernetes`.
|
||||
The remaining steps refer to your base directory as `<k8s-base>`.
|
||||
|
||||
* Determine the base directory of your clone of the
|
||||
[kubernetes/website](https://github.com/kubernetes/website) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/github.com/<your-username>/website.`
|
||||
The remaining steps refer to your base directory as `<web-base>`.
|
||||
[kubernetes/website](https://github.com/kubernetes/website) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/github.com/<your-username>/website`.
|
||||
The remaining steps refer to your base directory as `<web-base>`.
|
||||
|
||||
* Determine the base directory of your clone of the
|
||||
[kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs.`
|
||||
The remaining steps refer to your base directory as `<rdocs-base>`.
|
||||
[kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs) repository.
|
||||
For example, if you followed the preceding step to get the repository, your
|
||||
base directory is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs`.
|
||||
The remaining steps refer to your base directory as `<rdocs-base>`.
|
||||
|
||||
In your local k8s.io/kubernetes repository, check out the branch of interest,
|
||||
and make sure it is up to date. For example, if you want to generate docs for
|
||||
|
@ -97,7 +96,7 @@ git pull https://github.com/kubernetes/kubernetes {{< skew prevMinorVersion >}}.
|
|||
If you do not need to edit the `kubectl` source code, follow the instructions for
|
||||
[Setting build variables](#setting-build-variables).
|
||||
|
||||
## Editing the kubectl source code
|
||||
## Edit the kubectl source code
|
||||
|
||||
The kubectl command reference documentation is automatically generated from
|
||||
the kubectl source code. If you want to change the reference documentation, the first step
|
||||
|
@ -111,15 +110,14 @@ is an example of a pull request that fixes a typo in the kubectl source code.
|
|||
Monitor your pull request, and respond to reviewer comments. Continue to monitor your
|
||||
pull request until it is merged into the target branch of the kubernetes/kubernetes repository.
|
||||
|
||||
## Cherry picking your change into a release branch
|
||||
## Cherry pick your change into a release branch
|
||||
|
||||
Your change is now in the master branch, which is used for development of the next
|
||||
Kubernetes release. If you want your change to appear in the docs for a Kubernetes
|
||||
version that has already been released, you need to propose that your change be
|
||||
cherry picked into the release branch.
|
||||
|
||||
For example, suppose the master branch is being used to develop Kubernetes
|
||||
{{< skew currentVersion >}}
|
||||
For example, suppose the master branch is being used to develop Kubernetes {{< skew currentVersion >}}
|
||||
and you want to backport your change to the release-{{< skew prevMinorVersion >}} branch. For
|
||||
instructions on how to do this, see
|
||||
[Propose a Cherry Pick](https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md).
|
||||
|
@ -132,14 +130,15 @@ milestone in your pull request. If you don't have those permissions, you will
|
|||
need to work with someone who can set the label and milestone for you.
|
||||
{{< /note >}}
|
||||
|
||||
## Setting build variables
|
||||
## Set build variables
|
||||
|
||||
Go to `<rdocs-base>`. On your command line, set the following environment variables.
|
||||
|
||||
* Set `K8S_ROOT` to `<k8s-base>`.
|
||||
* Set `K8S_WEBROOT` to `<web-base>`.
|
||||
* Set `K8S_RELEASE` to the version of the docs you want to build.
|
||||
For example, if you want to build docs for Kubernetes {{< skew prevMinorVersion >}}, set `K8S_RELEASE` to {{< skew prevMinorVersion >}}.
|
||||
For example, if you want to build docs for Kubernetes {{< skew prevMinorVersion >}},
|
||||
set `K8S_RELEASE` to {{< skew prevMinorVersion >}}.
|
||||
|
||||
For example:
|
||||
|
||||
|
@ -162,13 +161,12 @@ cd <rdocs-base>
|
|||
make createversiondirs
|
||||
```
|
||||
|
||||
## Checking out a release tag in k8s.io/kubernetes
|
||||
## Check out a release tag in k8s.io/kubernetes
|
||||
|
||||
In your local `<k8s-base>` repository, checkout the branch that has
|
||||
In your local `<k8s-base>` repository, check out the branch that has
|
||||
the version of Kubernetes that you want to document. For example, if you want
|
||||
to generate docs for Kubernetes {{< skew prevMinorVersion >}}.0, check out the
|
||||
`v{{< skew prevMinorVersion >}}` tag. Make sure
|
||||
you local branch is up to date.
|
||||
`v{{< skew prevMinorVersion >}}` tag. Make sure your local branch is up to date.
|
||||
|
||||
```shell
|
||||
cd <k8s-base>
|
||||
|
@ -176,7 +174,7 @@ git checkout v{{< skew prevMinorVersion >}}.0
|
|||
git pull https://github.com/kubernetes/kubernetes v{{< skew prevMinorVersion >}}.0
|
||||
```
|
||||
|
||||
## Running the doc generation code
|
||||
## Run the doc generation code
|
||||
|
||||
In your local `<rdocs-base>`, run the `copycli` build target. The command runs as `root`:
|
||||
|
||||
|
@ -238,27 +236,21 @@ make container-serve
|
|||
|
||||
View the [local preview](https://localhost:1313/docs/reference/generated/kubectl/kubectl-commands/).
|
||||
|
||||
## Adding and committing changes in kubernetes/website
|
||||
## Add and commit changes in kubernetes/website
|
||||
|
||||
Run `git add` and `git commit` to commit the files.
|
||||
|
||||
## Creating a pull request
|
||||
## Create a pull request
|
||||
|
||||
Create a pull request to the `kubernetes/website` repository. Monitor your
|
||||
pull request, and respond to review comments as needed. Continue to monitor
|
||||
your pull request until it is merged.
|
||||
|
||||
A few minutes after your pull request is merged, your updated reference
|
||||
topics will be visible in the
|
||||
[published documentation](/docs/home).
|
||||
|
||||
|
||||
topics will be visible in the [published documentation](/docs/home).
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/)
|
||||
* [Generating Reference Documentation for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
|
||||
* [Generating Reference Documentation for the Kubernetes API](/docs/contribute/generate-ref-docs/kubernetes-api/)
|
||||
|
||||
|
||||
|
|
|
@ -15,23 +15,19 @@ using the [kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/re
|
|||
If you find bugs in the generated documentation, you need to
|
||||
[fix them upstream](/docs/contribute/generate-ref-docs/contribute-upstream/).
|
||||
|
||||
If you need only to regenerate the reference documentation from the [OpenAPI](https://github.com/OAI/OpenAPI-Specification)
|
||||
If you need only to regenerate the reference documentation from the
|
||||
[OpenAPI](https://github.com/OAI/OpenAPI-Specification)
|
||||
spec, continue reading this page.
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "prerequisites-ref-docs.md" >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## Setting up the local repositories
|
||||
## Set up the local repositories
|
||||
|
||||
Create a local workspace and set your `GOPATH`.
|
||||
Create a local workspace and set your `GOPATH`:
|
||||
|
||||
```shell
|
||||
mkdir -p $HOME/<workspace>
|
||||
|
@ -61,26 +57,26 @@ git clone https://github.com/kubernetes/kubernetes $GOPATH/src/k8s.io/kubernetes
|
|||
```
|
||||
|
||||
* The base directory of your clone of the
|
||||
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository is
|
||||
`$GOPATH/src/k8s.io/kubernetes.`
|
||||
The remaining steps refer to your base directory as `<k8s-base>`.
|
||||
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository is
|
||||
`$GOPATH/src/k8s.io/kubernetes`.
|
||||
The remaining steps refer to your base directory as `<k8s-base>`.
|
||||
|
||||
* The base directory of your clone of the
|
||||
[kubernetes/website](https://github.com/kubernetes/website) repository is
|
||||
`$GOPATH/src/github.com/<your username>/website.`
|
||||
The remaining steps refer to your base directory as `<web-base>`.
|
||||
[kubernetes/website](https://github.com/kubernetes/website) repository is
|
||||
`$GOPATH/src/github.com/<your username>/website`.
|
||||
The remaining steps refer to your base directory as `<web-base>`.
|
||||
|
||||
* The base directory of your clone of the
|
||||
[kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs)
|
||||
repository is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs.`
|
||||
The remaining steps refer to your base directory as `<rdocs-base>`.
|
||||
[kubernetes-sigs/reference-docs](https://github.com/kubernetes-sigs/reference-docs)
|
||||
repository is `$GOPATH/src/github.com/kubernetes-sigs/reference-docs`.
|
||||
The remaining steps refer to your base directory as `<rdocs-base>`.
|
||||
|
||||
## Generating the API reference docs
|
||||
## Generate the API reference docs
|
||||
|
||||
This section shows how to generate the
|
||||
[published Kubernetes API reference documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/).
|
||||
|
||||
### Setting build variables
|
||||
### Set build variables
|
||||
|
||||
* Set `K8S_ROOT` to `<k8s-base>`.
|
||||
* Set `K8S_WEBROOT` to `<web-base>`.
|
||||
|
@ -95,9 +91,9 @@ export K8S_ROOT=${GOPATH}/src/k8s.io/kubernetes
|
|||
export K8S_RELEASE=1.17.0
|
||||
```
|
||||
|
||||
### Creating versioned directory and fetching Open API spec
|
||||
### Create versioned directory and fetch Open API spec
|
||||
|
||||
The `updateapispec` build target creates the versioned build directory.
|
||||
The `updateapispec` build target creates the versioned build directory.
|
||||
After the directory is created, the Open API spec is fetched from the
|
||||
`<k8s-base>` repository. These steps ensure that the version
|
||||
of the configuration files and Kubernetes Open API spec match the release version.
|
||||
|
@ -110,7 +106,7 @@ cd <rdocs-base>
|
|||
make updateapispec
|
||||
```
|
||||
|
||||
### Building the API reference docs
|
||||
### Build the API reference docs
|
||||
|
||||
The `copyapi` target builds the API reference and
|
||||
copies the generated files to directories in `<web-base>`.
|
||||
|
@ -154,7 +150,7 @@ static/docs/reference/generated/kubernetes-api/{{< param "version" >}}/js/navDat
|
|||
static/docs/reference/generated/kubernetes-api/{{< param "version" >}}/js/scroll.js
|
||||
```
|
||||
|
||||
## Updating the API reference index pages
|
||||
## Update the API reference index pages
|
||||
|
||||
When generating reference documentation for a new release, update the file,
|
||||
`<web-base>/content/en/docs/reference/kubernetes-api/api-index.md` with the new
|
||||
|
@ -163,13 +159,13 @@ version number.
|
|||
* Open `<web-base>/content/en/docs/reference/kubernetes-api/api-index.md` for editing,
|
||||
and update the API reference version number. For example:
|
||||
|
||||
```
|
||||
---
|
||||
title: v1.17
|
||||
---
|
||||
```
|
||||
---
|
||||
title: v1.17
|
||||
---
|
||||
|
||||
[Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/)
|
||||
```
|
||||
[Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/)
|
||||
```
|
||||
|
||||
* Open `<web-base>/content/en/docs/reference/_index.md` for editing, and add a
|
||||
new link for the latest API reference. Remove the oldest API reference version.
|
||||
|
@ -188,7 +184,7 @@ make container-serve
|
|||
|
||||
## Commit the changes
|
||||
|
||||
In `<web-base>` run `git add` and `git commit` to commit the change.
|
||||
In `<web-base>`, run `git add` and `git commit` to commit the change.
|
||||
|
||||
Submit your changes as a
|
||||
[pull request](/docs/contribute/new-content/open-a-pr/) to the
|
||||
|
@ -196,11 +192,8 @@ Submit your changes as a
|
|||
Monitor your pull request, and respond to reviewer comments as needed. Continue
|
||||
to monitor your pull request until it has been merged.
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* [Generating Reference Documentation Quickstart](/docs/contribute/generate-ref-docs/quickstart/)
|
||||
* [Generating Reference Docs for Kubernetes Components and Tools](/docs/contribute/generate-ref-docs/kubernetes-components/)
|
||||
* [Generating Reference Documentation for kubectl Commands](/docs/contribute/generate-ref-docs/kubectl/)
|
||||
|
||||
|
||||
|
|
|
@ -98,7 +98,7 @@ Scenario | Branch
|
|||
:---------|:------------
|
||||
Existing or new English language content for the current release | `main`
|
||||
Content for a feature change release | The branch which corresponds to the major and minor version the feature change is in, using the pattern `dev-<version>`. For example, if a feature changes in the `v{{< skew nextMinorVersion >}}` release, then add documentation changes to the ``dev-{{< skew nextMinorVersion >}}`` branch.
|
||||
Content in other languages (localizations) | Use the localization's convention. See the [Localization branching strategy](/docs/contribute/localization/#branching-strategy) for more information.
|
||||
Content in other languages (localizations) | Use the localization's convention. See the [Localization branching strategy](/docs/contribute/localization/#branch-strategy) for more information.
|
||||
|
||||
If you're still not sure which branch to choose, ask in `#sig-docs` on Slack.
|
||||
|
||||
|
|
|
@ -123,18 +123,74 @@ When you complete your content, the documentation person assigned to your featur
|
|||
To ensure technical accuracy, the content may also require a technical review from corresponding SIG(s).
|
||||
Use their suggestions to get the content to a release-ready state.
|
||||
|
||||
If your feature is an Alpha or Beta feature and is behind a feature gate,
|
||||
make sure you add it to [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features)
|
||||
table as part of your pull request. With new feature gates, a description of
|
||||
the feature gate is also required. If your feature is GA'ed or deprecated,
|
||||
make sure to move it from the
|
||||
[Feature gates for Alpha/Feature](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table
|
||||
to [Feature gates for graduated or deprecated features](/docs/reference/command-line-tools-reference/feature-gates-removed/#feature-gates-that-are-removed)
|
||||
table with Alpha and Beta history intact.
|
||||
|
||||
If your feature needs documentation and the first draft
|
||||
content is not received, the feature may be removed from the milestone.
|
||||
|
||||
#### Feature gates {#ready-for-review-feature-gates}
|
||||
|
||||
If your feature is an Alpha or Beta feature and is behind a feature gate,
|
||||
you need a feature gate file for it inside
|
||||
`content/en/docs/reference/command-line-tools-reference/feature-gates/`.
|
||||
The name of the file should be the feature gate, converted from `UpperCamelCase`
to `kebab-case`, with `.md` as the suffix (for example, a gate named
`ProcessExampleThings` would be described in `process-example-things.md`).
|
||||
You can look at other files already in the same directory for a hint about what yours
|
||||
should look like. Usually a single paragraph is enough; for longer explanations,
|
||||
add documentation elsewhere and link to that.
|
||||
|
||||
Also,
|
||||
to ensure your feature gate appears in the [Alpha/Beta Feature gates](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table, include the following details
|
||||
in the [front matter](https://gohugo.io/content-management/front-matter/) of your Markdown
|
||||
description file:
|
||||
|
||||
```yaml
|
||||
stages:
|
||||
- stage: <alpha/beta/stable/deprecated> # Specify the development stage of the feature gate
|
||||
defaultValue: <true or false> # Set to true if enabled by default, false otherwise
|
||||
fromVersion: <Version> # Version from which the feature gate is available
|
||||
toVersion: <Version> # (Optional) The version until which the feature gate is available
|
||||
```
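
Putting this together, the front matter for a hypothetical beta gate named `ProcessExampleThings`
(described in `process-example-things.md`) might look roughly like the sketch below, with the
one-paragraph description placed after the closing `---`. The exact set of front matter fields may
differ, so copy an existing file in that directory rather than this sketch.

```yaml
---
title: ProcessExampleThings         # assumption: the file also names the gate in its front matter
stages:
  - stage: beta
    defaultValue: true
    fromVersion: "1.29"             # illustrative version number
---
```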
|
||||
|
||||
With net new feature gates, a separate
|
||||
description of the feature gate is also required; create a new Markdown file
|
||||
inside `content/en/docs/reference/command-line-tools-reference/feature-gates/`
|
||||
(use other files as a template).
|
||||
|
||||
When you change a feature gate from disabled-by-default to enabled-by-default,
|
||||
you may also need to change other documentation (not just the list of
|
||||
feature gates). Watch out for language such as “The `exampleSetting` field
|
||||
is a beta field and disabled by default. You can enable it by enabling the
|
||||
`ProcessExampleThings` feature gate.”
|
||||
|
||||
If your feature is GA'ed or deprecated,
|
||||
include an additional `stage` entry within the `stages` block in the description file.
|
||||
Ensure that the Alpha and Beta stages remain intact.
|
||||
This step transitions the feature gate from the
|
||||
[Feature gates for Alpha/Beta](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features) table
|
||||
to [Feature gates for graduated or deprecated features](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-graduated-or-deprecated-features) table. For example:
|
||||
|
||||
{{< highlight yaml "linenos=false,hl_lines=10-15" >}}
|
||||
stages:
|
||||
- stage: alpha
|
||||
defaultValue: false
|
||||
fromVersion: "1.12"
|
||||
toVersion: "1.12"
|
||||
- stage: beta
|
||||
defaultValue: true
|
||||
fromVersion: "1.13"
|
||||
toVersion: "1.18"
|
||||
# Added 'stable' stage block to existing stages.
|
||||
- stage: stable
|
||||
defaultValue: true
|
||||
fromVersion: "1.19"
|
||||
toVersion: "1.27"
|
||||
{{< / highlight >}}
|
||||
|
||||
Eventually, Kubernetes will stop including the feature gate at all. To signify the removal of a feature gate,
|
||||
include `removed: true` in the front matter of the respective description file.
|
||||
This action triggers the transition of the feature gate
from the [Feature gates for graduated or deprecated features](/docs/reference/command-line-tools-reference/feature-gates-removed/#feature-gates-that-are-removed) section to a dedicated page titled
[Feature Gates (removed)](/docs/reference/command-line-tools-reference/feature-gates-removed/), including its description.
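
For example, reusing the stage history from the snippet above, the front matter of a removed
gate's description file might look like this sketch:

```yaml
# Hypothetical front matter after the feature gate has been removed entirely
removed: true                       # moves the gate to the "Feature Gates (removed)" page
stages:
  - stage: alpha
    defaultValue: false
    fromVersion: "1.12"
    toVersion: "1.12"
  - stage: beta
    defaultValue: true
    fromVersion: "1.13"
    toVersion: "1.18"
  - stage: stable
    defaultValue: true
    fromVersion: "1.19"
    toVersion: "1.27"
```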
|
||||
|
||||
### All PRs reviewed and ready to merge
|
||||
|
||||
If your PR has not yet been merged into the `dev-{{< skew nextMinorVersion >}}` branch by the release deadline, work with the
|
||||
|
|
|
@ -6,9 +6,12 @@ weight: 20
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
SIG Docs [approvers](/docs/contribute/participate/roles-and-responsibilities/#approvers) take week-long shifts [managing pull requests](https://github.com/kubernetes/website/wiki/PR-Wranglers) for the repository.
|
||||
SIG Docs [approvers](/docs/contribute/participate/roles-and-responsibilities/#approvers)
|
||||
take week-long shifts [managing pull requests](https://github.com/kubernetes/website/wiki/PR-Wranglers)
|
||||
for the repository.
|
||||
|
||||
This section covers the duties of a PR wrangler. For more information on giving good reviews, see [Reviewing changes](/docs/contribute/review/).
|
||||
This section covers the duties of a PR wrangler. For more information on giving good reviews,
|
||||
see [Reviewing changes](/docs/contribute/review/).
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -16,17 +19,23 @@ This section covers the duties of a PR wrangler. For more information on giving
|
|||
|
||||
Each day in a week-long shift as PR Wrangler:
|
||||
|
||||
- Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata.
|
||||
- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality and adherence to the [Style](/docs/contribute/style/style-guide/) and [Content](/docs/contribute/style/content-guide/) guides.
|
||||
- Triage and tag incoming issues daily. See
|
||||
[Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
|
||||
for guidelines on how SIG Docs uses metadata.
|
||||
- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality
|
||||
and adherence to the [Style](/docs/contribute/style/style-guide/) and
|
||||
[Content](/docs/contribute/style/content-guide/) guides.
|
||||
- Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`). Review as many PRs as you can.
|
||||
- Make sure PR contributors sign the [CLA](https://github.com/kubernetes/community/blob/master/CLA.md).
|
||||
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors that haven't signed the CLA to do so.
|
||||
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors
|
||||
that haven't signed the CLA to do so.
|
||||
- Provide feedback on changes and ask for technical reviews from members of other SIGs.
|
||||
- Provide inline suggestions on the PR for the proposed content changes.
|
||||
- If you need to verify content, comment on the PR and request more details.
|
||||
- Assign relevant `sig/` label(s).
|
||||
- If needed, assign reviewers from the `reviewers:` block in the file's front matter.
|
||||
- You can also tag a [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md) for a review by commenting `@kubernetes/<sig>-pr-reviews` on the PR.
|
||||
- You can also tag a [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md)
|
||||
for a review by commenting `@kubernetes/<sig>-pr-reviews` on the PR.
|
||||
- Use the `/approve` comment to approve a PR for merging. Merge the PR when ready.
|
||||
- PRs should have a `/lgtm` comment from another member before merging.
|
||||
- Consider accepting technically accurate content that doesn't meet the
|
||||
|
@ -48,11 +57,19 @@ These queries exclude localization PRs. All queries are against the main branch
|
|||
the PR and remind them that they can open it after signing the CLA.
|
||||
**Do not review PRs whose authors have not signed the CLA!**
|
||||
- [Needs LGTM](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge%2Fwork-in-progress+-label%3Ado-not-merge%2Fhold+label%3Alanguage%2Fen+-label%3Algtm):
|
||||
Lists PRs that need an LGTM from a member. If the PR needs technical review, loop in one of the reviewers suggested by the bot. If the content needs work, add suggestions and feedback in-line.
|
||||
Lists PRs that need an LGTM from a member. If the PR needs technical review,
|
||||
loop in one of the reviewers suggested by the bot. If the content needs work,
|
||||
add suggestions and feedback in-line.
|
||||
- [Has LGTM, needs docs approval](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge%2Fwork-in-progress+-label%3Ado-not-merge%2Fhold+label%3Alanguage%2Fen+label%3Algtm+):
|
||||
Lists PRs that need an `/approve` comment to merge.
|
||||
- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amain+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]).
|
||||
- [Not against the primary branch](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amain): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @<manager's_github-username>`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch.
|
||||
- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amain+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22):
|
||||
Lists PRs against the main branch with no clear blockers.
|
||||
(change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]).
|
||||
- [Not against the primary branch](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amain):
|
||||
If the PR is against a `dev-` branch, it's for an upcoming release. Assign the
|
||||
[docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles)
|
||||
using: `/assign @<manager's_github-username>`. If the PR is against an old branch,
|
||||
help the author figure out whether it's targeted against the best branch.
|
||||
|
||||
### Helpful Prow commands for wranglers
|
||||
|
||||
|
@ -74,30 +91,43 @@ Reviews and approvals are one tool to keep our PR queue short and current. Anoth
Close PRs where:
- The author hasn't signed the CLA for two weeks.
Authors can reopen the PR after signing the CLA. This is a low-risk way to make
sure nothing gets merged without a signed CLA.
- The author has not responded to comments or feedback in 2 or more weeks.
Don't be afraid to close pull requests. Contributors can easily reopen and resume works in progress.
Often a closure notice is what spurs an author to resume and finish their contribution.
To close a pull request, leave a `/close` comment on the PR.
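For example, a closing comment might pair a short explanation with the command; the wording below is only a sketch, and Prow acts on the `/close` line itself:

```
This PR has been inactive for several weeks, so I'm closing it for now.
Feel free to address the feedback and comment `/reopen` to pick it back up.
/close
```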
{{< note >}}
The [`k8s-triage-robot`](https://github.com/k8s-triage-robot) bot marks issues
as stale after 90 days of inactivity. After 30 more days it marks issues as rotten
and closes them. PR wranglers should close issues after 14-30 days of inactivity.
{{< /note >}}
## PR Wrangler shadow program
In late 2021, SIG Docs introduced the PR Wrangler Shadow Program.
The program was introduced to help new contributors understand the PR wrangling process.
### Become a shadow
- If you are interested in shadowing as a PR wrangler, please visit the
[PR Wranglers Wiki page](https://github.com/kubernetes/website/wiki/PR-Wranglers)
to see the PR wrangling schedule for this year and sign up.
- Kubernetes org members can edit the
[PR Wranglers Wiki page](https://github.com/kubernetes/website/wiki/PR-Wranglers)
and sign up to shadow an existing PR Wrangler for a week.
- Others can reach out on the [#sig-docs Slack channel](https://kubernetes.slack.com/messages/sig-docs)
to request shadowing an assigned PR Wrangler for a specific week. Feel free to reach out to
Brad Topol (`@bradtopol`) or one of the
[SIG Docs co-chairs/leads](https://github.com/kubernetes/community/tree/master/sig-docs#leadership).
- Once you've signed up to shadow a PR Wrangler, introduce yourself to the PR Wrangler on the
[Kubernetes Slack](https://slack.k8s.io).
@ -624,7 +624,7 @@ caption and the diagram referral.
flowchart
A[Diagram<br><br>Inline Mermaid or<br>SVG image files]
B[Diagram Caption<br><br>Add Figure Number. and<br>Caption Text]
C[Diagram Referral<br><br>Referenence Figure Number<br>in text]
C[Diagram Referral<br><br>Reference Figure Number<br>in text]
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
class A,B,C box
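Putting the three boxes above together, a rough sketch of the pattern (the `mermaid` shortcode usage, figure number, and caption wording are illustrative assumptions, not this guide's exact example):

```
{{</* mermaid */>}}
flowchart LR
  client[Client] --> svc[Service] --> pod[Pod]
{{</* /mermaid */>}}

Figure 1. Request flow from a client through a Service to a Pod.

As shown in Figure 1, traffic reaches the Pod by way of the Service.
```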
@ -49,6 +49,24 @@ Renders to:
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
## Feature gate description
In a Markdown page (`.md` file) on this site, you can add a shortcode to
display the description for a feature gate.
### Feature gate description demo
Below is a demo of the feature gate description snippet, which displays the
description of the `DryRun` feature gate.
```
{{</* feature-gate-description name="DryRun" */>}}
```
Renders to:
{{< feature-gate-description name="DryRun" >}}
## Glossary
There are two glossary shortcodes: `glossary_tooltip` and `glossary_definition`.
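For example, typical usage looks roughly like this (illustrative; the `term_id` must match an entry in the site's glossary, and the parameters are described in the rest of this section, which this diff does not show):

```
{{</* glossary_tooltip text="Pods" term_id="pod" */>}}
{{</* glossary_definition term_id="pod" length="all" */>}}
```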
@ -401,6 +419,7 @@ Renders to:
{{< latest-release-notes >}}
## {{% heading "whatsnext" %}}
* Learn about [Hugo](https://gohugo.io/).
@ -102,7 +102,7 @@ following cases (not an exhaustive list):
- The code is not generic enough for users to try out. As an example, you can
embed the YAML
file for creating a Pod which depends on a specific
[FlexVolume](/docs/concepts/storage/volumes/#flexvolume-deprecated) implementation.
[FlexVolume](/docs/concepts/storage/volumes/#flexvolume) implementation.
- The code is an incomplete example because its purpose is to highlight a
portion of a larger file. For example, when describing ways to
customize a [RoleBinding](/docs/reference/access-authn-authz/rbac/#role-binding-examples),
@ -48,9 +48,9 @@ cards:
button_path: /docs/reference
- name: contribute
title: Contribute to Kubernetes
description: Anyone can contribute, whether you're new to the project or you've been around a long time.
button: Find out how to help
button_path: /docs/contribute
description: Find out how you can help make Kubernetes better.
button: See Ways to Contribute
button_path: "/docs/contribute"
- name: training
title: "Training"
description: "Get certified in Kubernetes and make your cloud native projects successful!"