Merge branch 'main' into fix/zh-ha-format

pull/31500/head
FOWind 2022-02-09 14:34:37 +08:00
commit 270b6c006c
236 changed files with 6363 additions and 3035 deletions


@@ -11,7 +11,7 @@ STOP -- PLEASE READ!
GitHub is not the right place for support requests.
If you're looking for help, check [Server Fault](https://serverfault.com/questions/tagged/kubernetes).
You can also post your question on the [Kubernetes Slack](http://slack.k8s.io/) or the [Discuss Kubernetes](https://discuss.kubernetes.io/) forum.


@@ -19,10 +19,10 @@ CCEND=\033[0m
help: ## Show this help.
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
module-check: ## Check if all of the required submodules are correctly initialized.
@git submodule status --recursive | awk '/^[+-]/ {err = 1; printf "\033[31mWARNING\033[0m Submodule not initialized: \033[34m%s\033[0m\n",$$2} END { if (err != 0) print "You need to run \033[32mmake module-init\033[0m to initialize missing modules first"; exit err }' 1>&2
module-init: ## Initialize required submodules.
@echo "Initializing submodules..." 1>&2
@git submodule update --init --recursive --depth 1
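For example, a typical first run from the root of the `website` repository might look like this (a usage sketch, not part of the Makefile itself):

```bash
# Warn about any submodules that are not initialized yet
make module-check

# Fetch any missing submodules (shallow clones, as defined above)
make module-init
```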


@@ -5,9 +5,9 @@
In this repository you will find everything you need to build the [Kubernetes website and documentation](https://kubernetes.io/). We are very happy that you want to take part in contributing to it!
+ [Your contribution to the documentation](#twój-wkład-w-dokumentację)
+ [Information about language versions](#różne-wersje-językowe-readmemd)
## How to use this repository
You can run the website locally using Hugo (Extended version), or you can run it in a container environment. We strongly recommend using containers, because that keeps your local version consistent with what is on the official website.
@@ -22,14 +22,14 @@ To use this repository, you need the following installed locally:
Before you begin, install the necessary dependencies. Clone the repository and change into its directory:
```bash
git clone https://github.com/kubernetes/website.git
cd website
```
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we recommend pulling in the submodules and other dependencies by running the following command:
```bash
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
@@ -38,14 +38,14 @@ git submodule update --init --recursive --depth 1
To build and run the website inside a container environment, run the following commands:
```bash
make container-image
make container-serve
```
If you see errors, it probably means that the Hugo container does not have enough computing resources available. To fix this, increase the amount of CPU and memory available to Docker on your machine ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) and [Windows](https://docs.docker.com/docker-for-windows/#resources)).
To view the website, open <http://localhost:1313> in your browser. Every time you change the source files, Hugo automatically updates the website and refreshes the view in your browser.
## How to run a local copy of the website using Hugo?
@@ -59,13 +59,14 @@ npm ci
make serve
```
This starts a local Hugo server on port 1313. Open <http://localhost:1313> in your browser to view the website. Every time you change the source files, Hugo automatically updates the website and refreshes the view in your browser.
## Building the API reference documentation
Building the API reference documentation is described in the [English version of the README.md file](README.md#building-the-api-reference-pages).
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
For technical reasons, Hugo is distributed in two versions. The current website uses only the **Hugo Extended** version. On the [releases](https://github.com/gohugoio/hugo/releases) page, look for an archive with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
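For example, a quick way to check which build you have locally:

```bash
# The extended build reports a version string containing "extended";
# if that word is missing, you have the standard build
hugo version
```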
@@ -74,7 +75,7 @@ For technical reasons, Hugo is distributed in two versions. The current website
If you run `make serve` on macOS and see the following error:
```bash
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
@@ -104,19 +105,19 @@ sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
The approach described above should work for macOS Catalina and Mojave.
## Getting involved with SIG Docs
You can learn about the SIG Docs community and its meeting schedule on [its page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can contact the maintainers of this project via:
- [Slack chat](https://kubernetes.slack.com/messages/sig-docs)
- [Get an invite to this Slack workspace here](https://slack.k8s.io/)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
## Your contribution to the documentation
You can click the **Fork** button in the upper-right corner of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new *pull request* to let us know about it.
@@ -124,16 +125,16 @@ Once you create a pull request, one of the Kubernetes project reviewers will
It may also happen that more than one reviewer leaves comments, or that the review ends up being done by someone other than the reviewer who was assigned at the beginning.
In some cases, if necessary, the reviewer may additionally ask one of the technical reviewers for a review. The reviewers will do their best to respond promptly, but the actual response time depends on many factors.
You can find more information about contributing to the documentation on the following pages:
- [Contributing to the Kubernetes documentation](https://kubernetes.io/docs/contribute/)
- [Page content types](https://kubernetes.io/docs/contribute/style/page-content-types/)
- [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
- [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
## Different language versions of `README.md`
| Language | Language |
|---|---|
@@ -145,10 +146,10 @@ You can find more information about contributing to the documentation on the fo
| [Vietnamese](README-vi.md) | [Russian](README-ru.md) |
| [Italian](README-it.md) | [Ukrainian](README-uk.md) |
## Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/pl.md).
## Thank you!
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!


@@ -65,9 +65,9 @@ This will start the local Hugo server on port 1313. Open up your browser to <htt
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
To update the reference pages for a new Kubernetes release, follow these steps:
1. Pull in the `api-ref-generator` submodule:
```bash
git submodule update --init --recursive --depth 1
@@ -75,9 +75,9 @@ To update the reference pages for a new Kubernetes release (replace v1.20 in the
2. Update the Swagger specification:
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
3. In `api-ref-assets/config/`, adapt the files `toc.yaml` and `fields.yaml` to reflect the changes of the new release.


@@ -50,11 +50,9 @@
- securityContext
- name: Beta level
fields:
- ephemeralContainers
- preemptionPolicy
- overhead
- name: Alpha level
fields:
- ephemeralContainers
- name: Deprecated
fields:
- serviceAccount
@@ -227,6 +225,9 @@
- stdin
- stdinOnce
- tty
- name: Security context
fields:
- securityContext
- name: Not allowed
fields:
- ports
@@ -234,7 +235,6 @@
- lifecycle
- livenessProbe
- readinessProbe
- securityContext
- startupProbe
- definition: io.k8s.api.core.v1.ReplicationControllerSpec


@@ -215,10 +215,6 @@ body.td-404 main .error-details {
}
}
body > footer {
width: 100vw;
}
/* FOOTER */
footer {
background-color: #303030;
@@ -317,6 +313,12 @@ footer {
padding-top: 1.5rem !important;
top: 5rem !important;
@supports (position: sticky) {
position: sticky !important;
height: calc(100vh - 10rem);
overflow-y: auto;
}
#TableOfContents {
padding-top: 1rem;
}

assets/scss/_reset.scss Executable file → Normal file

assets/scss/_skin.scss Executable file → Normal file

@@ -42,12 +42,12 @@ Kubernetes is open source and gives you the freedom to run the infrastructure on-
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 16-20, 2022</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>


@@ -48,7 +48,7 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Attend KubeCon North America on October 24-28, 2022</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>


@@ -190,7 +190,8 @@ kubectl get configmap
No resources found in default namespace.
```
To sum things up, when there's an override owner reference from a child to a parent, deleting the parent deletes the children automatically. This is called `cascade`. The default for cascade is `true`; however, you can use the `--cascade=orphan` option for `kubectl delete` to delete an object and orphan its children. *Update: starting with kubectl v1.20, the default for cascade is `background`.*
In the following example, there is a parent and a child. Notice the owner references are still included. If I delete the parent using `--cascade=orphan`, the parent is deleted but the child still exists:
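A minimal sketch of that flow, using hypothetical ConfigMap names rather than the post's original objects:

```bash
# Delete the parent but orphan its children
kubectl delete configmap parent --cascade=orphan

# The child ConfigMap still exists, even though its owner is gone
kubectl get configmap child -o yaml
```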


@@ -138,4 +138,5 @@ Stay tuned for what comes next, and if you have any questions, comments or sugge
* Chat with us on the Kubernetes [Slack](http://slack.k8s.io/): [#cluster-api](https://kubernetes.slack.com/archives/C8TSNPY4T)
* Join the SIG Cluster Lifecycle [Google Group](https://groups.google.com/g/kubernetes-sig-cluster-lifecycle) to receive calendar invites and gain access to documents
* Join our [Zoom meeting](https://zoom.us/j/861487554), every Wednesday at 10:00 Pacific Time
* Check out the [ClusterClass quick-start](https://cluster-api.sigs.k8s.io/user/quick-start.html) for the Docker provider (CAPD) in the Cluster API book.
* _UPDATE_: Check out the [ClusterClass experimental feature](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/index.html) documentation in the Cluster API book.


@@ -67,8 +67,11 @@ As you can see in the above event messages, the affected Pod is not evicted imme
For our production clusters, we specify a lower time limit so as to avoid the impacted Pods serving traffic indefinitely. The *kube-exec-controller* internally sets and tracks a timer for each Pod that matches the associated TTL. Once the timer is up, the controller evicts that Pod using the K8s API. The eviction (rather than deletion) is to ensure service availability, since the cluster respects any configured [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) (PDB). Say a user has defined *x* number of Pods as critical in their PDB: the eviction (as requested by *kube-exec-controller*) does not continue when the target workload has fewer than *x* Pods running.
Here is a sequence diagram of the entire workflow described above:
{{< figure src="workflow-diagram.svg" alt="Workflow Diagram" class="diagram-medium" >}}
<!-- Mermaid Live Editor link - https://mermaid-js.github.io/mermaid-live-editor/edit/#pako:eNp9kjFPAzEMhf-KlalIbWd0QpUQdGJB3JrFTUyJmjhHzncFof53nGtpqYTYEuu958-Wv4zLnkxjenofiB09BtwWTJbRSS6QCLCHu01ZPdJIMXdUYNZTGYOjRd4zlRvLHRYJLnTIArvbtozV83TbAnZhUcVUrkXo04OU2I6uKu99Cn0fMsNDZik5Rm3SHntYTrRYrabUBl4GBmt2w4acRKAPcrBcLq0Bl1NC9pYnoRouHZopX9RX9aotddJeADaf4DDGwFuQN4IRY_Ao9bunzVvOO13COeYCcR9j3k-OCQDP9KfgC8TJsFbZIHSxnGljzp1lgKs2v9HXugMBwe2WPHTZ94CvottB6Ap5eg2s9cBaUnrLVEP_Yp5ynrOf3fxPV2V1lBOhmZtEJWHweiFfldQa1SWyptGnAuAQxRrLB5UOna6P1j7o4ZhGykBzg4Pk9pPdz_-oOR3ZsXj4BjrP5rU-->
![Sequence Diagram](/images/sequence_diagram.svg)
## A new kubectl plugin for better user experience
Our admission controller component works great for solving the container drift issue we had on the platform. It is also able to submit all related Events to the target Pod that has been affected. However, K8s clusters don't retain Events very long (the default retention period is one hour). We need to provide other ways for developers to get their Pod interaction activity. A [kubectl plugin](/docs/tasks/extend-kubectl/kubectl-plugins/) is a perfect choice for us to expose this information. We named our plugin `kubectl pi` (short for `pod-interaction`), and it provides two subcommands: `get` and `extend`.
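As a rough illustration of how a developer might use it (the Pod name here is made up, and exact arguments and flags vary by plugin release):

```bash
# Show the interaction and eviction info recorded for a Pod
kubectl pi get my-app-7d9f8c6b5-x2x4q

# Ask the controller for more time before that Pod is evicted
kubectl pi extend my-app-7d9f8c6b5-x2x4q
```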


@@ -0,0 +1,63 @@
---
layout: blog
title: "Spotlight on SIG Multicluster"
date: 2022-02-07
slug: sig-multicluster-spotlight-2022
canonicalUrl: https://www.kubernetes.dev/blog/2022/02/04/sig-multicluster-spotlight-2022/
---
**Authors:** Dewan Ahmed (Aiven) and Chris Short (AWS)
## Introduction
[SIG Multicluster](https://github.com/kubernetes/community/tree/master/sig-multicluster) is the SIG focused on how Kubernetes concepts are expanded and used beyond the cluster boundary. Historically, Kubernetes resources only interacted within that boundary - the KRU, or Kubernetes Resource Universe (not an actual Kubernetes concept). Kubernetes clusters, even now, don't really know anything about themselves or about other clusters. The absence of cluster identifiers is a case in point. With the growing adoption of multicloud and multicluster deployments, the work SIG Multicluster is doing is gaining a lot of attention. In this blog, [Jeremy Olmsted-Thompson, Google](https://twitter.com/jeremyot) and [Chris Short, AWS](https://twitter.com/ChrisShort) discuss the interesting problems SIG Multicluster is solving and how you can get involved. Their initials **JOT** and **CS** will be used for brevity.
## A summary of their conversation
**CS**: How long has the SIG Multicluster existed and how was the SIG in its infancy? How long have you been with this SIG?
**JOT**: I've been around for almost two years in SIG Multicluster. All I know about the infancy years is from the lore, but even in the early days it was always about solving this same problem. Early efforts have been things like [KubeFed](https://github.com/kubernetes-sigs/kubefed). I think there are still folks using KubeFed, but it's a smaller slice. Back then, I think people out there deploying large numbers of Kubernetes clusters were really not at a point where we had a ton of real, concrete use cases. Projects like KubeFed and [Cluster Registry](https://github.com/kubernetes-retired/cluster-registry) were developed around that time, and the need back then can be associated with these projects. The motivation for these projects was: how do we solve the problems that we think people are **going to have** when they start expanding to multiple clusters? Honestly, in some ways, it was trying to do too much at that time.
**CS**: How does KubeFed differ from the current state of SIG Multicluster? How does the **lore** differ from the **now**?
**JOT**: Yeah, it was like trying to get ahead of potential problems instead of addressing specific problems. I think towards the end of 2019, there was a slow down in SIG multicluster work and we kind of picked it back up with one of the most active recent projects that is the [SIG Multicluster services (MCS)](https://github.com/kubernetes-sigs/mcs-api).
Now this is the shift to solving real specific problems. For example,
> I've got workloads that are spread across multiple clusters and I need them to talk to each other.
Okay, that's very straightforward and we know that we need to solve that. To get started, let's make sure that these projects can work together on a common API so you get the same kind of portability that you get with Kubernetes.
There's a few implementations of the MCS API out there and more are being developed. But, we didn't build an implementation because depending on how you're deploying things there could be hundreds of implementations. As long as you only need the basic Multicluster service functionality, it'll just work on whatever background you want, whether it's Submariner, GKE, or a service mesh.
My favorite example of "then vs. now" is cluster ID. A few years ago, there was an effort to define a cluster ID. A lot of really good thought went into this concept, for example: how do we make a cluster ID unique across multiple clusters? How do we make this ID globally unique so it'll work in every context? Let's say there's an acquisition or merger of teams - do the cluster IDs still remain unique for those teams?
With Multicluster services, we found the need for an actual cluster ID, and it has a very specific need. To address this specific need, we're no longer considering every single Kubernetes cluster out there rather the ClusterSets - a grouping of clusters that work together in some kind of bounds. That's a much narrower scope than considering clusters everywhere in time and space. It also leaves flexibility for an implementer to define the boundary (a ClusterSet) beyond which this cluster ID will no longer be unique.
**CS**: How do you feel about the current state of SIG Multicluster versus where you're hoping to be in future?
**JOT**: There's a few projects that are kind of getting started, for example, Work API. In the future, I think that some common practices around how do we deploy things across clusters are going to develop.
> If I have clusters deployed in a bunch of different regions; what's the best way to actually do that?
The answer is, almost always, "it depends". Why are you doing this? Is it because there's some kind of compliance that makes you care about locality? Is it performance? Is it availability?
I think revisiting registry patterns will probably be a natural step after we have cluster IDs, that is, how do you actually associate these clusters together? Maybe you've got a distributed deployment that you run in your own data centers all over the world. I imagine that expanding the API in that space is going to be important as more multi cluster features develop. It really depends on what the community starts doing with these tools.
**CS**: In the early days of Kubernetes, we used to have a few large Kubernetes clusters, and now we're dealing with many small Kubernetes clusters - even multiple clusters for our own dev environments. How has this shift from a few large clusters to many small clusters affected the SIG? Has it accelerated the work or made it challenging in any way?
**JOT**: I think that it has created a lot of ambiguity that needs solving. Originally, you'd have a dev cluster, a staging cluster, and a prod cluster. When the multi-region thing came in, we started needing dev/staging/prod clusters per region. And then, sometimes clusters really need more isolation due to compliance or regulatory issues. Thus, we're ending up with a lot of clusters. I think figuring out the right balance of how many clusters you should actually have is important. The power of Kubernetes is being able to deploy a lot of things managed by a single control plane. So, it's not like every single workload that gets deployed should be in its own cluster. But I think it's pretty clear that we can't put every single workload in a single cluster.
**CS**: What are some of your favorite things about this SIG?
**JOT**: The complexity of the problems, the people, and the newness of the space. We don't have the right answers, and we have to figure this out. At the beginning, we couldn't even think about multiple clusters because there was no way to connect services across clusters. Now there is, and we're starting to tackle those problems. I think this is a really fun place to be because I expect the SIG is going to get a lot busier over the next couple of years. It's a very collaborative group, and we definitely would like more people to come join us, get involved, raise their problems, and bring their ideas.
**CS**: What do you think keeps people in this group? How has the pandemic affected you?
**JOT**: I think it definitely got a little bit quieter during the pandemic. But for the most part, it's a very distributed group, so whether you're calling in to our weekly meetings from a conference room or from your home, it doesn't make that huge of a difference. During the pandemic, a lot of people had time to focus on what's next for their scale and growth. I think that's what keeps people in the group - we have real problems that need to be solved, which are very new in this space. And it's fun :)
## Wrap up
**CS**: That's all we have for today. Thanks Jeremy for your time.
**JOT**: Thanks, Chris. Everybody is welcome at our [bi-weekly meetings](https://github.com/kubernetes/community/tree/master/sig-multicluster#meetings). We'd love as many people as possible to come, and we welcome all questions and all ideas. It's a new space, and it'd be great to grow the community.


@@ -20,7 +20,7 @@ case_study_details:
<h2>Solution</h2>
<p>In 2016, the company began moving their code from Heroku to containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.</p>
<h2>Impact</h2>
@@ -42,7 +42,7 @@ With the speed befitting a startup, Pear Deck delivered its first prototype to c
<p>On top of that, many of Pear Deck's customers are behind government firewalls and connect through Firebase, not Pear Deck's servers, making troubleshooting even more difficult.</p>
<p>The team began looking around for another solution, and finally decided in early 2016 to start moving the app from Heroku to containers running on <a href="https://cloud.google.com/kubernetes-engine/">Google Kubernetes Engine</a>, orchestrated by <a href="http://kubernetes.io/">Kubernetes</a> and monitored with <a href="https://prometheus.io/">Prometheus</a>.</p>
{{< case-studies/quote image="/images/case-studies/peardeck/banner1.jpg" >}}
"When it became clear that Google Kubernetes Engine was going to have a lot of support from Google and be a fully-managed Kubernetes platform, it seemed very obvious to us that was the way to go," says Eynon-Lynch.


@@ -70,7 +70,7 @@ If you haven't set foot in a school in awhile, you might be surprised by what yo
<p>Recently, the team launched a new single sign-on solution for use in an internal application. "Due to the resource based architecture of the Kubernetes platform, we were able to bring that application into an entirely new production environment in less than a day, most of that time used for testing after applying the already well-known resource definitions from staging to the new environment," says van den Bosch. "On a traditional VM this would have likely cost a day or two, and then probably a few weeks to iron out the kinks in our provisioning scripts as we apply updates."</p>
<p>Legacy applications are also being moved to Kubernetes. Not long ago, the team needed to set up a Java-based application for compiling and running a frontend. "On a traditional VM, it would have taken quite a bit of time to set it up and keep it up to date, not to mention maintenance for that setup down the line," says van den Bosch. Instead, it took less than half a day to containerize it and get it running on Kubernetes. "It was much easier, and we were able to save costs too because we didn't have to spin up new VMs specially for it."</p>
{{< case-studies/quote author="VICTOR VAN DEN BOSCH, SENIOR DEVOPS ENGINEER, PROWISE" >}}
"We're really trying to deliver integrated solutions with our hardware and software and making it as easy as possible for users to use and collaborate from different places," says van den Bosch. And, says Haalstra, "We cannot do it without Kubernetes."


@@ -46,7 +46,7 @@ Since it was started in a dorm room in 2003, Squarespace has made it simple for
After experimenting with another container orchestration platform and "breaking it in very painful ways," Lynch says, the team began experimenting with Kubernetes in mid-2016 and found that it "answered all the questions that we had."
{{< /case-studies/quote >}}
<p>Within a couple months, they had a stable cluster for their internal use, and began rolling out Kubernetes for production. They also added Zipkin and CNCF projects <a href="https://prometheus.io/">Prometheus</a> and <a href="https://www.fluentd.org/">fluentd</a> to their cloud native stack. "We switched to Kubernetes, a new world, and we revamped all our other tooling as well," says Lynch. "It allowed us to streamline our process, so we can now easily create an entire microservice project from templates, generate the code and deployment pipeline for that, generate the Dockerfile, and then immediately just ship a workable, deployable project to Kubernetes." Deployments across Dev/QA/Stage/Prod were also "simplified drastically," Lynch adds. "Now there is little configuration variation."</p>
<p>And the whole process takes only five minutes, an almost 85% reduction in time compared to their VM deployment. "From end to end that probably took half an hour, and that's not accounting for the fact that an infrastructure engineer would be responsible for doing that, so there's some business delay in there as well."</p>


@@ -58,9 +58,9 @@ How many people does it take to turn on a light bulb?
<p>In addition, Wink had other requirements: horizontal scalability, the ability to encrypt everything quickly, connections that could be easily brought back up if something went wrong. "Looking at this whole structure we started, we decided to make a secure socket-based service," says Klein. "We've always used, I would say, some sort of clustering technology to deploy our services and so the decision we came to was, this thing is going to be containerized, running on Docker."</p>
<p>In 2015, Docker wasn't yet widely used, but as Klein points out, "it was certainly understood by the people who were on the frontier of technology. We started looking at potential technologies that existed. One of the limiting factors was that we needed to deploy multi-port non-http/https services. It wasn't really appropriate for some of the early cluster technology. We liked the project a lot and we ended up using it on other stuff for a while, but initially it was too targeted toward http workloads."</p>
<p>Once Wink's backend engineering team decided on a containerized workload, they had to make decisions about the OS and the container orchestration platform. "Obviously you can't just start the containers and hope everything goes well," Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure."</p>
{{< case-studies/quote image="/images/case-studies/wink/banner4.jpg" >}}
"Obviously you can't just start the containers and hope everything goes well," Klein says with a laugh. "You need to have a system that is helpful [in order] to manage where the workloads are being distributed out to. And when the container inevitably dies or something like that, to restart it, you have a load balancer. All sorts of housekeeping work is needed to have a robust infrastructure."
@@ -68,7 +68,7 @@ How many people does it take to turn on a light bulb?
<p>Wink considered building directly on a general purpose Linux distro like Ubuntu (which would have required installing tools to run a containerized workload) and cluster management systems like Mesos (which was targeted toward enterprises with larger teams/workloads), but ultimately set their sights on CoreOS Container Linux. "A container-optimized Linux distribution system was exactly what we needed," he says. "We didn't have to futz around with trying to take something like a Linux distro and install everything. It's got a built-in container orchestration system, which is Fleet, and an easy-to-use API. It's not as feature-rich as some of the heavier solutions, but we realized that, at that moment, it was exactly what we needed."</p>
<p>Wink's hub (along with a revamped app) was introduced in July 2014 with a short-term deployment, and within the first month, they had moved the service to the containerized CoreOS deployment. Since then, they've moved almost every other piece of their infrastructure, from third-party cloud-to-cloud integrations to their customer service and payment portals, onto CoreOS Container Linux clusters.</p>
<p>Using this setup did require some customization. "Fleet is really nice as a basic container orchestration system, but it doesn't take care of routing, sharing configurations, secrets, et cetera, among instances of a service," Klein says. "All of those layers of functionality can be implemented, of course, but if you don't want to spend a lot of time writing unit files manually, which of course nobody does, you need to create a tool to automate some of that, which we did."</p>


@@ -42,21 +42,21 @@ Fairness feature enabled.
## Enabling/Disabling API Priority and Fairness
The API Priority and Fairness feature is controlled by a feature gate
and is enabled by default. See [Feature
Gates](/docs/reference/command-line-tools-reference/feature-gates/)
for a general explanation of feature gates and how to enable and
disable them. The name of the feature gate for APF is
"APIPriorityAndFairness". This feature also involves an {{<
glossary_tooltip term_id="api-group" text="API Group" >}} with: (a) a
`v1alpha1` version, disabled by default, and (b) `v1beta1` and
`v1beta2` versions, enabled by default. You can disable the feature
gate and API group beta versions by adding the following
command-line flags to your `kube-apiserver` invocation:
```shell
kube-apiserver \
--feature-gates=APIPriorityAndFairness=false \
--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=false,flowcontrol.apiserver.k8s.io/v1beta2=false \
# …and other flags as usual
```
@@ -127,86 +127,13 @@ any of the limitations imposed by this feature. These exemptions prevent an
improperly-configured flow control configuration from totally disabling an API
server.
## Defaults
The Priority and Fairness feature ships with a suggested configuration that
should suffice for experimentation; if your cluster is likely to
experience heavy load then you should consider what configuration will work
best. The suggested configuration groups requests into five priority
classes:
* The `system` priority level is for requests from the `system:nodes` group,
i.e. Kubelets, which must be able to contact the API server in order for
workloads to be able to schedule on them.
* The `leader-election` priority level is for leader election requests from
built-in controllers (in particular, requests for `endpoints`, `configmaps`,
or `leases` coming from the `system:kube-controller-manager` or
`system:kube-scheduler` users and service accounts in the `kube-system`
namespace). These are important to isolate from other traffic because failures
in leader election cause their controllers to fail and restart, which in turn
causes more expensive traffic as the new controllers sync their informers.
* The `workload-high` priority level is for other requests from built-in
controllers.
* The `workload-low` priority level is for requests from any other service
account, which will typically include all requests from controllers running in
Pods.
* The `global-default` priority level handles all other traffic, e.g.
interactive `kubectl` commands run by nonprivileged users.
Additionally, there are two PriorityLevelConfigurations and two FlowSchemas that
are built in and may not be overwritten:
* The special `exempt` priority level is used for requests that are not subject
to flow control at all: they will always be dispatched immediately. The
special `exempt` FlowSchema classifies all requests from the `system:masters`
group into this priority level. You may define other FlowSchemas that direct
other requests to this priority level, if appropriate.
* The special `catch-all` priority level is used in combination with the special
`catch-all` FlowSchema to make sure that every request gets some kind of
classification. Typically you should not rely on this catch-all configuration,
and should create your own catch-all FlowSchema and PriorityLevelConfiguration
(or use the `global-default` configuration that is installed by default) as
appropriate. To help catch configuration errors that miss classifying some
requests, the mandatory `catch-all` priority level only allows one concurrency
share and does not queue requests, making it relatively likely that traffic
that only matches the `catch-all` FlowSchema will be rejected with an HTTP 429
error.
## Health check concurrency exemption
The suggested configuration gives no special treatment to the health
check requests on kube-apiservers from their local kubelets --- which
tend to use the secured port but supply no credentials. With the
suggested config, these requests get assigned to the `global-default`
FlowSchema and the corresponding `global-default` priority level,
where other traffic can crowd them out.
If you add the following additional FlowSchema, this exempts those
requests from rate limiting.
{{< caution >}}
Making this change also allows any hostile party to then send
health-check requests that match this FlowSchema, at any volume they
like. If you have a web traffic filter or similar external security
mechanism to protect your cluster's API server from general internet
traffic, you can configure rules to block any health check requests
that originate from outside your cluster.
{{< /caution >}}
{{< codenew file="priority-and-fairness/health-for-strangers.yaml" >}}
## Resources
The flow control API involves two kinds of resources.
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1beta2-flowcontrol-apiserver-k8s-io)
define the available isolation classes, the share of the available concurrency
budget that each can handle, and allow for fine-tuning queuing behavior.
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1beta2-flowcontrol-apiserver-k8s-io)
are used to classify individual inbound requests, matching each to a
single PriorityLevelConfiguration. There is also a `v1alpha1` version
of the same API group, and it has the same Kinds with the same syntax and
@@ -329,6 +256,153 @@ omitted entirely), in which case all requests matched by this FlowSchema will be
considered part of a single flow. The correct choice for a given FlowSchema
depends on the resource and your particular environment.
## Defaults
Each kube-apiserver maintains two sorts of APF configuration objects:
mandatory and suggested.
### Mandatory Configuration Objects
The four mandatory configuration objects reflect fixed built-in
guardrail behavior. This is behavior that the servers have before
those objects exist, and when those objects exist their specs reflect
this behavior. The four mandatory objects are as follows.
* The mandatory `exempt` priority level is used for requests that are
not subject to flow control at all: they will always be dispatched
immediately. The mandatory `exempt` FlowSchema classifies all
requests from the `system:masters` group into this priority
level. You may define other FlowSchemas that direct other requests
to this priority level, if appropriate.
* The mandatory `catch-all` priority level is used in combination with
the mandatory `catch-all` FlowSchema to make sure that every request
gets some kind of classification. Typically you should not rely on
this catch-all configuration, and should create your own catch-all
FlowSchema and PriorityLevelConfiguration (or use the suggested
`global-default` priority level that is installed by default) as
appropriate. Because it is not expected to be used normally, the
mandatory `catch-all` priority level has a very small concurrency
share and does not queue requests.
### Suggested Configuration Objects
The suggested FlowSchemas and PriorityLevelConfigurations constitute a
reasonable default configuration. You can modify these and/or create
additional configuration objects if you want. If your cluster is
likely to experience heavy load then you should consider what
configuration will work best.
The suggested configuration groups requests into six priority levels:
* The `node-high` priority level is for health updates from nodes.
* The `system` priority level is for non-health requests from the
`system:nodes` group, i.e. Kubelets, which must be able to contact
the API server in order for workloads to be able to schedule on
them.
* The `leader-election` priority level is for leader election requests from
built-in controllers (in particular, requests for `endpoints`, `configmaps`,
or `leases` coming from the `system:kube-controller-manager` or
`system:kube-scheduler` users and service accounts in the `kube-system`
namespace). These are important to isolate from other traffic because failures
in leader election cause their controllers to fail and restart, which in turn
causes more expensive traffic as the new controllers sync their informers.
* The `workload-high` priority level is for other requests from built-in
controllers.
* The `workload-low` priority level is for requests from any other service
account, which will typically include all requests from controllers running in
Pods.
* The `global-default` priority level handles all other traffic, e.g.
interactive `kubectl` commands run by nonprivileged users.
The suggested FlowSchemas serve to steer requests into the above
priority levels, and are not enumerated here.
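To make the shape of these objects concrete, here is a minimal sketch of a `Limited` PriorityLevelConfiguration; the object name and the concurrency and queuing values are illustrative, not the exact specs that ship with the suggested configuration:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: PriorityLevelConfiguration
metadata:
  name: example-workload-low      # illustrative name, not the shipped object
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 100 # relative share of the server's concurrency budget
    limitResponse:
      type: Queue
      queuing:
        queues: 128
        queueLengthLimit: 50
        handSize: 6
```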
### Maintenance of the Mandatory and Suggested Configuration Objects
Each `kube-apiserver` independently maintains the mandatory and
suggested configuration objects, using initial and periodic behavior.
Thus, in a situation with a mixture of servers of different versions
there may be thrashing as long as different servers have different
opinions of the proper content of these objects.
Each `kube-apiserver` makes an initial maintenance pass over the
mandatory and suggested configuration objects, and after that does
periodic maintenance (once per minute) of those objects.
For the mandatory configuration objects, maintenance consists of
ensuring that the object exists and, if it does, has the proper spec.
The server refuses to allow a creation or update with a spec that is
inconsistent with the server's guardrail behavior.
Maintenance of suggested configuration objects is designed to allow
their specs to be overridden. Deletion, on the other hand, is not
respected: maintenance will restore the object. If you do not want a
suggested configuration object then you need to keep it around but set
its spec to have minimal consequences. Maintenance of suggested
objects is also designed to support automatic migration when a new
version of the `kube-apiserver` is rolled out, albeit potentially with
thrashing while there is a mixed population of servers.
Maintenance of a suggested configuration object consists of creating
it --- with the server's suggested spec --- if the object does not
exist. On the other hand, if the object already exists, maintenance behavior
depends on whether the `kube-apiservers` or the users control the
object. In the former case, the server ensures that the object's spec
is what the server suggests; in the latter case, the spec is left
alone.
The question of who controls the object is answered by first looking
for an annotation with key `apf.kubernetes.io/autoupdate-spec`. If
there is such an annotation and its value is `true` then the
kube-apiservers control the object. If there is such an annotation
and its value is `false` then the users control the object. If
neither of those conditions holds then the `metadata.generation` of the
object is consulted. If that is 1 then the kube-apiservers control
the object. Otherwise the users control the object. These rules were
introduced in release 1.22 and their consideration of
`metadata.generation` is for the sake of migration from the simpler
earlier behavior. Users who wish to control a suggested configuration
object should set its `apf.kubernetes.io/autoupdate-spec` annotation
to `false`.
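For example, a user who wants to keep a suggested object but stop the kube-apiservers from reconciling its spec could apply something like the following sketch; the spec shown is an illustrative customization, not the suggested default:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: PriorityLevelConfiguration
metadata:
  name: global-default              # a suggested object you want to control
  annotations:
    apf.kubernetes.io/autoupdate-spec: "false"   # users, not kube-apiservers, own the spec
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 20    # illustrative customized value
    limitResponse:
      type: Queue
      queuing:
        queues: 64
        queueLengthLimit: 50
        handSize: 6
```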
Maintenance of a mandatory or suggested configuration object also
includes ensuring that it has an `apf.kubernetes.io/autoupdate-spec`
annotation that accurately reflects whether the kube-apiservers
control the object.
Maintenance also includes deleting objects that are neither mandatory
nor suggested but are annotated
`apf.kubernetes.io/autoupdate-spec=true`.
## Health check concurrency exemption
The suggested configuration gives no special treatment to the health
check requests on kube-apiservers from their local kubelets --- which
tend to use the secured port but supply no credentials. With the
suggested config, these requests get assigned to the `global-default`
FlowSchema and the corresponding `global-default` priority level,
where other traffic can crowd them out.
If you add the following additional FlowSchema, this exempts those
requests from rate limiting.
{{< caution >}}
Making this change also allows any hostile party to then send
health-check requests that match this FlowSchema, at any volume they
like. If you have a web traffic filter or similar external security
mechanism to protect your cluster's API server from general internet
traffic, you can configure rules to block any health check requests
that originate from outside your cluster.
{{< /caution >}}
{{< codenew file="priority-and-fairness/health-for-strangers.yaml" >}}
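The shortcode above pulls in a manifest from the examples tree. Approximately, it defines a FlowSchema like the following sketch, which steers unauthenticated health-check requests to the `exempt` priority level (treat the exact field values here as illustrative):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: FlowSchema
metadata:
  name: health-for-strangers
spec:
  matchingPrecedence: 1000
  priorityLevelConfiguration:
    name: exempt                    # never queued or rejected
  rules:
  - nonResourceRules:
    - nonResourceURLs:
      - "/healthz"
      - "/livez"
      - "/readyz"
      verbs:
      - "*"
    subjects:
    - kind: Group
      group:
        name: "system:unauthenticated"
```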
## Diagnostics
Every HTTP response from an API server with the priority and fairness feature
@@ -12,7 +12,9 @@ weight: 60
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.
For example, you may want to access your application's logs if a container crashes, a pod gets evicted, or a node dies.
In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level logging_.
<!-- body -->
@@ -141,7 +143,7 @@ as a `DaemonSet`.
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
@@ -709,13 +709,13 @@ Allocated resources:
680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)
```
In the preceding output, you can see that if a Pod requests more than 1.120 CPUs
or more than 6.23Gi of memory, that Pod will not fit on the node.
By looking at the “Pods” section, you can see which Pods are taking up space on
the node.
The amount of resources available to Pods is less than the node capacity because
system daemons use a portion of the available resources. Within the Kubernetes API,
each Node has a `.status.allocatable` field
(see [NodeStatus](/docs/reference/kubernetes-api/cluster-resources/node-v1/#NodeStatus)
@@ -736,7 +736,7 @@ prevent one team from using so much of any resource that this over-use affects o
You should also consider what access you grant to that namespace:
**full** write access to a namespace allows someone with that access to remove any
resource, including a configured ResourceQuota.
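As a reminder of what such an object looks like, here is a minimal ResourceQuota sketch; the name, namespace, and amounts are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota      # illustrative name
  namespace: team-a        # illustrative namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requested by all Pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```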
### My container is terminated
@@ -35,13 +35,12 @@ The Pod name and namespace are available as environment variables through the
[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/).
User defined environment variables from the Pod definition are also available to the Container,
as are any environment variables specified statically in the container image.
### Cluster information
A list of all services that were running when a Container was created is available to that Container as environment variables.
This list is limited to services within the same namespace as the new Container's Pod and Kubernetes control plane services.
Those environment variables match the syntax of Docker links.
For a service named *foo* that maps to a Container named *bar*,
the following variables are defined:
@@ -105,22 +105,22 @@ The logs for a Hook handler are not exposed in Pod events.
If a handler fails for some reason, it broadcasts an event.
For `PostStart`, this is the `FailedPostStartHook` event,
and for `PreStop`, this is the `FailedPreStopHook` event.
To generate a `FailedPostStartHook` event yourself, modify the [lifecycle-events.yaml](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/lifecycle-events.yaml) file to change the postStart command to "badcommand" and apply it.
Here is some example output of the resulting events you see from running `kubectl describe pod lifecycle-demo`:
```
Events:
Type     Reason               Age              From               Message
----     ------               ----             ----               -------
Normal   Scheduled            7s               default-scheduler  Successfully assigned default/lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2...
Normal   Pulled               6s               kubelet            Successfully pulled image "nginx" in 229.604315ms
Normal   Pulling              4s (x2 over 6s)  kubelet            Pulling image "nginx"
Normal   Created              4s (x2 over 5s)  kubelet            Created container lifecycle-demo-container
Normal   Started              4s (x2 over 5s)  kubelet            Started container lifecycle-demo-container
Warning  FailedPostStartHook  4s (x2 over 5s)  kubelet            Exec lifecycle hook ([badcommand]) for Container "lifecycle-demo-container" in Pod "lifecycle-demo_default(30229739-9651-4e5a-9a32-a8f1688862db)" failed - error: command 'badcommand' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \"badcommand\": executable file not found in $PATH: unknown\r\n"
Normal   Killing              4s (x2 over 5s)  kubelet            FailedPostStartHook
Normal   Pulled               4s               kubelet            Successfully pulled image "nginx" in 215.66395ms
Warning  BackOff              2s (x2 over 3s)  kubelet            Back-off restarting failed container
```
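For reference, here is a minimal sketch of the modified Pod manifest that produces the events above. It reuses the Pod and container names shown in the output; the referenced lifecycle-events.yaml contains additional handlers, so treat the details as illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Deliberately broken handler: "badcommand" is not in the image,
          # so the kubelet reports FailedPostStartHook and restarts the container.
          command: ["badcommand"]
```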
@@ -109,6 +109,11 @@ For more details on setting up CRI runtimes, see [CRI installation](/docs/setup/
#### dockershim
{{< feature-state for_k8s_version="v1.20" state="deprecated" >}}
Dockershim is deprecated as of Kubernetes v1.20, and will be removed in v1.24. For more information on the deprecation,
see [dockershim deprecation](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation).
RuntimeClasses with dockershim must set the runtime handler to `docker`. Dockershim does not support
custom configurable runtime handlers.
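A minimal RuntimeClass for dockershim would therefore look roughly like the sketch below; the object name is illustrative, and only the `docker` handler value is significant:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: dockershim          # illustrative name
handler: docker             # dockershim accepts only this handler value
```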
@@ -101,6 +101,21 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
As a result, all namespace names must be valid
[RFC 1123 DNS labels](/docs/concepts/overview/working-with-objects/names/#dns-label-names).
{{< warning >}}
If you create namespaces with the same name as [public top-level
domains](https://data.iana.org/TLD/tlds-alpha-by-domain.txt), Services in these
namespaces can have short DNS names that overlap with public DNS records.
Workloads from any namespace performing a DNS lookup without a [trailing dot](https://datatracker.ietf.org/doc/html/rfc1034#page-8) will
be redirected to those services, taking precedence over public DNS.
To mitigate this, limit privileges for creating namespaces to trusted users. If
required, you could additionally configure third-party security controls, such
as [admission
webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/),
to block creating any namespace with the name of [public
TLDs](https://data.iana.org/TLD/tlds-alpha-by-domain.txt).
{{< /warning >}}
## Not All Objects are in a Namespace
Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are
@@ -11,7 +11,8 @@ weight: 30
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. It has been replaced by
[Pod Security Admission](/docs/concepts/security/pod-security-admission/). For more information on the deprecation,
see [PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/).
Pod Security Policies enable fine-grained authorization of pod creation and
@@ -305,34 +305,22 @@ fail validation.
<tr>
<td style="white-space: nowrap">Volume Types</td>
<td>
<p>The restricted policy only permits the following volume types.</p>
<p><strong>Restricted Fields</strong></p>
<ul>
<li><code>spec.volumes[*]</code></li>
</ul>
<p><strong>Allowed Values</strong></p>
Every item in the <code>spec.volumes[*]</code> list must set one of the following fields to a non-null value:
<ul>
<li><code>spec.volumes[*].configMap</code></li>
<li><code>spec.volumes[*].csi</code></li>
<li><code>spec.volumes[*].downwardAPI</code></li>
<li><code>spec.volumes[*].emptyDir</code></li>
<li><code>spec.volumes[*].ephemeral</code></li>
<li><code>spec.volumes[*].persistentVolumeClaim</code></li>
<li><code>spec.volumes[*].projected</code></li>
<li><code>spec.volumes[*].secret</code></li>
</ul>
</td>
</tr>
@@ -391,26 +379,6 @@ fail validation.
</ul>
</td>
</tr>
<tr>
<td style="white-space: nowrap">Non-root groups <em>(optional)</em></td>
<td>
<p>Containers should be forbidden from running with a root primary or supplementary GID.</p>
<p><strong>Restricted Fields</strong></p>
<ul>
<li><code>spec.securityContext.runAsGroup</code></li>
<li><code>spec.securityContext.supplementalGroups[*]</code></li>
<li><code>spec.securityContext.fsGroup</code></li>
<li><code>spec.containers[*].securityContext.runAsGroup</code></li>
<li><code>spec.initContainers[*].securityContext.runAsGroup</code></li>
<li><code>spec.ephemeralContainers[*].securityContext.runAsGroup</code></li>
</ul>
<p><strong>Allowed Values</strong></p>
<ul>
<li>Undefined/nil (except for <code>*.runAsGroup</code>)</li>
<li>Non-zero</li>
</ul>
</td>
</tr>
<tr>
<td style="white-space: nowrap">Seccomp (v1.19+)</td>
<td>
@@ -13,11 +13,9 @@ weight: 30
## The Kubernetes model for connecting containers
Now that you have a continuously running, replicated application you can expose it on a network.
Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.
This guide uses a simple nginx server to demonstrate proof of concept.
@@ -52,7 +50,7 @@ kubectl get pods -l run=my-nginx -o yaml | grep podIP
podIP: 10.244.2.5
```
You should be able to ssh into any node in your cluster and use a tool such as `curl` to make queries against both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP address for the Service. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so.
You can read more about the [Kubernetes Networking Model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) if you're curious.
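As a sketch of where this guide is heading, a Service that selects the nginx Pods labelled `run=my-nginx` (the label used in the command above) could look like this; the Service name and port values are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx            # illustrative name
spec:
  selector:
    run: my-nginx           # matches the label on the example Pods
  ports:
  - protocol: TCP
    port: 80                # port exposed on the Service's cluster IP
    targetPort: 80          # containerPort on the nginx Pods
```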
@@ -106,10 +106,9 @@ and the domain name for your cluster is `cluster.local`, then the Pod has a DNS
`172-17-0-3.default.pod.cluster.local`.
Any pods exposed by a Service have the following DNS resolution available:
`pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`.
### Pod's hostname and subdomain fields
@@ -48,6 +48,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
is an ingress controller driving [Kong Gateway](https://konghq.com/kong/).
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy).
* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
@@ -485,9 +485,7 @@ web traffic to the IP address of your Ingress controller can be matched without
virtual host being required.
For example, the following Ingress routes traffic
requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`, and any traffic whose request host header doesn't match `first.bar.com` and `second.bar.com` to `service3`.
{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}
@@ -9,7 +9,7 @@ weight: 45
<!-- overview -->
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
_Service Internal Traffic Policy_ enables internal traffic restrictions to only route
internal traffic to endpoints within the node the traffic originated from. The
@@ -20,9 +20,9 @@ cluster. This can help to reduce costs and improve performance.
## Using Service Internal Traffic Policy
The `ServiceInternalTrafficPolicy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is a Beta feature and enabled by default.
When the feature is enabled, you can enable the internal-only traffic policy for a
{{< glossary_tooltip text="Service" term_id="service" >}}, by setting its
`.spec.internalTrafficPolicy` to `Local`.
This tells kube-proxy to only use node local endpoints for cluster internal traffic.
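A minimal sketch of such a Service follows; the name, selector, and ports are illustrative, and only the `internalTrafficPolicy` field is significant here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # illustrative
spec:
  selector:
    app: my-app             # illustrative selector
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  internalTrafficPolicy: Local   # only route cluster-internal traffic to node-local endpoints
```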
@@ -450,10 +450,7 @@ variables and DNS.
### Environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables
for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature.
For example, the Service `redis-master` which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11, produces the following environment
@@ -687,21 +684,28 @@ The set of protocols that can be used for LoadBalancer type of Services is still
#### Disabling load balancer NodePort allocation {#load-balancer-nodeport-allocation}
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
You can optionally disable node port allocation for a Service of `type=LoadBalancer`, by setting
the field `spec.allocateLoadBalancerNodePorts` to `false`. This should only be used for load balancer implementations
that route traffic directly to pods as opposed to using node ports. By default, `spec.allocateLoadBalancerNodePorts`
is `true` and type LoadBalancer Services will continue to allocate node ports. If `spec.allocateLoadBalancerNodePorts`
is set to `false` on an existing Service with allocated node ports, those node ports will **not** be de-allocated automatically.
You must explicitly remove the `nodePorts` entry in every Service port to de-allocate those node ports.
Your cluster must have the `ServiceLBNodePortControl`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled to use this field.
For Kubernetes v{{< skew currentVersion >}}, this feature gate is enabled by default,
and you can use the `spec.allocateLoadBalancerNodePorts` field. For clusters running
other versions of Kubernetes, check the documentation for that release.
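As an illustration, a Service that opts out of node port allocation might look like this sketch (name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb          # illustrative
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false   # do not allocate node ports for this load balancer
  selector:
    app: example            # illustrative
  ports:
  - port: 80
    targetPort: 8080
```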
#### Specifying class of load balancer implementation {#load-balancer-class}
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
`spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default.
Your cluster must have the `ServiceLoadBalancerClass` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) enabled to use this field. For Kubernetes v{{< skew currentVersion >}}, this feature gate is enabled by default. For clusters running
other versions of Kubernetes, check the documentation for that release.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation if the cluster is configured with
a cloud provider using the `--cloud-provider` component flag.
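For example, a Service that hands load balancing to a non-default implementation could be sketched as follows; the class name is illustrative and must match whatever your load balancer controller watches for:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb-class    # illustrative
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-vip   # illustrative class; matched by a custom controller
  selector:
    app: example            # illustrative
  ports:
  - port: 443
    targetPort: 8443
```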
@@ -422,7 +422,7 @@ Helper programs relating to the volume type may be required for consumption of a
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. Read the glossary term [Quantity](/docs/reference/glossary/?all=true#term-quantity) to understand the units expected by `capacity`.
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
@@ -535,19 +535,19 @@ Not all Persistent Volume types support mount options.
The following volume types support mount options:
* `awsElasticBlockStore`
* `azureDisk`
* `azureFile`
* `cephfs`
* `cinder` (**deprecated** in v1.18)
* `gcePersistentDisk`
* `glusterfs`
* `iscsi`
* `nfs`
* `quobyte` (**deprecated** in v1.22)
* `rbd`
* `storageos` (**deprecated** in v1.22)
* `vsphereVolume`
Mount options are not validated. If a mount option is invalid, the mount fails.
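For example, mount options on an NFS-backed PersistentVolume could be declared as in this sketch (server, path, capacity, and the options themselves are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv      # illustrative
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - hard                    # passed through to the mount; not validated by Kubernetes
  - nfsvers=4.1
  nfs:
    server: nfs.example.com # illustrative server
    path: /exports/data     # illustrative export path
```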
@@ -842,6 +842,13 @@ Kubernetes marks a Deployment as _progressing_ when one of the following tasks i
* The Deployment is scaling down its older ReplicaSet(s).
* New Pods become ready or available (ready for at least [MinReadySeconds](#min-ready-seconds)).
When the rollout becomes “progressing”, the Deployment controller adds a condition with the following
attributes to the Deployment's `.status.conditions`:
* `type: Progressing`
* `status: "True"`
* `reason: NewReplicaSetCreated` | `reason: FoundNewReplicaSet` | `reason: ReplicaSetUpdated`
You can monitor the progress for a Deployment by using `kubectl rollout status`.
### Complete Deployment
@@ -853,6 +860,17 @@ updates you've requested have been completed.
* All of the replicas associated with the Deployment are available.
* No old replicas for the Deployment are running.
When the rollout becomes “complete”, the Deployment controller sets a condition with the following
attributes to the Deployment's `.status.conditions`:
* `type: Progressing`
* `status: "True"`
* `reason: NewReplicaSetAvailable`
This `Progressing` condition will retain a status value of `"True"` until a new rollout
is initiated. The condition holds even when availability of replicas changes (which
does instead affect the `Available` condition).
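As a sketch, the corresponding entry in `.status.conditions` for a completed rollout looks roughly like this (the message and timestamps are illustrative):

```yaml
status:
  conditions:
  - type: Progressing
    status: "True"
    reason: NewReplicaSetAvailable
    message: ReplicaSet "nginx-deployment-66b6c48dd5" has successfully progressed.   # illustrative
    lastUpdateTime: "2022-02-09T06:34:00Z"      # illustrative
    lastTransitionTime: "2022-02-09T06:34:00Z"  # illustrative
```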
You can check if a Deployment has completed by using `kubectl rollout status`. If the rollout completed
successfully, `kubectl rollout status` returns a zero exit code.
@@ -890,7 +908,7 @@ number of seconds the Deployment controller waits before indicating (in the Depl
Deployment progress has stalled.
The following `kubectl` command sets the spec with `progressDeadlineSeconds` to make the controller report
lack of progress of a rollout for a Deployment after 10 minutes:
```shell
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
```
@@ -902,15 +920,18 @@ deployment.apps/nginx-deployment patched
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following
attributes to the Deployment's `.status.conditions`:
* `type: Progressing`
* `status: "False"`
* `reason: ProgressDeadlineExceeded`
This condition can also fail early and is then set to a status value of `"False"` due to reasons such as `ReplicaSetCreateError`.
The deadline is no longer taken into account once the Deployment rollout completes.
See the [Kubernetes API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for more information on status conditions.
{{< note >}}
Kubernetes takes no action on a stalled Deployment other than to report a status condition with
`reason: ProgressDeadlineExceeded`. Higher level orchestrators can take advantage of it and act accordingly, for
example, roll back the Deployment to its previous version.
{{< /note >}}
@@ -984,7 +1005,7 @@ Conditions:
You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other
controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota
conditions and the Deployment controller then completes the Deployment rollout, you'll see the
Deployment's status update with a successful condition (`status: "True"` and `reason: NewReplicaSetAvailable`).
```
Conditions:
@@ -994,11 +1015,11 @@ Conditions:
Progressing True NewReplicaSetAvailable
```
`type: Available` with `status: "True"` means that your Deployment has minimum availability. Minimum availability is dictated
by the parameters specified in the deployment strategy. `type: Progressing` with `status: "True"` means that your Deployment
is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum
required new replicas are available (see the Reason of the condition for the particulars - in our case
`reason: NewReplicaSetAvailable` means that the Deployment is complete).
You can check if a Deployment has failed to progress by using `kubectl rollout status`. `kubectl rollout status`
returns a non-zero exit code if the Deployment has exceeded the progression deadline.
@@ -1155,8 +1176,8 @@ total number of Pods running at any time during the update is at most 130% of de
`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
to wait for your Deployment to progress before the system reports back that the Deployment has
[failed progressing](#failed-deployment) - surfaced as a condition with `type: Progressing`, `status: "False"`,
and `reason: ProgressDeadlineExceeded` in the status of the resource. The Deployment controller will keep
retrying the Deployment. This defaults to 600. In the future, once automatic rollback is implemented, the Deployment
controller will roll back a Deployment as soon as it observes such a condition.
@@ -313,7 +313,7 @@ ensures that a desired number of Pods with a matching label selector are availab
When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to
prioritize scaling down pods based on the following general algorithm:
1. Pending (and unschedulable) pods are scaled down first
2. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then
the pod with the lower value will come first.
3. Pods on nodes with more replicas come before pods on nodes with fewer replicas.
4. If the pods' creation times differ, the pod that was created more recently
@@ -266,7 +266,7 @@ Note that we recommend using Deployments instead of directly using Replica Sets,
### Deployment (Recommended)
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods. Deployments are recommended if you want the rolling update functionality because they are declarative, server-side, and have additional features.
### Bare Pods
@@ -12,11 +12,13 @@ Read more about shortcodes in the [Hugo documentation](https://gohugo.io/content
## Feature state
In a Markdown page (`.md` file) on this site, you can add a shortcode to
display version and state of the documented feature.
### Feature state demo
Below is a demo of the feature state snippet, which displays the feature as
stable in the latest Kubernetes version.
```
{{</* feature-state state="stable" */>}}
@@ -50,16 +52,22 @@ Renders to:
There are two glossary shortcodes: `glossary_tooltip` and `glossary_definition`.
You can reference glossary terms with an inclusion that automatically updates
and replaces content with the relevant links from [our glossary](/docs/reference/glossary/).
When the glossary term is moused-over, the glossary entry displays a tooltip.
The glossary term also displays as a link.
As well as inclusions with tooltips, you can reuse the definitions from the glossary in
page content.
The raw data for glossary terms is stored at
[the glossary directory](https://github.com/kubernetes/website/tree/main/content/en/docs/reference/glossary),
with a content file for each glossary term.
### Glossary demo
For example, the following include within the Markdown renders to
{{< glossary_tooltip text="cluster" term_id="cluster" >}} with a tooltip:
```
{{</* glossary_tooltip text="cluster" term_id="cluster" */>}}
@@ -85,7 +93,9 @@ which renders as:
## Links to API Reference
You can link to a page of the Kubernetes API reference using the
`api-reference` shortcode, for example to the
{{< api-reference page="workload-resources/pod-v1" >}} reference:
```
{{</* api-reference page="workload-resources/pod-v1" */>}}
@@ -94,7 +104,10 @@ You can link to a page of the Kubernetes API reference using the `api-reference`
The content of the `page` parameter is the suffix of the URL of the API reference page.
You can link to a specific place in a page by specifying an `anchor`
parameter, for example to the {{< api-reference page="workload-resources/pod-v1" anchor="PodSpec" >}}
reference or the {{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" >}}
section of the page:
```
{{</* api-reference page="workload-resources/pod-v1" anchor="PodSpec" */>}}
@@ -102,17 +115,20 @@ You can link to a specific place into a page by specifying an `anchor` parameter
```
You can change the text of the link by specifying a `text` parameter, for
example by linking to the
{{< api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="Environment Variables">}}
section of the page:
```
{{</* api-reference page="workload-resources/pod-v1" anchor="environment-variables" text="Environment Variable" */>}}
```
## Table captions
You can make tables more accessible to screen readers by adding a table caption. To add a
[caption](https://www.w3schools.com/tags/tag_caption.asp) to a table,
enclose the table with a `table` shortcode and specify the caption with the `caption` parameter.
{{< note >}}
Table captions are visible to screen readers but invisible when viewed in standard HTML.
@@ -138,7 +154,8 @@ Parameter | Description | Default
`logLevel` | The log level for log output | `INFO`
{{< /table >}}
If you inspect the HTML for the table, you should see this element immediately
after the opening `<table>` element:
```html
<caption style="display: none;">Configuration parameters</caption>
@@ -146,14 +163,25 @@ If you inspect the HTML for the table, you should see this element immediately a
## Tabs
In a markdown page (`.md` file) on this site, you can add a tab set to display
multiple flavors of a given solution.
The `tabs` shortcode takes these parameters:
* `name`: The name as shown on the tab.
* `codelang`: If you provide inner content to the `tab` shortcode, you can tell Hugo
what code language to use for highlighting.
* `include`: The file to include in the tab. If the tab lives in a Hugo
[leaf bundle](https://gohugo.io/content-management/page-bundles/#leaf-bundles),
the file -- which can be any MIME type supported by Hugo -- is looked up in the bundle itself.
If not, the content page that needs to be included is looked up relative to the current page.
Note that with the `include`, you do not have any shortcode inner content and must use the
self-closing syntax. For example,
`{{</* tab name="Content File #1" include="example1" /*/>}}`. The language needs to be specified
under `codelang` or the language is taken based on the file name.
Non-content files are code-highlighted by default.
* If your inner content is markdown, you must use the `%`-delimiter to surround the tab.
For example, `{{%/* tab name="Tab 1" %}}This is **markdown**{{% /tab */%}}`
* You can combine the variations mentioned above inside a tab set. * You can combine the variations mentioned above inside a tab set.
Below is a demo of the tabs shortcode. Below is a demo of the tabs shortcode.
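A minimal sketch of such a tab set is shown next; the tab names and inner content are placeholders, and the shortcode delimiters are written without the comment-escaping used in the inline examples above:

```
{{< tabs name="tab_example" >}}
{{% tab name="Markdown tab" %}}
This tab renders **markdown** inner content.
{{% /tab %}}
{{< tab name="Code tab" codelang="bash" >}}
echo "This tab is highlighted as bash."
{{< /tab >}}
{{< /tabs >}}
```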
@ -288,13 +316,17 @@ The two most commonly used version parameters are `latest` and `version`.
### `{{</* param "version" */>}}`

The `{{</* param "version" */>}}` shortcode generates the value of the current
version of the Kubernetes documentation from the `version` site parameter. The
`param` shortcode accepts the name of one site parameter, in this case:
`version`.

{{< note >}}
In previously released documentation, `latest` and `version` parameter values
are not equivalent. After a new version is released, `latest` is incremented
and the value of `version` for the documentation set remains unchanged. For
example, a previously released version of the documentation displays `version`
as `v1.19` and `latest` as `v1.20`.
{{< /note >}}

Renders to:
@ -313,7 +345,8 @@ Renders to:
### `{{</* latest-semver */>}}`

The `{{</* latest-semver */>}}` shortcode generates the value of `latest`
without the "v" prefix.

Renders to:
@ -330,8 +363,9 @@ Renders to:
### `{{</* latest-release-notes */>}}`

The `{{</* latest-release-notes */>}}` shortcode generates a version string
from `latest` and removes the "v" prefix. The shortcode prints a new URL for
the release note CHANGELOG page with the modified version string.

Renders to:
@ -344,3 +378,4 @@ Renders to:
* Learn about [page content types](/docs/contribute/style/page-content-types/).
* Learn about [opening a pull request](/docs/contribute/new-content/open-a-pr/).
* Learn about [advanced contributing](/docs/contribute/advanced/).

View File

@ -25,7 +25,8 @@ on each Kubernetes component.
Each Kubernetes component lets you enable or disable a set of feature gates that
are relevant to that component.
Use the `-h` flag to see a full set of feature gates for all components.
To set feature gates for a component, such as kubelet, use the `--feature-gates`
flag assigned to a list of feature pairs:

```shell
--feature-gates="...,GracefulNodeShutdown=true"
```
@ -562,7 +563,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.
- `APIServerIdentity`: Assign each API server an ID in a cluster.
- `APIServerTracing`: Add support for distributed tracing in the API server.
- `Accelerators`: Provided an early form of plugin to enable Nvidia GPU support when using
  Docker Engine; no longer available. See
  [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) for
  an alternative.
- `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug-application-cluster/audit/#advanced-audit)
- `AffinityInAnnotations`: Enable setting
  [Pod affinity or anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
@ -571,8 +575,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
  kubelets on Pod log requests.
- `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
  {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `AppArmor`: Enable use of AppArmor mandatory access control for Pods running on Linux nodes.
  See [AppArmor Tutorial](/docs/tutorials/security/apparmor/) for more details.
- `AttachVolumeLimit`: Enable volume plugins to report limits on number of volumes
  that can be attached to a node.
  See [dynamic volume limits](/docs/concepts/storage/storage-limits/#dynamic-volume-limits) for more details.
@ -766,12 +770,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `EnableEquivalenceClassCache`: Enable the scheduler to cache equivalence of
  nodes when scheduling Pods.
- `EndpointSlice`: Enables EndpointSlices for more scalable and extensible
  network endpoints. See [Enabling EndpointSlices](/docs/concepts/services-networking/endpoint-slices/).
- `EndpointSliceNodeName`: Enables EndpointSlice `nodeName` field.
- `EndpointSliceProxying`: When enabled, kube-proxy running
  on Linux will use EndpointSlices as the primary data source instead of
  Endpoints, enabling scalability and performance improvements. See
  [Enabling Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/).
- `EndpointSliceTerminatingCondition`: Enables EndpointSlice `terminating` and `serving`
  condition fields.
- `EphemeralContainers`: Enable the ability to add
@ -1086,7 +1090,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `WindowsEndpointSliceProxying`: When enabled, kube-proxy running on Windows
  will use EndpointSlices as the primary data source instead of Endpoints,
  enabling scalability and performance improvements. See
  [Enabling Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/).
- `WindowsGMSA`: Enables passing of GMSA credential specs from pods to container runtimes.
- `WindowsHostProcessContainers`: Enables support for Windows HostProcess containers.
- `WindowsRunAsUserName` : Enable support for running applications in Windows containers

View File

@ -44,7 +44,7 @@ kubelet [flags]
<td colspan="2">--add-dir-header</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true, adds the file directory to the header of the log messages (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
@ -65,7 +65,7 @@ kubelet [flags]
<td colspan="2">--alsologtostderr</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Log to standard error as well as files (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
@ -90,10 +90,10 @@ kubelet [flags]
</tr>
<tr>
<td colspan="2">--authorization-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>AlwaysAllow</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -273,7 +273,7 @@ kubelet [flags]
</tr>
<tr>
<td colspan="2">--cpu-manager-policy-options mapStringString</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of options to fine-tune the behavior of the selected CPU Manager policy. If not supplied, keep the default behaviour. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
@ -290,7 +290,7 @@ kubelet [flags]
<td colspan="2">--docker-endpoint string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>unix:///var/run/docker.sock</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use this for the <code>docker</code> endpoint to communicate with. This docker-specific flag only works when container-runtime is set to <code>docker</code>. (DEPRECATED: will be removed along with dockershim.)</td>
</tr>
<tr>
@ -398,13 +398,6 @@ kubelet [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;">When set to <code>true</code>, hard eviction thresholds will be ignored while calculating node allocatable. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: will be removed in 1.23)</td>
</tr>
<tr>
<td colspan="2">--experimental-bootstrap-kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">DEPRECATED: Use <code>--bootstrap-kubeconfig</code></td>
</tr>
<tr>
<td colspan="2">--experimental-check-node-capabilities-before-mount</td>
</tr>
@ -416,7 +409,7 @@ kubelet [flags]
<td colspan="2">--experimental-kernel-memcg-notification</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Use kernelMemcgNotification configuration, this flag will be removed in 1.23. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -455,55 +448,63 @@ AllBeta=true|false (BETA - default=false)<br/>
AnyVolumeDataSource=true|false (ALPHA - default=false)<br/>
AppArmor=true|false (BETA - default=true)<br/>
CPUManager=true|false (BETA - default=true)<br/>
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br/>
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br/>
CPUManagerPolicyOptions=true|false (ALPHA - default=false)<br/>
CSIInlineVolume=true|false (BETA - default=true)<br/>
CSIMigration=true|false (BETA - default=true)<br/>
CSIMigrationAWS=true|false (BETA - default=false)<br/>
CSIMigrationAzureDisk=true|false (BETA - default=true)<br/>
CSIMigrationAzureFile=true|false (BETA - default=false)<br/>
CSIMigrationGCE=true|false (BETA - default=true)<br/>
CSIMigrationOpenStack=true|false (BETA - default=true)<br/>
CSIMigrationPortworx=true|false (ALPHA - default=false)<br/>
CSIMigrationvSphere=true|false (BETA - default=false)<br/>
CSIStorageCapacity=true|false (BETA - default=true)<br/>
CSIVolumeFSGroupPolicy=true|false (BETA - default=true)<br/>
CSIVolumeHealth=true|false (ALPHA - default=false)<br/>
CSRDuration=true|false (BETA - default=true)<br/>
ConfigurableFSGroupPolicy=true|false (BETA - default=true)<br/>
ControllerManagerLeaderMigration=true|false (BETA - default=true)<br/>
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br/>
CustomResourceValidationExpressions=true|false (ALPHA - default=false)<br/>
DaemonSetUpdateSurge=true|false (BETA - default=true)<br/>
DefaultPodTopologySpread=true|false (BETA - default=true)<br/>
DelegateFSGroupToCSIDriver=true|false (BETA - default=true)<br/>
DevicePlugins=true|false (BETA - default=true)<br/>
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)<br/>
DisableCloudProviders=true|false (ALPHA - default=false)<br/>
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)<br/>
DownwardAPIHugePages=true|false (BETA - default=true)<br/>
EfficientWatchResumption=true|false (BETA - default=true)<br/>
EndpointSliceTerminatingCondition=true|false (BETA - default=true)<br/>
EphemeralContainers=true|false (BETA - default=true)<br/>
ExpandCSIVolumes=true|false (BETA - default=true)<br/>
ExpandInUsePersistentVolumes=true|false (BETA - default=true)<br/>
ExpandPersistentVolumes=true|false (BETA - default=true)<br/>
ExpandedDNSConfig=true|false (ALPHA - default=false)<br/>
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)<br/>
GRPCContainerProbe=true|false (ALPHA - default=false)<br/>
GracefulNodeShutdown=true|false (BETA - default=true)<br/>
GracefulNodeShutdownBasedOnPodPriority=true|false (ALPHA - default=false)<br/>
HPAContainerMetrics=true|false (ALPHA - default=false)<br/>
HPAScaleToZero=true|false (ALPHA - default=false)<br/>
HonorPVReclaimPolicy=true|false (ALPHA - default=false)<br/>
IdentifyPodOS=true|false (ALPHA - default=false)<br/>
InTreePluginAWSUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginGCEUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br/>
InTreePluginRBDUnregister=true|false (ALPHA - default=false)<br>
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)<br/>
IndexedJob=true|false (BETA - default=true)<br/>
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)<br/>
JobReadyPods=true|false (ALPHA - default=false)<br/>
JobTrackingWithFinalizers=true|false (BETA - default=true)<br/>
KubeletCredentialProviders=true|false (ALPHA - default=false)<br/>
KubeletInUserNamespace=true|false (ALPHA - default=false)<br/>
KubeletPodResources=true|false (BETA - default=true)<br/>
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)<br/>
LocalStorageCapacityIsolation=true|false (BETA - default=true)<br/>
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)<br/>
LogarithmicScaleDown=true|false (BETA - default=true)<br/>
@ -513,16 +514,20 @@ MixedProtocolLBService=true|false (ALPHA - default=false)<br/>
NetworkPolicyEndPort=true|false (BETA - default=true)<br/>
NodeSwap=true|false (ALPHA - default=false)<br/>
NonPreemptingPriority=true|false (BETA - default=true)<br/>
OpenAPIEnums=true|false (ALPHA - default=false)<br/>
OpenAPIV3=true|false (ALPHA - default=false)<br/>
PodAffinityNamespaceSelector=true|false (BETA - default=true)<br/>
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br/>
PodDeletionCost=true|false (BETA - default=true)<br/>
PodOverhead=true|false (BETA - default=true)<br/>
PodSecurity=true|false (BETA - default=true)<br/>
PreferNominatedNode=true|false (BETA - default=true)<br/>
ProbeTerminationGracePeriod=true|false (BETA - default=false)<br/>
ProcMountType=true|false (ALPHA - default=false)<br/>
ProxyTerminatingEndpoints=true|false (ALPHA - default=false)<br/>
QOSReserved=true|false (ALPHA - default=false)<br/>
ReadWriteOncePod=true|false (ALPHA - default=false)<br/>
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br/>
RemainingItemCount=true|false (BETA - default=true)<br/>
RemoveSelfLink=true|false (BETA - default=true)<br/>
RotateKubeletServerCertificate=true|false (BETA - default=true)<br/>
@ -531,17 +536,18 @@ ServiceInternalTrafficPolicy=true|false (BETA - default=true)<br/>
ServiceLBNodePortControl=true|false (BETA - default=true)<br/>
ServiceLoadBalancerClass=true|false (BETA - default=true)<br/>
SizeMemoryBackedVolumes=true|false (BETA - default=true)<br/>
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)<br/>
StatefulSetMinReadySeconds=true|false (BETA - default=true)<br/>
StorageVersionAPI=true|false (ALPHA - default=false)<br/>
StorageVersionHash=true|false (BETA - default=true)<br/>
SuspendJob=true|false (BETA - default=true)<br/>
TopologyAwareHints=true|false (BETA - default=true)<br/>
TopologyManager=true|false (BETA - default=true)<br/>
VolumeCapacityPriority=true|false (ALPHA - default=false)<br/>
WinDSR=true|false (ALPHA - default=false)<br/>
WinOverlay=true|false (BETA - default=true)<br/>
WindowsHostProcessContainers=true|false (BETA - default=true)<br/>
csiMigrationRBD=true|false (ALPHA - default=false)<br/>
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
@ -682,7 +688,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td colspan="2">--kube-api-qps int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">QPS to use while talking with kubernetes API server. The number must be &gt;= 0. If 0 will use default QPS (5). Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -724,28 +730,28 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td colspan="2">--log-backtrace-at &lt;A string of format 'file:line'&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>":0"</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">When logging hits line <code><file>:<N></code>, emit a stack trace. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
<td colspan="2">--log-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If non-empty, write log files in this directory. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
<td colspan="2">--log-file string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If non-empty, use this log file. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
<td colspan="2">--log-file-max-size uint&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1800</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
@ -755,6 +761,20 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of seconds between log flushes.</td>
</tr>
<tr>
<td colspan="2">--log-json-info-buffer-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>'0'</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] In JSON format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--log-json-split-stream</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">[Experimental] In JSON format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>text</code></td>
</tr>
@ -766,7 +786,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td colspan="2">--logtostderr&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">log to standard error instead of files. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
@ -789,6 +809,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of HTTP headers to use when accessing the URL provided to <code>--manifest-url</code>. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. For example: <code>--manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful'</code> (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
<td colspan="2">--master-service-namespace string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>default</code></td>
</tr>
@ -898,7 +919,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td colspan="2">--one-output</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true, only write logs to their native severity level (vs also writing to each lower severity level). (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
@ -989,7 +1010,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td colspan="2">--register-node&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the API server. If <code>--kubeconfig</code> is not provided, this flag is irrelevant, as the Kubelet won't have an API server to register with. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -1003,7 +1024,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td colspan="2">--register-with-taints mapStringString</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the given list of taints (comma separated <code>&lt;key&gt;=&lt;value&gt;:&lt;effect&gt;</code>). No-op if <code>--register-node</code> is <code>false</code>. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -1090,14 +1111,6 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; Enable the use of <code>RuntimeDefault</code> as the default seccomp profile for all workloads. The <code>SeccompDefault</code> feature gate must be enabled to allow this flag, which is disabled by default.</td>
</tr>
<tr>
<td colspan="2">--seccomp-profile-root string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>/var/lib/kubelet/seccomp</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; Directory path for seccomp profiles. (DEPRECATED: will be removed in 1.23, in favor of using the <code><root-dir>/seccomp</code> directory)
</td>
</tr>
<tr>
<td colspan="2">--serialize-image-pulls&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td>
</tr>
@ -1109,28 +1122,28 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<td colspan="2">--skip-headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If <code>true</code>, avoid header prefixes in the log messages. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
<td colspan="2">--skip-log-headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If <code>true</code>, avoid headers when opening log files. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
<td colspan="2">--stderrthreshold int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">logs at or above this threshold go to stderr. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)</td>
</tr>
<tr>
<td colspan="2">--streaming-connection-idle-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>4h0m0s</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum time a streaming connection can be idle before the connection is automatically closed. <code>0</code> indicates no timeout. Example: <code>5m</code>. Note: All connections to the kubelet server have a maximum duration of 4 hours. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>
<tr>
@ -1174,7 +1187,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.<br/>
Preferred values:
TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384<br/>
Insecure values:
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
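Many of the flags above are marked deprecated in favor of the kubelet configuration file passed with `--config`. As a rough sketch (the field names `authorization`, `featureGates`, and `serializeImagePulls` are standard `KubeletConfiguration` fields, but the values shown are placeholders rather than recommendations), the equivalent declarative form looks like this:

```yaml
# kubelet-config.yaml, passed to the kubelet as --config=/path/to/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authorization:
  mode: Webhook                # replaces --authorization-mode
featureGates:
  GracefulNodeShutdown: true   # replaces --feature-gates entries
serializeImagePulls: true      # replaces --serialize-image-pulls
```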

View File

@ -19,8 +19,6 @@ This page describes Kubernetes security and disclosure information.
Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) group for emails about security and major API announcements.
You can also subscribe to an RSS feed of the above using [this link](https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50).
## Report a Vulnerability

We're extremely grateful for security researchers and users that report vulnerabilities to the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers.

View File

@ -489,6 +489,12 @@ PodSpec is a description of a pod.
### Beta level
- **ephemeralContainers** ([]<a href="{{< ref "../workload-resources/pod-v1#EphemeralContainer" >}}">EphemeralContainer</a>)
*Patch strategy: merge on key `name`*
List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.
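  For example, `kubectl debug` adds a debugging ephemeral container through this subresource; the pod name, image, and target container below are placeholders:

  ```shell
  # Attach an interactive ephemeral container to the existing pod "my-pod",
  # targeting the process namespace of its container "app".
  kubectl debug -it my-pod --image=busybox:1.28 --target=app
  ```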
- **preemptionPolicy** (string)
PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. This field is beta-level, gated by the NonPreemptingPriority feature-gate.
@ -497,15 +503,6 @@ PodSpec is a description of a pod.
Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md This field is beta-level as of Kubernetes v1.18, and is only honored by servers that enable the PodOverhead feature.
### Deprecated
@ -1220,83 +1217,9 @@ This is a beta feature available on clusters that haven't disabled the Ephemeral
Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.
### Security context
- **securityContext** (SecurityContext)
Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
@ -1415,6 +1338,83 @@ This is a beta feature available on clusters that haven't disabled the Ephemeral
The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
### Not allowed
- **ports** ([]ContainerPort)
*Patch strategy: merge on key `containerPort`*
*Map: unique values on keys `containerPort, protocol` will be kept during a merge*
Ports are not allowed for ephemeral containers.
<a name="ContainerPort"></a>
*ContainerPort represents a network port in a single container.*
- **ports.containerPort** (int32), required
Number of port to expose on the pod's IP address. This must be a valid port number, 0 \< x \< 65536.
- **ports.hostIP** (string)
What host IP to bind the external port to.
- **ports.hostPort** (int32)
Number of port to expose on the host. If specified, this must be a valid port number, 0 \< x \< 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.
- **ports.name** (string)
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name, and the name can be referred to by Services.
- **ports.protocol** (string)
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
Possible enum values:
- `"SCTP"` is the SCTP protocol.
- `"TCP"` is the TCP protocol.
- `"UDP"` is the UDP protocol.
- **resources** (ResourceRequirements)
Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.
<a name="ResourceRequirements"></a>
*ResourceRequirements describes the compute resource requirements.*
- **resources.limits** (map[string]<a href="{{< ref "../common-definitions/quantity#Quantity" >}}">Quantity</a>)
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- **resources.requests** (map[string]<a href="{{< ref "../common-definitions/quantity#Quantity" >}}">Quantity</a>)
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- **lifecycle** (Lifecycle)
Lifecycle is not allowed for ephemeral containers.
<a name="Lifecycle"></a>
*Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.*
- **lifecycle.postStart** (<a href="{{< ref "../workload-resources/pod-v1#LifecycleHandler" >}}">LifecycleHandler</a>)
PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
- **lifecycle.preStop** (<a href="{{< ref "../workload-resources/pod-v1#LifecycleHandler" >}}">LifecycleHandler</a>)
PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
- **livenessProbe** (<a href="{{< ref "../workload-resources/pod-v1#Probe" >}}">Probe</a>)
Probes are not allowed for ephemeral containers.
- **readinessProbe** (<a href="{{< ref "../workload-resources/pod-v1#Probe" >}}">Probe</a>)
Probes are not allowed for ephemeral containers.
- **startupProbe** (<a href="{{< ref "../workload-resources/pod-v1#Probe" >}}">Probe</a>)
Probes are not allowed for ephemeral containers.
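For contrast with the restrictions above, here is a minimal, hypothetical sketch of how `ports` and `resources` look on a regular (non-ephemeral) container; every name and value below is illustrative only, not taken from this reference:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # hypothetical name
spec:
  containers:
  - name: example-container    # hypothetical name
    image: nginx               # hypothetical image
    ports:
    - name: http               # an IANA_SVC_NAME, unique within the Pod
      containerPort: 80        # must satisfy 0 < x < 65536
      protocol: TCP            # UDP, TCP, or SCTP; defaults to TCP
    resources:
      requests:                # minimum compute resources required
        cpu: "250m"
        memory: "64Mi"
      limits:                  # maximum compute resources allowed
        cpu: "500m"
        memory: "128Mi"
```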

View File

@ -159,6 +159,20 @@ The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure tha
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
## volume.beta.kubernetes.io/storage-provisioner (deprecated)
Example: `volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath`
Used on: PersistentVolumeClaim
This annotation has been deprecated.
## volume.kubernetes.io/storage-provisioner
Used on: PersistentVolumeClaim
This annotation is added to PersistentVolumeClaims that require dynamic provisioning.
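As an illustration only (the claim name below is hypothetical, and the provisioner value simply reuses the one from the deprecated example above), the annotation ends up in the PVC metadata like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim          # hypothetical name
  annotations:
    # added by the control plane for a claim that is dynamically provisioned;
    # the value identifies the provisioner used by your storage driver
    volume.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```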
## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build}
Example: `node.kubernetes.io/windows-build=10.0.17763`

View File

@ -0,0 +1,4 @@
---
title: Node Reference Information
weight: 40
---

View File

@ -0,0 +1,39 @@
---
title: External Articles on dockershim Removal and on Using CRI-compatible Runtimes
content_type: reference
weight: 20
---
<!-- overview -->
This is a list of articles about:
- the Kubernetes project's deprecation and removal of _dockershim_
- using CRI-compatible container runtimes
<!-- body -->
## Primary sources
* [Kubernetes Blog: "Dockershim Deprecation FAQ", 2020/12/02](/blog/2020/12/02/dockershim-faq/)
* [Kubernetes Documentation: "Migrating from dockershim"](/docs/tasks/administer-cluster/migrating-from-dockershim/)
* [Kubernetes Documentation: "Container runtimes"](/docs/setup/production-environment/container-runtimes/)
* [Kubernetes enhancement issue: "Removing dockershim from kubelet" (`kubernetes/enhancements#2221`)](https://github.com/kubernetes/enhancements/issues/2221)
* [Kubernetes enhancement proposal: "KEP-2221: Removing dockershim from kubelet"](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2221-remove-dockershim/README.md)
* [Kubernetes Blog: "Dockershim removal is coming. Are you ready?", 2021/11/12](/blog/2021/11/12/are-you-ready-for-dockershim-removal/)
## Secondary sources
* [Docker.com blog: "What developers need to know about Docker, Docker Engine, and Kubernetes v1.20", 2020/12/04](https://www.docker.com/blog/what-developers-need-to-know-about-docker-docker-engine-and-kubernetes-v1-20/)
* [Tripwire.com: "How Dockershim's Forthcoming Deprecation Affects Your Kubernetes"](https://www.tripwire.com/state-of-security/security-data-protection/cloud/how-dockershim-forthcoming-deprecation-affects-your-kubernetes/)
* [Amazon EKS documentation: "Dockershim deprecation"](https://docs.aws.amazon.com/eks/latest/userguide/dockershim-deprecation.html)
* ["Google Open Source" channel on YouTube: "Learn Kubernetes with Google - Migrating from Dockershim to Containerd"](https://youtu.be/fl7_4hjT52g)
* [Mirantis Blog: "The Future of Dockershim is cri-dockerd", 2021/04/21](https://www.mirantis.com/blog/the-future-of-dockershim-is-cri-dockerd/)
* [Github.com: "Mirantis/cri-dockerd" repo](https://github.com/Mirantis/cri-dockerd)

View File

@ -6,6 +6,9 @@ weight: 90
`kubeadm kubeconfig` provides utilities for managing kubeconfig files.
For examples of how to use `kubeadm kubeconfig user`, see
[Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#kubeconfig-additional-users).
## kubeadm kubeconfig {#cmd-kubeconfig}
{{< tabs name="tab-kubeconfig" >}}

View File

@ -12,16 +12,17 @@ Kubernetes contains several tools to help you work with the Kubernetes system.
<!-- body -->
## Minikube ## crictl
[`minikube`](https://minikube.sigs.k8s.io/docs/) is a tool that [`crictl`](https://github.com/kubernetes-sigs/cri-tools) is a command-line
runs a single-node Kubernetes cluster locally on your workstation for interface for inspecting and debugging {{<glossary_tooltip term_id="cri" text="CRI">}}-compatible
development and testing purposes. container runtimes.
## Dashboard
[`Dashboard`](/docs/tasks/access-application-cluster/web-ui-dashboard/), the web-based user interface of Kubernetes, allows you to deploy containerized applications
to a Kubernetes cluster, troubleshoot them, and manage the cluster and its
resources itself.
## Helm
{{% thirdparty-content single="true" %}}
@ -65,3 +66,9 @@ Kui lets you:
* Query a {{< glossary_tooltip text="Job" term_id="job">}} and see its execution rendered
as a waterfall diagram
* Click through resources in your cluster using a tabbed UI
## Minikube
[`minikube`](https://minikube.sigs.k8s.io/docs/) is a tool that
runs a single-node Kubernetes cluster locally on your workstation for
development and testing purposes.

View File

@ -0,0 +1,76 @@
---
title: Mapping from dockercli to crictl
content_type: reference
---
{{% thirdparty-content %}}
{{<note>}}
This page is deprecated and will be removed in Kubernetes 1.27.
{{</note>}}
`crictl` is a command-line interface for {{<glossary_tooltip term_id="cri" text="CRI">}}-compatible container runtimes.
You can use it to inspect and debug container runtimes and applications on a
Kubernetes node. `crictl` and its source are hosted in the
[cri-tools](https://github.com/kubernetes-sigs/cri-tools) repository.
This page provides a reference for mapping common commands for the `docker`
command-line tool into the equivalent commands for `crictl`.
## Mapping from docker CLI to crictl
The mapping table below is based on `docker` CLI v1.40 and `crictl` v1.19.0. The
list is not exhaustive; for example, it doesn't include experimental `docker`
CLI commands.
{{< note >}}
The output format of `crictl` is similar to that of the `docker` CLI, although
some columns are missing for certain commands. Make sure to check the output of
the specific command if you parse the output programmatically.
{{< /note >}}
### Retrieve debugging information
{{< table caption="mapping from docker cli to crictl - retrieve debugging information" >}}
docker cli | crictl | Description | Unsupported Features
-- | -- | -- | --
`attach` | `attach` | Attach to a running container | `--detach-keys`, `--sig-proxy`
`exec` | `exec` | Run a command in a running container | `--privileged`, `--user`, `--detach-keys`
`images` | `images` | List images |
`info` | `info` | Display system-wide information |
`inspect` | `inspect`, `inspecti` | Return low-level information on a container, image or task |
`logs` | `logs` | Fetch the logs of a container | `--details`
`ps` | `ps` | List containers |
`stats` | `stats` | Display a live stream of container(s) resource usage statistics | Column: NET/BLOCK I/O, PIDs
`version` | `version` | Show the runtime (Docker, ContainerD, or others) version information |
{{< /table >}}
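As a quick illustration of the mapping (a sketch only; the container ID is a placeholder, and depending on your setup you may need to run `crictl` as root):

```shell
# List running containers
docker ps
crictl ps

# Fetch the logs of a container (the ID is a placeholder)
docker logs 1f2d3c4b5a6e
crictl logs 1f2d3c4b5a6e
```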
### Perform Changes
{{< table caption="mapping from docker cli to crictl - perform changes" >}}
docker cli | crictl | Description | Unsupported Features
-- | -- | -- | --
`create` | `create` | Create a new container |
`kill` | `stop` (timeout = 0) | Kill one or more running containers | `--signal`
`pull` | `pull` | Pull an image or a repository from a registry | `--all-tags`, `--disable-content-trust`
`rm` | `rm` | Remove one or more containers |
`rmi` | `rmi` | Remove one or more images |
`run` | `run` | Run a command in a new container |
`start` | `start` | Start one or more stopped containers | `--detach-keys`
`stop` | `stop` | Stop one or more running containers |
`update` | `update` | Update configuration of one or more containers | `--restart`, `--blkio-weight`, and other resource limits not supported by CRI
{{< /table >}}
### Supported only in crictl
{{< table caption="mapping from docker cli to crictl - supported only in crictl" >}}
crictl | Description
-- | --
`imagefsinfo` | Return image filesystem info
`inspectp` | Display the status of one or more pods
`port-forward` | Forward local port to a pod
`pods` | List pods
`runp` | Run a new pod
`rmp` | Remove one or more pods
`stopp` | Stop one or more running pods
{{< /table >}}
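These pod-level commands have no direct `docker` CLI equivalent. A hedged usage sketch, where the config file and IDs are placeholders (again, you may need to run `crictl` as root):

```shell
crictl runp pod-config.json    # run a new pod sandbox from a config file
crictl pods                    # list pods
crictl inspectp <pod-id>       # display the status of a pod
crictl stopp <pod-id>          # stop a running pod
crictl rmp <pod-id>            # remove a pod
```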

View File

@ -73,21 +73,23 @@ knows how to convert between them in both directions. Additionally, any new
field added in v2 must be able to round-trip to v1 and back, which means v1
might have to add an equivalent field or represent it as an annotation.
**Rule #3: An API version in a given track may not be deprecated until a new **Rule #3: An API version in a given track may not be deprecated in favor of a less stable API version.**
API version at least as stable is released.**
GA API versions can replace GA API versions as well as beta and alpha API * GA API versions can replace beta and alpha API versions.
versions. Beta API versions *may not* replace GA API versions. * Beta API versions can replace earlier beta and alpha API versions, but *may not* replace GA API versions.
* Alpha API versions can replace earlier alpha API versions, but *may not* replace GA or beta API versions.
**Rule #4a: Other than the most recent API versions in each track, older API **Rule #4a: minimum API lifetime is determined by the API stability level**
versions must be supported after their announced deprecation for a duration of
no less than:**
* **GA: 12 months or 3 releases (whichever is longer)** * **GA API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes**
* **Beta: 9 months or 3 releases (whichever is longer)** * **Beta API versions must be supported for 9 months or 3 releases (whichever is longer) after deprecation**
* **Alpha: 0 releases** * **Alpha API versions may be removed in any release without prior deprecation notice**
This covers the [maximum supported version skew of 2 releases](/docs/setup/release/version-skew-policy/). This ensures beta API support covers the [maximum supported version skew of 2 releases](/docs/setup/release/version-skew-policy/).
{{< note >}}
There are no current plans for a major version revision of Kubernetes that removes GA APIs.
{{< /note >}}
{{< note >}}
Until [#52185](https://github.com/kubernetes/kubernetes/issues/52185) is
@ -237,7 +239,7 @@ API versions are supported in a series of subsequent releases.
<td>
<ul>
<li>v2beta2 is deprecated, "action required" relnote</li>
<li>v1 is deprecated, "action required" relnote</li> <li>v1 is deprecated in favor of v2, but will not be removed</li>
</ul>
</td>
</tr>
@ -267,22 +269,6 @@ API versions are supported in a series of subsequent releases.
</ul>
</td>
</tr>
<tr>
<td>X+16</td>
<td>v2, v1 (deprecated)</td>
<td>v2</td>
<td></td>
</tr>
<tr>
<td>X+17</td>
<td>v2</td>
<td>v2</td>
<td>
<ul>
<li>v1 is removed, "action required" relnote</li>
</ul>
</td>
</tr>
</tbody>
</table>

View File

@ -83,13 +83,17 @@ If systemd doesn't use cgroup v2 by default, you can configure the system to use
`systemd.unified_cgroup_hierarchy=1` to the kernel command line. `systemd.unified_cgroup_hierarchy=1` to the kernel command line.
```shell ```shell
# dnf install -y grubby && \ # This example is for a Linux OS that uses the DNF package manager
# Your system might use a different method for setting the command line
# that the Linux kernel uses.
sudo dnf install -y grubby && \
sudo grubby \ sudo grubby \
--update-kernel=ALL \ --update-kernel=ALL \
--args="systemd.unified_cgroup_hierarchy=1" --args="systemd.unified_cgroup_hierarchy=1"
``` ```
To apply the configuration, it is necessary to reboot the node. If you change the command line for the kernel, you must reboot the node before your
change takes effect.
There should not be any noticeable difference in the user experience when switching to cgroup v2, unless
users are accessing the cgroup file system directly, either on the node or from within the containers.
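If you are unsure which cgroup version a node is running, one common check (shown here as a sketch for a Linux node) is to inspect the filesystem type mounted at `/sys/fs/cgroup`:

```shell
stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates cgroup v1
```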
@ -168,7 +172,7 @@ installing the `containerd.io` package can be found at
{{% /tab %}} {{% /tab %}}
{{% tab name="Windows (PowerShell)" %}} {{% tab name="Windows (PowerShell)" %}}
Start a Powershell session, set `$Version` to the desired version (ex: `$Version=1.4.3`), Start a Powershell session, set `$Version` to the desired version (ex: `$Version="1.4.3"`),
and then run the following commands: and then run the following commands:
1. Download containerd: 1. Download containerd:

View File

@ -210,7 +210,8 @@ export KUBECONFIG=/etc/kubernetes/admin.conf
Kubeadm signs the certificate in the `admin.conf` to have `Subject: O = system:masters, CN = kubernetes-admin`.
`system:masters` is a break-glass, super user group that bypasses the authorization layer (e.g. RBAC).
Do not share the `admin.conf` file with anyone and instead grant users custom permissions by generating
them a kubeconfig file using the `kubeadm kubeconfig user` command. them a kubeconfig file using the `kubeadm kubeconfig user` command. For more details see
[Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#kubeconfig-additional-users).
{{< /warning >}}
Make a record of the `kubeadm join` command that `kubeadm init` outputs. You

View File

@ -1,7 +1,7 @@
--- ---
reviewers: reviewers:
- sig-cluster-lifecycle - sig-cluster-lifecycle
title: Options for Highly Available topology title: Options for Highly Available Topology
content_type: concept content_type: concept
weight: 50 weight: 50
--- ---

View File

@ -1,7 +1,7 @@
--- ---
reviewers: reviewers:
- sig-cluster-lifecycle - sig-cluster-lifecycle
title: Creating Highly Available clusters with kubeadm title: Creating Highly Available Clusters with kubeadm
content_type: task content_type: task
weight: 60 weight: 60
--- ---
@ -12,17 +12,17 @@ This page explains two different approaches to setting up a highly available Kub
cluster using kubeadm: cluster using kubeadm:
- With stacked control plane nodes. This approach requires less infrastructure. The etcd members - With stacked control plane nodes. This approach requires less infrastructure. The etcd members
and control plane nodes are co-located. and control plane nodes are co-located.
- With an external etcd cluster. This approach requires more infrastructure. The - With an external etcd cluster. This approach requires more infrastructure. The
control plane nodes and etcd members are separated. control plane nodes and etcd members are separated.
Before proceeding, you should carefully consider which approach best meets the needs of your applications Before proceeding, you should carefully consider which approach best meets the needs of your applications
and environment. [This comparison topic](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each. and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each.
If you encounter issues with setting up the HA cluster, please provide us with feedback If you encounter issues with setting up the HA cluster, please report these
in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new). in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/). See also the [upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/).
{{< caution >}} {{< caution >}}
This page does not address running your cluster on a cloud provider. In a cloud This page does not address running your cluster on a cloud provider. In a cloud
@ -32,22 +32,80 @@ LoadBalancer, or with dynamic PersistentVolumes.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
The prerequisites depend on which topology you have selected for your cluster's
control plane:
For both methods you need this infrastructure: {{< tabs name="prerequisite_tabs" >}}
{{% tab name="Stacked etcd" %}}
<!--
note to reviewers: these prerequisites should match the start of the
external etc tab
-->
- Three machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for You need:
the control-plane nodes
- Three machines that meet [kubeadm's minimum - Three or more machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
the control-plane nodes. Having an odd number of control plane nodes can help
with leader selection in the case of machine or zone failure.
- including a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, already set up and working
- Three or more machines that meet [kubeadm's minimum
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
- including a container runtime, already set up and working
- Full network connectivity between all machines in the cluster (public or - Full network connectivity between all machines in the cluster (public or
private network) private network)
- sudo privileges on all machines - Superuser privileges on all machines using `sudo`
- You can use a different tool; this guide uses `sudo` in the examples.
- SSH access from one device to all nodes in the system - SSH access from one device to all nodes in the system
- `kubeadm` and `kubelet` installed on all machines. `kubectl` is optional. - `kubeadm` and `kubelet` already installed on all machines.
For the external etcd cluster only, you also need: _See [Stacked etcd topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology) for context._
- Three additional machines for etcd members {{% /tab %}}
{{% tab name="External etcd" %}}
<!--
note to reviewers: these prerequisites should match the start of the
stacked etc tab
-->
You need:
- Three or more machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
the control-plane nodes. Having an odd number of control plane nodes can help
with leader selection in the case of machine or zone failure.
- including a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, already set up and working
- Three or more machines that meet [kubeadm's minimum
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
- including a container runtime, already set up and working
- Full network connectivity between all machines in the cluster (public or
private network)
- Superuser privileges on all machines using `sudo`
- You can use a different tool; this guide uses `sudo` in the examples.
- SSH access from one device to all nodes in the system
- `kubeadm` and `kubelet` already installed on all machines.
<!-- end of shared prerequisites -->
And you also need:
- Three or more additional machines that will become etcd cluster members.
Having an odd number of members in the etcd cluster is a requirement for achieving
optimal voting quorum.
- These machines again need to have `kubeadm` and `kubelet` installed.
- These machines also require a container runtime that is already set up and working.
_See [External etcd topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/#external-etcd-topology) for context._
{{% /tab %}}
{{< /tabs >}}
### Container images
Each host should have read access to fetch images from the Kubernetes container image registry, `k8s.gcr.io`.
If you want to deploy a highly available cluster where the hosts do not have access to pull images, that is possible; you must then ensure by some other means that the correct container images are already available on the relevant hosts.
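If your hosts can reach the registry at provisioning time, one way to stage the images (a sketch; adapt it to however you mirror or pre-load images) is to pre-pull them with kubeadm; otherwise you must import them by some other mechanism:

```shell
# Show the images that this kubeadm version expects
kubeadm config images list

# Pull them ahead of time
sudo kubeadm config images pull
```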
### Command line interface {#kubectl}
To manage Kubernetes once your cluster is set up, you should
[install kubectl](/docs/tasks/tools/#kubectl) on your PC. It is also useful
to install the `kubectl` tool on each control plane node, as this can be
helpful for troubleshooting.
<!-- steps --> <!-- steps -->
@ -60,147 +118,146 @@ There are many configurations for load balancers. The following example is only
option. Your cluster requirements may need a different configuration. option. Your cluster requirements may need a different configuration.
{{< /note >}} {{< /note >}}
1. Create a kube-apiserver load balancer with a name that resolves to DNS. 1. Create a kube-apiserver load balancer with a name that resolves to DNS.
- In a cloud environment you should place your control plane nodes behind a TCP - In a cloud environment you should place your control plane nodes behind a TCP
forwarding load balancer. This load balancer distributes traffic to all forwarding load balancer. This load balancer distributes traffic to all
healthy control plane nodes in its target list. The health check for healthy control plane nodes in its target list. The health check for
an apiserver is a TCP check on the port the kube-apiserver listens on an apiserver is a TCP check on the port the kube-apiserver listens on
(default value `:6443`). (default value `:6443`).
- It is not recommended to use an IP address directly in a cloud environment. - It is not recommended to use an IP address directly in a cloud environment.
- The load balancer must be able to communicate with all control plane nodes - The load balancer must be able to communicate with all control plane nodes
on the apiserver port. It must also allow incoming traffic on its on the apiserver port. It must also allow incoming traffic on its
listening port. listening port.
- Make sure the address of the load balancer always matches - Make sure the address of the load balancer always matches
the address of kubeadm's `ControlPlaneEndpoint`. the address of kubeadm's `ControlPlaneEndpoint`.
- Read the [Options for Software Load Balancing](https://git.k8s.io/kubeadm/docs/ha-considerations.md#options-for-software-load-balancing) - Read the [Options for Software Load Balancing](https://git.k8s.io/kubeadm/docs/ha-considerations.md#options-for-software-load-balancing)
guide for more details. guide for more details.
1. Add the first control plane nodes to the load balancer and test the 1. Add the first control plane node to the load balancer, and test the
connection: connection:
```sh ```shell
nc -v LOAD_BALANCER_IP PORT nc -v <LOAD_BALANCER_IP> <PORT>
``` ```
- A connection refused error is expected because the apiserver is not yet A connection refused error is expected because the API server is not yet
running. A timeout, however, means the load balancer cannot communicate running. A timeout, however, means the load balancer cannot communicate
with the control plane node. If a timeout occurs, reconfigure the load with the control plane node. If a timeout occurs, reconfigure the load
balancer to communicate with the control plane node. balancer to communicate with the control plane node.
1. Add the remaining control plane nodes to the load balancer target group. 1. Add the remaining control plane nodes to the load balancer target group.
## Stacked control plane and etcd nodes ## Stacked control plane and etcd nodes
### Steps for the first control plane node ### Steps for the first control plane node
1. Initialize the control plane: 1. Initialize the control plane:
```sh ```sh
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
``` ```
- You can use the `--kubernetes-version` flag to set the Kubernetes version to use. - You can use the `--kubernetes-version` flag to set the Kubernetes version to use.
It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match. It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.
- The `--control-plane-endpoint` flag should be set to the address or DNS and port of the load balancer. - The `--control-plane-endpoint` flag should be set to the address or DNS and port of the load balancer.
- The `--upload-certs` flag is used to upload the certificates that should be shared - The `--upload-certs` flag is used to upload the certificates that should be shared
across all the control-plane instances to the cluster. If instead, you prefer to copy certs across across all the control-plane instances to the cluster. If instead, you prefer to copy certs across
control-plane nodes manually or using automation tools, please remove this flag and refer to [Manual control-plane nodes manually or using automation tools, please remove this flag and refer to [Manual
certificate distribution](#manual-certs) section below. certificate distribution](#manual-certs) section below.
{{< note >}} {{< note >}}
The `kubeadm init` flags `--config` and `--certificate-key` cannot be mixed, therefore if you want The `kubeadm init` flags `--config` and `--certificate-key` cannot be mixed, therefore if you want
to use the [kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta3/) to use the [kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta3/)
you must add the `certificateKey` field in the appropriate config locations you must add the `certificateKey` field in the appropriate config locations
(under `InitConfiguration` and `JoinConfiguration: controlPlane`). (under `InitConfiguration` and `JoinConfiguration: controlPlane`).
{{< /note >}} {{< /note >}}
{{< note >}} {{< note >}}
Some CNI network plugins require additional configuration, for example specifying the pod IP CIDR, while others do not. Some CNI network plugins require additional configuration, for example specifying the pod IP CIDR, while others do not.
See the [CNI network documentation](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network). See the [CNI network documentation](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network).
To add a pod CIDR pass the flag `--pod-network-cidr`, or if you are using a kubeadm configuration file To add a pod CIDR pass the flag `--pod-network-cidr`, or if you are using a kubeadm configuration file
set the `podSubnet` field under the `networking` object of `ClusterConfiguration`. set the `podSubnet` field under the `networking` object of `ClusterConfiguration`.
{{< /note >}} {{< /note >}}
- The output looks similar to: The output looks similar to:
```sh ```sh
... ...
You can now join any number of control-plane node by running the following command on each as a root: You can now join any number of control-plane node by running the following command on each as a root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
Please note that the certificate-key gives access to cluster sensitive data, keep it secret! Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward. As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root: Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
``` ```
- Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster. - Copy this output to a text file. You will need it later to join control plane and worker nodes to
- When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane the cluster.
are encrypted and uploaded in the `kubeadm-certs` Secret. - When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane
- To re-upload the certificates and generate a new decryption key, use the following command on a control plane are encrypted and uploaded in the `kubeadm-certs` Secret.
node that is already joined to the cluster: - To re-upload the certificates and generate a new decryption key, use the following command on a
control plane
node that is already joined to the cluster:
```sh ```sh
sudo kubeadm init phase upload-certs --upload-certs sudo kubeadm init phase upload-certs --upload-certs
``` ```
- You can also specify a custom `--certificate-key` during `init` that can later be used by `join`. - You can also specify a custom `--certificate-key` during `init` that can later be used by `join`.
To generate such a key you can use the following command: To generate such a key you can use the following command:
```sh ```sh
kubeadm certs certificate-key kubeadm certs certificate-key
``` ```
{{< note >}} {{< note >}}
The `kubeadm-certs` Secret and decryption key expire after two hours. The `kubeadm-certs` Secret and decryption key expire after two hours.
{{< /note >}} {{< /note >}}
{{< caution >}} {{< caution >}}
As stated in the command output, the certificate key gives access to cluster sensitive data, keep it secret! As stated in the command output, the certificate key gives access to cluster sensitive data, keep it secret!
{{< /caution >}} {{< /caution >}}
1. Apply the CNI plugin of your choice: 1. Apply the CNI plugin of your choice:
[Follow these instructions](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network) [Follow these instructions](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)
to install the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm configuration file if applicable. to install the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the
kubeadm configuration file (if applicable).
{{< note >}} {{< note >}}
You must pick a network plugin that suits your use case and deploy it before you move on to next step. You must pick a network plugin that suits your use case and deploy it before you move on to next step.
If you don't do this, you will not be able to launch your cluster properly. If you don't do this, you will not be able to launch your cluster properly.
{{< /note >}} {{< /note >}}
1. Type the following and watch the pods of the control plane components get started: 1. Type the following and watch the pods of the control plane components get started:
```sh ```sh
kubectl get pod -n kube-system -w kubectl get pod -n kube-system -w
``` ```
### Steps for the rest of the control plane nodes ### Steps for the rest of the control plane nodes
{{< note >}}
Since kubeadm version 1.15 you can join multiple control-plane nodes in parallel.
Prior to this version, you must join new control plane nodes sequentially, only after
the first node has finished initializing.
{{< /note >}}
For each additional control plane node you should: For each additional control plane node you should:
1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node. 1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node.
It should look something like this: It should look something like this:
```sh ```sh
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
``` ```
- The `--control-plane` flag tells `kubeadm join` to create a new control plane. - The `--control-plane` flag tells `kubeadm join` to create a new control plane.
- The `--certificate-key ...` will cause the control plane certificates to be downloaded - The `--certificate-key ...` will cause the control plane certificates to be downloaded
from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key. from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key.
You can join multiple control-plane nodes in parallel.
## External etcd nodes ## External etcd nodes
@ -210,64 +267,69 @@ in the kubeadm config file.
### Set up the etcd cluster ### Set up the etcd cluster
1. Follow [these instructions](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster. 1. Follow these [instructions](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster.
1. Set up SSH as described [here](#manual-certs).
1. Copy the following files from any etcd node in the cluster to the first control plane node: 1. Copy the following files from any etcd node in the cluster to the first control plane node:
```sh ```sh
export CONTROL_PLANE="ubuntu@10.0.0.7" export CONTROL_PLANE="ubuntu@10.0.0.7"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}": scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}": scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}": scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
``` ```
- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane node. - Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane node.
### Set up the first control plane node ### Set up the first control plane node
1. Create a file called `kubeadm-config.yaml` with the following contents: 1. Create a file called `kubeadm-config.yaml` with the following contents:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
external:
endpoints:
- https://ETCD_0_IP:2379
- https://ETCD_1_IP:2379
- https://ETCD_2_IP:2379
caFile: /etc/kubernetes/pki/etcd/ca.crt
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
{{< note >}} ```yaml
The difference between stacked etcd and external etcd here is that the external etcd setup requires ---
a configuration file with the etcd endpoints under the `external` object for `etcd`. apiVersion: kubeadm.k8s.io/v1beta3
In the case of the stacked etcd topology this is managed automatically. kind: ClusterConfiguration
{{< /note >}} kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" # change this (see below)
etcd:
external:
endpoints:
- https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately
- https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately
- https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately
caFile: /etc/kubernetes/pki/etcd/ca.crt
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```
- Replace the following variables in the config template with the appropriate values for your cluster: {{< note >}}
The difference between stacked etcd and external etcd here is that the external etcd setup requires
a configuration file with the etcd endpoints under the `external` object for `etcd`.
In the case of the stacked etcd topology, this is managed automatically.
{{< /note >}}
- `LOAD_BALANCER_DNS` - Replace the following variables in the config template with the appropriate values for your cluster:
- `LOAD_BALANCER_PORT`
- `ETCD_0_IP` - `LOAD_BALANCER_DNS`
- `ETCD_1_IP` - `LOAD_BALANCER_PORT`
- `ETCD_2_IP` - `ETCD_0_IP`
- `ETCD_1_IP`
- `ETCD_2_IP`
The following steps are similar to the stacked etcd setup: The following steps are similar to the stacked etcd setup:
1. Run `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` on this node. 1. Run `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` on this node.
1. Write the output join commands that are returned to a text file for later use. 1. Write the output join commands that are returned to a text file for later use.
1. Apply the CNI plugin of your choice. The given example is for Weave Net: 1. Apply the CNI plugin of your choice.
```sh {{< note >}}
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" You must pick a network plugin that suits your use case and deploy it before you move on to next step.
``` If you don't do this, you will not be able to launch your cluster properly.
{{< /note >}}
### Steps for the rest of the control plane nodes ### Steps for the rest of the control plane nodes
@ -275,7 +337,7 @@ The steps are the same as for the stacked etcd setup:
- Make sure the first control plane node is fully initialized. - Make sure the first control plane node is fully initialized.
- Join each control plane node with the join command you saved to a text file. It's recommended - Join each control plane node with the join command you saved to a text file. It's recommended
to join the control plane nodes one at a time. to join the control plane nodes one at a time.
- Don't forget that the decryption key from `--certificate-key` expires after two hours, by default. - Don't forget that the decryption key from `--certificate-key` expires after two hours, by default.
## Common tasks after bootstrapping control plane ## Common tasks after bootstrapping control plane
@ -295,79 +357,81 @@ If you choose to not use `kubeadm init` with the `--upload-certs` flag this mean
you are going to have to manually copy the certificates from the primary control plane node to the you are going to have to manually copy the certificates from the primary control plane node to the
joining control plane nodes. joining control plane nodes.
There are many ways to do this. In the following example we are using `ssh` and `scp`: There are many ways to do this. The following example uses `ssh` and `scp`:
SSH is required if you want to control all nodes from a single machine. SSH is required if you want to control all nodes from a single machine.
1. Enable ssh-agent on your main device that has access to all other nodes in 1. Enable ssh-agent on your main device that has access to all other nodes in
the system: the system:
``` ```
eval $(ssh-agent) eval $(ssh-agent)
```
1. Add your SSH identity to the session:
```
ssh-add ~/.ssh/path_to_private_key
```
1. SSH between nodes to check that the connection is working correctly.
- When you SSH to any node, add the `-A` flag. This flag allows the node that you
have logged into via SSH to access the SSH agent on your PC. Consider alternative
methods if you do not fully trust the security of your user session on the node.
```
ssh -A 10.0.0.7
```
- When using sudo on any node, make sure to preserve the environment so SSH
forwarding works:
```
sudo -E -s
```
1. After configuring SSH on all the nodes you should run the following script on the first
control plane node after running `kubeadm init`. This script will copy the certificates from
the first control plane node to the other control plane nodes:
In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
other control plane nodes.
```sh
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
# Skip the next line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
``` ```
1. Add your SSH identity to the session: {{< caution >}}
Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates
``` with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake,
ssh-add ~/.ssh/path_to_private_key the creation of additional nodes could fail due to a lack of required SANs.
``` {{< /caution >}}
1. SSH between nodes to check that the connection is working correctly.
- When you SSH to any node, make sure to add the `-A` flag:
```
ssh -A 10.0.0.7
```
- When using sudo on any node, make sure to preserve the environment so SSH
forwarding works:
```
sudo -E -s
```
1. After configuring SSH on all the nodes you should run the following script on the first control plane node after
running `kubeadm init`. This script will copy the certificates from the first control plane node to the other
control plane nodes:
In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
other control plane nodes.
```sh
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
# Quote this line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
```
{{< caution >}}
Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates
with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake,
the creation of additional nodes could fail due to a lack of required SANs.
{{< /caution >}}
1. Then on each joining control plane node you have to run the following script before running `kubeadm join`. 1. Then on each joining control plane node you have to run the following script before running `kubeadm join`.
This script will move the previously copied certificates from the home directory to `/etc/kubernetes/pki`: This script will move the previously copied certificates from the home directory to `/etc/kubernetes/pki`:
```sh ```sh
USER=ubuntu # customizable USER=ubuntu # customizable
mkdir -p /etc/kubernetes/pki/etcd mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/ mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/ mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/ mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/ mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/ mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/ mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd # Skip the next line if you are using external etcd
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
``` ```

View File

@ -1,7 +1,7 @@
--- ---
reviewers: reviewers:
- sig-cluster-lifecycle - sig-cluster-lifecycle
title: Set up a High Availability etcd cluster with kubeadm title: Set up a High Availability etcd Cluster with kubeadm
content_type: task content_type: task
weight: 70 weight: 70
--- ---
@ -19,7 +19,8 @@ aspects.
By default, kubeadm runs a local etcd instance on each control plane node. By default, kubeadm runs a local etcd instance on each control plane node.
It is also possible to treat the etcd cluster as external and provision It is also possible to treat the etcd cluster as external and provision
etcd instances on separate hosts. The differences between the two approaches are covered in the etcd instances on separate hosts. The differences between the two approaches are covered in the
[Options for Highly Available topology][/docs/setup/production-environment/tools/kubeadm/ha-topology] page. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology) page.
This task walks through the process of creating a high availability external This task walks through the process of creating a high availability external
etcd cluster of three members that can be used by kubeadm during cluster creation. etcd cluster of three members that can be used by kubeadm during cluster creation.

View File

@ -27,6 +27,11 @@ The `kube-apiserver` process accepts an argument `--encryption-provider-config`
that controls how API data is encrypted in etcd. An example configuration that controls how API data is encrypted in etcd. An example configuration
is provided below. is provided below.
{{< caution >}}
In a cluster with multiple control plane nodes, the encryption configuration file must be the same on every control plane node!
Otherwise, the `kube-apiserver` cannot decrypt data stored in etcd.
{{< /caution >}}
## Understanding the encryption at rest configuration.
```yaml

View File

@ -10,7 +10,9 @@ weight: 10
{{< feature-state for_k8s_version="v1.15" state="stable" >}} {{< feature-state for_k8s_version="v1.15" state="stable" >}}
Client certificates generated by [kubeadm](/docs/reference/setup-tools/kubeadm/) expire after 1 year. This page explains how to manage certificate renewals with kubeadm. Client certificates generated by [kubeadm](/docs/reference/setup-tools/kubeadm/) expire after 1 year.
This page explains how to manage certificate renewals with kubeadm. It also covers other tasks related
to kubeadm certificate management.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
@ -289,3 +291,52 @@ Such a controller is not a secure mechanism unless it not only verifies the Comm
in the CSR but also verifies the requested IPs and domain names. This would prevent in the CSR but also verifies the requested IPs and domain names. This would prevent
a malicious actor that has access to a kubelet client certificate to create a malicious actor that has access to a kubelet client certificate to create
CSRs requesting serving certificates for any IP or domain name. CSRs requesting serving certificates for any IP or domain name.
## Generating kubeconfig files for additional users {#kubeconfig-additional-users}
During cluster creation, kubeadm signs the certificate in the `admin.conf` to have
`Subject: O = system:masters, CN = kubernetes-admin`.
[`system:masters`](/docs/reference/access-authn-authz/rbac/#user-facing-roles)
is a break-glass, super user group that bypasses the authorization layer (e.g. RBAC).
Sharing the `admin.conf` with additional users is **not recommended**!
Instead, you can use the [`kubeadm kubeconfig user`](/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig)
command to generate kubeconfig files for additional users.
The command accepts a mixture of command line flags and
[kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta3/) options.
The generated kubeconfig will be written to stdout and can be piped to a file
using `kubeadm kubeconfig user ... > somefile.conf`.
Example configuration file that can be used with `--config`:
```yaml
# example.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Will be used as the target "cluster" in the kubeconfig
clusterName: "kubernetes"
# Will be used as the "server" (IP or DNS name) of this cluster in the kubeconfig
controlPlaneEndpoint: "some-dns-address:6443"
# The cluster CA key and certificate will be loaded from this local directory
certificatesDir: "/etc/kubernetes/pki"
```
Make sure that these settings match the desired target cluster settings.
To see the settings of an existing cluster use:
```shell
kubectl get cm kubeadm-config -n kube-system -o=jsonpath="{.data.ClusterConfiguration}"
```
The following example will generate a kubeconfig file with credentials valid for 24 hours
for a new user `johndoe` that is part of the `appdevs` group:
```shell
kubeadm kubeconfig user --config example.yaml --org appdevs --client-name johndoe --validity-period 24h
```
The following example will generate a kubeconfig file with administrator credentials valid for 1 week:
```shell
kubeadm kubeconfig user --config example.yaml --client-name admin --validity-period 168h
```

View File

@ -2,14 +2,18 @@
title: Configure Minimum and Maximum CPU Constraints for a Namespace title: Configure Minimum and Maximum CPU Constraints for a Namespace
content_type: task content_type: task
weight: 40 weight: 40
description: >-
Define a range of valid CPU resource limits for a namespace, so that every new Pod
in that namespace falls within the range you configure.
--- ---
<!-- overview --> <!-- overview -->
This page shows how to set minimum and maximum values for the CPU resources used by Containers This page shows how to set minimum and maximum values for the CPU resources used by containers
and Pods in a namespace. You specify minimum and maximum CPU values in a and Pods in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. You specify minimum
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core) and maximum CPU values in a
[LimitRange](/docs/reference/kubernetes-api/policy-resources/limit-range-v1/)
object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created object. If a Pod does not meet the constraints imposed by the LimitRange, it cannot be created
in the namespace. in the namespace.
@ -19,11 +23,13 @@ in the namespace.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} {{< include "task-tutorial-prereqs.md" >}}
Your cluster must have at least 1 CPU available for use to run the task examples.
You must have access to create namespaces in your cluster.
Your cluster must have at least 1.0 CPU available for use to run the task examples.
See [meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu)
to learn what Kubernetes means by “1 CPU”.
<!-- steps --> <!-- steps -->
@ -39,7 +45,7 @@ kubectl create namespace constraints-cpu-example
## Create a LimitRange and a Pod ## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange: Here's an example manifest for a LimitRange:
{{< codenew file="admin/resource/cpu-constraints.yaml" >}} {{< codenew file="admin/resource/cpu-constraints.yaml" >}}
@ -72,15 +78,15 @@ limits:
type: Container type: Container
``` ```
Now whenever you create a Pod in the constraints-cpu-example namespace (or some other client
of the Kubernetes API creates an equivalent Pod), Kubernetes performs these steps:
* If any container in that Pod does not specify its own CPU request and limit, the control plane
  assigns the default CPU request and limit to that container.
* Verify that every container in that Pod specifies a CPU request that is greater than or equal to 200 millicpu.
* Verify that every container in that Pod specifies a CPU limit that is less than or equal to 800 millicpu.
{{< note >}} {{< note >}}
When creating a `LimitRange` object, you can specify limits on huge-pages When creating a `LimitRange` object, you can specify limits on huge-pages
@ -88,7 +94,7 @@ or GPUs as well. However, when both `default` and `defaultRequest` are specified
on these resources, the two values must be the same. on these resources, the two values must be the same.
{{< /note >}} {{< /note >}}
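If you want to see the shape of such a policy inline, here is a minimal sketch of a LimitRange enforcing the 200 millicpu to 800 millicpu range described above (an illustrative equivalent; the `cpu-constraints.yaml` file referenced on this page may differ in details such as the object name):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr   # example name
spec:
  limits:
  - min:
      cpu: 200m
    max:
      cpu: 800m
    type: Container
```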
Here's the configuration file for a Pod that has one Container. The Container manifest Here's a manifest for a Pod that has one container. The container manifest
specifies a CPU request of 500 millicpu and a CPU limit of 800 millicpu. These satisfy the specifies a CPU request of 500 millicpu and a CPU limit of 800 millicpu. These satisfy the
minimum and maximum CPU constraints imposed by the LimitRange. minimum and maximum CPU constraints imposed by the LimitRange.
@ -100,7 +106,7 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace=constraints-cpu-example
``` ```
Verify that the Pod's Container is running: Verify that the Pod is running and that its container is healthy:
```shell ```shell
kubectl get pod constraints-cpu-demo --namespace=constraints-cpu-example kubectl get pod constraints-cpu-demo --namespace=constraints-cpu-example
@ -112,7 +118,7 @@ View detailed information about the Pod:
kubectl get pod constraints-cpu-demo --output=yaml --namespace=constraints-cpu-example kubectl get pod constraints-cpu-demo --output=yaml --namespace=constraints-cpu-example
``` ```
The output shows that the Container has a CPU request of 500 millicpu and CPU limit The output shows that the Pod's only container has a CPU request of 500 millicpu and CPU limit
of 800 millicpu. These satisfy the constraints imposed by the LimitRange. of 800 millicpu. These satisfy the constraints imposed by the LimitRange.
```yaml ```yaml
@ -131,7 +137,7 @@ kubectl delete pod constraints-cpu-demo --namespace=constraints-cpu-example
## Attempt to create a Pod that exceeds the maximum CPU constraint ## Attempt to create a Pod that exceeds the maximum CPU constraint
Here's the configuration file for a Pod that has one Container. The Container specifies a Here's a manifest for a Pod that has one container. The container specifies a
CPU request of 500 millicpu and a cpu limit of 1.5 cpu. CPU request of 500 millicpu and a cpu limit of 1.5 cpu.
{{< codenew file="admin/resource/cpu-constraints-pod-2.yaml" >}} {{< codenew file="admin/resource/cpu-constraints-pod-2.yaml" >}}
@ -142,8 +148,8 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace=constraints-cpu-example
``` ```
The output shows that the Pod does not get created, because the Container specifies a CPU limit that is The output shows that the Pod does not get created, because it defines an unacceptable container.
too large: That container is not acceptable because it specifies a CPU limit that is too large:
``` ```
Error from server (Forbidden): error when creating "examples/admin/resource/cpu-constraints-pod-2.yaml": Error from server (Forbidden): error when creating "examples/admin/resource/cpu-constraints-pod-2.yaml":
@ -152,7 +158,7 @@ pods "constraints-cpu-demo-2" is forbidden: maximum cpu usage per Container is 8
## Attempt to create a Pod that does not meet the minimum CPU request ## Attempt to create a Pod that does not meet the minimum CPU request
Here's the configuration file for a Pod that has one Container. The Container specifies a Here's a manifest for a Pod that has one container. The container specifies a
CPU request of 100 millicpu and a CPU limit of 800 millicpu. CPU request of 100 millicpu and a CPU limit of 800 millicpu.
{{< codenew file="admin/resource/cpu-constraints-pod-3.yaml" >}} {{< codenew file="admin/resource/cpu-constraints-pod-3.yaml" >}}
@ -163,8 +169,9 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace=constraints-cpu-example
``` ```
The output shows that the Pod does not get created, because the Container specifies a CPU The output shows that the Pod does not get created, because it defines an unacceptable container.
request that is too small: That container is not acceptable because it specifies a CPU limit that is lower than the
enforced minimum:
``` ```
Error from server (Forbidden): error when creating "examples/admin/resource/cpu-constraints-pod-3.yaml": Error from server (Forbidden): error when creating "examples/admin/resource/cpu-constraints-pod-3.yaml":
@ -173,8 +180,8 @@ pods "constraints-cpu-demo-3" is forbidden: minimum cpu usage per Container is 2
## Create a Pod that does not specify any CPU request or limit ## Create a Pod that does not specify any CPU request or limit
Here's the configuration file for a Pod that has one Container. The Container does not Here's a manifest for a Pod that has one container. The container does not
specify a CPU request, and it does not specify a CPU limit. specify a CPU request, nor does it specify a CPU limit.
{{< codenew file="admin/resource/cpu-constraints-pod-4.yaml" >}} {{< codenew file="admin/resource/cpu-constraints-pod-4.yaml" >}}
@ -190,8 +197,9 @@ View detailed information about the Pod:
kubectl get pod constraints-cpu-demo-4 --namespace=constraints-cpu-example --output=yaml kubectl get pod constraints-cpu-demo-4 --namespace=constraints-cpu-example --output=yaml
``` ```
The output shows that the Pod's Container has a CPU request of 800 millicpu and a CPU limit of 800 millicpu. The output shows that the Pod's single container has a CPU request of 800 millicpu and a
How did the Container get those values? CPU limit of 800 millicpu.
How did that container get those values?
```yaml ```yaml
resources: resources:
@ -201,11 +209,12 @@ resources:
cpu: 800m cpu: 800m
``` ```
Because your Container did not specify its own CPU request and limit, it was given the Because that container did not specify its own CPU request and limit, the control plane
applied the
[default CPU request and limit](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/) [default CPU request and limit](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
from the LimitRange. from the LimitRange for this namespace.
At this point, your Container might be running or it might not be running. Recall that a prerequisite for this task is that your cluster must have at least 1 CPU available for use. If each of your Nodes has only 1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request of 800 millicpu. If you happen to be using Nodes with 2 CPU, then you probably have enough CPU to accommodate the 800 millicpu request. At this point, your Pod might be running or it might not be running. Recall that a prerequisite for this task is that your cluster must have at least 1 CPU available for use. If each of your Nodes has only 1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request of 800 millicpu. If you happen to be using Nodes with 2 CPU, then you probably have enough CPU to accommodate the 800 millicpu request.
Delete your Pod: Delete your Pod:

View File

@ -2,23 +2,36 @@
title: Configure Default CPU Requests and Limits for a Namespace title: Configure Default CPU Requests and Limits for a Namespace
content_type: task content_type: task
weight: 20 weight: 20
description: >-
Define a default CPU resource limits for a namespace, so that every new Pod
in that namespace has a CPU resource limit configured.
--- ---
<!-- overview --> <!-- overview -->
This page shows how to configure default CPU requests and limits for a namespace. This page shows how to configure default CPU requests and limits for a
A Kubernetes cluster can be divided into namespaces. If a Container is created in a namespace {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
that has a default CPU limit, and the Container does not specify its own CPU limit, then
the Container is assigned the default CPU limit. Kubernetes assigns a default CPU request
under certain conditions that are explained later in this topic.
A Kubernetes cluster can be divided into namespaces. If you create a Pod within a
namespace that has a default CPU
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits), and any container in that Pod does not specify
its own CPU limit, then the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} assigns the default
CPU limit to that container.
Kubernetes assigns a default CPU
[request](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
but only under certain conditions that are explained later in this page.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} {{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
If you're not already familiar with what Kubernetes means by 1.0 CPU,
read [meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu).
<!-- steps --> <!-- steps -->
@ -33,8 +46,8 @@ kubectl create namespace default-cpu-example
## Create a LimitRange and a Pod ## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange object. The configuration specifies Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}.
a default CPU request and a default CPU limit. The manifest specifies a default CPU request and a default CPU limit.
{{< codenew file="admin/resource/cpu-defaults.yaml" >}} {{< codenew file="admin/resource/cpu-defaults.yaml" >}}
@ -44,12 +57,12 @@ Create the LimitRange in the default-cpu-example namespace:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults.yaml --namespace=default-cpu-example
``` ```
Now if a Container is created in the default-cpu-example namespace, and the Now if you create a Pod in the default-cpu-example namespace, and any container
Container does not specify its own values for CPU request and CPU limit, in that Pod does not specify its own values for CPU request and CPU limit,
the Container is given a default CPU request of 0.5 and a default then the control plane applies default values: a CPU request of 0.5 and a default
CPU limit of 1. CPU limit of 1.
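As an illustrative sketch (the real `cpu-defaults.yaml` example may differ in details such as the object name), a LimitRange that sets those defaults looks roughly like this:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range   # example name
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
```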
Here's the configuration file for a Pod that has one Container. The Container Here's a manifest for a Pod that has one container. The container
does not specify a CPU request and limit. does not specify a CPU request and limit.
{{< codenew file="admin/resource/cpu-defaults-pod.yaml" >}} {{< codenew file="admin/resource/cpu-defaults-pod.yaml" >}}
@ -66,8 +79,9 @@ View the Pod's specification:
kubectl get pod default-cpu-demo --output=yaml --namespace=default-cpu-example kubectl get pod default-cpu-demo --output=yaml --namespace=default-cpu-example
``` ```
The output shows that the Pod's Container has a CPU request of 500 millicpus and The output shows that the Pod's only container has a CPU request of 500m `cpu`
a CPU limit of 1 cpu. These are the default values specified by the LimitRange. (which you can read as “500 millicpu”), and a CPU limit of 1 `cpu`.
These are the default values specified by the LimitRange.
```shell ```shell
containers: containers:
@ -81,9 +95,9 @@ containers:
cpu: 500m cpu: 500m
``` ```
## What if you specify a Container's limit, but not its request? ## What if you specify a container's limit, but not its request?
Here's the configuration file for a Pod that has one Container. The Container Here's a manifest for a Pod that has one container. The container
specifies a CPU limit, but not a request: specifies a CPU limit, but not a request:
{{< codenew file="admin/resource/cpu-defaults-pod-2.yaml" >}} {{< codenew file="admin/resource/cpu-defaults-pod-2.yaml" >}}
@ -95,14 +109,15 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-2.yaml --namespace=default-cpu-example
``` ```
View the Pod specification: View the [specification](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status)
of the Pod that you created:
``` ```
kubectl get pod default-cpu-demo-2 --output=yaml --namespace=default-cpu-example kubectl get pod default-cpu-demo-2 --output=yaml --namespace=default-cpu-example
``` ```
The output shows that the Container's CPU request is set to match its CPU limit. The output shows that the container's CPU request is set to match its CPU limit.
Notice that the Container was not assigned the default CPU request value of 0.5 cpu. Notice that the container was not assigned the default CPU request value of 0.5 `cpu`:
``` ```
resources: resources:
@ -112,9 +127,9 @@ resources:
cpu: "1" cpu: "1"
``` ```
## What if you specify a Container's request, but not its limit? ## What if you specify a container's request, but not its limit?
Here's the configuration file for a Pod that has one Container. The Container Here's an example manifest for a Pod that has one container. The container
specifies a CPU request, but not a limit: specifies a CPU request, but not a limit:
{{< codenew file="admin/resource/cpu-defaults-pod-3.yaml" >}} {{< codenew file="admin/resource/cpu-defaults-pod-3.yaml" >}}
@ -125,15 +140,16 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example kubectl apply -f https://k8s.io/examples/admin/resource/cpu-defaults-pod-3.yaml --namespace=default-cpu-example
``` ```
View the Pod specification: View the specification of the Pod that you created:
``` ```
kubectl get pod default-cpu-demo-3 --output=yaml --namespace=default-cpu-example kubectl get pod default-cpu-demo-3 --output=yaml --namespace=default-cpu-example
``` ```
The output shows that the Container's CPU request is set to the value specified in the The output shows that the container's CPU request is set to the value you specified at
Container's configuration file. The Container's CPU limit is set to 1 cpu, which is the the time you created the Pod (in other words: it matches the manifest).
default CPU limit for the namespace. However, the same container's CPU limit is set to 1 `cpu`, which is the default CPU limit
for that namespace.
``` ```
resources: resources:
@ -145,16 +161,22 @@ resources:
## Motivation for default CPU limits and requests
If your namespace has a CPU {{< glossary_tooltip text="resource quota" term_id="resource-quota" >}}
configured,
it is helpful to have a default value in place for CPU limit.
Here are two of the restrictions that a CPU resource quota imposes on a namespace:
* For every Pod that runs in the namespace, each of its containers must have a CPU limit.
* CPU limits apply a resource reservation on the node where the Pod in question is scheduled.
  The total amount of CPU that is reserved for use by all Pods in the namespace must not
  exceed a specified limit.
When you add a LimitRange:
If any container in any Pod in that namespace does not specify its own CPU limit,
the control plane applies the default CPU limit to that container, and the Pod can then be
allowed to run in a namespace that is restricted by a CPU ResourceQuota.
## Clean up ## Clean up

View File

@ -2,12 +2,15 @@
title: Configure Minimum and Maximum Memory Constraints for a Namespace title: Configure Minimum and Maximum Memory Constraints for a Namespace
content_type: task content_type: task
weight: 30 weight: 30
description: >-
Define a range of valid memory resource limits for a namespace, so that every new Pod
in that namespace falls within the range you configure.
--- ---
<!-- overview --> <!-- overview -->
This page shows how to set minimum and maximum values for memory used by Containers This page shows how to set minimum and maximum values for memory used by containers
running in a namespace. You specify minimum and maximum memory values in a running in a namespace. You specify minimum and maximum memory values in a
[LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core) [LimitRange](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#limitrange-v1-core)
object. If a Pod does not meet the constraints imposed by the LimitRange, object. If a Pod does not meet the constraints imposed by the LimitRange,
@ -15,16 +18,14 @@ it cannot be created in the namespace.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} {{< include "task-tutorial-prereqs.md" >}}
Each node in your cluster must have at least 1 GiB of memory.
You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 1 GiB of memory available for Pods.
<!-- steps --> <!-- steps -->
@ -39,7 +40,7 @@ kubectl create namespace constraints-mem-example
## Create a LimitRange and a Pod ## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange: Here's an example manifest for a LimitRange:
{{< codenew file="admin/resource/memory-constraints.yaml" >}} {{< codenew file="admin/resource/memory-constraints.yaml" >}}
@ -72,18 +73,19 @@ file for the LimitRange, they were created automatically.
type: Container type: Container
``` ```
Now whenever a Container is created in the constraints-mem-example namespace, Kubernetes Now whenever you define a Pod within the constraints-mem-example namespace, Kubernetes
performs these steps: performs these steps:
* If the Container does not specify its own memory request and limit, assign the default * If any container in that Pod does not specify its own memory request and limit, assign
memory request and limit to the Container. the default memory request and limit to that container.
* Verify that the Container has a memory request that is greater than or equal to 500 MiB. * Verify that every container in that Pod requests at least 500 MiB of memory.
* Verify that the Container has a memory limit that is less than or equal to 1 GiB. * Verify that every container in that Pod requests no more than 1024 MiB (1 GiB)
of memory.
Here's the configuration file for a Pod that has one Container. The Container manifest Here's a manifest for a Pod that has one container. Within the Pod spec, the sole
specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the container specifies a memory request of 600 MiB and a memory limit of 800 MiB. These satisfy the
minimum and maximum memory constraints imposed by the LimitRange. minimum and maximum memory constraints imposed by the LimitRange.
{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}} {{< codenew file="admin/resource/memory-constraints-pod.yaml" >}}
@ -94,7 +96,7 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod.yaml --namespace=constraints-mem-example
``` ```
Verify that the Pod's Container is running: Verify that the Pod is running and that its container is healthy:
```shell ```shell
kubectl get pod constraints-mem-demo --namespace=constraints-mem-example kubectl get pod constraints-mem-demo --namespace=constraints-mem-example
@ -106,8 +108,9 @@ View detailed information about the Pod:
kubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example kubectl get pod constraints-mem-demo --output=yaml --namespace=constraints-mem-example
``` ```
The output shows that the Container has a memory request of 600 MiB and a memory limit The output shows that the container within that Pod has a memory request of 600 MiB and
of 800 MiB. These satisfy the constraints imposed by the LimitRange. a memory limit of 800 MiB. These satisfy the constraints imposed by the LimitRange for
this namespace:
```yaml ```yaml
resources: resources:
@ -125,7 +128,7 @@ kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example
## Attempt to create a Pod that exceeds the maximum memory constraint ## Attempt to create a Pod that exceeds the maximum memory constraint
Here's the configuration file for a Pod that has one Container. The Container specifies a Here's a manifest for a Pod that has one container. The container specifies a
memory request of 800 MiB and a memory limit of 1.5 GiB. memory request of 800 MiB and a memory limit of 1.5 GiB.
{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}} {{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}}
@ -136,8 +139,8 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-2.yaml --namespace=constraints-mem-example
``` ```
The output shows that the Pod does not get created, because the Container specifies a memory limit that is The output shows that the Pod does not get created, because it defines a container that
too large: requests more memory than is allowed:
``` ```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-2.yaml": Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-2.yaml":
@ -146,7 +149,7 @@ pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container i
## Attempt to create a Pod that does not meet the minimum memory request ## Attempt to create a Pod that does not meet the minimum memory request
Here's the configuration file for a Pod that has one Container. The Container specifies a Here's a manifest for a Pod that has one container. That container specifies a
memory request of 100 MiB and a memory limit of 800 MiB. memory request of 100 MiB and a memory limit of 800 MiB.
{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}} {{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}}
@ -157,8 +160,8 @@ Attempt to create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-3.yaml --namespace=constraints-mem-example
``` ```
The output shows that the Pod does not get created, because the Container specifies a memory The output shows that the Pod does not get created, because it defines a container
request that is too small: that requests less memory than the enforced minimum:
``` ```
Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-3.yaml": Error from server (Forbidden): error when creating "examples/admin/resource/memory-constraints-pod-3.yaml":
@ -167,9 +170,7 @@ pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container i
## Create a Pod that does not specify any memory request or limit ## Create a Pod that does not specify any memory request or limit
Here's a manifest for a Pod that has one container. The container does not
Here's the configuration file for a Pod that has one Container. The Container does not
specify a memory request, and it does not specify a memory limit. specify a memory request, and it does not specify a memory limit.
{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}} {{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}}
@ -182,12 +183,12 @@ kubectl apply -f https://k8s.io/examples/admin/resource/memory-constraints-pod-4
View detailed information about the Pod: View detailed information about the Pod:
``` ```shell
kubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml kubectl get pod constraints-mem-demo-4 --namespace=constraints-mem-example --output=yaml
``` ```
The output shows that the Pod's Container has a memory request of 1 GiB and a memory limit of 1 GiB. The output shows that the Pod's only container has a memory request of 1 GiB and a memory limit of 1 GiB.
How did the Container get those values? How did that container get those values?
``` ```
resources: resources:
@ -197,11 +198,20 @@ resources:
memory: 1Gi memory: 1Gi
``` ```
Because your Container did not specify its own memory request and limit, it was given the Because your Pod did not define any memory request and limit for that container, the cluster
applied a
[default memory request and limit](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) [default memory request and limit](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
from the LimitRange. from the LimitRange.
At this point, your Container might be running or it might not be running. Recall that a prerequisite This means that the definition of that Pod shows those values. You can check it using
`kubectl describe`:
```shell
# Look for the "Requests:" section of the output
kubectl describe pod constraints-mem-demo-4 --namespace=constraints-mem-example
```
At this point, your Pod might be running or it might not be running. Recall that a prerequisite
for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only for this task is that your Nodes have at least 1 GiB of memory. If each of your Nodes has only
1 GiB of memory, then there is not enough allocatable memory on any Node to accommodate a memory 1 GiB of memory, then there is not enough allocatable memory on any Node to accommodate a memory
request of 1 GiB. If you happen to be using Nodes with 2 GiB of memory, then you probably have request of 1 GiB. If you happen to be using Nodes with 2 GiB of memory, then you probably have
@ -209,7 +219,7 @@ enough space to accommodate the 1 GiB request.
Delete your Pod: Delete your Pod:
``` ```shell
kubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example kubectl delete pod constraints-mem-demo-4 --namespace=constraints-mem-example
``` ```
@ -224,12 +234,12 @@ Pods that were created previously.
As a cluster administrator, you might want to impose restrictions on the amount of memory that Pods can use. As a cluster administrator, you might want to impose restrictions on the amount of memory that Pods can use.
For example: For example:
* Each Node in a cluster has 2 GiB of memory. You do not want to accept any Pod that requests
  more than 2 GiB of memory, because no Node in the cluster can support the request.
* A cluster is shared by your production and development departments.
  You want to allow production workloads to consume up to 8 GiB of memory, but
  you want development workloads to be limited to 512 MiB. You create separate namespaces
  for production and development, and you apply memory constraints to each namespace
  (see the sketch after this list).
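For the second scenario, a sketch of the kind of LimitRange you might apply in the development namespace (the namespace and object names here are only examples):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-mem-limit-range   # example name
  namespace: development      # example namespace
spec:
  limits:
  - max:
      memory: 512Mi
    type: Container
```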
## Clean up ## Clean up
@ -241,7 +251,6 @@ kubectl delete namespace constraints-mem-example
``` ```
## {{% heading "whatsnext" %}} ## {{% heading "whatsnext" %}}

View File

@ -2,21 +2,35 @@
title: Configure Default Memory Requests and Limits for a Namespace title: Configure Default Memory Requests and Limits for a Namespace
content_type: task content_type: task
weight: 10 weight: 10
description: >-
Define a default memory resource limit for a namespace, so that every new Pod
in that namespace has a memory resource limit configured.
--- ---
<!-- overview --> <!-- overview -->
This page shows how to configure default memory requests and limits for a namespace. This page shows how to configure default memory requests and limits for a
If a Container is created in a namespace that has a default memory limit, and the Container {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
does not specify its own memory limit, then the Container is assigned the default memory limit.
A Kubernetes cluster can be divided into namespaces. Once you have a namespace that
that has a default memory
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
and you then try to create a Pod with a container that does not specify its own memory
limit its own memory limit, then the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} assigns the default
memory limit to that container.
Kubernetes assigns a default memory request under certain conditions that are explained later in this topic. Kubernetes assigns a default memory request under certain conditions that are explained later in this topic.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} {{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 2 GiB of memory. Each node in your cluster must have at least 2 GiB of memory.
@ -35,8 +49,9 @@ kubectl create namespace default-mem-example
## Create a LimitRange and a Pod ## Create a LimitRange and a Pod
Here's the configuration file for a LimitRange object. The configuration specifies Here's a manifest for an example {{< glossary_tooltip text="LimitRange" term_id="limitrange" >}}.
a default memory request and a default memory limit. The manifest specifies a default memory
request and a default memory limit.
{{< codenew file="admin/resource/memory-defaults.yaml" >}} {{< codenew file="admin/resource/memory-defaults.yaml" >}}
@ -46,12 +61,13 @@ Create the LimitRange in the default-mem-example namespace:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example
``` ```
Now if a Container is created in the default-mem-example namespace, and the Now if you create a Pod in the default-mem-example namespace, and any container
Container does not specify its own values for memory request and memory limit, within that Pod does not specify its own values for memory request and memory limit,
the Container is given a default memory request of 256 MiB and a default then the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
memory limit of 512 MiB. applies default values: a memory request of 256MiB and a memory limit of 512MiB.
Here's the configuration file for a Pod that has one Container. The Container
Here's an example manifest for a Pod that has one container. The container
does not specify a memory request and limit. does not specify a memory request and limit.
{{< codenew file="admin/resource/memory-defaults-pod.yaml" >}} {{< codenew file="admin/resource/memory-defaults-pod.yaml" >}}
@ -68,7 +84,7 @@ View detailed information about the Pod:
kubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example kubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example
``` ```
The output shows that the Pod's Container has a memory request of 256 MiB and The output shows that the Pod's container has a memory request of 256 MiB and
a memory limit of 512 MiB. These are the default values specified by the LimitRange. a memory limit of 512 MiB. These are the default values specified by the LimitRange.
```shell ```shell
@ -89,9 +105,9 @@ Delete your Pod:
kubectl delete pod default-mem-demo --namespace=default-mem-example kubectl delete pod default-mem-demo --namespace=default-mem-example
``` ```
## What if you specify a Container's limit, but not its request? ## What if you specify a container's limit, but not its request?
Here's the configuration file for a Pod that has one Container. The Container Here's a manifest for a Pod that has one container. The container
specifies a memory limit, but not a request: specifies a memory limit, but not a request:
{{< codenew file="admin/resource/memory-defaults-pod-2.yaml" >}} {{< codenew file="admin/resource/memory-defaults-pod-2.yaml" >}}
@ -109,8 +125,8 @@ View detailed information about the Pod:
kubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example kubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example
``` ```
The output shows that the Container's memory request is set to match its memory limit. The output shows that the container's memory request is set to match its memory limit.
Notice that the Container was not assigned the default memory request value of 256Mi. Notice that the container was not assigned the default memory request value of 256Mi.
``` ```
resources: resources:
@ -120,9 +136,9 @@ resources:
memory: 1Gi memory: 1Gi
``` ```
## What if you specify a Container's request, but not its limit? ## What if you specify a container's request, but not its limit?
Here's the configuration file for a Pod that has one Container. The Container Here's a manifest for a Pod that has one container. The container
specifies a memory request, but not a limit: specifies a memory request, but not a limit:
{{< codenew file="admin/resource/memory-defaults-pod-3.yaml" >}} {{< codenew file="admin/resource/memory-defaults-pod-3.yaml" >}}
@ -139,9 +155,9 @@ View the Pod's specification:
kubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example kubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example
``` ```
The output shows that the Container's memory request is set to the value specified in the The output shows that the container's memory request is set to the value specified in the
Container's configuration file. The Container's memory limit is set to 512Mi, which is the container's manifest. The container is limited to use no more than 512MiB of
default memory limit for the namespace. memory, which matches the default memory limit for the namespace.
``` ```
resources: resources:
@ -153,15 +169,23 @@ resources:
## Motivation for default memory limits and requests
If your namespace has a memory {{< glossary_tooltip text="resource quota" term_id="resource-quota" >}}
configured,
it is helpful to have a default value in place for memory limit.
Here are some of the restrictions that a resource quota imposes on a namespace:
* For every Pod that runs in the namespace, the Pod and each of its containers must have a memory limit.
  (If you specify a memory limit for every container in a Pod, Kubernetes can infer the Pod-level memory
  limit by adding up the limits for its containers).
* Memory limits apply a resource reservation on the node where the Pod in question is scheduled.
  The total amount of memory reserved for all Pods in the namespace must not exceed a specified limit.
* The total amount of memory actually used by all Pods in the namespace must also not exceed a specified limit.
When you add a LimitRange:
If any container in any Pod in that namespace does not specify its own memory limit,
the control plane applies the default memory limit to that container, and the Pod can then be
allowed to run in a namespace that is restricted by a memory ResourceQuota.
## Clean up ## Clean up

View File

@ -2,14 +2,17 @@
title: Configure Memory and CPU Quotas for a Namespace title: Configure Memory and CPU Quotas for a Namespace
content_type: task content_type: task
weight: 50 weight: 50
description: >-
Define overall memory and CPU resource limits for a namespace.
--- ---
<!-- overview --> <!-- overview -->
This page shows how to set quotas for the total amount of memory and CPU that
can be used by all Pods running in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
You specify quotas in a
[ResourceQuota](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
object.
@ -17,14 +20,13 @@ object.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 1 GiB of memory. Each node in your cluster must have at least 1 GiB of memory.
<!-- steps --> <!-- steps -->
## Create a namespace ## Create a namespace
@ -38,7 +40,7 @@ kubectl create namespace quota-mem-cpu-example
## Create a ResourceQuota ## Create a ResourceQuota
Here is the configuration file for a ResourceQuota object: Here is a manifest for an example ResourceQuota:
{{< codenew file="admin/resource/quota-mem-cpu.yaml" >}} {{< codenew file="admin/resource/quota-mem-cpu.yaml" >}}
@ -56,15 +58,18 @@ kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --outpu
The ResourceQuota places these requirements on the quota-mem-cpu-example namespace: The ResourceQuota places these requirements on the quota-mem-cpu-example namespace:
* Every Container must have a memory request, memory limit, cpu request, and cpu limit. * For every Pod in the namespace, each container must have a memory request, memory limit, cpu request, and cpu limit.
* The memory request total for all Containers must not exceed 1 GiB. * The memory request total for all Pods in that namespace must not exceed 1 GiB.
* The memory limit total for all Containers must not exceed 2 GiB. * The memory limit total for all Pods in that namespace must not exceed 2 GiB.
* The CPU request total for all Containers must not exceed 1 cpu. * The CPU request total for all Pods in that namespace must not exceed 1 cpu.
* The CPU limit total for all Containers must not exceed 2 cpu. * The CPU limit total for all Pods in that namespace must not exceed 2 cpu.
See [meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu)
to learn what Kubernetes means by “1 CPU”.
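Expressed as a manifest, a ResourceQuota that imposes those limits looks roughly like the following sketch (an illustrative equivalent of `quota-mem-cpu.yaml`; details such as the object name may differ):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo   # example name
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```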
## Create a Pod ## Create a Pod
Here is the configuration file for a Pod: Here is a manifest for an example Pod:
{{< codenew file="admin/resource/quota-mem-cpu-pod.yaml" >}} {{< codenew file="admin/resource/quota-mem-cpu-pod.yaml" >}}
@ -75,15 +80,15 @@ Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu-pod.yaml --namespace=quota-mem-cpu-example
``` ```
Verify that the Pod's Container is running: Verify that the Pod is running and that its (only) container is healthy:
``` ```shell
kubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example kubectl get pod quota-mem-cpu-demo --namespace=quota-mem-cpu-example
``` ```
Once again, view detailed information about the ResourceQuota: Once again, view detailed information about the ResourceQuota:
``` ```shell
kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml
``` ```
@ -105,15 +110,22 @@ status:
requests.memory: 600Mi requests.memory: 600Mi
``` ```
If you have the `jq` tool, you can also query (using [JSONPath](/docs/reference/kubectl/jsonpath/))
for just the `used` values, **and** pretty-print that that of the output. For example:
```shell
kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example -o jsonpath='{ .status.used }' | jq .
```
## Attempt to create a second Pod ## Attempt to create a second Pod
Here is the configuration file for a second Pod: Here is a manifest for a second Pod:
{{< codenew file="admin/resource/quota-mem-cpu-pod-2.yaml" >}} {{< codenew file="admin/resource/quota-mem-cpu-pod-2.yaml" >}}
In the configuration file, you can see that the Pod has a memory request of 700 MiB. In the manifest, you can see that the Pod has a memory request of 700 MiB.
Notice that the sum of the used memory request and this new memory Notice that the sum of the used memory request and this new memory
request exceeds the memory request quota. 600 MiB + 700 MiB > 1 GiB. request exceeds the memory request quota: 600 MiB + 700 MiB > 1 GiB.
Attempt to create the Pod: Attempt to create the Pod:
@ -133,11 +145,12 @@ requested: requests.memory=700Mi,used: requests.memory=600Mi, limited: requests.
## Discussion ## Discussion
As you have seen in this exercise, you can use a ResourceQuota to restrict As you have seen in this exercise, you can use a ResourceQuota to restrict
the memory request total for all Containers running in a namespace. the memory request total for all Pods running in a namespace.
You can also restrict the totals for memory limit, cpu request, and cpu limit. You can also restrict the totals for memory limit, cpu request, and cpu limit.
If you want to restrict individual Containers, instead of totals for all Containers, use a Instead of managing total resource use within a namespace, you might want to restrict
[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/). individual Pods, or the containers in those Pods. To achieve that kind of limiting, use a
[LimitRange](/docs/concepts/policy/limit-range/).
## Clean up ## Clean up

View File

@ -2,14 +2,16 @@
title: Configure a Pod Quota for a Namespace title: Configure a Pod Quota for a Namespace
content_type: task content_type: task
weight: 60 weight: 60
description: >-
Restrict how many Pods you can create within a namespace.
--- ---
<!-- overview --> <!-- overview -->
This page shows how to set a quota for the total number of Pods that can run This page shows how to set a quota for the total number of Pods that can run
in a namespace. You specify quotas in a in a {{< glossary_tooltip text="Namespace" term_id="namespace" >}}. You specify quotas in a
[ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core) [ResourceQuota](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
object. object.
@ -18,10 +20,9 @@ object.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} {{< include "task-tutorial-prereqs.md" >}}
You must have access to create namespaces in your cluster.
<!-- steps --> <!-- steps -->
@ -36,7 +37,7 @@ kubectl create namespace quota-pod-example
## Create a ResourceQuota ## Create a ResourceQuota
Here is the configuration file for a ResourceQuota object: Here is an example manifest for a ResourceQuota:
{{< codenew file="admin/resource/quota-pod.yaml" >}} {{< codenew file="admin/resource/quota-pod.yaml" >}}
@ -66,11 +67,12 @@ status:
pods: "0" pods: "0"
``` ```
Here is the configuration file for a Deployment: Here is an example manifest for a {{< glossary_tooltip term_id="deployment" >}}:
{{< codenew file="admin/resource/quota-pod-deployment.yaml" >}} {{< codenew file="admin/resource/quota-pod-deployment.yaml" >}}
In the configuration file, `replicas: 3` tells Kubernetes to attempt to create three Pods, all running the same application. In that manifest, `replicas: 3` tells Kubernetes to attempt to create three new Pods, all
running the same application.
Create the Deployment: Create the Deployment:
@ -85,7 +87,7 @@ kubectl get deployment pod-quota-demo --namespace=quota-pod-example --output=yam
``` ```
The output shows that even though the Deployment specifies three replicas, only two The output shows that even though the Deployment specifies three replicas, only two
Pods were created because of the quota. Pods were created because of the quota you defined earlier:
```yaml ```yaml
spec: spec:
@ -95,11 +97,18 @@ spec:
status: status:
availableReplicas: 2 availableReplicas: 2
... ...
lastUpdateTime: 2017-07-07T20:57:05Z lastUpdateTime: 2021-04-02T20:57:05Z
message: 'unable to create pods: pods "pod-quota-demo-1650323038-" is forbidden: message: 'unable to create pods: pods "pod-quota-demo-1650323038-" is forbidden:
exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2' exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2'
``` ```
### Choice of resource
In this task you have defined a ResourceQuota that limited the total number of Pods, but
you could also limit the total number of other kinds of object. For example, you
might decide to limit how many {{< glossary_tooltip text="CronJobs" term_id="cronjob" >}}
can live in a single namespace.
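For example, here is a sketch of an object-count quota that caps CronJobs (the object name and the limit of 5 are arbitrary choices for illustration):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cronjob-count-quota   # example name
spec:
  hard:
    count/cronjobs.batch: "5"   # at most 5 CronJobs in this namespace
```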
## Clean up ## Clean up
Delete your namespace: Delete your namespace:

View File

@ -138,7 +138,8 @@ The sum of their values will account for the total amount of reserved memory.
A new `--reserved-memory` flag was added to Memory Manager to allow for this total reserved memory A new `--reserved-memory` flag was added to Memory Manager to allow for this total reserved memory
to be split (by a node administrator) and accordingly reserved across many NUMA nodes. to be split (by a node administrator) and accordingly reserved across many NUMA nodes.
The flag specifies a comma-separated list of memory reservations of different memory types per NUMA node.
Memory reservations across multiple NUMA nodes can be specified using a semicolon as a separator.
This parameter is only useful in the context of the Memory Manager feature. This parameter is only useful in the context of the Memory Manager feature.
The Memory Manager will not use this reserved memory for the allocation of container workloads. The Memory Manager will not use this reserved memory for the allocation of container workloads.
@ -180,6 +181,10 @@ or
`--reserved-memory 0:memory=1Gi --reserved-memory 1:memory=2Gi` `--reserved-memory 0:memory=1Gi --reserved-memory 1:memory=2Gi`
or
`--reserved-memory '0:memory=1Gi;1:memory=2Gi'`
When you specify values for `--reserved-memory` flag, you must comply with the setting that When you specify values for `--reserved-memory` flag, you must comply with the setting that
you prior provided via Node Allocatable Feature flags. you prior provided via Node Allocatable Feature flags.
That is, the following rule must be obeyed for each memory type: That is, the following rule must be obeyed for each memory type:
@ -215,7 +220,7 @@ Here is an example of a correct configuration:
--kube-reserved=cpu=4,memory=4Gi --kube-reserved=cpu=4,memory=4Gi
--system-reserved=cpu=1,memory=1Gi --system-reserved=cpu=1,memory=1Gi
--memory-manager-policy=Static --memory-manager-policy=Static
--reserved-memory 0:memory=3Gi --reserved-memory 1:memory=2148Mi --reserved-memory '0:memory=3Gi;1:memory=2148Mi'
``` ```
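If your kubelet reads its settings from a configuration file rather than from command line flags, the equivalent configuration might look roughly like the sketch below (field names as defined by the `kubelet.config.k8s.io/v1beta1` API; adjust the values to your own nodes):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static
kubeReserved:
  cpu: "4"
  memory: 4Gi
systemReserved:
  cpu: "1"
  memory: 1Gi
reservedMemory:
  - numaNode: 0
    limits:
      memory: 3Gi
  - numaNode: 1
    limits:
      memory: 2148Mi
```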
Let us validate the configuration above: Let us validate the configuration above:

View File

@ -8,37 +8,35 @@ weight: 70
<!-- overview --> <!-- overview -->
Kubernetes' support for direct integration with Docker Engine is deprecated, and will be removed.
Most apps do not have a direct dependency on the runtime hosting containers. However, there are
still a lot of telemetry and monitoring agents that have a dependency on Docker to collect container
metadata, logs and metrics. This document aggregates information on how to detect these
dependencies and links on how to migrate these agents to use generic tools or
alternative runtimes.
## Telemetry and security agents
Within a Kubernetes cluster there are a few different ways to run telemetry or security agents.
Some agents have a direct dependency on Docker Engine when they run as DaemonSets or
directly on nodes.
### Why do some telemetry agents communicate with Docker Engine?
Historically, Kubernetes was written to work specifically with Docker Engine.
Kubernetes took care of networking and scheduling, relying on Docker Engine for launching
and running containers (within Pods) on a node. Some information that is relevant to telemetry,
such as a pod name, is only available from Kubernetes components. Other data, such as container
metrics, is not the responsibility of the container runtime. Early telemetry agents needed to query the
container runtime **and** Kubernetes to report an accurate picture. Over time, Kubernetes gained
the ability to support multiple runtimes, and now supports any runtime that is compatible with
the container runtime interface.
Some telemetry agents rely specifically on Docker Engine tooling. For example, an agent
might run a command such as
[`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
or [`docker top`](https://docs.docker.com/engine/reference/commandline/top/) to list
containers and processes or [`docker logs`](https://docs.docker.com/engine/reference/commandline/logs/)
to receive streamed logs. If nodes in your existing cluster use
Docker Engine, and you switch to a different container runtime,
these commands will not work any longer.
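As a concrete, hypothetical illustration (this is not taken from any particular agent), a DaemonSet that talks to Docker Engine directly is typically deployed with the Docker socket mounted from the host:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-telemetry-agent   # hypothetical agent
spec:
  selector:
    matchLabels:
      app: example-telemetry-agent
  template:
    metadata:
      labels:
        app: example-telemetry-agent
    spec:
      containers:
      - name: agent
        image: example.com/telemetry-agent:1.0   # placeholder image
        volumeMounts:
        - name: dockersock
          mountPath: /var/run/docker.sock
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
```

A `hostPath` volume for `/var/run/docker.sock` (or a `docker` binary baked into the agent image) is a strong hint that the workload depends on Docker Engine.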
### Identify DaemonSets that depend on Docker Engine {#identify-docker-dependency}
If a pod wants to make calls to the `dockerd` running on the node, the pod must either: If a pod wants to make calls to the `dockerd` running on the node, the pod must either:

View File

@ -101,3 +101,30 @@ The `node-local-dns` ConfigMap can also be modified directly with the stubDomain
in the Corefile format. Some cloud providers might not allow modifying `node-local-dns` ConfigMap directly. in the Corefile format. Some cloud providers might not allow modifying `node-local-dns` ConfigMap directly.
In those cases, the `kube-dns` ConfigMap can be updated. In those cases, the `kube-dns` ConfigMap can be updated.
## Setting memory limits
node-local-dns pods use memory for storing cache entries and processing queries. Since they do not watch Kubernetes objects, the cluster size or the number of Services/Endpoints do not directly affect memory usage. Memory usage is influenced by the DNS query pattern.
From [CoreDNS docs](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md),
> The default cache size is 10000 entries, which uses about 30 MB when completely filled.
This would be the memory usage for each server block (if the cache gets completely filled).
Memory usage can be reduced by specifying smaller cache sizes.
The number of concurrent queries is linked to the memory demand, because each extra
goroutine used for handling a query requires an amount of memory. You can set an upper limit
using the `max_concurrent` option in the forward plugin.
If a node-local-dns pod attempts to use more memory than is available (because of total system
resources, or because of a configured
[resource limit](/docs/concepts/configuration/manage-resources-containers/)), the operating system
may shut down that pod's container.
If this happens, the container that is terminated (“OOMKilled”) does not clean up the custom
packet filtering rules that it previously added during startup.
The node-local-dns container should get restarted (since managed as part of a DaemonSet), but this
will lead to a brief DNS downtime each time that the container fails: the packet filtering rules direct
DNS queries to a local Pod that is unhealthy.
You can determine a suitable memory limit by running node-local-dns pods without a limit and
measuring the peak usage. You can also set up and use a
[VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler)
in _recommender mode_, and then check its recommendations.
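The following is a minimal sketch of that second approach, assuming the VerticalPodAutoscaler CRDs and recommender are installed in your cluster and that the DaemonSet is named `node-local-dns` in `kube-system`; adjust the names to match your deployment:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: node-local-dns          # illustrative name
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: DaemonSet
    name: node-local-dns
  updatePolicy:
    updateMode: "Off"           # recommender mode: report, do not evict
EOF

# Later, check the recommendations:
kubectl -n kube-system describe vpa node-local-dns
```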

View File

@ -67,7 +67,7 @@ transient slices for resources that are supported by that init system.
Depending on the configuration of the associated container runtime, Depending on the configuration of the associated container runtime,
operators may have to choose a particular cgroup driver to ensure operators may have to choose a particular cgroup driver to ensure
proper system behavior. For example, if operators use the `systemd` proper system behavior. For example, if operators use the `systemd`
cgroup driver provided by the `docker` runtime, the `kubelet` must cgroup driver provided by the `containerd` runtime, the `kubelet` must
be configured to use the `systemd` cgroup driver. be configured to use the `systemd` cgroup driver.
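For example, with containerd you would typically keep the two settings aligned along these lines (a sketch; the file paths assume containerd's default config location and a kubeadm-style kubelet configuration file, and may differ on your distribution):

```shell
# containerd: enable the systemd cgroup driver for the runc runtime
# in /etc/containerd/config.toml:
#
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true

# kubelet: confirm the matching driver in its configuration file
grep cgroupDriver /var/lib/kubelet/config.yaml
# cgroupDriver: systemd
```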
### Kube Reserved ### Kube Reserved

View File

@ -70,7 +70,7 @@ The following sysctls are supported in the _safe_ set:
- `kernel.shm_rmid_forced`, - `kernel.shm_rmid_forced`,
- `net.ipv4.ip_local_port_range`, - `net.ipv4.ip_local_port_range`,
- `net.ipv4.tcp_syncookies`, - `net.ipv4.tcp_syncookies`,
- `net.ipv4.ping_group_range` (since Kubernetes 1.18). - `net.ipv4.ping_group_range` (since Kubernetes 1.18),
- `net.ipv4.ip_unprivileged_port_start` (since Kubernetes 1.22). - `net.ipv4.ip_unprivileged_port_start` (since Kubernetes 1.22).
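For example, a Pod can request one of these safe sysctls through its security context; a minimal sketch (the Pod name and image are illustrative):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example            # illustrative name
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: app
    image: nginx                  # illustrative image
EOF
```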
{{< note >}} {{< note >}}

View File

@ -8,10 +8,12 @@ card:
--- ---
<!-- overview --> <!-- overview -->
Many applications rely on configuration which is used during either application initialization or runtime.
Most of the time, there is a requirement to adjust the values assigned to configuration parameters.
ConfigMaps are the Kubernetes way to inject configuration data into application Pods.
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps.
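As a quick taste of what follows, you can create a ConfigMap from literal values and inspect the result (the name and keys here are only illustrative):

```shell
kubectl create configmap example-config --from-literal=LOG_LEVEL=debug --from-literal=RETRIES=3
kubectl get configmap example-config -o yaml
```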
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}

View File

@ -290,10 +290,18 @@ To enable and use token request projection, you must specify each of the followi
command line arguments to `kube-apiserver`: command line arguments to `kube-apiserver`:
* `--service-account-issuer` * `--service-account-issuer`
This argument serves as the identifier of the service account token issuer. You can specify the `--service-account-issuer` argument multiple times; this can be useful to enable a non-disruptive change of the issuer. When this flag is specified multiple times, the first is used to generate tokens and all are used to determine which issuers are accepted. You must be running Kubernetes v1.22 or later to be able to specify `--service-account-issuer` multiple times.
* `--service-account-key-file` * `--service-account-key-file`
File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If specified multiple times, tokens signed by any of the specified keys are considered valid by the Kubernetes API server.
* `--service-account-signing-key-file` * `--service-account-signing-key-file`
Path to the file that contains the current private key of the service account token issuer. The issuer signs issued ID tokens with this private key.
* `--api-audiences` (can be omitted) * `--api-audiences` (can be omitted)
The service account token authenticator validates that tokens used against the API are bound to at least one of these audiences. If `api-audiences` is specified multiple times, tokens for any of the specified audiences are considered valid by the Kubernetes API server. If the `--service-account-issuer` flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.
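Put together, the relevant kube-apiserver flags might look like the sketch below; the issuer URL and key file paths are illustrative values, not required defaults:

```shell
kube-apiserver \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
  --api-audiences=https://kubernetes.default.svc.cluster.local
# (other kube-apiserver flags omitted)
```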
{{< /note >}} {{< /note >}}
The kubelet can also project a service account token into a Pod. You can The kubelet can also project a service account token into a Pod. You can

View File

@ -31,7 +31,7 @@ as Windows server containers, meaning that the version of the base images does n
to match that of the host. It is, however, recommended that you use the same base image to match that of the host. It is, however, recommended that you use the same base image
version as your Windows Server container workloads to ensure you do not have any unused version as your Windows Server container workloads to ensure you do not have any unused
images taking up space on the node. HostProcess containers also support images taking up space on the node. HostProcess containers also support
[volume mounts](./create-hostprocess-pod#volume-mounts) within the container volume. [volume mounts](#volume-mounts) within the container volume.
### When should I use a Windows HostProcess container? ### When should I use a Windows HostProcess container?
@ -73,19 +73,20 @@ documentation for more details.
These limitations are relevant for Kubernetes v{{< skew currentVersion >}}: These limitations are relevant for Kubernetes v{{< skew currentVersion >}}:
- HostProcess containers require containerd 1.6 or higher - HostProcess containers require containerd 1.6 or higher
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}. {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
- HostProcess pods can only contain HostProcess containers. This is a current limitation - HostProcess pods can only contain HostProcess containers. This is a current limitation
of the Windows OS; non-privileged Windows containers cannot share a vNIC with the host IP namespace. of the Windows OS; non-privileged Windows containers cannot share a vNIC with the host IP namespace.
- HostProcess containers run as a process on the host and do not have any degree of - HostProcess containers run as a process on the host and do not have any degree of
isolation other than resource constraints imposed on the HostProcess user account. Neither isolation other than resource constraints imposed on the HostProcess user account. Neither
filesystem or Hyper-V isolation are supported for HostProcess containers. filesystem or Hyper-V isolation are supported for HostProcess containers.
- Volume mounts are supported and are mounted under the container volume. See [Volume Mounts](#volume-mounts) - Volume mounts are supported and are mounted under the container volume. See
[Volume Mounts](#volume-mounts)
- A limited set of host user accounts are available for HostProcess containers by default. - A limited set of host user accounts are available for HostProcess containers by default.
See [Choosing a User Account](#choosing-a-user-account). See [Choosing a User Account](#choosing-a-user-account).
- Resource limits (disk, memory, cpu count) are supported in the same fashion as processes - Resource limits (disk, memory, cpu count) are supported in the same fashion as processes
on the host. on the host.
- Both Named pipe mounts and Unix domain sockets are **not** supported and should instead - Both Named pipe mounts and Unix domain sockets are **not** supported and should instead
be accessed via their path on the host (e.g. \\\\.\\pipe\\\*) be accessed via their path on the host (e.g. \\\\.\\pipe\\\*)
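To make these requirements concrete, a minimal HostProcess Pod sketch might look like the following; the Pod name, image, and user account are placeholders rather than values taken from this page:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example            # placeholder name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: 'NT AUTHORITY\SYSTEM'
  hostNetwork: true                    # required for HostProcess pods
  containers:
  - name: shell
    image: example.com/hostprocess-image:latest   # placeholder image
  nodeSelector:
    kubernetes.io/os: windows
EOF
```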
## HostProcess Pod configuration requirements ## HostProcess Pod configuration requirements

View File

@ -42,7 +42,7 @@ The `spec` of a static Pod cannot refer to other API objects
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
This page assumes you're using {{< glossary_tooltip term_id="docker" >}} to run Pods, This page assumes you're using {{< glossary_tooltip term_id="cri-o" >}} to run Pods,
and that your nodes are running the Fedora operating system. and that your nodes are running the Fedora operating system.
Instructions for other distributions or Kubernetes installations may vary. Instructions for other distributions or Kubernetes installations may vary.
@ -156,15 +156,20 @@ already be running.
You can view running containers (including static Pods) by running (on the node): You can view running containers (including static Pods) by running (on the node):
```shell ```shell
# Run this command on the node where the kubelet is running # Run this command on the node where the kubelet is running
docker ps crictl ps
``` ```
The output might be something like: The output might be something like:
```console
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
129fd7d382018 docker.io/library/nginx@sha256:... 11 minutes ago Running web 0 34533c6729106
``` ```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d05272b57e nginx:latest "nginx" 8 minutes ago Up 8 minutes k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c {{< note >}}
``` `crictl` outputs the image URI and SHA-256 checksum. `NAME` will look more like:
`docker.io/library/nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31`.
{{< /note >}}
You can see the mirror Pod on the API server: You can see the mirror Pod on the API server:
@ -172,8 +177,8 @@ You can see the mirror Pod on the API server:
kubectl get pods kubectl get pods
``` ```
``` ```
NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE
static-web-my-node1 1/1 Running 0 2m static-web 1/1 Running 0 2m
``` ```
{{< note >}} {{< note >}}
@ -181,7 +186,6 @@ Make sure the kubelet has permission to create the mirror Pod in the API server.
[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/). [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/).
{{< /note >}} {{< /note >}}
{{< glossary_tooltip term_id="label" text="Labels" >}} from the static Pod are {{< glossary_tooltip term_id="label" text="Labels" >}} from the static Pod are
propagated into the mirror Pod. You can use those labels as normal via propagated into the mirror Pod. You can use those labels as normal via
{{< glossary_tooltip term_id="selector" text="selectors" >}}, etc. {{< glossary_tooltip term_id="selector" text="selectors" >}}, etc.
@ -190,34 +194,33 @@ If you try to use `kubectl` to delete the mirror Pod from the API server,
the kubelet _doesn't_ remove the static Pod: the kubelet _doesn't_ remove the static Pod:
```shell ```shell
kubectl delete pod static-web-my-node1 kubectl delete pod static-web
``` ```
``` ```
pod "static-web-my-node1" deleted pod "static-web" deleted
``` ```
You can see that the Pod is still running: You can see that the Pod is still running:
```shell ```shell
kubectl get pods kubectl get pods
``` ```
``` ```
NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE
static-web-my-node1 1/1 Running 0 12s static-web 1/1 Running 0 4s
``` ```
Back on your node where the kubelet is running, you can try to stop the container manually.
You'll see that, after a time, the kubelet will notice and will restart the Pod You'll see that, after a time, the kubelet will notice and will restart the Pod
automatically: automatically:
```shell ```shell
# Run these commands on the node where the kubelet is running # Run these commands on the node where the kubelet is running
docker stop f6d05272b57e # replace with the ID of your container crictl stop 129fd7d382018 # replace with the ID of your container
sleep 20 sleep 20
docker ps crictl ps
``` ```
``` ```console
CONTAINER ID IMAGE COMMAND CREATED ... CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
5b920cbaf8b1 nginx:latest "nginx -g 'daemon of 2 seconds ago ... 89db4553e1eeb docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106
``` ```
## Dynamic addition and removal of static pods ## Dynamic addition and removal of static pods
@ -230,14 +233,13 @@ The running kubelet periodically scans the configured directory (`/etc/kubelet.d
# #
mv /etc/kubelet.d/static-web.yaml /tmp mv /etc/kubelet.d/static-web.yaml /tmp
sleep 20 sleep 20
docker ps crictl ps
# You see that no nginx container is running # You see that no nginx container is running
mv /tmp/static-web.yaml /etc/kubelet.d/ mv /tmp/static-web.yaml /etc/kubelet.d/
sleep 20 sleep 20
docker ps crictl ps
``` ```
```console
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f427638871c35 docker.io/library/nginx@sha256:... 19 seconds ago Running web 1 34533c6729106
``` ```
CONTAINER ID IMAGE COMMAND CREATED ...
e7a62e3427f1 nginx:latest "nginx -g 'daemon of 27 seconds ago
```

View File

@ -17,15 +17,11 @@ You can use it to inspect and debug container runtimes and applications on a
Kubernetes node. `crictl` and its source are hosted in the Kubernetes node. `crictl` and its source are hosted in the
[cri-tools](https://github.com/kubernetes-sigs/cri-tools) repository. [cri-tools](https://github.com/kubernetes-sigs/cri-tools) repository.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
`crictl` requires a Linux operating system with a CRI runtime. `crictl` requires a Linux operating system with a CRI runtime.
<!-- steps --> <!-- steps -->
## Installing crictl ## Installing crictl
@ -41,27 +37,37 @@ of Kubernetes. Extract it and move it to a location on your system path, such as
The `crictl` command has several subcommands and runtime flags. Use The `crictl` command has several subcommands and runtime flags. Use
`crictl help` or `crictl <subcommand> help` for more details. `crictl help` or `crictl <subcommand> help` for more details.
You can set the endpoint for `crictl` by doing one of the following:

* Set the `--runtime-endpoint` and `--image-endpoint` flags.
* Set the `CONTAINER_RUNTIME_ENDPOINT` and `IMAGE_SERVICE_ENDPOINT` environment
  variables.
* Set the endpoint in the configuration file `/etc/crictl.yaml`. To specify a
  different file, use the `--config=PATH_TO_FILE` flag when you run `crictl`.
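For example, when using containerd you might point both endpoints at the containerd socket (the socket path can differ between installations):

```shell
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock \
       --image-endpoint unix:///var/run/containerd/containerd.sock \
       ps
```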
{{<note>}}
If you don't set an endpoint, `crictl` attempts to connect to a list of known
endpoints, which might result in an impact to performance.
{{</note>}}
You can also specify timeout values when connecting to the server and enable or You can also specify timeout values when connecting to the server and enable or
disable debugging, by specifying `timeout` or `debug` values in the configuration disable debugging, by specifying `timeout` or `debug` values in the configuration
file or using the `--timeout` and `--debug` command-line flags. file or using the `--timeout` and `--debug` command-line flags.
To view or edit the current configuration, view or edit the contents of
`/etc/crictl.yaml`. For example, the configuration when using the `containerd`
container runtime would be similar to this:

```
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
```
To learn more about `crictl`, refer to the [`crictl`
documentation](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md).
## Example crictl commands ## Example crictl commands
The following examples show some `crictl` commands and example output. The following examples show some `crictl` commands and example output.
@ -348,64 +354,9 @@ CONTAINER ID IMAGE CREATED STATE
3e025dd50a72d busybox About a minute ago Running busybox 0 3e025dd50a72d busybox About a minute ago Running busybox 0
``` ```
## {{% heading "whatsnext" %}}
* [Learn more about `crictl`](https://github.com/kubernetes-sigs/cri-tools).
* [Map `docker` CLI commands to `crictl`](/reference/tools/map-crictl-dockercli/).
<!-- discussion --> <!-- discussion -->
See [kubernetes-sigs/cri-tools](https://github.com/kubernetes-sigs/cri-tools)
for more information.
## Mapping from docker cli to crictl
The exact versions for below mapping table are for docker cli v1.40 and crictl v1.19.0. Please note that the list is not exhaustive. For example, it doesn't include experimental commands of docker cli.
{{< note >}}
The output format of CRICTL is similar to Docker CLI, despite some missing columns for some CLI. Make sure to check output for the specific command if your script output parsing.
{{< /note >}}
### Retrieve Debugging Information
{{< table caption="mapping from docker cli to crictl - retrieve debugging information" >}}
docker cli | crictl | Description | Unsupported Features
-- | -- | -- | --
`attach` | `attach` | Attach to a running container | `--detach-keys`, `--sig-proxy`
`exec` | `exec` | Run a command in a running container | `--privileged`, `--user`, `--detach-keys`
`images` | `images` | List images |  
`info` | `info` | Display system-wide information |  
`inspect` | `inspect`, `inspecti` | Return low-level information on a container, image or task |  
`logs` | `logs` | Fetch the logs of a container | `--details`
`ps` | `ps` | List containers |  
`stats` | `stats` | Display a live stream of container(s) resource usage statistics | Column: NET/BLOCK I/O, PIDs
`version` | `version` | Show the runtime (Docker, ContainerD, or others) version information |  
{{< /table >}}
### Perform Changes
{{< table caption="mapping from docker cli to crictl - perform changes" >}}
docker cli | crictl | Description | Unsupported Features
-- | -- | -- | --
`create` | `create` | Create a new container |  
`kill` | `stop` (timeout = 0) | Kill one or more running container | `--signal`
`pull` | `pull` | Pull an image or a repository from a registry | `--all-tags`, `--disable-content-trust`
`rm` | `rm` | Remove one or more containers |  
`rmi` | `rmi` | Remove one or more images |  
`run` | `run` | Run a command in a new container |  
`start` | `start` | Start one or more stopped containers | `--detach-keys`
`stop` | `stop` | Stop one or more running containers |  
`update` | `update` | Update configuration of one or more containers | `--restart`, `--blkio-weight` and some other resource limit not supported by CRI.
{{< /table >}}
### Supported only in crictl
{{< table caption="mapping from docker cli to crictl - supported only in crictl" >}}
crictl | Description
-- | --
`imagefsinfo` | Return image filesystem info
`inspectp` | Display the status of one or more pods
`port-forward` | Forward local port to a pod
`pods` | List pods
`runp` | Run a new pod
`rmp` | Remove one or more pods
`stopp` | Stop one or more running pods
{{< /table >}}

View File

@ -45,8 +45,9 @@ and command-line interfaces (CLIs), such as [`kubectl`](/docs/reference/kubectl/
Someone else from the community may have already asked a similar question or may Someone else from the community may have already asked a similar question or may
be able to help with your problem. The Kubernetes team will also monitor be able to help with your problem. The Kubernetes team will also monitor
[posts tagged Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes). [posts tagged Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes).
If there aren't any existing questions that help, please If there aren't any existing questions that help, **please [ensure that your question is on-topic on Stack Overflow](https://stackoverflow.com/help/on-topic)
[ask a new one](https://stackoverflow.com/questions/ask?tags=kubernetes)! and that you read through the guidance on [how to ask a new question](https://stackoverflow.com/help/how-to-ask)**,
before [asking a new one](https://stackoverflow.com/questions/ask?tags=kubernetes)!
### Slack ### Slack

View File

@ -39,7 +39,7 @@ You may want to set
(default to 1), (default to 1),
[`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds)
(default to 0) and (default to 0) and
[`.spec.maxSurge`](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) [`.spec.updateStrategy.rollingUpdate.maxSurge`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)
(a beta feature and defaults to 0) as well. (a beta feature and defaults to 0) as well.
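As a sketch of how those fields fit together, you could patch an existing DaemonSet; the DaemonSet name and values below are illustrative, and `maxSurge` is omitted because it is still a beta field:

```shell
kubectl -n kube-system patch daemonset fluentd-elasticsearch --type merge -p '
spec:
  minReadySeconds: 0
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
'
```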
### Creating a DaemonSet with `RollingUpdate` update strategy ### Creating a DaemonSet with `RollingUpdate` update strategy

View File

@ -173,7 +173,10 @@ automatically responds to changes in the number of replicas of the corresponding
## Create the PDB object ## Create the PDB object
You can create or update the PDB object with a command like `kubectl apply -f mypdb.yaml`. You can create or update the PDB object using kubectl.
```shell
kubectl apply -f mypdb.yaml
```
## Check the status of the PDB ## Check the status of the PDB

View File

@ -321,7 +321,7 @@ object:
metric: metric:
name: requests-per-second name: requests-per-second
describedObject: describedObject:
apiVersion: networking.k8s.io/v1beta1 apiVersion: networking.k8s.io/v1
kind: Ingress kind: Ingress
name: main-route name: main-route
target: target:
@ -367,7 +367,7 @@ spec:
metric: metric:
name: requests-per-second name: requests-per-second
describedObject: describedObject:
apiVersion: networking.k8s.io/v1beta1 apiVersion: networking.k8s.io/v1
kind: Ingress kind: Ingress
name: main-route name: main-route
target: target:
@ -390,7 +390,7 @@ status:
metric: metric:
name: requests-per-second name: requests-per-second
describedObject: describedObject:
apiVersion: networking.k8s.io/v1beta1 apiVersion: networking.k8s.io/v1
kind: Ingress kind: Ingress
name: main-route name: main-route
current: current:

View File

@ -57,7 +57,8 @@ Kubernetes implements horizontal pod autoscaling as a control loop that runs int
Once during each period, the controller manager queries the resource utilization against the Once during each period, the controller manager queries the resource utilization against the
metrics specified in each HorizontalPodAutoscaler definition. The controller manager metrics specified in each HorizontalPodAutoscaler definition. The controller manager
finds the target resource defined by the `scaleTargetRef`,
then selects the pods based on the target resource's `.spec.selector` labels, and obtains the metrics from either the resource metrics API (for per-pod resource metrics),
or the custom metrics API (for all other metrics).
* For per-pod resource metrics (like CPU), the controller fetches the metrics * For per-pod resource metrics (like CPU), the controller fetches the metrics

View File

@ -76,16 +76,11 @@ cat <<EOF | cfssl genkey - | cfssljson -bare server
"192.0.2.24", "192.0.2.24",
"10.0.34.2" "10.0.34.2"
], ],
"CN": "system:node:my-pod.my-namespace.pod.cluster.local", "CN": "my-pod.my-namespace.pod.cluster.local",
"key": { "key": {
"algo": "ecdsa", "algo": "ecdsa",
"size": 256 "size": 256
}, }
"names": [
{
"O": "system:nodes"
}
]
} }
EOF EOF
``` ```
@ -93,13 +88,13 @@ EOF
Where `192.0.2.24` is the service's cluster IP, Where `192.0.2.24` is the service's cluster IP,
`my-svc.my-namespace.svc.cluster.local` is the service's DNS name, `my-svc.my-namespace.svc.cluster.local` is the service's DNS name,
`10.0.34.2` is the pod's IP and `my-pod.my-namespace.pod.cluster.local` `10.0.34.2` is the pod's IP and `my-pod.my-namespace.pod.cluster.local`
is the pod's DNS name. You should see the following output: is the pod's DNS name. You should see the output similar to:
``` ```
2017/03/21 06:48:17 [INFO] generate received request 2022/02/01 11:45:32 [INFO] generate received request
2017/03/21 06:48:17 [INFO] received CSR 2022/02/01 11:45:32 [INFO] received CSR
2017/03/21 06:48:17 [INFO] generating key: ecdsa-256 2022/02/01 11:45:32 [INFO] generating key: ecdsa-256
2017/03/21 06:48:17 [INFO] encoded CSR 2022/02/01 11:45:32 [INFO] encoded CSR
``` ```
This command generates two files; it generates `server.csr` containing the PEM This command generates two files; it generates `server.csr` containing the PEM
@ -120,7 +115,7 @@ metadata:
name: my-svc.my-namespace name: my-svc.my-namespace
spec: spec:
request: $(cat server.csr | base64 | tr -d '\n') request: $(cat server.csr | base64 | tr -d '\n')
signerName: kubernetes.io/kubelet-serving signerName: example.com/serving
usages: usages:
- digital signature - digital signature
- key encipherment - key encipherment
@ -131,7 +126,7 @@ EOF
Notice that the `server.csr` file created in step 1 is base64 encoded Notice that the `server.csr` file created in step 1 is base64 encoded
and stashed in the `.spec.request` field. We are also requesting a and stashed in the `.spec.request` field. We are also requesting a
certificate with the "digital signature", "key encipherment", and "server certificate with the "digital signature", "key encipherment", and "server
auth" key usages, signed by the `kubernetes.io/kubelet-serving` signer. auth" key usages, signed by an example `example.com/serving` signer.
A specific `signerName` must be requested. A specific `signerName` must be requested.
View documentation for [supported signer names](/docs/reference/access-authn-authz/certificate-signing-requests/#signers) View documentation for [supported signer names](/docs/reference/access-authn-authz/certificate-signing-requests/#signers)
for more information. for more information.
@ -147,14 +142,16 @@ kubectl describe csr my-svc.my-namespace
Name: my-svc.my-namespace Name: my-svc.my-namespace
Labels: <none> Labels: <none>
Annotations: <none> Annotations: <none>
CreationTimestamp: Tue, 21 Mar 2017 07:03:51 -0700 CreationTimestamp: Tue, 01 Feb 2022 11:49:15 -0500
Requesting User: yourname@example.com Requesting User: yourname@example.com
Signer: example.com/serving
Status: Pending Status: Pending
Subject: Subject:
Common Name: my-svc.my-namespace.svc.cluster.local Common Name: my-pod.my-namespace.pod.cluster.local
Serial Number: Serial Number:
Subject Alternative Names: Subject Alternative Names:
DNS Names: my-svc.my-namespace.svc.cluster.local DNS Names: my-pod.my-namespace.pod.cluster.local
my-svc.my-namespace.svc.cluster.local
IP Addresses: 192.0.2.24 IP Addresses: 192.0.2.24
10.0.34.2 10.0.34.2
Events: <none> Events: <none>
@ -175,30 +172,136 @@ kubectl certificate approve my-svc.my-namespace
certificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved certificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved
``` ```
You should now see the following:
## Download the Certificate and Use It
Once the CSR is signed and approved you should see the following:
```shell ```shell
kubectl get csr kubectl get csr
``` ```
```none ```none
NAME AGE REQUESTOR CONDITION NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
my-svc.my-namespace 10m yourname@example.com Approved,Issued my-svc.my-namespace 10m example.com/serving yourname@example.com <none> Approved
``` ```
This means the certificate request has been approved and is waiting for the
requested signer to sign it.
## Sign the Certificate Signing Request
Next, you'll play the part of a certificate signer, issue the certificate, and upload it to the API.
A signer would typically watch the Certificate Signing Request API for objects with its `signerName`,
check that they have been approved, sign certificates for those requests,
and update the API object status with the issued certificate.
### Create a Certificate Authority
First, create a signing certificate by running the following:
```shell
cat <<EOF | cfssl gencert -initca - | cfssljson -bare ca
{
"CN": "My Example Signer",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
```
You should see output similar to:
```none
2022/02/01 11:50:39 [INFO] generating a new CA key and certificate from CSR
2022/02/01 11:50:39 [INFO] generate received request
2022/02/01 11:50:39 [INFO] received CSR
2022/02/01 11:50:39 [INFO] generating key: rsa-2048
2022/02/01 11:50:39 [INFO] encoded CSR
2022/02/01 11:50:39 [INFO] signed certificate with serial number 263983151013686720899716354349605500797834580472
```
This produces a certificate authority key file (`ca-key.pem`) and certificate (`ca.pem`).
### Issue a Certificate
{{< codenew file="tls/server-signing-config.json" >}}
Use a `server-signing-config.json` signing configuration and the certificate authority key file
and certificate to sign the certificate request:
```shell
kubectl get csr my-svc.my-namespace -o jsonpath='{.spec.request}' | \
base64 --decode | \
cfssl sign -ca ca.pem -ca-key ca-key.pem -config server-signing-config.json - | \
cfssljson -bare ca-signed-server
```
You should see output similar to:
```
2022/02/01 11:52:26 [INFO] signed certificate with serial number 576048928624926584381415936700914530534472870337
```
This produces a signed serving certificate file, `ca-signed-server.pem`.
### Upload the Signed Certificate
Finally, populate the signed certificate in the API object's status:
```shell
kubectl get csr my-svc.my-namespace -o json | \
jq '.status.certificate = "'$(base64 ca-signed-server.pem | tr -d '\n')'"' | \
kubectl replace --raw /apis/certificates.k8s.io/v1/certificatesigningrequests/my-svc.my-namespace/status -f -
```
{{< note >}}
This uses the command line tool [jq](https://stedolan.github.io/jq/) to populate the base64-encoded content in the `.status.certificate` field.
If you do not have `jq`, you can also save the JSON output to a file, populate this field manually, and upload the resulting file.
{{< /note >}}
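If `jq` is not available, one hedged alternative along the lines the note describes is:

```shell
kubectl get csr my-svc.my-namespace -o json > csr.json
# Edit csr.json and set .status.certificate to the output of:
#   base64 ca-signed-server.pem | tr -d '\n'
kubectl replace --raw /apis/certificates.k8s.io/v1/certificatesigningrequests/my-svc.my-namespace/status -f csr.json
```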
Once the CSR is approved and the signed certificate is uploaded you should see the following:
```shell
kubectl get csr
```
```none
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
my-svc.my-namespace 20m example.com/serving yourname@example.com <none> Approved,Issued
```
## Download the Certificate and Use It
Now, as the requesting user, you can download the issued certificate
and save it to a `server.crt` file by running the following:
```shell ```shell
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \ kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
| base64 --decode > server.crt | base64 --decode > server.crt
``` ```
Now you can use `server.crt` and `server-key.pem` as the keypair to start Now you can populate `server.crt` and `server-key.pem` in a secret and mount
your HTTPS server. it into a pod to use as the keypair to start your HTTPS server:
```shell
kubectl create secret tls server --cert server.crt --key server-key.pem
```
```none
secret/server created
```
Finally, you can populate `ca.pem` in a configmap and use it as the trust root
to verify the serving certificate:
```shell
kubectl create configmap example-serving-ca --from-file ca.crt=ca.pem
```
```none
configmap/example-serving-ca created
```
## Approving Certificate Signing Requests ## Approving Certificate Signing Requests

View File

@ -85,7 +85,7 @@ For example, to download version {{< param "fullversion" >}} on Linux, type:
chmod +x kubectl chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
# and then add ~/.local/bin/kubectl to $PATH # and then append (or prepend) ~/.local/bin to $PATH
``` ```
{{< /note >}} {{< /note >}}

View File

@ -59,7 +59,7 @@ The following methods exist for installing kubectl on Windows:
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
``` ```
1. Add the binary in to your `PATH`. 1. Append or prepend the kubectl binary folder to your `PATH` environment variable.
1. Test to ensure the version of `kubectl` is the same as downloaded: 1. Test to ensure the version of `kubectl` is the same as downloaded:
@ -172,7 +172,7 @@ Below are the procedures to set up autocompletion for PowerShell.
$($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256) $($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256)
``` ```
1. Add the binary in to your `PATH`. 1. Append or prepend the kubectl binary folder to your `PATH` environment variable.
1. Verify plugin is successfully installed 1. Verify plugin is successfully installed

View File

@ -174,6 +174,15 @@ of security defaults while preserving the functionality of the workload. It is
possible that the default profiles differ between container runtimes and their possible that the default profiles differ between container runtimes and their
release versions, for example when comparing those from CRI-O and containerd. release versions, for example when comparing those from CRI-O and containerd.
{{< note >}}
Enabling the feature will neither change the Kubernetes
`securityContext.seccompProfile` API field nor add the deprecated annotations of
the workload. This provides users the possibility to rollback anytime without
actually changing the workload configuration. Tools like
[`crictl inspect`](https://github.com/kubernetes-sigs/cri-tools) can be used to
verify which seccomp profile is being used by a container.
{{< /note >}}
Some workloads may require a lower amount of syscall restrictions than others. Some workloads may require a lower amount of syscall restrictions than others.
This means that they can fail during runtime even with the `RuntimeDefault` This means that they can fail during runtime even with the `RuntimeDefault`
profile. To mitigate such a failure, you can: profile. To mitigate such a failure, you can:
@ -203,6 +212,51 @@ kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4 apiVersion: kind.x-k8s.io/v1alpha4
featureGates: featureGates:
SeccompDefault: true SeccompDefault: true
nodes:
- role: control-plane
image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
kubeadmConfigPatches:
- |
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
seccomp-default: "true"
- role: worker
image: kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
kubeadmConfigPatches:
- |
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
feature-gates: SeccompDefault=true
seccomp-default: "true"
```
If the cluster is ready, then running a pod:
```shell
kubectl run --rm -it --restart=Never --image=alpine alpine -- sh
```
Should now have the default seccomp profile attached. This can be verified by
using `docker exec` to run `crictl inspect` for the container on the kind
worker:
```shell
docker exec -it kind-worker bash -c \
'crictl inspect $(crictl ps --name=alpine -q) | jq .info.runtimeSpec.linux.seccomp'
```
```json
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"],
"syscalls": [
{
"names": ["..."]
}
]
}
``` ```
## Create a Pod with a seccomp profile for syscall auditing ## Create a Pod with a seccomp profile for syscall auditing

View File

@ -30,7 +30,6 @@ roleRef:
name: system:volume-scheduler name: system:volume-scheduler
apiGroup: rbac.authorization.k8s.io apiGroup: rbac.authorization.k8s.io
--- ---
apiVersion: v1 apiVersion: v1
kind: ConfigMap kind: ConfigMap
metadata: metadata:
@ -44,7 +43,6 @@ data:
- schedulerName: my-scheduler - schedulerName: my-scheduler
leaderElection: leaderElection:
leaderElect: false leaderElect: false
--- ---
apiVersion: apps/v1 apiVersion: apps/v1
kind: Deployment kind: Deployment
@ -76,13 +74,15 @@ spec:
livenessProbe: livenessProbe:
httpGet: httpGet:
path: /healthz path: /healthz
port: 10251 port: 10259
scheme: HTTPS
initialDelaySeconds: 15 initialDelaySeconds: 15
name: kube-second-scheduler name: kube-second-scheduler
readinessProbe: readinessProbe:
httpGet: httpGet:
path: /healthz path: /healthz
port: 10251 port: 10259
scheme: HTTPS
resources: resources:
requests: requests:
cpu: '0.1' cpu: '0.1'

View File

@ -1,4 +1,4 @@
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1 apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: FlowSchema kind: FlowSchema
metadata: metadata:
name: health-for-strangers name: health-for-strangers

View File

@ -0,0 +1,15 @@
{
"signing": {
"default": {
"usages": [
"digital signature",
"key encipherment",
"server auth"
],
"expiry": "876000h",
"ca_constraint": {
"is_ca": false
}
}
}
}

View File

@ -78,9 +78,10 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date | | Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- | | --------------------- | -------------------- | ----------- |
| January 2022 | 2022-01-14 | 2022-01-19 |
| February 2022 | 2022-02-11 | 2022-02-16 | | February 2022 | 2022-02-11 | 2022-02-16 |
| March 2022 | 2022-03-11 | 2022-03-16 | | March 2022 | 2022-03-11 | 2022-03-16 |
| April 2022 | 2022-04-08 | 2022-04-13 |
| May 2022 | 2022-05-13 | 2022-05-18 |
## Detailed Release History for Active Branches ## Detailed Release History for Active Branches
@ -92,6 +93,7 @@ End of Life for **1.23** is **2023-02-28**.
| Patch Release | Cherry Pick Deadline | Target Date | Note | | Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------| |---------------|----------------------|-------------|------|
| 1.23.4 | 2022-02-11 | 2022-02-16 | |
| 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/u/2/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) | | 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/u/2/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) |
| 1.23.2 | 2022-01-14 | 2022-01-19 | | | 1.23.2 | 2022-01-14 | 2022-01-19 | |
| 1.23.1 | 2021-12-14 | 2021-12-16 | | | 1.23.1 | 2021-12-14 | 2021-12-16 | |
@ -104,6 +106,7 @@ End of Life for **1.22** is **2022-10-28**
| Patch Release | Cherry Pick Deadline | Target Date | Note | | Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------| |---------------|----------------------|-------------|------|
| 1.22.7 | 2022-02-11 | 2022-02-16 | |
| 1.22.6 | 2022-01-14 | 2022-01-19 | | | 1.22.6 | 2022-01-14 | 2022-01-19 | |
| 1.22.5 | 2021-12-10 | 2021-12-15 | | | 1.22.5 | 2021-12-10 | 2021-12-15 | |
| 1.22.4 | 2021-11-12 | 2021-11-17 | | | 1.22.4 | 2021-11-12 | 2021-11-17 | |
@ -119,6 +122,7 @@ End of Life for **1.21** is **2022-06-28**
| Patch Release | Cherry Pick Deadline | Target Date | Note | | Patch Release | Cherry Pick Deadline | Target Date | Note |
| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- | | ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- |
| 1.21.10 | 2022-02-11 | 2022-02-16 | |
| 1.21.9 | 2022-01-14 | 2022-01-19 | | | 1.21.9 | 2022-01-14 | 2022-01-19 | |
| 1.21.8 | 2021-12-10 | 2021-12-15 | | | 1.21.8 | 2021-12-10 | 2021-12-15 | |
| 1.21.7 | 2021-11-12 | 2021-11-17 | | | 1.21.7 | 2021-11-12 | 2021-11-17 | |
@ -137,6 +141,7 @@ End of Life for **1.20** is **2022-02-28**
| Patch Release | Cherry Pick Deadline | Target Date | Note | | Patch Release | Cherry Pick Deadline | Target Date | Note |
| ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- | | ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- |
| 1.20.16 | 2022-02-11 | 2022-02-16 | If there are critical or blocker patches to be released |
| 1.20.15 | 2022-01-14 | 2022-01-19 | | | 1.20.15 | 2022-01-14 | 2022-01-19 | |
| 1.20.14 | 2021-12-10 | 2021-12-15 | | | 1.20.14 | 2021-12-10 | 2021-12-15 | |
| 1.20.13 | 2021-11-12 | 2021-11-17 | | | 1.20.13 | 2021-11-12 | 2021-11-17 | |

View File

@ -169,9 +169,9 @@ of each minor (1.Y) and patch (1.Y.Z) release
GitHub team: [@kubernetes/build-admins](https://github.com/orgs/kubernetes/teams/build-admins) GitHub team: [@kubernetes/build-admins](https://github.com/orgs/kubernetes/teams/build-admins)
- Aaron Crickenberger ([@spiffxp](https://github.com/spiffxp)) - Aaron Crickenberger ([@spiffxp](https://github.com/spiffxp))
- Amit Watve ([@amwat](https://github.com/amwat))
- Benjamin Elder ([@BenTheElder](https://github.com/BenTheElder)) - Benjamin Elder ([@BenTheElder](https://github.com/BenTheElder))
- Grant McCloskey ([@MushuEE](https://github.com/MushuEE)) - Grant McCloskey ([@MushuEE](https://github.com/MushuEE))
- Juan Escobar ([@juanfescobar](https://github.com/juanfescobar))
## SIG Release Leads ## SIG Release Leads

View File

@ -110,11 +110,11 @@ CPU es siempre solicitada como una cantidad absoluta, nunca como una cantidad re
Los límites y peticiones de `memoria` son medidos en bytes. Puedes expresar la memoria como Los límites y peticiones de `memoria` son medidos en bytes. Puedes expresar la memoria como
un número entero o como un número decimal usando alguno de estos sufijos: un número entero o como un número decimal usando alguno de estos sufijos:
E, P, T, G, M, K. También puedes usar los equivalentes en potencia de dos: Ei, Pi, Ti, Gi, E, P, T, G, M, k, m (millis). También puedes usar los equivalentes en potencia de dos: Ei, Pi, Ti, Gi,
Mi, Ki. Por ejemplo, los siguientes valores representan lo mismo: Mi, Ki. Por ejemplo, los siguientes valores representan lo mismo:
```shell ```shell
128974848, 129e6, 129M, 123Mi 128974848, 129e6, 129M, 128974848000m, 123Mi
``` ```
Aquí un ejemplo. Aquí un ejemplo.

View File

@ -0,0 +1,102 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Aprovisionamiento Dinámico de volumen
content_type: concept
weight: 40
---
<!-- overview -->
El aprovisionamiento dinámico de volúmenes permite crear volúmenes de almacenamiento bajo demanda. Sin el aprovisionamiento dinámico, los administradores de clústeres tienen que realizar llamadas manualmente a su proveedor de almacenamiento o nube para crear nuevos volúmenes de almacenamiento y luego crear [objetos de `PersistentVolume`](/docs/concepts/storage/persistent-volumes/)
para representarlos en Kubernetes. La función de aprovisionamiento dinámico elimina la necesidad de que los administradores del clúster aprovisionen previamente el almacenamiento. En cambio, el aprovisionamiento ocurre automáticamente cuando los usuarios lo solicitan.
<!-- body -->
## Antecedentes
La implementación del aprovisionamiento dinámico de volúmenes se basa en el objeto API `StorageClass`
del grupo API `storage.k8s.io`. Un administrador de clúster puede definir tantos objetos
`StorageClass` como sea necesario, cada uno especificando un _volume plugin_ (aka
_provisioner_) que aprovisiona un volumen y el conjunto de parámetros para pasar a ese aprovisionador. Un administrador de clúster puede definir y exponer varios tipos de almacenamiento (del mismo o de diferentes sistemas de almacenamiento) dentro de un clúster, cada uno con un conjunto personalizado de parámetros. Este diseño también garantiza que los usuarios finales no tengan que preocuparse por la complejidad y los matices de cómo se aprovisiona el almacenamiento, pero que aún tengan la capacidad de seleccionar entre múltiples opciones de almacenamiento.
Puede encontrar más información sobre las clases de almacenamiento
[aquí](/docs/concepts/storage/storage-classes/).
## Habilitación del aprovisionamiento dinámico
Para habilitar el aprovisionamiento dinámico, un administrador de clúster debe crear previamente uno o más objetos StorageClass para los usuarios. Los objetos StorageClass definen qué aprovisionador se debe usar y qué parámetros se deben pasar a ese aprovisionador cuando se invoca el aprovisionamiento dinámico.
El nombre de un objeto StorageClass debe ser un
[nombre de subdominio de DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) válido.
El siguiente manifiesto crea una clase de almacenamiento llamada "slow" que aprovisiona discos persistentes estándar similares a discos.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
```
El siguiente manifiesto crea una clase de almacenamiento llamada "fast" que aprovisiona discos persistentes similares a SSD.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
```
## Usar Aprovisionamiento Dinámico
Los usuarios solicitan almacenamiento aprovisionado dinámicamente al incluir una clase de almacenamiento en su `PersistentVolumeClaim`. Antes de Kubernetes v1.6, esto se hacía a través de la anotación
`volume.beta.kubernetes.io/storage-class`. Sin embargo, esta anotación está obsoleta desde v1.9. Los usuarios ahora pueden y deben usar el campo
`storageClassName` del objeto `PersistentVolumeClaim`. El valor de este campo debe coincidir con el nombre de un `StorageClass` configurada por el administrador
(ver [documentación](#habilitación-del-aprovisionamiento-dinámico)).
Para seleccionar la clase de almacenamiento llamada "fast", por ejemplo, un usuario crearía el siguiente PersistentVolumeClaim:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: claim1
spec:
accessModes:
- ReadWriteOnce
storageClassName: fast
resources:
requests:
storage: 30Gi
```
Esta afirmación da como resultado que se aprovisione automáticamente un disco persistente similar a SSD. Cuando se elimina la petición, se destruye el volumen.
## Comportamiento Predeterminado
El aprovisionamiento dinámico se puede habilitar en un clúster de modo que todas las peticiones se aprovisionen dinámicamente si no se especifica una clase de almacenamiento. Un administrador de clúster puede habilitar este comportamiento al:
- Marcar un objeto `StorageClass` como _default_;
- Asegúrese de que el [controlador de admisión `DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) esté habilitado en el servidor de API.
Un administrador puede marcar un `StorageClass` específico como predeterminada agregando la anotación
`storageclass.kubernetes.io/is-default-class`.
Cuando existe un `StorageClass` predeterminado en un clúster y un usuario crea un
`PersistentVolumeClaim` con `storageClassName` sin especificar, el controlador de admisión
`DefaultStorageClass` agrega automáticamente el campo
`storageClassName` que apunta a la clase de almacenamiento predeterminada.
Tenga en cuenta que puede haber como máximo una clase de almacenamiento _default_ en un clúster; de lo contrario, no se podría crear un `PersistentVolumeClaim` sin `storageClassName` especificado explícitamente.
## Conocimiento de la Topología
En los clústeres [Multi-Zone](/docs/setup/multiple-zones), los Pods se pueden distribuir en zonas de una región. Los backends de almacenamiento de zona única deben aprovisionarse en las zonas donde se programan los Pods. Esto se puede lograr configurando el [Volume Binding
Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).

View File

@ -0,0 +1,74 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Capacidad de Almacenamiento
content_type: concept
weight: 45
---
<!-- overview -->
La capacidad de almacenamiento es limitada y puede variar según el nodo en el que un Pod se ejecuta: es posible que no todos los nodos puedan acceder al almacenamiento conectado a la red o que, para empezar, el almacenamiento sea local en un nodo.
{{< feature-state for_k8s_version="v1.21" state="beta" >}}
Esta página describe cómo Kubernetes realiza un seguimiento de la capacidad de almacenamiento y cómo el planificador usa esa información para programar Pods en nodos que tienen acceso a suficiente capacidad de almacenamiento para los volúmenes restantes que faltan. Sin el seguimiento de la capacidad de almacenamiento, el planificador puede elegir un nodo que no tenga suficiente capacidad para aprovisionar un volumen y se necesitarán varios reintentos de planificación.
El seguimiento de la capacidad de almacenamiento es compatible con los controladores de la {{< glossary_tooltip
text="Interfaz de Almacenamiento de Contenedores" term_id="csi" >}} (CSI) y
[necesita estar habilitado](#enabling-storage-capacity-tracking) al instalar un controlador CSI.
<!-- body -->
## API
Hay dos extensiones de API para esta función:
- Los objetos CSIStorageCapacity:
son producidos por un controlador CSI en el Namespace donde está instalado el controlador. Cada objeto contiene información de capacidad para una clase de almacenamiento y define qué nodos tienen acceso a ese almacenamiento.
- [El campo `CSIDriverSpec.StorageCapacity`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#csidriverspec-v1-storage-k8s-io):
cuando se establece en `true`, el [Planificador de Kubernetes](/docs/concepts/scheduling-eviction/kube-scheduler/) considerará la capacidad de almacenamiento para los volúmenes que usan el controlador CSI.
## Planificación
El planificador de Kubernetes utiliza la información sobre la capacidad de almacenamiento si:
- la Feature gate de `CSIStorageCapacity` es `true`,
- un Pod usa un volumen que aún no se ha creado,
- ese volumen usa un {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}} que hace referencia a un controlador CSI y usa el [modo de enlace de volumen](/docs/concepts/storage/storage-classes/#volume-binding-mode) `WaitForFirstConsumer`,
y
- el objeto `CSIDriver` para el controlador tiene `StorageCapacity` establecido en `true`.
En ese caso, el planificador sólo considera los nodos para el Pod que tienen suficiente almacenamiento disponible. Esta verificación es muy simplista y solo compara el tamaño del volumen con la capacidad indicada en los objetos `CSIStorageCapacity` con una topología que incluye el nodo.
Para los volúmenes con el modo de enlace de volumen `Immediate`, el controlador de almacenamiento decide dónde crear el volumen, independientemente de los pods que usarán el volumen.
Luego, el planificador programa los pods en los nodos donde el volumen está disponible después de que se haya creado.
Para los [volúmenes efímeros de CSI](/docs/concepts/storage/volumes/#csi),
la planificación siempre ocurre sin considerar la capacidad de almacenamiento. Esto se basa en la suposición de que este tipo de volumen solo lo utilizan controladores CSI especiales que son locales a un nodo y no necesitan allí recursos importantes.
## Replanificación
Cuando se selecciona un nodo para un Pod con volúmenes `WaitForFirstConsumer`, esa decisión sigue siendo tentativa. El siguiente paso es que se le pide al controlador de almacenamiento CSI que cree el volumen con una pista de que el volumen está disponible en el nodo seleccionado.
Debido a que Kubernetes pudo haber elegido un nodo basándose en información de capacidad desactualizada, es posible que el volumen no se pueda crear realmente. Luego, la selección de nodo se restablece y el planificador de Kubernetes intenta nuevamente encontrar un nodo para el Pod.
## Limitaciones
El seguimiento de la capacidad de almacenamiento aumenta las posibilidades de que la planificación funcione en el primer intento, pero no puede garantizarlo porque el planificador tiene que decidir basándose en información potencialmente desactualizada. Por lo general, el mismo mecanismo de reintento que para la planificación sin información de capacidad de almacenamiento es manejado por los errores de planificación.
Una situación en la que la planificación puede fallar de forma permanente es cuando un Pod usa varios volúmenes: es posible que un volumen ya se haya creado en un segmento de topología que luego no tenga suficiente capacidad para otro volumen. La intervención manual es necesaria para recuperarse de esto, por ejemplo, aumentando la capacidad o eliminando el volumen que ya se creó. Hay [trabajo adicional](https://github.com/kubernetes/enhancements/pull/1703) en curso para manejar esto automáticamente.
## Habilitación del seguimiento de la capacidad de almacenamiento
El seguimiento de la capacidad de almacenamiento es una función beta y está habilitada de forma predeterminada en un clúster de Kubernetes desde Kubernetes 1.21. Además de tener la función habilitada en el clúster, un controlador CSI también tiene que admitirlo. Consulte la documentación del controlador para obtener más detalles.
## {{% heading "whatsnext" %}}
- Para obtener más información sobre el diseño, consulte las
[Restricciones de Capacidad de Almacenamiento para la Planificación de Pods KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1472-storage-capacity-tracking/README.md).
- Para obtener más información sobre un mayor desarrollo de esta función, consulte [problema de seguimiento de mejoras #1472](https://github.com/kubernetes/enhancements/issues/1472).
- Aprender sobre [Planificador de Kubernetes](/docs/concepts/scheduling-eviction/kube-scheduler/)

View File

@ -0,0 +1,66 @@
---
reviewers:
- edithturn
- raelga
- electrocucaracha
title: Clonación de volumen CSI
content_type: concept
weight: 30
---
<!-- overview -->
Este documento describe el concepto para clonar volúmenes CSI existentes en Kubernetes. Se sugiere estar familiarizado con [Volúmenes](/docs/concepts/storage/volumes).
<!-- body -->
## Introducción
La función de clonación de volumen {{< glossary_tooltip text="CSI" term_id="csi" >}} agrega soporte para especificar {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s existentes en el campo `dataSource` para indicar que un usuario desea clonar un {{< glossary_tooltip term_id="volume" >}}.
Un Clon se define como un duplicado de un volumen de Kubernetes existente que se puede consumir como lo sería cualquier volumen estándar. La única diferencia es que al aprovisionar, en lugar de crear un "nuevo" Volumen vacío, el dispositivo de backend crea un duplicado exacto del Volumen especificado.
La implementación de la clonación, desde la perspectiva de la API de Kubernetes, agrega la capacidad de especificar un PVC existente como dataSource durante la creación de un nuevo PVC. El PVC de origen debe estar vinculado y disponible (no en uso).
Los usuarios deben tener en cuenta lo siguiente cuando utilicen esta función:
- El soporte de clonación (`VolumePVCDataSource`) sólo está disponible para controladores CSI.
- El soporte de clonación sólo está disponible para aprovisionadores dinámicos.
- Los controladores CSI pueden haber implementado o no la funcionalidad de clonación de volúmenes.
- Sólo puede clonar un PVC cuando existe en el mismo Namespace que el PVC de destino (el origen y el destino deben estar en el mismo Namespace).
- La clonación sólo se admite dentro de la misma Clase de Almacenamiento.
- El volumen de destino debe ser de la misma clase de almacenamiento que el origen
- Se puede utilizar la clase de almacenamiento predeterminada y se puede omitir storageClassName en la especificación
- La clonación sólo se puede realizar entre dos volúmenes que usan la misma configuración de VolumeMode (si solicita un volumen en modo de bloqueo, la fuente DEBE también ser en modo de bloqueo)
## Aprovisionamiento
Los clones se aprovisionan como cualquier otro PVC con la excepción de agregar un origen de datos que hace referencia a un PVC existente en el mismo Namespace.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: clone-of-pvc-1
namespace: myns
spec:
accessModes:
- ReadWriteOnce
storageClassName: cloning
resources:
requests:
storage: 5Gi
dataSource:
kind: PersistentVolumeClaim
name: pvc-1
```
{{< note >}}
Debe especificar un valor de capacidad para `spec.resources.requests.storage` y el valor que especifique debe ser igual o mayor que la capacidad del volumen de origen.
{{< /note >}}
The result is a new PVC with the name `clone-of-pvc-1` that has the exact same content as the specified source `pvc-1`.
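To confirm that provisioning succeeded, you can check that the clone reaches the `Bound` phase. This is a small sketch reusing the names from the manifest above:

```bash
# Inspect the cloned PVC; its STATUS column should eventually show Bound
kubectl get pvc clone-of-pvc-1 --namespace myns
```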
## Usage
Upon availability of the new PVC, the cloned PVC is consumed the same as any other PVC. It is also expected at this point that the newly created PVC is an independent object. It can be consumed, cloned, snapshotted, or deleted independently and without consideration for its original data source. This also implies that the source is not linked in any way to the newly created clone; it may also be modified or deleted without affecting the newly created clone.
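As a minimal sketch of consuming the clone (the Pod name and image below are illustrative and not part of the page above), you could mount it like any other PVC:

```bash
# Create a Pod that mounts the cloned PVC at /data
kubectl apply --namespace myns -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: clone-consumer   # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: cloned-data
  volumes:
  - name: cloned-data
    persistentVolumeClaim:
      claimName: clone-of-pvc-1
EOF
```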

View File

@ -0,0 +1,6 @@
---
title: "Herramientas incluidas"
description: "Fragmentos que se incluirán en las páginas principales de kubectl-installs-*."
headless: true
toc_hide: true
---

View File

@ -0,0 +1,244 @@
---
reviewers:
title: Install and Set Up kubectl on Linux
content_type: task
weight: 10
card:
name: tasks
weight: 20
title: Install kubectl on Linux
---
## {{% heading "prerequisites" %}}
You must use a kubectl version that is within one minor version difference of your cluster. For example, a v{{< skew latestVersion >}} client can communicate with v{{< skew prevMinorVersion >}}, v{{< skew latestVersion >}}, and v{{< skew nextMinorVersion >}} control planes.
Using the latest version of kubectl helps avoid unforeseen issues.
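Once kubectl is installed (see below), a quick way to compare your client version against the cluster is to print both. This is a sketch; the server information only appears when a cluster is reachable:

```bash
# Client version only
kubectl version --client
# Client and server versions (requires access to a cluster)
kubectl version
```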
## Install kubectl on Linux
The following methods exist for installing kubectl on Linux:
- [Install kubectl binary with curl on Linux](#install-kubectl-binary-with-curl-on-linux)
- [Install using native package management](#install-using-native-package-management)
- [Install using other package management](#install-using-other-package-management)
### Install kubectl binary with curl on Linux
1. Download the latest release with the command:
```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```
{{< note >}}
To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on Linux, type:
```bash
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
```
{{< /note >}}
1. Validate the binary (optional)
Download the kubectl checksum file:
```bash
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
```
Validate the kubectl binary against the checksum file:
```bash
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
```
If valid, the output is:
```console
kubectl: OK
```
If the check fails, `sha256sum` exits with nonzero status and prints output similar to:
```bash
kubectl: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
Download the same version of the binary and checksum file.
{{< /note >}}
1. Install kubectl
```bash
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
```
{{< note >}}
If you do not have root access on the target system, you can still install kubectl to the `~/.local/bin` directory:
```bash
chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
# and then add ~/.local/bin to your $PATH
```
{{< /note >}}
1. To ensure the version you installed is up to date, run:
```bash
kubectl version --client
```
### Install using native package management
{{< tabs name="kubectl_install" >}}
{{% tab name="Debian-based distributions" %}}
1. Update the `apt` package index and install the packages needed to use the Kubernetes `apt` repository:
```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
```
2. Download the Google Cloud public signing key:
```shell
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
```
3. Add the Kubernetes `apt` repository:
```shell
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
4. Update the `apt` package index with the new repository and install kubectl:
```shell
sudo apt-get update
sudo apt-get install -y kubectl
```
{{% /tab %}}
{{< tab name="Red Hat-based distributions" codelang="bash" >}}
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
{{< /tab >}}
{{< /tabs >}}
### Install using other package management
{{< tabs name="other_kubectl_install" >}}
{{% tab name="Snap" %}}
If you are on Ubuntu or another Linux distribution that supports the [snap](https://snapcraft.io/docs/core/install) package manager, kubectl is available as a [snap](https://snapcraft.io/) application.
```shell
snap install kubectl --classic
kubectl version --client
```
{{% /tab %}}
{{% tab name="Homebrew" %}}
If you are on Linux and using [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) as your package manager, kubectl is available for [installation](https://docs.brew.sh/Homebrew-on-Linux#install).
```shell
brew install kubectl
kubectl version --client
```
{{% /tab %}}
{{< /tabs >}}
## Verify kubectl configuration
{{< include "verify-kubectl.md" >}}
## Optional kubectl configurations and plugins
### Enable shell autocompletion
kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing when interacting with the tool.
Below are the procedures to set up autocompletion for Bash and Zsh.
{{< tabs name="kubectl_autocompletion" >}}
{{< tab name="Bash" include="optional-kubectl-configs-bash-linux.md" />}}
{{< tab name="Zsh" include="optional-kubectl-configs-zsh.md" />}}
{{< /tabs >}}
### Install the `kubectl convert` plugin
{{< include "kubectl-convert-overview.md" >}}
1. Download the latest release with the command:
```bash
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert
```
1. Validate the binary (optional)
Download the kubectl-convert checksum file:
```bash
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
```
Validate the kubectl-convert binary against the checksum file:
```bash
echo "$(<kubectl-convert.sha256) kubectl-convert" | sha256sum --check
```
If valid, the output is:
```console
kubectl-convert: OK
```
If the check fails, `sha256sum` exits with nonzero status and prints output similar to:
```bash
kubectl-convert: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
Download the same version of the binary and checksum file.
{{< /note >}}
1. Install kubectl-convert
```bash
sudo install -o root -g root -m 0755 kubectl-convert /usr/local/bin/kubectl-convert
```
1. Verify the plugin is successfully installed
```shell
kubectl convert --help
```
If you do not see an error, it means the plugin is successfully installed.
## {{% heading "whatsnext" %}}
{{< include "kubectl-whats-next.md" >}}

View File

@ -0,0 +1,248 @@
---
reviewers:
title: Install and Set Up kubectl on macOS
content_type: task
weight: 10
card:
name: tasks
weight: 20
title: Install kubectl on macOS
---
## {{% heading "prerequisites" %}}
You must use a kubectl version that is within one minor version difference of your cluster. For example, a v{{< skew latestVersion >}} client can communicate with v{{< skew prevMinorVersion >}}, v{{< skew latestVersion >}}, and v{{< skew nextMinorVersion >}} control planes.
Using the latest version of kubectl helps avoid unforeseen issues.
## Install kubectl on macOS
The following methods exist for installing kubectl on macOS:
- [Install kubectl binary with curl on macOS](#install-kubectl-binary-with-curl-on-macos)
- [Install with Homebrew on macOS](#install-with-homebrew-on-macos)
- [Install with Macports on macOS](#install-with-macports-on-macos)
### Install kubectl binary with curl on macOS
1. Download the latest release:
{{< tabs name="download_binary_macos" >}}
{{< tab name="Intel" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
{{< /tab >}}
{{< tab name="Apple Silicon" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"
{{< /tab >}}
{{< /tabs >}}
{{< note >}}
To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on Intel macOS, type:
```bash
curl -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl"
```
And for macOS on Apple Silicon, type:
```bash
curl -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/arm64/kubectl"
```
{{< /note >}}
1. Validate the binary (optional)
Download the kubectl checksum file:
{{< tabs name="download_checksum_macos" >}}
{{< tab name="Intel" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256"
{{< /tab >}}
{{< tab name="Apple Silicon" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256"
{{< /tab >}}
{{< /tabs >}}
Validate the kubectl binary against the checksum file:
```bash
echo "$(<kubectl.sha256) kubectl" | shasum -a 256 --check
```
If valid, the output is:
```console
kubectl: OK
```
If the check fails, `shasum` exits with nonzero status and prints output similar to:
```bash
kubectl: FAILED
shasum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
Download the same version of the binary and checksum file.
{{< /note >}}
1. Make the kubectl binary executable.
```bash
chmod +x ./kubectl
```
1. Move the kubectl binary to a file location on your system `PATH`.
```bash
sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl
```
{{< note >}}
Make sure `/usr/local/bin` is in your PATH environment variable.
{{< /note >}}
1. To ensure the version you installed is up to date, run:
```bash
kubectl version --client
```
### Install with Homebrew on macOS
If you are on macOS and using [Homebrew](https://brew.sh/) as your package manager, you can install kubectl with Homebrew.
1. Run the installation command:
```bash
brew install kubectl
```
or
```bash
brew install kubernetes-cli
```
1. To ensure the version you installed is up to date, run:
```bash
kubectl version --client
```
### Install with Macports on macOS
If you are on macOS and using [Macports](https://macports.org/) as your package manager, you can install kubectl with Macports.
1. Run the installation command:
```bash
sudo port selfupdate
sudo port install kubectl
```
1. To ensure the version you installed is up to date, run:
```bash
kubectl version --client
```
## Verify kubectl configuration
{{< include "verify-kubectl.md" >}}
## Optional kubectl configurations and plugins
### Enable shell autocompletion
kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing.
Below are the procedures to set up autocompletion for Bash and Zsh.
{{< tabs name="kubectl_autocompletion" >}}
{{< tab name="Bash" include="optional-kubectl-configs-bash-mac.md" />}}
{{< tab name="Zsh" include="optional-kubectl-configs-zsh.md" />}}
{{< /tabs >}}
### Install the `kubectl convert` plugin
{{< include "kubectl-convert-overview.md" >}}
1. Download the latest release with the command:
{{< tabs name="download_convert_binary_macos" >}}
{{< tab name="Intel" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert"
{{< /tab >}}
{{< tab name="Apple Silicon" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert"
{{< /tab >}}
{{< /tabs >}}
1. Validate the binary (optional)
Download the kubectl-convert checksum file:
{{< tabs name="download_convert_checksum_macos" >}}
{{< tab name="Intel" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl-convert.sha256"
{{< /tab >}}
{{< tab name="Apple Silicon" codelang="bash" >}}
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl-convert.sha256"
{{< /tab >}}
{{< /tabs >}}
Validate the kubectl-convert binary against the checksum file:
```bash
echo "$(<kubectl-convert.sha256) kubectl-convert" | shasum -a 256 --check
```
If valid, the output is:
```console
kubectl-convert: OK
```
If the check fails, `shasum` exits with nonzero status and prints output similar to:
```bash
kubectl-convert: FAILED
shasum: WARNING: 1 computed checksum did NOT match
```
{{< note >}}
Download the same version of the binary and checksum file.
{{< /note >}}
1. Make the kubectl-convert binary executable.
```bash
chmod +x ./kubectl-convert
```
1. Move the kubectl-convert binary to a file location on your system `PATH`.
```bash
sudo mv ./kubectl-convert /usr/local/bin/kubectl-convert
sudo chown root: /usr/local/bin/kubectl-convert
```
{{< note >}}
Make sure `/usr/local/bin` is in your PATH environment variable.
{{< /note >}}
1. Verify the plugin is successfully installed
```shell
kubectl convert --help
```
If you do not see an error, it means the plugin is successfully installed.
## {{% heading "whatsnext" %}}
{{< include "kubectl-whats-next.md" >}}

View File

@ -0,0 +1,190 @@
---
reviewers:
title: Install and Set Up kubectl on Windows
content_type: task
weight: 10
card:
name: tasks
weight: 20
title: Install kubectl on Windows
---
## {{% heading "prerequisites" %}}
You must use a kubectl version that is within one minor version difference of your cluster. For example, a v{{< skew latestVersion >}} client can communicate with v{{< skew prevMinorVersion >}}, v{{< skew latestVersion >}}, and v{{< skew nextMinorVersion >}} control planes.
Using the latest version of kubectl helps avoid unforeseen issues.
## Install kubectl on Windows
The following methods exist for installing kubectl on Windows:
- [Install kubectl binary with curl on Windows](#install-kubectl-binary-with-curl-on-windows)
- [Install on Windows using Chocolatey or Scoop](#install-on-windows-using-chocolatey-or-scoop)
### Install kubectl binary with curl on Windows
1. Download the [latest release {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
Or if you have `curl` installed, use this command:
```powershell
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
```
{{< note >}}
To find out the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
{{< /note >}}
1. Validate the binary (optional)
Download the kubectl checksum file:
```powershell
curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256
```
Validate the kubectl binary against the checksum file:
- Using the system console to manually compare `CertUtil`'s output to the downloaded checksum file:
```cmd
CertUtil -hashfile kubectl.exe SHA256
type kubectl.exe.sha256
```
- Using PowerShell to automate the verification with the `-eq` operator to get a `True` or `False` result:
```powershell
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
```
1. Add the binary to your `PATH`.
1. To ensure the version of `kubectl` is the same as the one downloaded, run:
```cmd
kubectl version --client
```
{{< note >}}
[Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/#kubernetes) adds its own version of `kubectl` to the `PATH`.
If you have installed Docker Desktop before, you may need to place your `PATH` entry before the one added by the Docker Desktop installer, or remove Docker Desktop's `kubectl`.
{{< /note >}}
### Install on Windows using Chocolatey or Scoop
1. To install kubectl on Windows, you can use either the [Chocolatey](https://chocolatey.org)
package manager or the [Scoop](https://scoop.sh) command-line installer.
{{< tabs name="kubectl_win_install" >}}
{{% tab name="choco" %}}
```powershell
choco install kubernetes-cli
```
{{% /tab %}}
{{% tab name="scoop" %}}
```powershell
scoop install kubectl
```
{{% /tab %}}
{{< /tabs >}}
1. To ensure the version you installed is up to date, run:
```powershell
kubectl version --client
```
1. Navigate to your home directory:
```powershell
# If you're using cmd.exe, run: cd %USERPROFILE%
cd ~
```
1. Create the `.kube` directory:
```powershell
mkdir .kube
```
1. Change to the `.kube` directory you just created:
```powershell
cd .kube
```
1. Configure kubectl to use a remote Kubernetes cluster:
```powershell
New-Item config -type file
```
{{< note >}}
Edit the config file with a text editor of your choice, such as Notepad.
{{< /note >}}
## Verify kubectl configuration
{{< include "verify-kubectl.md" >}}
## Optional kubectl configurations and plugins
### Enable shell autocompletion
kubectl provides autocompletion support for Bash and Zsh, which can save you a lot of typing.
Below are the procedures to set up autocompletion for Zsh, if you are running it on Windows.
{{< include "optional-kubectl-configs-zsh.md" >}}
### Install the `kubectl convert` plugin
{{< include "kubectl-convert-overview.md" >}}
1. Download the latest release with the command:
```powershell
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe
```
1. Validate the binary (optional)
Download the kubectl-convert checksum file:
```powershell
curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe.sha256
```
Validate the kubectl-convert binary against the checksum file:
- Using the system console to manually compare `CertUtil`'s output to the downloaded checksum file:
```cmd
CertUtil -hashfile kubectl-convert.exe SHA256
type kubectl-convert.exe.sha256
```
- Using PowerShell to automate the verification with the `-eq` operator
to get a `True` or `False` result:
```powershell
$($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256)
```
1. Add the binary to your `PATH`.
1. Verify the plugin is successfully installed
```shell
kubectl convert --help
```
If you do not see an error, it means the plugin is successfully installed.
## {{% heading "whatsnext" %}}
{{< include "kubectl-whats-next.md" >}}

View File

@ -0,0 +1,9 @@
---
title: "Descripción general de kubectl-convert"
description: >-
Un plugin de kubectl que le permite convertir manifiestos de una versión
headless: true
---
A plugin for the Kubernetes command-line tool `kubectl`, which allows you to convert manifests between different API versions. This can be particularly helpful to migrate manifests to a non-deprecated API version with a newer Kubernetes release.
For more information, visit [migrate to non-deprecated APIs](/docs/reference/using-api/deprecation-guide/#migrate-to-non-deprecated-apis)
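As a brief, illustrative sketch of how the plugin is typically invoked (the file name here is hypothetical):

```bash
# Rewrite the manifests in deployment.yaml to target the apps/v1 API version
kubectl convert -f deployment.yaml --output-version apps/v1
```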

View File

@ -0,0 +1,12 @@
---
title: "¿Que sigue?"
description: "¿Qué sigue después de instalar kubectl."
headless: true
---
* [Install Minikube](https://minikube.sigs.k8s.io/docs/start/)
* See the [getting started guides](/docs/setup/) for more about creating clusters.
* [Learn how to launch and expose your application.](/docs/tasks/access-application-cluster/service-access-application-cluster/)
* If you need access to a cluster you didn't create, see the
[Sharing Cluster Access document](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
* Read the [kubectl reference docs](/docs/reference/kubectl/kubectl/)

View File

@ -0,0 +1,54 @@
---
title: "Autocompletar bash en Linux"
description: "Alguna configuración opcional para la finalización automática de bash en Linux."
headless: true
---
### Introduction
The kubectl completion script for Bash can be generated with the command `kubectl completion bash`. Sourcing the completion script in your shell enables kubectl autocompletion.
However, the completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).
### Install bash-completion
bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.
The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you may have to manually source this file in your `~/.bashrc` file.
To find out, reload your shell and run `type _init_completion`. If the command succeeds, you're already set; otherwise, add the following to your `~/.bashrc` file:
```bash
source /usr/share/bash-completion/bash_completion
```
Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.
### Enable kubectl autocompletion
You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are two ways of doing this:
- Source the completion script in your `~/.bashrc` file:
```bash
echo 'source <(kubectl completion bash)' >>~/.bashrc
```
- Add the completion script to the `/etc/bash_completion.d` directory:
```bash
kubectl completion bash >/etc/bash_completion.d/kubectl
```
If you have an alias for kubectl, you can extend shell completion to work with that alias:
```bash
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
```
{{< note >}}
bash-completion sources all completion scripts in `/etc/bash_completion.d`.
{{< /note >}}
Both approaches are equivalent. After reloading your shell, kubectl autocompletion should be working.
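As a quick sanity check (a sketch that assumes the completion script was sourced as described above), you can confirm that the completion entry point installed by kubectl is loaded:

```bash
# Should print the function definition; "not found" means the script was not sourced
type __start_kubectl
```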

View File

@ -0,0 +1,88 @@
---
title: "Autocompletar bash en macOS"
description: "Alguna configuración opcional para la finalización automática de bash en macOS."
headless: true
---
### Introduction
The kubectl completion script for Bash can be generated with `kubectl completion bash`. Sourcing this script in your shell enables kubectl completion.
However, the kubectl completion script depends on [**bash-completion**](https://github.com/scop/bash-completion), which you thus have to install beforehand.
{{< warning>}}
There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The kubectl completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use kubectl completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
{{< /warning >}}
### Upgrade Bash
The instructions here assume you use Bash 4.1+. You can check your Bash's version by running:
```bash
echo $BASH_VERSION
```
If it is too old, you can install/upgrade it using Homebrew:
```bash
brew install bash
```
Reload your shell and verify that the desired version is being used:
```bash
echo $BASH_VERSION $SHELL
```
Homebrew usually installs it at `/usr/local/bin/bash`.
### Install bash-completion
{{< note >}}
As mentioned above, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case kubectl completion won't work).
{{< /note >}}
You can test if you have bash-completion v2 already installed with `type _init_completion`. If not, you can install it with Homebrew:
```bash
brew install bash-completion@2
```
As stated in the output of this command, add the following to your `~/.bash_profile` file:
```bash
export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
```
Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.
### Enable kubectl autocompletion
You now have to ensure that the kubectl completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:
- Source the completion script in your `~/.bash_profile` file:
```bash
echo 'source <(kubectl completion bash)' >>~/.bash_profile
```
- Add the completion script to the `/usr/local/etc/bash_completion.d` directory:
```bash
kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl
```
- If you have an alias for kubectl, you can extend shell completion to work with that alias:
```bash
echo 'alias k=kubectl' >>~/.bash_profile
echo 'complete -F __start_kubectl k' >>~/.bash_profile
```
- If you installed kubectl with Homebrew (as explained [here](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)), then the kubectl completion script should already be in `/usr/local/etc/bash_completion.d/kubectl`. In that case, you don't need to do anything.
{{< note >}}
The Homebrew installation of bash-completion v2 sources all the files in the `BASH_COMPLETION_COMPAT_DIR` directory, which is why the latter two methods work.
{{< /note >}}
In any case, after reloading your shell, kubectl completion should be working.

View File

@ -0,0 +1,29 @@
---
title: "Autocompletar zsh"
description: "Alguna configuración opcional para la finalización automática de zsh."
headless: true
---
The kubectl completion script for Zsh can be generated with the command `kubectl completion zsh`. Sourcing the completion script in your shell enables kubectl autocompletion.
To do so in all your shell sessions, add the following to your `~/.zshrc` file:
```zsh
source <(kubectl completion zsh)
```
If you have an alias for kubectl, you can extend shell completion to work with that alias:
```zsh
echo 'alias k=kubectl' >>~/.zshrc
echo 'compdef __start_kubectl k' >>~/.zshrc
```
After reloading your shell, kubectl autocompletion should be working.
If you get an error like `complete:13: command not found: compdef`,
then add the following to the beginning of your `~/.zshrc` file:
```zsh
autoload -Uz compinit
compinit
```
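Alternatively, a common setup is to install the completion script as a Zsh completion function instead of sourcing it on every startup. This is a sketch; it assumes the first directory in your `fpath` is writable:
```zsh
# Write the completion function where compinit can find it, then rebuild the completion index
kubectl completion zsh > "${fpath[1]}/_kubectl"
autoload -Uz compinit && compinit
```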

View File

@ -0,0 +1,31 @@
---
title: "verificar la instalación de kubectl"
description: "Cómo verificar kubectl."
headless: true
---
In order for kubectl to find and access a Kubernetes cluster, it needs a
[kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/), which is created automatically when you create a cluster using
[kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh)
or successfully deploy a Minikube cluster.
By default, kubectl configuration is located at `~/.kube/config`.
Check that kubectl is properly configured by getting the cluster state:
```shell
kubectl cluster-info
```
If you see a URL response, kubectl is correctly configured to access your cluster.
If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.
```
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```
For example, if you are intending to run a Kubernetes cluster on your laptop (locally), you will need a tool such as Minikube to be installed first, and then re-run the commands stated above.
If `kubectl cluster-info` returns the URL response but you can't access your cluster, to check whether it is configured properly, use:
```shell
kubectl cluster-info dump
```
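When troubleshooting, it can also help to confirm which kubeconfig context kubectl is actually using (a small sketch using standard kubectl subcommands):
```shell
# Show the context kubectl is currently using
kubectl config current-context
# Show only the configuration relevant to that context
kubectl config view --minify
```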

Some files were not shown because too many files have changed in this diff