diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index d5f2c4ce05..9fe6cea5f3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -35,3 +35,5 @@ Note that code issues should be filed against the main kubernetes repository, wh ### Submitting Documentation Pull Requests If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/home/contribute/create-pull-request/). + +For more information, see [contributing to Kubernetes docs](https://kubernetes.io/docs/contribute/). diff --git a/OWNERS b/OWNERS index 3dd8f337ba..18bcac1558 100644 --- a/OWNERS +++ b/OWNERS @@ -7,10 +7,10 @@ approvers: - sig-docs-en-owners # Defined in OWNERS_ALIASES emeritus_approvers: -# chenopis, you're welcome to return when you're ready to resume PR wrangling -# jaredbhatti, you're welcome to return when you're ready to resume PR wrangling -# stewart-yu, you're welcome to return when you're ready to resume PR wrangling +# - chenopis, commented out to disable PR assignments +# - jaredbhatti, commented out to disable PR assignments - stewart-yu +- zacharysarah labels: - sig/docs diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index c39698ba81..4eaa882a18 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -24,10 +24,10 @@ aliases: sig-docs-en-owners: # Admins for English content - bradtopol - celestehorgan + - irvifa - jimangel - kbarnard10 - kbhawkey - - makoscafee - onlydole - savitharaghunathan - sftim @@ -43,14 +43,12 @@ aliases: - jimangel - kbarnard10 - kbhawkey - - makoscafee - onlydole - rajeshdeshpande02 - sftim - steveperry-53 - tengqm - xiangpengzhao - - zacharysarah - zparnold sig-docs-es-owners: # Admins for Spanish content - raelga @@ -133,7 +131,6 @@ aliases: - ianychoi - seokho-son - ysyukr - - zacharysarah sig-docs-ko-reviews: # PR reviews for Korean content - ClaudiaJKang - gochist @@ -142,35 +139,36 @@ 
aliases: - ysyukr - pjhwa sig-docs-leads: # Website chairs and tech leads + - irvifa - jimangel - kbarnard10 - kbhawkey - onlydole - sftim - - zacharysarah sig-docs-zh-owners: # Admins for Chinese content - - chenopis + # chenopis - chenrui333 - - dchen1107 - - haibinxie - - hanjiayao - - lichuqiang - - SataQiu - - tengqm - - xiangpengzhao - - xichengliudui - - zacharysarah - - zhangxiaoyu-zidif - sig-docs-zh-reviews: # PR reviews for Chinese content - - chenrui333 - - idealhack + # dchen1107 + # haibinxie + # hanjiayao + # lichuqiang - SataQiu - tanjunchen - tengqm - xiangpengzhao - xichengliudui - - zhangxiaoyu-zidif + # zhangxiaoyu-zidif + sig-docs-zh-reviews: # PR reviews for Chinese content + - chenrui333 + - howieyuen + - idealhack - pigletfly + - SataQiu + - tanjunchen + - tengqm + - xiangpengzhao + - xichengliudui + # zhangxiaoyu-zidif sig-docs-pt-owners: # Admins for Portuguese content - femrtnz - jcjesus diff --git a/README-de.md b/README-de.md index bf647d828f..9efc1fb597 100644 --- a/README-de.md +++ b/README-de.md @@ -40,13 +40,13 @@ Um die Kubernetes-Website lokal laufen zu lassen, empfiehlt es sich, ein speziel Wenn Sie Docker [installiert](https://www.docker.com/get-started) haben, erstellen Sie das Docker-Image `kubernetes-hugo` lokal: ```bash -make docker-image +make container-image ``` Nachdem das Image erstellt wurde, können Sie die Site lokal ausführen: ```bash -make docker-serve +make container-serve ``` Öffnen Sie Ihren Browser unter http://localhost:1313, um die Site anzuzeigen. Wenn Sie Änderungen an den Quelldateien vornehmen, aktualisiert Hugo die Site und erzwingt eine Browseraktualisierung. 
diff --git a/README-es.md b/README-es.md index ba2f13a80f..cc4a6cefa0 100644 --- a/README-es.md +++ b/README-es.md @@ -33,13 +33,13 @@ El método recomendado para levantar una copia local del sitio web kubernetes.io Una vez tenga Docker [configurado en su máquina](https://www.docker.com/get-started), puede construir la imagen de Docker `kubernetes-hugo` localmente ejecutando el siguiente comando en la raíz del repositorio: ```bash -make docker-image +make container-image ``` Una vez tenga la imagen construida, puede levantar el sitio web ejecutando: ```bash -make docker-serve +make container-serve ``` Abra su navegador y visite http://localhost:1313 para acceder a su copia local del sitio. A medida que vaya haciendo cambios en el código fuente, Hugo irá actualizando la página y forzará la actualización en el navegador. diff --git a/README-fr.md b/README-fr.md index b493ea60f0..c5cf5de786 100644 --- a/README-fr.md +++ b/README-fr.md @@ -16,13 +16,13 @@ Faites tous les changements que vous voulez dans votre fork, et quand vous êtes Une fois votre pull request créée, un examinateur de Kubernetes se chargera de vous fournir une revue claire et exploitable. En tant que propriétaire de la pull request, **il est de votre responsabilité de modifier votre pull request pour tenir compte des commentaires qui vous ont été fournis par l'examinateur de Kubernetes.** Notez également que vous pourriez vous retrouver avec plus d'un examinateur de Kubernetes pour vous fournir des commentaires ou vous pourriez finir par recevoir des commentaires d'un autre examinateur que celui qui vous a été initialement affecté pour vous fournir ces commentaires. -De plus, dans certains cas, l'un de vos examinateur peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin. 
+De plus, dans certains cas, l'un de vos examinateurs peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin. Les examinateurs feront de leur mieux pour fournir une revue rapidement, mais le temps de réponse peut varier selon les circonstances. Pour plus d'informations sur la contribution à la documentation Kubernetes, voir : * [Commencez à contribuer](https://kubernetes.io/docs/contribute/start/) -* [Apperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally) +* [Aperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally) * [Utilisation des modèles de page](https://kubernetes.io/docs/contribute/style/page-content-types/) * [Documentation Style Guide](http://kubernetes.io/docs/contribute/style/style-guide/) * [Traduction de la documentation Kubernetes](https://kubernetes.io/docs/contribute/localization/) @@ -38,13 +38,13 @@ La façon recommandée d'exécuter le site web Kubernetes localement est d'utili Si vous avez Docker [up and running](https://www.docker.com/get-started), construisez l'image Docker `kubernetes-hugo' localement: ```bash -make docker-image +make container-image ``` Une fois l'image construite, vous pouvez exécuter le site localement : ```bash -make docker-serve +make container-serve ``` Ouvrez votre navigateur à l'adresse: http://localhost:1313 pour voir le site. 
diff --git a/README-hi.md b/README-hi.md index e572cfc738..649ece0e55 100644 --- a/README-hi.md +++ b/README-hi.md @@ -41,13 +41,13 @@ यदि आप [डॉकर](https://www.docker.com/get-started) चला रहे हैं, तो स्थानीय रूप से `कुबेरनेट्स-ह्यूगो` Docker image बनाएँ: ```bash -make docker-image +make container-image ``` एक बार image बन जाने के बाद, आप साइट को स्थानीय रूप से चला सकते हैं: ```bash -make docker-serve +make container-serve ``` साइट देखने के लिए अपने browser को `http://localhost:1313` पर खोलें। जैसा कि आप source फ़ाइलों में परिवर्तन करते हैं, Hugo साइट को अपडेट करता है और browser को refresh करने पर मजबूर करता है। diff --git a/README-id.md b/README-id.md index 1d6b830f2e..5685e72ab9 100644 --- a/README-id.md +++ b/README-id.md @@ -30,13 +30,13 @@ Petunjuk yang disarankan untuk menjalankan Dokumentasi Kubernetes pada mesin lok Jika kamu sudah memiliki **Docker** [yang sudah dapat digunakan](https://www.docker.com/get-started), kamu dapat melakukan **build** `kubernetes-hugo` **Docker image** secara lokal: ```bash -make docker-image +make container-image ``` Setelah **image** berhasil di-**build**, kamu dapat menjalankan website tersebut pada mesin lokal-mu: ```bash -make docker-serve +make container-serve ``` Buka **browser** kamu ke http://localhost:1313 untuk melihat laman dokumentasi. Selama kamu melakukan penambahan konten, **Hugo** akan secara otomatis melakukan perubahan terhadap laman dokumentasi apabila **browser** melakukan proses **refresh**. 
diff --git a/README-it.md b/README-it.md index 5530a673e5..7002aef9ee 100644 --- a/README-it.md +++ b/README-it.md @@ -30,13 +30,13 @@ Il modo consigliato per eseguire localmente il sito Web Kubernetes prevede l'uti Se hai Docker [attivo e funzionante](https://www.docker.com/get-started), crea l'immagine Docker `kubernetes-hugo` localmente: ```bash -make docker-image +make container-image ``` Dopo aver creato l'immagine, è possibile eseguire il sito Web localmente: ```bash -make docker-serve +make container-serve ``` Apri il tuo browser su http://localhost:1313 per visualizzare il sito Web. Mentre modifichi i file sorgenti, Hugo aggiorna automaticamente il sito Web e forza un aggiornamento della pagina visualizzata nel browser. diff --git a/README-ko.md b/README-ko.md index f0b13168e3..f5b549439a 100644 --- a/README-ko.md +++ b/README-ko.md @@ -41,13 +41,13 @@ 도커 [동작 및 실행](https://www.docker.com/get-started) 환경이 있는 경우, 로컬에서 `kubernetes-hugo` 도커 이미지를 빌드 합니다: ```bash -make docker-image +make container-image ``` 해당 이미지가 빌드 된 이후, 사이트를 로컬에서 실행할 수 있습니다: ```bash -make docker-serve +make container-serve ``` 브라우저에서 http://localhost:1313 를 열어 사이트를 살펴봅니다. 소스 파일에 변경 사항이 있을 때, Hugo는 사이트를 업데이트하고 브라우저를 강제로 새로고침합니다. diff --git a/README-pl.md b/README-pl.md index f8bffdb0b2..166bc5ef4e 100644 --- a/README-pl.md +++ b/README-pl.md @@ -49,13 +49,13 @@ choco install make Jeśli [zainstalowałeś i uruchomiłeś](https://www.docker.com/get-started) już Dockera, zbuduj obraz `kubernetes-hugo` lokalnie: ```bash -make docker-image +make container-image ``` Po zbudowaniu obrazu, możesz uruchomić serwis lokalnie: ```bash -make docker-serve +make container-serve ``` Aby obejrzeć zawartość serwisu otwórz w przeglądarce adres http://localhost:1313. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce. 
diff --git a/README-pt.md b/README-pt.md index 8ce121a792..3154b77bab 100644 --- a/README-pt.md +++ b/README-pt.md @@ -35,13 +35,13 @@ A maneira recomendada de executar o site do Kubernetes localmente é executar um Se você tiver o Docker [em funcionamento](https://www.docker.com/get-started), crie a imagem do Docker do `kubernetes-hugo` localmente: ```bash -make docker-image +make container-image ``` Depois que a imagem foi criada, você pode executar o site localmente: ```bash -make docker-serve +make container-serve ``` Abra seu navegador para http://localhost:1313 para visualizar o site. Conforme você faz alterações nos arquivos de origem, Hugo atualiza o site e força a atualização do navegador. diff --git a/README-ru.md b/README-ru.md index bb290654a8..d999e1cc88 100644 --- a/README-ru.md +++ b/README-ru.md @@ -38,8 +38,8 @@ hugo server --buildFuture Узнать подробнее о том, как поучаствовать в документации Kubernetes, вы можете по ссылкам ниже: * [Начните вносить свой вклад](https://kubernetes.io/docs/contribute/) -* [Использование шаблонов страниц](http://kubernetes.io/docs/contribute/style/page-templates/) -* [Руководство по оформлению документации](http://kubernetes.io/docs/contribute/style/style-guide/) +* [Использование шаблонов страниц](https://kubernetes.io/docs/contribute/style/page-content-types/) +* [Руководство по оформлению документации](https://kubernetes.io/docs/contribute/style/style-guide/) * [Руководство по локализации Kubernetes](https://kubernetes.io/docs/contribute/localization/) ## Файл `README.md` на других языках diff --git a/README-vi.md b/README-vi.md index b06b6df368..3a1ee99195 100644 --- a/README-vi.md +++ b/README-vi.md @@ -31,13 +31,13 @@ Cách được đề xuất để chạy trang web Kubernetes cục bộ là dù Nếu bạn có Docker đang [up và running](https://www.docker.com/get-started), build `kubernetes-hugo` Docker image cục bộ: ```bash -make docker-image +make container-image ``` Khi image đã được built, bạn có thể chạy website cục bộ: ```bash 
-make docker-serve +make container-serve ``` Mở trình duyệt và đến địa chỉ http://localhost:1313 để xem website. Khi bạn thay đổi các file nguồn, Hugo cập nhật website và buộc làm mới trình duyệt. diff --git a/README.md b/README.md index 9ff0c14e53..dce4cb91a6 100644 --- a/README.md +++ b/README.md @@ -101,7 +101,7 @@ Learn more about SIG Docs Kubernetes community and meetings on the [community pa You can also reach the maintainers of this project at: -- [Slack](https://kubernetes.slack.com/messages/sig-docs) +- [Slack](https://kubernetes.slack.com/messages/sig-docs) [Get an invite for this Slack](https://slack.k8s.io/) - [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) # Contributing to the docs diff --git a/SECURITY_CONTACTS b/SECURITY_CONTACTS index c87673847c..5b0cc85b45 100644 --- a/SECURITY_CONTACTS +++ b/SECURITY_CONTACTS @@ -10,6 +10,7 @@ # DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE # INSTRUCTIONS AT https://kubernetes.io/security/ +irvifa jimangel kbarnard10 -zacharysarah +sftim \ No newline at end of file diff --git a/assets/js/search.js b/assets/js/search.js index 3d6e9d9447..40e1797d5a 100644 --- a/assets/js/search.js +++ b/assets/js/search.js @@ -17,14 +17,22 @@ limitations under the License. 
var Search = { init: function () { $(document).ready(function () { + // Fill the search input form with the current search keywords + const searchKeywords = new URLSearchParams(location.search).get('q'); + if (searchKeywords !== null && searchKeywords !== '') { + const searchInput = document.querySelector('.td-search-input'); + searchInput.focus(); + searchInput.value = searchKeywords; + } + + // Set a keydown event $(document).on("keypress", ".td-search-input", function (e) { if (e.keyCode !== 13) { return; } var query = $(this).val(); - var searchPage = "{{ "docs/search/" | absURL }}?q=" + query; - document.location = searchPage; + document.location = "{{ "search/" | absURL }}?q=" + query; return false; }); diff --git a/assets/scss/_base.scss b/assets/scss/_base.scss index 4b9125bcd9..087f076e19 100644 --- a/assets/scss/_base.scss +++ b/assets/scss/_base.scss @@ -42,6 +42,10 @@ $video-section-height: 200px; body { background-color: white; + + a { + color: $blue; + } } section { @@ -71,6 +75,7 @@ footer { background-color: $blue; text-decoration: none; font-size: 1rem; + border: 0px; } #cellophane { @@ -336,7 +341,6 @@ dd { width: 100%; height: 45px; line-height: 45px; - font-family: "Roboto", sans-serif; font-size: 20px; color: $blue; } @@ -612,7 +616,6 @@ section#cncf { padding-top: 30px; padding-bottom: 80px; background-size: auto; - // font-family: "Roboto Mono", monospace !important; font-size: 24px; // font-weight: bold; diff --git a/assets/scss/_custom.scss b/assets/scss/_custom.scss index 563d87272f..f511e05e3b 100644 --- a/assets/scss/_custom.scss +++ b/assets/scss/_custom.scss @@ -20,6 +20,15 @@ $announcement-size-adjustment: 8px; padding-top: 2rem !important; } } + + .ui-widget { + font-family: inherit; + font-size: inherit; + } + + .ui-widget-content a { + color: $blue; + } } section { @@ -44,6 +53,23 @@ section { } } +body.td-404 main .error-details { + max-width: 1100px; + margin-left: auto; + margin-right: auto; + margin-top: 4em; + margin-bottom: 
0; +} + +/* Global - Mermaid.js diagrams */ + +.mermaid { + overflow-x: auto; + max-width: 80%; + border: 1px solid rgb(222, 226, 230); + border-radius: 5px; +} + /* HEADER */ .td-navbar { @@ -268,22 +294,34 @@ main { // blockquotes and callouts -blockquote { - padding: 0.4rem 0.4rem 0.4rem 1rem !important; -} +.td-content, body { + blockquote.callout { + padding: 0.4rem 0.4rem 0.4rem 1rem; + border: 1px solid #eee; + border-left-width: 0.5em; + background: #fff; + color: #000; + margin-top: 0.5em; + margin-bottom: 0.5em; + } + blockquote.callout { + border-radius: calc(1em/3); + } + .callout.caution { + border-left-color: #f0ad4e; + } -// callouts are contained in static CSS as well. these require override. + .callout.note { + border-left-color: #428bca; + } -.caution { - border-left-color: #f0ad4e !important; -} + .callout.warning { + border-left-color: #d9534f; + } -.note { - border-left-color: #428bca !important; -} - -.warning { - border-left-color: #d9534f !important; + h1:first-of-type + blockquote.callout { + margin-top: 1.5em; + } } .deprecation-warning { @@ -393,6 +431,7 @@ body.cid-community > #deprecation-warning > .deprecation-warning > * { } color: $blue; + margin: 1rem; } } } @@ -512,3 +551,4 @@ body.td-documentation { } } } + diff --git a/assets/scss/_variables_project.scss b/assets/scss/_variables_project.scss index da42c07367..9030d61dc9 100644 --- a/assets/scss/_variables_project.scss +++ b/assets/scss/_variables_project.scss @@ -11,3 +11,5 @@ Add styles or override variables from the theme here. */ @import "base"; @import "tablet"; @import "desktop"; + +$primary: #3371e3; \ No newline at end of file diff --git a/config.toml b/config.toml index 0915eeb4c3..f5425257d9 100644 --- a/config.toml +++ b/config.toml @@ -112,6 +112,8 @@ copyright_linux = "Copyright © 2020 The Linux Foundation ®." 
version_menu = "Versions" time_format_blog = "Monday, January 02, 2006" +time_format_default = "January 02, 2006 at 3:04 PM PST" + description = "Production-Grade Container Orchestration" showedit = true @@ -124,9 +126,13 @@ docsbranch = "master" deprecated = false currentUrl = "https://kubernetes.io/docs/home/" nextUrl = "https://kubernetes-io-vnext-staging.netlify.com/" -githubWebsiteRepo = "github.com/kubernetes/website" + +# See codenew shortcode githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website" +# GitHub repository link for editing a page and opening issues. +github_repo = "https://github.com/kubernetes/website" + # param for displaying an announcement block on every page. # See /i18n/en.toml for message text and title. announcement = true diff --git a/content/de/docs/reference/kubectl/cheatsheet.md b/content/de/docs/reference/kubectl/cheatsheet.md index 15b8d0bda3..bce7eeefb3 100644 --- a/content/de/docs/reference/kubectl/cheatsheet.md +++ b/content/de/docs/reference/kubectl/cheatsheet.md @@ -54,7 +54,8 @@ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view # Zeigen Sie das Passwort für den e2e-Benutzer an kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' -kubectl config view -o jsonpath='{.users[].name}' # eine Liste der Benutzer erhalten +kubectl config view -o jsonpath='{.users[].name}' # den ersten Benutzer anzeigen +kubectl config view -o jsonpath='{.users[*].name}' # eine Liste der Benutzer erhalten kubectl config current-context # den aktuellen Kontext anzeigen kubectl config use-context my-cluster-name # Setzen Sie den Standardkontext auf my-cluster-name diff --git a/content/de/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/de/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html index a8e40fad5e..7a5fe0ce4f 100644 --- a/content/de/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html +++ 
b/content/de/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html @@ -21,7 +21,7 @@ weight: 20
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
-
+
diff --git a/content/de/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/de/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html index f6c836c6c4..8c74aafd78 100644 --- a/content/de/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html +++ b/content/de/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -23,7 +23,7 @@ weight: 20 Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
-
+
diff --git a/content/de/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/de/docs/tutorials/kubernetes-basics/explore/explore-interactive.html index 9d4026933e..d3fb05eff5 100644 --- a/content/de/docs/tutorials/kubernetes-basics/explore/explore-interactive.html +++ b/content/de/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -24,7 +24,7 @@ weight: 20 Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
-
+
diff --git a/content/de/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/de/docs/tutorials/kubernetes-basics/expose/expose-interactive.html index 2c6ebd5ecd..ab5b880397 100644 --- a/content/de/docs/tutorials/kubernetes-basics/expose/expose-interactive.html +++ b/content/de/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -21,7 +21,7 @@ weight: 20
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
-
+
diff --git a/content/de/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/de/docs/tutorials/kubernetes-basics/scale/scale-interactive.html index 1ea0c1a70d..648623339f 100644 --- a/content/de/docs/tutorials/kubernetes-basics/scale/scale-interactive.html +++ b/content/de/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -21,7 +21,7 @@ weight: 20
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
-
+
diff --git a/content/de/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/de/docs/tutorials/kubernetes-basics/update/update-interactive.html index 8971b0dd17..448ddc81b9 100644 --- a/content/de/docs/tutorials/kubernetes-basics/update/update-interactive.html +++ b/content/de/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -21,7 +21,7 @@ weight: 20
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
-
+
diff --git a/content/en/blog/_posts/2018-06-26-kubernetes-1-11-release-announcement.md b/content/en/blog/_posts/2018-06-26-kubernetes-1-11-release-announcement.md index 4614421d1b..a5d9f8b4d0 100644 --- a/content/en/blog/_posts/2018-06-26-kubernetes-1-11-release-announcement.md +++ b/content/en/blog/_posts/2018-06-26-kubernetes-1-11-release-announcement.md @@ -45,7 +45,7 @@ Support for [dynamic maximum volume count](https://github.com/kubernetes/feature The StorageObjectInUseProtection feature is now stable and prevents the removal of both [Persistent Volumes](https://github.com/kubernetes/features/issues/499) that are bound to a Persistent Volume Claim, and [Persistent Volume Claims](https://github.com/kubernetes/features/issues/498) that are being used by a pod. This safeguard will help prevent issues from deleting a PV or a PVC that is currently tied to an active pod. -Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#111-release-notes). +Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/release-1.11/CHANGELOG-1.11.md#111-release-notes). ## Availability @@ -88,7 +88,7 @@ Is Kubernetes helping your team? Share your story with the community. * The CNCF recently expanded its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual's ability to design, build, configure, and expose cloud native applications for Kubernetes. 
More information can be found [here](https://www.cncf.io/blog/2018/03/16/cncf-announces-ckad-exam/). * The CNCF recently added a new partner category, Kubernetes Training Partners (KTP). KTPs are a tier of vetted training providers who have deep experience in cloud native technology training. View partners and learn more [here](https://www.cncf.io/certification/training/). * CNCF also offers [online training](https://www.cncf.io/certification/training/) that teaches the skills needed to create and configure a real-world Kubernetes cluster. -* Kubernetes documentation now features [user journeys](https://k8s.io/docs/home/): specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers. +* Kubernetes documentation now features [user journeys](https://k8s.io/docs/home/): specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers. ## KubeCon diff --git a/content/en/blog/_posts/2020-08-21-Moving-Forward-From-Beta/index.md b/content/en/blog/_posts/2020-08-21-Moving-Forward-From-Beta/index.md index 3be4481954..1d82e67fb2 100644 --- a/content/en/blog/_posts/2020-08-21-Moving-Forward-From-Beta/index.md +++ b/content/en/blog/_posts/2020-08-21-Moving-Forward-From-Beta/index.md @@ -61,7 +61,7 @@ mind. ## Avoiding permanent beta For Kubernetes REST APIs, when a new feature's API reaches beta, that starts a countdown. -The beta-quality API now has **nine calendar months** to either: +The beta-quality API now has **three releases** (about nine calendar months) to either: - reach GA, and deprecate the beta, or - have a new beta version (_and deprecate the previous beta_). 
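That countdown can be made concrete with a small sketch (the three-release rule is from the paragraph above; the version numbers are illustrative):

```python
def beta_deprecation_release(introduced_in_minor, progressed):
    """A beta API version introduced in release 1.<introduced_in_minor> has
    three releases (about nine calendar months) to reach GA or ship a newer
    beta. If it has not progressed, the next release deprecates it."""
    if progressed:
        return None  # reached GA or shipped a newer beta: no automatic deprecation
    return introduced_in_minor + 3

# A beta API introduced in 1.19 that never progresses is deprecated in 1.22.
print(beta_deprecation_release(19, progressed=False))  # prints 22
```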
@@ -69,9 +69,10 @@ To be clear, at this point **only REST APIs are affected**. For example, _APILis a beta feature but isn't itself a REST API. Right now there are no plans to automatically deprecate _APIListChunking_ nor any other features that aren't REST APIs. -If a REST API reaches the end of that 9 month countdown, then the next Kubernetes release -will deprecate that API version. There's no option for the REST API to stay at the same -beta version beyond the first Kubernetes release to come out after the 9 month window. +If a beta API has not graduated to GA after three Kubernetes releases, then the +next Kubernetes release will deprecate that API version. There's no option for +the REST API to stay at the same beta version beyond the first Kubernetes +release to come out after the release window. ### What this means for you diff --git a/content/en/blog/_posts/2020-08-31-increase-kubernetes-support-one-year.md b/content/en/blog/_posts/2020-08-31-increase-kubernetes-support-one-year.md new file mode 100644 index 0000000000..91bc4660a5 --- /dev/null +++ b/content/en/blog/_posts/2020-08-31-increase-kubernetes-support-one-year.md @@ -0,0 +1,30 @@ +--- +layout: blog +title: 'Increasing the Kubernetes Support Window to One Year' +date: 2020-08-31 +slug: kubernetes-1-19-feature-one-year-support +--- + +**Authors:** Tim Pepper (VMware), Nick Young (VMware) + +Starting with Kubernetes 1.19, the support window for Kubernetes versions [will increase from 9 months to one year](https://github.com/kubernetes/enhancements/issues/1498). The longer support window is intended to allow organizations to perform major upgrades at a time of the year that works the best for them. + +This is a big change. For many years, the Kubernetes project has delivered a new minor release (e.g.: 1.13 or 1.14) every 3 months. The project provides bugfix support via patch releases (e.g.: 1.13.Y) for three parallel branches of the codebase. 
Combined, this led to each minor release (e.g.: 1.13) having a patch release stream of support for approximately 9 months. In the end, a cluster operator had to upgrade at least every 9 months to remain supported. + +A survey conducted in early 2019 by the WG LTS showed that a significant subset of Kubernetes end-users fail to upgrade within the 9-month support period. + +![Versions in Production](/images/blog/2020-08-31-increase-kubernetes-support-one-year/versions-in-production-text-2.png) + +This, and other responses from the survey, suggest that a considerable portion of our community would be better able to manage their deployments on supported versions if the patch support period were extended to 12-14 months. This appears to be true regardless of whether the users are on DIY builds or commercially vendored distributions. Extending the patch support period would thus lead to a larger percentage of our user base running supported versions compared to what we have now. + +A yearly support period provides the cushion end-users appear to desire, and is more aligned with familiar annual planning cycles. +There are many unknowns about changing the support window for a project with as many moving parts as Kubernetes. Keeping the change relatively small (relatively being the important word) gives us the chance to find out what those unknowns are in detail and address them. +From Kubernetes version 1.19 on, the support window will be extended to one year. For Kubernetes versions 1.16, 1.17, and 1.18, the story is more complicated. + +All of these versions still fall under the older “three releases support” model, and will drop out of support when 1.19, 1.20 and 1.21 are respectively released. However, because the 1.19 release has been delayed due to the events of 2020, they will end up with close to a year of support (depending on their exact release dates).
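The old and new policies can be contrasted in a short sketch. Only the 26 August 2020 release date for 1.19 comes from the post; the 1.16 date below is an illustrative assumption:

```python
from datetime import date

# Illustrative release dates; only 1.19's comes from the post.
release_dates = {16: date(2019, 9, 18), 19: date(2020, 8, 26)}

def old_policy_end_of_support(minor):
    """Pre-1.19 policy: 1.X is supported until 1.(X+3) is released."""
    return release_dates.get(minor + 3)

def new_policy_end_of_support(minor):
    """From 1.19 on: one year of patch support after the minor's own release."""
    released = release_dates[minor]
    return released.replace(year=released.year + 1)

print(old_policy_end_of_support(16))  # 2020-08-26: 1.16 leaves support when 1.19 ships
print(new_policy_end_of_support(19))  # 2021-08-26: 1.19 gets a full year
```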
+ +For example, 1.19 was released on the 26th of August 2020, which is 11 months since the release of 1.16. Since 1.16 is still under the old release policy, it is now out of support. + +![Support Timeline](/images/blog/2020-08-31-increase-kubernetes-support-one-year/support-timeline.png) + +If you’ve got thoughts or feedback, we’d love to hear them. Please contact us on [#wg-lts](https://kubernetes.slack.com/messages/wg-lts/) on the Kubernetes Slack, or on the [kubernetes-wg-lts mailing list](https://groups.google.com/g/kubernetes-wg-lts). diff --git a/content/en/blog/_posts/2020-09-01-generic-ephemeral-volumes.md b/content/en/blog/_posts/2020-09-01-generic-ephemeral-volumes.md new file mode 100644 index 0000000000..49b0d3b26f --- /dev/null +++ b/content/en/blog/_posts/2020-09-01-generic-ephemeral-volumes.md @@ -0,0 +1,394 @@ +--- +layout: blog +title: 'Ephemeral volumes with storage capacity tracking: EmptyDir on steroids' +date: 2020-09-01 +slug: ephemeral-volumes-with-storage-capacity-tracking +--- + +**Author:** Patrick Ohly (Intel) + +Some applications need additional storage but don't care whether that +data is stored persistently across restarts. For example, caching +services are often limited by memory size and can move infrequently +used data into storage that is slower than memory with little impact +on overall performance. Other applications expect some read-only input +data to be present in files, like configuration data or secret keys. + +Kubernetes already supports several kinds of such [ephemeral +volumes](/docs/concepts/storage/ephemeral-volumes), but the +functionality of those is limited to what is implemented inside +Kubernetes. + +[CSI ephemeral volumes](https://kubernetes.io/blog/2020/01/21/csi-ephemeral-inline-volumes/) +made it possible to extend Kubernetes with CSI +drivers that provide light-weight, local volumes.
These [*inject +arbitrary states, such as configuration, secrets, identity, variables +or similar +information*](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20190122-csi-inline-volumes.md#motivation). +CSI drivers must be modified to support this Kubernetes feature, +i.e. normal, standard-compliant CSI drivers will not work, and +by design such volumes are supposed to be usable on whatever node +is chosen for a pod. + +This is problematic for volumes which consume significant resources on +a node or for special storage that is only available on some nodes. +Therefore, Kubernetes 1.19 introduces two new alpha features for +volumes that are conceptually more like the `EmptyDir` volumes: +- [*generic* ephemeral volumes](/docs/concepts/storage/ephemeral-volumes#generic-ephemeral-volumes) and +- [CSI storage capacity tracking](/docs/concepts/storage/storage-capacity). + +The advantages of the new approach are: +- Storage can be local or network-attached. +- Volumes can have a fixed size that applications are never able to exceed. +- Works with any CSI driver that supports provisioning of persistent + volumes and (for capacity tracking) implements the CSI `GetCapacity` call. +- Volumes may have some initial data, depending on the driver and + parameters. +- All of the typical volume operations (snapshotting, + resizing, the future storage capacity tracking, etc.) + are supported. +- The volumes are usable with any app controller that accepts + a Pod or volume specification. +- The Kubernetes scheduler itself picks suitable nodes, i.e. there is + no need anymore to implement and configure scheduler extenders and + mutating webhooks. + +This makes generic ephemeral volumes a suitable solution for several +use cases: + +# Use cases + +## Persistent Memory as DRAM replacement for memcached + +Recent releases of memcached added [support for using Persistent +Memory](https://memcached.org/blog/persistent-memory/) (PMEM) instead +of standard DRAM. 
When deploying memcached through one of the app
+controllers, generic ephemeral volumes make it possible to request a PMEM volume
+of a certain size from a CSI driver like
+[PMEM-CSI](https://intel.github.io/pmem-csi/).
+
+## Local LVM storage as scratch space
+
+Applications working with data sets that exceed the RAM size can
+request local storage with performance characteristics or a size that
+the normal Kubernetes `EmptyDir` volumes cannot provide. For example,
+[TopoLVM](https://github.com/cybozu-go/topolvm) was written for that
+purpose.
+
+## Read-only access to volumes with data
+
+Provisioning a volume might result in a non-empty volume:
+- [restoring a snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)
+- [cloning a volume](/docs/concepts/storage/volume-pvc-datasource)
+- [generic data populators](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20200120-generic-data-populators.md)
+
+Such volumes can be mounted read-only.
+
+# How it works
+
+## Generic ephemeral volumes
+
+The key idea behind generic ephemeral volumes is that a new volume
+source, the so-called
+[`EphemeralVolumeSource`](/docs/reference/generated/kubernetes-api/#ephemeralvolumesource-v1alpha1-core),
+contains all fields that are needed to create a volume claim
+(historically called persistent volume claim, PVC). A new controller
+in the `kube-controller-manager` waits for Pods which embed such a
+volume source and then creates a PVC for that pod. To a CSI driver
+deployment, that PVC looks like any other, so no special support is
+needed.
+
+As long as these PVCs exist, they can be used like any other volume claim. In
+particular, they can be referenced as a data source in volume cloning or
+snapshotting. The PVC object also holds the current status of the
+volume.
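To make the mechanism concrete, here is a deliberately simplified sketch of what the new controller does for one ephemeral volume of a Pod. The type names are invented for the example and the real controller works with the Kubernetes API types and sets an ownerReference, but the claim name matches the `<pod name>-<volume name>` pattern that the real controller uses:

```go
package main

import "fmt"

// volumeClaimTemplate stands in for the PVC spec fields that the real
// EphemeralVolumeSource embeds; it is illustrative only.
type volumeClaimTemplate struct {
	StorageClassName string
	Storage          string
}

// claim is a stand-in for a PersistentVolumeClaim object.
type claim struct {
	Name     string
	OwnerPod string // the real controller sets an ownerReference instead
	Template volumeClaimTemplate
}

// claimForPodVolume derives the claim that the controller would create
// for one ephemeral volume of a Pod: the claim name is the Pod name and
// the volume name joined by a hyphen.
func claimForPodVolume(podName, volumeName string, tmpl volumeClaimTemplate) claim {
	return claim{
		Name:     podName + "-" + volumeName,
		OwnerPod: podName,
		Template: tmpl,
	}
}

func main() {
	c := claimForPodVolume("my-csi-app-inline-volume", "my-csi-volume",
		volumeClaimTemplate{StorageClassName: "pmem-csi-sc-late-binding", Storage: "4Gi"})
	fmt.Println(c.Name) // my-csi-app-inline-volume-my-csi-volume
}
```

Running the sketch prints the same claim name that appears for the example Pod later in this post.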
+
+Naming of the automatically created PVCs is deterministic: the name is
+a combination of Pod name and volume name, with a hyphen (`-`) in the
+middle. This deterministic naming makes it easier to
+interact with the PVC because one does not have to search for it once
+the Pod name and volume name are known. The downside is that the name might
+already be in use. Kubernetes detects such a conflict and blocks Pod
+startup.
+
+To ensure that the volume gets deleted together with the pod, the
+controller makes the Pod the owner of the volume claim. When the Pod
+gets deleted, the normal garbage-collection mechanism also removes the
+claim and thus the volume.
+
+Claims select the storage driver through the normal storage class
+mechanism. Although storage classes with both immediate and late
+binding (aka `WaitForFirstConsumer`) are supported, for ephemeral
+volumes it makes more sense to use `WaitForFirstConsumer`: then Pod
+scheduling can take into account both node utilization and
+availability of storage when choosing a node. This is where the other
+new feature comes in.
+
+## Storage capacity tracking
+
+Normally, the Kubernetes scheduler has no information about where a
+CSI driver might be able to create a volume. It also has no way of
+talking directly to a CSI driver to retrieve that information. It
+therefore tries different nodes until it finds one where all volumes
+can be made available (late binding) or leaves it entirely to the
+driver to choose a location (immediate binding).
+
+The new [`CSIStorageCapacity` alpha
+API](/docs/reference/generated/kubernetes-api/v1.19/#csistoragecapacity-v1alpha1-storage-k8s-io)
+allows storing the necessary information in etcd, where it is available to the
+scheduler.
In contrast to support for generic ephemeral volumes,
+storage capacity tracking must be [enabled when deploying a CSI
+driver](https://github.com/kubernetes-csi/external-provisioner/blob/master/README.md#capacity-support):
+the `external-provisioner` must be told to publish capacity
+information that it then retrieves from the CSI driver through the normal
+`GetCapacity` call.
+
+When the Kubernetes scheduler needs to choose a node for a Pod with an
+unbound volume that uses late binding and the CSI driver deployment
+has opted into the feature by setting the [`CSIDriver.storageCapacity`
+flag](/docs/reference/generated/kubernetes-api/v1.19/#csidriver-v1beta1-storage-k8s-io),
+the scheduler automatically filters out nodes that do not have
+access to enough storage capacity. This works for generic ephemeral
+and persistent volumes but *not* for CSI ephemeral volumes because the
+parameters of those are opaque for Kubernetes.
+
+As usual, volumes with immediate binding get created before scheduling
+pods, with their location chosen by the storage driver. Therefore, the
+external-provisioner's default configuration skips storage
+classes with immediate binding as the information wouldn't be used anyway.
+
+Because the Kubernetes scheduler must act on potentially outdated
+information, there is no guarantee that the capacity is still available
+when a volume is to be created. Still, the chances that it can be created
+without retries should be higher.
+
+# Security
+
+## CSIStorageCapacity
+
+CSIStorageCapacity objects are namespaced. When each CSI
+driver is deployed in its own namespace and, as recommended, the RBAC
+permissions for CSIStorageCapacity are limited to that namespace, it is
+always obvious where the data came from. However, Kubernetes does
+not check that and typically drivers get installed in the same
+namespace anyway, so ultimately drivers are *expected to behave* and
+not publish incorrect data.
+ +## Generic ephemeral volumes + +If users have permission to create a Pod (directly or indirectly), +then they can also create generic ephemeral volumes even when they do +not have permission to create a volume claim. That's because RBAC +permission checks are applied to the controller which creates the +PVC, not the original user. This is a fundamental change that must be +[taken into +account](/docs/concepts/storage/ephemeral-volumes#security) before +enabling the feature in clusters where untrusted users are not +supposed to have permission to create volumes. + +# Example + +A [special branch](https://github.com/intel/pmem-csi/commits/kubernetes-1-19-blog-post) +in PMEM-CSI contains all the necessary changes to bring up a +Kubernetes 1.19 cluster inside QEMU VMs with both alpha features +enabled. The PMEM-CSI driver code is used unchanged, only the +deployment was updated. + +On a suitable machine (Linux, non-root user can use Docker - see the +[QEMU and +Kubernetes](https://intel.github.io/pmem-csi/0.7/docs/autotest.html#qemu-and-kubernetes) +section in the PMEM-CSI documentation), the following commands bring +up a cluster and install the PMEM-CSI driver: + +```console +git clone --branch=kubernetes-1-19-blog-post https://github.com/intel/pmem-csi.git +cd pmem-csi +export TEST_KUBERNETES_VERSION=1.19 TEST_FEATURE_GATES=CSIStorageCapacity=true,GenericEphemeralVolume=true TEST_PMEM_REGISTRY=intel +make start && echo && test/setup-deployment.sh +``` + +If all goes well, the output contains the following usage +instructions: + +``` +The test cluster is ready. Log in with [...]/pmem-csi/_work/pmem-govm/ssh.0, run +kubectl once logged in. Alternatively, use kubectl directly with the +following env variable: + KUBECONFIG=[...]/pmem-csi/_work/pmem-govm/kube.config + +secret/pmem-csi-registry-secrets created +secret/pmem-csi-node-secrets created +serviceaccount/pmem-csi-controller created +... 
+To try out the pmem-csi driver ephemeral volumes:
+   cat deploy/kubernetes-1.19/pmem-app-ephemeral.yaml |
+   [...]/pmem-csi/_work/pmem-govm/ssh.0 kubectl create -f -
+```
+
+The CSIStorageCapacity objects are not meant to be human-readable, so
+some post-processing is needed. The following Golang template filters
+all objects by the storage class that the example uses and prints the
+name, topology and capacity:
+
+```console
+kubectl get \
+        -o go-template='{{range .items}}{{if eq .storageClassName "pmem-csi-sc-late-binding"}}{{.metadata.name}} {{.nodeTopology.matchLabels}} {{.capacity}}
+{{end}}{{end}}' \
+        csistoragecapacities
+```
+
+```
+csisc-2js6n map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker2] 30716Mi
+csisc-sqdnt map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker1] 30716Mi
+csisc-ws4bv map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker3] 30716Mi
+```
+
+One individual object has the following content:
+
+```console
+kubectl describe csistoragecapacities/csisc-sqdnt
+```
+
+```
+Name:         csisc-sqdnt
+Namespace:    default
+Labels:       <none>
+Annotations:  <none>
+API Version:  storage.k8s.io/v1alpha1
+Capacity:     30716Mi
+Kind:         CSIStorageCapacity
+Metadata:
+  Creation Timestamp:  2020-08-11T15:41:03Z
+  Generate Name:       csisc-
+  Managed Fields:
+    ...
+  Owner References:
+    API Version:     apps/v1
+    Controller:      true
+    Kind:            StatefulSet
+    Name:            pmem-csi-controller
+    UID:             590237f9-1eb4-4208-b37b-5f7eab4597d1
+  Resource Version:  2994
+  Self Link:         /apis/storage.k8s.io/v1alpha1/namespaces/default/csistoragecapacities/csisc-sqdnt
+  UID:               da36215b-3b9d-404a-a4c7-3f1c3502ab13
+Node Topology:
+  Match Labels:
+    pmem-csi.intel.com/node:  pmem-csi-pmem-govm-worker1
+Storage Class Name:  pmem-csi-sc-late-binding
+Events:              <none>
+```
+
+Now let's create the example app with one generic ephemeral
+volume. The `pmem-app-ephemeral.yaml` file contains:
+
+```yaml
+# This example Pod definition demonstrates
+# how to use generic ephemeral inline volumes
+# with a PMEM-CSI storage class.
+kind: Pod +apiVersion: v1 +metadata: + name: my-csi-app-inline-volume +spec: + containers: + - name: my-frontend + image: intel/pmem-csi-driver-test:v0.7.14 + command: [ "sleep", "100000" ] + volumeMounts: + - mountPath: "/data" + name: my-csi-volume + volumes: + - name: my-csi-volume + ephemeral: + volumeClaimTemplate: + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 4Gi + storageClassName: pmem-csi-sc-late-binding +``` + +After creating that as shown in the usage instructions above, we have one additional Pod and PVC: + +```console +kubectl get pods/my-csi-app-inline-volume -o wide +``` + +``` +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +my-csi-app-inline-volume 1/1 Running 0 6m58s 10.36.0.2 pmem-csi-pmem-govm-worker1 +``` + +```console +kubectl get pvc/my-csi-app-inline-volume-my-csi-volume +``` + +``` +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +my-csi-app-inline-volume-my-csi-volume Bound pvc-c11eb7ab-a4fa-46fe-b515-b366be908823 4Gi RWO pmem-csi-sc-late-binding 9m21s +``` + +That PVC is owned by the Pod: + +```console +kubectl get -o yaml pvc/my-csi-app-inline-volume-my-csi-volume +``` + +``` +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + annotations: + pv.kubernetes.io/bind-completed: "yes" + pv.kubernetes.io/bound-by-controller: "yes" + volume.beta.kubernetes.io/storage-provisioner: pmem-csi.intel.com + volume.kubernetes.io/selected-node: pmem-csi-pmem-govm-worker1 + creationTimestamp: "2020-08-11T15:44:57Z" + finalizers: + - kubernetes.io/pvc-protection + managedFields: + ... + name: my-csi-app-inline-volume-my-csi-volume + namespace: default + ownerReferences: + - apiVersion: v1 + blockOwnerDeletion: true + controller: true + kind: Pod + name: my-csi-app-inline-volume + uid: 75c925bf-ca8e-441a-ac67-f190b7a2265f +... 
+``` + +Eventually, the storage capacity information for `pmem-csi-pmem-govm-worker1` also gets updated: + +``` +csisc-2js6n map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker2] 30716Mi +csisc-sqdnt map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker1] 26620Mi +csisc-ws4bv map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker3] 30716Mi +``` + +If another app needs more than 26620Mi, the Kubernetes +scheduler will not pick `pmem-csi-pmem-govm-worker1` anymore. + + +# Next steps + +Both features are under development. Several open questions were +already raised during the alpha review process. The two enhancement +proposals document the work that will be needed for migration to beta and what +alternatives were already considered and rejected: + +* [KEP-1698: generic ephemeral inline +volumes](https://github.com/kubernetes/enhancements/blob/9d7a75d/keps/sig-storage/1698-generic-ephemeral-volumes/README.md) +* [KEP-1472: Storage Capacity +Tracking](https://github.com/kubernetes/enhancements/tree/9d7a75d/keps/sig-storage/1472-storage-capacity-tracking) + +Your feedback is crucial for driving that development. SIG-Storage +[meets +regularly](https://github.com/kubernetes/community/tree/master/sig-storage#meetings) +and can be reached via [Slack and a mailing +list](https://github.com/kubernetes/community/tree/master/sig-storage#contact). 
diff --git a/content/en/blog/_posts/2020-09-02-scaling-kubernetes-networking-endpointslices.md b/content/en/blog/_posts/2020-09-02-scaling-kubernetes-networking-endpointslices.md new file mode 100644 index 0000000000..eebf88a1f1 --- /dev/null +++ b/content/en/blog/_posts/2020-09-02-scaling-kubernetes-networking-endpointslices.md @@ -0,0 +1,46 @@ +--- +layout: blog +title: 'Scaling Kubernetes Networking With EndpointSlices' +date: 2020-09-02 +slug: scaling-kubernetes-networking-with-endpointslices +--- + +**Author:** Rob Scott (Google) + +EndpointSlices are an exciting new API that provides a scalable and extensible alternative to the Endpoints API. EndpointSlices track IP addresses, ports, readiness, and topology information for Pods backing a Service. + +In Kubernetes 1.19 this feature is enabled by default with kube-proxy reading from [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) instead of Endpoints. Although this will mostly be an invisible change, it should result in noticeable scalability improvements in large clusters. It also enables significant new features in future Kubernetes releases like [Topology Aware Routing](/docs/concepts/services-networking/service-topology/). + +## Scalability Limitations of the Endpoints API +With the Endpoints API, there was only one Endpoints resource for a Service. That meant that it needed to be able to store IP addresses and ports (network endpoints) for every Pod that was backing the corresponding Service. This resulted in huge API resources. To compound this problem, kube-proxy was running on every node and watching for any updates to Endpoints resources. If even a single network endpoint changed in an Endpoints resource, the whole object would have to be sent to each of those instances of kube-proxy. + +A further limitation of the Endpoints API is that it limits the number of network endpoints that can be tracked for a Service. The default size limit for an object stored in etcd is 1.5MB. 
In some cases that can limit an Endpoints resource to 5,000 Pod IPs. This is not an issue for most users, but it becomes a significant problem for users with Services approaching this size.
+
+To show just how significant these issues become at scale, it helps to have a simple example. Think about a Service with 5,000 Pods: it might end up with a 1.5MB Endpoints resource. If even a single network endpoint in that list changes, the full Endpoints resource will need to be distributed to each Node in the cluster. This becomes quite an issue in a large cluster with 3,000 Nodes. Each update would involve sending 4.5GB of data (1.5MB Endpoints * 3,000 Nodes) across the cluster. That's nearly enough to fill up a DVD, and it would happen for each Endpoints change. Imagine a rolling update that results in all 5,000 Pods being replaced - that's more than 22TB (or 5,000 DVDs) worth of data transferred.
+
+## Splitting endpoints up with the EndpointSlice API
+The EndpointSlice API was designed to address this issue with an approach similar to sharding. Instead of tracking all Pod IPs for a Service with a single Endpoints resource, we split them into multiple smaller EndpointSlices.
+
+Consider an example where a Service is backed by 15 pods. We'd end up with a single Endpoints resource that tracked all of them. If EndpointSlices were configured to store 5 endpoints each, we'd end up with 3 different EndpointSlices:
+![EndpointSlices](/images/blog/2020-09-02-scaling-kubernetes-networking-endpointslices/endpoint-slices.png)
+
+By default, EndpointSlices store as many as 100 endpoints each, though this can be configured with the `--max-endpoints-per-slice` flag on kube-controller-manager.
+
+## EndpointSlices provide 10x scalability improvements
+This API dramatically improves networking scalability. Now when a Pod is added or removed, only one small EndpointSlice needs to be updated.
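The fan-out arithmetic above can be checked with a short sketch. The per-slice size used below is a rough estimate (a 100-endpoint slice on the order of tens of kilobytes), not a measured value:

```go
package main

import "fmt"

func main() {
	const (
		endpointsBytes = 1.5e6  // full Endpoints object for a 5,000-Pod Service
		sliceBytes     = 30e3   // rough size of a 100-endpoint EndpointSlice (estimate)
		nodes          = 3000.0 // one watching kube-proxy per node
		podChanges     = 5000.0 // a rolling update replaces every Pod once
	)

	// Endpoints API: every change re-sends the whole object to every node.
	fmt.Printf("Endpoints:      %.1f GB per change, %.2f TB per rollout\n",
		endpointsBytes*nodes/1e9, endpointsBytes*nodes*podChanges/1e12)

	// EndpointSlice API: only the one affected slice is re-sent.
	fmt.Printf("EndpointSlices: %.2f GB per change, %.2f TB per rollout\n",
		sliceBytes*nodes/1e9, sliceBytes*nodes*podChanges/1e12)
}
```

With these assumptions the Endpoints figures reproduce the 4.5GB-per-change and 22.5TB-per-rollout numbers from the example, while the EndpointSlice figures come out roughly 50 times smaller.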
This difference becomes quite noticeable when hundreds or thousands of Pods are backing a single Service. + +Potentially more significant, now that all Pod IPs for a Service don't need to be stored in a single resource, we don't have to worry about the size limit for objects stored in etcd. EndpointSlices have already been used to scale Services beyond 100,000 network endpoints. + +All of this is brought together with some significant performance improvements that have been made in kube-proxy. When using EndpointSlices at scale, significantly less data will be transferred for endpoints updates and kube-proxy should be faster to update iptables or ipvs rules. Beyond that, Services can now scale to at least 10 times beyond any previous limitations. + +## EndpointSlices enable new functionality +Introduced as an alpha feature in Kubernetes v1.16, EndpointSlices were built to enable some exciting new functionality in future Kubernetes releases. This could include dual-stack Services, topology aware routing, and endpoint subsetting. + +Dual-Stack Services are an exciting new feature that has been in development alongside EndpointSlices. They will utilize both IPv4 and IPv6 addresses for Services and rely on the addressType field on EndpointSlices to track these addresses by IP family. + +Topology aware routing will update kube-proxy to prefer routing requests within the same zone or region. This makes use of the topology fields stored for each endpoint in an EndpointSlice. As a further refinement of that, we're exploring the potential of endpoint subsetting. This would allow kube-proxy to only watch a subset of EndpointSlices. For example, this might be combined with topology aware routing so that kube-proxy would only need to watch EndpointSlices containing endpoints within the same zone. This would provide another very significant scalability improvement. + +## What does this mean for the Endpoints API? 
+Although the EndpointSlice API is providing a newer and more scalable alternative to the Endpoints API, the Endpoints API will continue to be considered generally available and stable. The most significant change planned for the Endpoints API will involve beginning to truncate Endpoints that would otherwise run into scalability issues. + +The Endpoints API is not going away, but many new features will rely on the EndpointSlice API. To take advantage of the new scalability and functionality that EndpointSlices provide, applications that currently consume Endpoints will likely want to consider supporting EndpointSlices in the future. diff --git a/content/en/blog/_posts/2020-09-03-warnings/index.md b/content/en/blog/_posts/2020-09-03-warnings/index.md new file mode 100644 index 0000000000..2e93d5a573 --- /dev/null +++ b/content/en/blog/_posts/2020-09-03-warnings/index.md @@ -0,0 +1,278 @@ +--- +layout: blog +title: "Warning: Helpful Warnings Ahead" +date: 2020-09-03 +slug: warnings +--- + +**Author**: Jordan Liggitt (Google) + +As Kubernetes maintainers, we're always looking for ways to improve usability while preserving compatibility. +As we develop features, triage bugs, and answer support questions, we accumulate information that would be helpful for Kubernetes users to know. +In the past, sharing that information was limited to out-of-band methods like release notes, announcement emails, documentation, and blog posts. +Unless someone knew to seek out that information and managed to find it, they would not benefit from it. + +In Kubernetes v1.19, we added a feature that allows the Kubernetes API server to +[send warnings to API clients](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1693-warnings). +The warning is sent using a [standard `Warning` response header](https://tools.ietf.org/html/rfc7234#section-5.5), +so it does not change the status code or response body in any way. 
+This allows the server to send warnings easily readable by any API client, while remaining compatible with previous client versions. + +Warnings are surfaced by `kubectl` v1.19+ in `stderr` output, and by the `k8s.io/client-go` client library v0.19.0+ in log output. +The `k8s.io/client-go` behavior can be overridden [per-process](https://godoc.org/k8s.io/client-go/rest#SetDefaultWarningHandler) +or [per-client](https://godoc.org/k8s.io/client-go/rest#Config). + +## Deprecation Warnings + +The first way we are using this new capability is to send warnings for use of deprecated APIs. + +Kubernetes is a [big, fast-moving project](https://www.cncf.io/cncf-kubernetes-project-journey/#development-velocity). +Keeping up with the [changes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#changelog-since-v1180) +in each release can be daunting, even for people who work on the project full-time. One important type of change is API deprecations. +As APIs in Kubernetes graduate to GA versions, pre-release API versions are deprecated and eventually removed. + +Even though there is an [extended deprecation period](/docs/reference/using-api/deprecation-policy/), +and deprecations are [included in release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecation), +they can still be hard to track. During the deprecation period, the pre-release API remains functional, +allowing several releases to transition to the stable API version. However, we have found that users often don't even realize +they are depending on a deprecated API version until they upgrade to the release that stops serving it. + +Starting in v1.19, whenever a request is made to a deprecated REST API, a warning is returned along with the API response. +This warning includes details about the release in which the API will no longer be available, and the replacement API version. 
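On the wire, each warning is a standard `Warning` header value of the form `299 - "message"` (warn-code 299, warn-agent `-`). As a sketch — not a full RFC 7234 parser — the message can be extracted like this:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// warningText extracts the quoted warn-text from a header value such as
//   299 - "some free-form warning message"
// It handles only this single-warning form (warn-code 299, warn-agent "-"),
// not the full RFC 7234 grammar.
func warningText(header string) (string, error) {
	parts := strings.SplitN(header, " ", 3)
	if len(parts) != 3 || parts[0] != "299" {
		return "", fmt.Errorf("unexpected warning header: %q", header)
	}
	// parts[1] is the warn-agent; the quoted warn-text follows it.
	return strconv.Unquote(parts[2])
}

func main() {
	h := `299 - "networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress"`
	text, err := warningText(h)
	if err != nil {
		panic(err)
	}
	fmt.Println(text)
}
```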
+
+Because the warning originates at the server, and is intercepted at the client level, it works for all kubectl commands,
+including high-level commands like `kubectl apply`, and low-level commands like `kubectl get --raw`:
+
+![kubectl applying a manifest file, then displaying a warning message 'networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress'](kubectl-warnings.png)
+
+This helps people affected by the deprecation to know the request they are making is deprecated,
+how long they have to address the issue, and what API they should use instead.
+This is especially helpful when the user is applying a manifest they didn't create,
+so they have time to reach out to the authors to ask for an updated version.
+
+We also realized that the person *using* a deprecated API is often not the same person responsible for upgrading the cluster,
+so we added two administrator-facing tools to help track use of deprecated APIs and determine when upgrades are safe.
+
+### Metrics
+
+Starting in Kubernetes v1.19, when a request is made to a deprecated REST API endpoint,
+an `apiserver_requested_deprecated_apis` gauge metric is set to `1` in the kube-apiserver process.
+This metric has labels for the API `group`, `version`, `resource`, and `subresource`,
+and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served.
+ +This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json), +and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested +from the current instance of the API server: + +```sh +kubectl get --raw /metrics | prom2json | jq ' + .[] | select(.name=="apiserver_requested_deprecated_apis").metrics[].labels +' +``` + +Output: + +```json +{ + "group": "extensions", + "removed_release": "1.22", + "resource": "ingresses", + "subresource": "", + "version": "v1beta1" +} +{ + "group": "rbac.authorization.k8s.io", + "removed_release": "1.22", + "resource": "clusterroles", + "subresource": "", + "version": "v1beta1" +} +``` + +This shows the deprecated `extensions/v1beta1` Ingress and `rbac.authorization.k8s.io/v1beta1` ClusterRole APIs +have been requested on this server, and will be removed in v1.22. + +We can join that information with the `apiserver_request_total` metrics to get more details about the requests being made to these APIs: + +```sh +kubectl get --raw /metrics | prom2json | jq ' + # set $deprecated to a list of deprecated APIs + [ + .[] | + select(.name=="apiserver_requested_deprecated_apis").metrics[].labels | + {group,version,resource} + ] as $deprecated + + | + + # select apiserver_request_total metrics which are deprecated + .[] | select(.name=="apiserver_request_total").metrics[] | + select(.labels | {group,version,resource} as $key | $deprecated | index($key)) +' +``` + +Output: + +```json +{ + "labels": { + "code": "0", + "component": "apiserver", + "contentType": "application/vnd.kubernetes.protobuf;stream=watch", + "dry_run": "", + "group": "extensions", + "resource": "ingresses", + "scope": "cluster", + "subresource": "", + "verb": "WATCH", + "version": "v1beta1" + }, + "value": "21" +} +{ + "labels": { + "code": "200", + "component": "apiserver", + "contentType": "application/vnd.kubernetes.protobuf", + "dry_run": "", + "group": "extensions", + "resource": "ingresses", + "scope": 
"cluster",
+    "subresource": "",
+    "verb": "LIST",
+    "version": "v1beta1"
+  },
+  "value": "1"
+}
+{
+  "labels": {
+    "code": "200",
+    "component": "apiserver",
+    "contentType": "application/json",
+    "dry_run": "",
+    "group": "rbac.authorization.k8s.io",
+    "resource": "clusterroles",
+    "scope": "cluster",
+    "subresource": "",
+    "verb": "LIST",
+    "version": "v1beta1"
+  },
+  "value": "1"
+}
+```
+
+The output shows that only read requests are being made to these APIs, and the most requests have been made to watch the deprecated Ingress API.
+
+You can also find that information through the following Prometheus query,
+which returns information about requests made to deprecated APIs which will be removed in v1.22:
+
+```promql
+apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource)
+group_right() apiserver_request_total
+```
+
+### Audit annotations
+
+Metrics are a fast way to check whether deprecated APIs are being used, and at what rate,
+but they don't include enough information to identify particular clients or API objects.
+Starting in Kubernetes v1.19, [audit events](/docs/tasks/debug-application-cluster/audit/)
+for requests to deprecated APIs include an audit annotation of `"k8s.io/deprecated":"true"`.
+Administrators can use those audit events to identify specific clients or objects that need to be updated.
+
+## Custom Resource Definitions
+
+Along with the API server's ability to warn about deprecated API use, starting in v1.19, a CustomResourceDefinition can indicate a
+[particular version of the resource it defines is deprecated](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation).
+When API requests to a deprecated version of a custom resource are made, a warning message is returned, matching the behavior of built-in APIs.
+
+The author of the CustomResourceDefinition can also customize the warning for each version if they want to.
+This allows them to give a pointer to a migration guide or other information if needed.
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: crontabs.example.com
+spec:
+  versions:
+  - name: v1alpha1
+    # This indicates the v1alpha1 version of the custom resource is deprecated.
+    # API requests to this version receive a warning in the server response.
+    deprecated: true
+    # This overrides the default warning returned to clients making v1alpha1 API requests.
+    deprecationWarning: "example.com/v1alpha1 CronTab is deprecated; use example.com/v1 CronTab (see http://example.com/v1alpha1-v1)"
+    ...
+
+  - name: v1beta1
+    # This indicates the v1beta1 version of the custom resource is deprecated.
+    # API requests to this version receive a warning in the server response.
+    # A default warning message is returned for this version.
+    deprecated: true
+    ...
+
+  - name: v1
+    ...
+```
+
+## Admission Webhooks
+
+[Admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers)
+are the primary way to integrate custom policies or validation with Kubernetes.
+Starting in v1.19, admission webhooks can [return warning messages](/docs/reference/access-authn-authz/extensible-admission-controllers/#response)
+that are passed along to the requesting API client. Warnings can be returned with allowed or rejected admission responses.
+
+As an example, to allow a request but warn about a configuration known not to work well, an admission webhook could send this response:
+
+```json
+{
+  "apiVersion": "admission.k8s.io/v1",
+  "kind": "AdmissionReview",
+  "response": {
+    "uid": "<value from request.uid>",
+    "allowed": true,
+    "warnings": [
+      ".spec.memory: requests >1GB do not work on Fridays"
+    ]
+  }
+}
+```
+
+If you are implementing a webhook that returns a warning message, here are some tips:
+
+* Don't include a "Warning:" prefix in the message (that is added by clients on output)
+* Use warning messages to describe problems the client making the API request should correct or be aware of
+* Be brief; limit warnings to 120 characters if possible
+
+There are many ways admission webhooks could use this new feature, and I'm looking forward to seeing what people come up with.
+Here are a couple of ideas to get you started:
+
+* webhook implementations adding a "complain" mode, where they return warnings instead of rejections,
+  to allow trying out a policy to verify it is working as expected before starting to enforce it
+* "lint" or "vet"-style webhooks, inspecting objects and surfacing warnings when best practices are not followed
+
+## Kubectl strict mode
+
+If you want to be sure you notice deprecations as soon as possible and get a jump start on addressing them,
+`kubectl` added a `--warnings-as-errors` option in v1.19. When invoked with this option,
+`kubectl` treats any warnings it receives from the server as errors and exits with a non-zero exit code:
+
+![kubectl applying a manifest file with a --warnings-as-errors flag, displaying a warning message and exiting with a non-zero exit code](kubectl-warnings-as-errors.png)
+
+This could be used in a CI job to apply manifests to a current server,
+requiring a zero exit code in order for the CI job to succeed.
+ +## Future Possibilities + +Now that we have a way to communicate helpful information to users in context, +we're already considering other ways we can use this to improve people's experience with Kubernetes. +A couple areas we're looking at next are warning about [known problematic values](http://issue.k8s.io/64841#issuecomment-395141013) +we cannot reject outright for compatibility reasons, and warning about use of deprecated fields or field values +(like selectors using beta os/arch node labels, [deprecated in v1.14](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated)). +I'm excited to see progress in this area, continuing to make it easier to use Kubernetes. + +--- + +_[Jordan Liggitt](https://twitter.com/liggitt) is a software engineer at Google, and helps lead Kubernetes authentication, authorization, and API efforts._ \ No newline at end of file diff --git a/content/en/blog/_posts/2020-09-03-warnings/kubectl-warnings-as-errors.png b/content/en/blog/_posts/2020-09-03-warnings/kubectl-warnings-as-errors.png new file mode 100644 index 0000000000..5171eca6bc Binary files /dev/null and b/content/en/blog/_posts/2020-09-03-warnings/kubectl-warnings-as-errors.png differ diff --git a/content/en/blog/_posts/2020-09-03-warnings/kubectl-warnings.png b/content/en/blog/_posts/2020-09-03-warnings/kubectl-warnings.png new file mode 100644 index 0000000000..967bc591bf Binary files /dev/null and b/content/en/blog/_posts/2020-09-03-warnings/kubectl-warnings.png differ diff --git a/content/en/blog/_posts/2020-09-04-introducing-structured-logs.md b/content/en/blog/_posts/2020-09-04-introducing-structured-logs.md new file mode 100644 index 0000000000..e466711aa4 --- /dev/null +++ b/content/en/blog/_posts/2020-09-04-introducing-structured-logs.md @@ -0,0 +1,56 @@ +--- +layout: blog +title: 'Introducing Structured Logs' +date: 2020-09-04 +slug: kubernetes-1-19-Introducing-Structured-Logs +--- + +**Authors:** Marek Siarkowicz (Google), 
Nathan Beach (Google)
+
+Logs are an essential aspect of observability and a critical tool for debugging. But Kubernetes logs have traditionally been unstructured strings, making any automated parsing difficult and any downstream processing, analysis, or querying challenging to do reliably.
+
+In Kubernetes 1.19, we are adding support for structured logs, which natively support (key, value) pairs and object references. We have also updated many logging calls such that over 99% of logging volume in a typical deployment is now migrated to the structured format.
+
+To maintain backwards compatibility, structured logs will still be output as a string, where the string contains representations of those "key"="value" pairs. Starting in alpha in 1.19, logs can also be output in JSON format using the `--logging-format=json` flag.
+
+## Using Structured Logs
+
+We've added two new methods to the klog library: InfoS and ErrorS. For example, this invocation of InfoS:
+
+```golang
+klog.InfoS("Pod status updated", "pod", klog.KObj(pod), "status", status)
+```
+
+will result in this log:
+
+```
+I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
+```
+
+Or, if the `--logging-format=json` flag is set, it will result in this output:
+
+```json
+{
+  "ts": 1580306777.04728,
+  "msg": "Pod status updated",
+  "pod": {
+    "name": "coredns",
+    "namespace": "kube-system"
+  },
+  "status": "ready"
+}
+```
+
+This means downstream logging tools can easily ingest structured logging data instead of using regular expressions to parse unstructured strings. This also makes processing logs easier, querying logs more robust, and analyzing logs much faster.
+
+With structured logs, all references to Kubernetes objects are structured the same way, so you can filter the output to only the log entries referencing a particular pod.
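As a sketch of what that downstream filtering can look like, the stand-alone program below (illustrative only, not part of Kubernetes or klog) parses JSON log lines in the shape shown above and keeps only the entries that reference a given pod:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// logEntry declares only the fields we filter on;
// json.Unmarshal ignores any other fields in the line.
type logEntry struct {
	Msg string `json:"msg"`
	Pod *struct {
		Name      string `json:"name"`
		Namespace string `json:"namespace"`
	} `json:"pod"`
}

// filterByPod returns the messages of log lines that reference the given pod.
func filterByPod(lines []string, namespace, name string) []string {
	var msgs []string
	for _, line := range lines {
		var e logEntry
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			continue // skip lines that are not valid JSON
		}
		if e.Pod != nil && e.Pod.Namespace == namespace && e.Pod.Name == name {
			msgs = append(msgs, e.Msg)
		}
	}
	return msgs
}

func main() {
	lines := []string{
		`{"ts":1580306777.04728,"msg":"Pod status updated","pod":{"name":"coredns","namespace":"kube-system"},"status":"ready"}`,
		`{"ts":1580306778.1,"msg":"Node ready","node":{"name":"node-1"}}`,
	}
	fmt.Println(filterByPod(lines, "kube-system", "coredns")) // [Pod status updated]
}
```

A regex-based equivalent for the unstructured format would have to guess at quoting and escaping; with the JSON format, the parse is exact.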
You can also find logs indicating how the scheduler was scheduling the pod, how the pod was created, the health probes of the pod, and all other changes in the lifecycle of the pod.
+
+Suppose you are debugging an issue with a pod. With structured logs, you can filter to only those log entries referencing the pod of interest, rather than needing to scan through potentially thousands of log lines to find the relevant ones.
+
+Not only are structured logs more useful for manual debugging of issues, they also enable richer features like automated pattern recognition within logs or tighter correlation of log and trace data.
+
+Finally, structured logs can help reduce storage costs for logs because most storage systems are more efficiently able to compress structured key=value data than unstructured strings.
+
+## Get Involved
+
+While we have updated over 99% of the log entries by log volume in a typical deployment, there are still thousands of logs to be updated. Pick a file or directory that you would like to improve and [migrate existing log calls to use structured logs](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md). It's a great and easy way to make your first contribution to Kubernetes!
diff --git a/content/en/blog/_posts/2020-09-16-gsoc‘20 -building-operators-for-cluster-addons.md b/content/en/blog/_posts/2020-09-16-gsoc‘20 -building-operators-for-cluster-addons.md
new file mode 100644
index 0000000000..377fb5fbff
--- /dev/null
+++ b/content/en/blog/_posts/2020-09-16-gsoc‘20 -building-operators-for-cluster-addons.md
@@ -0,0 +1,118 @@
+---
+layout: blog
+title: "GSoC 2020 - Building operators for cluster addons"
+date: 2020-09-16
+slug: gsoc20-building-operators-for-cluster-addons
+---
+
+**Author**: Somtochi Onyekwere
+
+# Introduction
+
+[Google Summer of Code](https://summerofcode.withgoogle.com/) is a global program that is geared towards introducing students to open source.
Students are matched with open-source organizations to work with them for three months during the summer.
+
+My name is Somtochi Onyekwere from the Federal University of Technology, Owerri (Nigeria), and this year I was given the opportunity to work with Kubernetes (under the CNCF organization), which led to an amazing summer spent learning, contributing, and interacting with the community.
+
+Specifically, I worked on the _Cluster Addons: Package all the things!_ project. The project focused on building operators for better management of various cluster addons, extending the tooling for building these operators, and making the creation of these operators a smooth process.
+
+# Background
+
+Kubernetes has progressed greatly in the past few years, with a flourishing community and a large number of contributors. The codebase is gradually moving away from the monolithic structure, where all the code resides in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository, to being split into multiple sub-projects. Part of the focus of cluster-addons is to make some of these sub-projects work together in an easy-to-assemble, self-monitoring, self-healing and Kubernetes-native way. It enables them to work seamlessly without human intervention.
+
+The community is exploring the use of operators as a mechanism to monitor various resources in the cluster and properly manage these resources. In addition, operators provide self-healing, and they are a Kubernetes-native pattern that can encode how best these addons work and manage them properly.
+
+What are cluster addons? Cluster addons are a collection of resources (like Services and Deployments) that are used to give a Kubernetes cluster additional functionality. They range from things as simple as the Kubernetes Dashboard (for visualization) to more complex ones like Calico (for networking). These addons are essential to the different applications running in the cluster and to the cluster itself.
The addon operator provides a nicer way of managing these addons and understanding the health and status of the various resources that comprise the addon. You can get a deeper overview in this [article](https://kubernetes.io/docs/concepts/overview/components/#addons).
+
+Operators are custom controllers with custom resource definitions that encode application-specific knowledge and are used for managing complex stateful applications. They are a widely accepted pattern. Managing addons via operators, with these operators encoding knowledge of how best the addons work, introduces a lot of advantages while setting standards that are easy to follow and scale. This [article](https://kubernetes.io/docs/concepts/extend-kubernetes/operator) does a good job of explaining operators.
+
+The addon operators can solve a lot of problems, but they have their challenges. Those under the [cluster-addons project](https://github.com/kubernetes-sigs/cluster-addons) had missing pieces and were still a proof of concept. Generating the RBAC configuration for the operators was a pain, and sometimes the operators were given too much privilege. The operators weren’t very extensible, as they only pulled manifests from local filesystems or HTTP(S) servers, and a lot of simple addons were generating the same code.
+I spent the summer working on these issues, looking at them with fresh eyes and coming up with solutions for both the known and unknown issues.
+
+# Various additions to kubebuilder-declarative-pattern
+
+The [kubebuilder-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern) (from here on referred to as KDP) repo is an extra layer of addon-specific tooling on top of the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) SDK, enabled by passing the experimental `--pattern=addon` flag to the `kubebuilder create` command. Together, they create the base code for the addon operator.
During the internship, I worked on a couple of features in KDP and cluster-addons.
+
+## Operator version checking
+Enabling version checks for operators helps make upgrades and downgrades across different versions of an addon safer, even when the operator has complex logic. It is a way of matching the version of an addon to the version of the operator that knows how to manage it well. Most addons have different versions, and these versions might need to be managed differently. This feature checks the custom resource for the `addons.k8s.io/min-operator-version` annotation, which states the minimum operator version needed to manage that version of the addon, and compares it against the operator's version. If the operator version is below the minimum version required, the operator pauses with an error telling the user that the version of the operator is too low. This helps to ensure that the correct operator is being used for the addon.
+
+## Git repository for storing the manifests
+Previously, there was support for only local file directories and HTTPS repositories for storing manifests. Giving creators of addon operators the ability to store manifests in a Git repository enables faster development and version control. When starting the controller, you can pass a flag to specify the location of your channels directory. The channels directory contains the manifests for different versions; the controller pulls the manifest from this directory and applies it to the cluster. During the internship period, I extended this to include Git repositories.
+
+## Annotations to temporarily disable reconciliation
+The reconciliation loop that ensures that the desired state matches the actual state prevents modification of objects in the cluster. This makes it hard to experiment or investigate what might be wrong in the cluster, as any changes made are promptly reverted.
I resolved this by allowing users to place an `addons.k8s.io/ignore` annotation on the resource that they don’t want the controller to reconcile. The controller checks for this annotation and doesn’t reconcile that object. To resume reconciliation, the annotation can be removed from the resource.
+
+## Unstructured support in kubebuilder-declarative-pattern
+One of the operators that I worked on is a generic controller that could manage more than one cluster addon, as long as the addon did not require extra configuration. To do this, the operator couldn’t use a particular type and needed the kubebuilder-declarative-pattern repo to support using the [unstructured.Unstructured](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured#Unstructured) type. There were various functions in kubebuilder-declarative-pattern that couldn’t handle this type and returned an error if the object passed in was not of type `addonsv1alpha1.CommonObject`. These functions were modified to handle both `unstructured.Unstructured` and `addonsv1alpha1.CommonObject`.
+
+# Tools and CLI programs
+There were also some command-line programs I wrote that could be used to make working with addon operators easier. Most of them have uses outside the addon operators, as they try to solve a specific problem that could surface anywhere while working with Kubernetes. I encourage you to [check them out](https://github.com/kubernetes-sigs/cluster-addons/tree/master/tools) when you have the chance!
+
+## RBAC Generator
+One of the biggest concerns with the operator was RBAC. You had to manually look through the manifest and add an RBAC rule for each resource, as the operator needs RBAC permissions to create, get, update and delete the resources in the manifest when running in-cluster. Building the [RBAC generator](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/rbac-gen) automated the process of writing the RBAC roles and role bindings. The function of the RBAC generator is simple.
It accepts the file name of the manifest as a flag. Then, it parses the manifest, gets the API group and resource name of each resource, and adds them to a role. It outputs the role and role binding to stdout, or to a file if the `--out` flag is passed.
+
+Additionally, the tool enables you to split the RBAC by separating out the cluster roles in the manifest. This lessens the security concern of an over-privileged operator, which would otherwise need all the permissions that the cluster role grants. If you want to apply the cluster role yourself and not give the operator these permissions, you can pass in a `--supervisory` boolean flag so that the generator does not add these permissions to the role. The CLI program resides [here](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/rbac-gen).
+
+## Kubectl Ownerref
+It is hard to find out at a glance which objects were created by an addon custom resource. This kubectl plugin alleviates that pain by displaying all the objects in the cluster that a resource has ownerrefs on. You simply pass the kind and the name of the resource as arguments to the program, and it checks the cluster for the objects and gives the kind, name, and namespace of each object it finds. It can be useful for getting a general overview of all the objects that the controller is reconciling: just pass in the name and kind of the custom resource. The CLI program resides [here](https://github.com/kubernetes-sigs/cluster-addons/tree/master/tools/kubectl-ownerref).
+
+# Addon Operators
+To fully understand addon operators and make changes to how they are being created, you have to try creating and using them. Part of the summer was spent building operators for some popular addons like the Kubernetes Dashboard, flannel, NodeLocalDNS and so on. Please check the [cluster-addons](https://github.com/kubernetes-sigs/cluster-addons) repository for the different addon operators. In this section, I will highlight one that is a little different from the others.
+
+## Generic Controller
+The generic controller can be shared between addons that don’t require much configuration. This minimizes resource consumption on the cluster, as it reduces the number of controllers that need to run. Also, instead of building your own operator, you can just use the generic controller, and whenever you feel that your needs have grown and you need a more complex operator, you can scaffold the code with kubebuilder and continue from where the generic operator stopped. To use the generic controller, you can generate the CustomResourceDefinition (CRD) using the [generic-addon](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/generic-addon/README.md) tool. You pass in the kind, the group, and the location of your channels directory (it could be a Git repository too!). The tool generates the CRD, the RBAC manifests, and two custom resources for you.
+
+The process is as follows:
+- Create the Generic CRD
+- Generate all the manifests needed with the [`generic-addon`](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/generic-addon/README.md) tool.
+
+This tool creates:
+1. The CRD for your addon
+2. The RBAC rules for the CustomResourceDefinitions
+3. The RBAC rules for applying the manifests
+4. The custom resource for your addon
+5. A Generic custom resource
+
+The Generic custom resource looks like this:
+
+```yaml
+apiVersion: addons.x-k8s.io/v1alpha1
+kind: Generic
+metadata:
+  name: generic-sample
+spec:
+  objectKind:
+    kind: NodeLocalDNS
+    version: "v1alpha1"
+    group: addons.x-k8s.io
+  channel: "../nodelocaldns/channels"
+```
+
+Apply these manifests, making sure to apply the CRD before the CR.
+Then, run the Generic controller, either on your machine or in-cluster.
+
+If you are interested in building an operator, please check out [this guide](https://github.com/kubernetes-sigs/cluster-addons/blob/master/dashboard/README.md).
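The apply sequence above could look like this in practice (the file names are hypothetical; use whatever files the generic-addon tool emitted for you):

```shell
# Order matters: a CustomResourceDefinition must exist in the cluster
# before any custom resource of that kind can be applied.
kubectl apply -f addon-crd.yaml      # 1. the CRD for your addon
kubectl apply -f generic-crd.yaml    # 2. the Generic CRD
kubectl apply -f rbac.yaml           # 3. RBAC rules for applying the manifests
kubectl apply -f addon-cr.yaml       # 4. the custom resource for your addon
kubectl apply -f generic-cr.yaml     # 5. the Generic custom resource shown above
```

After the resources are in place, start the Generic controller and it will begin reconciling the addon from the configured channel.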
+
+# Relevant Links
+- [Detailed breakdown of work done during the internship](https://github.com/SomtochiAma/gsoc-2020-meta-k8s)
+- [Addon Operator (KEP)](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/addons/0035-20190128-addons-via-operators.md)
+- [Original GSoC Issue](https://github.com/kubernetes-sigs/cluster-addons/issues/39)
+- [Proposal Submitted for GSoC](https://github.com/SomtochiAma/gsoc-2020-meta-k8s/blob/master/GSoC%202020%20PROPOSAL%20-%20PACKAGE%20ALL%20THINGS.pdf)
+- [All commits to kubernetes-sigs/cluster-addons](https://github.com/kubernetes-sigs/cluster-addons/commits?author=SomtochiAma)
+- [All commits to kubernetes-sigs/kubebuilder-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/commits?author=SomtochiAma)
+
+# Further Work
+A lot of work was done on the cluster addons during the GSoC period, but we need more people building operators and using them in the cluster. We need wider adoption in the community. Build operators for your favourite addons and tell us how it went and whether you ran into any issues. Check out this [README.md](https://github.com/kubernetes-sigs/cluster-addons/blob/master/dashboard/README.md) to get started.
+
+# Appreciation
+I really want to thank my mentors [Justin Santa Barbara](https://github.com/justinsb) (Google) and [Leigh Capili](https://github.com/stealthybox) (Weaveworks). My internship was awesome because they were awesome. They set a golden standard for what mentorship should be. They were accessible and always available to clear up any confusion. I think what I liked best was that they didn’t just dish out tasks; instead, we had open discussions about what was wrong and what could be improved. They are really the best and I hope I get to work with them again!
+Also, I want to say a huge thanks to [Lubomir I. Ivanov](https://github.com/neolit123) for reviewing this blog post!
+
+# Conclusion
+So far I have learnt a lot about Go, the internals of Kubernetes, and operators. I want to conclude by encouraging people to contribute to open source (especially Kubernetes :)) regardless of their level of experience. It has been a well-rounded experience for me and I have come to love the community. It is a great initiative and a great way to learn and meet awesome people. Special shoutout to Google for organizing this program.
+
+If you are interested in cluster addons and finding out more about addon operators, you are welcome to join the [#cluster-addons](https://kubernetes.slack.com/messages/cluster-addons) channel on the Kubernetes Slack.
+
+---
+
+_[Somtochi Onyekwere](https://twitter.com/SomtochiAma) is a software engineer who loves contributing to open source and exploring cloud native solutions._
diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md
index 05a65a1efc..ac0afcc693 100644
--- a/content/en/docs/concepts/cluster-administration/addons.md
+++ b/content/en/docs/concepts/cluster-administration/addons.md
@@ -5,12 +5,12 @@ content_type: concept
+{{% thirdparty-content %}}
+
 Add-ons extend the functionality of Kubernetes. This page lists some of the available add-ons and links to their respective installation instructions.
 
-Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
-
 ## Networking and Network Policy
diff --git a/content/en/docs/concepts/cluster-administration/cloud-providers.md b/content/en/docs/concepts/cluster-administration/cloud-providers.md
deleted file mode 100644
index 89bd0be3f6..0000000000
--- a/content/en/docs/concepts/cluster-administration/cloud-providers.md
+++ /dev/null
@@ -1,362 +0,0 @@
----
-title: Cloud Providers
-content_type: concept
-weight: 30
----
-
-
-This page explains how to manage Kubernetes running on a specific
-cloud provider.
There are many other third-party cloud provider projects, but this list is specific to projects embedded within, or relied upon by Kubernetes itself. - - -### kubeadm -[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating kubernetes clusters. -kubeadm has configuration options to specify configuration information for cloud providers. For example a typical -in-tree cloud provider can be configured using kubeadm as shown below: - -```yaml -apiVersion: kubeadm.k8s.io/v1beta2 -kind: InitConfiguration -nodeRegistration: - kubeletExtraArgs: - cloud-provider: "openstack" - cloud-config: "/etc/kubernetes/cloud.conf" ---- -apiVersion: kubeadm.k8s.io/v1beta2 -kind: ClusterConfiguration -kubernetesVersion: v1.13.0 -apiServer: - extraArgs: - cloud-provider: "openstack" - cloud-config: "/etc/kubernetes/cloud.conf" - extraVolumes: - - name: cloud - hostPath: "/etc/kubernetes/cloud.conf" - mountPath: "/etc/kubernetes/cloud.conf" -controllerManager: - extraArgs: - cloud-provider: "openstack" - cloud-config: "/etc/kubernetes/cloud.conf" - extraVolumes: - - name: cloud - hostPath: "/etc/kubernetes/cloud.conf" - mountPath: "/etc/kubernetes/cloud.conf" -``` - -The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines -for the [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/), -[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) and the -[kubelet](/docs/reference/command-line-tools-reference/kubelet/). -The contents of the file specified in `--cloud-config` for each provider is documented below as well. 
- -For all external cloud providers, please follow the instructions on the individual repositories, -which are listed under their headings below, or one may view [the list of all repositories](https://github.com/kubernetes?q=cloud-provider-&type=&language=) - -## AWS -This section describes all the possible configurations which can -be used when running Kubernetes on Amazon Web Services. - -If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-aws](https://github.com/kubernetes/cloud-provider-aws#readme) - -### Node Name - -The AWS cloud provider uses the private DNS name of the AWS instance as the name of the Kubernetes Node object. - -### Load Balancers -You can setup [external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/) -to use specific features in AWS by configuring the annotations as shown below. - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: example - namespace: kube-system - labels: - run: example - annotations: - service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx #replace this value - service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http -spec: - type: LoadBalancer - ports: - - port: 443 - targetPort: 5556 - protocol: TCP - selector: - app: example -``` -Different settings can be applied to a load balancer service in AWS using _annotations_. The following describes the annotations supported on AWS ELBs: - -* `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`: Used to specify access log emit interval. -* `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`: Used on the service to enable or disable access logs. -* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`: Used to specify access log s3 bucket name. 
-* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`: Used to specify access log s3 bucket prefix. -* `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags`: Used on the service to specify a comma-separated list of key-value pairs which will be recorded as additional tags in the ELB. For example: `"Key1=Val1,Key2=Val2,KeyNoVal1=,KeyNoVal2"`. -* `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If `http` (default) or `https`, an HTTPS listener that terminates the connection and parses headers is created. If set to `ssl` or `tcp`, a "raw" SSL listener is used. If set to `http` and `aws-load-balancer-ssl-cert` is not used then a HTTP listener is used. -* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html) CertARN is an IAM or CM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`. -* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`: Used on the service to enable or disable connection draining. -* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout`: Used on the service to specify a connection draining timeout. -* `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`: Used on the service to specify the idle connection timeout. -* `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled`: Used on the service to enable or disable cross-zone load balancing. -* `service.beta.kubernetes.io/aws-load-balancer-security-groups`: Used to specify the security groups to be added to ELB created. This replaces all other security groups previously assigned to the ELB. 
Security groups defined here should not be shared between services. -* `service.beta.kubernetes.io/aws-load-balancer-extra-security-groups`: Used on the service to specify additional security groups to be added to ELB created -* `service.beta.kubernetes.io/aws-load-balancer-internal`: Used on the service to indicate that we want an internal ELB. -* `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol`: Used on the service to enable the proxy protocol on an ELB. Right now we only accept the value `*` which means enabling the proxy protocol on all ELB backends. In the future we could adjust this to allow setting the proxy protocol only on certain backends. -* `service.beta.kubernetes.io/aws-load-balancer-ssl-ports`: Used on the service to specify a comma-separated list of ports that will use SSL/HTTPS listeners. Defaults to `*` (all) - -The information for the annotations for AWS is taken from the comments on [aws.go](https://github.com/kubernetes/legacy-cloud-providers/blob/master/aws/aws.go) - -## Azure - -If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-azure](https://github.com/kubernetes/cloud-provider-azure#readme) - -### Node Name - -The Azure cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object. -Note that the Kubernetes Node name must match the Azure VM name. - -## GCE - -If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-gcp](https://github.com/kubernetes/cloud-provider-gcp#readme) - -### Node Name - -The GCE cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object. -Note that the first segment of the Kubernetes Node name must match the GCE instance name (e.g. 
a Node named `kubernetes-node-2.c.my-proj.internal` must correspond to an instance named `kubernetes-node-2`). - -## HUAWEI CLOUD - -If you wish to use the external cloud provider, its repository is [kubernetes-sigs/cloud-provider-huaweicloud](https://github.com/kubernetes-sigs/cloud-provider-huaweicloud). - -## OpenStack -This section describes all the possible configurations which can -be used when using OpenStack with Kubernetes. - -If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-openstack](https://github.com/kubernetes/cloud-provider-openstack#readme) - -### Node Name - -The OpenStack cloud provider uses the instance name (as determined from OpenStack metadata) as the name of the Kubernetes Node object. -Note that the instance name must be a valid Kubernetes Node name in order for the kubelet to successfully register its Node object. - -### Services - -The OpenStack cloud provider -implementation for Kubernetes supports the use of these OpenStack services from -the underlying cloud, where available: - -| Service | API Version(s) | Required | -|--------------------------|----------------|----------| -| Block Storage (Cinder) | V1†, V2, V3 | No | -| Compute (Nova) | V2 | No | -| Identity (Keystone) | V2‡, V3 | Yes | -| Load Balancing (Neutron) | V1§, V2 | No | -| Load Balancing (Octavia) | V2 | No | - -† Block Storage V1 API support is deprecated, Block Storage V3 API support was -added in Kubernetes 1.9. - -‡ Identity V2 API support is deprecated and will be removed from the provider in -a future release. As of the "Queens" release, OpenStack will no longer expose the -Identity V2 API. - -§ Load Balancing V1 API support was removed in Kubernetes 1.9. - -Service discovery is achieved by listing the service catalog managed by -OpenStack Identity (Keystone) using the `auth-url` provided in the provider -configuration. 
The provider will gracefully degrade in functionality when -OpenStack services other than Keystone are not available and simply disclaim -support for impacted features. Certain features are also enabled or disabled -based on the list of extensions published by Neutron in the underlying cloud. - -### cloud.conf -Kubernetes knows how to interact with OpenStack via the file cloud.conf. It is -the file that will provide Kubernetes with credentials and location for the OpenStack auth endpoint. -You can create a cloud.conf file by specifying the following details in it - -#### Typical configuration -This is an example of a typical configuration that touches the values that most -often need to be set. It points the provider at the OpenStack cloud's Keystone -endpoint, provides details for how to authenticate with it, and configures the -load balancer: - -```yaml -[Global] -username=user -password=pass -auth-url=https:///identity/v3 -tenant-id=c869168a828847f39f7f06edd7305637 -domain-id=2a73b8f597c04551a0fdc8e95544be8a - -[LoadBalancer] -subnet-id=6937f8fa-858d-4bc9-a3a5-18d2c957166a -``` - -##### Global -These configuration options for the OpenStack provider pertain to its global -configuration and should appear in the `[Global]` section of the `cloud.conf` -file: - -* `auth-url` (Required): The URL of the keystone API used to authenticate. On - OpenStack control panels, this can be found at Access and Security > API - Access > Credentials. -* `username` (Required): Refers to the username of a valid user set in keystone. -* `password` (Required): Refers to the password of a valid user set in keystone. -* `tenant-id` (Required): Used to specify the id of the project where you want - to create your resources. -* `tenant-name` (Optional): Used to specify the name of the project where you - want to create your resources. -* `trust-id` (Optional): Used to specify the identifier of the trust to use for - authorization. 
A trust represents a user's (the trustor) authorization to - delegate roles to another user (the trustee), and optionally allow the trustee - to impersonate the trustor. Available trusts are found under the - `/v3/OS-TRUST/trusts` endpoint of the Keystone API. -* `domain-id` (Optional): Used to specify the id of the domain your user belongs - to. -* `domain-name` (Optional): Used to specify the name of the domain your user - belongs to. -* `region` (Optional): Used to specify the identifier of the region to use when - running on a multi-region OpenStack cloud. A region is a general division of - an OpenStack deployment. Although a region does not have a strict geographical - connotation, a deployment can use a geographical name for a region identifier - such as `us-east`. Available regions are found under the `/v3/regions` - endpoint of the Keystone API. -* `ca-file` (Optional): Used to specify the path to your custom CA file. - - -When using Keystone V3 - which changes tenant to project - the `tenant-id` value -is automatically mapped to the project construct in the API. - -##### Load Balancer -These configuration options for the OpenStack provider pertain to the load -balancer and should appear in the `[LoadBalancer]` section of the `cloud.conf` -file: - -* `lb-version` (Optional): Used to override automatic version detection. Valid - values are `v1` or `v2`. Where no value is provided automatic detection will - select the highest supported version exposed by the underlying OpenStack - cloud. -* `use-octavia`(Optional): Whether or not to use Octavia for LoadBalancer type - of Service implementation instead of using Neutron-LBaaS. Default: true - Attention: Openstack CCM use Octavia as default load balancer implementation since v1.17.0 -* `subnet-id` (Optional): Used to specify the id of the subnet you want to - create your loadbalancer on. Can be found at Network > Networks. Click on the - respective network to get its subnets. 
-* `floating-network-id` (Optional): If specified, will create a floating IP for
- the load balancer.
-* `lb-method` (Optional): Used to specify an algorithm by which load will be
- distributed amongst members of the load balancer pool. The value can be
- `ROUND_ROBIN`, `LEAST_CONNECTIONS`, or `SOURCE_IP`. The default behavior if
- none is specified is `ROUND_ROBIN`.
-* `lb-provider` (Optional): Used to specify the provider of the load balancer.
- If not specified, the default provider service configured in Neutron will be
- used.
-* `create-monitor` (Optional): Indicates whether or not to create a health
- monitor for the Neutron load balancer. Valid values are `true` and `false`.
- The default is `false`. When `true` is specified then `monitor-delay`,
- `monitor-timeout`, and `monitor-max-retries` must also be set.
-* `monitor-delay` (Optional): The time between sending probes to
- members of the load balancer. Ensure that you specify a valid time unit. The
- valid time units are "ns", "us" (or "µs"), "ms", "s", "m", and "h".
-* `monitor-timeout` (Optional): Maximum time for a monitor to wait
- for a ping reply before it times out. The value must be less than the delay
- value. Ensure that you specify a valid time unit. The valid time units are
- "ns", "us" (or "µs"), "ms", "s", "m", and "h".
-* `monitor-max-retries` (Optional): Number of permissible ping failures before
- changing the load balancer member's status to INACTIVE. Must be a number
- between 1 and 10.
-* `manage-security-groups` (Optional): Determines whether or not the load
- balancer should automatically manage the security group rules. Valid values
- are `true` and `false`. The default is `false`. When `true` is specified,
- `node-security-group` must also be supplied.
-* `node-security-group` (Optional): ID of the security group to manage.
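Pulling the load balancer options above together, a fuller `[LoadBalancer]` section of `cloud.conf` might look like the following sketch. The `subnet-id` value is reused from the typical configuration earlier; the other IDs and timings are placeholder values for illustration, not recommendations:

```yaml
[LoadBalancer]
lb-version=v2
use-octavia=true
subnet-id=6937f8fa-858d-4bc9-a3a5-18d2c957166a
floating-network-id=<your-floating-network-id>
lb-method=ROUND_ROBIN
create-monitor=true
monitor-delay=5s
monitor-timeout=3s
monitor-max-retries=3
manage-security-groups=true
node-security-group=<your-node-security-group-id>
```

Note that `monitor-timeout` (3s) is kept below `monitor-delay` (5s), as required, and because `create-monitor` and `manage-security-groups` are `true`, the monitor settings and `node-security-group` are supplied.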
-
-##### Block Storage
-These configuration options for the OpenStack provider pertain to block storage
-and should appear in the `[BlockStorage]` section of the `cloud.conf` file:
-
-* `bs-version` (Optional): Used to override automatic version detection. Valid
- values are `v1`, `v2`, `v3` and `auto`. When `auto` is specified, automatic
- detection will select the highest supported version exposed by the underlying
- OpenStack cloud. The default value if none is provided is `auto`.
-* `trust-device-path` (Optional): In most scenarios the block device names
- provided by Cinder (e.g. `/dev/vda`) cannot be trusted. This boolean toggles
- this behavior. Setting it to `true` results in trusting the block device names
- provided by Cinder. The default value of `false` results in the discovery of
- the device path based on its serial number and `/dev/disk/by-id` mapping and is
- the recommended approach.
-* `ignore-volume-az` (Optional): Used to influence availability zone use when
- attaching Cinder volumes. When Nova and Cinder have different availability
- zones, this should be set to `true`. This is most commonly the case where
- there are many Nova availability zones but only one Cinder availability zone.
- The default value is `false` to preserve the behavior used in earlier
- releases, but may change in the future.
-* `node-volume-attach-limit` (Optional): Maximum number of volumes that can be
- attached to a node; the default is 256 for Cinder.
-
-If deploying Kubernetes versions <= 1.8 on an OpenStack deployment that uses
-paths rather than ports to differentiate between endpoints, it may be necessary
-to explicitly set the `bs-version` parameter. A path-based endpoint is of the
-form `http://foo.bar/volume` while a port-based endpoint is of the form
-`http://foo.bar:xxx`.
-
-In environments that use path-based endpoints, if Kubernetes is using the older
-auto-detection logic, a `BS API version autodetection failed.` error will be
-returned when attempting volume detachment. To work around this issue, it is
-possible to force the use of Cinder API version 2 by adding this to the cloud
-provider configuration:
-
-```yaml
-[BlockStorage]
-bs-version=v2
-```
-
-##### Metadata
-These configuration options for the OpenStack provider pertain to metadata and
-should appear in the `[Metadata]` section of the `cloud.conf` file:
-
-* `search-order` (Optional): This configuration key influences the way that the
- provider retrieves metadata relating to the instance(s) in which it runs. The
- default value of `configDrive,metadataService` results in the provider
- retrieving metadata relating to the instance from the config drive first if
- available and then the metadata service. Alternative values are:
- * `configDrive` - Only retrieve instance metadata from the configuration
- drive.
- * `metadataService` - Only retrieve instance metadata from the metadata
- service.
- * `metadataService,configDrive` - Retrieve instance metadata from the metadata
- service first if available, then the configuration drive.
-
- Influencing this behavior may be desirable as the metadata on the
- configuration drive may grow stale over time, whereas the metadata service
- always provides the most up-to-date view. However, not all OpenStack clouds
- provide both a configuration drive and a metadata service; only one or the
- other may be available, which is why the default is to check both.
-
-##### Route
-
-These configuration options for the OpenStack provider pertain to the [kubenet]
-Kubernetes network plugin and should appear in the `[Route]` section of the
-`cloud.conf` file:
-
-* `router-id` (Optional): If the underlying cloud's Neutron deployment supports
- the `extraroutes` extension, use `router-id` to specify a router to add
- routes to.
The router chosen must span the private networks containing your - cluster nodes (typically there is only one node network, and this value should be - the default router for the node network). This value is required to use - [kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet) - on OpenStack. - -[kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet - -## vSphere - -{{< tabs name="vSphere cloud provider" >}} -{{% tab name="vSphere >= 6.7U3" %}} -For all vSphere deployments on vSphere >= 6.7U3, the [external vSphere cloud provider](https://github.com/kubernetes/cloud-provider-vsphere), along with the [vSphere CSI driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) is recommended. See [Deploying a Kubernetes Cluster on vSphere with CSI and CPI](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html) for a quick start guide. -{{% /tab %}} -{{% tab name="vSphere < 6.7U3" %}} -If you are running vSphere < 6.7U3, the in-tree vSphere cloud provider is recommended. See [Running a Kubernetes Cluster on vSphere with kubeadm](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/k8s-vcp-on-vsphere-with-kubeadm.html) for a quick start guide. -{{% /tab %}} -{{< /tabs >}} - -For in-depth documentation on the vSphere cloud provider, visit the [vSphere cloud provider docs site](https://cloud-provider-vsphere.sigs.k8s.io). 
diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index 2d2abb7b26..5b5653260a 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -495,7 +495,7 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves system, system-nodes, 12, 0, system:node:127.0.0.1, 2020-07-23T15:26:57.179170694Z, ``` - In addition to the queued requests, the output includeas one phantom line for each priority level that is exempt from limitation. + In addition to the queued requests, the output includes one phantom line for each priority level that is exempt from limitation. You can get a more detailed listing with a command like this: ```shell diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md index ff30e60b12..3b692fb448 100644 --- a/content/en/docs/concepts/cluster-administration/networking.md +++ b/content/en/docs/concepts/cluster-administration/networking.md @@ -79,6 +79,8 @@ as an introduction to various technologies and serves as a jumping-off point. The following networking options are sorted alphabetically - the order does not imply any preferential status. +{{% thirdparty-content %}} + ### ACI [Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers. [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI. An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf). 
@@ -112,7 +114,7 @@ Additionally, the CNI can be run alongside [Calico for network policy enforcemen [Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) is an [open source](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as VNet) providing network performance at par with VMs. Pods can connect to peered VNet and to on-premises over Express Route or site-to-site VPN and are also directly reachable from these networks. Pods can access Azure services, such as storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the Network Interface of a Kubernetes node. Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni). - + ### Big Cloud Fabric from Big Switch Networks @@ -313,5 +315,4 @@ to run, and in both cases, the network provides one IP address per pod - as is s The early design of the networking model and its rationale, and some future plans are described in more detail in the -[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md). - +[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md). 
\ No newline at end of file diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md index f40c99fbc4..7cefb2e6cd 100644 --- a/content/en/docs/concepts/configuration/configmap.md +++ b/content/en/docs/concepts/configuration/configmap.md @@ -213,6 +213,9 @@ when new keys are projected to the Pod can be as long as the kubelet sync period propagation delay, where the cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero correspondingly). +ConfigMaps consumed as environment variables are not updated automatically and require a pod restart. +## Immutable ConfigMaps {#configmap-immutable} + {{< feature-state for_k8s_version="v1.19" state="beta" >}} The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set @@ -224,9 +227,10 @@ data has the following advantages: - improves performance of your cluster by significantly reducing load on kube-apiserver, by closing watches for config maps marked as immutable. -To use this feature, enable the `ImmutableEphemeralVolumes` -[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set -your Secret or ConfigMap `immutable` field to `true`. For example: +This feature is controlled by the `ImmutableEphemeralVolumes` [feature +gate](/docs/reference/command-line-tools-reference/feature-gates/), +which is enabled by default since v1.19. You can create an immutable +ConfigMap by setting the `immutable` field to `true`. 
For example, ```yaml apiVersion: v1 kind: ConfigMap diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index 1f907ad0d0..e314a2c9d7 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -134,7 +134,6 @@ spec: containers: - name: app image: images.my-company.example/app:v4 - env: resources: requests: memory: "64Mi" diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index 8c4dbcadb0..8db4cfa43a 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -37,6 +37,12 @@ its containers. - As [container environment variable](#using-secrets-as-environment-variables). - By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod. +The name of a Secret object must be a valid +[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +The keys of `data` and `stringData` must consist of alphanumeric characters, +`-`, `_` or `.`. + ### Built-in Secrets #### Service accounts automatically create and attach Secrets with API credentials @@ -52,401 +58,15 @@ this is the recommended workflow. See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/) documentation for more information on how service accounts work. -### Creating your own Secrets +### Creating a Secret -#### Creating a Secret Using `kubectl` +There are several options to create a Secret: -Secrets can contain user credentials required by Pods to access a database. -For example, a database connection string -consists of a username and password. You can store the username in a file `./username.txt` -and the password in a file `./password.txt` on your local machine. 
+- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) -```shell -# Create files needed for the rest of the example. -echo -n 'admin' > ./username.txt -echo -n '1f2d1e2e67df' > ./password.txt -``` - -The `kubectl create secret` command packages these files into a Secret and creates -the object on the API server. -The name of a Secret object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). - -```shell -kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt -``` - -The output is similar to: - -``` -secret "db-user-pass" created -``` - -Default key name is the filename. You may optionally set the key name using `[--from-file=[key=]source]`. - -```shell -kubectl create secret generic db-user-pass --from-file=username=./username.txt --from-file=password=./password.txt -``` - -{{< note >}} -Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping. -In most shells, the easiest way to escape the password is to surround it with single quotes (`'`). -For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way: - -```shell -kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb=' -``` - -You do not need to escape special characters in passwords from files (`--from-file`). 
-{{< /note >}}
-
-You can check that the secret was created:
-
-```shell
-kubectl get secrets
-```
-
-The output is similar to:
-
-```
-NAME           TYPE     DATA   AGE
-db-user-pass   Opaque   2      51s
-```
-
-You can view a description of the secret:
-
-```shell
-kubectl describe secrets/db-user-pass
-```
-
-The output is similar to:
-
-```
-Name:           db-user-pass
-Namespace:      default
-Labels:         <none>
-Annotations:    <none>
-
-Type:           Opaque
-
-Data
-====
-password.txt:   12 bytes
-username.txt:   5 bytes
-```
-
-{{< note >}}
-The commands `kubectl get` and `kubectl describe` avoid showing the contents of a secret by
-default. This is to protect the secret from being exposed accidentally to an onlooker,
-or from being stored in a terminal log.
-{{< /note >}}
-
-See [decoding a secret](#decoding-a-secret) to learn how to view the contents of a secret.
-
-#### Creating a Secret manually
-
-You can also create a Secret in a file first, in JSON or YAML format,
-and then create that object.
-The name of a Secret object must be a valid
-[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
-The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
-contains two maps:
-`data` and `stringData`. The `data` field is used to store arbitrary data, encoded using
-base64. The `stringData` field is provided for convenience, and allows you to provide
-secret data as unencoded strings.
- -For example, to store two strings in a Secret using the `data` field, convert -the strings to base64 as follows: - -```shell -echo -n 'admin' | base64 -``` - -The output is similar to: - -``` -YWRtaW4= -``` - -```shell -echo -n '1f2d1e2e67df' | base64 -``` - -The output is similar to: - -``` -MWYyZDFlMmU2N2Rm -``` - -Write a Secret that looks like this: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -``` - -Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): - -```shell -kubectl apply -f ./secret.yaml -``` - -The output is similar to: - -``` -secret "mysecret" created -``` - -For certain scenarios, you may wish to use the `stringData` field instead. This -field allows you to put a non-base64 encoded string directly into the Secret, -and the string will be encoded for you when the Secret is created or updated. - -A practical example of this might be where you are deploying an application -that uses a Secret to store a configuration file, and you want to populate -parts of that configuration file during your deployment process. - -For example, if your application uses the following configuration file: - -```yaml -apiUrl: "https://my.api.com/api/v1" -username: "user" -password: "password" -``` - -You could store this in a Secret using the following definition: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -stringData: - config.yaml: |- - apiUrl: "https://my.api.com/api/v1" - username: {{username}} - password: {{password}} -``` - -Your deployment tool could then replace the `{{username}}` and `{{password}}` -template variables before running `kubectl apply`. - -The `stringData` field is a write-only convenience field. It is never output when -retrieving Secrets. 
For example, if you run the following command: - -```shell -kubectl get secret mysecret -o yaml -``` - -The output is similar to: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - creationTimestamp: 2018-11-15T20:40:59Z - name: mysecret - namespace: default - resourceVersion: "7225" - uid: c280ad2e-e916-11e8-98f2-025000000001 -type: Opaque -data: - config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 -``` - -If a field, such as `username`, is specified in both `data` and `stringData`, -the value from `stringData` is used. For example, the following Secret definition: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - username: YWRtaW4= -stringData: - username: administrator -``` - -Results in the following Secret: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - creationTimestamp: 2018-11-15T20:46:46Z - name: mysecret - namespace: default - resourceVersion: "7579" - uid: 91460ecb-e917-11e8-98f2-025000000001 -type: Opaque -data: - username: YWRtaW5pc3RyYXRvcg== -``` - -Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. - -The keys of `data` and `stringData` must consist of alphanumeric characters, -'-', '_' or '.'. - -{{< note >}} -The serialized JSON and YAML values of secret data are -encoded as base64 strings. Newlines are not valid within these strings and must -be omitted. When using the `base64` utility on Darwin/macOS, users should avoid -using the `-b` option to split long lines. Conversely, Linux users *should* add -the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if -the `-w` option is not available. -{{< /note >}} - -#### Creating a Secret from a generator - -Since Kubernetes v1.14, `kubectl` supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/). Kustomize provides resource Generators to -create Secrets and ConfigMaps. 
The Kustomize generators should be specified in a
-`kustomization.yaml` file inside a directory. After generating the Secret,
-you can create the Secret on the API server with `kubectl apply`.
-
-#### Generating a Secret from files
-
-You can generate a Secret by defining a `secretGenerator` from the
-files ./username.txt and ./password.txt:
-
-```shell
-cat <<EOF >./kustomization.yaml
-secretGenerator:
-- name: db-user-pass
-  files:
-  - username.txt
-  - password.txt
-EOF
-```
-
-Apply the directory containing the `kustomization.yaml` to create the Secret.
-
-```shell
-kubectl apply -k .
-```
-
-The output is similar to:
-
-```
-secret/db-user-pass-96mffmfh4k created
-```
-
-You can check that the secret was created:
-
-```shell
-kubectl get secrets
-```
-
-The output is similar to:
-
-```
-NAME                      TYPE     DATA   AGE
-db-user-pass-96mffmfh4k   Opaque   2      51s
-```
-
-You can view a description of the secret:
-
-```shell
-kubectl describe secrets/db-user-pass-96mffmfh4k
-```
-
-The output is similar to:
-
-```
-Name:           db-user-pass-96mffmfh4k
-Namespace:      default
-Labels:         <none>
-Annotations:    <none>
-
-Type:           Opaque
-
-Data
-====
-password.txt:   12 bytes
-username.txt:   5 bytes
-```
-
-#### Generating a Secret from string literals
-
-You can create a Secret by defining a `secretGenerator`
-from literals `username=admin` and `password=secret`:
-
-```shell
-cat <<EOF >./kustomization.yaml
-secretGenerator:
-- name: db-user-pass
-  literals:
-  - username=admin
-  - password=secret
-EOF
-```
-
-Apply the directory containing the `kustomization.yaml` to create the Secret.
-
-```shell
-kubectl apply -k .
-```
-
-The output is similar to:
-
-```
-secret/db-user-pass-dddghtt9b5 created
-```
-
-{{< note >}}
-When a Secret is generated, the Secret name is created by hashing
-the Secret data and appending this value to the name. This ensures that
-a new Secret is generated each time the data is modified.
-{{< /note >}}
-
-#### Decoding a Secret
-
-Secrets can be retrieved by running `kubectl get secret`.
-For example, you can view the Secret created in the previous section by -running the following command: - -```shell -kubectl get secret mysecret -o yaml -``` - -The output is similar to: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - creationTimestamp: 2016-01-22T18:41:56Z - name: mysecret - namespace: default - resourceVersion: "164619" - uid: cfee02d6-c137-11e5-8d73-42010af00002 -type: Opaque -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -``` - -Decode the `password` field: - -```shell -echo 'MWYyZDFlMmU2N2Rm' | base64 --decode -``` - -The output is similar to: - -``` -1f2d1e2e67df -``` - -#### Editing a Secret +### Editing a Secret An existing Secret may be edited with the following command: @@ -717,37 +337,6 @@ A container using a Secret as a Secret updates. {{< /note >}} -{{< feature-state for_k8s_version="v1.19" state="beta" >}} - -The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set -individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets -(at least tens of thousands of unique Secret to Pod mounts), preventing changes to their -data has the following advantages: - -- protects you from accidental (or unwanted) updates that could cause applications outages -- improves performance of your cluster by significantly reducing load on kube-apiserver, by -closing watches for secrets marked as immutable. - -To use this feature, enable the `ImmutableEphemeralVolumes` -[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set -your Secret or ConfigMap `immutable` field to `true`. For example: -```yaml -apiVersion: v1 -kind: Secret -metadata: - ... -data: - ... -immutable: true -``` - -{{< note >}} -Once a Secret or ConfigMap is marked as immutable, it is _not_ possible to revert this change -nor to mutate the contents of the `data` field. You can only delete and recreate the Secret. 
-Existing Pods maintain a mount point to the deleted Secret - it is recommended to recreate -these pods. -{{< /note >}} - ### Using Secrets as environment variables To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}} @@ -808,6 +397,40 @@ The output is similar to: 1f2d1e2e67df ``` +## Immutable Secrets {#secret-immutable} + +{{< feature-state for_k8s_version="v1.19" state="beta" >}} + +The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set +individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets +(at least tens of thousands of unique Secret to Pod mounts), preventing changes to their +data has the following advantages: + +- protects you from accidental (or unwanted) updates that could cause applications outages +- improves performance of your cluster by significantly reducing load on kube-apiserver, by +closing watches for secrets marked as immutable. + +This feature is controlled by the `ImmutableEphemeralVolumes` [feature +gate](/docs/reference/command-line-tools-reference/feature-gates/), +which is enabled by default since v1.19. You can create an immutable +Secret by setting the `immutable` field to `true`. For example, +```yaml +apiVersion: v1 +kind: Secret +metadata: + ... +data: + ... +immutable: true +``` + +{{< note >}} +Once a Secret or ConfigMap is marked as immutable, it is _not_ possible to revert this change +nor to mutate the contents of the `data` field. You can only delete and recreate the Secret. +Existing Pods maintain a mount point to the deleted Secret - it is recommended to recreate +these pods. +{{< /note >}} + ### Using imagePullSecrets The `imagePullSecrets` field is a list of references to secrets in the same namespace. @@ -1272,3 +895,11 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa by impersonating the kubelet. 
It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node. + + +## {{% heading "whatsnext" %}} + +- Learn how to [manage Secret using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) + diff --git a/content/en/docs/concepts/containers/_index.md b/content/en/docs/concepts/containers/_index.md index edee4eccc4..746e1b7fc9 100644 --- a/content/en/docs/concepts/containers/_index.md +++ b/content/en/docs/concepts/containers/_index.md @@ -31,7 +31,7 @@ and default values for any essential settings. By design, a container is immutable: you cannot change the code of a container that is already running. If you have a containerized application -and want to make changes, you need to build a new container that includes +and want to make changes, you need to build a new image that includes the change, then recreate the container to start from the updated image. ## Container runtimes diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index ddd294feda..e681f1a351 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -63,7 +63,7 @@ When `imagePullPolicy` is defined without a specific value, it is also set to `A ## Multi-architecture Images with Manifests -As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of an container. 
The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using. +As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using. Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes. diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md index aa09776753..4338c932d5 100644 --- a/content/en/docs/concepts/overview/kubernetes-api.md +++ b/content/en/docs/concepts/overview/kubernetes-api.md @@ -16,30 +16,23 @@ card: The core of Kubernetes' {{< glossary_tooltip text="control plane" term_id="control-plane" >}} is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The API server -exposes an HTTP API that lets end users, different parts of your cluster, and external components -communicate with one another. +exposes an HTTP API that lets end users, different parts of your cluster, and +external components communicate with one another. The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API (for example: Pods, Namespaces, ConfigMaps, and Events). 
-API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/). +Most operations can be performed through the +[kubectl](/docs/reference/kubectl/overview/) command-line interface or other +command-line tools, such as +[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), which in turn use the +API. However, you can also access the API directly using REST calls. + +Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/) +if you are writing an application using the Kubernetes API. -## API changes - -Any system that is successful needs to grow and change as new use cases emerge or existing ones change. -Therefore, Kubernetes has design features to allow the Kubernetes API to continuously change and grow. -The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that -compatibility for a length of time so that other projects have an opportunity to adapt. - -In general, new API resources and new resource fields can be added often and frequently. -Elimination of resources or fields requires following the -[API deprecation policy](/docs/reference/using-api/deprecation-policy/). - -What constitutes a compatible change, and how to change the API, are detailed in -[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme). - ## OpenAPI specification {#api-specification} Complete API details are documented using [OpenAPI](https://www.openapis.org/). 
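When accessing the API directly with REST calls, resource URLs follow a regular layout: resources in the core ("legacy") group live under `/api/VERSION`, and resources in every other group live under `/apis/GROUP/VERSION`, with namespaced resources nested under `namespaces/NAMESPACE`. The following small helper is a hypothetical sketch of that layout, not part of any Kubernetes client library:

```python
def api_path(version: str, resource: str, group: str = "",
             namespace: str = "", name: str = "") -> str:
    """Build the REST path for a Kubernetes resource.

    The core group (empty string) is served under /api; all named
    groups are served under /apis/<group>.
    """
    parts = ["/api", version] if not group else ["/apis", group, version]
    if namespace:  # namespaced resources nest under namespaces/<namespace>
        parts += ["namespaces", namespace]
    parts.append(resource)
    if name:  # a specific object, as opposed to a collection
        parts.append(name)
    return "/".join(parts)
```

For example, `api_path("v1", "pods", namespace="default")` yields `/api/v1/namespaces/default/pods`, while `api_path("v1alpha1", "roles", group="rbac.authorization.k8s.io", namespace="kube-system")` yields `/apis/rbac.authorization.k8s.io/v1alpha1/namespaces/kube-system/roles`.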
@@ -78,95 +71,58 @@ You can request the response format using request headers as follows: Valid request header values for OpenAPI v2 queries -Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects. +Kubernetes implements an alternative Protobuf-based serialization format that +is primarily intended for intra-cluster communication. For more information +about this format, see the [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) design proposal and the +Interface Definition Language (IDL) files for each schema located in the Go +packages that define the API objects. -## API versioning +## API changes -To make it easier to eliminate fields or restructure resource representations, Kubernetes supports -multiple API versions, each at a different API path, such as `/api/v1` or -`/apis/rbac.authorization.k8s.io/v1alpha1`. +Any system that is successful needs to grow and change as new use cases emerge or existing ones change. +Therefore, the Kubernetes API is designed to continuously change and grow. +The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that +compatibility for a length of time so that other projects have an opportunity to adapt. -Versioning is done at the API level rather than at the resource or field level to ensure that the -API presents a clear, consistent view of system resources and behavior, and to enable controlling -access to end-of-life and/or experimental APIs. +In general, new API resources and new resource fields can be added often and frequently.
+Elimination of resources or fields requires following the +[API deprecation policy](/docs/reference/using-api/deprecation-policy/). -The JSON and Protobuf serialization schemas follow the same guidelines for schema changes - all descriptions below cover both formats. +What constitutes a compatible change, and how to change the API, are detailed in +[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme). -Note that API versioning and Software versioning are only indirectly related. The -[Kubernetes Release Versioning](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) -proposal describes the relationship between API versioning and software versioning. +## API groups and versioning -Different API versions imply different levels of stability and support. The criteria for each level are described -in more detail in the -[API Changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions) -documentation. They are summarized here: +To make it easier to eliminate fields or restructure resource representations, +Kubernetes supports multiple API versions, each at a different API path, such +as `/api/v1` or `/apis/rbac.authorization.k8s.io/v1alpha1`. -- Alpha level: - - The version names contain `alpha` (e.g. `v1alpha1`). - - May be buggy. Enabling the feature may expose bugs. Disabled by default. - - Support for feature may be dropped at any time without notice. - - The API may change in incompatible ways in a later software release without notice. - - Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support. -- Beta level: - - The version names contain `beta` (e.g. `v2beta3`). - - Code is well tested. Enabling the feature is considered safe. Enabled by default. - - Support for the overall feature will not be dropped, though details may change. 
- - The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens, - we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating - API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature. - - Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have - multiple clusters which can be upgraded independently, you may be able to relax this restriction. - - **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.** -- Stable level: - - The version name is `vX` where `X` is an integer. - - Stable versions of features will appear in released software for many subsequent versions. +Versioning is done at the API level rather than at the resource or field level +to ensure that the API presents a clear, consistent view of system resources +and behavior, and to enable controlling access to end-of-life and/or +experimental APIs. -## API groups +Refer to [API versions reference](/docs/reference/using-api/api-overview/#api-versioning) +for more details on the API version level definitions. -To make it easier to extend its API, Kubernetes implements [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md). -The API group is specified in a REST path and in the `apiVersion` field of a serialized object. +To make it easier to evolve and to extend its API, Kubernetes implements +[API groups](/docs/reference/using-api/api-overview/#api-groups) that can be +[enabled or disabled](/docs/reference/using-api/api-overview/#enabling-or-disabling). -There are several API groups in a cluster: +## API Extension -1. 
The *core* group, also referred to as the *legacy* group, is at the REST path `/api/v1` and uses `apiVersion: v1`. - -1. *Named* groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION` - (e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a - full list of available API groups. - -There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/): - -1. [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) - lets you declaratively define how the API server should provide your chosen resource API. -1. You can also [implement your own extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) - and use the [aggregator](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) - to make it seamless for clients. - -## Enabling or disabling API groups - -Certain resources and API groups are enabled by default. They can be enabled or disabled by setting `--runtime-config` -as a command line option to the kube-apiserver. - -`--runtime-config` accepts comma separated values. For example: to disable batch/v1, set -`--runtime-config=batch/v1=false`; to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`. -The flag accepts comma separated set of key=value pairs describing runtime configuration of the API server. - -{{< note >}}Enabling or disabling groups or resources requires restarting the kube-apiserver and the -kube-controller-manager to pick up the `--runtime-config` changes.{{< /note >}} - -## Persistence - -Kubernetes stores its serialized state in terms of the API resources by writing them into -{{< glossary_tooltip term_id="etcd" >}}. +The Kubernetes API can be extended in one of two ways: +1. 
[Custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) + let you declaratively define how the API server should provide your chosen resource API. +1. You can also extend the Kubernetes API by implementing an + [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/). ## {{% heading "whatsnext" %}} -[Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes -how the cluster manages authentication and authorization for API access. - -Overall API conventions are described in the -[API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions) -document. - -API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/). +- Learn how to extend the Kubernetes API by adding your own + [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). +- [Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes + how the cluster manages authentication and authorization for API access. +- Learn about API endpoints, resource types and samples by reading + [API Reference](/docs/reference/kubernetes-api/). diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 6a29cb1741..23b731ae41 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -208,6 +208,19 @@ field in the quota spec. A quota is matched and consumed only if `scopeSelector` in the quota spec selects the pod. 
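For example, a quota that only applies to Pods at a particular priority might look like the sketch below. The `pods-high` name and the `high` PriorityClass value are assumptions for illustration:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high
spec:
  hard:
    cpu: "1000"
    memory: 200Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
```

Only Pods whose `priorityClassName` matches the selector consume this quota.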
+When a quota is scoped for priority class using the `scopeSelector` field, the quota object is restricted to track only the following resources: + +* `pods` +* `cpu` +* `memory` +* `ephemeral-storage` +* `limits.cpu` +* `limits.memory` +* `limits.ephemeral-storage` +* `requests.cpu` +* `requests.memory` +* `requests.ephemeral-storage` + This example creates a quota object and matches it with pods at specific priorities. The example works as follows: diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index a30efc6ef6..c132d9affe 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -73,17 +73,7 @@ verify that it worked by running `kubectl get pods -o wide` and looking at the ## Interlude: built-in node labels {#built-in-node-labels} In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated -with a standard set of labels.
These labels are - -* [`kubernetes.io/hostname`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname) -* [`failure-domain.beta.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone) -* [`failure-domain.beta.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion) -* [`topology.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone) -* [`topology.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone) -* [`beta.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type) -* [`node.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type) -* [`kubernetes.io/os`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os) -* [`kubernetes.io/arch`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch) +with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/) for a list of these. {{< note >}} The value of these labels is cloud provider specific and is not guaranteed to be reliable. diff --git a/content/en/docs/concepts/scheduling-eviction/eviction-policy.md b/content/en/docs/concepts/scheduling-eviction/eviction-policy.md new file mode 100644 index 0000000000..32a7fb14b3 --- /dev/null +++ b/content/en/docs/concepts/scheduling-eviction/eviction-policy.md @@ -0,0 +1,23 @@ +--- +title: Eviction Policy +content_template: templates/concept +weight: 60 +--- + + + +This page is an overview of Kubernetes' policy for eviction. 
+ + + +## Eviction Policy + +The {{< glossary_tooltip text="Kubelet" term_id="kubelet" >}} can proactively monitor for and prevent total starvation of a +compute resource. In those cases, the `kubelet` can reclaim the starved +resource by proactively failing one or more Pods. When the `kubelet` fails +a Pod, it terminates all of its containers and transitions its `PodPhase` to `Failed`. +If the evicted Pod is managed by a Deployment, the Deployment will create another Pod +to be scheduled by Kubernetes. + +## {{% heading "whatsnext" %}} +- Read [Configure out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) to learn more about eviction signals, thresholds, and handling. \ No newline at end of file diff --git a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md index 19e22707b8..cf7e2713b9 100644 --- a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md +++ b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -33,7 +33,7 @@ kube-scheduler is designed so that, if you want and need to, you can write your own scheduling component and use that instead. For every newly created pod or other unscheduled pods, kube-scheduler -selects an optimal node for them to run on. However, every container in +selects an optimal node for them to run on. However, every container in pods has different requirements for resources and every pod also has different requirements. Therefore, existing nodes need to be filtered according to the specific scheduling requirements. @@ -77,12 +77,9 @@ one of these at random. There are two supported ways to configure the filtering and scoring behavior of the scheduler: -1. [Scheduling Policies](/docs/reference/scheduling/policies) allow you to - configure _Predicates_ for filtering and _Priorities_ for scoring. -1. 
[Scheduling Profiles](/docs/reference/scheduling/config/#profiles) allow you - to configure Plugins that implement different scheduling stages, including: - `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You - can also configure the kube-scheduler to run different profiles. + +1. [Scheduling Policies](/docs/reference/scheduling/policies) allow you to configure _Predicates_ for filtering and _Priorities_ for scoring. +1. [Scheduling Profiles](/docs/reference/scheduling/profiles) allow you to configure Plugins that implement different scheduling stages, including: `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You can also configure the kube-scheduler to run different profiles. ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/concepts/configuration/resource-bin-packing.md b/content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md similarity index 100% rename from content/en/docs/concepts/configuration/resource-bin-packing.md rename to content/en/docs/concepts/scheduling-eviction/resource-bin-packing.md diff --git a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md index 06f535a574..932e076dfc 100644 --- a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md +++ b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md @@ -3,7 +3,7 @@ reviewers: - bsalamat title: Scheduler Performance Tuning content_type: concept -weight: 70 +weight: 80 --- @@ -48,17 +48,13 @@ To change the value, edit the kube-scheduler configuration file (this is likely to be `/etc/kubernetes/config/kube-scheduler.yaml`), then restart the scheduler. After you have made this change, you can run + ```bash -kubectl get componentstatuses -``` -to verify that the kube-scheduler component is healthy. 
The output is similar to: -``` -NAME STATUS MESSAGE ERROR -controller-manager Healthy ok -scheduler Healthy ok -... +kubectl get pods -n kube-system | grep kube-scheduler ``` +to verify that the kube-scheduler component is healthy. + ## Node scoring threshold {#percentage-of-nodes-to-score} To improve scheduling performance, the kube-scheduler can stop looking for diff --git a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md index de67c75016..20ecb6cf6b 100644 --- a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md +++ b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md @@ -3,7 +3,7 @@ reviewers: - ahg-g title: Scheduling Framework content_type: concept -weight: 60 +weight: 70 --- diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md index 2dddc711a9..18c1b7e862 100644 --- a/content/en/docs/concepts/security/pod-security-standards.md +++ b/content/en/docs/concepts/security/pod-security-standards.md @@ -292,7 +292,7 @@ Containers at runtime. Security contexts are defined as part of the Pod and cont in the Pod manifest, and represent parameters to the container runtime. Security policies are control plane mechanisms to enforce specific settings in the Security Context, -as well as other parameters outside the Security Contex. As of February 2020, the current native +as well as other parameters outside the Security Context. As of February 2020, the current native solution for enforcing these security policies is [Pod Security Policy](/docs/concepts/policy/pod-security-policy/) - a mechanism for centrally enforcing security policy on Pods across a cluster. 
Other alternatives for enforcing security policy are being diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index aa249566b9..427b8b0a53 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -47,6 +47,7 @@ To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/ * kube-apiserver: * `--feature-gates="IPv6DualStack=true"` + * `--service-cluster-ip-range=,` * kube-controller-manager: * `--feature-gates="IPv6DualStack=true"` * `--cluster-cidr=,` diff --git a/content/en/docs/concepts/services-networking/endpoint-slices.md b/content/en/docs/concepts/services-networking/endpoint-slices.md index 5f2522ae7e..b028139c1d 100644 --- a/content/en/docs/concepts/services-networking/endpoint-slices.md +++ b/content/en/docs/concepts/services-networking/endpoint-slices.md @@ -23,7 +23,7 @@ Endpoints. The Endpoints API has provided a simple and straightforward way of tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters -and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle +and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle and send more traffic to more backend Pods, limitations of that original API became more visible. Most notably, those included challenges with scaling to larger numbers of @@ -114,8 +114,8 @@ of the labels with the same names on the corresponding Node. Most often, the control plane (specifically, the endpoint slice {{< glossary_tooltip text="controller" term_id="controller" >}}) creates and manages EndpointSlice objects. There are a variety of other use cases for -EndpointSlices, such as service mesh implementations, that could result in othe -rentities or controllers managing additional sets of EndpointSlices. 
+EndpointSlices, such as service mesh implementations, that could result in other +entities or controllers managing additional sets of EndpointSlices. To ensure that multiple entities can manage EndpointSlices without interfering with each other, Kubernetes defines the diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index d3e60cccbe..795a3546b5 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -29,15 +29,26 @@ For clarity, this guide defines the following terms: {{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. -```none - internet - | - [ Ingress ] - --|-----|-- - [ Services ] -``` +Here is a simple example where an Ingress sends all its traffic to one Service: +{{< mermaid >}} +graph LR; + client([client])-. Ingress-managed<br>load balancer .->ingress[Ingress]; + ingress-->|routing rule|service[Service]; + subgraph cluster + ingress; + service-->pod1[Pod]; + service-->pod2[Pod]; + end + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class ingress,service,pod1,pod2 k8s; + class client plain; + class cluster cluster; +{{< /mermaid >}} -An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. + +An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) or @@ -45,7 +56,7 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin ## Prerequisites -You must have an [ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect. +You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect.
You may need to deploy an Ingress controller such as [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/). You can choose from a number of [Ingress controllers](/docs/concepts/services-networking/ingress-controllers). @@ -107,7 +118,7 @@ routed to your default backend. ### Resource backends {#resource-backend} A `Resource` backend is an ObjectRef to another Kubernetes resource within the -same namespace of the Ingress object. A `Resource` is a mutually exclusive +same namespace as the Ingress object. A `Resource` is a mutually exclusive setting with Service, and will fail validation if both are specified. A common usage for a `Resource` backend is to ingress data to an object storage backend with static assets. @@ -235,7 +246,7 @@ IngressClass resource will ensure that new Ingresses without an If you have more than one IngressClass marked as the default for your cluster, the admission controller prevents creating new Ingress objects that don't have an `ingressClassName` specified. You can resolve this by ensuring that at most 1 -IngressClasses are marked as default in your cluster. +IngressClass is marked as default in your cluster. {{< /caution >}} ## Types of Ingress @@ -274,10 +285,25 @@ A fanout configuration routes traffic from a single IP address to more than one based on the HTTP URI being requested. An Ingress allows you to keep the number of load balancers down to a minimum. For example, a setup like: -``` -foo.bar.com -> 178.91.123.132 -> / foo service1:4200 - / bar service2:8080 -``` +{{< mermaid >}} +graph LR; + client([client])-. Ingress-managed<br>load balancer .->ingress[Ingress, 178.91.123.132]; + ingress-->|/foo|service1[Service service1:4200]; + ingress-->|/bar|service2[Service service2:8080]; + subgraph cluster + ingress; + service1-->pod1[Pod]; + service1-->pod2[Pod]; + service2-->pod3[Pod]; + service2-->pod4[Pod]; + end + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s; + class client plain; + class cluster cluster; +{{< /mermaid >}} would require an Ingress such as: @@ -321,11 +347,26 @@ you are using, you may need to create a default-http-backend Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address. -```none -foo.bar.com --| |-> foo.bar.com service1:80 - | 178.91.123.132 | -bar.foo.com --| |-> bar.foo.com service2:80 -``` +{{< mermaid >}} +graph LR; + client([client])-. Ingress-managed<br>load balancer .->ingress[Ingress, 178.91.123.132]; + ingress-->|Host: foo.bar.com|service1[Service service1:80]; + ingress-->|Host: bar.foo.com|service2[Service service2:80]; + subgraph cluster + ingress; + service1-->pod1[Pod]; + service1-->pod2[Pod]; + service2-->pod3[Pod]; + service2-->pod4[Pod]; + end + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s; + class client plain; + class cluster cluster; +{{< /mermaid >}} + The following Ingress tells the backing load balancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4). @@ -491,9 +532,8 @@ You can achieve the same outcome by invoking `kubectl replace -f` on a modified ## Failing across availability zones -Techniques for spreading traffic across failure domains differs between cloud providers. +Techniques for spreading traffic across failure domains differ between cloud providers. Please check the documentation of the relevant [Ingress controller](/docs/concepts/services-networking/ingress-controllers)
## Alternatives @@ -509,4 +549,3 @@ You can expose a Service in multiple ways that don't directly involve the Ingres * Learn about the [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io) * Learn about [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/) * [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube/) - diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index b7253a5bef..ded1451d80 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -24,8 +24,8 @@ and can load-balance across them. ## Motivation -Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal. -They are born and when they die, they are not resurrected. +Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are created and destroyed +to match the state of your cluster. Pods are nonpermanent resources. If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app, it can create and destroy Pods dynamically. @@ -45,9 +45,9 @@ Enter _Services_. In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined -by a {{< glossary_tooltip text="selector" term_id="selector" >}} -(see [below](#services-without-selectors) for why you might want a Service -_without_ a selector). +by a {{< glossary_tooltip text="selector" term_id="selector" >}}. +To learn about other ways to define Service endpoints, +see [Services _without_ selectors](#services-without-selectors). For example, consider a stateless image-processing backend which is running with 3 replicas. 
Those replicas are fungible—frontends do not care which backend @@ -129,12 +129,12 @@ Services most commonly abstract access to Kubernetes Pods, but they can also abstract other kinds of backends. For example: - * You want to have an external database cluster in production, but in your - test environment you use your own databases. - * You want to point your Service to a Service in a different - {{< glossary_tooltip term_id="namespace" >}} or on another cluster. - * You are migrating a workload to Kubernetes. Whilst evaluating the approach, - you run only a proportion of your backends in Kubernetes. +* You want to have an external database cluster in production, but in your + test environment you use your own databases. +* You want to point your Service to a Service in a different + {{< glossary_tooltip term_id="namespace" >}} or on another cluster. +* You are migrating a workload to Kubernetes. While evaluating the approach, + you run only a proportion of your backends in Kubernetes. In any of these scenarios you can define a Service _without_ a Pod selector. For example: @@ -151,7 +151,7 @@ spec: targetPort: 9376 ``` -Because this Service has no selector, the corresponding Endpoint object is *not* +Because this Service has no selector, the corresponding Endpoint object is not created automatically. You can manually map the Service to the network address and port where it's running, by adding an Endpoint object manually: @@ -188,6 +188,7 @@ selectors and uses DNS names instead. For more information, see the [ExternalName](#externalname) section later in this document. 
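As a sketch, a manually managed Endpoints object for the selector-less `my-service` above could look like this (the IP address is a documentation placeholder; the object name must match the Service name):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.0.2.42
    ports:
      - port: 9376
```

Traffic to the Service's port 80 is then routed to the listed address and `targetPort` 9376.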
### EndpointSlices + {{< feature-state for_k8s_version="v1.17" state="beta" >}} EndpointSlices are an API resource that can provide a more scalable alternative @@ -204,9 +205,8 @@ described in detail in [EndpointSlices](/docs/concepts/services-networking/endpo {{< feature-state for_k8s_version="v1.19" state="beta" >}} -The AppProtocol field provides a way to specify an application protocol to be -used for each Service port. The value of this field is mirrored by corresponding -Endpoints and EndpointSlice resources. +The `AppProtocol` field provides a way to specify an application protocol for each Service port. +The value of this field is mirrored by corresponding Endpoints and EndpointSlice resources. ## Virtual IPs and service proxies @@ -224,20 +224,19 @@ resolution? There are a few reasons for using proxying for Services: - * There is a long history of DNS implementations not respecting record TTLs, - and caching the results of name lookups after they should have expired. - * Some apps do DNS lookups only once and cache the results indefinitely. - * Even if apps and libraries did proper re-resolution, the low or zero TTLs - on the DNS records could impose a high load on DNS that then becomes - difficult to manage. +* There is a long history of DNS implementations not respecting record TTLs, + and caching the results of name lookups after they should have expired. +* Some apps do DNS lookups only once and cache the results indefinitely. +* Even if apps and libraries did proper re-resolution, the low or zero TTLs + on the DNS records could impose a high load on DNS that then becomes + difficult to manage. ### User space proxy mode {#proxy-mode-userspace} In this mode, kube-proxy watches the Kubernetes master for the addition and removal of Service and Endpoint objects. For each Service it opens a port (randomly chosen) on the local node. 
Any connections to this "proxy port" -are -proxied to one of the Service's backend Pods (as reported via +are proxied to one of the Service's backend Pods (as reported via Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into account when deciding which backend Pod to use. @@ -255,7 +254,7 @@ In this mode, kube-proxy watches the Kubernetes control plane for the addition a removal of Service and Endpoint objects. For each Service, it installs iptables rules, which capture traffic to the Service's `clusterIP` and `port`, and redirect that traffic to one of the Service's -backend sets. For each Endpoint object, it installs iptables rules which +backend sets. For each Endpoint object, it installs iptables rules which select a backend Pod. By default, kube-proxy in iptables mode chooses a backend at random. @@ -298,12 +297,12 @@ higher throughput of network traffic. IPVS provides more options for balancing traffic to backend Pods; these are: -- `rr`: round-robin -- `lc`: least connection (smallest number of open connections) -- `dh`: destination hashing -- `sh`: source hashing -- `sed`: shortest expected delay -- `nq`: never queue +* `rr`: round-robin +* `lc`: least connection (smallest number of open connections) +* `dh`: destination hashing +* `sh`: source hashing +* `sed`: shortest expected delay +* `nq`: never queue {{< note >}} To run kube-proxy in IPVS mode, you must make IPVS available on @@ -389,7 +388,7 @@ compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. 
-For example, the Service `"redis-master"` which exposes TCP port 6379 and has been +For example, the Service `redis-master` which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment variables: @@ -423,19 +422,19 @@ Services and creates a set of DNS records for each one. If DNS has been enabled throughout your cluster then all Pods should automatically be able to resolve Services by their DNS name. -For example, if you have a Service called `"my-service"` in a Kubernetes -Namespace `"my-ns"`, the control plane and the DNS Service acting together -create a DNS record for `"my-service.my-ns"`. Pods in the `"my-ns"` Namespace +For example, if you have a Service called `my-service` in a Kubernetes +namespace `my-ns`, the control plane and the DNS Service acting together +create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace should be able to find it by simply doing a name lookup for `my-service` -(`"my-service.my-ns"` would also work). +(`my-service.my-ns` would also work). -Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names +Pods in other namespaces must qualify the name as `my-service.my-ns`. These names will resolve to the cluster IP assigned for the Service. Kubernetes also supports DNS SRV (Service) records for named ports. If the -`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to +`my-service.my-ns` Service has a port named `http` with the protocol set to `TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover -the port number for `"http"`, as well as the IP address. +the port number for `http`, as well as the IP address. The Kubernetes DNS server is the only way to access `ExternalName` Services. You can find more information about `ExternalName` resolution in @@ -467,9 +466,9 @@ For headless Services that do not define selectors, the endpoints controller does
However, the DNS system looks for and configures either: - * CNAME records for [`ExternalName`](#externalname)-type Services. - * A records for any `Endpoints` that share a name with the Service, for all - other types. +* CNAME records for [`ExternalName`](#externalname)-type Services. +* A records for any `Endpoints` that share a name with the Service, for all + other types. ## Publishing Services (ServiceTypes) {#publishing-services-service-types} @@ -481,26 +480,26 @@ The default is `ClusterIP`. `Type` values and their behaviors are: - * `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value - makes the Service only reachable from within the cluster. This is the - default `ServiceType`. - * [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port - (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service - routes, is automatically created. You'll be able to contact the `NodePort` Service, - from outside the cluster, - by requesting `<NodeIP>:<NodePort>`. - * [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud - provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external - load balancer routes, are automatically created. - * [`ExternalName`](#externalname): Maps the Service to the contents of the - `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record +* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value + makes the Service only reachable from within the cluster. This is the + default `ServiceType`. +* [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port + (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service + routes, is automatically created. You'll be able to contact the `NodePort` Service, + from outside the cluster, + by requesting `<NodeIP>:<NodePort>`. +* [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud + provider's load balancer.
`NodePort` and `ClusterIP` Services, to which the external + load balancer routes, are automatically created. +* [`ExternalName`](#externalname): Maps the Service to the contents of the + `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record + with its value. No proxying of any kind is set up. + {{< note >}}You need either `kube-dns` version 1.7 or CoreDNS version 0.0.8 or higher + to use the `ExternalName` type. + {{< /note >}} - with its value. No proxying of any kind is set up. - {{< note >}} - You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type. - {{< /note >}} - -You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address. +You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules +into a single resource as it can expose multiple services under the same IP address. ### Type NodePort {#nodeport} @@ -509,7 +508,6 @@ allocates a port from a range specified by `--service-node-port-range` flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its `.spec.ports[*].nodePort` field. - If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10. This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node.
@@ -530,6 +528,7 @@ Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).) For example: + ```yaml apiVersion: v1 kind: Service @@ -606,19 +605,21 @@ Specify the assigned IP address as loadBalancerIP. Ensure that you have updated {{< /note >}} #### Internal load balancer + In a mixed environment it is sometimes necessary to route traffic from Services inside the same (virtual) network address block. In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints. -You can achieve this by adding one the following annotations to a Service. -The annotation to add depends on the cloud Service provider you're using. +To set an internal load balancer, add one of the following annotations to your Service +depending on the cloud Service provider you're using. {{< tabs name="service_tabs" >}} {{% tab name="Default" %}} Select one of the tabs. {{% /tab %}} {{% tab name="GCP" %}} + ```yaml [...] metadata: @@ -627,8 +628,10 @@ metadata: cloud.google.com/load-balancer-type: "Internal" [...] ``` + {{% /tab %}} {{% tab name="AWS" %}} + ```yaml [...] metadata: @@ -637,8 +640,10 @@ metadata: service.beta.kubernetes.io/aws-load-balancer-internal: "true" [...] ``` + {{% /tab %}} {{% tab name="Azure" %}} + ```yaml [...] metadata: @@ -647,8 +652,10 @@ metadata: service.beta.kubernetes.io/azure-load-balancer-internal: "true" [...] ``` + {{% /tab %}} {{% tab name="IBM Cloud" %}} + ```yaml [...] metadata: @@ -657,8 +664,10 @@ metadata: service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private" [...] ``` + {{% /tab %}} {{% tab name="OpenStack" %}} + ```yaml [...] metadata: @@ -667,8 +676,10 @@ metadata: service.beta.kubernetes.io/openstack-internal-load-balancer: "true" [...] ``` + {{% /tab %}} {{% tab name="Baidu Cloud" %}} + ```yaml [...]
metadata: @@ -677,8 +688,10 @@ metadata: service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" [...] ``` + {{% /tab %}} {{% tab name="Tencent Cloud" %}} + ```yaml [...] metadata: @@ -686,8 +699,10 @@ metadata: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx [...] ``` + {{% /tab %}} {{% tab name="Alibaba Cloud" %}} + ```yaml [...] metadata: @@ -695,10 +710,10 @@ metadata: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet" [...] ``` + {{% /tab %}} {{< /tabs >}} - #### TLS support on AWS {#ssl-support-on-aws} For partial TLS / SSL support on clusters running on AWS, you can add three @@ -823,7 +838,6 @@ to the value of `"true"`. The annotation `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances. - ```yaml metadata: name: my-service @@ -991,6 +1005,7 @@ spec: type: ExternalName externalName: my.database.example.com ``` + {{< note >}} ExternalName accepts an IPv4 address string, but as a DNS names comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName is intended to specify a canonical DNS name. To hardcode an IP address, consider using @@ -1045,7 +1060,7 @@ spec: ## Shortcomings -Using the userspace proxy for VIPs, work at small to medium scale, but will +Using the userspace proxy for VIPs works at small to medium scale, but will not scale to very large clusters with thousands of Services. The [original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107) has more details on this. @@ -1173,12 +1188,12 @@ of the Service. {{< note >}} You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service -to expose HTTP / HTTPS Services. +to expose HTTP/HTTPS Services. 
{{< /note >}} ### PROXY protocol -If your cloud provider supports it (eg, [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)), +If your cloud provider supports it, you can use a Service in LoadBalancer mode to configure a load balancer outside of Kubernetes itself, that will forward connections prefixed with [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). @@ -1189,6 +1204,7 @@ incoming connection, similar to this example ``` PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n ``` + followed by the data from the client. ### SCTP @@ -1227,13 +1243,8 @@ SCTP is not supported on Windows based nodes. The kube-proxy does not support the management of SCTP associations when it is in userspace mode. {{< /warning >}} - - ## {{% heading "whatsnext" %}} - * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) * Read about [Ingress](/docs/concepts/services-networking/ingress/) * Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) - - diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md index 26e60eb775..e4831cfd16 100644 --- a/content/en/docs/concepts/storage/ephemeral-volumes.md +++ b/content/en/docs/concepts/storage/ephemeral-volumes.md @@ -39,11 +39,11 @@ simplifies application deployment and management. 
Kubernetes supports several different kinds of ephemeral volumes for different purposes: -- [emptyDir](/docs/concepts/volumes/#emptydir): empty at Pod startup, +- [emptyDir](/docs/concepts/storage/volumes/#emptydir): empty at Pod startup, with storage coming locally from the kubelet base directory (usually the root disk) or RAM -- [configMap](/docs/concepts/volumes/#configmap), - [downwardAPI](/docs/concepts/volumes/#downwardapi), +- [configMap](/docs/concepts/storage/volumes/#configmap), + [downwardAPI](/docs/concepts/storage/volumes/#downwardapi), [secret](/docs/concepts/storage/volumes/#secret): inject different kinds of Kubernetes data into a Pod - [CSI ephemeral @@ -92,7 +92,7 @@ Conceptually, CSI ephemeral volumes are similar to `configMap`, scheduled onto a node. Kubernetes has no concept of rescheduling Pods anymore at this stage. Volume creation has to be unlikely to fail, otherwise Pod startup gets stuck. In particular, [storage capacity -aware Pod scheduling](/docs/concepts/storage-capacity/) is *not* +aware Pod scheduling](/docs/concepts/storage/storage-capacity/) is *not* supported for these volumes. They are currently also not covered by the storage resource usage limits of a Pod, because that is something that kubelet can only enforce for storage that it manages itself. @@ -147,7 +147,7 @@ flexible: ([snapshotting](/docs/concepts/storage/volume-snapshots/), [cloning](/docs/concepts/storage/volume-pvc-datasource/), [resizing](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), - and [storage capacity tracking](/docs/concepts/storage-capacity/). + and [storage capacity tracking](/docs/concepts/storage/storage-capacity/). 
Example: diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index f2c8589db4..59579b443c 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -415,6 +415,21 @@ This internal provisioner of OpenStack is deprecated. Please use [the external c ### vSphere +There are two types of provisioners for vSphere storage classes: + +- [CSI provisioner](#csi-provisioner): `csi.vsphere.vmware.com` +- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume` + +In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi). For more information on the CSI provisioner, see [Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and [vSphereVolume CSI migration](/docs/concepts/storage/volumes/#csi-migration-5). + +#### CSI Provisioner {#vsphere-provisioner-csi} + +The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters. For an example, refer to the [vSphere CSI repository](https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/example/vanilla-k8s-file-driver/example-sc.yaml). + +#### vCP Provisioner + +The following examples use the VMware Cloud Provider (vCP) StorageClass provisioner. + 1. Create a StorageClass with a user specified disk format. ```yaml @@ -819,4 +834,3 @@ Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim. 
- diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 4ed310f6df..19d491b1b3 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -215,8 +215,7 @@ See the [CephFS example](https://github.com/kubernetes/examples/tree/{{< param " ### cinder {#cinder} {{< note >}} -Prerequisite: Kubernetes with OpenStack Cloud Provider configured. For cloudprovider -configuration please refer [cloud provider openstack](/docs/concepts/cluster-administration/cloud-providers/#openstack). +Prerequisite: Kubernetes with OpenStack Cloud Provider configured. {{< /note >}} `cinder` is used to mount OpenStack Cinder Volume into your Pod. @@ -757,8 +756,8 @@ See the [NFS example](https://github.com/kubernetes/examples/tree/{{< param "git ### persistentVolumeClaim {#persistentvolumeclaim} A `persistentVolumeClaim` volume is used to mount a -[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumes are a -way for users to "claim" durable storage (such as a GCE PersistentDisk or an +[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumeClaims +are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment. See the [PersistentVolumes example](/docs/concepts/storage/persistent-volumes/) for more diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md index 90dd7b5618..bcd2cbcf16 100644 --- a/content/en/docs/concepts/workloads/pods/_index.md +++ b/content/en/docs/concepts/workloads/pods/_index.md @@ -254,7 +254,7 @@ but cannot be controlled from there. * Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/). * Learn about [PodPresets](/docs/concepts/workloads/pods/podpreset/). 
-* Lean about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to +* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to configure different Pods with different container runtime configurations. * Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions. diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 31a92593da..9dc37c357c 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -99,7 +99,7 @@ assigns a Pod to a Node, the kubelet starts creating containers for that Pod using a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}. There are three possible container states: `Waiting`, `Running`, and `Terminated`. -To the check state of a Pod's containers, you can use +To check the state of a Pod's containers, you can use `kubectl describe pod <name-of-pod>`. The output shows the state for each container within that Pod.
diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 5cc52cdbe6..09a02d6afc 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -30,13 +30,23 @@ node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB Then the cluster is logically viewed as below: -``` -+---------------+---------------+ -| zoneA | zoneB | -+-------+-------+-------+-------+ -| node1 | node2 | node3 | node4 | -+-------+-------+-------+-------+ -``` +{{< mermaid >}} +graph TB + subgraph "zoneB" + n3(Node3) + n4(Node4) + end + subgraph "zoneA" + n1(Node1) + n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/kubernetes-api/labels-annotations-taints/) that are created and populated automatically on most clusters.
@@ -80,17 +90,25 @@ You can read more about this field by running `kubectl explain Pod.spec.topology ### Example: One TopologySpreadConstraint -Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod): +Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: -``` -+---------------+---------------+ -| zoneA | zoneB | -+-------+-------+-------+-------+ -| node1 | node2 | node3 | node4 | -+-------+-------+-------+-------+ -| P | P | P | | -+-------+-------+-------+-------+ -``` +{{< mermaid >}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as: @@ -100,15 +118,46 @@ If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`.
In this example, the incoming Pod can only be placed onto "zoneB": -``` -+---------------+---------------+ +---------------+---------------+ -| zoneA | zoneB | | zoneA | zoneB | -+-------+-------+-------+-------+ +-------+-------+-------+-------+ -| node1 | node2 | node3 | node4 | OR | node1 | node2 | node3 | node4 | -+-------+-------+-------+-------+ +-------+-------+-------+-------+ -| P | P | P | P | | P | P | P P | | -+-------+-------+-------+-------+ +-------+-------+-------+-------+ -``` +{{< mermaid >}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + p4(mypod) --> n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +OR + +{{< mermaid >}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + p4(mypod) --> n3 + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} You can tweak the Pod spec to meet various kinds of requirements: @@ -118,17 +167,26 @@ You can tweak the Pod spec to meet various kinds of requirements: ### Example: Multiple TopologySpreadConstraints -This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod): +This builds upon the previous example.
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: -``` -+---------------+---------------+ -| zoneA | zoneB | -+-------+-------+-------+-------+ -| node1 | node2 | node3 | node4 | -+-------+-------+-------+-------+ -| P | P | P | | -+-------+-------+-------+-------+ -``` +{{< mermaid >}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node: @@ -138,15 +196,24 @@ In this case, to match the first constraint, the incoming Pod can only be placed Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones: -``` -+---------------+-------+ -| zoneA | zoneB | -+-------+-------+-------+ -| node1 | node2 | node3 | -+-------+-------+-------+ -| P P | P | P P | -+-------+-------+-------+ -``` +{{< mermaid >}} +graph BT + subgraph "zoneB" + p4(Pod) --> n3(Node3) + p5(Pod) --> n3 + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n1 + p3(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state.
This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing. @@ -169,15 +236,37 @@ There are some implicit conventions worth noting here: Suppose you have a 5-node cluster ranging from zoneA to zoneC: - ``` - +---------------+---------------+-------+ - | zoneA | zoneB | zoneC | - +-------+-------+-------+-------+-------+ - | node1 | node2 | node3 | node4 | node5 | - +-------+-------+-------+-------+-------+ - | P | P | P | | | - +-------+-------+-------+-------+-------+ - ``` + {{< mermaid >}} + graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; + {{< /mermaid >}} + + {{< mermaid >}} + graph BT + subgraph "zoneC" + n5(Node5) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n5 k8s; + class zoneC cluster; + {{< /mermaid >}} and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 9b99ba3abc..c0efac7959 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -42,12 +42,9 @@ The English-language documentation uses U.S. English spelling and grammar.
## Documentation formatting standards -### Use camel case for API objects +### Use upper camel case for API objects -When you refer to an API object, use the same uppercase and lowercase letters -that are used in the actual object name. Typically, the names of API -objects use -[camel case](https://en.wikipedia.org/wiki/Camel_case). +When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization). Don't split the API object name into separate words. For example, use PodTemplateList, not Pod Template List. @@ -58,9 +55,9 @@ leads to an awkward construction. {{< table caption = "Do and Don't - API objects" >}} Do | Don't :--| :----- -The Pod has two containers. | The pod has two containers. -The Deployment is responsible for ... | The Deployment object is responsible for ... -A PodList is a list of Pods. | A Pod List is a list of pods. +The pod has two containers. | The Pod has two containers. +The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ... +A PodList is a list of pods. | A Pod List is a list of pods. The two ContainerPorts ... | The two ContainerPort objects ... The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ... {{< /table >}} @@ -71,7 +68,7 @@ The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds Use angle brackets for placeholders. Tell the reader what a placeholder represents. -1. Display information about a Pod: +1. Display information about a pod: kubectl describe pod <pod-name> -n <namespace> @@ -116,7 +113,7 @@ The copy is called a "fork". | The copy is called a "fork."
## Inline code formatting -### Use code style for inline code and commands +### Use code style for inline code, commands, and API objects For inline code in an HTML document, use the `` tag. In a Markdown document, use the backtick (`` ` ``). @@ -124,7 +121,9 @@ document, use the backtick (`` ` ``). {{< table caption = "Do and Don't - Use code style for inline code and commands" >}} Do | Don't :--| :----- -The `kubectl run`command creates a Pod. | The "kubectl run" command creates a Pod. +The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod. +The kubelet on each node acquires a `Lease`… | The kubelet on each node acquires a lease… +A `PersistentVolume` represents durable storage… | A Persistent Volume represents durable storage… For declarative management, use `kubectl apply`. | For declarative management, use "kubectl apply". Enclose code samples with triple backticks. (\`\`\`)| Enclose code samples with any other syntax. Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**. @@ -201,7 +200,7 @@ kubectl get pods | $ kubectl get pods ### Separate commands from output -Verify that the Pod is running on your chosen node: +Verify that the pod is running on your chosen node: kubectl get pods --output=wide @@ -513,7 +512,7 @@ Do | Don't :--| :----- To create a ReplicaSet, ... | In order to create a ReplicaSet, ... See the configuration file. | Please see the configuration file. -View the Pods. | With this next command, we'll view the Pods. +View the pods. | With this next command, we'll view the pods. {{< /table >}} ### Address the reader as "you" @@ -552,7 +551,7 @@ Do | Don't :--| :----- Version 1.4 includes ... | In version 1.4, we have added ... Kubernetes provides a new feature for ... | We provide a new feature ... -This page teaches you how to use Pods. 
| In this page, we are going to learn about Pods. +This page teaches you how to use pods. | In this page, we are going to learn about pods. {{< /table >}} diff --git a/content/en/docs/contribute/suggesting-improvements.md b/content/en/docs/contribute/suggesting-improvements.md index e48c2915b9..dbf5bf1abb 100644 --- a/content/en/docs/contribute/suggesting-improvements.md +++ b/content/en/docs/contribute/suggesting-improvements.md @@ -24,7 +24,7 @@ of the Kubernetes community open a pull request with changes to resolve the issu If you want to suggest improvements to existing content, or notice an error, then open an issue. -1. Go to the bottom of the page and click the **Create an Issue** button. This redirects you +1. Click the **Create an issue** link on the right sidebar. This redirects you to a GitHub issue page pre-populated with some headers. 2. Describe the issue or suggestion for improvement. Provide as many details as you can. 3. Click **Submit new issue**. diff --git a/content/en/docs/doc-contributor-tools/README.md b/content/en/docs/doc-contributor-tools/README.md deleted file mode 100644 index b493e67bd3..0000000000 --- a/content/en/docs/doc-contributor-tools/README.md +++ /dev/null @@ -1,2 +0,0 @@ -Tools for Kubernetes docs contributors. View `README.md` files in -subdirectories for more info. \ No newline at end of file diff --git a/content/en/docs/doc-contributor-tools/snippets/README.md b/content/en/docs/doc-contributor-tools/snippets/README.md deleted file mode 100644 index 2b4c9c8d45..0000000000 --- a/content/en/docs/doc-contributor-tools/snippets/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# Snippets for Atom - -Snippets are bits of text that get inserted into your editor, to save typing and -reduce syntax errors. The snippets provided in `atom-snippets.cson` are scoped to -only work on Markdown files within Atom. - -## Installation - -Copy the contents of the `atom-snippets.cson` file into your existing -`~/.atom/snippets.cson`. 
**Do not replace your existing file.** - -You do not need to restart Atom. - -## Usage - -Have a look through `atom-snippets.cson` and note the titles and `prefix` values -of the snippets. - -You can trigger a given snippet in one of two ways: - -- By typing the snippet's `prefix` and pressing the `<tab>` key -- By searching for the snippet's title in **Packages / Snippets / Available** - -For example, open a Markdown file and type `anote` and press `<tab>`. A blank -note is added, with the correct Hugo shortcodes. - -A snippet can insert a single line or multiple lines of text. Some snippets -have placeholder values. To get to the next placeholder, press `<tab>` again. - -Some of the snippets only insert partially-formed Markdown or Hugo syntax. -For instance, `coverview` inserts the start of a concept overview tag, while -`cclose` inserts a close-capture tag. This is because every type of capture -needs a capture-close tag. - -## Creating new topics using snippets - -To create a new concept, task, or tutorial from a blank file, use one of the -following: - -- `newconcept` - `newtask` - `newtutorial` - -Placeholder text is included. - -## Submitting new snippets - -1. Develop the snippet locally and verify that it works as expected. -2. Copy the template's code into the `atom-snippets.cson` file on GitHub. Raise a - pull request, and ask for review from another Atom user in `#sig-docs` on - Kubernetes Slack. \ No newline at end of file diff --git a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson b/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson deleted file mode 100644 index 878ccc4ed7..0000000000 --- a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson +++ /dev/null @@ -1,226 +0,0 @@ -# Your snippets -# -# Atom snippets allow you to enter a simple prefix in the editor and hit tab to -# expand the prefix into a larger code block with templated values.
-# -# You can create a new snippet in this file by typing "snip" and then hitting -# tab. -# -# An example CoffeeScript snippet to expand log to console.log: -# -# '.source.coffee': -# 'Console log': -# 'prefix': 'log' -# 'body': 'console.log $1' -# -# Each scope (e.g. '.source.coffee' above) can only be declared once. -# -# This file uses CoffeeScript Object Notation (CSON). -# If you are unfamiliar with CSON, you can read more about it in the -# Atom Flight Manual: -# http://flight-manual.atom.io/using-atom/sections/basic-customization/#_cson - -'.source.gfm': - - # Capture variables for concept template - # For full concept template see 'newconcept' below - 'Insert concept template': - 'prefix': 'ctemplate' - 'body': 'content_template: templates/concept' - 'Insert concept overview': - 'prefix': 'coverview' - 'body': '{{% capture overview %}}' - 'Insert concept body': - 'prefix': 'cbody' - 'body': '{{% capture body %}}' - 'Insert concept whatsnext': - 'prefix': 'cnext' - 'body': '{{% capture whatsnext %}}' - - - # Capture variables for task template - # For full task template see 'newtask' below - 'Insert task template': - 'prefix': 'ttemplate' - 'body': 'content_template: templates/task' - 'Insert task overview': - 'prefix': 'toverview' - 'body': '{{% capture overview %}}' - 'Insert task prerequisites': - 'prefix': 'tprereq' - 'body': """ - {{% capture prerequisites %}} - - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - {{% /capture %}} - """ - 'Insert task steps': - 'prefix': 'tsteps' - 'body': '{{% capture steps %}}' - 'Insert task discussion': - 'prefix': 'tdiscuss' - 'body': '{{% capture discussion %}}' - - - # Capture variables for tutorial template - # For full tutorial template see 'newtutorial' below - 'Insert tutorial template': - 'prefix': 'tutemplate' - 'body': 'content_template: templates/tutorial' - 'Insert tutorial overview': - 'prefix': 'tuoverview' - 'body': '{{% capture overview %}}' - 'Insert tutorial prerequisites': - 
'prefix': 'tuprereq' - 'body': '{{% capture prerequisites %}}' - 'Insert tutorial objectives': - 'prefix': 'tuobjectives' - 'body': '{{% capture objectives %}}' - 'Insert tutorial lesson content': - 'prefix': 'tulesson' - 'body': '{{% capture lessoncontent %}}' - 'Insert tutorial whatsnext': - 'prefix': 'tunext' - 'body': '{{% capture whatsnext %}}' - 'Close capture': - 'prefix': 'ccapture' - 'body': '{{% /capture %}}' - 'Insert note': - 'prefix': 'anote' - 'body': """ - {{< note >}} - $1 - {{< /note >}} - """ - - # Admonitions - 'Insert caution': - 'prefix': 'acaution' - 'body': """ - {{< caution >}} - $1 - {{< /caution >}} - """ - 'Insert warning': - 'prefix': 'awarning' - 'body': """ - {{< warning >}} - $1 - {{< /warning >}} - """ - - # Misc one-liners - 'Insert TOC': - 'prefix': 'toc' - 'body': '{{< toc >}}' - 'Insert code from file': - 'prefix': 'codefile' - 'body': '{{< codenew file="$1" >}}' - 'Insert feature state': - 'prefix': 'fstate' - 'body': '{{< feature-state for_k8s_version="$1" state="$2" >}}' - 'Insert figure': - 'prefix': 'fig' - 'body': '{{< figure src="$1" title="$2" alt="$3" caption="$4" >}}' - 'Insert Youtube link': - 'prefix': 'yt' - 'body': '{{< youtube $1 >}}' - - - # Full concept template - 'Create new concept': - 'prefix': 'newconcept' - 'body': """ - --- - reviewers: - - ${1:"github-id-or-group"} - title: ${2:"topic-title"} - content_template: templates/concept - --- - {{% capture overview %}} - ${3:"overview-content"} - {{% /capture %}} - - {{< toc >}} - - {{% capture body %}} - ${4:"h2-heading-per-subtopic"} - {{% /capture %}} - - {{% capture whatsnext %}} - ${5:"next-steps-or-delete"} - {{% /capture %}} - """ - - - # Full task template - 'Create new task': - 'prefix': 'newtask' - 'body': """ - --- - reviewers: - - ${1:"github-id-or-group"} - title: ${2:"topic-title"} - content_template: templates/task - --- - {{% capture overview %}} - ${3:"overview-content"} - {{% /capture %}} - - {{< toc >}} - - {{% capture prerequisites %}} - - {{< 
include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ${4:"additional-prereqs-or-delete"} - - {{% /capture %}} - - {{% capture steps %}} - ${5:"h2-heading-per-step"} - {{% /capture %}} - - {{% capture discussion %}} - ${6:"task-discussion-or-delete"} - {{% /capture %}} - """ - - # Full tutorial template - 'Create new tutorial': - 'prefix': 'newtutorial' - 'body': """ - --- - reviewers: - - ${1:"github-id-or-group"} - title: ${2:"topic-title"} - content_template: templates/tutorial - --- - {{% capture overview %}} - ${3:"overview-content"} - {{% /capture %}} - - {{< toc >}} - - {{% capture prerequisites %}} - - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ${4:"additional-prereqs-or-delete"} - - {{% /capture %}} - - {{% capture objectives %}} - ${5:"tutorial-objectives"} - {{% /capture %}} - - {{% capture lessoncontent %}} - ${6:"lesson-content"} - {{% /capture %}} - - {{% capture whatsnext %}} - ${7:"next-steps-or-delete"} - {{% /capture %}} - """ - diff --git a/content/en/docs/home/supported-doc-versions.md b/content/en/docs/home/supported-doc-versions.md index bd368b2b54..b955f95f56 100644 --- a/content/en/docs/home/supported-doc-versions.md +++ b/content/en/docs/home/supported-doc-versions.md @@ -1,30 +1,12 @@ --- -title: Supported Versions of the Kubernetes Documentation -content_type: concept +title: Available Documentation Versions +content_type: custom +layout: supported-versions card: name: about weight: 10 - title: Supported Versions of the Documentation + title: Available Documentation Versions --- - - This website contains documentation for the current version of Kubernetes and the four previous versions of Kubernetes. - - - - - -## Current version - -The current version is -[{{< param "version" >}}](/). 
- -## Previous versions - -{{< versions-other >}} - - - - diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index fda7119caf..29639ac161 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -163,10 +163,11 @@ storage classes and how to mark a storage class as default. ### DefaultTolerationSeconds {#defaulttolerationseconds} This admission controller sets the default forgiveness toleration for pods to tolerate -the taints `notready:NoExecute` and `unreachable:NoExecute` for 5 minutes, -if the pods don't already have toleration for taints -`node.kubernetes.io/not-ready:NoExecute` or -`node.alpha.kubernetes.io/unreachable:NoExecute`. +the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters +`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already +have toleration for taints `node.kubernetes.io/not-ready:NoExecute` or +`node.kubernetes.io/unreachable:NoExecute`. +The default value for `default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` is 5 minutes. ### DenyExecOnPrivileged {#denyexeconprivileged} @@ -202,8 +203,6 @@ is recommended instead. This admission controller mitigates the problem where the API server gets flooded by event requests. 
The cluster admin can specify event rate limits by: - * Ensuring that `eventratelimit.admission.k8s.io/v1alpha1=true` is included in the - `--runtime-config` flag for the API server; * Enabling the `EventRateLimit` admission controller; * Referencing an `EventRateLimit` configuration file from the file provided to the API server's command line flag `--admission-control-config-file`: diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index d05ef76acd..efdc9026aa 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -443,7 +443,7 @@ current-context: webhook contexts: - context: cluster: name-of-remote-authn-service - user: name-of-api-sever + user: name-of-api-server name: webhook ``` diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index 4eb1705f4c..b255aec0d4 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -108,7 +108,6 @@ Kubernetes provides built-in signers that each have a well-known `signerName`: 1. `kubernetes.io/legacy-unknown`: has no guarantees for trust at all. Some distributions may honor these as client certs, but that behavior is not standard Kubernetes behavior. - This signerName can only be requested in CertificateSigningRequests created via the `certificates.k8s.io/v1beta1` API version. Never auto-approved by {{< glossary_tooltip term_id="kube-controller-manager" >}}. 1. Trust distribution: None. There is no standard trust or distribution for this signer in a Kubernetes cluster. 1. 
Permitted subjects - any @@ -245,7 +244,7 @@ Create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kub ``` cat <` **must** match precisely the name of the node as registered by the kubelet. By default, this is the host name as provided by `hostname`, or overridden via the [kubelet option](/docs/reference/command-line-tools-reference/kubelet/) `--hostname-override`. However, when using the `--cloud-provider` kubelet option, the specific hostname may be determined by the cloud provider, ignoring the local `hostname` and the `--hostname-override` option. -For specifics about how the kubelet determines the hostname, as well as cloud provider overrides, see the [kubelet options reference](/docs/reference/command-line-tools-reference/kubelet/) and the [cloud provider details](/docs/concepts/cluster-administration/cloud-providers/). +For specifics about how the kubelet determines the hostname, see the [kubelet options reference](/docs/reference/command-line-tools-reference/kubelet/). To enable the Node authorizer, start the apiserver with `--authorization-mode=Node`. 
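The Node authorization mode described above is, in practice, usually enabled alongside the `NodeRestriction` admission plugin. A minimal, illustrative fragment of a kube-apiserver invocation (other required flags omitted; the flag values here are assumptions for a generic cluster, not a complete or authoritative command line):

```bash
# Illustrative kube-apiserver flags only, not a complete command.
# Node is listed before RBAC, as in typical kubeadm-style setups, so
# kubelet requests are evaluated by the Node authorizer first.
kube-apiserver \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction
```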
diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 2be833826c..6dea2e0d31 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -891,6 +891,7 @@ rules: - apiGroups: ["rbac.authorization.k8s.io"] resources: ["clusterroles"] verbs: ["bind"] + # omit resourceNames to allow binding any ClusterRole resourceNames: ["admin","edit","view"] --- apiVersion: rbac.authorization.k8s.io/v1 diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 806d7a2021..eca4a65b87 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -423,7 +423,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `CustomResourceWebhookConversion`: Enable webhook-based conversion on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). troubleshoot a running Pod. -- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/monitoring.md). +- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/). - `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/) based resource provisioning on nodes. 
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do @@ -475,9 +475,6 @@ Each feature gate is designed for enabling/disabling a specific feature: - `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`. - `LocalStorageCapacityIsolation`: Enable the consumption of [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and also the `sizeLimit` property of an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir). - `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation` is enabled for [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy. - [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and the backing filesystem for - [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor - [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy. - `MountContainers`: Enable using utility containers on host as the volume mounter. - `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods. For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation). 
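Feature gates like those listed above are toggled per component with the `--feature-gates` flag, or for the kubelet via its configuration file. A sketch, assuming a release in which these gates exist (gate names are taken from the list above; the boolean values are purely illustrative):

```yaml
# KubeletConfiguration fragment: featureGates maps gate names to booleans.
# This file is passed to the kubelet via its --config flag.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  LocalStorageCapacityIsolation: true
  DefaultPodTopologySpread: false
```

The equivalent command-line form is a comma-separated list, for example `--feature-gates=LocalStorageCapacityIsolation=true,DefaultPodTopologySpread=false`.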
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index 54dc1a84d9..2a123796a1 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -42,21 +42,21 @@ kubelet [flags] --add-dir-header -If true, adds the file directory to the header +If true, adds the file directory to the header of the log messages ---address 0.0.0.0 +--address ip     Default: 0.0.0.0 -The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The IP address for the Kubelet to serve on (set to `0.0.0.0` for all IPv4 interfaces and `::` for all IPv6 interfaces) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --allowed-unsafe-sysctls strings -Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in `*`). Use these at your own risk. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
@@ -67,116 +67,108 @@ kubelet [flags] ---anonymous-auth +--anonymous-auth     Default: true -Enables anonymous requests to the Kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated. (default true) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enables anonymous requests to the Kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of `system:anonymous`, and a group name of `system:unauthenticated`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---application-metrics-count-limit int +--application-metrics-count-limit int     Default: 100 - Max number of application metrics to store (per container) (default 100) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns,it will follow the standard CLI deprecation timeline before being removed.) +Max number of application metrics to store (per container) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns,it will follow the standard CLI deprecation timeline before being removed.) --authentication-token-webhook -Use the TokenReview API to determine authentication for bearer tokens. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
+Use the `TokenReview` API to determine authentication for bearer tokens. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---authentication-token-webhook-cache-ttl duration +--authentication-token-webhook-cache-ttl duration     Default: `2m0s` -The duration to cache responses from the webhook token authenticator. (default 2m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's ---config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The duration to cache responses from the webhook token authenticator. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --authorization-mode string -Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (default "AlwaysAllow" when --config flag is not provided; "Webhook" when --config flag presents.) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Authorization mode for Kubelet server. Valid options are `AlwaysAllow` or `Webhook`. `Webhook` mode uses the `SubjectAccessReview` API to determine authorization. (default "AlwaysAllow" when `--config` flag is not provided; "Webhook" when `--config` flag presents.) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
---authorization-webhook-cache-authorized-ttl duration +--authorization-webhook-cache-authorized-ttl duration     Default: `5m0s` -The duration to cache 'authorized' responses from the webhook authorizer. (default 5m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The duration to cache 'authorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---authorization-webhook-cache-unauthorized-ttl duration +--authorization-webhook-cache-unauthorized-ttl duration     Default: `30s` -The duration to cache 'unauthorized' responses from the webhook authorizer. (default 30s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The duration to cache 'unauthorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --azure-container-registry-config string -Path to the file container Azure container registry configuration information. +Path to the file containing Azure container registry configuration information. ---boot-id-file string +--boot-id-file string     Default: `/proc/sys/kernel/random/boot_id` -Comma-separated list of files to check for boot-id. Use the first one that exists. (default "/proc/sys/kernel/random/boot_id") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. 
Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) - - - ---bootstrap-checkpoint-path string - - -<Warning: Alpha feature> Path to the directory where the checkpoints are stored +Comma-separated list of files to check for `boot-id`. Use the first one that exists. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) --bootstrap-kubeconfig string -Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by --kubeconfig does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated client certificate and key is written to the path specified by --kubeconfig. The client certificate and key file will be stored in the directory pointed by --cert-dir. +Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by `--kubeconfig` does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated client certificate and key is written to the path specified by `--kubeconfig`. The client certificate and key file will be stored in the directory pointed by `--cert-dir`. ---cert-dir string +--cert-dir string     Default: `/var/lib/kubelet/pki` -The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "/var/lib/kubelet/pki") +The directory where the TLS certs are located. If `--tls-cert-file` and `--tls-private-key-file` are provided, this flag will be ignored. ---cgroup-driver string +--cgroup-driver string     Default: `cgroupfs` -Driver that the kubelet uses to manipulate cgroups on the host. 
Possible values: 'cgroupfs', 'systemd' (default "cgroupfs") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)/td> +Driver that the kubelet uses to manipulate cgroups on the host. Possible values: `cgroupfs`, `systemd`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)/td> ---cgroup-root string +--cgroup-root string     Default: `''` Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---cgroups-per-qos +--cgroups-per-qos     Default: `true` -Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (default true) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --chaos-chance float -If > 0.0, introduce random client errors and latency. Intended for testing. +If > 0.0, introduce random client errors and latency. Intended for testing. (DEPRECATED: will be removed in a future version.) 
@@ -190,49 +182,49 @@ kubelet [flags] --cloud-config string -The path to the cloud provider configuration file. +The path to the cloud provider configuration file. Empty string for no configuration file. (DEPRECATED: will be removed in 1.23, in favor of removing cloud providers code from Kubelet.) --cloud-provider string -The provider for cloud services. Specify empty string for running with no cloud provider. If set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). +The provider for cloud services. Set to empty string for running with no cloud provider. If set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). (DEPRECATED: will be removed in 1.23, in favor of removing cloud provider code from Kubelet.) --cluster-dns strings -Comma-separated list of DNS server IP address. This value is used for containers DNS server in case of Pods with "dnsPolicy=ClusterFirst". Note: all DNS servers appearing in the list MUST serve the same set of records otherwise name resolution within the cluster may not work correctly. There is no guarantee as to which DNS server may be contacted for name resolution. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated list of DNS server IP address. This value is used for containers DNS server in case of Pods with "dnsPolicy=ClusterFirst". Note: all DNS servers appearing in the list MUST serve the same set of records otherwise name resolution within the cluster may not work correctly. There is no guarantee as to which DNS server may be contacted for name resolution. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --cluster-domain string -Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---cni-bin-dir string +--cni-bin-dir string     Default: `/opt/cni/bin` -<Warning: Alpha feature> A comma-separated list of full paths of directories in which to search for CNI plugin binaries. This docker-specific flag only works when container-runtime is set to docker. (default "/opt/cni/bin") +<Warning: Alpha feature> A comma-separated list of full paths of directories in which to search for CNI plugin binaries. This docker-specific flag only works when container-runtime is set to `docker`. ---cni-cache-dir string +--cni-cache-dir string     Default: `/var/lib/cni/cache` -<Warning: Alpha feature> The full path of the directory in which CNI should store cache files. This docker-specific flag only works when container-runtime is set to docker. (default "/var/lib/cni/cache") +<Warning: Alpha feature> The full path of the directory in which CNI should store cache files. This docker-specific flag only works when container-runtime is set to `docker`. ---cni-conf-dir string +--cni-conf-dir string     Default: `/etc/cni/net.d` -<Warning: Alpha feature> The full path of the directory in which to search for CNI config files. 
This docker-specific flag only works when container-runtime is set to docker. (default "/etc/cni/net.d") +<Warning: Alpha feature> The full path of the directory in which to search for CNI config files. This docker-specific flag only works when container-runtime is set to `docker`. @@ -243,94 +235,94 @@ kubelet [flags] ---container-hints string +--container-hints string     Default: `/etc/cadvisor/container_hints.json` -location of the container hints file (default "/etc/cadvisor/container_hints.json") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +location of the container hints file. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---container-log-max-files int32 +--container-log-max-files int32     Default: 5 -<Warning: Beta feature> Set the maximum number of container log files that can be present for a container. The number must be ≥ 2. This flag can only be used with --container-runtime=remote. (default 5) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Beta feature> Set the maximum number of container log files that can be present for a container. The number must be ≥ 2. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---container-log-max-size string +--container-log-max-size string     Default: `10Mi` -<Warning: Beta feature> Set the maximum size (e.g. 10Mi) of container log file before it is rotated. 
This flag can only be used with --container-runtime=remote. (default "10Mi") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Beta feature> Set the maximum size (e.g. `10Mi`) of a container log file before it is rotated. This flag can only be used with `--container-runtime=remote`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---container-runtime string +--container-runtime string     Default: `docker` -The container runtime to use. Possible values: 'docker', 'remote', 'rkt(deprecated)'. (default "docker") +The container runtime to use. Possible values: `docker`, `remote`. ---container-runtime-endpoint string +--container-runtime-endpoint string     Default: `unix:///var/run/dockershim.sock` -[Experimental] The endpoint of remote runtime service. Currently unix socket endpoint is supported on Linux, while npipe and tcp endpoints are supported on windows. Examples:'unix:///var/run/dockershim.sock', 'npipe:////./pipe/dockershim' (default "unix:///var/run/dockershim.sock") +[Experimental] The endpoint of the remote runtime service. Currently unix socket endpoints are supported on Linux, while npipe and tcp endpoints are supported on Windows. Examples: `unix:///var/run/dockershim.sock`, `npipe:////./pipe/dockershim`. ---containerd string +--containerd string     Default: `/run/containerd/containerd.sock` - containerd endpoint (default "/run/containerd/containerd.sock") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +The `containerd` endpoint.
(DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) --contention-profiling -Enable lock contention profiling, if profiling is enabled (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enable lock contention profiling, if profiling is enabled (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---cpu-cfs-quota +--cpu-cfs-quota     Default: `true` -Enable CPU CFS quota enforcement for containers that specify CPU limits (default true) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enable CPU CFS quota enforcement for containers that specify CPU limits (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---cpu-cfs-quota-period duration +--cpu-cfs-quota-period duration     Default: `100ms` -Sets CPU CFS quota period value, cpu.cfs_period_us, defaults to Linux Kernel default (default 100ms) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Sets CPU CFS quota period value, `cpu.cfs_period_us`, defaults to Linux Kernel default. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---cpu-manager-policy string +--cpu-manager-policy string     Default: `none` -CPU Manager policy to use. Possible values: 'none', 'static'. Default: 'none' (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +CPU Manager policy to use. Possible values: `none`, `static`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---cpu-manager-reconcile-period NodeStatusUpdateFrequency +--cpu-manager-reconcile-period duration     Default: `10s` -<Warning: Alpha feature> CPU Manager reconciliation period. Examples: '10s', or '1m'. If not supplied, defaults to NodeStatusUpdateFrequency (default 10s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Alpha feature> CPU Manager reconciliation period. Examples: `10s`, or `1m`. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---docker string +--docker string     Default: `unix:///var/run/docker.sock` -docker endpoint (default "unix:///var/run/docker.sock") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +The `docker` endpoint. 
(DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---docker-endpoint string +--docker-endpoint string     Default: `unix:///var/run/docker.sock` -Use this for the docker endpoint to communicate with. This docker-specific flag only works when container-runtime is set to docker. (default "unix:///var/run/docker.sock") +Use this for the `docker` endpoint to communicate with. This docker-specific flag only works when container-runtime is set to `docker`. @@ -348,10 +340,10 @@ kubelet [flags] ---docker-root string +--docker-root string     Default: `/var/lib/docker` -DEPRECATED: docker root is read from docker info (this is a fallback, default: /var/lib/docker) (default "/var/lib/docker") +DEPRECATED: docker root is read from docker info (this is a fallback). @@ -362,143 +354,143 @@ kubelet [flags] ---docker-tls-ca string +--docker-tls-ca string     Default: `ca.pem` -path to trusted CA (default "ca.pem") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Path to trusted CA. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---docker-tls-cert string +--docker-tls-cert string     Default: `cert.pem` -path to client certificate (default "cert.pem") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Path to client certificate. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
---docker-tls-key string +--docker-tls-key string     Default: `key.pem` -path to private key (default "key.pem") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Path to private key. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) --dynamic-config-dir string -The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The DynamicKubeletConfig feature gate must be enabled to pass this flag; this gate currently defaults to true because the feature is beta. +The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The `DynamicKubeletConfig` feature gate must be enabled to pass this flag; this gate currently defaults to `true` because the feature is beta. ---enable-cadvisor-json-endpoints +--enable-cadvisor-json-endpoints     Default: `false` -Enable cAdvisor json /spec and /stats/* endpoints. (default false) (DEPRECATED: will be removed in a future version) +Enable cAdvisor json `/spec` and `/stats/*` endpoints. 
(DEPRECATED: will be removed in a future version) ---enable-controller-attach-detach +--enable-controller-attach-detach     Default: `true` -Enables the Attach/Detach controller to manage attachment/detachment of volumes scheduled to this node, and disables kubelet from executing any attach/detach operations (default true) +Enables the Attach/Detach controller to manage attachment/detachment of volumes scheduled to this node, and disables kubelet from executing any attach/detach operations. ---enable-debugging-handlers +--enable-debugging-handlers     Default: `true` -Enables server endpoints for log collection and local running of containers and commands (default true) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Enables server endpoints for log collection and local running of containers and commands. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --enable-load-reader -Whether to enable cpu load reader (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Whether to enable CPU load reader (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---enable-server +--enable-server     Default: `true` -Enable the Kubelet's server (default true) +Enable the Kubelet's server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
---enforce-node-allocatable stringSlice +--enforce-node-allocatable strings     Default: `pods` -A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are 'none', 'pods', 'system-reserved', and 'kube-reserved'. If the latter two options are specified, '--system-reserved-cgroup' and '--kube-reserved-cgroup' must also be set, respectively. If 'none' is specified, no additional options should be set. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (default [pods]) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A comma-separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are `none`, `pods`, `system-reserved`, and `kube-reserved`. If the latter two options are specified, `--system-reserved-cgroup` and `--kube-reserved-cgroup` must also be set, respectively. If `none` is specified, no additional options should be set. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---event-burst int32 +--event-burst int32     Default: 10 -Maximum size of a bursty event records, temporarily allows event records to burst to this number, while still not exceeding event-qps. Only used if --event-qps > 0 (default 10) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
+Maximum size of a burst of event records; temporarily allows event records to burst to this number, while still not exceeding `--event-qps`. Only used if `--event-qps` > 0. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---event-qps int32 +--event-qps int32     Default: 5 -If > 0, limit event creations per second to this value. If 0, unlimited. (default 5) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If > `0`, limit event creations per second to this value. If `0`, unlimited. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---event-storage-age-limit string +--event-storage-age-limit string     Default: `default=0` -Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is a duration. Default is applied to all non-specified event types (default "default=0") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Max length of time for which to store events (per type). Value is a comma-separated list of key values, where the keys are event types (e.g.: `creation`, `oom`) or `default` and the value is a duration. Default is applied to all non-specified event types. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet.
Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---event-storage-event-limit string +--event-storage-event-limit string     Default: `default=0` -Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is an integer. Default is applied to all non-specified event types (default "default=0") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Max number of events to store (per type). Value is a comma-separated list of key values, where the keys are event types (e.g.: `creation`, `oom`) or `default` and the value is an integer. Default is applied to all non-specified event types. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---eviction-hard mapStringString +--eviction-hard mapStringString     Default: `imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%` -A set of eviction thresholds (e.g. memory.available<1Gi) that if met would trigger a pod eviction. (default imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of eviction thresholds (e.g. `memory.available<1Gi`) that if met would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
--eviction-max-pod-grace-period int32 - Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) + Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --eviction-minimum-reclaim mapStringString -A set of minimum reclaims (e.g. imagefs.available=2Gi) that describes the minimum amount of resource the kubelet will reclaim when performing a pod eviction if that resource is under pressure. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of minimum reclaims (e.g. `imagefs.available=2Gi`) that describes the minimum amount of resource the kubelet will reclaim when performing a pod eviction if that resource is under pressure. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---eviction-pressure-transition-period duration +--eviction-pressure-transition-period duration     Default: `5m0s` -Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. (default 5m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --eviction-soft mapStringString -A set of eviction thresholds (e.g. memory.available<1.5Gi) that if met over a corresponding grace period would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of eviction thresholds (e.g. `memory.available<1.5Gi`) that if met over a corresponding grace period would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --eviction-soft-grace-period mapStringString -A set of eviction grace periods (e.g. memory.available=1m30s) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of eviction grace periods (e.g. `memory.available=1m30s`) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
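The eviction flags above are all deprecated in favor of the config file passed via `--config`. Their config-file equivalents can be sketched as follows (field names are from the `kubelet.config.k8s.io/v1beta1` `KubeletConfiguration` type; the threshold values shown are only illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Hard thresholds (--eviction-hard): evict as soon as a threshold is met.
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
# Soft thresholds (--eviction-soft): evict only after the grace period below.
evictionSoft:
  memory.available: "1.5Gi"
# --eviction-soft-grace-period: how long the threshold must hold.
evictionSoftGracePeriod:
  memory.available: "1m30s"
# --eviction-max-pod-grace-period: cap on pod termination grace (seconds).
evictionMaxPodGracePeriod: 60
# --eviction-minimum-reclaim: minimum resource reclaimed per eviction.
evictionMinimumReclaim:
  imagefs.available: "2Gi"
# --eviction-pressure-transition-period
evictionPressureTransitionPeriod: "5m0s"
```

A file like this is then passed to the kubelet with `--config=<path-to-file>`.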
@@ -509,88 +501,173 @@ kubelet [flags] ---experimental-allocatable-ignore-eviction +--experimental-allocatable-ignore-eviction     Default: `false` -When set to 'true', Hard Eviction Thresholds will be ignored while calculating Node Allocatable. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. [default=false] +When set to `true`, hard eviction thresholds will be ignored while calculating node allocatable. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: will be removed in 1.23) --experimental-bootstrap-kubeconfig string -(DEPRECATED: Use --bootstrap-kubeconfig) +DEPRECATED: Use `--bootstrap-kubeconfig` --experimental-check-node-capabilities-before-mount -[Experimental] if set true, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount +[Experimental] If set to `true`, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount. (DEPRECATED: will be removed in 1.23, in favor of using CSI.) --experimental-kernel-memcg-notification -If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. +If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. This flag will be removed in 1.23. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---experimental-mounter-path string +--experimental-mounter-path string     Default: `mount` -[Experimental] Path of mounter binary. Leave empty to use the default mount. +[Experimental] Path of mounter binary. Leave empty to use the default `mount`.
(DEPRECATED: will be removed in 1.23, in favor of using CSI.) ---fail-swap-on +--fail-swap-on     Default: `true` -Makes the Kubelet fail to start if swap is enabled on the node. (default true) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Makes the Kubelet fail to start if swap is enabled on the node. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --feature-gates mapStringBool -A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
APIListChunking=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AppArmor=true|false (BETA - default=true)
AttachVolumeLimit=true|false (BETA - default=true)
BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
BlockVolume=true|false (BETA - default=true)
BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
CPUManager=true|false (BETA - default=true)
CRIContainerLogRotation=true|false (BETA - default=true) -
CSIBlockVolume=true|false (BETA - default=true)
CSIDriverRegistry=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (ALPHA - default=false)
CSIMigrationAWS=true|false (ALPHA - default=false)
CSIMigrationAzureDisk=true|false (ALPHA - default=false)
CSIMigrationAzureFile=true|false (ALPHA - default=false)
CSIMigrationGCE=true|false (ALPHA - default=false)
CSIMigrationOpenStack=true|false (ALPHA - default=false)
CSINodeInfo=true|false (BETA - default=true)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceDefaulting=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DryRun=true|false (BETA - default=true)
DynamicAuditing=true|false (ALPHA - default=false)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (ALPHA - default=false)
EphemeralContainers=true|false (ALPHA - default=false)
EvenPodsSpread=true|false (ALPHA - default=false)
ExpandCSIVolumes=true|false (BETA - default=true)
ExpandInUsePersistentVolumes=true|false (BETA - default=true)
ExpandPersistentVolumes=true|false (BETA - default=true)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HyperVContainer=true|false (ALPHA - default=false)
IPv6DualStack=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
LegacyNodeRoleBehavior=true|false (ALPHA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
MountContainers=true|false (ALPHA - default=false)
NodeDisruptionExclusion=true|false (ALPHA - default=false)
NodeLease=true|false (BETA - default=true)
NonPreemptingPriority=true|false (ALPHA - default=false)
PodOverhead=true|false (ALPHA - default=false)
PodShareProcessNamespace=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)<br/>QOSReserved=true|false (ALPHA - default=false)<br/>
RemainingItemCount=true|false (BETA - default=true)
RemoveSelfLink=true|false (ALPHA - default=false)
RequestManagement=true|false (ALPHA - default=false)
ResourceLimitsPriorityFunction=true|false (ALPHA - default=false)
ResourceQuotaScopeSelectors=true|false (BETA - default=true)
RotateKubeletClientCertificate=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
RunAsGroup=true|false (BETA - default=true)
RuntimeClass=true|false (BETA - default=true)
SCTPSupport=true|false (ALPHA - default=false)
ScheduleDaemonSetPods=true|false (BETA - default=true)
ServerSideApply=true|false (BETA - default=true)
ServiceLoadBalancerFinalizer=true|false (BETA - default=true)
ServiceNodeExclusion=true|false (ALPHA - default=false)
StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
StreamingProxyRedirects=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
Sysctls=true|false (BETA - default=true)
TTLAfterFinished=true|false (ALPHA - default=false)
TaintBasedEvictions=true|false (BETA - default=true)
TaintNodesByCondition=true|false (BETA - default=true)
TokenRequest=true|false (BETA - default=true)
TokenRequestProjection=true|false (BETA - default=true)
TopologyManager=true|false (ALPHA - default=false)
ValidateProxyRedirects=true|false (BETA - default=true)
VolumePVCDataSource=true|false (BETA - default=true)
VolumeSnapshotDataSource=true|false (ALPHA - default=false)
VolumeSubpathEnvExpansion=true|false (BETA - default=true)
WatchBookmark=true|false (BETA - default=true)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (ALPHA - default=false)
WindowsGMSA=true|false (BETA - default=true)
WindowsRunAsUserName=true|false (ALPHA - default=false) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of `key=value` pairs that describe feature gates for alpha/experimental features. Options are:
+APIListChunking=true|false (BETA - default=true)
+APIPriorityAndFairness=true|false (ALPHA - default=false)
+APIResponseCompression=true|false (BETA - default=true)
+AllAlpha=true|false (ALPHA - default=false)
+AllBeta=true|false (BETA - default=false)
+AllowInsecureBackendProxy=true|false (BETA - default=true)
+AnyVolumeDataSource=true|false (ALPHA - default=false)
+AppArmor=true|false (BETA - default=true)
+BalanceAttachedNodeVolumes=true|false (ALPHA - default=false)
+BoundServiceAccountTokenVolume=true|false (ALPHA - default=false)
+CPUManager=true|false (BETA - default=true)
+CRIContainerLogRotation=true|false (BETA - default=true)
+CSIInlineVolume=true|false (BETA - default=true)
+CSIMigration=true|false (BETA - default=true)
+CSIMigrationAWS=true|false (BETA - default=false)
+CSIMigrationAWSComplete=true|false (ALPHA - default=false)
+CSIMigrationAzureDisk=true|false (BETA - default=false)
+CSIMigrationAzureDiskComplete=true|false (ALPHA - default=false)
+CSIMigrationAzureFile=true|false (ALPHA - default=false)
+CSIMigrationAzureFileComplete=true|false (ALPHA - default=false)
+CSIMigrationGCE=true|false (BETA - default=false)
+CSIMigrationGCEComplete=true|false (ALPHA - default=false)
+CSIMigrationOpenStack=true|false (BETA - default=false)
+CSIMigrationOpenStackComplete=true|false (ALPHA - default=false)
+CSIMigrationvSphere=true|false (BETA - default=false)
+CSIMigrationvSphereComplete=true|false (BETA - default=false)
+CSIStorageCapacity=true|false (ALPHA - default=false)
+CSIVolumeFSGroupPolicy=true|false (ALPHA - default=false)
+ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
+CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
+DefaultPodTopologySpread=true|false (ALPHA - default=false)
+DevicePlugins=true|false (BETA - default=true)
+DisableAcceleratorUsageMetrics=true|false (ALPHA - default=false)
+DynamicKubeletConfig=true|false (BETA - default=true)
+EndpointSlice=true|false (BETA - default=true)
+EndpointSliceProxying=true|false (BETA - default=true)
+EphemeralContainers=true|false (ALPHA - default=false)
+ExpandCSIVolumes=true|false (BETA - default=true)
+ExpandInUsePersistentVolumes=true|false (BETA - default=true)
+ExpandPersistentVolumes=true|false (BETA - default=true)
+ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
+GenericEphemeralVolume=true|false (ALPHA - default=false)
+HPAScaleToZero=true|false (ALPHA - default=false)
+HugePageStorageMediumSize=true|false (BETA - default=true)
+HyperVContainer=true|false (ALPHA - default=false)
+IPv6DualStack=true|false (ALPHA - default=false)
+ImmutableEphemeralVolumes=true|false (BETA - default=true)
+KubeletPodResources=true|false (BETA - default=true)
+LegacyNodeRoleBehavior=true|false (BETA - default=true)
+LocalStorageCapacityIsolation=true|false (BETA - default=true)
+LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
+NodeDisruptionExclusion=true|false (BETA - default=true)
+NonPreemptingPriority=true|false (BETA - default=true)
+PodDisruptionBudget=true|false (BETA - default=true)
+PodOverhead=true|false (BETA - default=true)
+ProcMountType=true|false (ALPHA - default=false)
+QOSReserved=true|false (ALPHA - default=false)
+RemainingItemCount=true|false (BETA - default=true)
+RemoveSelfLink=true|false (ALPHA - default=false)
+RotateKubeletServerCertificate=true|false (BETA - default=true)
+RunAsGroup=true|false (BETA - default=true)
+RuntimeClass=true|false (BETA - default=true)
+SCTPSupport=true|false (BETA - default=true)
+SelectorIndex=true|false (BETA - default=true)
+ServerSideApply=true|false (BETA - default=true)
+ServiceAccountIssuerDiscovery=true|false (ALPHA - default=false)
+ServiceAppProtocol=true|false (BETA - default=true)
+ServiceNodeExclusion=true|false (BETA - default=true)
+ServiceTopology=true|false (ALPHA - default=false)
+SetHostnameAsFQDN=true|false (ALPHA - default=false)
+StartupProbe=true|false (BETA - default=true)
+StorageVersionHash=true|false (BETA - default=true)
+SupportNodePidsLimit=true|false (BETA - default=true)
+SupportPodPidsLimit=true|false (BETA - default=true)
+Sysctls=true|false (BETA - default=true)
+TTLAfterFinished=true|false (ALPHA - default=false)
+TokenRequest=true|false (BETA - default=true)
+TokenRequestProjection=true|false (BETA - default=true)
+TopologyManager=true|false (BETA - default=true)
+ValidateProxyRedirects=true|false (BETA - default=true)
+VolumeSnapshotDataSource=true|false (BETA - default=true)
+WarningHeaders=true|false (BETA - default=true)
+WinDSR=true|false (ALPHA - default=false)
+WinOverlay=true|false (ALPHA - default=false)
+WindowsEndpointSliceProxying=true|false (ALPHA - default=false)
+(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---file-check-frequency duration +--file-check-frequency duration     Default: `20s` -Duration between checking config files for new data (default 20s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Duration between checking config files for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---global-housekeeping-interval duration +--global-housekeeping-interval duration     Default: `1m0s` -Interval between global housekeepings (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Interval between global housekeepings. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---hairpin-mode string +--hairpin-mode string     Default: `promiscuous-bridge` -How should the kubelet setup hairpin NAT. This allows endpoints of a Service to loadbalance back to themselves if they should try to access their own Service. Valid values are "promiscuous-bridge", "hairpin-veth" and "none". (default "promiscuous-bridge") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +How should the kubelet set up hairpin NAT.
This allows endpoints of a Service to load balance back to themselves if they should try to access their own Service. Valid values are `promiscuous-bridge`, `hairpin-veth` and `none`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---healthz-bind-address 0.0.0.0 +--healthz-bind-address ip     Default: `127.0.0.1` -The IP address for the healthz server to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 127.0.0.1) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The IP address for the healthz server to serve on (set to `0.0.0.0` for all IPv4 interfaces and `::` for all IPv6 interfaces). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---healthz-port int32 +--healthz-port int32     Default: 10248 -The port of the localhost healthz endpoint (set to 0 to disable) (default 10248) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The port of the localhost healthz endpoint (set to `0` to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) @@ -604,119 +681,126 @@ kubelet [flags] --hostname-override string -If non-empty, will use this string as identification instead of the actual hostname. 
If --cloud-provider is set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). +If non-empty, will use this string as identification instead of the actual hostname. If `--cloud-provider` is set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). ---housekeeping-interval duration +--housekeeping-interval duration     Default: `10s` -Interval between container housekeepings (default 10s) +Interval between container housekeepings. ---http-check-frequency duration +--http-check-frequency duration     Default: `20s` -Duration between checking http for new data (default 20s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Duration between checking HTTP for new data. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---image-gc-high-threshold int32 +--image-gc-high-threshold int32     Default: 85 -The percent of disk usage after which image garbage collection is always run. Values must be within the range [0, 100], To disable image garbage collection, set to 100. (default 85) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The percent of disk usage after which image garbage collection is always run. Values must be within the range [0, 100]. To disable image garbage collection, set to 100. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---image-gc-low-threshold int32 +--image-gc-low-threshold int32     Default: 80 -The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of --image-gc-high-threshold. (default 80) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of `--image-gc-high-threshold`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---image-pull-progress-deadline duration +--image-pull-progress-deadline duration     Default: `1m0s` -If no pulling progress is made before this deadline, the image pulling will be cancelled. This docker-specific flag only works when container-runtime is set to docker. (default 1m0s) +If no pulling progress is made before this deadline, the image pulling will be cancelled. This docker-specific flag only works when container-runtime is set to `docker`. --image-service-endpoint string -[Experimental] The endpoint of remote image service. If not specified, it will be the same with container-runtime-endpoint by default. Currently unix socket endpoint is supported on Linux, while npipe and tcp endpoints are supported on windows. Examples:'unix:///var/run/dockershim.sock', 'npipe:////./pipe/dockershim' +[Experimental] The endpoint of remote image service. 
If not specified, it will be the same as `--container-runtime-endpoint` by default. Currently UNIX socket endpoint is supported on Linux, while npipe and TCP endpoints are supported on Windows. Examples: `unix:///var/run/dockershim.sock`, `npipe:////./pipe/dockershim` ---iptables-drop-bit int32 +--iptables-drop-bit int32     Default: 15 -The bit of the fwmark space to mark packets for dropping. Must be within the range [0, 31]. (default 15) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The bit of the `fwmark` space to mark packets for dropping. Must be within the range [0, 31]. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---iptables-masquerade-bit int32 +--iptables-masquerade-bit int32     Default: 14 -The bit of the fwmark space to mark packets for SNAT. Must be within the range [0, 31]. Please match this parameter with corresponding parameter in kube-proxy. (default 14) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The bit of the `fwmark` space to mark packets for SNAT. Must be within the range [0, 31]. Please match this parameter with the corresponding parameter in `kube-proxy`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --keep-terminated-pod-volumes -Keep terminated pod volumes mounted to the node after the pod terminates. Can be useful for debugging volume related issues.
(DEPRECATED: will be removed in a future version) - - - ---kube-api-burst int32 - - - Burst to use while talking with kubernetes apiserver (default 10) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Keep terminated pod volumes mounted to the node after the pod terminates. Can be useful for debugging volume related issues. (DEPRECATED: will be removed in a future version) ---kube-api-content-type string +--kernel-memcg-notification -Content type of requests sent to apiserver. (default "application/vnd.kubernetes.protobuf") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---kube-api-qps int32 +--kube-api-burst int32     Default: 10 -QPS to use while talking with kubernetes apiserver (default 5) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Burst to use while talking with the Kubernetes API server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
---kube-reserved mapStringString +--kube-api-content-type string     Default: `application/vnd.kubernetes.protobuf` -A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for kubernetes system components. Currently cpu, memory and local ephemeral storage for root file system are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Content type of requests sent to the API server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---kube-reserved-cgroup string +--kube-api-qps int32     Default: 5 -Absolute name of the top level cgroup that is used to manage kubernetes components for which compute resources were reserved via '--kube-reserved' flag. Ex. '/kube-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +QPS to use while talking with the Kubernetes API server. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) + + + +--kube-reserved mapStringString     Default: <None> + + +A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi`) pairs that describe resources reserved for kubernetes system components. Currently `cpu`, `memory` and local `ephemeral-storage` for root file system are supported.
See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) + + + +--kube-reserved-cgroup string     Default: `''` + + +Absolute name of the top level cgroup that is used to manage kubernetes components for which compute resources were reserved via the `--kube-reserved` flag. Ex. `/kube-reserved`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --kubeconfig string -Path to a kubeconfig file, specifying how to connect to the API server. Providing --kubeconfig enables API server mode, omitting --kubeconfig enables standalone mode. +Path to a kubeconfig file, specifying how to connect to the API server. Providing `--kubeconfig` enables API server mode, omitting `--kubeconfig` enables standalone mode. --kubelet-cgroups string -Optional absolute name of cgroups to create and run the Kubelet in. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Optional absolute name of cgroups to create and run the Kubelet in. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) @@ -727,10 +811,10 @@ kubelet [flags] ---log-backtrace-at traceLocation +--log-backtrace-at traceLocation     Default: `:0` -when logging hits line file:N, emit a stack trace (default :0) +When logging hits line `<file>:<N>`, emit a stack trace.
@@ -755,114 +839,121 @@ kubelet [flags] ---log-file-max-size uint +--log-file-max-size uint     Default: 1800 -Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) +Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. ---log-flush-frequency duration +--log-flush-frequency duration     Default: `5s` -Maximum number of seconds between log flushes (default 5s) +Maximum number of seconds between log flushes. ---logtostderr +--logging-format string     Default: `text` -log to standard error instead of files (default true) +Sets the log format. Permitted formats: `text`, `json`. Non-default formats don't honor these flags: `--add-dir-header`, `--alsologtostderr`, `--log-backtrace-at`, `--log_dir`, `--log-file`, `--log-file-max-size`, `--logtostderr`, `--skip_headers`, `--skip_log_headers`, `--stderrthreshold`, `--log-flush-frequency`. Non-default choices are currently alpha and subject to change without warning. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---machine-id-file string +--logtostderr     Default: `true` -Comma-separated list of files to check for machine-id. Use the first one that exists. (default "/etc/machine-id,/var/lib/dbus/machine-id") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Log to standard error instead of files. ---make-iptables-util-chains +--machine-id-file string     Default: `/etc/machine-id,/var/lib/dbus/machine-id` -If true, kubelet will ensure iptables utility rules are present on host.
(default true) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated list of files to check for `machine-id`. Use the first one that exists. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) + + + +--make-iptables-util-chains     Default: `true` + + +If true, kubelet will ensure `iptables` utility rules are present on host. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --manifest-url string - URL for accessing additional Pod specifications to run (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +URL for accessing additional Pod specifications to run (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---manifest-url-header --manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful' +--manifest-url-header string -Comma-separated list of HTTP headers to use when accessing the url provided to --manifest-url. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. For example: --manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful' (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated list of HTTP headers to use when accessing the URL provided to `--manifest-url`. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. For example: `--manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful'` (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---master-service-namespace string +--master-service-namespace string     Default: `default` -The namespace from which the kubernetes master services should be injected into pods (default "default") (DEPRECATED: This flag will be removed in a future version.) +The namespace from which the kubernetes master services should be injected into pods. (DEPRECATED: This flag will be removed in a future version.) ---max-open-files int +--max-open-files int     Default: 1000000 -Number of files that can be opened by Kubelet process. (default 1000000) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Number of files that can be opened by Kubelet process. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---max-pods int32 +--max-pods int32     Default: 110 -Number of Pods that can run on this Kubelet. (default 110) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Number of Pods that can run on this Kubelet. 
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---maximum-dead-containers int32 +--maximum-dead-containers int32     Default: -1 -Maximum number of old instances of containers to retain globally. Each container takes up some disk space. To disable, set to a negative number. (default -1) (DEPRECATED: Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.) +Maximum number of old instances of containers to retain globally. Each container takes up some disk space. To disable, set to a negative number. (DEPRECATED: Use `--eviction-hard` or `--eviction-soft` instead. Will be removed in a future version.) ---maximum-dead-containers-per-container int32 +--maximum-dead-containers-per-container int32     Default: 1 -Maximum number of old instances to retain per container. Each container takes up some disk space. (default 1) (DEPRECATED: Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.) +Maximum number of old instances to retain per container. Each container takes up some disk space. (DEPRECATED: Use `--eviction-hard` or `--eviction-soft` instead. Will be removed in a future version.) --minimum-container-ttl-duration duration -Minimum age for a finished container before it is garbage collected. Examples: '300ms', '10s' or '2h45m' (DEPRECATED: Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.) +Minimum age for a finished container before it is garbage collected. Examples: `300ms`, `10s` or `2h45m` (DEPRECATED: Use `--eviction-hard` or `--eviction-soft` instead. Will be removed in a future version.) ---minimum-image-ttl-duration duration +--minimum-image-ttl-duration duration     Default: `2m0s` -Minimum age for an unused image before it is garbage collected. Examples: '300ms', '10s' or '2h45m'. 
(default 2m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Minimum age for an unused image before it is garbage collected. Examples: `300ms`, `10s` or `2h45m`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --network-plugin string -<Warning: Alpha feature> The name of the network plugin to be invoked for various events in kubelet/pod lifecycle. This docker-specific flag only works when container-runtime is set to docker. +<Warning: Alpha feature> The name of the network plugin to be invoked for various events in kubelet/pod lifecycle. This docker-specific flag only works when container-runtime is set to `docker`. --network-plugin-mtu int32 -<Warning: Alpha feature> The MTU to be passed to the network plugin, to override the default. Set to 0 to use the default 1460 MTU. This docker-specific flag only works when container-runtime is set to docker. +<Warning: Alpha feature> The MTU to be passed to the network plugin, to override the default. Set to `0` to use the default 1460 MTU. This docker-specific flag only works when container-runtime is set to `docker`. @@ -876,189 +967,196 @@ kubelet [flags] --node-labels mapStringString -<Warning: Alpha feature> Labels to add when registering the node in the cluster. Labels must be key=value pairs separated by ','. 
Labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os) +<Warning: Alpha feature> Labels to add when registering the node in the cluster. Labels must be `key=value` pairs separated by `,`. Labels in the `kubernetes.io` namespace must begin with an allowed prefix (`kubelet.kubernetes.io`, `node.kubernetes.io`) or be in the specifically allowed set (`beta.kubernetes.io/arch`, `beta.kubernetes.io/instance-type`, `beta.kubernetes.io/os`, `failure-domain.beta.kubernetes.io/region`, `failure-domain.beta.kubernetes.io/zone`, `failure-domain.kubernetes.io/region`, `failure-domain.kubernetes.io/zone`, `kubernetes.io/arch`, `kubernetes.io/hostname`, `kubernetes.io/instance-type`, `kubernetes.io/os`) ---node-status-max-images int32 +--node-status-max-images int32     Default: 50 -<Warning: Alpha feature> The maximum number of images to report in Node.Status.Images. If -1 is specified, no cap will be applied. (default 50) +The maximum number of images to report in `node.status.images`. If `-1` is specified, no cap will be applied. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---node-status-update-frequency duration +--node-status-update-frequency duration     Default: `10s` -Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in nodecontroller. 
(default 10s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant; it must work with nodeMonitorGracePeriod in the Node controller. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---non-masquerade-cidr string +--non-masquerade-cidr string     Default: `10.0.0.0/8` -Traffic to IPs outside this range will use IP masquerade. Set to '0.0.0.0/0' to never masquerade. (default "10.0.0.0/8") (DEPRECATED: will be removed in a future version) +Traffic to IPs outside this range will use IP masquerade. Set to `0.0.0.0/0` to never masquerade. (DEPRECATED: will be removed in a future version) ---oom-score-adj int32 +--oom-score-adj int32     Default: -999 -The oom-score-adj value for kubelet process. Values must be within the range [-1000, 1000] (default -999) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The oom-score-adj value for kubelet process. Values must be within the range [-1000, 1000]. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --pod-cidr string -The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. For IPv6, the maximum number of IP's allocated is 65536 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. For IPv6, the maximum number of IPs allocated is 65536. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---pod-infra-container-image string +--pod-infra-container-image string     Default: `k8s.gcr.io/pause:3.2` -The image whose network/ipc namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to docker. (default "k8s.gcr.io/pause:3.2") +The image whose network/IPC namespaces containers in each pod will use. This docker-specific flag only works when container-runtime is set to `docker`. --pod-manifest-path string -Path to the directory containing static pod files to run, or the path to a single static pod file. Files starting with dots will be ignored. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Path to the directory containing static pod files to run, or the path to a single static pod file. Files starting with dots will be ignored. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---pod-max-pids int +--pod-max-pids int     Default: -1 -Set the maximum number of processes per pod. If -1, the kubelet defaults to the node allocatable pid capacity. (default -1) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Set the maximum number of processes per pod. If `-1`, the kubelet defaults to the node allocatable PID capacity. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --pods-per-core int32 -Number of Pods per core that can run on this Kubelet. The total number of Pods on this Kubelet cannot exceed max-pods, so max-pods will be used if this calculation results in a larger number of Pods allowed on the Kubelet. A value of 0 disables this limit. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Number of Pods per core that can run on this Kubelet. The total number of Pods on this Kubelet cannot exceed `--max-pods`, so `--max-pods` will be used if this calculation results in a larger number of Pods allowed on the Kubelet. A value of `0` disables this limit. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---port int32 +--port int32     Default: 10250 -The port for the Kubelet to serve on. (default 10250) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The port for the Kubelet to serve on. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
--protect-kernel-defaults - Default kubelet behaviour for kernel tuning. If set, kubelet errors if any of kernel tunables is different than kubelet defaults. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) + Default kubelet behaviour for kernel tuning. If set, the kubelet errors if any of the kernel tunables differ from the kubelet defaults. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --provider-id string -Unique identifier for identifying the node in a machine database, i.e cloudprovider +Unique identifier for identifying the node in a machine database, i.e. the cloud provider. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --qos-reserved mapStringString -<Warning: Alpha feature> A set of ResourceName=Percentage (e.g. memory=50%) pairs that describe how pod resource requests are reserved at the QoS level. Currently only memory is supported. Requires the QOSReserved feature gate to be enabled. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Alpha feature> A set of `<resource name>=<percentage>` (e.g. `memory=50%`) pairs that describe how pod resource requests are reserved at the QoS level. Currently only memory is supported. Requires the `QOSReserved` feature gate to be enabled. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---read-only-port int32 +--read-only-port int32     Default: 10255 -The read-only port for the Kubelet to serve on with no authentication/authorization (set to 0 to disable) (default 10255) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +The read-only port for the Kubelet to serve on with no authentication/authorization (set to `0` to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --really-crash-for-testing -If true, when panics occur crash. Intended for testing. +If `true`, crash when panics occur. Intended for testing. (DEPRECATED: will be removed in a future version.) --redirect-container-streaming -Enables container streaming redirect. If false, kubelet will proxy container streaming data between apiserver and container runtime; if true, kubelet will return an http redirect to apiserver, and apiserver will access container runtime directly. The proxy approach is more secure, but introduces some overhead. The redirect approach is more performant, but less secure because the connection between apiserver and container runtime may not be authenticated. +Enables container streaming redirect. If `false`, the kubelet will proxy container streaming data between the API server and container runtime; if `true`, the kubelet will return an HTTP redirect to the API server, and the API server will access the container runtime directly. The proxy approach is more secure, but introduces some overhead. The redirect approach is more performant, but less secure because the connection between the API server and container runtime may not be authenticated.
(DEPRECATED: Container streaming redirection will be removed from the kubelet in v1.20, and this flag will be removed in v1.22. For more details, see http://git.k8s.io/enhancements/keps/sig-node/20191205-container-streaming-requests.md) --register-node -Register the node with the apiserver. If --kubeconfig is not provided, this flag is irrelevant, as the Kubelet won't have an apiserver to register with. Default=true. (default true) +Register the node with the API server. If `--kubeconfig` is not provided, this flag is irrelevant, as the Kubelet won't have an API server to register with. Defaults to `true`. ---register-schedulable +--register-schedulable     Default: `true` -Register the node as schedulable. Won't have any effect if register-node is false. (default true) (DEPRECATED: will be removed in a future version) +Register the node as schedulable. Won't have any effect if `--register-node` is `false`. (DEPRECATED: will be removed in a future version) --register-with-taints []api.Taint -Register the node with the given list of taints (comma separated "=:"). No-op if register-node is false. +Register the node with the given list of taints (comma-separated `<key>=<value>:<effect>`). No-op if `--register-node` is `false`. ---registry-burst int32 +--registry-burst int32     Default: 10 -Maximum size of a bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding registry-qps. Only used if --registry-qps > 0 (default 10) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Maximum size of bursty pulls; temporarily allows pulls to burst to this number, while still not exceeding `--registry-qps`. Only used if `--registry-qps > 0`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---registry-qps int32 +--registry-qps int32     Default: 5 -If > 0, limit registry pull QPS to this value. If 0, unlimited. (default 5) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +If > 0, limit registry pull QPS to this value. If `0`, unlimited. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---resolv-conf string +--reserved-cpus string -Resolver configuration file used as the basis for the container DNS resolution configuration. (default "/etc/resolv.conf") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A comma-separated list of CPUs or CPU ranges that are reserved for system and Kubernetes usage. This specific list will supersede the CPU counts in `--system-reserved` and `--kube-reserved`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---root-dir string +--resolv-conf string     Default: `/etc/resolv.conf` -Directory path for managing kubelet files (volume mounts,etc). (default "/var/lib/kubelet") +Resolver configuration file used as the basis for the container DNS resolution configuration. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
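The `--reserved-cpus` entry above takes the standard cpuset list format, such as `0-3,8,10-11`. A minimal Python sketch of parsing that format, for illustration only (the helper name `parse_cpu_list` is invented here, not part of the kubelet):

```python
def parse_cpu_list(spec: str) -> set:
    """Parse a cpuset-style list such as '0-3,8,10-11' into a set of CPU ids.
    Each comma-separated part is either a single CPU or an inclusive range."""
    cpus = set()
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-", 1)
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

print(sorted(parse_cpu_list("0-3,8,10-11")))  # → [0, 1, 2, 3, 8, 10, 11]
```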
+ + + +--root-dir string     Default: `/var/lib/kubelet` + + +Directory path for managing kubelet files (volume mounts, etc). --rotate-certificates -<Warning: Beta feature> Auto rotate the kubelet client certificates by requesting new certificates from the kube-apiserver when the certificate expiration approaches. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +<Warning: Beta feature> Auto rotate the kubelet client certificates by requesting new certificates from the `kube-apiserver` when the certificate expiration approaches. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --rotate-server-certificates -Auto-request and rotate the kubelet serving certificates by requesting new certificates from the kube-apiserver when the certificate expiration approaches. Requires the RotateKubeletServerCertificate feature gate to be enabled, and approval of the submitted CertificateSigningRequest objects. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Auto-request and rotate the kubelet serving certificates by requesting new certificates from the `kube-apiserver` when the certificate expiration approaches. Requires the `RotateKubeletServerCertificate` feature gate to be enabled, and approval of the submitted `CertificateSigningRequest` objects. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
--runonce -If true, exit after spawning pods from local manifests or remote urls. Exclusive with --enable-server +If `true`, exit after spawning pods from local manifests or remote URLs. Exclusive with `--enable-server`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) @@ -1069,172 +1167,176 @@ kubelet [flags] ---runtime-request-timeout duration +--runtime-request-timeout duration     Default: `2m0s` -Timeout of all runtime requests except long running request - pull, logs, exec and attach. When timeout exceeded, kubelet will cancel the request, throw out an error and retry later. (default 2m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Timeout of all runtime requests except long-running requests such as `pull`, `logs`, `exec` and `attach`. When the timeout is exceeded, the kubelet will cancel the request, throw an error and retry later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---seccomp-profile-root string +--seccomp-profile-root string     Default: `/var/lib/kubelet/seccomp` -<Warning: Alpha feature> Directory path for seccomp profiles. (default "/var/lib/kubelet/seccomp") +<Warning: Alpha feature> Directory path for seccomp profiles. (DEPRECATED: will be removed in 1.23, in favor of using the `<root-dir>/seccomp` directory) + ---serialize-image-pulls +--serialize-image-pulls     Default: `true` -Pull images one at a time. We recommend *not* changing the default value on nodes that run docker daemon with version < 1.9 or an Aufs storage backend. Issue #10959 has more details.
(default true) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Pull images one at a time. We recommend *not* changing the default value on nodes that run docker daemon with version < 1.9 or an `aufs` storage backend. Issue #10959 has more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --skip-headers -If true, avoid header prefixes in the log messages +If `true`, avoid header prefixes in the log messages --skip-log-headers -If true, avoid headers when opening log files +If `true`, avoid headers when opening log files ---stderrthreshold severity +--stderrthreshold severity     Default: 2 -logs at or above this threshold go to stderr (default 2) +logs at or above this threshold go to stderr. ---storage-driver-buffer-duration duration +--storage-driver-buffer-duration duration     Default: `1m0s` -Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---storage-driver-db string +--storage-driver-db string     Default: `cadvisor` -database name (default "cadvisor") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. 
Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Database name. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---storage-driver-host string +--storage-driver-host string     Default: `localhost:8086` -database host:port (default "localhost:8086") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Database `host:port`. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---storage-driver-password string +--storage-driver-password string     Default: `root` -database password (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Database password. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) --storage-driver-secure -use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) 
---storage-driver-table string +--storage-driver-table string     Default: `stats` -table name (default "stats") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Table name. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---storage-driver-user string +--storage-driver-user string     Default: `root` -database username (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) +Database username. (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.) ---streaming-connection-idle-timeout duration +--streaming-connection-idle-timeout duration     Default: `4h0m0s` -Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m' (default 4h0m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Maximum time a streaming connection can be idle before the connection is automatically closed. `0` indicates no timeout. Example: `5m`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) 
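Duration-valued flags such as `--streaming-connection-idle-timeout` above accept Go-style strings like `4h0m0s` or `5m`. A rough Python sketch of that format follows; it covers only `h`/`m`/`s` units (Go's real parser also accepts `ns`, `us`, and `ms`), and the helper name `parse_go_duration` is invented for illustration.

```python
import re

def parse_go_duration(s: str) -> float:
    """Parse a Go-style duration like '4h0m0s' or '5m' into seconds.
    Minimal sketch: only hour/minute/second units are handled."""
    units = {"h": 3600.0, "m": 60.0, "s": 1.0}
    matches = re.findall(r"(\d+(?:\.\d+)?)([hms])", s)
    # Reject input with leftover characters the pattern did not consume.
    if not matches or "".join(n + u for n, u in matches) != s:
        raise ValueError(f"invalid duration: {s!r}")
    return sum(float(n) * units[u] for n, u in matches)

print(parse_go_duration("4h0m0s"))  # → 14400.0
print(parse_go_duration("5m"))      # → 300.0
```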
---sync-frequency duration +--sync-frequency duration     Default: `1m0s` -Max period between synchronizing running containers and config (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Max period between synchronizing running containers and config. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---system-cgroups / +--system-cgroups string -Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under /. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under `/`. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---system-reserved mapStringString +--system-reserved mapStringString     Default: \ -A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +A set of `<resource name>=<resource quantity>` (e.g. `cpu=200m,memory=500Mi,ephemeral-storage=1Gi`) pairs that describe resources reserved for non-kubernetes components. Currently only `cpu` and `memory` are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---system-reserved-cgroup string +--system-reserved-cgroup string     Default: `''` -Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via '--system-reserved' flag. Ex. '/system-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via the `--system-reserved` flag. Ex. `/system-reserved`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
+File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If `--tls-cert-file` and `--tls-private-key-file` are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to `--cert-dir`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --tls-cipher-suites stringSlice -Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+Preferred values: +TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
+Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --tls-min-version string -Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Minimum TLS version supported. Possible values: `VersionTLS10`, `VersionTLS11`, `VersionTLS12`, `VersionTLS13` (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) --tls-private-key-file string -File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +File containing x509 private key matching `--tls-cert-file`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---topology-manager-policy string +--topology-manager-policy string     Default: `none` -Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Topology Manager policy to use. Possible values: `none`, `best-effort`, `restricted`, `single-numa-node`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) -v, --v Level -number for the log level verbosity +Number for the log level verbosity @@ -1248,21 +1350,21 @@ kubelet [flags] --vmodule moduleSpec -comma-separated list of pattern=N settings for file-filtered logging +Comma-separated list of `pattern=N` settings for file-filtered logging ---volume-plugin-dir string +--volume-plugin-dir string     Default: `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` -<Warning: Alpha feature> The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") +The full path of the directory in which to search for additional third party volume plugins. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) ---volume-stats-agg-period duration +--volume-stats-agg-period duration     Default: `1m0s` -Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) +Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to `0`. 
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.) diff --git a/content/en/docs/reference/glossary/cloud-provider.md b/content/en/docs/reference/glossary/cloud-provider.md index 84b4d6fcac..c9f3f11fb5 100755 --- a/content/en/docs/reference/glossary/cloud-provider.md +++ b/content/en/docs/reference/glossary/cloud-provider.md @@ -2,7 +2,6 @@ title: Cloud Provider id: cloud-provider date: 2018-04-12 -full_link: /docs/concepts/cluster-administration/cloud-providers short_description: > An organization that offers a cloud computing platform. diff --git a/content/en/docs/reference/glossary/platform-developer.md b/content/en/docs/reference/glossary/platform-developer.md index ed961c27f2..8489aea94e 100755 --- a/content/en/docs/reference/glossary/platform-developer.md +++ b/content/en/docs/reference/glossary/platform-developer.md @@ -14,8 +14,8 @@ tags: -A platform developer may, for example, use [Custom Resources](/docs/concepts/extend-Kubernetes/api-extension/custom-resources/) or -[Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-Kubernetes/api-extension/apiserver-aggregation/) +A platform developer may, for example, use [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) or +[Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) to add functionality to their instance of Kubernetes, specifically for their application. Some Platform Developers are also {{< glossary_tooltip text="contributors" term_id="contributor" >}} and develop extensions which are contributed to the Kubernetes community. 
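Several kubelet flags in the reference above (`--qos-reserved`, `--system-reserved`, `--kube-reserved`) share the `mapStringString` type: comma-separated `key=value` pairs such as `cpu=200m,memory=500Mi`. An illustrative Python sketch of that parsing, with an invented helper name and no claim to match the kubelet's actual implementation:

```python
def parse_map_string_string(spec: str) -> dict:
    """Parse a kubelet mapStringString value such as
    'cpu=200m,memory=500Mi' into an ordered dict of strings."""
    result = {}
    for pair in spec.split(","):
        if not pair:
            continue  # tolerate a trailing comma
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(f"expected <key>=<value>, got {pair!r}")
        result[key.strip()] = value.strip()
    return result

print(parse_map_string_string("cpu=200m,memory=500Mi,ephemeral-storage=1Gi"))
# → {'cpu': '200m', 'memory': '500Mi', 'ephemeral-storage': '1Gi'}
```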
diff --git a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md index af0ba73b90..5afdebd557 100644 --- a/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md +++ b/content/en/docs/reference/kubectl/docker-cli-to-kubectl.md @@ -42,6 +42,7 @@ kubectl create deployment --image=nginx nginx-app # add env to nginx-app kubectl set env deployment/nginx-app DOMAIN=cluster ``` +``` deployment.apps/nginx-app created ``` @@ -52,7 +53,6 @@ kubectl set env deployment/nginx-app DOMAIN=cluster ``` deployment.apps/nginx-app env updated ``` -deployment.apps/nginx-app env updated {{< note >}} `kubectl` commands print the type and name of the resource created or mutated, which can then be used in subsequent commands. You can expose a new Service after a Deployment is created. diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index a9177da9f5..20f92e0926 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -127,14 +127,15 @@ To learn more about command operations, see the [kubectl](/docs/reference/kubect The following table includes a list of all the supported resource types and their abbreviated aliases. -(This output can be retrieved from `kubectl api-resources`, and was accurate as of Kubernetes 1.13.3.) +(This output can be retrieved from `kubectl api-resources`, and was accurate as of Kubernetes 1.19.1.) 
-| Resource Name | Short Names | API Group | Namespaced | Resource Kind | +| NAME | SHORTNAMES | APIGROUP | NAMESPACED | KIND | |---|---|---|---|---| -| `bindings` | | | true | Binding| +| `bindings` | | | true | Binding | | `componentstatuses` | `cs` | | false | ComponentStatus | | `configmaps` | `cm` | | true | ConfigMap | | `endpoints` | `ep` | | true | Endpoints | +| `events` | `ev` | | true | Event | | `limitranges` | `limits` | | true | LimitRange | | `namespaces` | `ns` | | false | Namespace | | `nodes` | `no` | | false | Node | @@ -142,14 +143,14 @@ The following table includes a list of all the supported resource types and thei | `persistentvolumes` | `pv` | | false | PersistentVolume | | `pods` | `po` | | true | Pod | | `podtemplates` | | | true | PodTemplate | -| `replicationcontrollers` | `rc` | | true| ReplicationController | +| `replicationcontrollers` | `rc` | | true | ReplicationController | | `resourcequotas` | `quota` | | true | ResourceQuota | | `secrets` | | | true | Secret | | `serviceaccounts` | `sa` | | true | ServiceAccount | | `services` | `svc` | | true | Service | | `mutatingwebhookconfigurations` | | admissionregistration.k8s.io | false | MutatingWebhookConfiguration | | `validatingwebhookconfigurations` | | admissionregistration.k8s.io | false | ValidatingWebhookConfiguration | -| `customresourcedefinitions` | `crd`, `crds` | apiextensions.k8s.io | false | CustomResourceDefinition | +| `customresourcedefinitions` | `crd,crds` | apiextensions.k8s.io | false | CustomResourceDefinition | | `apiservices` | | apiregistration.k8s.io | false | APIService | | `controllerrevisions` | | apps | true | ControllerRevision | | `daemonsets` | `ds` | apps | true | DaemonSet | @@ -166,9 +167,15 @@ The following table includes a list of all the supported resource types and thei | `jobs` | | batch | true | Job | | `certificatesigningrequests` | `csr` | certificates.k8s.io | false | CertificateSigningRequest | | `leases` | | coordination.k8s.io | true | 
Lease | +| `endpointslices` | | discovery.k8s.io | true | EndpointSlice | | `events` | `ev` | events.k8s.io | true | Event | | `ingresses` | `ing` | extensions | true | Ingress | +| `flowschemas` | | flowcontrol.apiserver.k8s.io | false | FlowSchema | +| `prioritylevelconfigurations` | | flowcontrol.apiserver.k8s.io | false | PriorityLevelConfiguration | +| `ingressclasses` | | networking.k8s.io | false | IngressClass | +| `ingresses` | `ing` | networking.k8s.io | true | Ingress | | `networkpolicies` | `netpol` | networking.k8s.io | true | NetworkPolicy | +| `runtimeclasses` | | node.k8s.io | false | RuntimeClass | | `poddisruptionbudgets` | `pdb` | policy | true | PodDisruptionBudget | | `podsecuritypolicies` | `psp` | policy | false | PodSecurityPolicy | | `clusterrolebindings` | | rbac.authorization.k8s.io | false | ClusterRoleBinding | @@ -178,7 +185,7 @@ The following table includes a list of all the supported resource types and thei | `priorityclasses` | `pc` | scheduling.k8s.io | false | PriorityClass | | `csidrivers` | | storage.k8s.io | false | CSIDriver | | `csinodes` | | storage.k8s.io | false | CSINode | -| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass | +| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass | | `volumeattachments` | | storage.k8s.io | false | VolumeAttachment | ## Output options diff --git a/content/en/docs/reference/scheduling/policies.md b/content/en/docs/reference/scheduling/policies.md index 9601b22ec4..946150322d 100644 --- a/content/en/docs/reference/scheduling/policies.md +++ b/content/en/docs/reference/scheduling/policies.md @@ -32,7 +32,7 @@ The following *predicates* implement filtering: - `PodFitsResources`: Checks if the Node has free resources (eg, CPU and Memory) to meet the requirement of the Pod. 
-- `PodMatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}} +- `MatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}} matches the Node's {{< glossary_tooltip text="label(s)" term_id="label" >}}. - `NoVolumeZoneConflict`: Evaluate if the {{< glossary_tooltip text="Volumes" term_id="volume" >}} diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index a937d18598..350dd41539 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -30,9 +30,12 @@ Kubernetes generally leverages standard RESTful terminology to describe the API All resource types are either scoped by the cluster (`/apis/GROUP/VERSION/*`) or to a namespace (`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`). A namespace-scoped resource type will be deleted when its namespace is deleted and access to that resource type is controlled by authorization checks on the namespace scope. The following paths are used to retrieve collections and resources: * Cluster-scoped resources: + * `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of resources of the resource type * `GET /apis/GROUP/VERSION/RESOURCETYPE/NAME` - return the resource with NAME under the resource type + * Namespace-scoped resources: + * `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of all instances of the resource type across all namespaces * `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE` - return collection of all instances of the resource type in NAMESPACE * `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME` - return the instance of the resource type with NAME in NAMESPACE @@ -57,33 +60,39 @@ For example: 1. List all of the pods in a given namespace. 
- GET /api/v1/namespaces/test/pods - --- - 200 OK - Content-Type: application/json - { - "kind": "PodList", - "apiVersion": "v1", - "metadata": {"resourceVersion":"10245"}, - "items": [...] - } + ```console + GET /api/v1/namespaces/test/pods + --- + 200 OK + Content-Type: application/json + + { + "kind": "PodList", + "apiVersion": "v1", + "metadata": {"resourceVersion":"10245"}, + "items": [...] + } + ``` 2. Starting from resource version 10245, receive notifications of any creates, deletes, or updates as individual JSON objects. - GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245 - --- - 200 OK - Transfer-Encoding: chunked - Content-Type: application/json - { - "type": "ADDED", - "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...} - } - { - "type": "MODIFIED", - "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "11020", ...}, ...} - } - ... + ``` + GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245 + --- + 200 OK + Transfer-Encoding: chunked + Content-Type: application/json + + { + "type": "ADDED", + "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...} + } + { + "type": "MODIFIED", + "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "11020", ...}, ...} + } + ... + ``` A given Kubernetes server will only preserve a historical list of changes for a limited time. Clusters using etcd3 preserve changes in the last 5 minutes by default. When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code `410 Gone`, clearing their local cache, performing a list operation, and starting the watch from the `resourceVersion` returned by that new list operation. Most client libraries offer some form of standard tool for this logic. 
(In Go this is called a `Reflector` and is located in the `k8s.io/client-go/cache` package.) @@ -91,24 +100,28 @@ A given Kubernetes server will only preserve a historical list of changes for a To mitigate the impact of short history window, we introduced a concept of `bookmark` watch event. It is a special kind of event to mark that all changes up to a given `resourceVersion` the client is requesting have already been sent. Object returned in that event is of the type requested by the request, but only `resourceVersion` field is set, e.g.: - GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true - --- - 200 OK - Transfer-Encoding: chunked - Content-Type: application/json - { - "type": "ADDED", - "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...} - } - ... - { - "type": "BOOKMARK", - "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "12746"} } - } +```console +GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true +--- +200 OK +Transfer-Encoding: chunked +Content-Type: application/json + +{ + "type": "ADDED", + "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...} +} +... +{ + "type": "BOOKMARK", + "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "12746"} } +} +``` `Bookmark` events can be requested by `allowWatchBookmarks=true` option in watch requests, but clients shouldn't assume bookmarks are returned at any specific interval, nor may they assume the server will send any `bookmark` event. ## Retrieving large results sets in chunks + {{< feature-state for_k8s_version="v1.9" state="beta" >}} On large clusters, retrieving the collection of some resource types may result in very large responses that can impact the server and client. For instance, a cluster may have tens of thousands of pods, each of which is 1-2kb of encoded JSON. 
Retrieving all pods across all namespaces may result in a very large response (10-20MB) and consume a large amount of server resources. Starting in Kubernetes 1.9 the server supports the ability to break a single large collection request into many smaller chunks while preserving the consistency of the total request. Each chunk can be returned sequentially which reduces both the total size of the request and allows user-oriented clients to display results incrementally to improve responsiveness. @@ -121,54 +134,63 @@ For example, if there are 1,253 pods on the cluster and the client wants to rece 1. List all of the pods on a cluster, retrieving up to 500 pods each time. - GET /api/v1/pods?limit=500 - --- - 200 OK - Content-Type: application/json - { - "kind": "PodList", - "apiVersion": "v1", - "metadata": { - "resourceVersion":"10245", - "continue": "ENCODED_CONTINUE_TOKEN", - ... - }, - "items": [...] // returns pods 1-500 - } + ```console + GET /api/v1/pods?limit=500 + --- + 200 OK + Content-Type: application/json + + { + "kind": "PodList", + "apiVersion": "v1", + "metadata": { + "resourceVersion":"10245", + "continue": "ENCODED_CONTINUE_TOKEN", + ... + }, + "items": [...] // returns pods 1-500 + } + ``` 2. Continue the previous call, retrieving the next set of 500 pods. - GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN - --- - 200 OK - Content-Type: application/json - { - "kind": "PodList", - "apiVersion": "v1", - "metadata": { - "resourceVersion":"10245", - "continue": "ENCODED_CONTINUE_TOKEN_2", - ... - }, - "items": [...] // returns pods 501-1000 - } + ```console + GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN + --- + 200 OK + Content-Type: application/json + + { + "kind": "PodList", + "apiVersion": "v1", + "metadata": { + "resourceVersion":"10245", + "continue": "ENCODED_CONTINUE_TOKEN_2", + ... + }, + "items": [...] // returns pods 501-1000 + } + ``` 3. Continue the previous call, retrieving the last 253 pods. 
- GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2 - --- - 200 OK - Content-Type: application/json - { - "kind": "PodList", - "apiVersion": "v1", - "metadata": { - "resourceVersion":"10245", - "continue": "", // continue token is empty because we have reached the end of the list - ... - }, - "items": [...] // returns pods 1001-1253 - } + ```console + GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2 + --- + 200 OK + Content-Type: application/json + + { + "kind": "PodList", + "apiVersion": "v1", + "metadata": { + "resourceVersion":"10245", + "continue": "", // continue token is empty because we have reached the end of the list + ... + }, + "items": [...] // returns pods 1001-1253 + } + ``` Note that the `resourceVersion` of the list remains constant across each request, indicating the server is showing us a consistent snapshot of the pods. Pods that are created, updated, or deleted after version `10245` would not be shown unless the user makes a list request without the `continue` token. This allows clients to break large requests into smaller chunks and then perform a watch operation on the full set without missing any updates. @@ -180,52 +202,56 @@ A few limitations of that approach include non-trivial logic when dealing with c In order to avoid potential limitations as described above, clients may request the Table representation of objects, delegating specific details of printing to the server. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header containing a value of `application/json;as=Table;g=meta.k8s.io;v=v1beta1` with a `GET` call will request that the server return objects in the Table content type. -For example: +For example, list all of the pods on a cluster in the Table format. -1. List all of the pods on a cluster in the Table format. 
+```console +GET /api/v1/pods +Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1 +--- +200 OK +Content-Type: application/json - GET /api/v1/pods - Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1 - --- - 200 OK - Content-Type: application/json - { - "kind": "Table", - "apiVersion": "meta.k8s.io/v1beta1", - ... - "columnDefinitions": [ - ... - ] - } +{ + "kind": "Table", + "apiVersion": "meta.k8s.io/v1beta1", + ... + "columnDefinitions": [ + ... + ] +} +``` For API resource types that do not have a custom Table definition on the server, a default Table response is returned by the server, consisting of the resource's `name` and `creationTimestamp` fields. - GET /apis/crd.example.com/v1alpha1/namespaces/default/resources - --- - 200 OK - Content-Type: application/json - ... +```console +GET /apis/crd.example.com/v1alpha1/namespaces/default/resources +--- +200 OK +Content-Type: application/json +... + +{ + "kind": "Table", + "apiVersion": "meta.k8s.io/v1beta1", + ... + "columnDefinitions": [ { - "kind": "Table", - "apiVersion": "meta.k8s.io/v1beta1", + "name": "Name", + "type": "string", + ... + }, + { + "name": "Created At", + "type": "date", ... - "columnDefinitions": [ - { - "name": "Name", - "type": "string", - ... - }, - { - "name": "Created At", - "type": "date", - ... - } - ] } + ] +} +``` Table responses are available beginning in version 1.10 of the kube-apiserver. As such, not all API resource types will support a Table response, specifically when using a client against older clusters. Clients that must work against all resource types, or can potentially deal with older clusters, should specify multiple content types in their `Accept` header to support fallback to non-Tabular JSON: -``` +```console Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1, application/json ``` @@ -240,42 +266,47 @@ For example: 1. List all of the pods on a cluster in Protobuf format. 
- GET /api/v1/pods - Accept: application/vnd.kubernetes.protobuf - --- - 200 OK - Content-Type: application/vnd.kubernetes.protobuf - ... binary encoded PodList object + ```console + GET /api/v1/pods + Accept: application/vnd.kubernetes.protobuf + --- + 200 OK + Content-Type: application/vnd.kubernetes.protobuf + + ... binary encoded PodList object + ``` 2. Create a pod by sending Protobuf encoded data to the server, but request a response in JSON. - POST /api/v1/namespaces/test/pods - Content-Type: application/vnd.kubernetes.protobuf - Accept: application/json - ... binary encoded Pod object - --- - 200 OK - Content-Type: application/json - { - "kind": "Pod", - "apiVersion": "v1", - ... - } + ```console + POST /api/v1/namespaces/test/pods + Content-Type: application/vnd.kubernetes.protobuf + Accept: application/json + ... binary encoded Pod object + --- + 200 OK + Content-Type: application/json + + { + "kind": "Pod", + "apiVersion": "v1", + ... + } + ``` Not all API resource types will support Protobuf, specifically those defined via Custom Resource Definitions or those that are API extensions. Clients that must work against all resource types should specify multiple content types in their `Accept` header to support fallback to JSON: -``` +```console Accept: application/vnd.kubernetes.protobuf, application/json ``` - ### Protobuf encoding Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON), and then is followed by a Protobuf encoded wrapper message, which describes the encoding and type of the underlying object and then contains the object. The wrapper format is: -``` +```console A four byte magic number prefix: Bytes 0-3: "k8s\x00" [0x6b, 0x38, 0x73, 0x00] @@ -362,9 +393,11 @@ Dry-run is triggered by setting the `dryRun` query parameter. 
This parameter is For example: - POST /api/v1/namespaces/test/pods?dryRun=All - Content-Type: application/json - Accept: application/json +```console +POST /api/v1/namespaces/test/pods?dryRun=All +Content-Type: application/json +Accept: application/json +``` The response would look the same as for non-dry-run request, but the values of some generated fields may differ. @@ -400,7 +433,7 @@ Some values of an object are typically generated before the object is persisted. {{< feature-state for_k8s_version="v1.16" state="beta" >}} -{{< note >}}Starting from Kubernetes v1.18, if you have Server Side Apply enabled then the control plane tracks managed fields for all newly created objects.{{< /note >}} +Starting from Kubernetes v1.18, if you have Server Side Apply enabled then the control plane tracks managed fields for all newly created objects. ### Introduction @@ -484,10 +517,12 @@ data: The above object contains a single manager in `metadata.managedFields`. The manager consists of basic information about the managing entity itself, like -operation type, api version, and the fields managed by it. +operation type, API version, and the fields managed by it. -{{< note >}} This field is managed by the apiserver and should not be changed by -the user. {{< /note >}} +{{< note >}} +This field is managed by the API server and should not be changed by +the user. +{{< /note >}} Nevertheless it is possible to change `metadata.managedFields` through an `Update` operation. Doing so is highly discouraged, but might be a reasonable @@ -537,7 +572,7 @@ a little differently. {{< note >}} Whether you are submitting JSON data or YAML data, use `application/apply-patch+yaml` as the -Content-Type header value. +`Content-Type` header value. All JSON documents are valid YAML. {{< /note >}} @@ -580,8 +615,8 @@ In this example, a second operation was run as an `Update` by the manager called `kube-controller-manager`. 
The update changed a value in the data field which caused the field's management to change to the `kube-controller-manager`. -{{< note >}}If this update would have been an `Apply` operation, the operation -would have failed due to conflicting ownership.{{< /note >}} +If this update would have been an `Apply` operation, the operation +would have failed due to conflicting ownership. ### Merge strategy @@ -603,8 +638,8 @@ merging, see A number of markers were added in Kubernetes 1.16 and 1.17, to allow API developers to describe the merge strategy supported by lists, maps, and structs. These markers can be applied to objects of the respective type, -in Go files or in the [OpenAPI schema definition of the -CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io): +in Go files or in the +[OpenAPI schema definition of the CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io): | Golang marker | OpenAPI extension | Accepted values | Description | Introduced in | |---|---|---|---|---| @@ -641,8 +676,8 @@ might not be able to resolve or act on these conflicts. ### Transferring Ownership -In addition to the concurrency controls provided by [conflict -resolution](#conflicts), Server Side Apply provides ways to perform coordinated +In addition to the concurrency controls provided by [conflict resolution](#conflicts), +Server Side Apply provides ways to perform coordinated field ownership transfers from users to controllers. This is best explained by example. 
Let's look at how to safely transfer @@ -657,12 +692,12 @@ Say a user has defined deployment with `replicas` set to the desired value: And the user has created the deployment using server side apply like so: ```shell -kubectl apply -f application/ssa/nginx-deployment.yaml --server-side +kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side ``` Then later, HPA is enabled for the deployment, e.g.: -``` +```shell kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10 ``` @@ -691,7 +726,7 @@ First, the user defines a new configuration containing only the `replicas` field The user applies that configuration using the field manager name `handover-to-hpa`: ```shell -kubectl apply -f application/ssa/nginx-deployment-replicas-only.yaml --server-side --field-manager=handover-to-hpa --validate=false +kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replicas-only.yaml --server-side --field-manager=handover-to-hpa --validate=false ``` If the apply results in a conflict with the HPA controller, then do nothing. The @@ -737,7 +772,7 @@ case. Client-side apply users who manage a resource with `kubectl apply` can start using server-side apply with the following flag. -``` +```shell kubectl apply --server-side [--dry-run=server] ``` @@ -745,7 +780,9 @@ By default, field management of the object transfers from client-side apply to kubectl server-side apply without encountering conflicts. {{< caution >}} -Keep the `last-applied-configuration` annotation up to date. The annotation infers client-side apply's managed fields. Any fields not managed by client-side apply raise conflicts. +Keep the `last-applied-configuration` annotation up to date. +The annotation infers client-side apply's managed fields. +Any fields not managed by client-side apply raise conflicts. 
For example, if you used `kubectl scale` to update the replicas field after client-side apply, then this field is not owned by client-side apply and creates conflicts on `kubectl apply --server-side`. @@ -756,7 +793,7 @@ an exception, you can opt-out of this behavior by specifying a different, non-default field manager, as seen in the following example. The default field manager for kubectl server-side apply is `kubectl`. -``` +```shell kubectl apply --server-side --field-manager=my-manager [--dry-run=server] ``` @@ -774,7 +811,7 @@ an exception, you can opt-out of this behavior by specifying a different, non-default field manager, as seen in the following example. The default field manager for kubectl server-side apply is `kubectl`. -``` +```shell kubectl apply --server-side --field-manager=my-manager [--dry-run=server] ``` @@ -793,14 +830,14 @@ using `MergePatch`, `StrategicMergePatch`, `JSONPatch` or `Update`, so every non-apply operation. This can be done by overwriting the managedFields field with an empty entry. Two examples are: -```json +```console PATCH /api/v1/namespaces/default/configmaps/example-cm Content-Type: application/merge-patch+json Accept: application/json Data: {"metadata":{"managedFields": [{}]}} ``` -```json +```console PATCH /api/v1/namespaces/default/configmaps/example-cm Content-Type: application/json-patch+json Accept: application/json @@ -818,10 +855,12 @@ the managedFields, this will result in the managedFields being reset first and the other changes being processed afterwards. As a result the applier takes ownership of any fields updated in the same request. -{{< caution >}} Server Side Apply does not correctly track ownership on +{{< caution >}} +Server Side Apply does not correctly track ownership on sub-resources that don't receive the resource object type. If you are using Server Side Apply with such a sub-resource, the changed fields -won't be tracked. {{< /caution >}} +won't be tracked. 
+{{< /caution >}} ### Disabling the feature @@ -859,12 +898,13 @@ For get and list, the semantics of resource version are: **List:** -v1.19+ API servers and newer support the `resourceVersionMatch` parameter, which +v1.19+ API servers support the `resourceVersionMatch` parameter, which determines how resourceVersion is applied to list calls. It is highly recommended that `resourceVersionMatch` be set for list calls where `resourceVersion` is set. If `resourceVersion` is unset, `resourceVersionMatch` is not allowed. For backward compatibility, clients must tolerate the server ignoring `resourceVersionMatch`: + - When using `resourceVersionMatch=NotOlderThan` and limit is set, clients must handle HTTP 410 "Gone" responses. For example, the client might retry with a newer `resourceVersion` or fall back to `resourceVersion=""`. @@ -878,27 +918,29 @@ a known `resourceVersion` is preferable since it can achieve better performance of your cluster than leaving `resourceVersion` and `resourceVersionMatch` unset, which requires quorum read to be served. 
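The `resourceVersion` / `resourceVersionMatch` rules described above — in particular that `resourceVersionMatch` is not allowed when `resourceVersion` is unset — can be sketched as a small client-side helper. This is a hypothetical illustration (function name and structure are not from any real client library); per the backward-compatibility note, callers must still tolerate servers that ignore `resourceVersionMatch`, and handle HTTP 410 "Gone" when combining `NotOlderThan` with `limit`.

```python
def build_list_query(resource_version=None, resource_version_match=None, limit=None):
    """Build query parameters for a list call, enforcing the documented rules.

    Hypothetical helper for illustration only.
    """
    if resource_version_match is not None and resource_version is None:
        # "If resourceVersion is unset, resourceVersionMatch is not allowed."
        raise ValueError("resourceVersionMatch requires resourceVersion to be set")
    params = {}
    if resource_version is not None:
        params["resourceVersion"] = resource_version
    if resource_version_match is not None:
        params["resourceVersionMatch"] = resource_version_match
    if limit is not None:
        params["limit"] = limit
    return params
```

For example, `build_list_query("10245", "NotOlderThan", 500)` yields the three query parameters, while passing only `resource_version_match` raises an error before any request is sent.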
+{{< table caption="resourceVersionMatch and paging parameters for list" >}}
+
| resourceVersionMatch param | paging params | resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" |
|---------------------------------------|-------------------------------|-----------------------|-------------------------------------------|----------------------------------------|
| resourceVersionMatch unset | limit unset | Most Recent | Any | Not older than |
-| resourceVersionMatch unset | limit="n", continue unset | Most Recent | Any | Exact |
-| resourceVersionMatch unset | limit="n", continue="" | Continue Token, Exact | Invalid, treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` |
-| resourceVersionMatch=Exact[^1] | limit unset | Invalid | Invalid | Exact |
-| resourceVersionMatch=Exact[^1] | limit="n", continue unset | Invalid | Invalid | Exact |
-| resourceVersionMatch=NotOlderThan[^1] | limit unset | Invalid | Any | Not older than |
-| resourceVersionMatch=NotOlderThan[^1] | limit="n", continue unset | Invalid | Any | Not older than |
+| resourceVersionMatch unset | limit=\<n\>, continue unset | Most Recent | Any | Exact |
+| resourceVersionMatch unset | limit=\<n\>, continue=\<token\> | Continue Token, Exact | Invalid, treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` |
+| resourceVersionMatch=Exact [1] | limit unset | Invalid | Invalid | Exact |
+| resourceVersionMatch=Exact [1] | limit=\<n\>, continue unset | Invalid | Invalid | Exact |
+| resourceVersionMatch=NotOlderThan [1] | limit unset | Invalid | Any | Not older than |
+| resourceVersionMatch=NotOlderThan [1] | limit=\<n\>, continue unset | Invalid | Any | Not older than |
-[^1]: If the server does not honor the `resourceVersionMatch` parameter, it is treated as if it is unset.
-| paging | resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" | -|---------------------------------|-----------------------|------------------------------------------------|----------------------------------------| -| limit unset | Most Recent | Any | Not older than | -| limit="n", continue unset | Most Recent | Any | Exact | -| limit="n", continue="\" | Continue Token, Exact | Invalid, but treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` | +{{< /table >}} + +**Footnotes:** + +[1] If the server does not honor the `resourceVersionMatch` parameter, it is treated as if it is unset. The meaning of the get and list semantics are: - **Most Recent:** Return data at the most recent resource version. The returned data must be consistent (i.e. served from etcd via a quorum read). + - **Any:** Return data at any resource version. The newest available resource version is preferred, but strong consistency is not required; data at any resource version may be served. It is possible for the request to return data at a much older resource version that the client has previously @@ -911,6 +953,7 @@ The meaning of the get and list semantics are: but does not make any guarantee about the resourceVersion in the ObjectMeta of the list items since ObjectMeta.resourceVersion tracks when an object was last updated, not how up-to-date the object is when served. + - **Exact:** Return data at the exact resource version provided. If the provided resourceVersion is unavailable, the server responds with HTTP 410 "Gone". 
For list requests to servers that honor the resourceVersionMatch parameter, this guarantees that resourceVersion in the ListMeta is the same as @@ -925,10 +968,14 @@ For watch, the semantics of resource version are: **Watch:** +{{< table caption="resourceVersion for watch" >}} + | resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" | |-------------------------------------|----------------------------|----------------------------------------| | Get State and Start at Most Recent | Get State and Start at Any | Start at Exact | +{{< /table >}} + The meaning of the watch semantics are: - **Get State and Start at Most Recent:** Start a watch at the most recent resource version, which must be consistent (i.e. served from etcd via a quorum read). To establish initial state, the watch begins with synthetic "Added" events of all resources instances that exist at the starting resource version. All following watch events are for all changes that occurred after the resource version the watch started at. diff --git a/content/en/docs/reference/using-api/api-overview.md b/content/en/docs/reference/using-api/api-overview.md index 529c6fc799..3b39d72e18 100644 --- a/content/en/docs/reference/using-api/api-overview.md +++ b/content/en/docs/reference/using-api/api-overview.md @@ -13,62 +13,60 @@ card: --- -This page provides an overview of the Kubernetes API. +This page provides an overview of the Kubernetes API. -The REST API is the fundamental fabric of Kubernetes. All operations and communications between components, and external user commands are REST API calls that the API Server handles. Consequently, everything in the Kubernetes + +The REST API is the fundamental fabric of Kubernetes. All operations and +communications between components, and external user commands are REST API +calls that the API Server handles. 
Consequently, everything in the Kubernetes platform is treated as an API object and has a corresponding entry in the [API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/). -Most operations can be performed through the -[kubectl](/docs/reference/kubectl/overview/) command-line interface or other -command-line tools, such as [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), which in turn use -the API. However, you can also access the API directly using REST calls. - -Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/) -if you are writing an application using the Kubernetes API. - ## API versioning -To eliminate fields or restructure resource representations, Kubernetes supports -multiple API versions, each at a different API path. For example: `/api/v1` or -`/apis/rbac.authorization.k8s.io/v1alpha1`. +The JSON and Protobuf serialization schemas follow the same guidelines for +schema changes. The following descriptions cover both formats. -The version is set at the API level rather than at the resource or field level to: +The API versioning and software versioning are indirectly related. +The [API and release versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) +describes the relationship between API versioning and software versioning. -- Ensure that the API presents a clear and consistent view of system resources and behavior. -- Enable control access to end-of-life and/or experimental APIs. - -The JSON and Protobuf serialization schemas follow the same guidelines for schema changes. The following descriptions cover both formats. - -{{< note >}} -The API versioning and software versioning are indirectly related. The [API and release -versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) describes the relationship between API versioning and software versioning. 
-{{< /note >}} - -Different API versions indicate different levels of stability and support. You can find more information about the criteria for each level in the [API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). +Different API versions indicate different levels of stability and support. You +can find more information about the criteria for each level in the +[API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). Here's a summary of each level: - Alpha: - The version names contain `alpha` (for example, `v1alpha1`). - - The software may contain bugs. Enabling a feature may expose bugs. A feature may be disabled by default. + - The software may contain bugs. Enabling a feature may expose bugs. A + feature may be disabled by default. - The support for a feature may be dropped at any time without notice. - The API may change in incompatible ways in a later software release without notice. - - The software is recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support. + - The software is recommended for use only in short-lived testing clusters, + due to increased risk of bugs and lack of long-term support. - Beta: - The version names contain `beta` (for example, `v2beta3`). - - The software is well tested. Enabling a feature is considered safe. Features are enabled by default. + - The software is well tested. Enabling a feature is considered safe. + Features are enabled by default. - The support for a feature will not be dropped, though the details may change. - - The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens, migration instructions are provided. This may require deleting, editing, and re-creating - API objects. 
The editing process may require some thought. This may require downtime for applications that rely on the feature. - - The software is recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have multiple clusters which can be upgraded independently, you may be able to relax this restriction. + + - The schema and/or semantics of objects may change in incompatible ways in + a subsequent beta or stable release. When this happens, migration + instructions are provided. Schema changes may require deleting, editing, and + re-creating API objects. The editing process may not be straightforward. + The migration may require downtime for applications that rely on the feature. + - The software is not recommended for production use. Subsequent releases + may introduce incompatible changes. If you have multiple clusters which + can be upgraded independently, you may be able to relax this restriction. - {{< note >}} -Try the beta features and provide feedback. After the features exit beta, it may not be practical to make more changes. - {{< /note >}} + {{< note >}} + Try the beta features and provide feedback. After the features exit beta, it + may not be practical to make more changes. + {{< /note >}} - Stable: - The version name is `vX` where `X` is an integer. @@ -76,33 +74,44 @@ Try the beta features and provide feedback. After the features exit beta, it may ## API groups -[*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md) make it easier to extend the Kubernetes API. The API group is specified in a REST path and in the `apiVersion` field of a serialized object. +[API groups](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md) +make it easier to extend the Kubernetes API. +The API group is specified in a REST path and in the `apiVersion` field of a +serialized object. 
Currently, there are several API groups in use: -* The *core* (also called *legacy*) group, which is at REST path `/api/v1` and is not specified as part of the `apiVersion` field, for example, `apiVersion: v1`. -* The named groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION` - (for example, `apiVersion: batch/v1`). You can find the full list of supported API groups in [Kubernetes API reference](/docs/reference/). +* The *core* (also called *legacy*) group is found at REST path `/api/v1`. + The core group is not specified as part of the `apiVersion` field, for + example, `apiVersion: v1`. +* The named groups are at REST path `/apis/$GROUP_NAME/$VERSION` and use + `apiVersion: $GROUP_NAME/$VERSION` (for example, `apiVersion: batch/v1`). + You can find the full list of supported API groups in + [Kubernetes API reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/). -The two paths that support extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) are: +## Enabling or disabling API groups {#enabling-or-disabling} - - [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) - for basic CRUD needs. - - [aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md) for a full set of Kubernetes API semantics to implement their own apiserver. - +Certain resources and API groups are enabled by default. You can enable or +disable them by setting `--runtime-config` on the API server. The +`--runtime-config` flag accepts comma separated `key=value` pairs +describing the runtime configuration of the API server. For example: -## Enabling or disabling API groups - -Certain resources and API groups are enabled by default. You can enable or disable them by setting `--runtime-config` -on the apiserver. `--runtime-config` accepts comma separated values. 
For example: - - - to disable batch/v1, set `--runtime-config=batch/v1=false` - - to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1` - -The flag accepts comma separated set of key=value pairs describing runtime configuration of the apiserver. + - to disable `batch/v1`, set `--runtime-config=batch/v1=false` + - to enable `batch/v2alpha1`, set `--runtime-config=batch/v2alpha1` {{< note >}} -When you enable or disable groups or resources, you need to restart the apiserver and controller-manager -to pick up the `--runtime-config` changes. +When you enable or disable groups or resources, you need to restart the API +server and controller manager to pick up the `--runtime-config` changes. {{< /note >}} +## Persistence + +Kubernetes stores its serialized state in terms of the API resources by writing them into +{{< glossary_tooltip term_id="etcd" >}}. + +## {{% heading "whatsnext" %}} + +- Learn more about [API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions) +- Read the design documentation for + [aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md) + diff --git a/content/en/docs/reference/using-api/client-libraries.md b/content/en/docs/reference/using-api/client-libraries.md index c4d7e5ea24..9e4ee1f67e 100644 --- a/content/en/docs/reference/using-api/client-libraries.md +++ b/content/en/docs/reference/using-api/client-libraries.md @@ -40,6 +40,8 @@ The following client libraries are officially maintained by ## Community-maintained client libraries +{{% thirdparty-content %}} + The following Kubernetes API client libraries are provided and maintained by their authors, not the Kubernetes team. 
diff --git a/content/en/docs/reference/using-api/deprecation-policy.md b/content/en/docs/reference/using-api/deprecation-policy.md index ba322d42b5..d93bbc03c9 100644 --- a/content/en/docs/reference/using-api/deprecation-policy.md +++ b/content/en/docs/reference/using-api/deprecation-policy.md @@ -303,12 +303,12 @@ Starting in Kubernetes v1.19, making an API request to a deprecated REST API end 2. Adds a `"k8s.io/deprecated":"true"` annotation to the [audit event](/docs/tasks/debug-application-cluster/audit/) recorded for the request. 3. Sets an `apiserver_requested_deprecated_apis` gauge metric to `1` in the `kube-apiserver` process. The metric has labels for `group`, `version`, `resource`, `subresource` that can be joined - to the `apiserver_request_total` metric, and a `removed_version` label that indicates the + to the `apiserver_request_total` metric, and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served. The following Prometheus query returns information about requests made to deprecated APIs which will be removed in v1.22: - + ```promql - apiserver_requested_deprecated_apis{removed_version="1.22"} * on(group,version,resource,subresource) group_right() apiserver_request_total + apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource) group_right() apiserver_request_total ``` ### Fields of REST resources diff --git a/content/en/docs/reference/using-api/health-checks.md b/content/en/docs/reference/using-api/health-checks.md index a7be3b267f..e198b3d2d9 100644 --- a/content/en/docs/reference/using-api/health-checks.md +++ b/content/en/docs/reference/using-api/health-checks.md @@ -94,7 +94,7 @@ The output show that the `etcd` check is excluded: {{< feature-state state="alpha" >}} Each individual health check exposes an HTTP endpoint and can be checked individually. 
-The schema for the individual health checks is `/livez/` where `livez` and `readyz` and be used to indicate if you want to check thee liveness or the readiness of the API server. +The schema for the individual health checks is `/livez/<healthcheck>` where `livez` and `readyz` can be used to indicate if you want to check the liveness or the readiness of the API server. The `<healthcheck>` path can be discovered using the `verbose` flag from above and takes the path between `[+]` and `ok`. These individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system: diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md index 9e27b40943..a065462baf 100644 --- a/content/en/docs/setup/best-practices/certificates.md +++ b/content/en/docs/setup/best-practices/certificates.md @@ -124,8 +124,8 @@ Same considerations apply for the service account key pair: | private key path | public key path | command | argument | |------------------------------|-----------------------------|-------------------------|--------------------------------------| -| sa.key | | kube-controller-manager | service-account-private | -| | sa.pub | kube-apiserver | service-account-key | +| sa.key | | kube-controller-manager | --service-account-private-key-file | +| | sa.pub | kube-apiserver | --service-account-key-file | ## Configure certificates for user accounts diff --git a/content/en/docs/setup/best-practices/node-conformance.md b/content/en/docs/setup/best-practices/node-conformance.md index 5e75959b18..af02a7a903 100644 --- a/content/en/docs/setup/best-practices/node-conformance.md +++ b/content/en/docs/setup/best-practices/node-conformance.md @@ -13,12 +13,6 @@ verification and functionality test for a node. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the test is qualified to join a Kubernetes cluster. 
-## Limitations - -In Kubernetes version 1.5, node conformance test has the following limitations: - -* Node conformance test only supports Docker as the container runtime. - ## Node Prerequisite To run node conformance test, a node must satisfy the same prerequisites as a diff --git a/content/en/docs/setup/learning-environment/minikube.md b/content/en/docs/setup/learning-environment/minikube.md index d0b07b046a..0e3e26ea2a 100644 --- a/content/en/docs/setup/learning-environment/minikube.md +++ b/content/en/docs/setup/learning-environment/minikube.md @@ -452,11 +452,11 @@ Host folder sharing is not implemented in the KVM driver yet. | Driver | OS | HostFolder | VM | | --- | --- | --- | --- | -| VirtualBox | Linux | /home | /hosthome | -| VirtualBox | macOS | /Users | /Users | -| VirtualBox | Windows | C://Users | /c/Users | -| VMware Fusion | macOS | /Users | /mnt/hgfs/Users | -| Xhyve | macOS | /Users | /Users | +| VirtualBox | Linux | `/home` | `/hosthome` | +| VirtualBox | macOS | `/Users` | `/Users` | +| VirtualBox | Windows | `C://Users` | `/c/Users` | +| VMware Fusion | macOS | `/Users` | `/mnt/hgfs/Users` | +| Xhyve | macOS | `/Users` | `/Users` | ## Private Container Registries diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index 87275861df..bb01d93d36 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -471,7 +471,12 @@ Start-Service containerd ### systemd -To use the `systemd` cgroup driver, set `plugins.cri.systemd_cgroup = true` in `/etc/containerd/config.toml`. 
+To use the `systemd` cgroup driver, set the following in `/etc/containerd/config.toml`: + +``` +[plugins.cri] +systemd_cgroup = true +``` When using kubeadm, manually configure the [cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node) diff --git a/content/en/docs/setup/production-environment/on-premises-vm/_index.md b/content/en/docs/setup/production-environment/on-premises-vm/_index.md deleted file mode 100644 index 42c4ec3899..0000000000 --- a/content/en/docs/setup/production-environment/on-premises-vm/_index.md +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: On-Premises VMs -weight: 40 ---- diff --git a/content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md b/content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md deleted file mode 100644 index c440f14b31..0000000000 --- a/content/en/docs/setup/production-environment/on-premises-vm/cloudstack.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -reviewers: -- thockin -title: Cloudstack -content_type: concept ---- - - - -[CloudStack](https://cloudstack.apache.org/) is a software to build public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the Cloud being used and what images are made available. CloudStack also has a vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt based recipes. - -[CoreOS](https://coreos.com) templates for CloudStack are built [nightly](https://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](https://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions. 
- -This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook, creates an ssh key pair, creates a security group and associated rules and finally starts coreOS instances configured via cloud-init. - - - -## Prerequisites - -```shell -sudo apt-get install -y python-pip libssl-dev -sudo pip install cs -sudo pip install sshpubkeys -sudo apt-get install software-properties-common -sudo apt-add-repository ppa:ansible/ansible -sudo apt-get update -sudo apt-get install ansible -``` - -On CloudStack server you also have to install libselinux-python : - -```shell -yum install libselinux-python -``` - -[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API. - -Set your CloudStack endpoint, API keys and HTTP method used. - -You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`. - -Or create a `~/.cloudstack.ini` file: - -```none -[cloudstack] -endpoint = -key = -secret = -method = post -``` - -We need to use the http POST method to pass the _large_ userdata to the coreOS instances. - -### Clone the playbook - -```shell -git clone https://github.com/apachecloudstack/k8s -cd kubernetes-cloudstack -``` - -### Create a Kubernetes cluster - -You simply need to run the playbook. - -```shell -ansible-playbook k8s.yml -``` - -Some variables can be edited in the `k8s.yml` file. - -```none -vars: - ssh_key: k8s - k8s_num_nodes: 2 - k8s_security_group_name: k8s - k8s_node_prefix: k8s2 - k8s_template: - k8s_instance_type: -``` - -This will start a Kubernetes master node and a number of compute nodes (by default 2). -The `instance_type` and `template` are specific, edit them to specify your CloudStack cloud specific template and instance type (i.e. service offering). - -Check the tasks and templates in `roles/k8s` if you want to modify anything. 
- -Once the playbook as finished, it will print out the IP of the Kubernetes master: - -```none -TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ******** -``` - -SSH to it using the key that was created and using the _core_ user. - -```shell -ssh -i ~/.ssh/id_rsa_k8s core@ -``` - -And you can list the machines in your cluster: - -```shell -fleetctl list-machines -``` - -```none -MACHINE IP METADATA -a017c422... role=node -ad13bf84... role=master -e9af8293... role=node -``` - -## Support Level - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/production-environment/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/)) - diff --git a/content/en/docs/setup/production-environment/on-premises-vm/dcos.md b/content/en/docs/setup/production-environment/on-premises-vm/dcos.md deleted file mode 100644 index e4b310902c..0000000000 --- a/content/en/docs/setup/production-environment/on-premises-vm/dcos.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -reviewers: -- smugcloud -title: Kubernetes on DC/OS -content_type: concept ---- - - - -Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https://mesosphere.com/product/), offering: - -* Pure upstream Kubernetes -* Single-click cluster provisioning -* Highly available and secure by default -* Kubernetes running alongside fast-data platforms (e.g. Akka, Cassandra, Kafka, Spark) - - - - - -## Official Mesosphere Guide - -The canonical source of getting started on DC/OS is located in the [quickstart repo](https://github.com/mesosphere/dcos-kubernetes-quickstart). 
- - diff --git a/content/en/docs/setup/production-environment/on-premises-vm/ovirt.md b/content/en/docs/setup/production-environment/on-premises-vm/ovirt.md deleted file mode 100644 index 1d57b6f7eb..0000000000 --- a/content/en/docs/setup/production-environment/on-premises-vm/ovirt.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -reviewers: -- caesarxuchao -- erictune -title: oVirt -content_type: concept ---- - - - -oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center. - - - - - -## oVirt Cloud Provider Deployment - -The oVirt cloud provider allows to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster. -At the moment there are no community-supported or pre-loaded VM images including Kubernetes but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well. - -It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip address and hostname to be reported to ovirt-engine and ultimately to Kubernetes. - -Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider. 
- -[import]: https://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html -[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines -[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates -[install the ovirt-guest-agent]: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/ - -## Using the oVirt Cloud Provider - -The oVirt Cloud Provider requires access to the oVirt REST-API to gather the proper information, the required credential should be specified in the `ovirt-cloud.conf` file: - -```none -[connection] -uri = https://localhost:8443/ovirt-engine/api -username = admin@internal -password = admin -``` - -In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes: - -```none -[filters] -# Search query used to find nodes -vms = tag=kubernetes -``` - -In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes. - -The `ovirt-cloud.conf` file then must be specified in kube-controller-manager: - -```shell -kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ... -``` - -## oVirt Cloud Provider Screencast - -This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster. - -[![Screencast](https://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](https://www.youtube.com/watch?v=JyyST4ZKne8) - -## Support Level - - -IaaS Provider | Config. 
Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ---------------------------- -oVirt | | | | [docs](/docs/setup/production-environment/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z)) - - - diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md index e91e9f7a60..b8c3236b73 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md @@ -36,7 +36,7 @@ LoadBalancer, or with dynamic PersistentVolumes. For both methods you need this infrastructure: - Three machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for - the masters + the control-plane nodes - Three machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers - Full network connectivity between all machines in the cluster (public or @@ -224,7 +224,7 @@ in the kubeadm config file. scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}": ``` - - Replace the value of `CONTROL_PLANE` with the `user@host` of the first control plane machine. + - Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane node. ### Set up the first control plane node @@ -372,4 +372,3 @@ SSH is required if you want to control all nodes from a single machine. 
# Quote this line if you are using external etcd mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key ``` - diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index 82ceef4696..2e347b0ef6 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -407,4 +407,12 @@ be advised that this is modifying a design principle of the Linux distribution. This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in the case of running an external etcd. This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`. -This issue is fixed as of version 1.19. \ No newline at end of file +This issue is fixed as of version 1.19. + +## `kubeadm reset` unmounts `/var/lib/kubelet` + +If `/var/lib/kubelet` is being mounted, performing a `kubeadm reset` will effectively unmount it. + +To work around the issue, re-mount the `/var/lib/kubelet` directory after performing the `kubeadm reset` operation. + +This is a regression introduced in kubeadm 1.15. The issue is fixed in 1.20. 
diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index 02d99d926a..ac635101a0 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -13,12 +13,13 @@ Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [i * a highly available cluster * composable attributes * support for most popular Linux distributions - * Container Linux by CoreOS + * Ubuntu 16.04, 18.04, 20.04 + * CentOS/RHEL/Oracle Linux 7, 8 * Debian Buster, Jessie, Stretch, Wheezy - * Ubuntu 16.04, 18.04 - * CentOS/RHEL/Oracle Linux 7 - * Fedora 28 + * Fedora 31, 32 + * Fedora CoreOS * openSUSE Leap 15 + * Flatcar Container Linux by Kinvolk * continuous integration tests To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to @@ -32,8 +33,8 @@ To choose a tool which best fits your use case, read [this comparison](https://g Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements): -* **Ansible v2.7.8 and python-netaddr is installed on the machine that will run Ansible commands** -* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks** +* **Ansible v2.9 and python-netaddr are installed on the machine that will run Ansible commands** +* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks** * The target servers must have access to the Internet in order to pull docker images. 
Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)) * The target servers are configured to allow **IPv4 forwarding** * **Your ssh key must be copied** to all the servers part of your inventory diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md index 7bfcf03ebd..47cec90f26 100644 --- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -55,7 +55,7 @@ You can access Dashboard using the kubectl command-line tool by running the foll kubectl proxy ``` -Kubectl will make Dashboard available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. +Kubectl will make Dashboard available at [http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/). The UI can _only_ be accessed from the machine where the command is executed. See `kubectl proxy --help` for more options. diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index 5c94dceffc..d09a29c671 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -171,7 +171,10 @@ The Go client can use the same [kubeconfig file](/docs/concepts/configuration/or as the kubectl CLI does to locate and authenticate to the API server. 
See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go): ```golang +package main + import ( + "context" "fmt" "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" @@ -185,7 +188,7 @@ func main() { // creates the clientset clientset, _ := kubernetes.NewForConfig(config) // access the API to list pods - pods, _ := clientset.CoreV1().Pods("").List(v1.ListOptions{}) + pods, _ := clientset.CoreV1().Pods("").List(context.TODO(), v1.ListOptions{}) fmt.Printf("There are %d pods in the cluster\n", len(pods.Items)) } ``` diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md index 61add5312a..fed4a77f9d 100644 --- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md @@ -9,6 +9,7 @@ content_type: task This document helps you get started using the Kubernetes [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) to declare network policies that govern how pods communicate with each other. +{{% thirdparty-content %}} ## {{% heading "prerequisites" %}} @@ -23,11 +24,6 @@ Make sure you've configured a network provider with network policy support. Ther * [Romana](/docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/) * [Weave Net](/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/) -{{< note >}} -The above list is sorted alphabetically by product name, not by recommendation or preference. This example is valid for a Kubernetes cluster using any of these providers. 
-{{< /note >}} - - ## Create an `nginx` deployment and expose it via a service diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index 83989b1f58..948893d3ea 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -72,7 +72,7 @@ policies using an example application. ## Deploying Cilium for Production Use For detailed instructions around deploying Cilium for production, see: -[Cilium Kubernetes Installation Guide](https://docs.cilium.io/en/stable/kubernetes/intro/) +[Cilium Kubernetes Installation Guide](https://docs.cilium.io/en/stable/concepts/kubernetes/intro/) This documentation includes detailed requirements, instructions and example production DaemonSet files. diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md index b6b562620a..988eafda70 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy.md @@ -19,10 +19,10 @@ You need to have a Kubernetes cluster. Follow the ## Install the Weave Net addon -Follow the [Integrating Kubernetes via the Addon](https://www.weave.works/docs/net/latest/kube-addon/) guide. +Follow the [Integrating Kubernetes via the Addon](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) guide. 
The Weave Net addon for Kubernetes comes with a -[Network Policy Controller](https://www.weave.works/docs/net/latest/kube-addon/#npc) +[Network Policy Controller](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#npc) that automatically monitors Kubernetes for any NetworkPolicy annotations on all namespaces and configures `iptables` rules to allow or block traffic as directed by the policies. diff --git a/content/en/docs/tasks/administer-cluster/out-of-resource.md b/content/en/docs/tasks/administer-cluster/out-of-resource.md index 10ef986b8c..b989ceac62 100644 --- a/content/en/docs/tasks/administer-cluster/out-of-resource.md +++ b/content/en/docs/tasks/administer-cluster/out-of-resource.md @@ -18,28 +18,19 @@ nodes become unstable. -## Eviction Policy - -The `kubelet` can proactively monitor for and prevent total starvation of a -compute resource. In those cases, the `kubelet` can reclaim the starved -resource by proactively failing one or more Pods. When the `kubelet` fails -a Pod, it terminates all of its containers and transitions its `PodPhase` to `Failed`. -If the evicted Pod is managed by a Deployment, the Deployment will create another Pod -to be scheduled by Kubernetes. - ### Eviction Signals The `kubelet` supports eviction decisions based on the signals described in the following table. The value of each signal is described in the Description column, which is based on the `kubelet` summary API. 
-| Eviction Signal | Description | -|----------------------------|-----------------------------------------------------------------------| -| `memory.available` | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` | -| `nodefs.available` | `nodefs.available` := `node.stats.fs.available` | -| `nodefs.inodesFree` | `nodefs.inodesFree` := `node.stats.fs.inodesFree` | -| `imagefs.available` | `imagefs.available` := `node.stats.runtime.imagefs.available` | -| `imagefs.inodesFree` | `imagefs.inodesFree` := `node.stats.runtime.imagefs.inodesFree` | +| Eviction Signal | Description | +|----------------------|---------------------------------------------------------------------------------------| +| `memory.available` | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` | +| `nodefs.available` | `nodefs.available` := `node.stats.fs.available` | +| `nodefs.inodesFree` | `nodefs.inodesFree` := `node.stats.fs.inodesFree` | +| `imagefs.available` | `imagefs.available` := `node.stats.runtime.imagefs.available` | +| `imagefs.inodesFree` | `imagefs.inodesFree` := `node.stats.runtime.imagefs.inodesFree` | Each of the above signals supports either a literal or percentage based value. The percentage based value is calculated relative to the total capacity @@ -65,7 +56,7 @@ memory is reclaimable under pressure. `imagefs` is optional. `kubelet` auto-discovers these filesystems using cAdvisor. `kubelet` does not care about any other filesystems. Any other types of configurations are not currently supported by the kubelet. For example, it is -*not OK* to store volumes and logs in a dedicated `filesystem`. +_not OK_ to store volumes and logs in a dedicated `filesystem`. 
In future releases, the `kubelet` will deprecate the existing [garbage collection](/docs/concepts/cluster-administration/kubelet-garbage-collection/) @@ -83,9 +74,7 @@ where: * `eviction-signal` is an eviction signal token as defined in the previous table. * `operator` is the desired relational operator, such as `<` (less than). -* `quantity` is the eviction threshold quantity, such as `1Gi`. These tokens must -match the quantity representation used by Kubernetes. An eviction threshold can also -be expressed as a percentage using the `%` token. +* `quantity` is the eviction threshold quantity, such as `1Gi`. These tokens must match the quantity representation used by Kubernetes. An eviction threshold can also be expressed as a percentage using the `%` token. For example, if a node has `10Gi` of total memory and you want trigger eviction if the available memory falls below `1Gi`, you can define the eviction threshold as @@ -108,12 +97,9 @@ termination. To configure soft eviction thresholds, the following flags are supported: -* `eviction-soft` describes a set of eviction thresholds (e.g. `memory.available<1.5Gi`) that if met over a -corresponding grace period would trigger a Pod eviction. -* `eviction-soft-grace-period` describes a set of eviction grace periods (e.g. `memory.available=1m30s`) that -correspond to how long a soft eviction threshold must hold before triggering a Pod eviction. -* `eviction-max-pod-grace-period` describes the maximum allowed grace period (in seconds) to use when terminating -pods in response to a soft eviction threshold being met. +* `eviction-soft` describes a set of eviction thresholds (e.g. `memory.available<1.5Gi`) that if met over a corresponding grace period would trigger a Pod eviction. +* `eviction-soft-grace-period` describes a set of eviction grace periods (e.g. `memory.available=1m30s`) that correspond to how long a soft eviction threshold must hold before triggering a Pod eviction. 
+* `eviction-max-pod-grace-period` describes the maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. #### Hard Eviction Thresholds @@ -124,8 +110,7 @@ with no graceful termination. To configure hard eviction thresholds, the following flag is supported: -* `eviction-hard` describes a set of eviction thresholds (e.g. `memory.available<1Gi`) that if met -would trigger a Pod eviction. +* `eviction-hard` describes a set of eviction thresholds (e.g. `memory.available<1Gi`) that if met would trigger a Pod eviction. The `kubelet` has the following default hard eviction threshold: @@ -150,10 +135,10 @@ reflects the node is under pressure. The following node conditions are defined that correspond to the specified eviction signal. -| Node Condition | Eviction Signal | Description | -|-------------------------|-------------------------------|--------------------------------------------| -| `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold | -| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold | +| Node Condition | Eviction Signal | Description | +|-------------------|---------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------| +| `MemoryPressure` | `memory.available` | Available memory on the node has satisfied an eviction threshold | +| `DiskPressure` | `nodefs.available`, `nodefs.inodesFree`, `imagefs.available`, or `imagefs.inodesFree` | Available disk space and inodes on either the node's root filesystem or image filesystem has satisfied an eviction threshold | The `kubelet` continues to report node status updates at the 
frequency specified by `--node-status-update-frequency` which defaults to `10s`. @@ -168,8 +153,7 @@ as a consequence. To protect against this oscillation, the following flag is defined to control how long the `kubelet` must wait before transitioning out of a pressure condition. -* `eviction-pressure-transition-period` is the duration for which the `kubelet` has -to wait before transitioning out of an eviction pressure condition. +* `eviction-pressure-transition-period` is the duration for which the `kubelet` has to wait before transitioning out of an eviction pressure condition. The `kubelet` would ensure that it has not observed an eviction threshold being met for the specified pressure condition for the period specified before toggling the @@ -207,17 +191,8 @@ then by [Priority](/docs/concepts/configuration/pod-priority-preemption/), and t As a result, `kubelet` ranks and evicts Pods in the following order: -* `BestEffort` or `Burstable` Pods whose usage of a starved resource exceeds its request. -Such pods are ranked by Priority, and then usage above request. -* `Guaranteed` pods and `Burstable` pods whose usage is beneath requests are evicted last. -`Guaranteed` Pods are guaranteed only when requests and limits are specified for all -the containers and they are equal. Such pods are guaranteed to never be evicted because -of another Pod's resource consumption. If a system daemon (such as `kubelet`, `docker`, -and `journald`) is consuming more resources than were reserved via `system-reserved` or -`kube-reserved` allocations, and the node only has `Guaranteed` or `Burstable` Pods using -less than requests remaining, then the node must choose to evict such a Pod in order to -preserve node stability and to limit the impact of the unexpected consumption to other Pods. -In this case, it will choose to evict pods of Lowest Priority first. +* `BestEffort` or `Burstable` Pods whose usage of a starved resource exceeds its request. 
Such pods are ranked by Priority, and then usage above request. +* `Guaranteed` pods and `Burstable` pods whose usage is beneath requests are evicted last. `Guaranteed` Pods are guaranteed only when requests and limits are specified for all the containers and they are equal. Such pods are guaranteed to never be evicted because of another Pod's resource consumption. If a system daemon (such as `kubelet`, `docker`, and `journald`) is consuming more resources than were reserved via `system-reserved` or `kube-reserved` allocations, and the node only has `Guaranteed` or `Burstable` Pods using less than requests remaining, then the node must choose to evict such a Pod in order to preserve node stability and to limit the impact of the unexpected consumption to other Pods. In this case, it will choose to evict pods of Lowest Priority first. If necessary, `kubelet` evicts Pods one at a time to reclaim disk when `DiskPressure` is encountered. If the `kubelet` is responding to `inode` starvation, it reclaims @@ -228,6 +203,7 @@ that consumes the largest amount of disk and kills those first. #### With `imagefs` If `nodefs` is triggering evictions, `kubelet` sorts Pods based on the usage on `nodefs` + - local volumes + logs of all its containers. If `imagefs` is triggering evictions, `kubelet` sorts Pods based on the writable layer usage of all its containers. @@ -235,13 +211,13 @@ If `imagefs` is triggering evictions, `kubelet` sorts Pods based on the writable #### Without `imagefs` If `nodefs` is triggering evictions, `kubelet` sorts Pods based on their total disk usage + - local volumes + logs & writable layer of all its containers. ### Minimum eviction reclaim In certain scenarios, eviction of Pods could result in reclamation of small amount of resources. This can result in -`kubelet` hitting eviction thresholds in repeated successions. In addition to that, eviction of resources like `disk`, - is time consuming. 
+`kubelet` hitting eviction thresholds in repeated successions. In addition to that, eviction of resources like `disk`, is time consuming. To mitigate these issues, `kubelet` can have a per-resource `minimum-reclaim`. Whenever `kubelet` observes resource pressure, `kubelet` attempts to reclaim at least `minimum-reclaim` amount of resource below @@ -268,10 +244,10 @@ The node reports a condition when a compute resource is under pressure. The scheduler views that condition as a signal to dissuade placing additional pods on the node. -| Node Condition | Scheduler Behavior | -| ---------------- | ------------------------------------------------ | -| `MemoryPressure` | No new `BestEffort` Pods are scheduled to the node. | -| `DiskPressure` | No new Pods are scheduled to the node. | +| Node Condition | Scheduler Behavior | +| ------------------| ----------------------------------------------------| +| `MemoryPressure` | No new `BestEffort` Pods are scheduled to the node. | +| `DiskPressure` | No new Pods are scheduled to the node. | ## Node OOM Behavior @@ -280,11 +256,11 @@ the node depends on the [oom_killer](https://lwn.net/Articles/391222/) to respon The `kubelet` sets a `oom_score_adj` value for each container based on the quality of service for the Pod. 
-| Quality of Service | oom_score_adj | -|----------------------------|-----------------------------------------------------------------------| -| `Guaranteed` | -998 | -| `BestEffort` | 1000 | -| `Burstable` | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) | +| Quality of Service | oom_score_adj | +|--------------------|-----------------------------------------------------------------------------------| +| `Guaranteed` | -998 | +| `BestEffort` | 1000 | +| `Burstable` | min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) | If the `kubelet` is unable to reclaim memory prior to a node experiencing system OOM, the `oom_killer` calculates an `oom_score` based on the percentage of memory it's using on the node, and then add the `oom_score_adj` to get an @@ -325,10 +301,7 @@ and trigger eviction assuming those Pods use less than their configured request. ### DaemonSet -As `Priority` is a key factor in the eviction strategy, if you do not want -pods belonging to a `DaemonSet` to be evicted, specify a sufficiently high priorityClass -in the pod spec template. If you want pods belonging to a `DaemonSet` to run only if -there are sufficient resources, specify a lower or default priorityClass. +As `Priority` is a key factor in the eviction strategy, if you do not want pods belonging to a `DaemonSet` to be evicted, specify a sufficiently high priorityClass in the pod spec template. If you want pods belonging to a `DaemonSet` to run only if there are sufficient resources, specify a lower or default priorityClass. ## Deprecation of existing feature flags to reclaim disk @@ -338,15 +311,15 @@ there are sufficient resources, specify a lower or default priorityClass. As disk based eviction matures, the following `kubelet` flags are marked for deprecation in favor of the simpler configuration supported around eviction. 
-| Existing Flag | New Flag | -| ------------- | -------- | -| `--image-gc-high-threshold` | `--eviction-hard` or `eviction-soft` | -| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | -| `--maximum-dead-containers` | deprecated | -| `--maximum-dead-containers-per-container` | deprecated | -| `--minimum-container-ttl-duration` | deprecated | -| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | -| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | +| Existing Flag | New Flag | +| ------------------------------------------ | ----------------------------------------| +| `--image-gc-high-threshold` | `--eviction-hard` or `eviction-soft` | +| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | +| `--maximum-dead-containers` | deprecated | +| `--maximum-dead-containers-per-container` | deprecated | +| `--minimum-container-ttl-duration` | deprecated | +| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | +| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | ## Known issues diff --git a/content/en/docs/tasks/configmap-secret/_index.md b/content/en/docs/tasks/configmap-secret/_index.md new file mode 100755 index 0000000000..d80692c967 --- /dev/null +++ b/content/en/docs/tasks/configmap-secret/_index.md @@ -0,0 +1,6 @@ +--- +title: "Managing Secrets" +weight: 28 +description: Managing confidential settings data using Secrets. +--- + diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md new file mode 100644 index 0000000000..8ed9730415 --- /dev/null +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -0,0 +1,198 @@ +--- +title: Managing Secret using Configuration File +content_type: task +weight: 20 +description: Creating Secret objects using resource configuration file. 
+--- + + + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## Create the Config file + +You can create a Secret in a file first, in JSON or YAML format, and then +create that object. The +[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) +resource contains two maps: `data` and `stringData`. +The `data` field is used to store arbitrary data, encoded using base64. The +`stringData` field is provided for convenience, and it allows you to provide +Secret data as unencoded strings. +The keys of `data` and `stringData` must consist of alphanumeric characters, +`-`, `_` or `.`. + +For example, to store two strings in a Secret using the `data` field, convert +the strings to base64 as follows: + +```shell +echo -n 'admin' | base64 +``` + +The output is similar to: + +``` +YWRtaW4= +``` + +```shell +echo -n '1f2d1e2e67df' | base64 +``` + +The output is similar to: + +``` +MWYyZDFlMmU2N2Rm +``` + +Write a Secret config file that looks like this: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm +``` + +Note that the name of a Secret object must be a valid +[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). + +{{< note >}} +The serialized JSON and YAML values of Secret data are encoded as base64 +strings. Newlines are not valid within these strings and must be omitted. When +using the `base64` utility on Darwin/macOS, users should avoid using the `-b` +option to split long lines. Conversely, Linux users *should* add the option +`-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` +option is not available. +{{< /note >}} + +For certain scenarios, you may wish to use the `stringData` field instead. 
This +field allows you to put a non-base64 encoded string directly into the Secret, +and the string will be encoded for you when the Secret is created or updated. + +A practical example of this might be where you are deploying an application +that uses a Secret to store a configuration file, and you want to populate +parts of that configuration file during your deployment process. + +For example, if your application uses the following configuration file: + +```yaml +apiUrl: "https://my.api.com/api/v1" +username: "" +password: "" +``` + +You could store this in a Secret using the following definition: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: mysecret +type: Opaque +stringData: + config.yaml: | + apiUrl: "https://my.api.com/api/v1" + username: + password: +``` + +## Create the Secret object + +Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): + +```shell +kubectl apply -f ./secret.yaml +``` + +The output is similar to: + +``` +secret/mysecret created +``` + +## Check the Secret + +The `stringData` field is a write-only convenience field. It is never output when +retrieving Secrets. For example, if you run the following command: + +```shell +kubectl get secret mysecret -o yaml +``` + +The output is similar to: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + creationTimestamp: 2018-11-15T20:40:59Z + name: mysecret + namespace: default + resourceVersion: "7225" + uid: c280ad2e-e916-11e8-98f2-025000000001 +type: Opaque +data: + config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19 +``` + +The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by +default. This is to protect the `Secret` from being exposed accidentally to an onlooker, +or from being stored in a terminal log. 
+To check the actual content of the encoded data, please refer to
+[decoding secret](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret).
+
+If a field, such as `username`, is specified in both `data` and `stringData`,
+the value from `stringData` is used. For example, the following Secret definition:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mysecret
+type: Opaque
+data:
+  username: YWRtaW4=
+stringData:
+  username: administrator
+```
+
+Results in the following Secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  creationTimestamp: 2018-11-15T20:46:46Z
+  name: mysecret
+  namespace: default
+  resourceVersion: "7579"
+  uid: 91460ecb-e917-11e8-98f2-025000000001
+type: Opaque
+data:
+  username: YWRtaW5pc3RyYXRvcg==
+```
+
+Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
+
+## Clean Up
+
+To delete the Secret you have just created:
+
+```shell
+kubectl delete secret mysecret
+```
+
+## {{% heading "whatsnext" %}}
+
+- Read more about the [Secret concept](/docs/concepts/configuration/secret/)
+- Learn how to [manage Secret with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
+- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
+
diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
new file mode 100644
index 0000000000..f370e24281
--- /dev/null
+++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md
@@ -0,0 +1,157 @@
+---
+title: Managing Secret using kubectl
+content_type: task
+weight: 10
+description: Creating Secret objects using kubectl command line.
+---
+
+
+## {{% heading "prerequisites" %}}
+
+{{< include "task-tutorial-prereqs.md" >}}
+
+
+## Create a Secret
+
+A `Secret` can contain user credentials required by Pods to access a database.
+For example, a database connection string consists of a username and password.
+You can store the username in a file `./username.txt` and the password in a
+file `./password.txt` on your local machine.
+
+```shell
+echo -n 'admin' > ./username.txt
+echo -n '1f2d1e2e67df' > ./password.txt
+```
+
+The `-n` flag in the above two commands ensures that the generated files will
+not contain an extra newline character at the end of the text. This is
+important because when `kubectl` reads a file and encodes the content into a
+base64 string, the extra newline character gets encoded too.
+
+The `kubectl create secret` command packages these files into a Secret and creates
+the object on the API server.
+
+```shell
+kubectl create secret generic db-user-pass \
+  --from-file=./username.txt \
+  --from-file=./password.txt
+```
+
+The output is similar to:
+
+```
+secret/db-user-pass created
+```
+
+The default key name is the filename. You may optionally set the key name using
+`--from-file=[key=]source`. For example:
+
+```shell
+kubectl create secret generic db-user-pass \
+  --from-file=username=./username.txt \
+  --from-file=password=./password.txt
+```
+
+You do not need to escape special characters in passwords from files
+(`--from-file`).
+
+You can also provide Secret data using the `--from-literal=<key>=<source>` flag.
+This flag can be specified more than once to provide multiple key-value pairs.
+Note that special characters such as `$`, `\`, `*`, `=`, and `!` will be
+interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing))
+and require escaping.
+In most shells, the easiest way to escape the password is to surround it with
+single quotes (`'`).
For example, if your actual password is `S!B\*d$zDsb=`, +you should execute the command this way: + +```shell +kubectl create secret generic dev-db-secret \ + --from-literal=username=devuser \ + --from-literal=password='S!B\*d$zDsb=' +``` + +## Verify the Secret + +You can check that the secret was created: + +```shell +kubectl get secrets +``` + +The output is similar to: + +``` +NAME TYPE DATA AGE +db-user-pass Opaque 2 51s +``` + +You can view a description of the `Secret`: + +```shell +kubectl describe secrets/db-user-pass +``` + +The output is similar to: + +``` +Name: db-user-pass +Namespace: default +Labels: +Annotations: + +Type: Opaque + +Data +==== +password.txt: 12 bytes +username.txt: 5 bytes +``` + +The commands `kubectl get` and `kubectl describe` avoid showing the contents +of a `Secret` by default. This is to protect the `Secret` from being exposed +accidentally to an onlooker, or from being stored in a terminal log. + +## Decoding the Secret {#decoding-secret} + +To view the contents of the Secret we just created, you can run the following +command: + +```shell +kubectl get secret db-user-pass -o jsonpath='{.data}' +``` + +The output is similar to: + +```json +{"password.txt":"MWYyZDFlMmU2N2Rm","username.txt":"YWRtaW4="} +``` + +Now you can decode the `password.txt` data: + +```shell +echo 'MWYyZDFlMmU2N2Rm' | base64 --decode +``` + +The output is similar to: + +``` +1f2d1e2e67df +``` + +## Clean Up + +To delete the Secret you have just created: + +```shell +kubectl delete secret db-user-pass +``` + + + +## {{% heading "whatsnext" %}} + +- Read more about the [Secret concept](/docs/concepts/configuration/secret/) +- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) +- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md 
b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md new file mode 100644 index 0000000000..d7b1f48a4a --- /dev/null +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kustomize.md @@ -0,0 +1,128 @@ +--- +title: Managing Secret using Kustomize +content_type: task +weight: 30 +description: Creating Secret objects using kustomization.yaml file. +--- + + + +Since Kubernetes v1.14, `kubectl` supports +[managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/). +Kustomize provides resource Generators to create Secrets and ConfigMaps. The +Kustomize generators should be specified in a `kustomization.yaml` file inside +a directory. After generating the Secret, you can create the Secret on the API +server with `kubectl apply`. + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## Create the Kustomization file + +You can generate a Secret by defining a `secretGenerator` in a +`kustomization.yaml` file that references other existing files. +For example, the following kustomization file references the +`./username.txt` and the `./password.txt` files: + +```yaml +secretGenerator: +- name: db-user-pass + files: + - username.txt + - password.txt +``` + +You can also define the `secretGenerator` in the `kustomization.yaml` +file by providing some literals. +For example, the following `kustomization.yaml` file contains two literals +for `username` and `password` respectively: + +```yaml +secretGenerator: +- name: db-user-pass + literals: + - username=admin + - password=1f2d1e2e67df +``` + +Note that in both cases, you don't need to base64 encode the values. + +## Create the Secret + +Apply the directory containing the `kustomization.yaml` to create the Secret. + +```shell +kubectl apply -k . 
+```
+
+The output is similar to:
+
+```
+secret/db-user-pass-96mffmfh4k created
+```
+
+Note that when a Secret is generated, the Secret name is created by hashing
+the Secret data and appending the hash value to the name. This ensures that
+a new Secret is generated each time the data is modified.
+
+## Check the Secret created
+
+You can check that the secret was created:
+
+```shell
+kubectl get secrets
+```
+
+The output is similar to:
+
+```
+NAME                      TYPE     DATA   AGE
+db-user-pass-96mffmfh4k   Opaque   2      51s
+```
+
+You can view a description of the secret:
+
+```shell
+kubectl describe secrets/db-user-pass-96mffmfh4k
+```
+
+The output is similar to:
+
+```
+Name:         db-user-pass
+Namespace:    default
+Labels:       <none>
+Annotations:  <none>
+
+Type:         Opaque
+
+Data
+====
+password.txt:  12 bytes
+username.txt:  5 bytes
+```
+
+The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by
+default. This is to protect the `Secret` from being exposed accidentally to an onlooker,
+or from being stored in a terminal log.
+To check the actual content of the encoded data, please refer to
+[decoding secret](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret).
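The decode step referenced above can also be sketched locally, without a cluster; the encoded value below is hypothetical, matching the `password.txt` example used on this page:

```shell
# A value as it might appear under .data in the generated Secret (hypothetical).
encoded="MWYyZDFlMmU2N2Rm"

# Secret data is plain base64, so decoding recovers the original string.
decoded=$(echo -n "$encoded" | base64 --decode)
echo "$decoded"   # 1f2d1e2e67df
```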
+ +## Clean Up + +To delete the Secret you have just created: + +```shell +kubectl delete secret db-user-pass-96mffmfh4k +``` + + +## {{% heading "whatsnext" %}} + +- Read more about the [Secret concept](/docs/concepts/configuration/secret/) +- Learn how to [manage Secret with the `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/) +- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/) + diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index f9333b01c5..3b51469e56 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -150,7 +150,7 @@ kubectl get ct -o yaml ``` You should see that it contains the custom `cronSpec` and `image` fields -from the yaml you used to create it: +from the YAML you used to create it: ```yaml apiVersion: v1 @@ -174,7 +174,7 @@ metadata: ## Delete a CustomResourceDefinition -When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint +When you delete a CustomResourceDefinition, the server will uninstall the RESTful API endpoint and delete all custom objects stored in it. ```shell @@ -190,11 +190,17 @@ If you later recreate the same CustomResourceDefinition, it will start out empty ## Specifying a structural schema -CustomResources store structured data in custom fiels (alongside the built-in fields `apiVersion`, `kind` and `metadata`, which the API server validates implicitly). With [OpenAPI v3.0 validation](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation) a schema can be specified, which is validated during creation and updates, compare below for details and limits of such a schema. 
+CustomResources store structured data in custom fields (alongside the built-in +fields `apiVersion`, `kind` and `metadata`, which the API server validates +implicitly). With [OpenAPI v3.0 validation](#validation) a schema can be +specified, which is validated during creation and updates, compare below for +details and limits of such a schema. -With `apiextensions.k8s.io/v1` the definition of a structural schema is mandatory for CustomResourceDefinitions (in the beta version of CustomResourceDefinition, structural schemas were optional). +With `apiextensions.k8s.io/v1` the definition of a structural schema is +mandatory for CustomResourceDefinitions. In the beta version of +CustomResourceDefinition, the structural schema was optional. -A structural schema is an [OpenAPI v3.0 validation schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation) which: +A structural schema is an [OpenAPI v3.0 validation schema](#validation) which: 1. specifies a non-empty type (via `type` in OpenAPI) for the root, for each specified field of an object node (via `properties` or `additionalProperties` in OpenAPI) and for each item in an array node (via `items` in OpenAPI), with the exception of: * a node with `x-kubernetes-int-or-string: true` @@ -274,7 +280,7 @@ is not a structural schema because of the following violations: * `bar` inside of `anyOf` is not specified outside (rule 2). * `bar`'s `type` is within `anyOf` (rule 3). * the description is set within `anyOf` (rule 3). -* `metadata.finalizer` might not be restricted (rule 4). +* `metadata.finalizers` might not be restricted (rule 4). In contrast, the following, corresponding schema is structural: ```yaml @@ -308,7 +314,7 @@ CustomResourceDefinitions store validated resource data in the cluster's persist {{< note >}} CRDs converted from `apiextensions.k8s.io/v1beta1` to `apiextensions.k8s.io/v1` might lack structural schemas, and `spec.preserveUnknownFields` might be `true`. 
-For migrated CustomResourceDefinitions where `spec.preserveUnknownFields` is set, pruning is _not_ enabled and you can store arbitrary data. For best compatibility, you should update customer resources to meet an OpenAPI schema, and you should set `spec.preserveUnknownFields` true for the CustomResourceDefinition itself.
+For migrated CustomResourceDefinitions where `spec.preserveUnknownFields` is set, pruning is _not_ enabled and you can store arbitrary data. For best compatibility, you should update your custom resources to meet an OpenAPI schema, and you should set `spec.preserveUnknownFields` to true for the CustomResourceDefinition itself.
 {{< /note >}}
 
 If you save the following YAML to `my-crontab.yaml`:
@@ -350,12 +356,12 @@ spec:
 Notice that the field `someRandomField` was pruned.
 
 This example turned off client-side validation to demonstrate the API server's behavior, by adding the `--validate=false` command line option.
-Because the [OpenAPI validation schemas are also published](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#publish-validation-schema-in-openapi-v2)
+Because the [OpenAPI validation schemas are also published](#publish-validation-schema-in-openapi-v2)
 to clients, `kubectl` also checks for unknown fields and rejects those objects well before they would be sent to the API server.
 
 #### Controlling pruning
 
-By default, all unspecified fields for a custom resource, across all versions, are pruned. It is possible though to opt-out of that for specifc sub-trees fof fields by adding `x-kubernetes-preserve-unknown-fields: true` in the [structural OpenAPI v3 validation schema](#specifying-a-structural-schema).
+By default, all unspecified fields for a custom resource, across all versions, are pruned. It is, however, possible to opt out of that for specific sub-trees of fields by adding `x-kubernetes-preserve-unknown-fields: true` in the [structural OpenAPI v3 validation schema](#specifying-a-structural-schema).
For example: ```yaml @@ -458,7 +464,7 @@ allOf: With one of those specification, both an integer and a string validate. -In [Validation Schema Publishing](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#publish-validation-schema-in-openapi-v2), +In [Validation Schema Publishing](#publish-validation-schema-in-openapi-v2), `x-kubernetes-int-or-string: true` is unfolded to one of the two patterns shown above. ### RawExtension @@ -520,7 +526,7 @@ of a resource is not possible while they exist. The first delete request on an object with finalizers sets a value for the `metadata.deletionTimestamp` field but does not delete it. Once this value is set, -entries in the `finalizer` list can only be removed. +entries in the `finalizers` list can only be removed. When the `metadata.deletionTimestamp` field is set, controllers watching the object execute any finalizers they handle, by polling update requests for that @@ -537,7 +543,9 @@ meaning all finalizers have been executed. ### Validation Custom resources are validated via -[OpenAPI v3 schemas](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject) and you can add additional validation using [admission webhooks](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook). +[OpenAPI v3 schemas](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schemaObject) +and you can add additional validation using +[admission webhooks](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook). Additionally, the following restrictions are applied to the schema: @@ -552,17 +560,18 @@ Additionally, the following restrictions are applied to the schema: - `writeOnly`, - `xml`, - `$ref`. -- The field `uniqueItems` cannot be set to _true_. -- The field `additionalProperties` cannot be set to _false_. +- The field `uniqueItems` cannot be set to `true`. 
+- The field `additionalProperties` cannot be set to `false`.
- The field `additionalProperties` is mutually exclusive with `properties`.

-These fields can only be set with specific features enabled:
+The `default` field can be set when the [Defaulting feature](#defaulting) is enabled,
+which is the case with `apiextensions.k8s.io/v1` CustomResourceDefinitions.
+Defaulting has been GA since 1.17 (beta since 1.16 with the `CustomResourceDefaulting`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+enabled, which is the case automatically for many clusters for beta features).

-You can also use [Validation Schema Defaulting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting) to apply default values.
-
-{{< note >}}
-Compare with [structural schemas](#specifying-a-structural-schema) for further restriction required for certain CustomResourceDefinition features.
-{{< /note >}}
+Refer to the [structural schemas](#specifying-a-structural-schema) section for other
+restrictions and CustomResourceDefinition features.

The schema is defined in the CustomResourceDefinition. In the following example, the
CustomResourceDefinition applies the following validations on the custom object:
@@ -756,7 +765,7 @@ Default values for `metadata` fields of `x-kubernetes-embedded-resources: true`

### Publish Validation Schema in OpenAPI v2

-CustomResourceDefinition [OpenAPI v3 validation schemas](#validation) which are [structural](#specifying-a-structural-schema) and [enable pruning](#preserving-unknown-fields) are published as part of the [OpenAPI v2 spec](/docs/concepts/overview/kubernetes-api/#openapi-and-swagger-definitions) from Kubernetes API server.
+CustomResourceDefinition [OpenAPI v3 validation schemas](#validation) which are [structural](#specifying-a-structural-schema) and [enable pruning](#field-pruning) are published as part of the [OpenAPI v2 spec](/docs/concepts/overview/kubernetes-api/#openapi-and-swagger-definitions) from the Kubernetes API server.

The [kubectl](/docs/reference/kubectl/overview) command-line tool consumes the published schema to perform client-side validation (`kubectl create` and `kubectl apply`), schema explanation (`kubectl explain`) on custom resources. The published schema can be consumed for other purposes as well, like client generation or documentation.

@@ -853,7 +862,7 @@ The `NAME` column is implicit and does not need to be defined in the CustomResou

#### Priority

-Each column includes a `priority` field for each column. Currently, the priority
+Each column includes a `priority` field. Currently, the priority
differentiates between columns shown in standard view or wide view (using the `-o wide` flag).

- Columns with priority `0` are shown in standard view.
@@ -866,7 +875,7 @@ A column's `type` field can be any of the following (compare [OpenAPI v3 data ty

- `integer` – non-floating-point numbers
- `number` – floating point numbers
- `string` – strings
-- `boolean` – true or false
+- `boolean` – `true` or `false`
- `date` – rendered differentially as time since this timestamp.

If the value inside a CustomResource does not match the type specified for the column,
@@ -906,42 +915,42 @@ When the status subresource is enabled, the `/status` subresource for the custom

- The `.metadata.generation` value is incremented for all changes, except for changes to `.metadata` or `.status`.
- Only the following constructs are allowed at the root of the CRD OpenAPI validation schema: - - Description - - Example - - ExclusiveMaximum - - ExclusiveMinimum - - ExternalDocs - - Format - - Items - - Maximum - - MaxItems - - MaxLength - - Minimum - - MinItems - - MinLength - - MultipleOf - - Pattern - - Properties - - Required - - Title - - Type - - UniqueItems + - `description` + - `example` + - `exclusiveMaximum` + - `exclusiveMinimum` + - `externalDocs` + - `format` + - `items` + - `maximum` + - `maxItems` + - `maxLength` + - `minimum` + - `minItems` + - `minLength` + - `multipleOf` + - `pattern` + - `properties` + - `required` + - `title` + - `type` + - `uniqueItems` #### Scale subresource When the scale subresource is enabled, the `/scale` subresource for the custom resource is exposed. The `autoscaling/v1.Scale` object is sent as the payload for `/scale`. -To enable the scale subresource, the following values are defined in the CustomResourceDefinition. +To enable the scale subresource, the following fields are defined in the CustomResourceDefinition. -- `SpecReplicasPath` defines the JSONPath inside of a custom resource that corresponds to `Scale.Spec.Replicas`. +- `specReplicasPath` defines the JSONPath inside of a custom resource that corresponds to `scale.spec.replicas`. - It is a required value. - Only JSONPaths under `.spec` and with the dot notation are allowed. - - If there is no value under the `SpecReplicasPath` in the custom resource, + - If there is no value under the `specReplicasPath` in the custom resource, the `/scale` subresource will return an error on GET. -- `StatusReplicasPath` defines the JSONPath inside of a custom resource that corresponds to `Scale.Status.Replicas`. +- `statusReplicasPath` defines the JSONPath inside of a custom resource that corresponds to `scale.status.replicas`. - It is a required value. - Only JSONPaths under `.status` and with the dot notation are allowed. 
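As a sketch only of how the two replica paths described in the hunk above are wired up (the `CronTab` group, kind, and field names here are illustrative assumptions, not taken from this diff), a `v1` CustomResourceDefinition enables the scale subresource like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  names:
    kind: CronTab
    plural: crontabs
    singular: crontab
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
            status:
              type: object
              properties:
                replicas:
                  type: integer
      subresources:
        scale:
          # Both paths are required, use dot notation, and must sit
          # under .spec and .status respectively.
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
```

With this in place, `kubectl scale` can operate on `crontabs` objects through the `/scale` subresource.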
@@ -1141,7 +1150,7 @@ and create it:

kubectl apply -f my-crontab.yaml
```

-You can specify the category using `kubectl get`:
+You can specify the category when using `kubectl get`:

```
kubectl get all
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
index f8403dec17..a51b5664ba 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
@@ -37,7 +37,7 @@ The `kubectl` tool supports verb-driven commands for creating some of the most c
object types. The commands are named to be recognizable to users unfamiliar with the Kubernetes
object types.

-- `run`: Create a new Deployment object to run Containers in one or more Pods.
+- `run`: Create a new Pod to run a Container.
- `expose`: Create a new Service object to load balance traffic across Pods.
- `autoscale`: Create a new Autoscaler object to automatically horizontally scale a controller, such as a Deployment.
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
index a8faefd63e..2b97ed271c 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md
@@ -64,6 +64,18 @@ configuration file.

* `kubectl delete -f <filename>`

+{{< note >}}
+If the configuration file has specified the `generateName` field in the `metadata`
+section instead of the `name` field, you cannot delete the object using
+`kubectl delete -f <filename>`.
+You will have to use other flags for deleting the object. For example:
+
+```shell
+kubectl delete <type> <name>
+kubectl delete <type> -l <label>
+```
+{{< /note >}}
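To illustrate the `generateName` note in the hunk above, here is a minimal manifest that uses `generateName` instead of `name` (the `demo-app-` prefix and label are hypothetical): the server-generated suffix never appears in the file, which is why a file-based delete cannot match the object.

```yaml
apiVersion: v1
kind: Pod
metadata:
  # On creation the API server appends a random suffix, producing a
  # name such as "demo-app-x7rk2" that is not present in this file,
  # so `kubectl delete -f` has no name to match against.
  generateName: demo-app-
  labels:
    app: demo-app
spec:
  containers:
    - name: app
      image: nginx
```

Deleting such an object therefore has to go through the live, generated name or a label selector (for example, `kubectl delete pods -l app=demo-app`) rather than through the file.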
diff --git a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html index fb782458de..ab008c38af 100644 --- a/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -31,7 +31,7 @@ weight: 20 To interact with the Terminal, please use the desktop/tablet version
-
+
diff --git a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html index ac23bbf61b..8c87cfab18 100644 --- a/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html @@ -24,7 +24,7 @@ weight: 20 To interact with the Terminal, please use the desktop/tablet version
-
+
diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html index 9e1605c518..e89414b917 100644 --- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html @@ -21,7 +21,7 @@ weight: 20
To interact with the Terminal, please use the desktop/tablet version
-
+
diff --git a/content/en/docs/tutorials/kubernetes-basics/public/css/styles.css b/content/en/docs/tutorials/kubernetes-basics/public/css/styles.css index 8c743991d9..a9e2a62009 100644 --- a/content/en/docs/tutorials/kubernetes-basics/public/css/styles.css +++ b/content/en/docs/tutorials/kubernetes-basics/public/css/styles.css @@ -8392,46 +8392,6 @@ button.close src: url('../fonts/Inconsolata-Bold.eot?#iefix') format('embedded-opentype'), url('../fonts/Inconsolata-Bold.woff2') format('woff2'), url('../fonts/Inconsolata-Bold.woff') format('woff'), url('../fonts/Inconsolata-Bold.ttf') format('truetype'), url('../fonts/Inconsolata-Bold.svg#Inconsolata-Bold') format('svg'); } -@font-face -{ - font-family: 'Roboto Slab'; - font-weight: normal; - font-style: normal; - - src: url('../fonts/RobotoSlab-Regular.eot'); - src: url('../fonts/RobotoSlab-Regular.eot?#iefix') format('embedded-opentype'), url('../fonts/RobotoSlab-Regular.woff2') format('woff2'), url('../fonts/RobotoSlab-Regular.woff') format('woff'), url('../fonts/RobotoSlab-Regular.ttf') format('truetype'), url('../fonts/RobotoSlab-Regular.svg#RobotoSlab-Regular') format('svg'); -} - -@font-face -{ - font-family: 'Roboto Slab'; - font-weight: 100; - font-style: normal; - - src: url('../fonts/RobotoSlab-Thin.eot'); - src: url('../fonts/RobotoSlab-Thin.eot?#iefix') format('embedded-opentype'), url('../fonts/RobotoSlab-Thin.woff2') format('woff2'), url('../fonts/RobotoSlab-Thin.woff') format('woff'), url('../fonts/RobotoSlab-Thin.ttf') format('truetype'), url('../fonts/RobotoSlab-Thin.svg#RobotoSlab-Thin') format('svg'); -} - -@font-face -{ - font-family: 'Roboto Slab'; - font-weight: 300; - font-style: normal; - - src: url('../fonts/RobotoSlab-Light.eot'); - src: url('../fonts/RobotoSlab-Light.eot?#iefix') format('embedded-opentype'), url('../fonts/RobotoSlab-Light.woff2') format('woff2'), url('../fonts/RobotoSlab-Light.woff') format('woff'), url('../fonts/RobotoSlab-Light.ttf') format('truetype'), 
url('../fonts/RobotoSlab-Light.svg#RobotoSlab-Light') format('svg'); -} - -@font-face -{ - font-family: 'Roboto Slab'; - font-weight: bold; - font-style: normal; - - src: url('../fonts/RobotoSlab-Bold.eot'); - src: url('../fonts/RobotoSlab-Bold.eot?#iefix') format('embedded-opentype'), url('../fonts/RobotoSlab-Bold.woff2') format('woff2'), url('../fonts/RobotoSlab-Bold.woff') format('woff'), url('../fonts/RobotoSlab-Bold.ttf') format('truetype'), url('../fonts/RobotoSlab-Bold.svg#RobotoSlab-Bold') format('svg'); -} - .layout { display: -webkit-box; @@ -8574,7 +8534,7 @@ button.close .nav { - font-family: Roboto Slab, Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; + font-family: 'Open Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 14px; min-height: 600px; @@ -9035,7 +8995,7 @@ button.close } .content__modules .caption h5 { - font-family: Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; + font-family: 'Open Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 16px; } @@ -9075,641 +9035,6 @@ button.close max-width: 300px; } -/* -.header -{ - padding: 20px 20px; - - background: #fff; -} -@media screen and (max-width: 480px) -{ - .header - { - position: relative; - - padding: 10px 20px; - - background: none; - } -} - -.header__site -{ - font-family: Roboto Slab, Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; - font-size: 32px; - font-weight: bold; - line-height: 60px; - - text-decoration: none !important; - - color: #231f20 !important; -} -@media screen and (max-width: 480px) -{ - .header__site - { - display: block; - overflow: hidden; - - width: 42px; - } -} - -.header__logo -{ - height: 60px; - margin: 0 10px 0 0; - - vertical-align: top; -} -@media screen and (max-width: 480px) -{ - .header__logo - { - display: none; - } -} - -.header__logo-mobile -{ - display: none; -} -@media screen and (max-width: 480px) -{ - .header__logo-mobile - { - display: block; - - height: 40px; 
- } -} - -@media screen and (max-width: 480px) -{ - .header__name - { - display: none; - } -} - -.header__sider -{ - position: absolute; - z-index: 1010; - top: 15px; - right: 20px; - - display: none; - - box-sizing: border-box; - width: 30px; - height: 30px; - - cursor: pointer; - - -webkit-tap-highlight-color: transparent; -} -@media screen and (max-width: 480px) -{ - .header__sider - { - display: block; - } -} - -.header__burger -{ - position: absolute; - top: 50%; - left: 0; - - width: 30px; - height: 3px; - margin: -1px 0 0 0; - - border-radius: 3px; - background: #fff; -} -.header__burger:before, -.header__burger:after -{ - position: absolute; - top: -9px; - left: 0; - - width: 100%; - height: 3px; - margin: -1px 0 0 0; - - content: ''; - -webkit-transition: .12s linear; - transition: .12s linear; - - border-radius: 3px; - background: inherit; -} -.header__burger:after -{ - top: auto; - bottom: -10px; -} -.page_open .header__burger -{ - background: transparent; -} -.page_open .header__burger:before -{ - top: 0; - - -webkit-transform: rotate(45deg); - -ms-transform: rotate(45deg); - transform: rotate(45deg); - - background: #326de6; -} -.page_open .header__burger:after -{ - bottom: 0; - - -webkit-transform: rotate(-45deg); - -ms-transform: rotate(-45deg); - transform: rotate(-45deg); - - background: #326de6; -} - -.footer -{ - font-family: Roboto Slab, Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; - - margin: 20px 0 0; - - text-align: left; -} -.layout > .footer -{ - display: none; -} -@media screen and (max-width: 480px) -{ - .footer - { - display: none; - } - .layout > .footer - { - display: block; - - padding: 0 0 20px; - - text-align: center; - - background: #326de6; - } - .footer:before - { - position: relative; - - display: block; - - height: 15px; - margin: 0 0 40px; - - content: ''; - - background: -webkit-linear-gradient(45deg, #326de6 0%, #4cffd4 100%); - background: linear-gradient(45deg, #326de6 0%, #4cffd4 100%); - } -} - 
-.footer__content -{ - padding: 0 20px; -} - -.footer__social -{ - font-size: 32px; - line-height: 1; - - width: 60px; - margin: 0; - padding: 10px 0; - - list-style: none; - - text-align: left; - - border-top: 1px solid rgba(255, 255, 255, .3); - border-bottom: 1px solid rgba(255, 255, 255, .3); -} -.page_open .footer__social -{ - width: auto; - padding: 0; - - border: none; -} -@media screen and (min-width: 999px) -{ - .footer__social - { - width: auto; - padding: 0; - - border: none; - } -} -@media screen and (max-width: 480px) -{ - .footer__social - { - width: auto; - padding: 0; - - text-align: center; - - border: none; - } -} -@media screen and (min-width: 1000px) -{ - .page_desktop_hide .footer__social - { - width: 60px; - padding: 10px 0; - - border-top: 1px solid rgba(255, 255, 255, .3); - border-bottom: 1px solid rgba(255, 255, 255, .3); - } -} - -.footer__social-item -{ - width: 60px; - margin: 10px 0; - - text-align: center; -} -.page_open .footer__social-item -{ - display: inline-block; - - width: auto; - margin: 0 20px 0 0; - - text-align: left; -} -@media screen and (min-width: 999px) -{ - .footer__social-item - { - display: inline-block; - - width: auto; - margin: 0 20px 0 0; - - text-align: left; - } -} -@media screen and (max-width: 480px) -{ - .footer__social-item - { - display: inline-block; - - width: auto; - margin: 0 10px; - - text-align: center; - } -} -@media screen and (min-width: 1000px) -{ - .page_desktop_hide .footer__social-item - { - width: 60px; - margin: 10px 0; - - text-align: center; - } -} - -.footer__network -{ - display: inline-block; - - text-decoration: none; - - opacity: .7; - color: #fff; -} -.page_open .footer__network -{ - opacity: 1; - color: #326de6; -} -@media screen and (min-width: 999px) -{ - .footer__network - { - opacity: 1; - color: #326de6; - } -} -@media screen and (max-width: 480px) -{ - .footer__network - { - display: inline-block; - - width: auto; - margin: 0 20px 0 0; - } - .footer__network:hover - { - 
opacity: 1; - color: #fff; - } -} -@media screen and (min-width: 1000px) -{ - .page_desktop_hide .footer__network - { - display: inline-block; - - text-decoration: none; - - opacity: .7; - color: #fff; - } -} - -.footer__menu -{ - display: none; - - margin: 20px 0; - padding: 10px 0; - - list-style: none; - - border-top: 1px solid rgba(255, 255, 255, .3); - border-bottom: 1px solid rgba(255, 255, 255, .3); -} -.page_open .footer__menu -{ - display: block; -} -@media screen and (min-width: 999px) -{ - .footer__menu - { - display: block; - } -} -@media screen and (max-width: 480px) -{ - .footer__menu - { - display: block; - - padding-bottom: 40px; - - text-align: left; - - border-top: none; - border-bottom: 2px solid rgba(255, 255, 255, .3); - } -} -@media screen and (min-width: 1000px) -{ - .page_desktop_hide .footer__menu - { - display: none; - } -} - -.footer__item -{ - margin: 10px 0; -} -@media screen and (max-width: 480px) -{ - .footer__item:before - { - display: inline; - - margin: 0 5px 0 0; - - content: '›'; - - opacity: .7; - color: #fff; - } -} - -.footer__link -{ - color: #273d6d; -} -@media screen and (max-width: 480px) -{ - .footer__link - { - opacity: .7; - color: #fff; - } - .footer__link:hover - { - opacity: 1; - color: #fff; - } -} - -.footer__copyright -{ - font-size: 12px; - padding-top: 20px; -} -.page_open .footer__copyright -{ - display: block; -} -@media screen and (min-width: 999px) -{ - .footer__copyright - { - display: block; - } -} -@media screen and (max-width: 480px) -{ - .footer__copyright - { - display: block; - - color: #fff; - } - .footer__copyright b - { - font-size: 16px; - - display: block; - } -} -@media screen and (min-width: 1000px) -{ - .page_desktop_hide .footer__copyright - { - display: none; - } -} -*/ - -/* -.scrolltop -{ - position: fixed; - z-index: 300; - bottom: -100px; - left: 20px; - - display: none; - - width: 56px; - height: 56px; - - -webkit-transition: .36s ease-out; - transition: .36s ease-out; - - opacity: 0; - 
border: 2px solid #fff; - border-radius: 50%; - outline: none; -} -.scrolltop:before, -.scrolltop:after -{ - position: absolute; - top: 23px; - left: 4px; - - width: 26px; - height: 2px; - - content: ''; - -webkit-transform: rotate(-45deg); - -ms-transform: rotate(-45deg); - transform: rotate(-45deg); - - background: #fff; -} -.scrolltop:after -{ - right: 4px; - left: auto; - - -webkit-transform: rotate(45deg); - -ms-transform: rotate(45deg); - transform: rotate(45deg); - - background: #fff; -} -@media screen and (max-width: 992px) -{ - .scrolltop - { - display: block; - } -} -@media screen and (max-width: 480px) -{ - .scrolltop - { - background: #326de6; - background: rgba(50, 109, 230, .5); - box-shadow: 0 0 0 1px #326de6; - } -} -.scrolltop_active -{ - bottom: 20px; - - opacity: 1; -} -*/ - - -/* -body -{ - font-family: Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; - font-size: 15px; - line-height: 1.5; - - min-height: 100vh; - - background: #eee; -} -body.page_open -{ - overflow: hidden; -} - -h1, -h2, -h3, -h4, -h5, -h6 -{ - font-family: Roboto Slab, Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; -} - -h1 -{ - font-size: 36px; - font-weight: bold; - - margin-bottom: 20px; -} - -h2, -h3 -{ - padding-bottom: 10px; - - border-bottom: 1px solid #ebebec; -} - -h2 -{ - font-size: 28px; -} - -.title-light -{ - border: none; -} - -a -{ - color: #326de6; -} - -p a -{ - text-decoration: underline; -} -p a:hover -{ - text-decoration: none; -} -*/ - - .breadcrumb { font-weight: 100; @@ -9735,12 +9060,6 @@ p a:hover content: '|'; } -/* .btn -{ - font-family: Roboto Slab, Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; - - border-radius: 0; -} */ .btn.btn-success { color: #273d6d; @@ -9945,7 +9264,7 @@ p a:hover .katacoda__alert { - font-family: Inconsolata, Roboto, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, sans-serif; + font-family: Inconsolata, 'Helvetica Neue', Helvetica, 'Open Sans', Arial, 
sans-serif; display: none; diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html index 8c6da7c579..77e707c429 100644 --- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html @@ -21,7 +21,7 @@ weight: 20
To interact with the Terminal, please use the desktop/tablet version
-
+
diff --git a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html index 37f7cc0fbe..42663ecdaa 100644 --- a/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html +++ b/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html @@ -21,7 +21,7 @@ weight: 20
To interact with the Terminal, please use the desktop/tablet version
-
+
diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md index 9c6edf784f..64073549ef 100644 --- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md @@ -1039,7 +1039,7 @@ Even though you completely deleted the StatefulSet, and all of its Pods, the Pods are recreated with their PersistentVolumes mounted, and `web-0` and `web-1` continue to serve their hostnames. -Finally, delete the `web` StatefulSet... +Finally, delete the `nginx` Service... ```shell kubectl delete service nginx @@ -1047,7 +1047,7 @@ kubectl delete service nginx ``` service "nginx" deleted ``` -...and the `nginx` Service: +...and the `web` StatefulSet: ```shell kubectl delete statefulset web ``` diff --git a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md b/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md index 0c4964a17f..de67b9eab3 100644 --- a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md +++ b/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md @@ -60,23 +60,15 @@ kubectl create clusterrolebinding cluster-admin-binding \ ## Install kube-state-metrics Kubernetes [*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in. 
- -### Check to see if kube-state-metrics is running -```shell -kubectl get pods --namespace=kube-system | grep kube-state -``` -### Install kube-state-metrics if needed - ```shell git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics kubectl apply -f kube-state-metrics/examples/standard -kubectl get pods --namespace=kube-system | grep kube-state-metrics -``` -Verify that kube-state-metrics is running and ready -```shell -kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-state-metrics ``` +### Check to see if kube-state-metrics is running +```shell +kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics +``` Output: ```shell NAME READY STATUS RESTARTS AGE diff --git a/content/en/includes/partner-style.css b/content/en/includes/partner-style.css index 7aeb255dd5..dc120872e0 100644 --- a/content/en/includes/partner-style.css +++ b/content/en/includes/partner-style.css @@ -82,7 +82,6 @@ .button{ max-width: 100%; box-sizing: border-box; - font-family: "Roboto", sans-serif; margin: 0; display: inline-block; border-radius: 6px; diff --git a/content/en/partners/_index.html b/content/en/partners/_index.html index 7925e03188..80078ae76b 100644 --- a/content/en/partners/_index.html +++ b/content/en/partners/_index.html @@ -7,7 +7,6 @@ cid: partners ---
-
Kubernetes works with partners to create a strong, vibrant codebase that supports a spectrum of complementary platforms.
@@ -17,7 +16,7 @@ cid: partners
Vetted service providers with deep experience helping enterprises successfully adopt Kubernetes.


- +

Interested in becoming a KCSP? @@ -28,7 +27,7 @@ cid: partners Certified Kubernetes Distributions, Hosted Platforms, and Installers Software conformance ensures that every vendor’s version of Kubernetes supports the required APIs.


- +

Interested in becoming Kubernetes Certified? @@ -40,57 +39,13 @@ cid: partners
Vetted training providers who have deep experience in cloud native technology training.


- +

Interested in becoming a KTP?
- - - -
- - -
- -
+ {{< cncf-landscape helpers=true >}}