Merge pull request #24290 from kcmartin/merged-master-dev-1.20
Merge master into dev-1.20 to keep in sync

commit 5e7cb3f9ca

@@ -35,3 +35,5 @@ Note that code issues should be filed against the main kubernetes repository, wh
### Submitting Documentation Pull Requests
If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/home/contribute/create-pull-request/).
For more information, see [contributing to Kubernetes docs](https://kubernetes.io/docs/contribute/).
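A minimal sketch of that flow, assuming you have already forked kubernetes/website on GitHub (the fork URL and branch name below are placeholders):

```bash
# Clone your fork and create a topic branch off master.
git clone https://github.com/<your-github-user>/website.git
cd website
git checkout -b fix-docs-typo master

# ...edit the documentation files...

git commit -am "Fix typo in the concepts page"
git push origin fix-docs-typo
# Finally, open a pull request against kubernetes/website's master branch on GitHub.
```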
OWNERS
@@ -7,10 +7,10 @@ approvers:
|
|||
- sig-docs-en-owners # Defined in OWNERS_ALIASES
|
||||
|
||||
emeritus_approvers:
|
||||
# chenopis, you're welcome to return when you're ready to resume PR wrangling
|
||||
# jaredbhatti, you're welcome to return when you're ready to resume PR wrangling
|
||||
# stewart-yu, you're welcome to return when you're ready to resume PR wrangling
|
||||
# - chenopis, commented out to disable PR assignments
|
||||
# - jaredbhatti, commented out to disable PR assignments
|
||||
- stewart-yu
|
||||
- zacharysarah
|
||||
|
||||
labels:
|
||||
- sig/docs
|
||||
|
|
|
@ -22,12 +22,13 @@ aliases:
|
|||
- mkorbi
|
||||
- rlenferink
|
||||
sig-docs-en-owners: # Admins for English content
|
||||
- annajung
|
||||
- bradtopol
|
||||
- celestehorgan
|
||||
- irvifa
|
||||
- jimangel
|
||||
- kbarnard10
|
||||
- kbhawkey
|
||||
- makoscafee
|
||||
- onlydole
|
||||
- savitharaghunathan
|
||||
- sftim
|
||||
|
@ -43,14 +44,12 @@ aliases:
|
|||
- jimangel
|
||||
- kbarnard10
|
||||
- kbhawkey
|
||||
- makoscafee
|
||||
- onlydole
|
||||
- rajeshdeshpande02
|
||||
- sftim
|
||||
- steveperry-53
|
||||
- tengqm
|
||||
- xiangpengzhao
|
||||
- zacharysarah
|
||||
- zparnold
|
||||
sig-docs-es-owners: # Admins for Spanish content
|
||||
- raelga
|
||||
|
@ -133,7 +132,6 @@ aliases:
|
|||
- ianychoi
|
||||
- seokho-son
|
||||
- ysyukr
|
||||
- zacharysarah
|
||||
sig-docs-ko-reviews: # PR reviews for Korean content
|
||||
- ClaudiaJKang
|
||||
- gochist
|
||||
|
@ -142,35 +140,36 @@ aliases:
|
|||
- ysyukr
|
||||
- pjhwa
|
||||
sig-docs-leads: # Website chairs and tech leads
|
||||
- irvifa
|
||||
- jimangel
|
||||
- kbarnard10
|
||||
- kbhawkey
|
||||
- onlydole
|
||||
- sftim
|
||||
- zacharysarah
|
||||
sig-docs-zh-owners: # Admins for Chinese content
|
||||
- chenopis
|
||||
# chenopis
|
||||
- chenrui333
|
||||
- dchen1107
|
||||
- haibinxie
|
||||
- hanjiayao
|
||||
- lichuqiang
|
||||
- SataQiu
|
||||
- tengqm
|
||||
- xiangpengzhao
|
||||
- xichengliudui
|
||||
- zacharysarah
|
||||
- zhangxiaoyu-zidif
|
||||
sig-docs-zh-reviews: # PR reviews for Chinese content
|
||||
- chenrui333
|
||||
- idealhack
|
||||
# dchen1107
|
||||
# haibinxie
|
||||
# hanjiayao
|
||||
# lichuqiang
|
||||
- SataQiu
|
||||
- tanjunchen
|
||||
- tengqm
|
||||
- xiangpengzhao
|
||||
- xichengliudui
|
||||
- zhangxiaoyu-zidif
|
||||
# zhangxiaoyu-zidif
|
||||
sig-docs-zh-reviews: # PR reviews for Chinese content
|
||||
- chenrui333
|
||||
- howieyuen
|
||||
- idealhack
|
||||
- pigletfly
|
||||
- SataQiu
|
||||
- tanjunchen
|
||||
- tengqm
|
||||
- xiangpengzhao
|
||||
- xichengliudui
|
||||
# zhangxiaoyu-zidif
|
||||
sig-docs-pt-owners: # Admins for Portuguese content
|
||||
- femrtnz
|
||||
- jcjesus
|
||||
|
|
|
@ -40,13 +40,13 @@ Um die Kubernetes-Website lokal laufen zu lassen, empfiehlt es sich, ein speziel
|
|||
Wenn Sie Docker [installiert](https://www.docker.com/get-started) haben, erstellen Sie das Docker-Image `kubernetes-hugo` lokal:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Nachdem das Image erstellt wurde, können Sie die Site lokal ausführen:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Öffnen Sie Ihren Browser unter http://localhost:1313, um die Site anzuzeigen. Wenn Sie Änderungen an den Quelldateien vornehmen, aktualisiert Hugo die Site und erzwingt eine Browseraktualisierung.
|
||||
|
|
|
@ -33,13 +33,13 @@ El método recomendado para levantar una copia local del sitio web kubernetes.io
|
|||
Una vez tenga Docker [configurado en su máquina](https://www.docker.com/get-started), puede construir la imagen de Docker `kubernetes-hugo` localmente ejecutando el siguiente comando en la raíz del repositorio:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Una vez tenga la imagen construida, puede levantar el sitio web ejecutando:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Abra su navegador y visite http://localhost:1313 para acceder a su copia local del sitio. A medida que vaya haciendo cambios en el código fuente, Hugo irá actualizando la página y forzará la actualización en el navegador.
|
||||
|
|
|
@ -16,13 +16,13 @@ Faites tous les changements que vous voulez dans votre fork, et quand vous êtes
|
|||
Une fois votre pull request créée, un examinateur de Kubernetes se chargera de vous fournir une revue claire et exploitable.
|
||||
En tant que propriétaire de la pull request, **il est de votre responsabilité de modifier votre pull request pour tenir compte des commentaires qui vous ont été fournis par l'examinateur de Kubernetes.**
|
||||
Notez également que vous pourriez vous retrouver avec plus d'un examinateur de Kubernetes pour vous fournir des commentaires ou vous pourriez finir par recevoir des commentaires d'un autre examinateur que celui qui vous a été initialement affecté pour vous fournir ces commentaires.
|
||||
De plus, dans certains cas, l'un de vos examinateur peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin.
|
||||
De plus, dans certains cas, l'un de vos examinateurs peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin.
|
||||
Les examinateurs feront de leur mieux pour fournir une revue rapidement, mais le temps de réponse peut varier selon les circonstances.
|
||||
|
||||
Pour plus d'informations sur la contribution à la documentation Kubernetes, voir :
|
||||
|
||||
* [Commencez à contribuer](https://kubernetes.io/docs/contribute/start/)
|
||||
* [Apperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
|
||||
* [Aperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
|
||||
* [Utilisation des modèles de page](https://kubernetes.io/docs/contribute/style/page-content-types/)
|
||||
* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style/style-guide/)
|
||||
* [Traduction de la documentation Kubernetes](https://kubernetes.io/docs/contribute/localization/)
|
||||
|
@ -38,13 +38,13 @@ La façon recommandée d'exécuter le site web Kubernetes localement est d'utili
|
|||
Si vous avez Docker [up and running](https://www.docker.com/get-started), construisez l'image Docker `kubernetes-hugo` localement:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Une fois l'image construite, vous pouvez exécuter le site localement :
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Ouvrez votre navigateur à l'adresse: http://localhost:1313 pour voir le site.
|
||||
|
|
|
@ -41,13 +41,13 @@
|
|||
यदि आप [डॉकर](https://www.docker.com/get-started) चला रहे हैं, तो स्थानीय रूप से `कुबेरनेट्स-ह्यूगो` Docker image बनाएँ:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
एक बार image बन जाने के बाद, आप साइट को स्थानीय रूप से चला सकते हैं:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
साइट देखने के लिए अपने browser को `http://localhost:1313` पर खोलें। जैसा कि आप source फ़ाइलों में परिवर्तन करते हैं, Hugo साइट को अपडेट करता है और browser को refresh करने पर मजबूर करता है।
|
||||
|
|
|
@ -30,13 +30,13 @@ Petunjuk yang disarankan untuk menjalankan Dokumentasi Kubernetes pada mesin lok
|
|||
Jika kamu sudah memiliki **Docker** [yang sudah dapat digunakan](https://www.docker.com/get-started), kamu dapat melakukan **build** `kubernetes-hugo` **Docker image** secara lokal:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Setelah **image** berhasil di-**build**, kamu dapat menjalankan website tersebut pada mesin lokal-mu:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Buka **browser** kamu ke http://localhost:1313 untuk melihat laman dokumentasi. Selama kamu melakukan penambahan konten, **Hugo** akan secara otomatis melakukan perubahan terhadap laman dokumentasi apabila **browser** melakukan proses **refresh**.
|
||||
|
|
|
@ -30,13 +30,13 @@ Il modo consigliato per eseguire localmente il sito Web Kubernetes prevede l'uti
|
|||
Se hai Docker [attivo e funzionante](https://www.docker.com/get-started), crea l'immagine Docker `kubernetes-hugo` localmente:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Dopo aver creato l'immagine, è possibile eseguire il sito Web localmente:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Apri il tuo browser su http://localhost:1313 per visualizzare il sito Web. Mentre modifichi i file sorgenti, Hugo aggiorna automaticamente il sito Web e forza un aggiornamento della pagina visualizzata nel browser.
|
||||
|
|
|
@ -41,13 +41,13 @@
|
|||
도커 [동작 및 실행](https://www.docker.com/get-started) 환경이 있는 경우, 로컬에서 `kubernetes-hugo` 도커 이미지를 빌드 합니다:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
해당 이미지가 빌드 된 이후, 사이트를 로컬에서 실행할 수 있습니다:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
브라우저에서 http://localhost:1313 를 열어 사이트를 살펴봅니다. 소스 파일에 변경 사항이 있을 때, Hugo는 사이트를 업데이트하고 브라우저를 강제로 새로고침합니다.
|
||||
|
|
|
@ -49,13 +49,13 @@ choco install make
|
|||
Jeśli [zainstalowałeś i uruchomiłeś](https://www.docker.com/get-started) już Dockera, zbuduj obraz `kubernetes-hugo` lokalnie:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Po zbudowaniu obrazu, możesz uruchomić serwis lokalnie:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Aby obejrzeć zawartość serwisu otwórz w przeglądarce adres http://localhost:1313. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
|
||||
|
|
|
@ -35,13 +35,13 @@ A maneira recomendada de executar o site do Kubernetes localmente é executar um
|
|||
Se você tiver o Docker [em funcionamento](https://www.docker.com/get-started), crie a imagem do Docker do `kubernetes-hugo` localmente:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Depois que a imagem foi criada, você pode executar o site localmente:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Abra seu navegador para http://localhost:1313 para visualizar o site. Conforme você faz alterações nos arquivos de origem, Hugo atualiza o site e força a atualização do navegador.
|
||||
|
|
|
@ -38,8 +38,8 @@ hugo server --buildFuture
|
|||
Узнать подробнее о том, как поучаствовать в документации Kubernetes, вы можете по ссылкам ниже:
|
||||
|
||||
* [Начните вносить свой вклад](https://kubernetes.io/docs/contribute/)
|
||||
* [Использование шаблонов страниц](http://kubernetes.io/docs/contribute/style/page-templates/)
|
||||
* [Руководство по оформлению документации](http://kubernetes.io/docs/contribute/style/style-guide/)
|
||||
* [Использование шаблонов страниц](https://kubernetes.io/docs/contribute/style/page-content-types/)
|
||||
* [Руководство по оформлению документации](https://kubernetes.io/docs/contribute/style/style-guide/)
|
||||
* [Руководство по локализации Kubernetes](https://kubernetes.io/docs/contribute/localization/)
|
||||
|
||||
## Файл `README.md` на других языках
|
||||
|
|
|
@ -31,13 +31,13 @@ Cách được đề xuất để chạy trang web Kubernetes cục bộ là dù
|
|||
Nếu bạn có Docker đang [up và running](https://www.docker.com/get-started), build `kubernetes-hugo` Docker image cục bộ:
|
||||
|
||||
```bash
|
||||
make docker-image
|
||||
make container-image
|
||||
```
|
||||
|
||||
Khi image đã được built, bạn có thể chạy website cục bộ:
|
||||
|
||||
```bash
|
||||
make docker-serve
|
||||
make container-serve
|
||||
```
|
||||
|
||||
Mở trình duyệt và đến địa chỉ http://localhost:1313 để xem website. Khi bạn thay đổi các file nguồn, Hugo cập nhật website và buộc làm mới trình duyệt.
|
||||
|
|
|
@ -101,7 +101,7 @@ Learn more about SIG Docs Kubernetes community and meetings on the [community pa
|
|||
|
||||
You can also reach the maintainers of this project at:
|
||||
|
||||
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
|
||||
- [Slack](https://kubernetes.slack.com/messages/sig-docs) [Get an invite for this Slack](https://slack.k8s.io/)
|
||||
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
|
||||
|
||||
# Contributing to the docs
|
||||
|
|
|
@ -10,6 +10,7 @@
|
|||
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
|
||||
# INSTRUCTIONS AT https://kubernetes.io/security/
|
||||
|
||||
irvifa
|
||||
jimangel
|
||||
kbarnard10
|
||||
zacharysarah
|
||||
sftim
|
|
@ -17,14 +17,22 @@ limitations under the License.
|
|||
var Search = {
|
||||
init: function () {
|
||||
$(document).ready(function () {
|
||||
// Fill the search input form with the current search keywords
|
||||
const searchKeywords = new URLSearchParams(location.search).get('q');
|
||||
if (searchKeywords !== null && searchKeywords !== '') {
|
||||
const searchInput = document.querySelector('.td-search-input');
|
||||
searchInput.focus();
|
||||
searchInput.value = searchKeywords;
|
||||
}
|
||||
|
||||
// Set a keydown event
|
||||
$(document).on("keypress", ".td-search-input", function (e) {
|
||||
if (e.keyCode !== 13) {
|
||||
return;
|
||||
}
|
||||
|
||||
var query = $(this).val();
|
||||
var searchPage = "{{ "docs/search/" | absURL }}?q=" + query;
|
||||
document.location = searchPage;
|
||||
document.location = "{{ "search/" | absURL }}?q=" + query;
|
||||
|
||||
return false;
|
||||
});
|
||||
|
|
|
@ -42,6 +42,10 @@ $video-section-height: 200px;
|
|||
|
||||
body {
|
||||
background-color: white;
|
||||
|
||||
a {
|
||||
color: $blue;
|
||||
}
|
||||
}
|
||||
|
||||
section {
|
||||
|
@ -71,6 +75,7 @@ footer {
|
|||
background-color: $blue;
|
||||
text-decoration: none;
|
||||
font-size: 1rem;
|
||||
border: 0px;
|
||||
}
|
||||
|
||||
#cellophane {
|
||||
|
@ -336,7 +341,6 @@ dd {
|
|||
width: 100%;
|
||||
height: 45px;
|
||||
line-height: 45px;
|
||||
font-family: "Roboto", sans-serif;
|
||||
font-size: 20px;
|
||||
color: $blue;
|
||||
}
|
||||
|
@ -612,7 +616,6 @@ section#cncf {
|
|||
padding-top: 30px;
|
||||
padding-bottom: 80px;
|
||||
background-size: auto;
|
||||
// font-family: "Roboto Mono", monospace !important;
|
||||
font-size: 24px;
|
||||
// font-weight: bold;
|
||||
|
||||
|
|
|
@ -20,6 +20,15 @@ $announcement-size-adjustment: 8px;
|
|||
padding-top: 2rem !important;
|
||||
}
|
||||
}
|
||||
|
||||
.ui-widget {
|
||||
font-family: inherit;
|
||||
font-size: inherit;
|
||||
}
|
||||
|
||||
.ui-widget-content a {
|
||||
color: $blue;
|
||||
}
|
||||
}
|
||||
|
||||
section {
|
||||
|
@ -44,6 +53,23 @@ section {
|
|||
}
|
||||
}
|
||||
|
||||
body.td-404 main .error-details {
|
||||
max-width: 1100px;
|
||||
margin-left: auto;
|
||||
margin-right: auto;
|
||||
margin-top: 4em;
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
/* Global - Mermaid.js diagrams */
|
||||
|
||||
.mermaid {
|
||||
overflow-x: auto;
|
||||
max-width: 80%;
|
||||
border: 1px solid rgb(222, 226, 230);
|
||||
border-radius: 5px;
|
||||
}
|
||||
|
||||
/* HEADER */
|
||||
|
||||
.td-navbar {
|
||||
|
@ -268,22 +294,34 @@ main {
|
|||
|
||||
// blockquotes and callouts
|
||||
|
||||
blockquote {
|
||||
padding: 0.4rem 0.4rem 0.4rem 1rem !important;
|
||||
}
|
||||
.td-content, body {
|
||||
blockquote.callout {
|
||||
padding: 0.4rem 0.4rem 0.4rem 1rem;
|
||||
border: 1px solid #eee;
|
||||
border-left-width: 0.5em;
|
||||
background: #fff;
|
||||
color: #000;
|
||||
margin-top: 0.5em;
|
||||
margin-bottom: 0.5em;
|
||||
}
|
||||
blockquote.callout {
|
||||
border-radius: calc(1em/3);
|
||||
}
|
||||
.callout.caution {
|
||||
border-left-color: #f0ad4e;
|
||||
}
|
||||
|
||||
// callouts are contained in static CSS as well. these require override.
|
||||
.callout.note {
|
||||
border-left-color: #428bca;
|
||||
}
|
||||
|
||||
.caution {
|
||||
border-left-color: #f0ad4e !important;
|
||||
}
|
||||
.callout.warning {
|
||||
border-left-color: #d9534f;
|
||||
}
|
||||
|
||||
.note {
|
||||
border-left-color: #428bca !important;
|
||||
}
|
||||
|
||||
.warning {
|
||||
border-left-color: #d9534f !important;
|
||||
h1:first-of-type + blockquote.callout {
|
||||
margin-top: 1.5em;
|
||||
}
|
||||
}
|
||||
|
||||
.deprecation-warning {
|
||||
|
@ -393,6 +431,7 @@ body.cid-community > #deprecation-warning > .deprecation-warning > * {
|
|||
}
|
||||
|
||||
color: $blue;
|
||||
margin: 1rem;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -512,3 +551,4 @@ body.td-documentation {
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -11,3 +11,5 @@ Add styles or override variables from the theme here. */
|
|||
@import "base";
|
||||
@import "tablet";
|
||||
@import "desktop";
|
||||
|
||||
$primary: #3371e3;
|
|
@ -112,6 +112,8 @@ copyright_linux = "Copyright © 2020 The Linux Foundation ®."
|
|||
version_menu = "Versions"
|
||||
|
||||
time_format_blog = "Monday, January 02, 2006"
|
||||
time_format_default = "January 02, 2006 at 3:04 PM PST"
|
||||
|
||||
description = "Production-Grade Container Orchestration"
|
||||
showedit = true
|
||||
|
||||
|
@ -124,9 +126,13 @@ docsbranch = "master"
|
|||
deprecated = false
|
||||
currentUrl = "https://kubernetes.io/docs/home/"
|
||||
nextUrl = "https://kubernetes-io-vnext-staging.netlify.com/"
|
||||
githubWebsiteRepo = "github.com/kubernetes/website"
|
||||
|
||||
# See codenew shortcode
|
||||
githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website"
|
||||
|
||||
# GitHub repository link for editing a page and opening issues.
|
||||
github_repo = "https://github.com/kubernetes/website"
|
||||
|
||||
# param for displaying an announcement block on every page.
|
||||
# See /i18n/en.toml for message text and title.
|
||||
announcement = true
|
||||
|
|
|
@ -54,7 +54,8 @@ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view
|
|||
# Zeigen Sie das Passwort für den e2e-Benutzer an
|
||||
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
|
||||
|
||||
kubectl config view -o jsonpath='{.users[].name}' # eine Liste der Benutzer erhalten
|
||||
kubectl config view -o jsonpath='{.users[].name}' # den ersten Benutzer anzeigen
|
||||
kubectl config view -o jsonpath='{.users[*].name}' # eine Liste der Benutzer erhalten
|
||||
kubectl config current-context # den aktuellen Kontext anzeigen
|
||||
kubectl config use-context my-cluster-name # Setzen Sie den Standardkontext auf my-cluster-name
|
||||
|
||||
|
|
|
@ -21,7 +21,7 @@ weight: 20
|
|||
<div class="katacoda__alert">
|
||||
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/1" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/1" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
|
|
|
@ -23,7 +23,7 @@ weight: 20
|
|||
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
|
||||
</div>
|
||||
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/7" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/7" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
|
||||
</div>
|
||||
|
|
|
@ -24,7 +24,7 @@ weight: 20
|
|||
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
|
||||
</div>
|
||||
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/4" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/4" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
|
|
|
@ -21,7 +21,7 @@ weight: 20
|
|||
<div class="katacoda__alert">
|
||||
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/8" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/8" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
|
|
|
@ -21,7 +21,7 @@ weight: 20
|
|||
<div class="katacoda__alert">
|
||||
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/5" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/5" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
|
|
|
@ -21,7 +21,7 @@ weight: 20
|
|||
<div class="katacoda__alert">
|
||||
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
|
||||
</div>
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/6" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/6" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
|
|
|
@ -41,11 +41,6 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise
|
|||
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu20" button id="desktopKCButton">Attend KubeCon EU virtually on August 17-20, 2020</a>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna20" button id="desktopKCButton">Attend KubeCon NA virtually on November 17-20, 2020</a>
|
||||
</div>
|
||||
<div id="videoPlayer">
|
||||
|
|
|
@ -45,7 +45,7 @@ Support for [dynamic maximum volume count](https://github.com/kubernetes/feature
|
|||
|
||||
The StorageObjectInUseProtection feature is now stable and prevents the removal of both [Persistent Volumes](https://github.com/kubernetes/features/issues/499) that are bound to a Persistent Volume Claim, and [Persistent Volume Claims](https://github.com/kubernetes/features/issues/498) that are being used by a pod. This safeguard will help prevent issues caused by deleting a PV or a PVC that is currently tied to an active pod.
|
||||
|
||||
Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#111-release-notes).
|
||||
Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/release-1.11/CHANGELOG-1.11.md#111-release-notes).
|
||||
|
||||
## Availability
|
||||
|
||||
|
|
|
@@ -61,7 +61,7 @@ mind.
|
|||
## Avoiding permanent beta
|
||||
|
||||
For Kubernetes REST APIs, when a new feature's API reaches beta, that starts a countdown.
|
||||
The beta-quality API now has **nine calendar months** to either:
|
||||
The beta-quality API now has **three releases** (about nine calendar months) to either:
|
||||
- reach GA, and deprecate the beta, or
|
||||
- have a new beta version (_and deprecate the previous beta_).
|
||||
|
||||
|
@@ -69,9 +69,10 @@ To be clear, at this point **only REST APIs are affected**. For example, _APILis
|
|||
a beta feature but isn't itself a REST API. Right now there are no plans to automatically
|
||||
deprecate _APIListChunking_ nor any other features that aren't REST APIs.
|
||||
|
||||
If a REST API reaches the end of that 9 month countdown, then the next Kubernetes release
|
||||
will deprecate that API version. There's no option for the REST API to stay at the same
|
||||
beta version beyond the first Kubernetes release to come out after the 9 month window.
|
||||
If a beta API has not graduated to GA after three Kubernetes releases, then the
|
||||
next Kubernetes release will deprecate that API version. There's no option for
|
||||
the REST API to stay at the same beta version beyond the first Kubernetes
|
||||
release to come out after the release window.
|
||||
|
||||
### What this means for you
|
||||
|
||||
|
|
|
@@ -7,7 +7,7 @@ slug: kubernetes-release-1.19-accentuate-the-paw-sitive
|
|||
|
||||
**Authors:** [Kubernetes 1.19 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.19/release_team.md)
|
||||
|
||||
Finally, we have arrived with Kubernetes 1.19, the second release for 2020, and by far the longest release cycle lasting 20 weeks in total. It consists of 33 enhancements: 12 enhancements are moving to stable, 18 enhancements in beta, and 13 enhancements in alpha.
|
||||
Finally, we have arrived with Kubernetes 1.19, the second release for 2020, and by far the longest release cycle lasting 20 weeks in total. It consists of 34 enhancements: 10 enhancements are moving to stable, 15 enhancements in beta, and 9 enhancements in alpha.
|
||||
|
||||
The 1.19 release was quite different from a regular release due to COVID-19, the George Floyd protests, and several other global events that we experienced as a release team. Due to these events, we made the decision to adjust our timeline and allow the SIGs, Working Groups, and contributors more time to get things done. The extra time also allowed for people to take time to focus on their lives outside of the Kubernetes project, and ensure their mental wellbeing was in a good place.
|
||||
|
||||
|
|
|
@@ -0,0 +1,30 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Increasing the Kubernetes Support Window to One Year'
|
||||
date: 2020-08-31
|
||||
slug: kubernetes-1-19-feature-one-year-support
|
||||
---
|
||||
|
||||
**Authors:** Tim Pepper (VMware), Nick Young (VMware)
|
||||
|
||||
Starting with Kubernetes 1.19, the support window for Kubernetes versions [will increase from 9 months to one year](https://github.com/kubernetes/enhancements/issues/1498). The longer support window is intended to allow organizations to perform major upgrades at a time of the year that works the best for them.
|
||||
|
||||
This is a big change. For many years, the Kubernetes project has delivered a new minor release (e.g.: 1.13 or 1.14) every 3 months. The project provides bugfix support via patch releases (e.g.: 1.13.Y) for three parallel branches of the codebase. Combined, this led to each minor release (e.g.: 1.13) having a patch release stream of support for approximately 9 months. In the end, a cluster operator had to upgrade at least every 9 months to remain supported.
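As a small illustrative check (assuming kubectl access to the cluster), you can confirm which minor version your control plane is running before mapping it onto the support window:

```bash
# Compare the reported server version against the release
# branches that currently receive patch support.
kubectl version --short
```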
|
||||
|
||||
A survey conducted in early 2019 by the WG LTS showed that a significant subset of Kubernetes end-users fail to upgrade within the 9-month support period.
|
||||
|
||||

|
||||
|
||||
This, and other responses from the survey, suggest that a considerable portion of our community would better be able to manage their deployments on supported versions if the patch support period were extended to 12-14 months. It appears to be true regardless of whether the users are on DIY builds or commercially vendored distributions. An extension in the patch support length of time would thus lead to a larger percentage of our user base running supported versions compared to what we have now.
|
||||
|
||||
A yearly support period provides the cushion end-users appear to desire, and is more aligned with familiar annual planning cycles.
|
||||
There are many unknowns about changing the support windows for a project with as many moving parts as Kubernetes. Keeping the change relatively small (relatively being the important word) gives us the chance to find out what those unknowns are in detail and address them.
|
||||
From Kubernetes version 1.19 on, the support window will be extended to one year. For Kubernetes versions 1.16, 1.17, and 1.18, the story is more complicated.
|
||||
|
||||
All of these versions still fall under the older “three releases support” model, and will drop out of support when 1.19, 1.20 and 1.21 are respectively released. However, because the 1.19 release has been delayed due to the events of 2020, they will end up with close to a year of support (depending on their exact release dates).
|
||||
|
||||
For example, 1.19 was released on the 26th of August 2020, which is 11 months since the release of 1.16. Since 1.16 is still under the old release policy, this means that it is now out of support.
|
||||
|
||||

|
||||
|
||||
If you’ve got thoughts or feedback, we’d love to hear them. Please contact us on [#wg-lts](https://kubernetes.slack.com/messages/wg-lts/) on the Kubernetes Slack, or to the [kubernetes-wg-lts mailing list](https://groups.google.com/g/kubernetes-wg-lts).
|
|
@@ -0,0 +1,394 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Ephemeral volumes with storage capacity tracking: EmptyDir on steroids'
|
||||
date: 2020-09-01
|
||||
slug: ephemeral-volumes-with-storage-capacity-tracking
|
||||
---
|
||||
|
||||
**Author:** Patrick Ohly (Intel)
|
||||
|
||||
Some applications need additional storage but don't care whether that
|
||||
data is stored persistently across restarts. For example, caching
|
||||
services are often limited by memory size and can move infrequently
|
||||
used data into storage that is slower than memory with little impact
|
||||
on overall performance. Other applications expect some read-only input
|
||||
data to be present in files, like configuration data or secret keys.
|
||||
|
||||
Kubernetes already supports several kinds of such [ephemeral
|
||||
volumes](/docs/concepts/storage/ephemeral-volumes), but the
|
||||
functionality of those is limited to what is implemented inside
|
||||
Kubernetes.
|
||||
|
||||
[CSI ephemeral volumes](https://kubernetes.io/blog/2020/01/21/csi-ephemeral-inline-volumes/)
|
||||
made it possible to extend Kubernetes with CSI
|
||||
drivers that provide light-weight, local volumes. These [*inject
|
||||
arbitrary states, such as configuration, secrets, identity, variables
|
||||
or similar
|
||||
information*](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20190122-csi-inline-volumes.md#motivation).
|
||||
CSI drivers must be modified to support this Kubernetes feature,
|
||||
i.e. normal, standard-compliant CSI drivers will not work, and
|
||||
by design such volumes are supposed to be usable on whatever node
|
||||
is chosen for a pod.
|
||||
|
||||
This is problematic for volumes which consume significant resources on
|
||||
a node or for special storage that is only available on some nodes.
|
||||
Therefore, Kubernetes 1.19 introduces two new alpha features for
|
||||
volumes that are conceptually more like the `EmptyDir` volumes:
|
||||
- [*generic* ephemeral volumes](/docs/concepts/storage/ephemeral-volumes#generic-ephemeral-volumes) and
|
||||
- [CSI storage capacity tracking](/docs/concepts/storage/storage-capacity).
|
||||
|
||||
The advantages of the new approach are:
|
||||
- Storage can be local or network-attached.
|
||||
- Volumes can have a fixed size that applications are never able to exceed.
|
||||
- Works with any CSI driver that supports provisioning of persistent
|
||||
volumes and (for capacity tracking) implements the CSI `GetCapacity` call.
|
||||
- Volumes may have some initial data, depending on the driver and
|
||||
parameters.
|
||||
- All of the typical volume operations (snapshotting,
|
||||
resizing, the future storage capacity tracking, etc.)
|
||||
are supported.
|
||||
- The volumes are usable with any app controller that accepts
|
||||
a Pod or volume specification.
|
||||
- The Kubernetes scheduler itself picks suitable nodes, i.e. there is
|
||||
no need anymore to implement and configure scheduler extenders and
|
||||
mutating webhooks.
|
||||
|
||||
This makes generic ephemeral volumes a suitable solution for several
|
||||
use cases:
|
||||
|
||||
# Use cases
|
||||
|
||||
## Persistent Memory as DRAM replacement for memcached
|
||||
|
||||
Recent releases of memcached added [support for using Persistent
|
||||
Memory](https://memcached.org/blog/persistent-memory/) (PMEM) instead
|
||||
of standard DRAM. When deploying memcached through one of the app
|
||||
controllers, generic ephemeral volumes make it possible to request a PMEM volume
|
||||
of a certain size from a CSI driver like
|
||||
[PMEM-CSI](https://intel.github.io/pmem-csi/).
|
||||
|
||||
## Local LVM storage as scratch space
|
||||
|
||||
Applications working with data sets that exceed the RAM size can
|
||||
request local storage with performance characteristics or size that is
|
||||
not met by the normal Kubernetes `EmptyDir` volumes. For example,
|
||||
[TopoLVM](https://github.com/cybozu-go/topolvm) was written for that
|
||||
purpose.
|
||||
|
||||
## Read-only access to volumes with data
|
||||
|
||||
Provisioning a volume might result in a non-empty volume:
|
||||
- [restore a snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)
|
||||
- [cloning a volume](/docs/concepts/storage/volume-pvc-datasource)
|
||||
- [generic data populators](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20200120-generic-data-populators.md)
|
||||
|
||||
Such volumes can be mounted read-only.
|
||||
|
||||
# How it works
|
||||
|
||||
## Generic ephemeral volumes
|
||||
|
||||
The key idea behind generic ephemeral volumes is that a new volume
|
||||
source, the so-called
|
||||
[`EphemeralVolumeSource`](/docs/reference/generated/kubernetes-api/#ephemeralvolumesource-v1alpha1-core)
|
||||
contains all fields that are needed to create a volume claim
|
||||
(historically called persistent volume claim, PVC). A new controller
|
||||
in the `kube-controller-manager` waits for Pods which embed such a
|
||||
volume source and then creates a PVC for that pod. To a CSI driver
|
||||
deployment, that PVC looks like any other, so no special support is
|
||||
needed.
|
||||
|
||||
As long as these PVCs exist, they can be used like any other volume claim. In
|
||||
particular, they can be referenced as data source in volume cloning or
|
||||
snapshotting. The PVC object also holds the current status of the
|
||||
volume.
|
||||
|
||||
Naming of the automatically created PVCs is deterministic: the name is
|
||||
a combination of Pod name and volume name, with a hyphen (`-`) in the
|
||||
middle. This deterministic naming makes it easier to
|
||||
interact with the PVC because one does not have to search for it once
|
||||
the Pod name and volume name are known. The downside is that the name might
|
||||
be in use already. This is detected by Kubernetes and then blocks Pod
|
||||
startup.
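Because the name is deterministic, looking up the automatically created claim is a one-liner; the Pod and volume names here are taken from the example later in this post:

```bash
# The claim is named <pod name>-<volume name>.
kubectl get pvc my-csi-app-inline-volume-my-csi-volume
```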
|
||||
|
||||
To ensure that the volume gets deleted together with the pod, the
|
||||
controller makes the Pod the owner of the volume claim. When the Pod
|
||||
gets deleted, the normal garbage-collection mechanism also removes the
|
||||
claim and thus the volume.
|
||||
|
||||
Claims select the storage driver through the normal storage class
|
||||
mechanism. Although storage classes with both immediate and late
|
||||
binding (aka `WaitForFirstConsumer`) are supported, for ephemeral
|
||||
volumes it makes more sense to use `WaitForFirstConsumer`: then Pod
|
||||
scheduling can take into account both node utilization and
|
||||
availability of storage when choosing a node. This is where the other
|
||||
new feature comes in.
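Before moving on, here is a minimal sketch of such a late-binding storage class; the class and provisioner names match the PMEM-CSI example used below, so adjust them for your own driver:

```bash
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pmem-csi-sc-late-binding
provisioner: pmem-csi.intel.com
# Delay binding until a Pod uses the claim, so the scheduler can
# weigh node utilization and storage availability together.
volumeBindingMode: WaitForFirstConsumer
EOF
```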
|
||||
|
||||
## Storage capacity tracking
|
||||
|
||||
Normally, the Kubernetes scheduler has no information about where a
|
||||
CSI driver might be able to create a volume. It also has no way of
|
||||
talking directly to a CSI driver to retrieve that information. It
|
||||
therefore tries different nodes until it finds one where all volumes
|
||||
can be made available (late binding) or leaves it entirely to the
|
||||
driver to choose a location (immediate binding).
|
||||
|
||||
The new [`CSIStorageCapacity` alpha
|
||||
API](/docs/reference/generated/kubernetes-api/v1.19/#csistoragecapacity-v1alpha1-storage-k8s-io)
|
||||
allows storing the necessary information in etcd where it is available to the
|
||||
scheduler. In contrast to support for generic ephemeral volumes,
|
||||
storage capacity tracking must be [enabled when deploying a CSI
|
||||
driver](https://github.com/kubernetes-csi/external-provisioner/blob/master/README.md#capacity-support):
|
||||
the `external-provisioner` must be told to publish capacity
|
||||
information that it then retrieves from the CSI driver through the normal
|
||||
`GetCapacity` call.
|
||||
<!-- TODO: update the link with a revision once https://github.com/kubernetes-csi/external-provisioner/pull/450 is merged -->
|
||||
|
||||
When the Kubernetes scheduler needs to choose a node for a Pod with an
|
||||
unbound volume that uses late binding and the CSI driver deployment
|
||||
has opted into the feature by setting the [`CSIDriver.storageCapacity`
|
||||
flag](/docs/reference/generated/kubernetes-api/v1.19/#csidriver-v1beta1-storage-k8s-io)
|
||||
flag, the scheduler automatically filters out nodes that do not have
|
||||
access to enough storage capacity. This works for generic ephemeral
|
||||
and persistent volumes but *not* for CSI ephemeral volumes because the
|
||||
parameters of those are opaque for Kubernetes.
|
||||
|
||||
As usual, volumes with immediate binding get created before scheduling
|
||||
pods, with their location chosen by the storage driver. Therefore, the
|
||||
external-provisioner's default configuration skips storage
|
||||
classes with immediate binding as the information wouldn't be used anyway.
|
||||
|
||||
Because the Kubernetes scheduler must act on potentially outdated
|
||||
information, there is no guarantee that the capacity is still available
|
||||
when a volume is to be created. Still, the chances that it can be created
|
||||
without retries should be higher.
|
||||
|
||||
# Security
|
||||
|
||||
## CSIStorageCapacity
|
||||
|
||||
CSIStorageCapacity objects are namespaced. When deploying each CSI
|
||||
driver in its own namespace and, as recommended, limiting the RBAC
|
||||
permissions for CSIStorageCapacity to that namespace, it is
|
||||
always obvious where the data came from. However, Kubernetes does
|
||||
not check that and typically drivers get installed in the same
|
||||
namespace anyway, so ultimately drivers are *expected to behave* and
|
||||
not publish incorrect data.
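As an illustrative sketch of that recommendation (the Role name and namespace are hypothetical), the RBAC rule granted to the driver could be scoped like this:

```bash
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: csistoragecapacity-editor   # hypothetical name
  namespace: pmem-csi               # the namespace the driver runs in
rules:
- apiGroups: ["storage.k8s.io"]
  resources: ["csistoragecapacities"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF
```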
|
||||
|
||||
## Generic ephemeral volumes
|
||||
|
||||
If users have permission to create a Pod (directly or indirectly),
|
||||
then they can also create generic ephemeral volumes even when they do
|
||||
not have permission to create a volume claim. That's because RBAC
|
||||
permission checks are applied to the controller which creates the
|
||||
PVC, not the original user. This is a fundamental change that must be
|
||||
[taken into
|
||||
account](/docs/concepts/storage/ephemeral-volumes#security) before
|
||||
enabling the feature in clusters where untrusted users are not
|
||||
supposed to have permission to create volumes.
|
||||
|
||||
# Example
|
||||
|
||||
A [special branch](https://github.com/intel/pmem-csi/commits/kubernetes-1-19-blog-post)
|
||||
in PMEM-CSI contains all the necessary changes to bring up a
|
||||
Kubernetes 1.19 cluster inside QEMU VMs with both alpha features
|
||||
enabled. The PMEM-CSI driver code is used unchanged, only the
|
||||
deployment was updated.
|
||||
|
||||
On a suitable machine (Linux, non-root user can use Docker - see the
|
||||
[QEMU and
|
||||
Kubernetes](https://intel.github.io/pmem-csi/0.7/docs/autotest.html#qemu-and-kubernetes)
|
||||
section in the PMEM-CSI documentation), the following commands bring
|
||||
up a cluster and install the PMEM-CSI driver:
|
||||
|
||||
```console
|
||||
git clone --branch=kubernetes-1-19-blog-post https://github.com/intel/pmem-csi.git
|
||||
cd pmem-csi
|
||||
export TEST_KUBERNETES_VERSION=1.19 TEST_FEATURE_GATES=CSIStorageCapacity=true,GenericEphemeralVolume=true TEST_PMEM_REGISTRY=intel
|
||||
make start && echo && test/setup-deployment.sh
|
||||
```
|
||||
|
||||
If all goes well, the output contains the following usage
|
||||
instructions:
|
||||
|
||||
```
|
||||
The test cluster is ready. Log in with [...]/pmem-csi/_work/pmem-govm/ssh.0, run
|
||||
kubectl once logged in. Alternatively, use kubectl directly with the
|
||||
following env variable:
|
||||
KUBECONFIG=[...]/pmem-csi/_work/pmem-govm/kube.config
|
||||
|
||||
secret/pmem-csi-registry-secrets created
|
||||
secret/pmem-csi-node-secrets created
|
||||
serviceaccount/pmem-csi-controller created
|
||||
...
|
||||
To try out the pmem-csi driver ephemeral volumes:
|
||||
cat deploy/kubernetes-1.19/pmem-app-ephemeral.yaml |
|
||||
[...]/pmem-csi/_work/pmem-govm/ssh.0 kubectl create -f -
|
||||
```
|
||||
|
||||
The CSIStorageCapacity objects are not meant to be human-readable, so
|
||||
some post-processing is needed. The following Golang template filters
|
||||
all objects by the storage class that the example uses and prints the
|
||||
name, topology and capacity:
|
||||
|
||||
```console
|
||||
kubectl get \
|
||||
-o go-template='{{range .items}}{{if eq .storageClassName "pmem-csi-sc-late-binding"}}{{.metadata.name}} {{.nodeTopology.matchLabels}} {{.capacity}}
|
||||
{{end}}{{end}}' \
|
||||
csistoragecapacities
|
||||
```
|
||||
|
||||
```
|
||||
csisc-2js6n map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker2] 30716Mi
|
||||
csisc-sqdnt map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker1] 30716Mi
|
||||
csisc-ws4bv map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker3] 30716Mi
|
||||
```
|
||||
|
||||
One individual object has the following content:
|
||||
|
||||
```console
|
||||
kubectl describe csistoragecapacities/csisc-6cw8j
|
||||
```
|
||||
|
||||
```
|
||||
Name: csisc-sqdnt
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
API Version: storage.k8s.io/v1alpha1
|
||||
Capacity: 30716Mi
|
||||
Kind: CSIStorageCapacity
|
||||
Metadata:
|
||||
Creation Timestamp: 2020-08-11T15:41:03Z
|
||||
Generate Name: csisc-
|
||||
Managed Fields:
|
||||
...
|
||||
Owner References:
|
||||
API Version: apps/v1
|
||||
Controller: true
|
||||
Kind: StatefulSet
|
||||
Name: pmem-csi-controller
|
||||
UID: 590237f9-1eb4-4208-b37b-5f7eab4597d1
|
||||
Resource Version: 2994
|
||||
Self Link: /apis/storage.k8s.io/v1alpha1/namespaces/default/csistoragecapacities/csisc-sqdnt
|
||||
UID: da36215b-3b9d-404a-a4c7-3f1c3502ab13
|
||||
Node Topology:
|
||||
Match Labels:
|
||||
pmem-csi.intel.com/node: pmem-csi-pmem-govm-worker1
|
||||
Storage Class Name: pmem-csi-sc-late-binding
|
||||
Events: <none>
|
||||
```
|
||||
|
||||
Now let's create the example app with one generic ephemeral
|
||||
volume. The `pmem-app-ephemeral.yaml` file contains:
|
||||
|
||||
```yaml
|
||||
# This example Pod definition demonstrates
|
||||
# how to use generic ephemeral inline volumes
|
||||
# with a PMEM-CSI storage class.
|
||||
kind: Pod
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: my-csi-app-inline-volume
|
||||
spec:
|
||||
containers:
|
||||
- name: my-frontend
|
||||
image: intel/pmem-csi-driver-test:v0.7.14
|
||||
command: [ "sleep", "100000" ]
|
||||
volumeMounts:
|
||||
- mountPath: "/data"
|
||||
name: my-csi-volume
|
||||
volumes:
|
||||
- name: my-csi-volume
|
||||
ephemeral:
|
||||
volumeClaimTemplate:
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 4Gi
|
||||
storageClassName: pmem-csi-sc-late-binding
|
||||
```
|
||||
|
||||
After creating that as shown in the usage instructions above, we have one additional Pod and PVC:
|
||||
|
||||
```console
|
||||
kubectl get pods/my-csi-app-inline-volume -o wide
|
||||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
my-csi-app-inline-volume 1/1 Running 0 6m58s 10.36.0.2 pmem-csi-pmem-govm-worker1 <none> <none>
|
||||
```
|
||||
|
||||
```console
|
||||
kubectl get pvc/my-csi-app-inline-volume-my-csi-volume
|
||||
```
|
||||
|
||||
```
|
||||
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
|
||||
my-csi-app-inline-volume-my-csi-volume Bound pvc-c11eb7ab-a4fa-46fe-b515-b366be908823 4Gi RWO pmem-csi-sc-late-binding 9m21s
|
||||
```
|
||||
|
||||
That PVC is owned by the Pod:
|
||||
|
||||
```console
|
||||
kubectl get -o yaml pvc/my-csi-app-inline-volume-my-csi-volume
|
||||
```
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
annotations:
|
||||
pv.kubernetes.io/bind-completed: "yes"
|
||||
pv.kubernetes.io/bound-by-controller: "yes"
|
||||
volume.beta.kubernetes.io/storage-provisioner: pmem-csi.intel.com
|
||||
volume.kubernetes.io/selected-node: pmem-csi-pmem-govm-worker1
|
||||
creationTimestamp: "2020-08-11T15:44:57Z"
|
||||
finalizers:
|
||||
- kubernetes.io/pvc-protection
|
||||
managedFields:
|
||||
...
|
||||
name: my-csi-app-inline-volume-my-csi-volume
|
||||
namespace: default
|
||||
ownerReferences:
|
||||
- apiVersion: v1
|
||||
blockOwnerDeletion: true
|
||||
controller: true
|
||||
kind: Pod
|
||||
name: my-csi-app-inline-volume
|
||||
uid: 75c925bf-ca8e-441a-ac67-f190b7a2265f
|
||||
...
|
||||
```
|
||||
|
||||
Eventually, the storage capacity information for `pmem-csi-pmem-govm-worker1` also gets updated:
|
||||
|
||||
```
|
||||
csisc-2js6n map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker2] 30716Mi
|
||||
csisc-sqdnt map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker1] 26620Mi
|
||||
csisc-ws4bv map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker3] 30716Mi
|
||||
```
|
||||
|
||||
If another app needs more than 26620Mi, the Kubernetes
|
||||
scheduler will not pick `pmem-csi-pmem-govm-worker1` anymore.
|
||||
|
||||
|
||||
# Next steps
|
||||
|
||||
Both features are under development. Several open questions were
|
||||
already raised during the alpha review process. The two enhancement
|
||||
proposals document the work that will be needed for migration to beta and what
|
||||
alternatives were already considered and rejected:
|
||||
|
||||
* [KEP-1698: generic ephemeral inline
|
||||
volumes](https://github.com/kubernetes/enhancements/blob/9d7a75d/keps/sig-storage/1698-generic-ephemeral-volumes/README.md)
|
||||
* [KEP-1472: Storage Capacity
|
||||
Tracking](https://github.com/kubernetes/enhancements/tree/9d7a75d/keps/sig-storage/1472-storage-capacity-tracking)
|
||||
|
||||
Your feedback is crucial for driving that development. SIG-Storage
|
||||
[meets
|
||||
regularly](https://github.com/kubernetes/community/tree/master/sig-storage#meetings)
|
||||
and can be reached via [Slack and a mailing
|
||||
list](https://github.com/kubernetes/community/tree/master/sig-storage#contact).
|
|
@@ -0,0 +1,46 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Scaling Kubernetes Networking With EndpointSlices'
|
||||
date: 2020-09-02
|
||||
slug: scaling-kubernetes-networking-with-endpointslices
|
||||
---
|
||||
|
||||
**Author:** Rob Scott (Google)
|
||||
|
||||
EndpointSlices are an exciting new API that provides a scalable and extensible alternative to the Endpoints API. EndpointSlices track IP addresses, ports, readiness, and topology information for Pods backing a Service.
|
||||
|
||||
In Kubernetes 1.19 this feature is enabled by default with kube-proxy reading from [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) instead of Endpoints. Although this will mostly be an invisible change, it should result in noticeable scalability improvements in large clusters. It also enables significant new features in future Kubernetes releases like [Topology Aware Routing](/docs/concepts/services-networking/service-topology/).
|
||||
|
||||
## Scalability Limitations of the Endpoints API
|
||||
With the Endpoints API, there was only one Endpoints resource for a Service. That meant that it needed to be able to store IP addresses and ports (network endpoints) for every Pod that was backing the corresponding Service. This resulted in huge API resources. To compound this problem, kube-proxy was running on every node and watching for any updates to Endpoints resources. If even a single network endpoint changed in an Endpoints resource, the whole object would have to be sent to each of those instances of kube-proxy.
|
||||
|
||||
A further limitation of the Endpoints API is that it limits the number of network endpoints that can be tracked for a Service. The default size limit for an object stored in etcd is 1.5MB. In some cases that can limit an Endpoints resource to 5,000 Pod IPs. This is not an issue for most users, but it becomes a significant problem for users with Services approaching this size.
|
||||
|
||||
To show just how significant these issues become at scale, it helps to have a simple example. Think about a Service which has 5,000 Pods; it might end up with a 1.5MB Endpoints resource. If even a single network endpoint in that list changes, the full Endpoints resource will need to be distributed to each Node in the cluster. This becomes quite an issue in a large cluster with 3,000 Nodes. Each update would involve sending 4.5GB of data (1.5MB Endpoints * 3,000 Nodes) across the cluster. That's nearly enough to fill up a DVD, and it would happen for each Endpoints change. Imagine a rolling update that results in all 5,000 Pods being replaced - that's more than 22TB (or 5,000 DVDs) worth of data transferred.
|
||||
|
||||
## Splitting endpoints up with the EndpointSlice API
|
||||
The EndpointSlice API was designed to address this issue with an approach similar to sharding. Instead of tracking all Pod IPs for a Service with a single Endpoints resource, we split them into multiple smaller EndpointSlices.
|
||||
|
||||
Consider an example where a Service is backed by 15 pods. We'd end up with a single Endpoints resource that tracked all of them. If EndpointSlices were configured to store 5 endpoints each, we'd end up with 3 different EndpointSlices:
|
||||

|
||||
|
||||
By default, EndpointSlices store as many as 100 endpoints each, though this can be configured with the `--max-endpoints-per-slice` flag on kube-controller-manager.
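To see how a Service's endpoints are spread across slices, you can list them by the well-known service-name label (assuming a Service called `my-service` in the current namespace):

```bash
# EndpointSlices created for a Service carry this label.
kubectl get endpointslices -l kubernetes.io/service-name=my-service
```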
|
||||
|
||||
## EndpointSlices provide 10x scalability improvements
|
||||
This API dramatically improves networking scalability. Now when a Pod is added or removed, only 1 small EndpointSlice needs to be updated. This difference becomes quite noticeable when hundreds or thousands of Pods are backing a single Service.
|
||||
|
||||
Potentially more significant, now that all Pod IPs for a Service don't need to be stored in a single resource, we don't have to worry about the size limit for objects stored in etcd. EndpointSlices have already been used to scale Services beyond 100,000 network endpoints.
|
||||
|
||||
All of this is brought together with some significant performance improvements that have been made in kube-proxy. When using EndpointSlices at scale, significantly less data will be transferred for endpoints updates and kube-proxy should be faster to update iptables or ipvs rules. Beyond that, Services can now scale to at least 10 times beyond any previous limitations.
|
||||
|
||||
## EndpointSlices enable new functionality
|
||||
Introduced as an alpha feature in Kubernetes v1.16, EndpointSlices were built to enable some exciting new functionality in future Kubernetes releases. This could include dual-stack Services, topology aware routing, and endpoint subsetting.
|
||||
|
||||
Dual-Stack Services are an exciting new feature that has been in development alongside EndpointSlices. They will utilize both IPv4 and IPv6 addresses for Services and rely on the addressType field on EndpointSlices to track these addresses by IP family.
|
||||
|
||||
Topology aware routing will update kube-proxy to prefer routing requests within the same zone or region. This makes use of the topology fields stored for each endpoint in an EndpointSlice. As a further refinement of that, we're exploring the potential of endpoint subsetting. This would allow kube-proxy to only watch a subset of EndpointSlices. For example, this might be combined with topology aware routing so that kube-proxy would only need to watch EndpointSlices containing endpoints within the same zone. This would provide another very significant scalability improvement.
|
||||
|
||||
## What does this mean for the Endpoints API?
|
||||
Although the EndpointSlice API is providing a newer and more scalable alternative to the Endpoints API, the Endpoints API will continue to be considered generally available and stable. The most significant change planned for the Endpoints API will involve beginning to truncate Endpoints that would otherwise run into scalability issues.
|
||||
|
||||
The Endpoints API is not going away, but many new features will rely on the EndpointSlice API. To take advantage of the new scalability and functionality that EndpointSlices provide, applications that currently consume Endpoints will likely want to consider supporting EndpointSlices in the future.
|
|
@ -0,0 +1,278 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Warning: Helpful Warnings Ahead"
|
||||
date: 2020-09-03
|
||||
slug: warnings
|
||||
---
|
||||
|
||||
**Author**: Jordan Liggitt (Google)
|
||||
|
||||
As Kubernetes maintainers, we're always looking for ways to improve usability while preserving compatibility.
|
||||
As we develop features, triage bugs, and answer support questions, we accumulate information that would be helpful for Kubernetes users to know.
|
||||
In the past, sharing that information was limited to out-of-band methods like release notes, announcement emails, documentation, and blog posts.
|
||||
Unless someone knew to seek out that information and managed to find it, they would not benefit from it.
|
||||
|
||||
In Kubernetes v1.19, we added a feature that allows the Kubernetes API server to
|
||||
[send warnings to API clients](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1693-warnings).
|
||||
The warning is sent using a [standard `Warning` response header](https://tools.ietf.org/html/rfc7234#section-5.5),
|
||||
so it does not change the status code or response body in any way.
|
||||
This allows the server to send warnings easily readable by any API client, while remaining compatible with previous client versions.
|
||||
|
||||
Warnings are surfaced by `kubectl` v1.19+ in `stderr` output, and by the `k8s.io/client-go` client library v0.19.0+ in log output.
|
||||
The `k8s.io/client-go` behavior can be overridden [per-process](https://godoc.org/k8s.io/client-go/rest#SetDefaultWarningHandler)
|
||||
or [per-client](https://godoc.org/k8s.io/client-go/rest#Config).
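For example, here is a minimal sketch of overriding the process-wide handler, assuming `k8s.io/client-go` v0.19.0+:

```golang
import (
	"os"

	"k8s.io/client-go/rest"
)

func init() {
	// Write warnings to stderr and suppress duplicate messages for this process.
	rest.SetDefaultWarningHandler(
		rest.NewWarningWriter(os.Stderr, rest.WarningWriterOptions{Deduplicate: true}),
	)
}
```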
|
||||
|
||||
## Deprecation Warnings
|
||||
|
||||
The first way we are using this new capability is to send warnings for use of deprecated APIs.
|
||||
|
||||
Kubernetes is a [big, fast-moving project](https://www.cncf.io/cncf-kubernetes-project-journey/#development-velocity).
|
||||
Keeping up with the [changes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#changelog-since-v1180)
|
||||
in each release can be daunting, even for people who work on the project full-time. One important type of change is API deprecations.
|
||||
As APIs in Kubernetes graduate to GA versions, pre-release API versions are deprecated and eventually removed.
|
||||
|
||||
Even though there is an [extended deprecation period](/docs/reference/using-api/deprecation-policy/),
|
||||
and deprecations are [included in release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecation),
|
||||
they can still be hard to track. During the deprecation period, the pre-release API remains functional,
|
||||
allowing several releases to transition to the stable API version. However, we have found that users often don't even realize
|
||||
they are depending on a deprecated API version until they upgrade to the release that stops serving it.
|
||||
|
||||
Starting in v1.19, whenever a request is made to a deprecated REST API, a warning is returned along with the API response.
|
||||
This warning includes details about the release in which the API will no longer be available, and the replacement API version.
|
||||
|
||||
Because the warning originates at the server, and is intercepted at the client level, it works for all kubectl commands,
|
||||
including high-level commands like `kubectl apply`, and low-level commands like `kubectl get --raw`:
|
||||
|
||||
<img alt="kubectl applying a manifest file, then displaying a warning message 'networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress'."
|
||||
src="kubectl-warnings.png"
|
||||
style="width:637px;max-width:100%;">
|
||||
|
||||
This helps people affected by the deprecation to know the request they are making is deprecated,
|
||||
how long they have to address the issue, and what API they should use instead.
|
||||
This is especially helpful when the user is applying a manifest they didn't create,
|
||||
so they have time to reach out to the authors to ask for an updated version.
|
||||
|
||||
We also realized that the person *using* a deprecated API is often not the same person responsible for upgrading the cluster,
|
||||
so we added two administrator-facing tools to help track use of deprecated APIs and determine when upgrades are safe.
|
||||
|
||||
### Metrics
|
||||
|
||||
Starting in Kubernetes v1.19, when a request is made to a deprecated REST API endpoint,
|
||||
an `apiserver_requested_deprecated_apis` gauge metric is set to `1` in the kube-apiserver process.
|
||||
This metric has labels for the API `group`, `version`, `resource`, and `subresource`,
|
||||
and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served.
|
||||
|
||||
This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json),
|
||||
and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested
|
||||
from the current instance of the API server:
|
||||
|
||||
```sh
|
||||
kubectl get --raw /metrics | prom2json | jq '
|
||||
.[] | select(.name=="apiserver_requested_deprecated_apis").metrics[].labels
|
||||
'
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```json
|
||||
{
|
||||
"group": "extensions",
|
||||
"removed_release": "1.22",
|
||||
"resource": "ingresses",
|
||||
"subresource": "",
|
||||
"version": "v1beta1"
|
||||
}
|
||||
{
|
||||
"group": "rbac.authorization.k8s.io",
|
||||
"removed_release": "1.22",
|
||||
"resource": "clusterroles",
|
||||
"subresource": "",
|
||||
"version": "v1beta1"
|
||||
}
|
||||
```
|
||||
|
||||
This shows the deprecated `extensions/v1beta1` Ingress and `rbac.authorization.k8s.io/v1beta1` ClusterRole APIs
|
||||
have been requested on this server, and will be removed in v1.22.
|
||||
|
||||
We can join that information with the `apiserver_request_total` metrics to get more details about the requests being made to these APIs:
|
||||
|
||||
```sh
|
||||
kubectl get --raw /metrics | prom2json | jq '
|
||||
# set $deprecated to a list of deprecated APIs
|
||||
[
|
||||
.[] |
|
||||
select(.name=="apiserver_requested_deprecated_apis").metrics[].labels |
|
||||
{group,version,resource}
|
||||
] as $deprecated
|
||||
|
||||
|
|
||||
|
||||
# select apiserver_request_total metrics which are deprecated
|
||||
.[] | select(.name=="apiserver_request_total").metrics[] |
|
||||
select(.labels | {group,version,resource} as $key | $deprecated | index($key))
|
||||
'
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
```json
|
||||
{
|
||||
"labels": {
|
||||
"code": "0",
|
||||
"component": "apiserver",
|
||||
"contentType": "application/vnd.kubernetes.protobuf;stream=watch",
|
||||
"dry_run": "",
|
||||
"group": "extensions",
|
||||
"resource": "ingresses",
|
||||
"scope": "cluster",
|
||||
"subresource": "",
|
||||
"verb": "WATCH",
|
||||
"version": "v1beta1"
|
||||
},
|
||||
"value": "21"
|
||||
}
|
||||
{
|
||||
"labels": {
|
||||
"code": "200",
|
||||
"component": "apiserver",
|
||||
"contentType": "application/vnd.kubernetes.protobuf",
|
||||
"dry_run": "",
|
||||
"group": "extensions",
|
||||
"resource": "ingresses",
|
||||
"scope": "cluster",
|
||||
"subresource": "",
|
||||
"verb": "LIST",
|
||||
"version": "v1beta1"
|
||||
},
|
||||
"value": "1"
|
||||
}
|
||||
{
|
||||
"labels": {
|
||||
"code": "200",
|
||||
"component": "apiserver",
|
||||
"contentType": "application/json",
|
||||
"dry_run": "",
|
||||
"group": "rbac.authorization.k8s.io",
|
||||
"resource": "clusterroles",
|
||||
"scope": "cluster",
|
||||
"subresource": "",
|
||||
"verb": "LIST",
|
||||
"version": "v1beta1"
|
||||
},
|
||||
"value": "1"
|
||||
}
|
||||
```
|
||||
|
||||
The output shows that only read requests are being made to these APIs, and the most requests have been made to watch the deprecated Ingress API.
|
||||
|
||||
You can also find that information through the following Prometheus query,
|
||||
which returns information about requests made to deprecated APIs which will be removed in v1.22:
|
||||
|
||||
```promql
|
||||
apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource)
|
||||
group_right() apiserver_request_total
|
||||
```
|
||||
|
||||
### Audit annotations
|
||||
|
||||
Metrics are a fast way to check whether deprecated APIs are being used, and at what rate,
|
||||
but they don't include enough information to identify particular clients or API objects.
|
||||
Starting in Kubernetes v1.19, [audit events](/docs/tasks/debug-application-cluster/audit/)
|
||||
for requests to deprecated APIs include an audit annotation of `"k8s.io/deprecated":"true"`.
|
||||
Administrators can use those audit events to identify specific clients or objects that need to be updated.
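If your audit backend writes JSON events to a log file, a filter along these lines can pull out who is making the deprecated requests; the log path and the selected fields are illustrative, and only the `k8s.io/deprecated` annotation comes from this feature:

```sh
jq 'select(.annotations["k8s.io/deprecated"] == "true") |
    {user: .user.username, verb: .verb, requestURI: .requestURI}' \
  /var/log/kubernetes/audit.log
```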
|
||||
|
||||
## Custom Resource Definitions
|
||||
|
||||
Along with the API server ability to warn about deprecated API use, starting in v1.19, a CustomResourceDefinition can indicate a
|
||||
[particular version of the resource it defines is deprecated](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation).
|
||||
When API requests to a deprecated version of a custom resource are made, a warning message is returned, matching the behavior of built-in APIs.
|
||||
|
||||
The author of the CustomResourceDefinition can also customize the warning for each version if they want to.
|
||||
This allows them to give a pointer to a migration guide or other information if needed.
|
||||
|
||||
```yaml
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
  name: crontabs.example.com
|
||||
spec:
|
||||
versions:
|
||||
- name: v1alpha1
|
||||
# This indicates the v1alpha1 version of the custom resource is deprecated.
|
||||
# API requests to this version receive a warning in the server response.
|
||||
deprecated: true
|
||||
# This overrides the default warning returned to clients making v1alpha1 API requests.
|
||||
deprecationWarning: "example.com/v1alpha1 CronTab is deprecated; use example.com/v1 CronTab (see http://example.com/v1alpha1-v1)"
|
||||
...
|
||||
|
||||
- name: v1beta1
|
||||
# This indicates the v1beta1 version of the custom resource is deprecated.
|
||||
# API requests to this version receive a warning in the server response.
|
||||
# A default warning message is returned for this version.
|
||||
deprecated: true
|
||||
...
|
||||
|
||||
- name: v1
|
||||
...
|
||||
```
|
||||
|
||||
## Admission Webhooks
|
||||
|
||||
[Admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers)
|
||||
are the primary way to integrate custom policies or validation with Kubernetes.
|
||||
Starting in v1.19, admission webhooks can [return warning messages](/docs/reference/access-authn-authz/extensible-admission-controllers/#response)
|
||||
that are passed along to the requesting API client. Warnings can be returned with allowed or rejected admission responses.
|
||||
|
||||
As an example, to allow a request but warn about a configuration known not to work well, an admission webhook could send this response:
|
||||
|
||||
```json
|
||||
{
|
||||
"apiVersion": "admission.k8s.io/v1",
|
||||
"kind": "AdmissionReview",
|
||||
"response": {
|
||||
"uid": "<value from request.uid>",
|
||||
"allowed": true,
|
||||
"warnings": [
|
||||
".spec.memory: requests >1GB do not work on Fridays"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If you are implementing a webhook that returns a warning message, here are some tips:
|
||||
|
||||
* Don't include a "Warning:" prefix in the message (that is added by clients on output)
|
||||
* Use warning messages to describe problems the client making the API request should correct or be aware of
|
||||
* Be brief; limit warnings to 120 characters if possible
|
||||
|
||||
There are many ways admission webhooks could use this new feature, and I'm looking forward to seeing what people come up with.
|
||||
Here are a couple ideas to get you started:
|
||||
|
||||
* webhook implementations adding a "complain" mode, where they return warnings instead of rejections,
|
||||
to allow trying out a policy to verify it is working as expected before starting to enforce it (see the sketch after this list)
|
||||
* "lint" or "vet"-style webhooks, inspecting objects and surfacing warnings when best practices are not followed
|
||||
|
||||
## Kubectl strict mode
|
||||
|
||||
If you want to be sure you notice deprecations as soon as possible and get a jump start on addressing them,
|
||||
`kubectl` added a `--warnings-as-errors` option in v1.19. When invoked with this option,
|
||||
`kubectl` treats any warnings it receives from the server as errors and exits with a non-zero exit code:
|
||||
|
||||
<img alt="kubectl applying a manifest file with a --warnings-as-errors flag, displaying a warning message and exiting with a non-zero exit code."
|
||||
src="kubectl-warnings-as-errors.png"
|
||||
style="width:637px;max-width:100%;">
|
||||
|
||||
This could be used in a CI job to apply manifests to a current server,
|
||||
and required to pass with a zero exit code in order for the CI job to succeed.
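For example, a CI step could combine this with a server-side dry run; the manifest path is illustrative:

```sh
# Exits non-zero if the server returns any warnings for these manifests.
kubectl apply -f manifests/ --dry-run=server --warnings-as-errors
```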
|
||||
|
||||
## Future Possibilities
|
||||
|
||||
Now that we have a way to communicate helpful information to users in context,
|
||||
we're already considering other ways we can use this to improve people's experience with Kubernetes.
|
||||
A couple areas we're looking at next are warning about [known problematic values](http://issue.k8s.io/64841#issuecomment-395141013)
|
||||
we cannot reject outright for compatibility reasons, and warning about use of deprecated fields or field values
|
||||
(like selectors using beta os/arch node labels, [deprecated in v1.14](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated)).
|
||||
I'm excited to see progress in this area, continuing to make it easier to use Kubernetes.
|
||||
|
||||
---
|
||||
|
||||
_[Jordan Liggitt](https://twitter.com/liggitt) is a software engineer at Google, and helps lead Kubernetes authentication, authorization, and API efforts._
|
Binary file not shown.
After Width: | Height: | Size: 221 KiB |
Binary file not shown.
After Width: | Height: | Size: 296 KiB |
|
@ -0,0 +1,56 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Introducing Structured Logs'
|
||||
date: 2020-09-04
|
||||
slug: kubernetes-1-19-Introducing-Structured-Logs
|
||||
---
|
||||
|
||||
**Authors:** Marek Siarkowicz (Google), Nathan Beach (Google)
|
||||
|
||||
Logs are an essential aspect of observability and a critical tool for debugging. But Kubernetes logs have traditionally been unstructured strings, making any automated parsing difficult and any downstream processing, analysis, or querying challenging to do reliably.
|
||||
|
||||
In Kubernetes 1.19, we are adding support for structured logs, which natively support (key, value) pairs and object references. We have also updated many logging calls such that over 99% of the logging volume in a typical deployment is now migrated to the structured format.
|
||||
|
||||
To maintain backwards compatibility, structured logs will still be outputted as a string where the string contains representations of those "key"="value" pairs. Starting in alpha in 1.19, logs can also be outputted in JSON format using the `--logging-format=json` flag.
|
||||
|
||||
## Using Structured Logs
|
||||
|
||||
We've added two new methods to the klog library: InfoS and ErrorS. For example, this invocation of InfoS:
|
||||
|
||||
```golang
|
||||
klog.InfoS("Pod status updated", "pod", klog.KObj(pod), "status", status)
|
||||
```
|
||||
|
||||
will result in this log:
|
||||
|
||||
```
|
||||
I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
|
||||
```
|
||||
|
||||
Or, if the --logging-format=json flag is set, it will result in this output:
|
||||
|
||||
```json
|
||||
{
|
||||
"ts": 1580306777.04728,
|
||||
"msg": "Pod status updated",
|
||||
"pod": {
|
||||
"name": "coredns",
|
||||
"namespace": "kube-system"
|
||||
},
|
||||
"status": "ready"
|
||||
}
|
||||
```
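The new `ErrorS` method works the same way, taking the error as its first argument; a minimal sketch (the `updatePodStatus` helper is illustrative):

```golang
if err := updatePodStatus(pod, status); err != nil {
	// The error comes first; key/value pairs follow, just as with InfoS.
	klog.ErrorS(err, "Failed to update pod status", "pod", klog.KObj(pod))
}
```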
|
||||
|
||||
This means downstream logging tools can easily ingest structured logging data instead of using regular expressions to parse unstructured strings. This also makes processing logs easier, querying logs more robust, and analyzing logs much faster.
|
||||
|
||||
With structured logs, all references to Kubernetes objects are structured the same way, so you can filter the output to only the log entries referencing a particular pod. You can also find logs indicating how the scheduler was scheduling the pod, how the pod was created, the health probes of the pod, and all other changes in the lifecycle of the pod.
|
||||
|
||||
Suppose you are debugging an issue with a pod. With structured logs, you can filter to only those log entries referencing the pod of interest, rather than needing to scan through potentially thousands of log lines to find the relevant ones.
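For example, with components running with `--logging-format=json`, a filter along these lines narrows the output to a single pod; the log source and the pod identity are illustrative:

```sh
kubectl logs -n kube-system kube-controller-manager-control-plane \
  | jq 'select(.pod.name == "coredns" and .pod.namespace == "kube-system")'
```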
|
||||
|
||||
Not only are structured logs more useful when manually debugging issues, they also enable richer features like automated pattern recognition within logs or tighter correlation of log and trace data.
|
||||
|
||||
Finally, structured logs can help reduce storage costs for logs because most storage systems are more efficiently able to compress structured key=value data than unstructured strings.
|
||||
|
||||
## Get Involved
|
||||
|
||||
While we have updated over 99% of the log entries by log volume in a typical deployment, there are still thousands of logs to be updated. Pick a file or directory that you would like to improve and [migrate existing log calls to use structured logs](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md). It's a great and easy way to make your first contribution to Kubernetes!
|
|
@ -0,0 +1,118 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "GSoC 2020 - Building operators for cluster addons"
|
||||
date: 2020-09-16
|
||||
slug: gsoc20-building-operators-for-cluster-addons
|
||||
---
|
||||
|
||||
**Author**: Somtochi Onyekwere
|
||||
|
||||
# Introduction
|
||||
|
||||
[Google Summer of Code](https://summerofcode.withgoogle.com/) is a global program that is geared towards introducing students to open source. Students are matched with open-source organizations to work with them for three months during the summer.
|
||||
|
||||
My name is Somtochi Onyekwere from the Federal University of Technology, Owerri (Nigeria) and this year, I was given the opportunity to work with Kubernetes (under the CNCF organization) and this led to an amazing summer spent learning, contributing and interacting with the community.
|
||||
|
||||
Specifically, I worked on the _Cluster Addons: Package all the things!_ project. The project focused on building operators for better management of various cluster addons, extending the tooling for building these operators and making the creation of these operators a smooth process.
|
||||
|
||||
# Background
|
||||
|
||||
Kubernetes has progressed greatly in the past few years with a flourishing community and a large number of contributors. The codebase is gradually moving away from the monolith structure where all the code resides in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository to being split into multiple sub-projects. Part of the focus of cluster-addons is to make some of these sub-projects work together in an easy to assemble, self-monitoring, self-healing and Kubernetes-native way. It enables them to work seamlessly without human intervention.
|
||||
|
||||
The community is exploring the use of operators as a mechanism to monitor various resources in the cluster and properly manage these resources. In addition, operators provide self-healing and are a Kubernetes-native pattern that can encode how these addons work best and manage them properly.
|
||||
|
||||
What are cluster addons? Cluster addons are a collection of resources (like Services and Deployments) that are used to give a Kubernetes cluster additional functionality. They range from things as simple as the Kubernetes Dashboard (for visualization) to more complex ones like Calico (for networking). These addons are essential to different applications running in the cluster and to the cluster itself. The addon operator provides a nicer way of managing these addons and understanding the health and status of the various resources that comprise the addon. You can get a deeper overview in this [article](https://kubernetes.io/docs/concepts/overview/components/#addons).
|
||||
|
||||
Operators are custom controllers with custom resource definitions that encode application-specific knowledge and are used for managing complex stateful applications. It is a widely accepted pattern. Managing addons via operators, with these operators encoding knowledge of how best the addons work, introduces a lot of advantages while setting standards that will be easy to follow and scale. This [article](https://kubernetes.io/docs/concepts/extend-kubernetes/operator) does a good job of explaining operators.
|
||||
|
||||
The addon operators can solve a lot of problems, but they have their challenges. Those under the [cluster-addons project](https://github.com/kubernetes-sigs/cluster-addons) had missing pieces and were still a proof of concept. Generating the RBAC configuration for the operators was a pain and sometimes the operators were given too much privilege. The operators weren’t very extensible, as they only pulled manifests from local filesystems or HTTP(S) servers, and a lot of simple addons were generating the same code.
|
||||
I spent the summer working on these issues, looking at them with fresh eyes and coming up with solutions for both the known and unknown issues.
|
||||
|
||||
# Various additions to kubebuilder-declarative-pattern
|
||||
|
||||
The [kubebuilder-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern) (from here on referred to as KDP) repo is an extra layer of addon specific tooling on top of the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) SDK that is enabled by passing the experimental `--pattern=addon` flag to `kubebuilder create` command. Together, they create the base code for the addon operator. During the internship, I worked on a couple of features in KDP and cluster-addons.
|
||||
|
||||
## Operator version checking
|
||||
Enabling version checks for operators makes upgrades and downgrades to different versions of an addon safer, even when the operator has complex logic. It is a way of matching the version of an addon to the version of the operator that knows how to manage it well. Most addons have different versions, and these versions might need to be managed differently. This feature checks the custom resource for the `addons.k8s.io/min-operator-version` annotation, which states the minimum operator version needed to manage that version of the addon, and compares it against the version of the running operator. If the operator version is below the required minimum, the operator pauses with an error telling the user that the version of the operator is too low. This helps ensure that the correct operator is being used for the addon.
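For example, a custom resource could pin a minimum operator version like this; the addon kind and group are illustrative, while the annotation key comes from the feature itself:

```yaml
apiVersion: addons.x-k8s.io/v1alpha1
kind: Dashboard
metadata:
  name: dashboard-sample
  annotations:
    # The operator pauses with an error if its own version is lower than this.
    addons.k8s.io/min-operator-version: "0.2.0"
```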
|
||||
|
||||
## Git repository for storing the manifests
|
||||
Previously, there was support for only local file directories and HTTPS repositories for storing manifests. Giving creators of addon operators the ability to store manifests in a Git repository enables faster development and version control. When starting the controller, you can pass a flag to specify the location of your channels directory. The channels directory contains the manifests for different versions; the controller pulls the manifest from this directory and applies it to the cluster. During the internship period, I extended this to include Git repositories.
|
||||
|
||||
## Annotations to temporarily disable reconciliation
|
||||
The reconciliation loop that ensures that the desired state matches the actual state prevents modification of objects in the cluster. This makes it hard to experiment or investigate what might be wrong in the cluster as any changes made are promptly reverted. I resolved this by allowing users to place an `addons.k8s.io/ignore` annotation on the resource that they don’t want the controller to reconcile. The controller checks for this annotation and doesn’t reconcile that object. To resume reconciliation, the annotation can be removed from the resource.
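For example, with an illustrative resource kind and name:

```sh
# Pause reconciliation for this object.
kubectl annotate dashboard dashboard-sample addons.k8s.io/ignore=true

# Remove the annotation to resume reconciliation.
kubectl annotate dashboard dashboard-sample addons.k8s.io/ignore-
```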
|
||||
|
||||
## Unstructured support in kubebuilder-declarative-pattern
|
||||
One of the operators that I worked on is a generic controller that could manage more than one cluster addon that did not require extra configuration. To do this, the operator couldn’t use a particular type and needed the kubebuilder-declarative-pattern repo to support using the [unstructured.Unstructured](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured#Unstructured) type. There were various functions in the kubebuilder-declarative-pattern that couldn’t handle this type and returned an error if the object passed in was not of type `addonsv1alpha1.CommonObject`. The functions were modified to handle both `unstructured.Unstructured` and `addonsv1alpha1.CommonObject`.
|
||||
|
||||
# Tools and CLI programs
|
||||
There were also some command-line programs I wrote that could be used to make working with addon operators easier. Most of them have uses outside the addon operators as they try to solve a specific problem that could surface anywhere while working with Kubernetes. I encourage you to [check them out](https://github.com/kubernetes-sigs/cluster-addons/tree/master/tools) when you have the chance!
|
||||
|
||||
## RBAC Generator
|
||||
One of the biggest concerns with the operator was RBAC. You had to manually look through the manifest and add the RBAC rule for each resource, as the operator needs RBAC permissions to create, get, update and delete the resources in the manifest when running in-cluster. Building the [RBAC generator](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/rbac-gen) automated the process of writing the RBAC roles and role bindings. The function of the RBAC generator is simple. It accepts the file name of the manifest as a flag. Then, it parses the manifest, gets the API group and resource name of each resource, and adds them to a role. It outputs the role and role binding to stdout, or to a file if the `--out` flag is passed.
|
||||
|
||||
Additionally, the tool enables you to split the RBAC by separating out the cluster roles in the manifest. This lessens the security concern of an operator being over-privileged, since it would otherwise need all the permissions that the cluster role grants. If you want to apply the cluster role yourself and not give the operator these permissions, you can pass in a `--supervisory` boolean flag so that the generator does not add these permissions to the role. The CLI program resides [here](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/rbac-gen).
|
||||
|
||||
## Kubectl Ownerref
|
||||
It is hard to find out at a glance which objects were created by an addon custom resource. This kubectl plugin alleviates that pain by displaying all the objects in the cluster that a resource has ownerrefs on. You simply pass the kind and the name of the resource as arguments to the program and it checks the cluster for the objects and gives the kind, name, the namespace of such an object. It could be useful to get a general overview of all the objects that the controller is reconciling by passing in the name and kind of custom resource. The CLI program resides [here](https://github.com/kubernetes-sigs/cluster-addons/tree/master/tools/kubectl-ownerref).
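For example, with an illustrative kind and name:

```sh
kubectl ownerref Dashboard dashboard-sample
```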
|
||||
|
||||
# Addon Operators
|
||||
To fully understand addon operators and make changes to how they are created, you have to try creating and using them. Part of the summer was spent building operators for some popular addons like the Kubernetes Dashboard, flannel, NodeLocalDNS and so on. Please check the [cluster-addons](https://github.com/kubernetes-sigs/cluster-addons) repository for the different addon operators. In this section, I will just highlight one that is a little different from the others.
|
||||
|
||||
## Generic Controller
|
||||
The generic controller can be shared between addons that don’t require much configuration. This minimizes resource consumption on the cluster, as it reduces the number of controllers that need to be run. Also, instead of building your own operator, you can just use the generic controller; whenever you feel that your needs have grown and you need a more complex operator, you can always scaffold the code with kubebuilder and continue from where the generic operator stopped. To use the generic controller, you can generate the CustomResourceDefinition (CRD) using the [generic-addon](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/generic-addon/README.md) tool. You pass in the kind, group, and the location of your channels directory (it could be a Git repository too!). The tool generates the CRD, the RBAC manifests, and two custom resources for you.
|
||||
|
||||
The process is as follows:
|
||||
- Create the Generic CRD
|
||||
- Generate all the manifests needed with the [`generic-addon`](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/generic-addon/README.md) tool.
|
||||
|
||||
This tool creates:
|
||||
1. The CRD for your addon
|
||||
2. The RBAC rules for the CustomResourceDefinitions
|
||||
3. The RBAC rules for applying the manifests
|
||||
4. The custom resource for your addon
|
||||
5. A Generic custom resource
|
||||
|
||||
The Generic custom resource looks like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: addons.x-k8s.io/v1alpha1
|
||||
kind: Generic
|
||||
metadata:
|
||||
name: generic-sample
|
||||
spec:
|
||||
objectKind:
|
||||
kind: NodeLocalDNS
|
||||
version: "v1alpha1"
|
||||
group: addons.x-k8s.io
|
||||
channel: "../nodelocaldns/channels"
|
||||
```
|
||||
|
||||
Apply these manifests, making sure to apply the CRD before the CR.
|
||||
Then, run the Generic controller, either on your machine or in-cluster.
|
||||
|
||||
|
||||
If you are interested in building an operator, please check out [this guide](https://github.com/kubernetes-sigs/cluster-addons/blob/master/dashboard/README.md).
|
||||
|
||||
# Relevant Links
|
||||
- [Detailed breakdown of work done during the internship](https://github.com/SomtochiAma/gsoc-2020-meta-k8s)
|
||||
- [Addon Operator (KEP)](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/addons/0035-20190128-addons-via-operators.md)
|
||||
- [Original GSoC Issue](https://github.com/kubernetes-sigs/cluster-addons/issues/39)
|
||||
- [Proposal Submitted for GSoC](https://github.com/SomtochiAma/gsoc-2020-meta-k8s/blob/master/GSoC%202020%20PROPOSAL%20-%20PACKAGE%20ALL%20THINGS.pdf)
|
||||
- [All commits to kubernetes-sigs/cluster-addons](https://github.com/kubernetes-sigs/cluster-addons/commits?author=SomtochiAma)
|
||||
- [All commits to kubernetes-sigs/kubebuilder-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/commits?author=SomtochiAma)
|
||||
|
||||
# Further Work
|
||||
A lot of work was definitely done on the cluster addons during the GSoC period. But we need more people building operators and using them in the cluster. We need wider adoption in the community. Build operators for your favourite addons and tell us how it went and if you had any issues. Check out this [README.md](https://github.com/kubernetes-sigs/cluster-addons/blob/master/dashboard/README.md) to get started.
|
||||
|
||||
# Appreciation
|
||||
I really want to appreciate my mentors [Justin Santa Barbara](https://github.com/justinsb) (Google) and [Leigh Capili](https://github.com/stealthybox) (Weaveworks). My internship was awesome because they were awesome. They set a golden standard for what mentorship should be. They were accessible and always available to clear any confusion. I think what I liked best was that they didn’t just dish out tasks, instead, we had open discussions about what was wrong and what could be improved. They are really the best and I hope I get to work with them again!
|
||||
Also, I want to say a huge thanks to [Lubomir I. Ivanov](https://github.com/neolit123) for reviewing this blog post!
|
||||
|
||||
# Conclusion
|
||||
So far I have learnt a lot about Go, the internals of Kubernetes, and operators. I want to conclude by encouraging people to contribute to open-source (especially Kubernetes :)) regardless of your level of experience. It has been a well-rounded experience for me and I have come to love the community. It is a great initiative and it is a great way to learn and meet awesome people. Special shoutout to Google for organizing this program.
|
||||
|
||||
If you are interested in cluster addons and finding out more about addon operators, you are welcome to join the [#cluster-addons](https://kubernetes.slack.com/messages/cluster-addons) channel on the Kubernetes Slack.
|
||||
|
||||
---
|
||||
|
||||
_[Somtochi Onyekwere](https://twitter.com/SomtochiAma) is a software engineer that loves contributing to open-source and exploring cloud native solutions._
|
|
@ -23,7 +23,7 @@ mechanism that allows different cloud providers to integrate their platforms wit
|
|||
|
||||
## Design
|
||||
|
||||

|
||||

|
||||
|
||||
The cloud controller manager runs in the control plane as a replicated set of processes
|
||||
(usually, these are containers in Pods). Each cloud-controller-manager implements
|
||||
|
|
|
@ -21,7 +21,7 @@ This document catalogs the communication paths between the control plane (really
|
|||
Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminate at the apiserver (none of the other control plane components are designed to expose remote services). The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
|
||||
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed.
|
||||
|
||||
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. For example, on a default GKE deployment, the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
|
||||
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
|
||||
|
||||
Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated.
|
||||
The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
|
||||
|
|
|
@ -140,7 +140,7 @@ the {{< glossary_tooltip term_id="kube-controller-manager" >}}. These
|
|||
built-in controllers provide important core behaviors.
|
||||
|
||||
The Deployment controller and Job controller are examples of controllers that
|
||||
come as part of Kubernetes itself (“built-in” controllers).
|
||||
come as part of Kubernetes itself ("built-in" controllers).
|
||||
Kubernetes lets you run a resilient control plane, so that if any of the built-in
|
||||
controllers were to fail, another part of the control plane will take over the work.
|
||||
|
||||
|
|
|
@ -39,7 +39,7 @@ Before choosing a guide, here are some considerations:
|
|||
|
||||
## Managing a cluster
|
||||
|
||||
* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster’s master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.
|
||||
* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster's master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.
|
||||
|
||||
* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
|
||||
|
||||
|
|
|
@ -5,12 +5,12 @@ content_type: concept
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
{{% thirdparty-content %}}
|
||||
|
||||
Add-ons extend the functionality of Kubernetes.
|
||||
|
||||
This page lists some of the available add-ons and links to their respective installation instructions.
|
||||
|
||||
Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Networking and Network Policy
|
||||
|
|
|
@ -1,362 +0,0 @@
|
|||
---
|
||||
title: Cloud Providers
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
This page explains how to manage Kubernetes running on a specific
|
||||
cloud provider. There are many other third-party cloud provider projects, but this list is specific to projects embedded within, or relied upon by Kubernetes itself.
|
||||
|
||||
<!-- body -->
|
||||
### kubeadm
|
||||
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating Kubernetes clusters.
|
||||
kubeadm has configuration options to specify configuration information for cloud providers. For example, a typical
|
||||
in-tree cloud provider can be configured using kubeadm as shown below:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta2
|
||||
kind: InitConfiguration
|
||||
nodeRegistration:
|
||||
kubeletExtraArgs:
|
||||
cloud-provider: "openstack"
|
||||
cloud-config: "/etc/kubernetes/cloud.conf"
|
||||
---
|
||||
apiVersion: kubeadm.k8s.io/v1beta2
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: v1.13.0
|
||||
apiServer:
|
||||
extraArgs:
|
||||
cloud-provider: "openstack"
|
||||
cloud-config: "/etc/kubernetes/cloud.conf"
|
||||
extraVolumes:
|
||||
- name: cloud
|
||||
hostPath: "/etc/kubernetes/cloud.conf"
|
||||
mountPath: "/etc/kubernetes/cloud.conf"
|
||||
controllerManager:
|
||||
extraArgs:
|
||||
cloud-provider: "openstack"
|
||||
cloud-config: "/etc/kubernetes/cloud.conf"
|
||||
extraVolumes:
|
||||
- name: cloud
|
||||
hostPath: "/etc/kubernetes/cloud.conf"
|
||||
mountPath: "/etc/kubernetes/cloud.conf"
|
||||
```
|
||||
|
||||
The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines
|
||||
for the [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/),
|
||||
[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) and the
|
||||
[kubelet](/docs/reference/command-line-tools-reference/kubelet/).
|
||||
The contents of the file specified in `--cloud-config` for each provider is documented below as well.
|
||||
|
||||
For all external cloud providers, please follow the instructions on the individual repositories,
|
||||
which are listed under their headings below, or one may view [the list of all repositories](https://github.com/kubernetes?q=cloud-provider-&type=&language=)
|
||||
|
||||
## AWS
|
||||
This section describes all the possible configurations which can
|
||||
be used when running Kubernetes on Amazon Web Services.
|
||||
|
||||
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-aws](https://github.com/kubernetes/cloud-provider-aws#readme)
|
||||
|
||||
### Node Name
|
||||
|
||||
The AWS cloud provider uses the private DNS name of the AWS instance as the name of the Kubernetes Node object.
|
||||
|
||||
### Load Balancers
|
||||
You can setup [external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/)
|
||||
to use specific features in AWS by configuring the annotations as shown below.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: example
|
||||
namespace: kube-system
|
||||
labels:
|
||||
run: example
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx #replace this value
|
||||
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 443
|
||||
targetPort: 5556
|
||||
protocol: TCP
|
||||
selector:
|
||||
app: example
|
||||
```
|
||||
Different settings can be applied to a load balancer service in AWS using _annotations_. The following describes the annotations supported on AWS ELBs:
|
||||
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`: Used to specify access log emit interval.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`: Used on the service to enable or disable access logs.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`: Used to specify access log s3 bucket name.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`: Used to specify access log s3 bucket prefix.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags`: Used on the service to specify a comma-separated list of key-value pairs which will be recorded as additional tags in the ELB. For example: `"Key1=Val1,Key2=Val2,KeyNoVal1=,KeyNoVal2"`.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If `http` (default) or `https`, an HTTPS listener that terminates the connection and parses headers is created. If set to `ssl` or `tcp`, a "raw" SSL listener is used. If set to `http` and `aws-load-balancer-ssl-cert` is not used then an HTTP listener is used.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html). CertARN is an IAM or CM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`: Used on the service to enable or disable connection draining.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout`: Used on the service to specify a connection draining timeout.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`: Used on the service to specify the idle connection timeout.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled`: Used on the service to enable or disable cross-zone load balancing.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-security-groups`: Used to specify the security groups to be added to ELB created. This replaces all other security groups previously assigned to the ELB. Security groups defined here should not be shared between services.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-extra-security-groups`: Used on the service to specify additional security groups to be added to ELB created
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-internal`: Used on the service to indicate that we want an internal ELB.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol`: Used on the service to enable the proxy protocol on an ELB. Right now we only accept the value `*` which means enabling the proxy protocol on all ELB backends. In the future we could adjust this to allow setting the proxy protocol only on certain backends.
|
||||
* `service.beta.kubernetes.io/aws-load-balancer-ssl-ports`: Used on the service to specify a comma-separated list of ports that will use SSL/HTTPS listeners. Defaults to `*` (all)
|
||||
|
||||
The information for the annotations for AWS is taken from the comments on [aws.go](https://github.com/kubernetes/legacy-cloud-providers/blob/master/aws/aws.go)
|
||||
|
||||
## Azure
|
||||
|
||||
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-azure](https://github.com/kubernetes/cloud-provider-azure#readme)
|
||||
|
||||
### Node Name
|
||||
|
||||
The Azure cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
|
||||
Note that the Kubernetes Node name must match the Azure VM name.
|
||||
|
||||
## GCE
|
||||
|
||||
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-gcp](https://github.com/kubernetes/cloud-provider-gcp#readme)
|
||||
|
||||
### Node Name
|
||||
|
||||
The GCE cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
|
||||
Note that the first segment of the Kubernetes Node name must match the GCE instance name (e.g. a Node named `kubernetes-node-2.c.my-proj.internal` must correspond to an instance named `kubernetes-node-2`).
|
||||
|
||||
## HUAWEI CLOUD
|
||||
|
||||
If you wish to use the external cloud provider, its repository is [kubernetes-sigs/cloud-provider-huaweicloud](https://github.com/kubernetes-sigs/cloud-provider-huaweicloud).
|
||||
|
||||
## OpenStack
|
||||
This section describes all the possible configurations which can
|
||||
be used when using OpenStack with Kubernetes.
|
||||
|
||||
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-openstack](https://github.com/kubernetes/cloud-provider-openstack#readme)
|
||||
|
||||
### Node Name
|
||||
|
||||
The OpenStack cloud provider uses the instance name (as determined from OpenStack metadata) as the name of the Kubernetes Node object.
|
||||
Note that the instance name must be a valid Kubernetes Node name in order for the kubelet to successfully register its Node object.
|
||||
|
||||
### Services
|
||||
|
||||
The OpenStack cloud provider
|
||||
implementation for Kubernetes supports the use of these OpenStack services from
|
||||
the underlying cloud, where available:
|
||||
|
||||
| Service | API Version(s) | Required |
|
||||
|--------------------------|----------------|----------|
|
||||
| Block Storage (Cinder) | V1†, V2, V3 | No |
|
||||
| Compute (Nova) | V2 | No |
|
||||
| Identity (Keystone) | V2‡, V3 | Yes |
|
||||
| Load Balancing (Neutron) | V1§, V2 | No |
|
||||
| Load Balancing (Octavia) | V2 | No |
|
||||
|
||||
† Block Storage V1 API support is deprecated, Block Storage V3 API support was
|
||||
added in Kubernetes 1.9.
|
||||
|
||||
‡ Identity V2 API support is deprecated and will be removed from the provider in
|
||||
a future release. As of the "Queens" release, OpenStack will no longer expose the
|
||||
Identity V2 API.
|
||||
|
||||
§ Load Balancing V1 API support was removed in Kubernetes 1.9.
|
||||
|
||||
Service discovery is achieved by listing the service catalog managed by
|
||||
OpenStack Identity (Keystone) using the `auth-url` provided in the provider
|
||||
configuration. The provider will gracefully degrade in functionality when
|
||||
OpenStack services other than Keystone are not available and simply disclaim
|
||||
support for impacted features. Certain features are also enabled or disabled
|
||||
based on the list of extensions published by Neutron in the underlying cloud.
|
||||
|
||||
### cloud.conf
|
||||
Kubernetes knows how to interact with OpenStack via the file cloud.conf. It is
|
||||
the file that will provide Kubernetes with credentials and location for the OpenStack auth endpoint.
|
||||
You can create a cloud.conf file by specifying the following details in it.
|
||||
|
||||
#### Typical configuration
|
||||
This is an example of a typical configuration that touches the values that most
|
||||
often need to be set. It points the provider at the OpenStack cloud's Keystone
|
||||
endpoint, provides details for how to authenticate with it, and configures the
|
||||
load balancer:
|
||||
|
||||
```yaml
|
||||
[Global]
|
||||
username=user
|
||||
password=pass
|
||||
auth-url=https://<keystone_ip>/identity/v3
|
||||
tenant-id=c869168a828847f39f7f06edd7305637
|
||||
domain-id=2a73b8f597c04551a0fdc8e95544be8a
|
||||
|
||||
[LoadBalancer]
|
||||
subnet-id=6937f8fa-858d-4bc9-a3a5-18d2c957166a
|
||||
```
|
||||
|
||||
##### Global
|
||||
These configuration options for the OpenStack provider pertain to its global
|
||||
configuration and should appear in the `[Global]` section of the `cloud.conf`
|
||||
file:
|
||||
|
||||
* `auth-url` (Required): The URL of the keystone API used to authenticate. On
|
||||
OpenStack control panels, this can be found at Access and Security > API
|
||||
Access > Credentials.
|
||||
* `username` (Required): Refers to the username of a valid user set in keystone.
|
||||
* `password` (Required): Refers to the password of a valid user set in keystone.
|
||||
* `tenant-id` (Required): Used to specify the id of the project where you want
|
||||
to create your resources.
|
||||
* `tenant-name` (Optional): Used to specify the name of the project where you
|
||||
want to create your resources.
|
||||
* `trust-id` (Optional): Used to specify the identifier of the trust to use for
|
||||
authorization. A trust represents a user's (the trustor) authorization to
|
||||
delegate roles to another user (the trustee), and optionally allow the trustee
|
||||
to impersonate the trustor. Available trusts are found under the
|
||||
`/v3/OS-TRUST/trusts` endpoint of the Keystone API.
|
||||
* `domain-id` (Optional): Used to specify the id of the domain your user belongs
|
||||
to.
|
||||
* `domain-name` (Optional): Used to specify the name of the domain your user
|
||||
belongs to.
|
||||
* `region` (Optional): Used to specify the identifier of the region to use when
|
||||
running on a multi-region OpenStack cloud. A region is a general division of
|
||||
an OpenStack deployment. Although a region does not have a strict geographical
|
||||
connotation, a deployment can use a geographical name for a region identifier
|
||||
such as `us-east`. Available regions are found under the `/v3/regions`
|
||||
endpoint of the Keystone API.
|
||||
* `ca-file` (Optional): Used to specify the path to your custom CA file.
|
||||
|
||||
|
||||
When using Keystone V3 - which changes tenant to project - the `tenant-id` value
|
||||
is automatically mapped to the project construct in the API.
|
||||
|
||||
##### Load Balancer
|
||||
These configuration options for the OpenStack provider pertain to the load
|
||||
balancer and should appear in the `[LoadBalancer]` section of the `cloud.conf`
|
||||
file:
|
||||
|
||||
* `lb-version` (Optional): Used to override automatic version detection. Valid
|
||||
values are `v1` or `v2`. Where no value is provided automatic detection will
|
||||
select the highest supported version exposed by the underlying OpenStack
|
||||
cloud.
|
||||
* `use-octavia` (Optional): Whether or not to use Octavia for the LoadBalancer type
|
||||
of Service implementation instead of using Neutron-LBaaS. Default: true
|
||||
Attention: the OpenStack CCM uses Octavia as the default load balancer implementation since v1.17.0.
|
||||
* `subnet-id` (Optional): Used to specify the id of the subnet you want to
|
||||
create your loadbalancer on. Can be found at Network > Networks. Click on the
|
||||
respective network to get its subnets.
|
||||
* `floating-network-id` (Optional): If specified, will create a floating IP for
|
||||
the load balancer.
|
||||
* `lb-method` (Optional): Used to specify an algorithm by which load will be
|
||||
distributed amongst members of the load balancer pool. The value can be
|
||||
`ROUND_ROBIN`, `LEAST_CONNECTIONS`, or `SOURCE_IP`. The default behavior if
|
||||
none is specified is `ROUND_ROBIN`.
|
||||
* `lb-provider` (Optional): Used to specify the provider of the load balancer.
|
||||
If not specified, the default provider service configured in neutron will be
|
||||
used.
|
||||
* `create-monitor` (Optional): Indicates whether or not to create a health
|
||||
monitor for the Neutron load balancer. Valid values are `true` and `false`.
|
||||
The default is `false`. When `true` is specified then `monitor-delay`,
|
||||
`monitor-timeout`, and `monitor-max-retries` must also be set.
|
||||
* `monitor-delay` (Optional): The time between sending probes to
|
||||
members of the load balancer. Ensure that you specify a valid time unit. The valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h"
|
||||
* `monitor-timeout` (Optional): Maximum time for a monitor to wait
|
||||
for a ping reply before it times out. The value must be less than the delay
|
||||
value. Ensure that you specify a valid time unit. The valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h"
|
||||
* `monitor-max-retries` (Optional): Number of permissible ping failures before
|
||||
changing the load balancer member's status to INACTIVE. Must be a number
|
||||
between 1 and 10.
|
||||
* `manage-security-groups` (Optional): Determines whether or not the load
|
||||
balancer should automatically manage the security group rules. Valid values
|
||||
are `true` and `false`. The default is `false`. When `true` is specified
|
||||
`node-security-group` must also be supplied.
|
||||
* `node-security-group` (Optional): ID of the security group to manage.
|
||||
|
||||
##### Block Storage
|
||||
These configuration options for the OpenStack provider pertain to block storage
|
||||
and should appear in the `[BlockStorage]` section of the `cloud.conf` file:
|
||||
|
||||
* `bs-version` (Optional): Used to override automatic version detection. Valid
|
||||
values are `v1`, `v2`, `v3` and `auto`. When `auto` is specified automatic
|
||||
detection will select the highest supported version exposed by the underlying
|
||||
OpenStack cloud. The default value if none is provided is `auto`.
|
||||
* `trust-device-path` (Optional): In most scenarios the block device names
|
||||
provided by Cinder (e.g. `/dev/vda`) can not be trusted. This boolean toggles
|
||||
this behavior. Setting it to `true` results in trusting the block device names
|
||||
provided by Cinder. The default value of `false` results in the discovery of
|
||||
the device path based on its serial number and `/dev/disk/by-id` mapping and is
|
||||
the recommended approach.
|
||||
* `ignore-volume-az` (Optional): Used to influence availability zone use when
|
||||
attaching Cinder volumes. When Nova and Cinder have different availability
|
||||
zones, this should be set to `true`. This is most commonly the case where
|
||||
there are many Nova availability zones but only one Cinder availability zone.
|
||||
The default value is `false` to preserve the behavior used in earlier
|
||||
releases, but may change in the future.
|
||||
* `node-volume-attach-limit` (Optional): Maximum number of Volumes that can be
|
||||
attached to the node, default is 256 for cinder.
|
||||
|
||||
If deploying Kubernetes versions <= 1.8 on an OpenStack deployment that uses
|
||||
paths rather than ports to differentiate between endpoints it may be necessary
|
||||
to explicitly set the `bs-version` parameter. A path based endpoint is of the
|
||||
form `http://foo.bar/volume` while a port based endpoint is of the form
|
||||
`http://foo.bar:xxx`.
|
||||
|
||||
In environments that use path based endpoints and Kubernetes is using the older
|
||||
auto-detection logic, a `BS API version autodetection failed.` error will be
|
||||
returned when attempting volume detachment. To work around this issue, it is
|
||||
possible to force the use of Cinder API version 2 by adding this to the cloud
|
||||
provider configuration:
|
||||
|
||||
```yaml
|
||||
[BlockStorage]
|
||||
bs-version=v2
|
||||
```
|
||||
|
||||
##### Metadata
|
||||
These configuration options for the OpenStack provider pertain to metadata and
|
||||
should appear in the `[Metadata]` section of the `cloud.conf` file:
|
||||
|
||||
* `search-order` (Optional): This configuration key influences the way that the
|
||||
provider retrieves metadata relating to the instance(s) in which it runs. The
|
||||
default value of `configDrive,metadataService` results in the provider
|
||||
retrieving metadata relating to the instance from the config drive first if
|
||||
available and then the metadata service. Alternative values are:
|
||||
* `configDrive` - Only retrieve instance metadata from the configuration
|
||||
drive.
|
||||
* `metadataService` - Only retrieve instance metadata from the metadata
|
||||
service.
|
||||
* `metadataService,configDrive` - Retrieve instance metadata from the metadata
|
||||
service first if available, then the configuration drive.
|
||||
|
||||
Influencing this behavior may be desirable as the metadata on the
|
||||
configuration drive may grow stale over time, whereas the metadata service
|
||||
always provides the most up-to-date view. Not all OpenStack clouds provide
|
||||
both a configuration drive and a metadata service, though, and only one or the other
|
||||
may be available, which is why the default is to check both.
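For example, a `[Metadata]` section that prefers the metadata service over the config drive (shown only as an illustration, not as a recommended default) would be:

```yaml
[Metadata]
search-order=metadataService,configDrive
```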
|
||||
|
||||
##### Route
|
||||
|
||||
These configuration options for the OpenStack provider pertain to the [kubenet]
|
||||
Kubernetes network plugin and should appear in the `[Route]` section of the
|
||||
`cloud.conf` file:
|
||||
|
||||
* `router-id` (Optional): If the underlying cloud's Neutron deployment supports
|
||||
the `extraroutes` extension, use `router-id` to specify a router to add
|
||||
routes to. The router chosen must span the private networks containing your
|
||||
cluster nodes (typically there is only one node network, and this value should be
|
||||
the default router for the node network). This value is required to use
|
||||
[kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet)
|
||||
on OpenStack.
|
||||
|
||||
[kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet
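The `[Route]` section holds only this one setting; as an illustration (the UUID below is a placeholder for a real router ID):

```yaml
[Route]
router-id=1de3177c-4b8d-44f9-9a1a-2b6a302e5e3e
```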
|
||||
|
||||
## vSphere
|
||||
|
||||
{{< tabs name="vSphere cloud provider" >}}
|
||||
{{% tab name="vSphere >= 6.7U3" %}}
|
||||
For all deployments on vSphere >= 6.7U3, the [external vSphere cloud provider](https://github.com/kubernetes/cloud-provider-vsphere), along with the [vSphere CSI driver](https://github.com/kubernetes-sigs/vsphere-csi-driver), is recommended. See [Deploying a Kubernetes Cluster on vSphere with CSI and CPI](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html) for a quick start guide.
|
||||
{{% /tab %}}
|
||||
{{% tab name="vSphere < 6.7U3" %}}
|
||||
If you are running vSphere < 6.7U3, the in-tree vSphere cloud provider is recommended. See [Running a Kubernetes Cluster on vSphere with kubeadm](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/k8s-vcp-on-vsphere-with-kubeadm.html) for a quick start guide.
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
For in-depth documentation on the vSphere cloud provider, visit the [vSphere cloud provider docs site](https://cloud-provider-vsphere.sigs.k8s.io).
|
|
@ -495,7 +495,7 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
|
|||
system, system-nodes, 12, 0, system:node:127.0.0.1, 2020-07-23T15:26:57.179170694Z,
|
||||
```
|
||||
|
||||
In addition to the queued requests, the output includeas one phantom line for each priority level that is exempt from limitation.
|
||||
In addition to the queued requests, the output includes one phantom line for each priority level that is exempt from limitation.
|
||||
|
||||
You can get a more detailed listing with a command like this:
|
||||
```shell
|
||||
|
|
|
@ -79,6 +79,8 @@ as an introduction to various technologies and serves as a jumping-off point.
|
|||
The following networking options are sorted alphabetically - the order does not
|
||||
imply any preferential status.
|
||||
|
||||
{{% thirdparty-content %}}
|
||||
|
||||
### ACI
|
||||
|
||||
[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers. [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI. An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf).
|
||||
|
@ -314,4 +316,3 @@ to run, and in both cases, the network provides one IP address per pod - as is s
|
|||
The early design of the networking model and its rationale, and some future
|
||||
plans are described in more detail in the
|
||||
[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
|
||||
|
||||
|
|
|
@ -16,7 +16,6 @@ or use additional (third party) tools to keep your data private.
|
|||
{{< /caution >}}
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
## Motivation
|
||||
|
||||
|
@ -24,25 +23,38 @@ Use a ConfigMap for setting configuration data separately from application code.
|
|||
|
||||
For example, imagine that you are developing an application that you can run on your
|
||||
own computer (for development) and in the cloud (to handle real traffic).
|
||||
You write the code to
|
||||
look in an environment variable named `DATABASE_HOST`. Locally, you set that variable
|
||||
to `localhost`. In the cloud, you set it to refer to a Kubernetes
|
||||
{{< glossary_tooltip text="Service" term_id="service" >}} that exposes the database
|
||||
component to your cluster.
|
||||
|
||||
You write the code to look in an environment variable named `DATABASE_HOST`.
|
||||
Locally, you set that variable to `localhost`. In the cloud, you set it to
|
||||
refer to a Kubernetes {{< glossary_tooltip text="Service" term_id="service" >}}
|
||||
that exposes the database component to your cluster.
|
||||
This lets you fetch a container image running in the cloud and
|
||||
debug the exact same code locally if needed.
|
||||
|
||||
A ConfigMap is not designed to hold large chunks of data. The data stored in a
|
||||
ConfigMap cannot exceed 1 MiB. If you need to store settings that are
|
||||
larger than this limit, you may want to consider mounting a volume or using a
|
||||
separate database or file service.
|
||||
|
||||
## ConfigMap object
|
||||
|
||||
A ConfigMap is an API [object](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
|
||||
that lets you store configuration for other objects to use. Unlike most
|
||||
Kubernetes objects that have a `spec`, a ConfigMap has a `data` section to
|
||||
store items (keys) and their values.
|
||||
Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
|
||||
fields. These fields accept key-value pairs as their values. Both the `data`
|
||||
field and the `binaryData` field are optional. The `data` field is designed to
|
||||
contain UTF-8 byte sequences, while the `binaryData` field is designed to
|
||||
contain binary data.
|
||||
|
||||
The name of a ConfigMap must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
Each key under the `data` or the `binaryData` field must consist of
|
||||
alphanumeric characters, `-`, `_` or `.`. The keys stored in `data` must not
|
||||
overlap with the keys in the `binaryData` field.
|
||||
|
||||
Starting from v1.19, you can add an `immutable` field to a ConfigMap
|
||||
definition to create an [immutable ConfigMap](#configmap-immutable).
|
||||
|
||||
## ConfigMaps and Pods
|
||||
|
||||
You can write a Pod `spec` that refers to a ConfigMap and configures the container(s)
|
||||
|
@ -62,7 +74,7 @@ data:
|
|||
# property-like keys; each key maps to a simple value
|
||||
player_initial_lives: "3"
|
||||
ui_properties_file_name: "user-interface.properties"
|
||||
#
|
||||
|
||||
# file-like keys
|
||||
game.properties: |
|
||||
enemy.types=aliens,monsters
|
||||
|
@ -94,6 +106,7 @@ when that happens. By accessing the Kubernetes API directly, this
|
|||
technique also lets you access a ConfigMap in a different namespace.
|
||||
|
||||
Here's an example of a Pod that uses values from `game-demo` to configure its containers:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
|
@ -134,7 +147,6 @@ spec:
|
|||
path: "user-interface.properties"
|
||||
```
|
||||
|
||||
|
||||
A ConfigMap doesn't differentiate between single-line property values and
|
||||
multi-line file-like values.
|
||||
What matters is how Pods and other objects consume those values.
|
||||
|
@ -153,7 +165,6 @@ ConfigMaps can be mounted as data volumes. ConfigMaps can also be used by other
|
|||
parts of the system, without being directly exposed to the Pod. For example,
|
||||
ConfigMaps can hold data that other parts of the system should use for configuration.
|
||||
|
||||
{{< note >}}
|
||||
The most common way to use ConfigMaps is to configure settings for
|
||||
containers running in a Pod in the same namespace. You can also use a
|
||||
ConfigMap separately.
|
||||
|
@ -162,16 +173,23 @@ For example, you
|
|||
might encounter {{< glossary_tooltip text="addons" term_id="addons" >}}
|
||||
or {{< glossary_tooltip text="operators" term_id="operator-pattern" >}} that
|
||||
adjust their behavior based on a ConfigMap.
|
||||
{{< /note >}}
|
||||
|
||||
### Using ConfigMaps as files from a Pod
|
||||
|
||||
To consume a ConfigMap in a volume in a Pod:
|
||||
|
||||
1. Create a config map or use an existing one. Multiple Pods can reference the same config map.
|
||||
1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and have a `.spec.volumes[].configMap.name` field set to reference your ConfigMap object.
|
||||
1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the config map. Specify `.spec.containers[].volumeMounts[].readOnly = true` and `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the config map to appear.
|
||||
1. Modify your image or command line so that the program looks for files in that directory. Each key in the config map `data` map becomes the filename under `mountPath`.
|
||||
1. Create a ConfigMap or use an existing one. Multiple Pods can reference the
|
||||
same ConfigMap.
|
||||
1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name
|
||||
the volume anything, and have a `.spec.volumes[].configMap.name` field set
|
||||
to reference your ConfigMap object.
|
||||
1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the
|
||||
ConfigMap. Specify `.spec.containers[].volumeMounts[].readOnly = true` and
|
||||
`.spec.containers[].volumeMounts[].mountPath` to an unused directory name
|
||||
where you would like the ConfigMap to appear.
|
||||
1. Modify your image or command line so that the program looks for files in
|
||||
that directory. Each key in the ConfigMap `data` map becomes the filename
|
||||
under `mountPath`.
|
||||
|
||||
This is an example of a Pod that mounts a ConfigMap in a volume:
|
||||
|
||||
|
@ -201,8 +219,8 @@ own `volumeMounts` block, but only one `.spec.volumes` is needed per ConfigMap.
|
|||
|
||||
#### Mounted ConfigMaps are updated automatically
|
||||
|
||||
When a config map currently consumed in a volume is updated, projected keys are eventually updated as well.
|
||||
The kubelet checks whether the mounted config map is fresh on every periodic sync.
|
||||
When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well.
|
||||
The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync.
|
||||
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
|
||||
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
|
||||
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
|
||||
|
@ -213,6 +231,9 @@ when new keys are projected to the Pod can be as long as the kubelet sync period
|
|||
propagation delay, where the cache propagation delay depends on the chosen cache type
|
||||
(it equals the watch propagation delay, the cache TTL, or zero, respectively).
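For illustration, selecting the watch-based strategy in a kubelet configuration file might look like the following sketch (check the KubeletConfiguration reference for the values supported by your Kubernetes version):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
configMapAndSecretChangeDetectionStrategy: Watch
```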
|
||||
|
||||
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
|
||||
## Immutable ConfigMaps {#configmap-immutable}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
|
||||
|
||||
The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
|
||||
|
@ -222,11 +243,13 @@ data has the following advantages:
|
|||
|
||||
- protects you from accidental (or unwanted) updates that could cause application outages
|
||||
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
|
||||
closing watches for config maps marked as immutable.
|
||||
closing watches for ConfigMaps marked as immutable.
|
||||
|
||||
This feature is controlled by the `ImmutableEphemeralVolumes`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
You can create an immutable ConfigMap by setting the `immutable` field to `true`.
|
||||
For example:
|
||||
|
||||
To use this feature, enable the `ImmutableEphemeralVolumes`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set
|
||||
your Secret or ConfigMap `immutable` field to `true`. For example:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
|
@ -237,17 +260,13 @@ data:
|
|||
immutable: true
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Once a ConfigMap or Secret is marked as immutable, it is _not_ possible to revert this change
|
||||
nor to mutate the contents of the `data` field. You can only delete and recreate the ConfigMap.
|
||||
Existing Pods maintain a mount point to the deleted ConfigMap - it is recommended to recreate
|
||||
these pods.
|
||||
{{< /note >}}
|
||||
|
||||
Once a ConfigMap is marked as immutable, it is _not_ possible to revert this change
|
||||
nor to mutate the contents of the `data` or the `binaryData` field. You can
|
||||
only delete and recreate the ConfigMap. Because existing Pods maintain a mount point
|
||||
to the deleted ConfigMap, it is recommended to recreate these pods.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Read about [Secrets](/docs/concepts/configuration/secret/).
|
||||
* Read [Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/).
|
||||
* Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for
|
||||
|
|
|
@ -134,7 +134,6 @@ spec:
|
|||
containers:
|
||||
- name: app
|
||||
image: images.my-company.example/app:v4
|
||||
env:
|
||||
resources:
|
||||
requests:
|
||||
memory: "64Mi"
|
||||
|
|
|
@ -359,7 +359,7 @@ The only component that considers both QoS and Pod priority is
|
|||
[kubelet out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
|
||||
The kubelet ranks Pods for eviction first by whether or not their usage of the
|
||||
starved resource exceeds requests, then by Priority, and then by the consumption
|
||||
of the starved compute resource relative to the Pods’ scheduling requests.
|
||||
of the starved compute resource relative to the Pods' scheduling requests.
|
||||
See
|
||||
[evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
|
||||
for more details.
|
||||
|
|
|
@ -37,6 +37,12 @@ its containers.
|
|||
- As [container environment variable](#using-secrets-as-environment-variables).
|
||||
- By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod.
|
||||
|
||||
The name of a Secret object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
The keys of `data` and `stringData` must consist of alphanumeric characters,
|
||||
`-`, `_` or `.`.
|
||||
|
||||
### Built-in Secrets
|
||||
|
||||
#### Service accounts automatically create and attach Secrets with API credentials
|
||||
|
@ -52,401 +58,15 @@ this is the recommended workflow.
|
|||
See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/)
|
||||
documentation for more information on how service accounts work.
|
||||
|
||||
### Creating your own Secrets
|
||||
### Creating a Secret
|
||||
|
||||
#### Creating a Secret Using `kubectl`
|
||||
There are several options to create a Secret:
|
||||
|
||||
Secrets can contain user credentials required by Pods to access a database.
|
||||
For example, a database connection string
|
||||
consists of a username and password. You can store the username in a file `./username.txt`
|
||||
and the password in a file `./password.txt` on your local machine.
|
||||
- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
|
||||
- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
|
||||
- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
|
||||
|
||||
```shell
|
||||
# Create files needed for the rest of the example.
|
||||
echo -n 'admin' > ./username.txt
|
||||
echo -n '1f2d1e2e67df' > ./password.txt
|
||||
```
|
||||
|
||||
The `kubectl create secret` command packages these files into a Secret and creates
|
||||
the object on the API server.
|
||||
The name of a Secret object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
```shell
|
||||
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
secret "db-user-pass" created
|
||||
```
|
||||
|
||||
The default key name is the filename. You may optionally set the key name using `[--from-file=[key=]source]`.
|
||||
|
||||
```shell
|
||||
kubectl create secret generic db-user-pass --from-file=username=./username.txt --from-file=password=./password.txt
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping.
|
||||
In most shells, the easiest way to escape the password is to surround it with single quotes (`'`).
|
||||
For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way:
|
||||
|
||||
```shell
|
||||
kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb='
|
||||
```
|
||||
|
||||
You do not need to escape special characters in passwords from files (`--from-file`).
|
||||
{{< /note >}}
|
||||
|
||||
You can check that the secret was created:
|
||||
|
||||
```shell
|
||||
kubectl get secrets
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
NAME TYPE DATA AGE
|
||||
db-user-pass Opaque 2 51s
|
||||
```
|
||||
|
||||
You can view a description of the secret:
|
||||
|
||||
```shell
|
||||
kubectl describe secrets/db-user-pass
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
Name: db-user-pass
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Type: Opaque
|
||||
|
||||
Data
|
||||
====
|
||||
password.txt: 12 bytes
|
||||
username.txt: 5 bytes
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
The commands `kubectl get` and `kubectl describe` avoid showing the contents of a secret by
|
||||
default. This is to protect the secret from being exposed accidentally to an onlooker,
|
||||
or from being stored in a terminal log.
|
||||
{{< /note >}}
|
||||
|
||||
See [decoding a secret](#decoding-a-secret) to learn how to view the contents of a secret.
|
||||
|
||||
#### Creating a Secret manually
|
||||
|
||||
You can also create a Secret in a file first, in JSON or YAML format,
|
||||
and then create that object.
|
||||
The name of a Secret object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
|
||||
contains two maps:
|
||||
`data` and `stringData`. The `data` field is used to store arbitrary data, encoded using
|
||||
base64. The `stringData` field is provided for convenience, and allows you to provide
|
||||
secret data as unencoded strings.
|
||||
|
||||
For example, to store two strings in a Secret using the `data` field, convert
|
||||
the strings to base64 as follows:
|
||||
|
||||
```shell
|
||||
echo -n 'admin' | base64
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
YWRtaW4=
|
||||
```
|
||||
|
||||
```shell
|
||||
echo -n '1f2d1e2e67df' | base64
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
MWYyZDFlMmU2N2Rm
|
||||
```
|
||||
|
||||
Write a Secret that looks like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: mysecret
|
||||
type: Opaque
|
||||
data:
|
||||
username: YWRtaW4=
|
||||
password: MWYyZDFlMmU2N2Rm
|
||||
```
|
||||
|
||||
Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply):
|
||||
|
||||
```shell
|
||||
kubectl apply -f ./secret.yaml
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
secret "mysecret" created
|
||||
```
|
||||
|
||||
For certain scenarios, you may wish to use the `stringData` field instead. This
|
||||
field allows you to put a non-base64 encoded string directly into the Secret,
|
||||
and the string will be encoded for you when the Secret is created or updated.
|
||||
|
||||
A practical example of this might be where you are deploying an application
|
||||
that uses a Secret to store a configuration file, and you want to populate
|
||||
parts of that configuration file during your deployment process.
|
||||
|
||||
For example, if your application uses the following configuration file:
|
||||
|
||||
```yaml
|
||||
apiUrl: "https://my.api.com/api/v1"
|
||||
username: "user"
|
||||
password: "password"
|
||||
```
|
||||
|
||||
You could store this in a Secret using the following definition:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: mysecret
|
||||
type: Opaque
|
||||
stringData:
|
||||
config.yaml: |-
|
||||
apiUrl: "https://my.api.com/api/v1"
|
||||
username: {{username}}
|
||||
password: {{password}}
|
||||
```
|
||||
|
||||
Your deployment tool could then replace the `{{username}}` and `{{password}}`
|
||||
template variables before running `kubectl apply`.
|
||||
|
||||
The `stringData` field is a write-only convenience field. It is never output when
|
||||
retrieving Secrets. For example, if you run the following command:
|
||||
|
||||
```shell
|
||||
kubectl get secret mysecret -o yaml
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
creationTimestamp: 2018-11-15T20:40:59Z
|
||||
name: mysecret
|
||||
namespace: default
|
||||
resourceVersion: "7225"
|
||||
uid: c280ad2e-e916-11e8-98f2-025000000001
|
||||
type: Opaque
|
||||
data:
|
||||
config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19
|
||||
```
|
||||
|
||||
If a field, such as `username`, is specified in both `data` and `stringData`,
|
||||
the value from `stringData` is used. For example, the following Secret definition:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: mysecret
|
||||
type: Opaque
|
||||
data:
|
||||
username: YWRtaW4=
|
||||
stringData:
|
||||
username: administrator
|
||||
```
|
||||
|
||||
Results in the following Secret:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
creationTimestamp: 2018-11-15T20:46:46Z
|
||||
name: mysecret
|
||||
namespace: default
|
||||
resourceVersion: "7579"
|
||||
uid: 91460ecb-e917-11e8-98f2-025000000001
|
||||
type: Opaque
|
||||
data:
|
||||
username: YWRtaW5pc3RyYXRvcg==
|
||||
```
|
||||
|
||||
Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
|
||||
|
||||
The keys of `data` and `stringData` must consist of alphanumeric characters,
|
||||
`-`, `_` or `.`.
|
||||
|
||||
{{< note >}}
|
||||
The serialized JSON and YAML values of secret data are
|
||||
encoded as base64 strings. Newlines are not valid within these strings and must
|
||||
be omitted. When using the `base64` utility on Darwin/macOS, users should avoid
|
||||
using the `-b` option to split long lines. Conversely, Linux users *should* add
|
||||
the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if
|
||||
the `-w` option is not available.
|
||||
{{< /note >}}
|
||||
|
||||
#### Creating a Secret from a generator
|
||||
|
||||
Since Kubernetes v1.14, `kubectl` supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/). Kustomize provides resource Generators to
|
||||
create Secrets and ConfigMaps. The Kustomize generators should be specified in a
|
||||
`kustomization.yaml` file inside a directory. After generating the Secret,
|
||||
you can create the Secret on the API server with `kubectl apply`.
|
||||
|
||||
#### Generating a Secret from files
|
||||
|
||||
You can generate a Secret by defining a `secretGenerator` from the
|
||||
files `./username.txt` and `./password.txt`:
|
||||
|
||||
```shell
|
||||
cat <<EOF >./kustomization.yaml
|
||||
secretGenerator:
|
||||
- name: db-user-pass
|
||||
files:
|
||||
- username.txt
|
||||
- password.txt
|
||||
EOF
|
||||
```
|
||||
|
||||
Apply the directory containing the `kustomization.yaml` file to create the Secret.
|
||||
|
||||
```shell
|
||||
kubectl apply -k .
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
secret/db-user-pass-96mffmfh4k created
|
||||
```
|
||||
|
||||
You can check that the secret was created:
|
||||
|
||||
```shell
|
||||
kubectl get secrets
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
NAME TYPE DATA AGE
|
||||
db-user-pass-96mffmfh4k Opaque 2 51s
|
||||
```
|
||||
|
||||
You can view a description of the secret:
|
||||
|
||||
```shell
|
||||
kubectl describe secrets/db-user-pass-96mffmfh4k
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
Name: db-user-pass
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
|
||||
Type: Opaque
|
||||
|
||||
Data
|
||||
====
|
||||
password.txt: 12 bytes
|
||||
username.txt: 5 bytes
|
||||
```
|
||||
|
||||
#### Generating a Secret from string literals
|
||||
|
||||
You can create a Secret by defining a `secretGenerator`
|
||||
from literals `username=admin` and `password=secret`:
|
||||
|
||||
```shell
|
||||
cat <<EOF >./kustomization.yaml
|
||||
secretGenerator:
|
||||
- name: db-user-pass
|
||||
literals:
|
||||
- username=admin
|
||||
- password=secret
|
||||
EOF
|
||||
```
|
||||
|
||||
Apply the directory containing the `kustomization.yaml` file to create the Secret.
|
||||
|
||||
```shell
|
||||
kubectl apply -k .
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
secret/db-user-pass-dddghtt9b5 created
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
When a Secret is generated, the Secret name is created by hashing
|
||||
the Secret data and appending the resulting hash to the name. This ensures that
|
||||
a new Secret is generated each time the data is modified.
|
||||
{{< /note >}}
|
||||
|
||||
#### Decoding a Secret
|
||||
|
||||
Secrets can be retrieved by running `kubectl get secret`.
|
||||
For example, you can view the Secret created in the previous section by
|
||||
running the following command:
|
||||
|
||||
```shell
|
||||
kubectl get secret mysecret -o yaml
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
creationTimestamp: 2016-01-22T18:41:56Z
|
||||
name: mysecret
|
||||
namespace: default
|
||||
resourceVersion: "164619"
|
||||
uid: cfee02d6-c137-11e5-8d73-42010af00002
|
||||
type: Opaque
|
||||
data:
|
||||
username: YWRtaW4=
|
||||
password: MWYyZDFlMmU2N2Rm
|
||||
```
|
||||
|
||||
Decode the `password` field:
|
||||
|
||||
```shell
|
||||
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
|
||||
```
|
||||
|
||||
The output is similar to:
|
||||
|
||||
```
|
||||
1f2d1e2e67df
|
||||
```
|
||||
|
||||
#### Editing a Secret
|
||||
### Editing a Secret
|
||||
|
||||
An existing Secret may be edited with the following command:
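For example, assuming a Secret named `mysecret`:

```shell
kubectl edit secrets mysecret
```

This opens your default editor and allows you to update the base64-encoded values in the `data` field.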
|
||||
|
||||
|
@ -717,37 +337,6 @@ A container using a Secret as a
|
|||
Secret updates.
|
||||
{{< /note >}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
|
||||
|
||||
The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
|
||||
individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets
|
||||
(at least tens of thousands of unique Secret to Pod mounts), preventing changes to their
|
||||
data has the following advantages:
|
||||
|
||||
- protects you from accidental (or unwanted) updates that could cause application outages
|
||||
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
|
||||
closing watches for secrets marked as immutable.
|
||||
|
||||
To use this feature, enable the `ImmutableEphemeralVolumes`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set
|
||||
your Secret or ConfigMap `immutable` field to `true`. For example:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
...
|
||||
data:
|
||||
...
|
||||
immutable: true
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Once a Secret or ConfigMap is marked as immutable, it is _not_ possible to revert this change
|
||||
nor to mutate the contents of the `data` field. You can only delete and recreate the Secret.
|
||||
Existing Pods maintain a mount point to the deleted Secret - it is recommended to recreate
|
||||
these pods.
|
||||
{{< /note >}}
|
||||
|
||||
### Using Secrets as environment variables
|
||||
|
||||
To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
|
||||
|
@ -808,6 +397,40 @@ The output is similar to:
|
|||
1f2d1e2e67df
|
||||
```
|
||||
|
||||
## Immutable Secrets {#secret-immutable}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
|
||||
|
||||
The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
|
||||
individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets
|
||||
(at least tens of thousands of unique Secret to Pod mounts), preventing changes to their
|
||||
data has the following advantages:
|
||||
|
||||
- protects you from accidental (or unwanted) updates that could cause application outages
|
||||
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
|
||||
closing watches for secrets marked as immutable.
|
||||
|
||||
This feature is controlled by the `ImmutableEphemeralVolumes` [feature
|
||||
gate](/docs/reference/command-line-tools-reference/feature-gates/),
|
||||
which is enabled by default since v1.19. You can create an immutable
|
||||
Secret by setting the `immutable` field to `true`. For example:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
...
|
||||
data:
|
||||
...
|
||||
immutable: true
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Once a Secret or ConfigMap is marked as immutable, it is _not_ possible to revert this change
|
||||
nor to mutate the contents of the `data` field. You can only delete and recreate the Secret.
|
||||
Existing Pods maintain a mount point to the deleted Secret - it is recommended to recreate
|
||||
these pods.
|
||||
{{< /note >}}
|
||||
|
||||
### Using imagePullSecrets
|
||||
|
||||
The `imagePullSecrets` field is a list of references to secrets in the same namespace.
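As a sketch (the Secret name and image below are placeholders), a Pod that references an existing registry-credential Secret looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  containers:
    - name: app
      image: registry.example.com/app:v1
  imagePullSecrets:
    - name: myregistrykey
```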
|
||||
|
@ -914,7 +537,7 @@ Create the Secret:
|
|||
kubectl apply -f mysecret.yaml
|
||||
```
|
||||
|
||||
Use `envFrom` to define all of the Secret’s data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
|
||||
Use `envFrom` to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -1272,3 +895,11 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa
|
|||
by impersonating the kubelet. It is a planned feature to only send secrets to
|
||||
nodes that actually require them, to restrict the impact of a root exploit on a
|
||||
single node.
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
- Learn how to [manage Secret using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
|
||||
- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
|
||||
- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
|
||||
|
||||
|
|
|
@ -31,7 +31,7 @@ and default values for any essential settings.
|
|||
|
||||
By design, a container is immutable: you cannot change the code of a
|
||||
container that is already running. If you have a containerized application
|
||||
and want to make changes, you need to build a new container that includes
|
||||
and want to make changes, you need to build a new image that includes
|
||||
the change, then recreate the container to start from the updated image.
|
||||
|
||||
## Container runtimes
|
||||
|
|
|
@ -63,7 +63,7 @@ When `imagePullPolicy` is defined without a specific value, it is also set to `A
|
|||
|
||||
## Multi-architecture Images with Manifests
|
||||
|
||||
As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of an container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
|
||||
As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
|
||||
|
||||
Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate, say, a `pause` image that has the manifest for all the architectures, and, say, a `pause-amd64` image that is backwards compatible with older configurations or YAML files that may have hard-coded the images with suffixes.
|
||||
|
||||
|
|
|
@ -46,7 +46,7 @@ list of devices it manages, and the kubelet is then in charge of advertising tho
|
|||
resources to the API server as part of the kubelet node status update.
|
||||
For example, after a device plugin registers `hardware-vendor.example/foo` with the kubelet
|
||||
and reports two healthy devices on a node, the node status is updated
|
||||
to advertise that the node has 2 “Foo” devices installed and available.
|
||||
to advertise that the node has 2 "Foo" devices installed and available.
|
||||
|
||||
Then, users can request devices in a
|
||||
[Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
|
||||
|
|
|
@ -122,7 +122,7 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie
|
|||
* using [kubebuilder](https://book.kubebuilder.io/)
|
||||
* using [Metacontroller](https://metacontroller.app/) along with WebHooks that
|
||||
you implement yourself
|
||||
* using the [Operator Framework](https://github.com/operator-framework/getting-started)
|
||||
* using the [Operator Framework](https://operatorframework.io)
|
||||
* [Publish](https://operatorhub.io/) your operator for other people to use
|
||||
* Read [CoreOS' original article](https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern
|
||||
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators
|
||||
|
|
|
@ -21,7 +21,7 @@ a complete and working Kubernetes cluster.
|
|||
|
||||
Here's the diagram of a Kubernetes cluster with all the components tied together.
|
||||
|
||||

|
||||

|
||||
|
||||
|
||||
|
||||
|
|
|
@ -16,30 +16,23 @@ card:
|
|||
|
||||
The core of Kubernetes' {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
|
||||
is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The API server
|
||||
exposes an HTTP API that lets end users, different parts of your cluster, and external components
|
||||
communicate with one another.
|
||||
exposes an HTTP API that lets end users, different parts of your cluster, and
|
||||
external components communicate with one another.
|
||||
|
||||
The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes
|
||||
(for example: Pods, Namespaces, ConfigMaps, and Events).
|
||||
|
||||
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
|
||||
Most operations can be performed through the
|
||||
[kubectl](/docs/reference/kubectl/overview/) command-line interface or other
|
||||
command-line tools, such as
|
||||
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), which in turn use the
|
||||
API. However, you can also access the API directly using REST calls.
|
||||
|
||||
Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/)
|
||||
if you are writing an application using the Kubernetes API.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## API changes
|
||||
|
||||
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
|
||||
Therefore, Kubernetes has design features to allow the Kubernetes API to continuously change and grow.
|
||||
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
|
||||
compatibility for a length of time so that other projects have an opportunity to adapt.
|
||||
|
||||
In general, new API resources and new resource fields can be added often and frequently.
|
||||
Elimination of resources or fields requires following the
|
||||
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
|
||||
|
||||
What constitutes a compatible change, and how to change the API, are detailed in
|
||||
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
|
||||
|
||||
## OpenAPI specification {#api-specification}
|
||||
|
||||
Complete API details are documented using [OpenAPI](https://www.openapis.org/).
|
||||
|
@ -78,95 +71,58 @@ You can request the response format using request headers as follows:
|
|||
<caption>Valid request header values for OpenAPI v2 queries</caption>
|
||||
</table>
|
||||
|
||||
Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
|
||||
Kubernetes implements an alternative Protobuf based serialization format that
|
||||
is primarily intended for intra-cluster communication. For more information
|
||||
about this format, see the [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) design proposal and the
|
||||
Interface Definition Language (IDL) files for each schema located in the Go
|
||||
packages that define the API objects.
|
||||
|
||||
## API versioning
|
||||
## API changes
|
||||
|
||||
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports
|
||||
multiple API versions, each at a different API path, such as `/api/v1` or
|
||||
`/apis/rbac.authorization.k8s.io/v1alpha1`.
|
||||
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
|
||||
Therefore, Kubernetes has design features that allow the Kubernetes API to continuously change and grow.
|
||||
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
|
||||
compatibility for a length of time so that other projects have an opportunity to adapt.
|
||||
|
||||
Versioning is done at the API level rather than at the resource or field level to ensure that the
|
||||
API presents a clear, consistent view of system resources and behavior, and to enable controlling
|
||||
access to end-of-life and/or experimental APIs.
|
||||
In general, new API resources and new resource fields can be added often and frequently.
|
||||
Elimination of resources or fields requires following the
|
||||
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
|
||||
|
||||
The JSON and Protobuf serialization schemas follow the same guidelines for schema changes - all descriptions below cover both formats.
|
||||
What constitutes a compatible change, and how to change the API, are detailed in
|
||||
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
|
||||
|
||||
Note that API versioning and Software versioning are only indirectly related. The
|
||||
[Kubernetes Release Versioning](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)
|
||||
proposal describes the relationship between API versioning and software versioning.
|
||||
## API groups and versioning
|
||||
|
||||
Different API versions imply different levels of stability and support. The criteria for each level are described
|
||||
in more detail in the
|
||||
[API Changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)
|
||||
documentation. They are summarized here:
|
||||
To make it easier to eliminate fields or restructure resource representations,
|
||||
Kubernetes supports multiple API versions, each at a different API path, such
|
||||
as `/api/v1` or `/apis/rbac.authorization.k8s.io/v1alpha1`.
|
||||
|
||||
- Alpha level:
|
||||
- The version names contain `alpha` (e.g. `v1alpha1`).
|
||||
- May be buggy. Enabling the feature may expose bugs. Disabled by default.
|
||||
- Support for feature may be dropped at any time without notice.
|
||||
- The API may change in incompatible ways in a later software release without notice.
|
||||
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
|
||||
- Beta level:
|
||||
- The version names contain `beta` (e.g. `v2beta3`).
|
||||
- Code is well tested. Enabling the feature is considered safe. Enabled by default.
|
||||
- Support for the overall feature will not be dropped, though details may change.
|
||||
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
|
||||
we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
|
||||
API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
|
||||
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
|
||||
multiple clusters which can be upgraded independently, you may be able to relax this restriction.
|
||||
- **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
|
||||
- Stable level:
|
||||
- The version name is `vX` where `X` is an integer.
|
||||
- Stable versions of features will appear in released software for many subsequent versions.
|
||||
Versioning is done at the API level rather than at the resource or field level
|
||||
to ensure that the API presents a clear, consistent view of system resources
|
||||
and behavior, and to enable controlling access to end-of-life and/or
|
||||
experimental APIs.
|
||||
|
||||
## API groups
|
||||
Refer to [API versions reference](/docs/reference/using-api/api-overview/#api-versioning)
|
||||
for more details on the API version level definitions.
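For instance, you can list the group/version pairs served by your own cluster (the output varies from cluster to cluster):

```shell
kubectl api-versions
```

Each line of the output is either `v1` (the core group) or a `GROUP/VERSION` pair such as `rbac.authorization.k8s.io/v1`.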
|
||||
|
||||
To make it easier to extend its API, Kubernetes implements [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
|
||||
The API group is specified in a REST path and in the `apiVersion` field of a serialized object.
|
||||
To make it easier to evolve and to extend its API, Kubernetes implements
|
||||
[API groups](/docs/reference/using-api/api-overview/#api-groups) that can be
|
||||
[enabled or disabled](/docs/reference/using-api/api-overview/#enabling-or-disabling).
|
||||
|
||||
There are several API groups in a cluster:
|
||||
## API Extension
|
||||
|
||||
1. The *core* group, also referred to as the *legacy* group, is at the REST path `/api/v1` and uses `apiVersion: v1`.
|
||||
|
||||
1. *Named* groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION`
|
||||
(e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a
|
||||
full list of available API groups.
|
||||
|
||||
There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/):
|
||||
|
||||
1. [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
|
||||
lets you declaratively define how the API server should provide your chosen resource API.
|
||||
1. You can also [implement your own extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/)
|
||||
and use the [aggregator](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
|
||||
to make it seamless for clients.
|
||||
|
||||
## Enabling or disabling API groups
|
||||
|
||||
Certain resources and API groups are enabled by default. They can be enabled or disabled by setting `--runtime-config`
|
||||
as a command line option to the kube-apiserver.
|
||||
|
||||
`--runtime-config` accepts comma-separated values. For example: to disable `batch/v1`, set
|
||||
`--runtime-config=batch/v1=false`; to enable `batch/v2alpha1`, set `--runtime-config=batch/v2alpha1`.
|
||||
The flag accepts a comma-separated set of `key=value` pairs describing the runtime configuration of the API server.
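For instance (a sketch; all other kube-apiserver flags are omitted), both of those changes can be combined into a single flag value:

```shell
kube-apiserver --runtime-config=batch/v1=false,batch/v2alpha1=true
```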
|
||||
|
||||
{{< note >}}Enabling or disabling groups or resources requires restarting the kube-apiserver and the
|
||||
kube-controller-manager to pick up the `--runtime-config` changes.{{< /note >}}
|
||||
|
||||
## Persistence
|
||||
|
||||
Kubernetes stores its serialized state in terms of the API resources by writing them into
|
||||
{{< glossary_tooltip term_id="etcd" >}}.
|
||||
The Kubernetes API can be extended in one of two ways:
|
||||
|
||||
1. [Custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
let you declaratively define how the API server should provide your chosen resource API.
|
||||
1. You can also extend the Kubernetes API by implementing an
|
||||
[aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
[Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes
|
||||
how the cluster manages authentication and authorization for API access.
|
||||
|
||||
Overall API conventions are described in the
|
||||
[API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)
|
||||
document.
|
||||
|
||||
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
|
||||
- Learn how to extend the Kubernetes API by adding your own
|
||||
[CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
|
||||
- [Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes
|
||||
how the cluster manages authentication and authorization for API access.
|
||||
- Learn about API endpoints, resource types and samples by reading
|
||||
[API Reference](/docs/reference/kubernetes-api/).
|
||||
|
|
|
@ -68,7 +68,7 @@ You can describe the desired state for your deployed containers using Kubernetes
|
|||
* **Automatic bin packing**
|
||||
You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
|
||||
* **Self-healing**
|
||||
Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
|
||||
Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
|
||||
* **Secret and configuration management**
|
||||
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
|
||||
|
||||
|
@ -84,7 +84,7 @@ Kubernetes:
|
|||
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
|
||||
* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
|
||||
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
|
||||
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn’t matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
|
||||
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -69,7 +69,7 @@ If the prefix is omitted, the annotation Key is presumed to be private to the us
|
|||
|
||||
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
|
||||
|
||||
For example, here’s the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
|
||||
For example, here's the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
|
||||
|
||||
```yaml
|
||||
|
||||
|
|
|
@ -54,7 +54,7 @@ The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core com
|
|||
|
||||
Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
|
||||
|
||||
For example, here’s the configuration file for a Pod that has two labels `environment: production` and `app: nginx` :
|
||||
For example, here's the configuration file for a Pod that has two labels `environment: production` and `app: nginx` :
|
||||
|
||||
```yaml
|
||||
|
||||
|
|
|
@ -54,7 +54,7 @@ Some resource types require their names to be able to be safely encoded as a
|
|||
path segment. In other words, the name may not be "." or ".." and the name may
|
||||
not contain "/" or "%".
|
||||
|
||||
Here’s an example manifest for a Pod named `nginx-demo`.
|
||||
Here's an example manifest for a Pod named `nginx-demo`.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
|
|
@ -208,6 +208,19 @@ field in the quota spec.
|
|||
|
||||
A quota is matched and consumed only if `scopeSelector` in the quota spec selects the pod.
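As a sketch (the PriorityClass name `high` is hypothetical), a quota that only tracks Pods of that priority might look like:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high
spec:
  hard:
    cpu: "1000"
    memory: 200Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
```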
|
||||
|
||||
When a quota is scoped to a priority class using the `scopeSelector` field, the quota object is restricted to track only the following resources:
|
||||
|
||||
* `pods`
|
||||
* `cpu`
|
||||
* `memory`
|
||||
* `ephemeral-storage`
|
||||
* `limits.cpu`
|
||||
* `limits.memory`
|
||||
* `limits.ephemeral-storage`
|
||||
* `requests.cpu`
|
||||
* `requests.memory`
|
||||
* `requests.ephemeral-storage`
|
||||
|
||||
This example creates a quota object and matches it with pods at specific priorities. The example
|
||||
works as follows:
|
||||
|
||||
|
|
|
@ -73,17 +73,7 @@ verify that it worked by running `kubectl get pods -o wide` and looking at the
|
|||
## Interlude: built-in node labels {#built-in-node-labels}
|
||||
|
||||
In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated
|
||||
with a standard set of labels. These labels are
|
||||
|
||||
* [`kubernetes.io/hostname`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname)
|
||||
* [`failure-domain.beta.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone)
|
||||
* [`failure-domain.beta.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion)
|
||||
* [`topology.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
|
||||
* [`topology.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
|
||||
* [`beta.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type)
|
||||
* [`node.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
|
||||
* [`kubernetes.io/os`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os)
|
||||
* [`kubernetes.io/arch`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch)
|
||||
with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/) for a list of these.
|
||||
|
||||
{{< note >}}
|
||||
The value of these labels is cloud provider specific and is not guaranteed to be reliable.
|
||||
|
|
|
@ -0,0 +1,24 @@
|
|||
---
|
||||
title: Eviction Policy
|
||||
content_type: concept
|
||||
weight: 60
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
This page is an overview of Kubernetes' policy for eviction.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Eviction Policy
|
||||
|
||||
The {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} proactively monitors for
|
||||
and prevents total starvation of a compute resource. In those cases, the `kubelet` can reclaim
|
||||
the starved resource by failing one or more Pods. When the `kubelet` fails
|
||||
a Pod, it terminates all of its containers and transitions its `PodPhase` to `Failed`.
|
||||
If the evicted Pod is managed by a Deployment, the Deployment creates another Pod
|
||||
to be scheduled by Kubernetes.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
- Learn how to [configure out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) with eviction signals and thresholds.
|
|
@ -77,12 +77,9 @@ one of these at random.
|
|||
There are two supported ways to configure the filtering and scoring behavior
|
||||
of the scheduler:
|
||||
|
||||
1. [Scheduling Policies](/docs/reference/scheduling/policies) allow you to
|
||||
configure _Predicates_ for filtering and _Priorities_ for scoring.
|
||||
1. [Scheduling Profiles](/docs/reference/scheduling/config/#profiles) allow you
|
||||
to configure Plugins that implement different scheduling stages, including:
|
||||
`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You
|
||||
can also configure the kube-scheduler to run different profiles.
|
||||
|
||||
1. [Scheduling Policies](/docs/reference/scheduling/policies) allow you to configure _Predicates_ for filtering and _Priorities_ for scoring.
|
||||
1. [Scheduling Profiles](/docs/reference/scheduling/profiles) allow you to configure Plugins that implement different scheduling stages, including: `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You can also configure the kube-scheduler to run different profiles.
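For instance, a scheduler configuration with two profiles might look like the sketch below; the profile name `no-scoring-scheduler` is only an illustration, and you should check the scheduler configuration reference for the API version that matches your cluster:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: no-scoring-scheduler   # assumed profile name
    plugins:
      # disable all scoring plugins for Pods that request this profile
      preScore:
        disabled:
        - name: '*'
      score:
        disabled:
        - name: '*'
```

Pods opt into a profile by setting `.spec.schedulerName` to that profile's `schedulerName`.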
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
@ -3,7 +3,7 @@ reviewers:
|
|||
- bsalamat
|
||||
title: Scheduler Performance Tuning
|
||||
content_type: concept
|
||||
weight: 70
|
||||
weight: 80
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -48,17 +48,13 @@ To change the value, edit the kube-scheduler configuration file (this is likely
|
|||
to be `/etc/kubernetes/config/kube-scheduler.yaml`), then restart the scheduler.
|
||||
|
||||
After you have made this change, you can run
|
||||
|
||||
```bash
|
||||
kubectl get componentstatuses
|
||||
```
|
||||
to verify that the kube-scheduler component is healthy. The output is similar to:
|
||||
```
|
||||
NAME STATUS MESSAGE ERROR
|
||||
controller-manager Healthy ok
|
||||
scheduler Healthy ok
|
||||
...
|
||||
kubectl get pods -n kube-system | grep kube-scheduler
|
||||
```
|
||||
|
||||
to verify that the kube-scheduler component is healthy.
|
||||
|
||||
## Node scoring threshold {#percentage-of-nodes-to-score}
|
||||
|
||||
To improve scheduling performance, the kube-scheduler can stop looking for
|
||||
|
|
|
@ -3,7 +3,7 @@ reviewers:
|
|||
- ahg-g
|
||||
title: Scheduling Framework
|
||||
content_type: concept
|
||||
weight: 60
|
||||
weight: 70
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -78,7 +78,7 @@ called for that node. Nodes may be evaluated concurrently.
|
|||
### PostFilter {#post-filter}
|
||||
|
||||
These plugins are called after Filter phase, but only when no feasible nodes
|
||||
were found for the node. Plugins are called in their configured order. If
|
||||
were found for the pod. Plugins are called in their configured order. If
|
||||
any postFilter plugin marks the node as `Schedulable`, the remaining plugins
|
||||
will not be called. A typical PostFilter implementation is preemption, which
|
||||
tries to make the pod schedulable by preempting other Pods.
|
||||
|
|
|
@ -62,7 +62,7 @@ tolerations:
|
|||
effect: "NoSchedule"
|
||||
```
|
||||
|
||||
Here’s an example of a pod that uses tolerations:
|
||||
Here's an example of a pod that uses tolerations:
|
||||
|
||||
{{< codenew file="pods/pod-with-toleration.yaml" >}}
|
||||
|
||||
|
|
|
@ -292,7 +292,7 @@ Containers at runtime. Security contexts are defined as part of the Pod and cont
|
|||
in the Pod manifest, and represent parameters to the container runtime.
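As a minimal sketch (the Pod name and image are placeholders, not taken from this page), a security context can be set at both the Pod and container level:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo       # placeholder name
spec:
  securityContext:                  # Pod-level settings apply to all containers
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: registry.example/app:1.0 # placeholder image
    securityContext:                # container-level settings override Pod-level ones
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
```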
|
||||
|
||||
Security policies are control plane mechanisms to enforce specific settings in the Security Context,
|
||||
as well as other parameters outside the Security Contex. As of February 2020, the current native
|
||||
as well as other parameters outside the Security Context. As of February 2020, the current native
|
||||
solution for enforcing these security policies is [Pod Security
|
||||
Policy](/docs/concepts/policy/pod-security-policy/) - a mechanism for centrally enforcing security
|
||||
policy on Pods across a cluster. Other alternatives for enforcing security policy are being
|
||||
|
@ -317,6 +317,6 @@ restrict privileged permissions is lessened when the workload is isolated from t
|
|||
kernel. This allows for workloads requiring heightened permissions to still be isolated.
|
||||
|
||||
Additionally, the protection of sandboxed workloads is highly dependent on the method of
|
||||
sandboxing. As such, no single ‘recommended’ policy is recommended for all sandboxed workloads.
|
||||
sandboxing. As such, no single policy can be recommended for all sandboxed workloads.
|
||||
|
||||
|
||||
|
|
|
@ -15,7 +15,7 @@ weight: 30
|
|||
|
||||
Now that you have a continuously running, replicated application you can expose it on a network. Before discussing the Kubernetes approach to networking, it is worthwhile to contrast it with the "normal" way networking works with Docker.
|
||||
|
||||
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, there must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or ports must be allocated dynamically.
|
||||
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, there must be allocated ports on the machine's own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or ports must be allocated dynamically.
|
||||
|
||||
Coordinating port allocations across multiple developers or teams that provide containers is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.
|
||||
|
||||
|
|
|
@ -209,7 +209,7 @@ following pod-specific DNS policies. These policies are specified in the
|
|||
|
||||
{{< note >}}
|
||||
"Default" is not the default DNS policy. If `dnsPolicy` is not
|
||||
explicitly specified, then “ClusterFirst” is used.
|
||||
explicitly specified, then "ClusterFirst" is used.
|
||||
{{< /note >}}
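As a small sketch (the Pod name and image are placeholders), a Pod that opts out of the implicit `ClusterFirst` behaviour by setting the field explicitly could look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-policy-demo             # placeholder name
spec:
  dnsPolicy: "Default"              # inherit name resolution from the node instead of ClusterFirst
  containers:
  - name: app
    image: registry.example/app:1.0 # placeholder image
```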
|
||||
|
||||
|
||||
|
|
|
@ -47,6 +47,7 @@ To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/
|
|||
|
||||
* kube-apiserver:
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
* `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* kube-controller-manager:
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
* `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
|
|
|
@ -23,7 +23,7 @@ Endpoints.
|
|||
|
||||
The Endpoints API has provided a simple and straightforward way of
|
||||
tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
|
||||
and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle
|
||||
and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle and
|
||||
send more traffic to more backend Pods, limitations of that original API became
|
||||
more visible.
|
||||
Most notably, those included challenges with scaling to larger numbers of
|
||||
|
@ -114,8 +114,8 @@ of the labels with the same names on the corresponding Node.
|
|||
Most often, the control plane (specifically, the endpoint slice
|
||||
{{< glossary_tooltip text="controller" term_id="controller" >}}) creates and
|
||||
manages EndpointSlice objects. There are a variety of other use cases for
|
||||
EndpointSlices, such as service mesh implementations, that could result in othe
|
||||
rentities or controllers managing additional sets of EndpointSlices.
|
||||
EndpointSlices, such as service mesh implementations, that could result in other
|
||||
entities or controllers managing additional sets of EndpointSlices.
|
||||
|
||||
To ensure that multiple entities can manage EndpointSlices without interfering
|
||||
with each other, Kubernetes defines the
|
||||
|
|
|
@ -29,15 +29,26 @@ For clarity, this guide defines the following terms:
|
|||
{{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster.
|
||||
Traffic routing is controlled by rules defined on the Ingress resource.
|
||||
|
||||
```none
|
||||
internet
|
||||
|
|
||||
[ Ingress ]
|
||||
--|-----|--
|
||||
[ Services ]
|
||||
```
|
||||
Here is a simple example where an Ingress sends all its traffic to one Service:
|
||||
{{< mermaid >}}
|
||||
graph LR;
|
||||
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress];
|
||||
ingress-->|routing rule|service[Service];
|
||||
subgraph cluster
|
||||
ingress;
|
||||
service-->pod1[Pod];
|
||||
service-->pod2[Pod];
|
||||
end
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class ingress,service,pod1,pod2 k8s;
|
||||
class client plain;
|
||||
class cluster cluster;
|
||||
{{</ mermaid >}}
|
||||
|
||||
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
|
||||
|
||||
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
|
||||
|
||||
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically
|
||||
uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) or
|
||||
|
@ -45,7 +56,7 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin
|
|||
|
||||
## Prerequisites
|
||||
|
||||
You must have an [ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect.
|
||||
You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect.
|
||||
|
||||
You may need to deploy an Ingress controller such as [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/). You can choose from a number of
|
||||
[Ingress controllers](/docs/concepts/services-networking/ingress-controllers).
|
||||
|
@ -107,7 +118,7 @@ routed to your default backend.
|
|||
### Resource backends {#resource-backend}
|
||||
|
||||
A `Resource` backend is an ObjectRef to another Kubernetes resource within the
|
||||
same namespace of the Ingress object. A `Resource` is a mutually exclusive
|
||||
same namespace as the Ingress object. A `Resource` is a mutually exclusive
|
||||
setting with Service, and will fail validation if both are specified. A common
|
||||
usage for a `Resource` backend is to ingress data to an object storage backend
|
||||
with static assets.
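As an illustrative sketch (the API group `k8s.example.com`, the kind `StorageBucket`, and the names are assumptions for the example, not part of this page), such a backend could be declared like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend   # illustrative name
spec:
  defaultBackend:
    resource:
      apiGroup: k8s.example.com    # assumed custom API group
      kind: StorageBucket          # assumed custom resource kind
      name: static-assets
```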
|
||||
|
@ -235,7 +246,7 @@ IngressClass resource will ensure that new Ingresses without an
|
|||
If you have more than one IngressClass marked as the default for your cluster,
|
||||
the admission controller prevents creating new Ingress objects that don't have
|
||||
an `ingressClassName` specified. You can resolve this by ensuring that at most 1
|
||||
IngressClasses are marked as default in your cluster.
|
||||
IngressClass is marked as default in your cluster.
|
||||
{{< /caution >}}
|
||||
|
||||
## Types of Ingress
|
||||
|
@ -274,10 +285,25 @@ A fanout configuration routes traffic from a single IP address to more than one
|
|||
based on the HTTP URI being requested. An Ingress allows you to keep the number of load balancers
|
||||
down to a minimum. For example, a setup like:
|
||||
|
||||
```
|
||||
foo.bar.com -> 178.91.123.132 -> / foo service1:4200
|
||||
/ bar service2:8080
|
||||
```
|
||||
{{< mermaid >}}
|
||||
graph LR;
|
||||
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress, 178.91.123.132];
|
||||
ingress-->|/foo|service1[Service service1:4200];
|
||||
ingress-->|/bar|service2[Service service2:8080];
|
||||
subgraph cluster
|
||||
ingress;
|
||||
service1-->pod1[Pod];
|
||||
service1-->pod2[Pod];
|
||||
service2-->pod3[Pod];
|
||||
service2-->pod4[Pod];
|
||||
end
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s;
|
||||
class client plain;
|
||||
class cluster cluster;
|
||||
{{</ mermaid >}}
|
||||
|
||||
would require an Ingress such as:
|
||||
|
||||
|
@ -321,11 +347,26 @@ you are using, you may need to create a default-http-backend
|
|||
|
||||
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
|
||||
|
||||
```none
|
||||
foo.bar.com --| |-> foo.bar.com service1:80
|
||||
| 178.91.123.132 |
|
||||
bar.foo.com --| |-> bar.foo.com service2:80
|
||||
```
|
||||
{{< mermaid >}}
|
||||
graph LR;
|
||||
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress, 178.91.123.132];
|
||||
ingress-->|Host: foo.bar.com|service1[Service service1:80];
|
||||
ingress-->|Host: bar.foo.com|service2[Service service2:80];
|
||||
subgraph cluster
|
||||
ingress;
|
||||
service1-->pod1[Pod];
|
||||
service1-->pod2[Pod];
|
||||
service2-->pod3[Pod];
|
||||
service2-->pod4[Pod];
|
||||
end
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s;
|
||||
class client plain;
|
||||
class cluster cluster;
|
||||
{{</ mermaid >}}
|
||||
|
||||
|
||||
The following Ingress tells the backing load balancer to route requests based on
|
||||
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
|
||||
|
@ -491,9 +532,8 @@ You can achieve the same outcome by invoking `kubectl replace -f` on a modified
|
|||
|
||||
## Failing across availability zones
|
||||
|
||||
Techniques for spreading traffic across failure domains differs between cloud providers.
|
||||
Techniques for spreading traffic across failure domains differ between cloud providers.
|
||||
Please check the documentation of the relevant [Ingress controller](/docs/concepts/services-networking/ingress-controllers) for details.
|
||||
for details on deploying Ingress in a federated cluster.
|
||||
|
||||
## Alternatives
|
||||
|
||||
|
@ -509,4 +549,3 @@ You can expose a Service in multiple ways that don't directly involve the Ingres
|
|||
* Learn about the [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io)
|
||||
* Learn about [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/)
|
||||
* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube/)
|
||||
|
||||
|
|
|
@ -9,11 +9,18 @@ weight: 50
|
|||
---
|
||||
|
||||
<!-- overview -->
|
||||
A network policy is a specification of how groups of {{< glossary_tooltip text="pods" term_id="pod">}} are allowed to communicate with each other and other network endpoints.
|
||||
|
||||
NetworkPolicy resources use {{< glossary_tooltip text="labels" term_id="label">}} to select pods and define rules which specify what traffic is allowed to the selected pods.
|
||||
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a {{< glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.
|
||||
|
||||
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
|
||||
|
||||
1. Other pods that are allowed (exception: a pod cannot block access to itself)
|
||||
2. Namespaces that are allowed
|
||||
3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
|
||||
|
||||
When defining a pod- or namespace-based NetworkPolicy, you use a {{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to and from the Pod(s) that match the selector.
|
||||
|
||||
Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
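To make that concrete, here is a minimal sketch that combines a pod selector with an IP block; the labels and CIDR are illustrative assumptions, not values from this page, and the fuller example discussed later on this page is more complete:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-and-cidr   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db                    # assumed pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend          # assumed pod label
    - ipBlock:
        cidr: 203.0.113.0/24      # assumed CIDR
    ports:
    - protocol: TCP
      port: 6379
```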
|
||||
|
||||
<!-- body -->
|
||||
## Prerequisites
|
||||
|
@ -94,7 +101,7 @@ __egress__: Each NetworkPolicy may include a list of allowed `egress` rules. Ea
|
|||
So, the example NetworkPolicy:
|
||||
|
||||
1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated)
|
||||
2. (Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from:
|
||||
2. (Ingress rules) allows connections to all pods in the "default" namespace with the label "role=db" on TCP port 6379 from:
|
||||
|
||||
* any pod in the "default" namespace with the label "role=frontend"
|
||||
* any pod in a namespace with the label "project=myproject"
|
||||
|
@ -212,8 +219,21 @@ When the feature gate is enabled, you can set the `protocol` field of a NetworkP
|
|||
You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies.
|
||||
{{< /note >}}
|
||||
|
||||
## What you can't do with network policies (at least, not yet)
|
||||
|
||||
As of Kubernetes 1.20, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using operating system components (such as SELinux, OpenVSwitch, IPTables, and so on), Layer 7 technologies (Ingress controllers, service mesh implementations), or admission controllers. If you are new to network security in Kubernetes, it's worth noting that the following user stories cannot (yet) be implemented using the NetworkPolicy API. Some (but not all) of these user stories are actively being discussed for future releases of the NetworkPolicy API.
|
||||
|
||||
- Forcing internal cluster traffic to go through a common gateway (this might be best served with a service mesh or other proxy).
|
||||
- Anything TLS related (use a service mesh or ingress controller for this).
|
||||
- Node specific policies (you can use CIDR notation for these, but you cannot target nodes by their Kubernetes identities specifically).
|
||||
- Targeting of namespaces or services by name (you can, however, target pods or namespaces by their {{< glossary_tooltip text="labels" term_id="label" >}}, which is often a viable workaround).
|
||||
- Creation or management of "Policy requests" that are fulfilled by a third party.
|
||||
- Default policies which are applied to all namespaces or pods (there are some third party Kubernetes distributions and projects which can do this).
|
||||
- Advanced policy querying and reachability tooling.
|
||||
- The ability to target ranges of Ports in a single policy declaration.
|
||||
- The ability to log network security events (for example connections that are blocked or accepted).
|
||||
- The ability to explicitly deny policies (currently the model for NetworkPolicies is deny by default, with only the ability to add allow rules).
|
||||
- The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost access, nor do they have the ability to block access from their resident node).
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
@ -221,5 +241,3 @@ You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin tha
|
|||
- See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/)
|
||||
walkthrough for further examples.
|
||||
- See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource.
|
||||
|
||||
|
||||
|
|
|
@ -24,8 +24,8 @@ and can load-balance across them.
|
|||
|
||||
## Motivation
|
||||
|
||||
Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal.
|
||||
They are born and when they die, they are not resurrected.
|
||||
Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are created and destroyed
|
||||
to match the state of your cluster. Pods are nonpermanent resources.
|
||||
If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app,
|
||||
it can create and destroy Pods dynamically.
|
||||
|
||||
|
@ -33,8 +33,8 @@ Each Pod gets its own IP address, however in a Deployment, the set of Pods
|
|||
running in one moment in time could be different from
|
||||
the set of Pods running that application a moment later.
|
||||
|
||||
This leads to a problem: if some set of Pods (call them “backends”) provides
|
||||
functionality to other Pods (call them “frontends”) inside your cluster,
|
||||
This leads to a problem: if some set of Pods (call them "backends") provides
|
||||
functionality to other Pods (call them "frontends") inside your cluster,
|
||||
how do the frontends find out and keep track of which IP address to connect
|
||||
to, so that the frontend can use the backend part of the workload?
|
||||
|
||||
|
@ -45,9 +45,9 @@ Enter _Services_.
|
|||
In Kubernetes, a Service is an abstraction which defines a logical set of Pods
|
||||
and a policy by which to access them (sometimes this pattern is called
|
||||
a micro-service). The set of Pods targeted by a Service is usually determined
|
||||
by a {{< glossary_tooltip text="selector" term_id="selector" >}}
|
||||
(see [below](#services-without-selectors) for why you might want a Service
|
||||
_without_ a selector).
|
||||
by a {{< glossary_tooltip text="selector" term_id="selector" >}}.
|
||||
To learn about other ways to define Service endpoints,
|
||||
see [Services _without_ selectors](#services-without-selectors).
|
||||
|
||||
For example, consider a stateless image-processing backend which is running with
|
||||
3 replicas. Those replicas are fungible—frontends do not care which backend
|
||||
|
@ -91,7 +91,7 @@ spec:
|
|||
targetPort: 9376
|
||||
```
|
||||
|
||||
This specification creates a new Service object named “my-service”, which
|
||||
This specification creates a new Service object named "my-service", which
|
||||
targets TCP port 9376 on any Pod with the `app=MyApp` label.
|
||||
|
||||
Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"),
|
||||
|
@ -100,7 +100,7 @@ which is used by the Service proxies
|
|||
|
||||
The controller for the Service selector continuously scans for Pods that
|
||||
match its selector, and then POSTs any updates to an Endpoint object
|
||||
also named “my-service”.
|
||||
also named "my-service".
|
||||
|
||||
{{< note >}}
|
||||
A Service can map _any_ incoming `port` to a `targetPort`. By default and
|
||||
|
@ -129,11 +129,11 @@ Services most commonly abstract access to Kubernetes Pods, but they can also
|
|||
abstract other kinds of backends.
|
||||
For example:
|
||||
|
||||
* You want to have an external database cluster in production, but in your
|
||||
* You want to have an external database cluster in production, but in your
|
||||
test environment you use your own databases.
|
||||
* You want to point your Service to a Service in a different
|
||||
* You want to point your Service to a Service in a different
|
||||
{{< glossary_tooltip term_id="namespace" >}} or on another cluster.
|
||||
* You are migrating a workload to Kubernetes. Whilst evaluating the approach,
|
||||
* You are migrating a workload to Kubernetes. While evaluating the approach,
|
||||
you run only a proportion of your backends in Kubernetes.
|
||||
|
||||
In any of these scenarios you can define a Service _without_ a Pod selector.
|
||||
|
@ -151,7 +151,7 @@ spec:
|
|||
targetPort: 9376
|
||||
```
|
||||
|
||||
Because this Service has no selector, the corresponding Endpoint object is *not*
|
||||
Because this Service has no selector, the corresponding Endpoint object is not
|
||||
created automatically. You can manually map the Service to the network address and port
|
||||
where it's running, by adding an Endpoint object manually:
|
||||
|
||||
|
@ -188,6 +188,7 @@ selectors and uses DNS names instead. For more information, see the
|
|||
[ExternalName](#externalname) section later in this document.
|
||||
|
||||
### EndpointSlices
|
||||
|
||||
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
|
||||
|
||||
EndpointSlices are an API resource that can provide a more scalable alternative
|
||||
|
@ -204,9 +205,8 @@ described in detail in [EndpointSlices](/docs/concepts/services-networking/endpo
|
|||
|
||||
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
|
||||
|
||||
The AppProtocol field provides a way to specify an application protocol to be
|
||||
used for each Service port. The value of this field is mirrored by corresponding
|
||||
Endpoints and EndpointSlice resources.
|
||||
The `AppProtocol` field provides a way to specify an application protocol for each Service port.
|
||||
The value of this field is mirrored by corresponding Endpoints and EndpointSlice resources.
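For example, a Service port might declare its application protocol like this (a sketch; the port numbers are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: https
    protocol: TCP
    appProtocol: https   # mirrored onto the Endpoints and EndpointSlice objects
    port: 443
    targetPort: 8443     # placeholder container port
```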
|
||||
|
||||
## Virtual IPs and service proxies
|
||||
|
||||
|
@ -224,10 +224,10 @@ resolution?
|
|||
|
||||
There are a few reasons for using proxying for Services:
|
||||
|
||||
* There is a long history of DNS implementations not respecting record TTLs,
|
||||
* There is a long history of DNS implementations not respecting record TTLs,
|
||||
and caching the results of name lookups after they should have expired.
|
||||
* Some apps do DNS lookups only once and cache the results indefinitely.
|
||||
* Even if apps and libraries did proper re-resolution, the low or zero TTLs
|
||||
* Some apps do DNS lookups only once and cache the results indefinitely.
|
||||
* Even if apps and libraries did proper re-resolution, the low or zero TTLs
|
||||
on the DNS records could impose a high load on DNS that then becomes
|
||||
difficult to manage.
|
||||
|
||||
|
@ -236,8 +236,7 @@ There are a few reasons for using proxying for Services:
|
|||
In this mode, kube-proxy watches the Kubernetes master for the addition and
|
||||
removal of Service and Endpoint objects. For each Service it opens a
|
||||
port (randomly chosen) on the local node. Any connections to this "proxy port"
|
||||
are
|
||||
proxied to one of the Service's backend Pods (as reported via
|
||||
are proxied to one of the Service's backend Pods (as reported via
|
||||
Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into
|
||||
account when deciding which backend Pod to use.
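Session affinity is requested on the Service itself; a hedged sketch of the relevant fields:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  sessionAffinity: ClientIP          # default is None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800          # maximum session sticky time
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```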
|
||||
|
||||
|
@ -298,12 +297,12 @@ higher throughput of network traffic.
|
|||
IPVS provides more options for balancing traffic to backend Pods;
|
||||
these are:
|
||||
|
||||
- `rr`: round-robin
|
||||
- `lc`: least connection (smallest number of open connections)
|
||||
- `dh`: destination hashing
|
||||
- `sh`: source hashing
|
||||
- `sed`: shortest expected delay
|
||||
- `nq`: never queue
|
||||
* `rr`: round-robin
|
||||
* `lc`: least connection (smallest number of open connections)
|
||||
* `dh`: destination hashing
|
||||
* `sh`: source hashing
|
||||
* `sed`: shortest expected delay
|
||||
* `nq`: never queue
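One way to select the IPVS mode and one of these schedulers is through the kube-proxy configuration file, roughly like the sketch below; treat the exact file layout as an assumption and check the KubeProxyConfiguration reference for your version:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # one of the balancing options listed above
```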
|
||||
|
||||
{{< note >}}
|
||||
To run kube-proxy in IPVS mode, you must make IPVS available on
|
||||
|
@ -316,7 +315,7 @@ falls back to running in iptables proxy mode.
|
|||
|
||||

|
||||
|
||||
In these proxy models, the traffic bound for the Service’s IP:Port is
|
||||
In these proxy models, the traffic bound for the Service's IP:Port is
|
||||
proxied to an appropriate backend without the clients knowing anything
|
||||
about Kubernetes or Services or Pods.
|
||||
|
||||
|
@ -389,7 +388,7 @@ compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
|
|||
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
|
||||
where the Service name is upper-cased and dashes are converted to underscores.
|
||||
|
||||
For example, the Service `"redis-master"` which exposes TCP port 6379 and has been
|
||||
For example, the Service `redis-master` which exposes TCP port 6379 and has been
|
||||
allocated cluster IP address 10.0.0.11, produces the following environment
|
||||
variables:
|
||||
|
||||
|
@ -423,19 +422,19 @@ Services and creates a set of DNS records for each one. If DNS has been enabled
|
|||
throughout your cluster then all Pods should automatically be able to resolve
|
||||
Services by their DNS name.
|
||||
|
||||
For example, if you have a Service called `"my-service"` in a Kubernetes
|
||||
Namespace `"my-ns"`, the control plane and the DNS Service acting together
|
||||
create a DNS record for `"my-service.my-ns"`. Pods in the `"my-ns"` Namespace
|
||||
For example, if you have a Service called `my-service` in a Kubernetes
|
||||
namespace `my-ns`, the control plane and the DNS Service acting together
|
||||
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
|
||||
should be able to find it by simply doing a name lookup for `my-service`
|
||||
(`"my-service.my-ns"` would also work).
|
||||
(`my-service.my-ns` would also work).
|
||||
|
||||
Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names
|
||||
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
|
||||
will resolve to the cluster IP assigned for the Service.
|
||||
|
||||
Kubernetes also supports DNS SRV (Service) records for named ports. If the
|
||||
`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to
|
||||
`my-service.my-ns` Service has a port named `http` with the protocol set to
|
||||
`TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover
|
||||
the port number for `"http"`, as well as the IP address.
|
||||
the port number for `http`, as well as the IP address.
|
||||
|
||||
The Kubernetes DNS server is the only way to access `ExternalName` Services.
|
||||
You can find more information about `ExternalName` resolution in
|
||||
|
@ -444,7 +443,7 @@ You can find more information about `ExternalName` resolution in
|
|||
## Headless Services
|
||||
|
||||
Sometimes you don't need load-balancing and a single Service IP. In
|
||||
this case, you can create what are termed “headless” Services, by explicitly
|
||||
this case, you can create what are termed "headless" Services, by explicitly
|
||||
specifying `"None"` for the cluster IP (`.spec.clusterIP`).
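A minimal sketch of such a Service (the name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # placeholder name
spec:
  clusterIP: None             # this is what makes the Service headless
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```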
|
||||
|
||||
You can use a headless Service to interface with other service discovery mechanisms,
|
||||
|
@ -467,8 +466,8 @@ For headless Services that do not define selectors, the endpoints controller doe
|
|||
not create `Endpoints` records. However, the DNS system looks for and configures
|
||||
either:
|
||||
|
||||
* CNAME records for [`ExternalName`](#externalname)-type Services.
|
||||
* A records for any `Endpoints` that share a name with the Service, for all
|
||||
* CNAME records for [`ExternalName`](#externalname)-type Services.
|
||||
* A records for any `Endpoints` that share a name with the Service, for all
|
||||
other types.
|
||||
|
||||
## Publishing Services (ServiceTypes) {#publishing-services-service-types}
|
||||
|
@ -481,26 +480,26 @@ The default is `ClusterIP`.
|
|||
|
||||
`Type` values and their behaviors are:
|
||||
|
||||
* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
|
||||
* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
|
||||
makes the Service only reachable from within the cluster. This is the
|
||||
default `ServiceType`.
|
||||
* [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
|
||||
* [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
|
||||
(the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service
|
||||
routes, is automatically created. You'll be able to contact the `NodePort` Service,
|
||||
from outside the cluster,
|
||||
by requesting `<NodeIP>:<NodePort>`.
|
||||
* [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
|
||||
* [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
|
||||
provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external
|
||||
load balancer routes, are automatically created.
|
||||
* [`ExternalName`](#externalname): Maps the Service to the contents of the
|
||||
* [`ExternalName`](#externalname): Maps the Service to the contents of the
|
||||
`externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
|
||||
|
||||
with its value. No proxying of any kind is set up.
|
||||
{{< note >}}
|
||||
You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type.
|
||||
{{< note >}}You need either `kube-dns` version 1.7 or CoreDNS version 0.0.8 or higher
|
||||
to use the `ExternalName` type.
|
||||
{{< /note >}}
|
||||
|
||||
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.
|
||||
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules
|
||||
into a single resource as it can expose multiple services under the same IP address.
|
||||
|
||||
### Type NodePort {#nodeport}
|
||||
|
||||
|
@ -509,7 +508,6 @@ allocates a port from a range specified by `--service-node-port-range` flag (def
|
|||
Each node proxies that port (the same port number on every Node) into your Service.
|
||||
Your Service reports the allocated port in its `.spec.ports[*].nodePort` field.
|
||||
|
||||
|
||||
If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10.
|
||||
This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node.
|
||||
|
||||
|
@ -530,6 +528,7 @@ Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
|
|||
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
|
||||
|
||||
For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
|
@ -606,19 +605,21 @@ Specify the assigned IP address as loadBalancerIP. Ensure that you have updated
|
|||
{{< /note >}}
|
||||
|
||||
#### Internal load balancer
|
||||
|
||||
In a mixed environment it is sometimes necessary to route traffic from Services inside the same
|
||||
(virtual) network address block.
|
||||
|
||||
In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.
|
||||
|
||||
You can achieve this by adding one the following annotations to a Service.
|
||||
The annotation to add depends on the cloud Service provider you're using.
|
||||
To set an internal load balancer, add one of the following annotations to your Service
|
||||
depending on the cloud Service provider you're using.
|
||||
|
||||
{{< tabs name="service_tabs" >}}
|
||||
{{% tab name="Default" %}}
|
||||
Select one of the tabs.
|
||||
{{% /tab %}}
|
||||
{{% tab name="GCP" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -627,8 +628,10 @@ metadata:
|
|||
cloud.google.com/load-balancer-type: "Internal"
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="AWS" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -637,8 +640,10 @@ metadata:
|
|||
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Azure" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -647,8 +652,10 @@ metadata:
|
|||
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="IBM Cloud" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -657,8 +664,10 @@ metadata:
|
|||
service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="OpenStack" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -667,8 +676,10 @@ metadata:
|
|||
service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Baidu Cloud" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -677,8 +688,10 @@ metadata:
|
|||
service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Tencent Cloud" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -686,8 +699,10 @@ metadata:
|
|||
service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Alibaba Cloud" %}}
|
||||
|
||||
```yaml
|
||||
[...]
|
||||
metadata:
|
||||
|
@ -695,10 +710,10 @@ metadata:
|
|||
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
|
||||
[...]
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
#### TLS support on AWS {#ssl-support-on-aws}
|
||||
|
||||
For partial TLS / SSL support on clusters running on AWS, you can add three
|
||||
|
@ -823,7 +838,6 @@ to the value of `"true"`. The annotation
|
|||
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
|
||||
also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances.
|
||||
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
|
@ -991,6 +1005,7 @@ spec:
|
|||
type: ExternalName
|
||||
externalName: my.database.example.com
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
|
||||
is intended to specify a canonical DNS name. To hardcode an IP address, consider using
|
||||
|
@ -1045,7 +1060,7 @@ spec:
|
|||
|
||||
## Shortcomings
|
||||
|
||||
Using the userspace proxy for VIPs, work at small to medium scale, but will
|
||||
Using the userspace proxy for VIPs works at small to medium scale, but will
|
||||
not scale to very large clusters with thousands of Services. The
|
||||
[original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107)
|
||||
has more details on this.
|
||||
|
@ -1117,7 +1132,7 @@ connections on it.
|
|||
|
||||
When a client connects to the Service's virtual IP address, the iptables
|
||||
rule kicks in, and redirects the packets to the proxy's own port.
|
||||
The “Service proxy” chooses a backend, and starts proxying traffic from the client to the backend.
|
||||
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
|
||||
|
||||
This means that Service owners can choose any port they want without risk of
|
||||
collision. Clients can simply connect to an IP and port, without being aware
|
||||
|
@ -1173,12 +1188,12 @@ of the Service.
|
|||
|
||||
{{< note >}}
|
||||
You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service
|
||||
to expose HTTP / HTTPS Services.
|
||||
to expose HTTP/HTTPS Services.
|
||||
{{< /note >}}
|
||||
|
||||
### PROXY protocol
|
||||
|
||||
If your cloud provider supports it (eg, [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)),
|
||||
If your cloud provider supports it,
|
||||
you can use a Service in LoadBalancer mode to configure a load balancer outside
|
||||
of Kubernetes itself, that will forward connections prefixed with
|
||||
[PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
|
||||
|
@ -1189,6 +1204,7 @@ incoming connection, similar to this example
|
|||
```
|
||||
PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n
|
||||
```
|
||||
|
||||
followed by the data from the client.
|
||||
|
||||
### SCTP
|
||||
|
@ -1227,13 +1243,8 @@ SCTP is not supported on Windows based nodes.
|
|||
The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
|
||||
{{< /warning >}}
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
|
||||
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
|
||||
* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
|
||||
|
||||
|
||||
|
|
|
@ -33,7 +33,7 @@ from the API group `storage.k8s.io`. A cluster administrator can define as many
|
|||
that provisioner when provisioning.
|
||||
A cluster administrator can define and expose multiple flavors of storage (from
|
||||
the same or different storage systems) within a cluster, each with a custom set
|
||||
of parameters. This design also ensures that end users don’t have to worry
|
||||
of parameters. This design also ensures that end users don't have to worry
|
||||
about the complexity and nuances of how storage is provisioned, but still
|
||||
have the ability to select from multiple storage options.
|
||||
|
||||
|
@ -85,8 +85,8 @@ is deprecated since v1.6. Users now can and should instead use the
|
|||
this field must match the name of a `StorageClass` configured by the
|
||||
administrator (see [below](#enabling-dynamic-provisioning)).
|
||||
|
||||
To select the “fast” storage class, for example, a user would create the
|
||||
following `PersistentVolumeClaim`:
|
||||
To select the "fast" storage class, for example, a user would create the
|
||||
following PersistentVolumeClaim:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
|
|
@ -39,11 +39,11 @@ simplifies application deployment and management.
|
|||
|
||||
Kubernetes supports several different kinds of ephemeral volumes for
|
||||
different purposes:
|
||||
- [emptyDir](/docs/concepts/volumes/#emptydir): empty at Pod startup,
|
||||
- [emptyDir](/docs/concepts/storage/volumes/#emptydir): empty at Pod startup,
|
||||
with storage coming locally from the kubelet base directory (usually
|
||||
the root disk) or RAM
|
||||
- [configMap](/docs/concepts/volumes/#configmap),
|
||||
[downwardAPI](/docs/concepts/volumes/#downwardapi),
|
||||
- [configMap](/docs/concepts/storage/volumes/#configmap),
|
||||
[downwardAPI](/docs/concepts/storage/volumes/#downwardapi),
|
||||
[secret](/docs/concepts/storage/volumes/#secret): inject different
|
||||
kinds of Kubernetes data into a Pod
|
||||
- [CSI ephemeral
|
||||
|
@ -92,7 +92,7 @@ Conceptually, CSI ephemeral volumes are similar to `configMap`,
|
|||
scheduled onto a node. Kubernetes has no concept of rescheduling Pods
|
||||
anymore at this stage. Volume creation has to be unlikely to fail,
|
||||
otherwise Pod startup gets stuck. In particular, [storage capacity
|
||||
aware Pod scheduling](/docs/concepts/storage-capacity/) is *not*
|
||||
aware Pod scheduling](/docs/concepts/storage/storage-capacity/) is *not*
|
||||
supported for these volumes. They are currently also not covered by
|
||||
the storage resource usage limits of a Pod, because that is something
|
||||
that kubelet can only enforce for storage that it manages itself.
|
||||
|
@ -147,7 +147,7 @@ flexible:
|
|||
([snapshotting](/docs/concepts/storage/volume-snapshots/),
|
||||
[cloning](/docs/concepts/storage/volume-pvc-datasource/),
|
||||
[resizing](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims),
|
||||
and [storage capacity tracking](/docs/concepts/storage-capacity/).
|
||||
and [storage capacity tracking](/docs/concepts/storage/storage-capacity/).
|
||||
|
||||
Example:
|
||||
|
||||
|
|
|
@ -174,6 +174,45 @@ spec:
|
|||
|
||||
However, the particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled.
|
||||
|
||||
### Reserving a PersistentVolume
|
||||
|
||||
The control plane can [bind PersistentVolumeClaims to matching PersistentVolumes](#binding) in the
|
||||
cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
|
||||
|
||||
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC.
|
||||
If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its `claimRef` field, then the PersistentVolume and PersistentVolumeClaim will be bound.
|
||||
|
||||
The binding happens regardless of some volume matching criteria, including node affinity.
|
||||
The control plane still checks that [storage class](/docs/concepts/storage/storage-classes/), access modes, and requested storage size are valid.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: foo-pvc
|
||||
namespace: foo
|
||||
spec:
|
||||
volumeName: foo-pv
|
||||
...
|
||||
```
|
||||
|
||||
This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify the relevant PersistentVolumeClaim in the `claimRef` field of the PV so that other PVCs can not bind to it.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
name: foo-pv
|
||||
spec:
|
||||
claimRef:
|
||||
name: foo-pvc
|
||||
namespace: foo
|
||||
...
|
||||
```
|
||||
|
||||
This is useful if you want to consume PersistentVolumes that have their `persistentVolumeReclaimPolicy` set
|
||||
to `Retain`, including cases where you are reusing an existing PV.
|
||||
|
||||
### Expanding Persistent Volumes Claims
|
||||
|
||||
{{< feature-state for_k8s_version="v1.11" state="beta" >}}
|
||||
|
|
|
@ -415,6 +415,21 @@ This internal provisioner of OpenStack is deprecated. Please use [the external c
|
|||
|
||||
### vSphere
|
||||
|
||||
There are two types of provisioners for vSphere storage classes:
|
||||
|
||||
- [CSI provisioner](#vsphere-provisioner-csi): `csi.vsphere.vmware.com`
|
||||
- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume`
|
||||
|
||||
In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi). For more information on the CSI provisioner, see [Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and [vSphereVolume CSI migration](/docs/concepts/storage/volumes/#csi-migration-5).
|
||||
|
||||
#### CSI Provisioner {#vsphere-provisioner-csi}
|
||||
|
||||
The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters. For an example, refer to the [vSphere CSI repository](https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/example/vanilla-k8s-file-driver/example-sc.yaml).
|
||||
|
||||
#### vCP Provisioner
|
||||
|
||||
The following examples use the VMware Cloud Provider (vCP) StorageClass provisioner.
|
||||
|
||||
1. Create a StorageClass with a user specified disk format.
|
||||
|
||||
```yaml
|
||||
|
@ -819,4 +834,3 @@ Delaying volume binding allows the scheduler to consider all of a Pod's
|
|||
scheduling constraints when choosing an appropriate PersistentVolume for a
|
||||
PersistentVolumeClaim.
|
||||
|
||||
|
||||
|
|
|
@ -215,8 +215,7 @@ See the [CephFS example](https://github.com/kubernetes/examples/tree/{{< param "
|
|||
### cinder {#cinder}
|
||||
|
||||
{{< note >}}
|
||||
Prerequisite: Kubernetes with OpenStack Cloud Provider configured. For cloudprovider
|
||||
configuration please refer [cloud provider openstack](/docs/concepts/cluster-administration/cloud-providers/#openstack).
|
||||
Prerequisite: Kubernetes with OpenStack Cloud Provider configured.
|
||||
{{< /note >}}
|
||||
|
||||
`cinder` is used to mount OpenStack Cinder Volume into your Pod.
|
||||
|
@ -757,8 +756,8 @@ See the [NFS example](https://github.com/kubernetes/examples/tree/{{< param "git
|
|||
### persistentVolumeClaim {#persistentvolumeclaim}
|
||||
|
||||
A `persistentVolumeClaim` volume is used to mount a
|
||||
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumes are a
|
||||
way for users to "claim" durable storage (such as a GCE PersistentDisk or an
|
||||
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumeClaims
|
||||
are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an
|
||||
iSCSI volume) without knowing the details of the particular cloud environment.
|
||||
|
||||
See the [PersistentVolumes example](/docs/concepts/storage/persistent-volumes/) for more
|
||||
|
|
|
@ -341,7 +341,7 @@ kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50
|
|||
[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update
|
||||
them and their Pods via declarative, server-side rolling updates.
|
||||
While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod
|
||||
creation, deletion and updates. When you use Deployments you don’t have to worry about managing the ReplicaSets that
|
||||
creation, deletion and updates. When you use Deployments you don't have to worry about managing the ReplicaSets that
|
||||
they create. Deployments own and manage their ReplicaSets.
|
||||
As such, it is recommended to use Deployments when you want ReplicaSets.
|
||||
|
||||
|
|
|
@ -254,8 +254,8 @@ API object can be found at:
|
|||
### ReplicaSet
|
||||
|
||||
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
|
||||
It’s mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
|
||||
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don’t require updates at all.
|
||||
It's mainly used by [Deployment](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
|
||||
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.
|
||||
|
||||
|
||||
### Deployment (Recommended)
|
||||
|
|
|
@ -217,7 +217,7 @@ or POSIX shared memory. Containers in different Pods have distinct IP addresses
|
|||
and can not communicate by IPC without
|
||||
[special configuration](/docs/concepts/policy/pod-security-policy/).
|
||||
Containers that want to interact with a container running in a different Pod can
|
||||
use IP networking to comunicate.
|
||||
use IP networking to communicate.
|
||||
|
||||
Containers within the Pod see the system hostname as being the same as the configured
|
||||
`name` for the Pod. There's more about this in the [networking](/docs/concepts/cluster-administration/networking/)
|
||||
|
@ -254,7 +254,7 @@ but cannot be controlled from there.
|
|||
|
||||
* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
|
||||
* Learn about [PodPresets](/docs/concepts/workloads/pods/podpreset/).
|
||||
* Lean about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
|
||||
* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
|
||||
configure different Pods with different container runtime configurations.
|
||||
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
|
||||
* Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions.
|
||||
|
|
|
@ -45,7 +45,7 @@ higher-level abstraction, called a
|
|||
managing the relatively disposable Pod instances.
|
||||
|
||||
A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead,
|
||||
that Pod can be replaced by a new, near-identical Pod, with even the same name i
|
||||
that Pod can be replaced by a new, near-identical Pod, with even the same name if
|
||||
desired, but with a different UID.
|
||||
|
||||
When something is said to have the same lifetime as a Pod, such as a
|
||||
|
@ -99,7 +99,7 @@ assigns a Pod to a Node, the kubelet starts creating containers for that Pod
|
|||
using a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
|
||||
There are three possible container states: `Waiting`, `Running`, and `Terminated`.
|
||||
|
||||
To the check state of a Pod's containers, you can use
|
||||
To check the state of a Pod's containers, you can use
|
||||
`kubectl describe pod <name-of-pod>`. The output shows the state for each container
|
||||
within that Pod.
|
||||
|
||||
|
@ -107,7 +107,7 @@ Each state has a specific meaning:
|
|||
|
||||
### `Waiting` {#container-state-waiting}
|
||||
|
||||
If a container is not in either the `Running` or `Terminated` state, it `Waiting`.
|
||||
If a container is not in either the `Running` or `Terminated` state, it is `Waiting`.
|
||||
A container in the `Waiting` state is still running the operations it requires in
|
||||
order to complete start up: for example, pulling the container image from a container
|
||||
image registry, or applying {{< glossary_tooltip text="Secret" term_id="secret" >}}
|
||||
|
@ -118,7 +118,7 @@ a Reason field to summarize why the container is in that state.
|
|||
### `Running` {#container-state-running}
|
||||
|
||||
The `Running` status indicates that a container is executing without issues. If there
|
||||
was a `postStart` hook configured, it has already executed and executed. When you use
|
||||
was a `postStart` hook configured, it has already executed and finished. When you use
|
||||
`kubectl` to query a Pod with a container that is `Running`, you also see information
|
||||
about when the container entered the `Running` state.
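As a sketch of what a `postStart` hook looks like in a Pod spec (the hook command and image are placeholders), and consistent with the description above that the hook has already finished by the time the container is reported as `Running`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo           # placeholder name for this sketch
spec:
  containers:
  - name: app
    image: nginx:1.19            # placeholder image
    lifecycle:
      postStart:
        exec:
          # runs right after the container is created; the container is only
          # reported as Running once this handler has completed
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
```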
|
||||
|
||||
|
|
|
@ -30,13 +30,23 @@ node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB
|
|||
|
||||
Then the cluster is logically viewed as below:
|
||||
|
||||
```
|
||||
+---------------+---------------+
|
||||
| zoneA | zoneB |
|
||||
+-------+-------+-------+-------+
|
||||
| node1 | node2 | node3 | node4 |
|
||||
+-------+-------+-------+-------+
|
||||
```
|
||||
{{<mermaid>}}
|
||||
graph TB
|
||||
subgraph "zoneB"
|
||||
n3(Node3)
|
||||
n4(Node4)
|
||||
end
|
||||
subgraph "zoneA"
|
||||
n1(Node1)
|
||||
n2(Node2)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n1,n2,n3,n4 k8s;
|
||||
class zoneA,zoneB cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/kubernetes-api/labels-annotations-taints/) that are created and populated automatically on most clusters.
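For example, on clusters where those labels are populated automatically, a node typically already carries zone information that can be used as the `topologyKey` (a sketch; the exact label keys and values depend on your provider and Kubernetes version):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node1                                # placeholder node name
  labels:
    kubernetes.io/hostname: node1            # well-known label set by the kubelet
    topology.kubernetes.io/zone: zoneA       # well-known zone label on newer clusters
    # older clusters may carry failure-domain.beta.kubernetes.io/zone instead
```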
|
||||
|
||||
|
@ -80,55 +90,103 @@ You can read more about this field by running `kubectl explain Pod.spec.topology
|
|||
|
||||
### Example: One TopologySpreadConstraint
|
||||
|
||||
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
|
||||
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
|
||||
|
||||
```
|
||||
+---------------+---------------+
|
||||
| zoneA | zoneB |
|
||||
+-------+-------+-------+-------+
|
||||
| node1 | node2 | node3 | node4 |
|
||||
+-------+-------+-------+-------+
|
||||
| P | P | P | |
|
||||
+-------+-------+-------+-------+
|
||||
```
|
||||
{{<mermaid>}}
|
||||
graph BT
|
||||
subgraph "zoneB"
|
||||
p3(Pod) --> n3(Node3)
|
||||
n4(Node4)
|
||||
end
|
||||
subgraph "zoneA"
|
||||
p1(Pod) --> n1(Node1)
|
||||
p2(Pod) --> n2(Node2)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n1,n2,n3,n4,p1,p2,p3 k8s;
|
||||
class zoneA,zoneB cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:
|
||||
|
||||
{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
|
||||
|
||||
`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:<any value>" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can’t satisfy the constraint.
|
||||
`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:<any value>" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can't satisfy the constraint.
|
||||
|
||||
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
|
||||
|
||||
```
|
||||
+---------------+---------------+ +---------------+---------------+
|
||||
| zoneA | zoneB | | zoneA | zoneB |
|
||||
+-------+-------+-------+-------+ +-------+-------+-------+-------+
|
||||
| node1 | node2 | node3 | node4 | OR | node1 | node2 | node3 | node4 |
|
||||
+-------+-------+-------+-------+ +-------+-------+-------+-------+
|
||||
| P | P | P | P | | P | P | P P | |
|
||||
+-------+-------+-------+-------+ +-------+-------+-------+-------+
|
||||
```
|
||||
{{<mermaid>}}
|
||||
graph BT
|
||||
subgraph "zoneB"
|
||||
p3(Pod) --> n3(Node3)
|
||||
p4(mypod) --> n4(Node4)
|
||||
end
|
||||
subgraph "zoneA"
|
||||
p1(Pod) --> n1(Node1)
|
||||
p2(Pod) --> n2(Node2)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n1,n2,n3,n4,p1,p2,p3 k8s;
|
||||
class p4 plain;
|
||||
class zoneA,zoneB cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
OR
|
||||
|
||||
{{<mermaid>}}
|
||||
graph BT
|
||||
subgraph "zoneB"
|
||||
p3(Pod) --> n3(Node3)
|
||||
p4(mypod) --> n3
|
||||
n4(Node4)
|
||||
end
|
||||
subgraph "zoneA"
|
||||
p1(Pod) --> n1(Node1)
|
||||
p2(Pod) --> n2(Node2)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n1,n2,n3,n4,p1,p2,p3 k8s;
|
||||
class p4 plain;
|
||||
class zoneA,zoneB cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
You can tweak the Pod spec to meet various kinds of requirements:
|
||||
|
||||
- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed onto "zoneA" as well.
|
||||
- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4".
|
||||
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, it’s preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preferability is jointly normalized with other internal scheduling priorities like resource usage ratio, etc.)
|
||||
- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod is always schedulable (assuming other scheduling requirements are satisfied). However, the scheduler still prefers to place it onto the topology domain which has fewer matching Pods. (Be aware that this preference is jointly normalized with other internal scheduling priorities, such as the resource usage ratio.)
|
||||
|
||||
### Example: Multiple TopologySpreadConstraints
|
||||
|
||||
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
|
||||
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
|
||||
|
||||
```
|
||||
+---------------+---------------+
|
||||
| zoneA | zoneB |
|
||||
+-------+-------+-------+-------+
|
||||
| node1 | node2 | node3 | node4 |
|
||||
+-------+-------+-------+-------+
|
||||
| P | P | P | |
|
||||
+-------+-------+-------+-------+
|
||||
```
|
||||
{{<mermaid>}}
|
||||
graph BT
|
||||
subgraph "zoneB"
|
||||
p3(Pod) --> n3(Node3)
|
||||
n4(Node4)
|
||||
end
|
||||
subgraph "zoneA"
|
||||
p1(Pod) --> n1(Node1)
|
||||
p2(Pod) --> n2(Node2)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n1,n2,n3,n4,p1,p2,p3 k8s;
|
||||
class p4 plain;
|
||||
class zoneA,zoneB cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node:
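As a rough sketch of a spec with two such constraints (Pod name and image are placeholders, as before):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone                 # first constraint: spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  - maxSkew: 1
    topologyKey: node                 # second constraint: spread across individual nodes
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # placeholder image
```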
|
||||
|
||||
|
@ -138,15 +196,24 @@ In this case, to match the first constraint, the incoming Pod can only be placed
|
|||
|
||||
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
|
||||
|
||||
```
|
||||
+---------------+-------+
|
||||
| zoneA | zoneB |
|
||||
+-------+-------+-------+
|
||||
| node1 | node2 | node3 |
|
||||
+-------+-------+-------+
|
||||
| P P | P | P P |
|
||||
+-------+-------+-------+
|
||||
```
|
||||
{{<mermaid>}}
|
||||
graph BT
|
||||
subgraph "zoneB"
|
||||
p4(Pod) --> n3(Node3)
|
||||
p5(Pod) --> n3
|
||||
end
|
||||
subgraph "zoneA"
|
||||
p1(Pod) --> n1(Node1)
|
||||
p2(Pod) --> n1
|
||||
p3(Pod) --> n2(Node2)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
|
||||
class zoneA,zoneB cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.
|
||||
|
||||
|
@ -163,21 +230,43 @@ There are some implicit conventions worth noting here:
|
|||
1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA".
|
||||
2. the incoming Pod has no chance of being scheduled onto such nodes - in the above example, suppose a "node5" carrying the label `{zone-typo: zoneC}` joins the cluster; it will be bypassed due to the absence of the label key "zone".
|
||||
|
||||
- Be aware of what will happen if the incoming Pod’s `topologySpreadConstraints[*].labelSelector` doesn’t match its own labels. In the above example, if we remove the incoming Pod’s labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it’s still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload’s `topologySpreadConstraints[*].labelSelector` to match its own labels.
|
||||
- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed onto "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - zoneA still has 2 Pods which hold the label {foo:bar}, and zoneB has 1 Pod which holds the label {foo:bar}. So if this is not what you expect, we recommend that the workload's `topologySpreadConstraints[*].labelSelector` match its own labels.
|
||||
|
||||
- If the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined, nodes not matching them will be bypassed.
|
||||
|
||||
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
|
||||
|
||||
```
|
||||
+---------------+---------------+-------+
|
||||
| zoneA | zoneB | zoneC |
|
||||
+-------+-------+-------+-------+-------+
|
||||
| node1 | node2 | node3 | node4 | node5 |
|
||||
+-------+-------+-------+-------+-------+
|
||||
| P | P | P | | |
|
||||
+-------+-------+-------+-------+-------+
|
||||
```
|
||||
{{<mermaid>}}
|
||||
graph BT
|
||||
subgraph "zoneB"
|
||||
p3(Pod) --> n3(Node3)
|
||||
n4(Node4)
|
||||
end
|
||||
subgraph "zoneA"
|
||||
p1(Pod) --> n1(Node1)
|
||||
p2(Pod) --> n2(Node2)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n1,n2,n3,n4,p1,p2,p3 k8s;
|
||||
class p4 plain;
|
||||
class zoneA,zoneB cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
{{<mermaid>}}
|
||||
graph BT
|
||||
subgraph "zoneC"
|
||||
n5(Node5)
|
||||
end
|
||||
|
||||
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
|
||||
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
|
||||
class n5 k8s;
|
||||
class zoneC cluster;
|
||||
{{< /mermaid >}}
|
||||
|
||||
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
|
||||
|
||||
|
|
|
@ -184,8 +184,8 @@ Begin and end meetings on time.
|
|||
|
||||
### Recording meetings on Zoom
|
||||
|
||||
When you’re ready to start the recording, click Record to Cloud.
|
||||
When you're ready to start the recording, click Record to Cloud.
|
||||
|
||||
When you’re ready to stop recording, click Stop.
|
||||
When you're ready to stop recording, click Stop.
|
||||
|
||||
The video uploads automatically to YouTube.
|
||||
|
|
|
@ -127,7 +127,7 @@ Monitor your cherry-pick pull request until it is merged into the release branch
|
|||
|
||||
{{< note >}}
|
||||
Proposing a cherry pick requires that you have permission to set a label and a
|
||||
milestone in your pull request. If you don’t have those permissions, you will
|
||||
milestone in your pull request. If you don't have those permissions, you will
|
||||
need to work with someone who can set the label and milestone for you.
|
||||
{{< /note >}}
|
||||
|
||||
|
|
|
@ -20,7 +20,7 @@ Each day in a week-long shift as PR Wrangler:
|
|||
- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality and adherence to the [Style](/docs/contribute/style/style-guide/) and [Content](/docs/contribute/style/content-guide/) guides.
|
||||
- Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`). Review as many PRs as you can.
|
||||
- Make sure PR contributors sign the [CLA](https://github.com/kubernetes/community/blob/master/CLA.md).
|
||||
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors that haven’t signed the CLA to do so.
|
||||
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors that haven't signed the CLA to do so.
|
||||
- Provide feedback on changes and ask for technical reviews from members of other SIGs.
|
||||
- Provide inline suggestions on the PR for the proposed content changes.
|
||||
- If you need to verify content, comment on the PR and request more details.
|
||||
|
|
|
@ -90,19 +90,39 @@ Renders to:
|
|||
|
||||
## Glossary
|
||||
|
||||
There are two glossary tooltips.
|
||||
|
||||
You can reference glossary terms with an inclusion that automatically updates and replaces content with the relevant links from [our glossary](/docs/reference/glossary/). When the term is moused-over by someone
|
||||
using the online documentation, the glossary entry displays a tooltip.
|
||||
|
||||
As well as inclusions with tooltips, you can reuse the definitions from the glossary in
|
||||
page content.
|
||||
|
||||
The raw data for glossary terms is stored at [https://github.com/kubernetes/website/tree/master/content/en/docs/reference/glossary](https://github.com/kubernetes/website/tree/master/content/en/docs/reference/glossary), with a content file for each glossary term.
|
||||
|
||||
### Glossary Demo
|
||||
### Glossary demo
|
||||
|
||||
For example, the following include within the markdown renders to {{< glossary_tooltip text="cluster" term_id="cluster" >}} with a tooltip:
|
||||
|
||||
```liquid
|
||||
```
|
||||
{{</* glossary_tooltip text="cluster" term_id="cluster" */>}}
|
||||
```
|
||||
|
||||
Here's a short glossary definition:
|
||||
|
||||
```
|
||||
{{</* glossary_definition prepend="A cluster is" term_id="cluster" length="short" */>}}
|
||||
```
|
||||
which renders as:
|
||||
{{< glossary_definition prepend="A cluster is" term_id="cluster" length="short" >}}
|
||||
|
||||
You can also include a full definition:
|
||||
```
|
||||
{{</* glossary_definition term_id="cluster" length="all" */>}}
|
||||
```
|
||||
which renders as:
|
||||
{{< glossary_definition term_id="cluster" length="all" >}}
|
||||
|
||||
## Table captions
|
||||
|
||||
You can make tables more accessible to screen readers by adding a table caption. To add a [caption](https://www.w3schools.com/tags/tag_caption.asp) to a table, enclose the table with a `table` shortcode and specify the caption with the `caption` parameter.
|
||||
|
|
|
@ -42,12 +42,9 @@ The English-language documentation uses U.S. English spelling and grammar.
|
|||
|
||||
## Documentation formatting standards
|
||||
|
||||
### Use camel case for API objects
|
||||
### Use upper camel case for API objects
|
||||
|
||||
When you refer to an API object, use the same uppercase and lowercase letters
|
||||
that are used in the actual object name. Typically, the names of API
|
||||
objects use
|
||||
[camel case](https://en.wikipedia.org/wiki/Camel_case).
|
||||
When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
|
||||
|
||||
Don't split the API object name into separate words. For example, use
|
||||
PodTemplateList, not Pod Template List.
|
||||
|
@ -58,9 +55,9 @@ leads to an awkward construction.
|
|||
{{< table caption = "Do and Don't - API objects" >}}
|
||||
Do | Don't
|
||||
:--| :-----
|
||||
The Pod has two containers. | The pod has two containers.
|
||||
The Deployment is responsible for ... | The Deployment object is responsible for ...
|
||||
A PodList is a list of Pods. | A Pod List is a list of pods.
|
||||
The pod has two containers. | The Pod has two containers.
|
||||
The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ...
|
||||
A PodList is a list of pods. | A Pod List is a list of pods.
|
||||
The two ContainerPorts ... | The two ContainerPort objects ...
|
||||
The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
|
||||
{{< /table >}}
|
||||
|
@ -71,7 +68,7 @@ The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds
|
|||
Use angle brackets for placeholders. Tell the reader what a placeholder
|
||||
represents.
|
||||
|
||||
1. Display information about a Pod:
|
||||
1. Display information about a pod:
|
||||
|
||||
kubectl describe pod <pod-name> -n <namespace>
|
||||
|
||||
|
@ -116,7 +113,7 @@ The copy is called a "fork". | The copy is called a "fork."
|
|||
|
||||
## Inline code formatting
|
||||
|
||||
### Use code style for inline code and commands
|
||||
### Use code style for inline code, commands, and API objects
|
||||
|
||||
For inline code in an HTML document, use the `<code>` tag. In a Markdown
|
||||
document, use the backtick (`` ` ``).
|
||||
|
@ -124,7 +121,9 @@ document, use the backtick (`` ` ``).
|
|||
{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
|
||||
Do | Don't
|
||||
:--| :-----
|
||||
The `kubectl run`command creates a Pod. | The "kubectl run" command creates a Pod.
|
||||
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
|
||||
The kubelet on each node acquires a `Lease`… | The kubelet on each node acquires a lease…
|
||||
A `PersistentVolume` represents durable storage… | A Persistent Volume represents durable storage…
|
||||
For declarative management, use `kubectl apply`. | For declarative management, use "kubectl apply".
|
||||
Enclose code samples with triple backticks. (\`\`\`)| Enclose code samples with any other syntax.
|
||||
Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**.
|
||||
|
@ -201,7 +200,7 @@ kubectl get pods | $ kubectl get pods
|
|||
|
||||
### Separate commands from output
|
||||
|
||||
Verify that the Pod is running on your chosen node:
|
||||
Verify that the pod is running on your chosen node:
|
||||
|
||||
kubectl get pods --output=wide
|
||||
|
||||
|
@ -447,7 +446,7 @@ Use three hyphens (`---`) to create a horizontal rule. Use horizontal rules for
|
|||
{{< table caption = "Do and Don't - Links" >}}
|
||||
Do | Don't
|
||||
:--| :-----
|
||||
Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See <a href="#check-required-ports">Check required ports</a> for more details. | Use ambiguous terms such as “click here”. For example: Certain ports are open on your machines. See <a href="#check-required-ports">here</a> for more details.
|
||||
Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See <a href="#check-required-ports">Check required ports</a> for more details. | Use ambiguous terms such as "click here". For example: Certain ports are open on your machines. See <a href="#check-required-ports">here</a> for more details.
|
||||
Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)` and the output is [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions). | Write HTML-style links: `<a href="/media/examples/link-element-example.css" target="_blank">Visit our tutorial!</a>`, or create links that open in new tabs or windows. For example: `[example website](https://example.com){target="_blank"}`
|
||||
{{< /table >}}
|
||||
|
||||
|
@ -513,7 +512,7 @@ Do | Don't
|
|||
:--| :-----
|
||||
To create a ReplicaSet, ... | In order to create a ReplicaSet, ...
|
||||
See the configuration file. | Please see the configuration file.
|
||||
View the Pods. | With this next command, we'll view the Pods.
|
||||
View the pods. | With this next command, we'll view the pods.
|
||||
{{< /table >}}
|
||||
|
||||
### Address the reader as "you"
|
||||
|
@ -552,7 +551,7 @@ Do | Don't
|
|||
:--| :-----
|
||||
Version 1.4 includes ... | In version 1.4, we have added ...
|
||||
Kubernetes provides a new feature for ... | We provide a new feature ...
|
||||
This page teaches you how to use Pods. | In this page, we are going to learn about Pods.
|
||||
This page teaches you how to use pods. | In this page, we are going to learn about pods.
|
||||
{{< /table >}}
|
||||
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ of the Kubernetes community open a pull request with changes to resolve the issu
|
|||
|
||||
If you want to suggest improvements to existing content, or notice an error, then open an issue.
|
||||
|
||||
1. Go to the bottom of the page and click the **Create an Issue** button. This redirects you
|
||||
1. Click the **Create an issue** link on the right sidebar. This redirects you
|
||||
to a GitHub issue page pre-populated with some headers.
|
||||
2. Describe the issue or suggestion for improvement. Provide as many details as you can.
|
||||
3. Click **Submit new issue**.
|
||||
|
|
|
@ -1,2 +0,0 @@
|
|||
Tools for Kubernetes docs contributors. View `README.md` files in
|
||||
subdirectories for more info.
|
|
@ -1,51 +0,0 @@
|
|||
# Snippets for Atom
|
||||
|
||||
Snippets are bits of text that get inserted into your editor, to save typing and
|
||||
reduce syntax errors. The snippets provided in `atom-snippets.cson` are scoped to
|
||||
only work on Markdown files within Atom.
|
||||
|
||||
## Installation
|
||||
|
||||
Copy the contents of the `atom-snippets.cson` file into your existing
|
||||
`~/.atom/snippets.cson`. **Do not replace your existing file.**
|
||||
|
||||
You do not need to restart Atom.
|
||||
|
||||
## Usage
|
||||
|
||||
Have a look through `atom-snippets.cson` and note the titles and `prefix` values
|
||||
of the snippets.
|
||||
|
||||
You can trigger a given snippet in one of two ways:
|
||||
|
||||
- By typing the snippet's `prefix` and pressing the `<TAB>` key
|
||||
- By searching for the snippet's title in **Packages / Snippets / Available**
|
||||
|
||||
For example, open a Markdown file and type `anote` and press `<TAB>`. A blank
|
||||
note is added, with the correct Hugo shortcodes.
|
||||
|
||||
A snippet can insert a single line or multiple lines of text. Some snippets
|
||||
have placeholder values. To get to the next placeholder, press `<TAB>` again.
|
||||
|
||||
Some of the snippets only insert partially-formed Markdown or Hugo syntax.
|
||||
For instance, `coverview` inserts the start of a concept overview tag, while
|
||||
`cclose` inserts a close-capture tag. This is because every type of capture
|
||||
needs a capture-close tag.
|
||||
|
||||
## Creating new topics using snippets
|
||||
|
||||
To create a new concept, task, or tutorial from a blank file, use one of the
|
||||
following:
|
||||
|
||||
- `newconcept`
|
||||
- `newtask`
|
||||
- `newtutorial`
|
||||
|
||||
Placeholder text is included.
|
||||
|
||||
## Submitting new snippets
|
||||
|
||||
1. Develop the snippet locally and verify that it works as expected.
|
||||
2. Copy the template's code into the `atom-snippets.cson` file on GitHub. Raise a
|
||||
pull request, and ask for review from another Atom user in `#sig-docs` on
|
||||
Kubernetes Slack.
|
|
@ -1,226 +0,0 @@
|
|||
# Your snippets
|
||||
#
|
||||
# Atom snippets allow you to enter a simple prefix in the editor and hit tab to
|
||||
# expand the prefix into a larger code block with templated values.
|
||||
#
|
||||
# You can create a new snippet in this file by typing "snip" and then hitting
|
||||
# tab.
|
||||
#
|
||||
# An example CoffeeScript snippet to expand log to console.log:
|
||||
#
|
||||
# '.source.coffee':
|
||||
# 'Console log':
|
||||
# 'prefix': 'log'
|
||||
# 'body': 'console.log $1'
|
||||
#
|
||||
# Each scope (e.g. '.source.coffee' above) can only be declared once.
|
||||
#
|
||||
# This file uses CoffeeScript Object Notation (CSON).
|
||||
# If you are unfamiliar with CSON, you can read more about it in the
|
||||
# Atom Flight Manual:
|
||||
# http://flight-manual.atom.io/using-atom/sections/basic-customization/#_cson
|
||||
|
||||
'.source.gfm':
|
||||
|
||||
# Capture variables for concept template
|
||||
# For full concept template see 'newconcept' below
|
||||
'Insert concept template':
|
||||
'prefix': 'ctemplate'
|
||||
'body': 'content_template: templates/concept'
|
||||
'Insert concept overview':
|
||||
'prefix': 'coverview'
|
||||
'body': '{{% capture overview %}}'
|
||||
'Insert concept body':
|
||||
'prefix': 'cbody'
|
||||
'body': '{{% capture body %}}'
|
||||
'Insert concept whatsnext':
|
||||
'prefix': 'cnext'
|
||||
'body': '{{% capture whatsnext %}}'
|
||||
|
||||
|
||||
# Capture variables for task template
|
||||
# For full task template see 'newtask' below
|
||||
'Insert task template':
|
||||
'prefix': 'ttemplate'
|
||||
'body': 'content_template: templates/task'
|
||||
'Insert task overview':
|
||||
'prefix': 'toverview'
|
||||
'body': '{{% capture overview %}}'
|
||||
'Insert task prerequisites':
|
||||
'prefix': 'tprereq'
|
||||
'body': """
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
{{% /capture %}}
|
||||
"""
|
||||
'Insert task steps':
|
||||
'prefix': 'tsteps'
|
||||
'body': '{{% capture steps %}}'
|
||||
'Insert task discussion':
|
||||
'prefix': 'tdiscuss'
|
||||
'body': '{{% capture discussion %}}'
|
||||
|
||||
|
||||
# Capture variables for tutorial template
|
||||
# For full tutorial template see 'newtutorial' below
|
||||
'Insert tutorial template':
|
||||
'prefix': 'tutemplate'
|
||||
'body': 'content_template: templates/tutorial'
|
||||
'Insert tutorial overview':
|
||||
'prefix': 'tuoverview'
|
||||
'body': '{{% capture overview %}}'
|
||||
'Insert tutorial prerequisites':
|
||||
'prefix': 'tuprereq'
|
||||
'body': '{{% capture prerequisites %}}'
|
||||
'Insert tutorial objectives':
|
||||
'prefix': 'tuobjectives'
|
||||
'body': '{{% capture objectives %}}'
|
||||
'Insert tutorial lesson content':
|
||||
'prefix': 'tulesson'
|
||||
'body': '{{% capture lessoncontent %}}'
|
||||
'Insert tutorial whatsnext':
|
||||
'prefix': 'tunext'
|
||||
'body': '{{% capture whatsnext %}}'
|
||||
'Close capture':
|
||||
'prefix': 'ccapture'
|
||||
'body': '{{% /capture %}}'
|
||||
'Insert note':
|
||||
'prefix': 'anote'
|
||||
'body': """
|
||||
{{< note >}}
|
||||
$1
|
||||
{{< /note >}}
|
||||
"""
|
||||
|
||||
# Admonitions
|
||||
'Insert caution':
|
||||
'prefix': 'acaution'
|
||||
'body': """
|
||||
{{< caution >}}
|
||||
$1
|
||||
{{< /caution >}}
|
||||
"""
|
||||
'Insert warning':
|
||||
'prefix': 'awarning'
|
||||
'body': """
|
||||
{{< warning >}}
|
||||
$1
|
||||
{{< /warning >}}
|
||||
"""
|
||||
|
||||
# Misc one-liners
|
||||
'Insert TOC':
|
||||
'prefix': 'toc'
|
||||
'body': '{{< toc >}}'
|
||||
'Insert code from file':
|
||||
'prefix': 'codefile'
|
||||
'body': '{{< codenew file="$1" >}}'
|
||||
'Insert feature state':
|
||||
'prefix': 'fstate'
|
||||
'body': '{{< feature-state for_k8s_version="$1" state="$2" >}}'
|
||||
'Insert figure':
|
||||
'prefix': 'fig'
|
||||
'body': '{{< figure src="$1" title="$2" alt="$3" caption="$4" >}}'
|
||||
'Insert Youtube link':
|
||||
'prefix': 'yt'
|
||||
'body': '{{< youtube $1 >}}'
|
||||
|
||||
|
||||
# Full concept template
|
||||
'Create new concept':
|
||||
'prefix': 'newconcept'
|
||||
'body': """
|
||||
---
|
||||
reviewers:
|
||||
- ${1:"github-id-or-group"}
|
||||
title: ${2:"topic-title"}
|
||||
content_template: templates/concept
|
||||
---
|
||||
{{% capture overview %}}
|
||||
${3:"overview-content"}
|
||||
{{% /capture %}}
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
{{% capture body %}}
|
||||
${4:"h2-heading-per-subtopic"}
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
${5:"next-steps-or-delete"}
|
||||
{{% /capture %}}
|
||||
"""
|
||||
|
||||
|
||||
# Full task template
|
||||
'Create new task':
|
||||
'prefix': 'newtask'
|
||||
'body': """
|
||||
---
|
||||
reviewers:
|
||||
- ${1:"github-id-or-group"}
|
||||
title: ${2:"topic-title"}
|
||||
content_template: templates/task
|
||||
---
|
||||
{{% capture overview %}}
|
||||
${3:"overview-content"}
|
||||
{{% /capture %}}
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
${4:"additional-prereqs-or-delete"}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
${5:"h2-heading-per-step"}
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture discussion %}}
|
||||
${6:"task-discussion-or-delete"}
|
||||
{{% /capture %}}
|
||||
"""
|
||||
|
||||
# Full tutorial template
|
||||
'Create new tutorial':
|
||||
'prefix': 'newtutorial'
|
||||
'body': """
|
||||
---
|
||||
reviewers:
|
||||
- ${1:"github-id-or-group"}
|
||||
title: ${2:"topic-title"}
|
||||
content_template: templates/tutorial
|
||||
---
|
||||
{{% capture overview %}}
|
||||
${3:"overview-content"}
|
||||
{{% /capture %}}
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
${4:"additional-prereqs-or-delete"}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture objectives %}}
|
||||
${5:"tutorial-objectives"}
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture lessoncontent %}}
|
||||
${6:"lesson-content"}
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
${7:"next-steps-or-delete"}
|
||||
{{% /capture %}}
|
||||
"""
|
||||
|
|
@ -53,7 +53,7 @@ cards:
|
|||
button_path: /docs/reference
|
||||
- name: contribute
|
||||
title: Contribute to the docs
|
||||
description: Anyone can contribute, whether you’re new to the project or you’ve been around a long time.
|
||||
description: Anyone can contribute, whether you're new to the project or you've been around a long time.
|
||||
button: Contribute to the docs
|
||||
button_path: /docs/contribute
|
||||
- name: release-notes
|
||||
|
|
|
@ -1,30 +1,12 @@
|
|||
---
|
||||
title: Supported Versions of the Kubernetes Documentation
|
||||
content_type: concept
|
||||
title: Available Documentation Versions
|
||||
content_type: custom
|
||||
layout: supported-versions
|
||||
card:
|
||||
name: about
|
||||
weight: 10
|
||||
title: Supported Versions of the Documentation
|
||||
title: Available Documentation Versions
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
This website contains documentation for the current version of Kubernetes
|
||||
and the four previous versions of Kubernetes.
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Current version
|
||||
|
||||
The current version is
|
||||
[{{< param "version" >}}](/).
|
||||
|
||||
## Previous versions
|
||||
|
||||
{{< versions-other >}}
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -163,10 +163,11 @@ storage classes and how to mark a storage class as default.
|
|||
### DefaultTolerationSeconds {#defaulttolerationseconds}
|
||||
|
||||
This admission controller sets the default forgiveness toleration for pods to tolerate
|
||||
the taints `notready:NoExecute` and `unreachable:NoExecute` for 5 minutes,
|
||||
if the pods don't already have toleration for taints
|
||||
`node.kubernetes.io/not-ready:NoExecute` or
|
||||
`node.alpha.kubernetes.io/unreachable:NoExecute`.
|
||||
the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters
|
||||
`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
|
||||
have toleration for taints `node.kubernetes.io/not-ready:NoExecute` or
|
||||
`node.kubernetes.io/unreachable:NoExecute`.
|
||||
The default value for `default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` is 5 minutes.
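As a sketch of the effect, the tolerations this admission controller injects into a Pod spec (when not already present) are equivalent to the following, with `tolerationSeconds` taken from those flags:

```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300        # default-not-ready-toleration-seconds (5 minutes)
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300        # default-unreachable-toleration-seconds (5 minutes)
```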
|
||||
|
||||
### DenyExecOnPrivileged {#denyexeconprivileged}
|
||||
|
||||
|
@ -202,8 +203,6 @@ is recommended instead.
|
|||
This admission controller mitigates the problem where the API server gets flooded by
|
||||
event requests. The cluster admin can specify event rate limits by:
|
||||
|
||||
* Ensuring that `eventratelimit.admission.k8s.io/v1alpha1=true` is included in the
|
||||
`--runtime-config` flag for the API server;
|
||||
* Enabling the `EventRateLimit` admission controller;
|
||||
* Referencing an `EventRateLimit` configuration file from the file provided to the API
|
||||
server's command line flag `--admission-control-config-file`:
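As a rough sketch of such a configuration, assuming the `apiserver.config.k8s.io/v1` admission configuration format and purely illustrative limit values:

```yaml
# file passed to the API server via --admission-control-config-file
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: EventRateLimit
  path: eventconfig.yaml          # path to the EventRateLimit configuration below
---
# eventconfig.yaml
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Namespace                 # limit event requests per namespace
  qps: 50
  burst: 100
  cacheSize: 2000
- type: User                      # limit event requests per user
  qps: 10
  burst: 50
```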
|
||||
|
|