Merge pull request #6 from kubernetes/master

merge from origin
pull/24183/head
Yong Zhang 2020-09-28 16:42:54 +08:00 committed by GitHub
commit b526e8b832
685 changed files with 38114 additions and 18245 deletions


@ -35,3 +35,5 @@ Note that code issues should be filed against the main kubernetes repository, wh
### Submitting Documentation Pull Requests
If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/home/contribute/create-pull-request/).
For more information, see [contributing to Kubernetes docs](https://kubernetes.io/docs/contribute/).

OWNERS

@ -7,10 +7,10 @@ approvers:
- sig-docs-en-owners # Defined in OWNERS_ALIASES
emeritus_approvers:
# chenopis, you're welcome to return when you're ready to resume PR wrangling
# jaredbhatti, you're welcome to return when you're ready to resume PR wrangling
# stewart-yu, you're welcome to return when you're ready to resume PR wrangling
# - chenopis, commented out to disable PR assignments
# - jaredbhatti, commented out to disable PR assignments
- stewart-yu
- zacharysarah
labels:
- sig/docs


@ -24,10 +24,10 @@ aliases:
sig-docs-en-owners: # Admins for English content
- bradtopol
- celestehorgan
- irvifa
- jimangel
- kbarnard10
- kbhawkey
- makoscafee
- onlydole
- savitharaghunathan
- sftim
@ -43,14 +43,12 @@ aliases:
- jimangel
- kbarnard10
- kbhawkey
- makoscafee
- onlydole
- rajeshdeshpande02
- sftim
- steveperry-53
- tengqm
- xiangpengzhao
- zacharysarah
- zparnold
sig-docs-es-owners: # Admins for Spanish content
- raelga
@ -133,7 +131,6 @@ aliases:
- ianychoi
- seokho-son
- ysyukr
- zacharysarah
sig-docs-ko-reviews: # PR reviews for Korean content
- ClaudiaJKang
- gochist
@ -142,35 +139,36 @@ aliases:
- ysyukr
- pjhwa
sig-docs-leads: # Website chairs and tech leads
- irvifa
- jimangel
- kbarnard10
- kbhawkey
- onlydole
- sftim
- zacharysarah
sig-docs-zh-owners: # Admins for Chinese content
- chenopis
# chenopis
- chenrui333
- dchen1107
- haibinxie
- hanjiayao
- lichuqiang
- SataQiu
- tengqm
- xiangpengzhao
- xichengliudui
- zacharysarah
- zhangxiaoyu-zidif
sig-docs-zh-reviews: # PR reviews for Chinese content
- chenrui333
- idealhack
# dchen1107
# haibinxie
# hanjiayao
# lichuqiang
- SataQiu
- tanjunchen
- tengqm
- xiangpengzhao
- xichengliudui
- zhangxiaoyu-zidif
# zhangxiaoyu-zidif
sig-docs-zh-reviews: # PR reviews for Chinese content
- chenrui333
- howieyuen
- idealhack
- pigletfly
- SataQiu
- tanjunchen
- tengqm
- xiangpengzhao
- xichengliudui
# zhangxiaoyu-zidif
sig-docs-pt-owners: # Admins for Portuguese content
- femrtnz
- jcjesus


@ -40,13 +40,13 @@ Um die Kubernetes-Website lokal laufen zu lassen, empfiehlt es sich, ein speziel
Wenn Sie Docker [installiert](https://www.docker.com/get-started) haben, erstellen Sie das Docker-Image `kubernetes-hugo` lokal:
```bash
make docker-image
make container-image
```
Nachdem das Image erstellt wurde, können Sie die Site lokal ausführen:
```bash
make docker-serve
make container-serve
```
Öffnen Sie Ihren Browser unter http://localhost:1313, um die Site anzuzeigen. Wenn Sie Änderungen an den Quelldateien vornehmen, aktualisiert Hugo die Site und erzwingt eine Browseraktualisierung.


@ -33,13 +33,13 @@ El método recomendado para levantar una copia local del sitio web kubernetes.io
Una vez tenga Docker [configurado en su máquina](https://www.docker.com/get-started), puede construir la imagen de Docker `kubernetes-hugo` localmente ejecutando el siguiente comando en la raíz del repositorio:
```bash
make docker-image
make container-image
```
Una vez tenga la imagen construida, puede levantar el sitio web ejecutando:
```bash
make docker-serve
make container-serve
```
Abra su navegador y visite http://localhost:1313 para acceder a su copia local del sitio. A medida que vaya haciendo cambios en el código fuente, Hugo irá actualizando la página y forzará la actualización en el navegador.


@ -16,13 +16,13 @@ Faites tous les changements que vous voulez dans votre fork, et quand vous êtes
Une fois votre pull request créée, un examinateur de Kubernetes se chargera de vous fournir une revue claire et exploitable.
En tant que propriétaire de la pull request, **il est de votre responsabilité de modifier votre pull request pour tenir compte des commentaires qui vous ont été fournis par l'examinateur de Kubernetes.**
Notez également que vous pourriez vous retrouver avec plus d'un examinateur de Kubernetes pour vous fournir des commentaires ou vous pourriez finir par recevoir des commentaires d'un autre examinateur que celui qui vous a été initialement affecté pour vous fournir ces commentaires.
De plus, dans certains cas, l'un de vos examinateur peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin.
De plus, dans certains cas, l'un de vos examinateurs peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin.
Les examinateurs feront de leur mieux pour fournir une revue rapidement, mais le temps de réponse peut varier selon les circonstances.
Pour plus d'informations sur la contribution à la documentation Kubernetes, voir :
* [Commencez à contribuer](https://kubernetes.io/docs/contribute/start/)
* [Apperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Aperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Utilisation des modèles de page](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Traduction de la documentation Kubernetes](https://kubernetes.io/docs/contribute/localization/)
@ -38,13 +38,13 @@ La façon recommandée d'exécuter le site web Kubernetes localement est d'utili
Si vous avez Docker [up and running](https://www.docker.com/get-started), construisez l'image Docker `kubernetes-hugo' localement:
```bash
make docker-image
make container-image
```
Une fois l'image construite, vous pouvez exécuter le site localement :
```bash
make docker-serve
make container-serve
```
Ouvrez votre navigateur à l'adresse: http://localhost:1313 pour voir le site.


@ -41,13 +41,13 @@
यदि आप [डॉकर](https://www.docker.com/get-started) चला रहे हैं, तो स्थानीय रूप से `कुबेरनेट्स-ह्यूगो` Docker image बनाएँ:
```bash
make docker-image
make container-image
```
एक बार image बन जाने के बाद, आप साइट को स्थानीय रूप से चला सकते हैं:
```bash
make docker-serve
make container-serve
```
साइट देखने के लिए अपने browser को `http://localhost:1313` पर खोलें। जैसा कि आप source फ़ाइलों में परिवर्तन करते हैं, Hugo साइट को अपडेट करता है और browser को refresh करने पर मजबूर करता है।


@ -30,13 +30,13 @@ Petunjuk yang disarankan untuk menjalankan Dokumentasi Kubernetes pada mesin lok
Jika kamu sudah memiliki **Docker** [yang sudah dapat digunakan](https://www.docker.com/get-started), kamu dapat melakukan **build** `kubernetes-hugo` **Docker image** secara lokal:
```bash
make docker-image
make container-image
```
Setelah **image** berhasil di-**build**, kamu dapat menjalankan website tersebut pada mesin lokal-mu:
```bash
make docker-serve
make container-serve
```
Buka **browser** kamu ke http://localhost:1313 untuk melihat laman dokumentasi. Selama kamu melakukan penambahan konten, **Hugo** akan secara otomatis melakukan perubahan terhadap laman dokumentasi apabila **browser** melakukan proses **refresh**.


@ -30,13 +30,13 @@ Il modo consigliato per eseguire localmente il sito Web Kubernetes prevede l'uti
Se hai Docker [attivo e funzionante](https://www.docker.com/get-started), crea l'immagine Docker `kubernetes-hugo` localmente:
```bash
make docker-image
make container-image
```
Dopo aver creato l'immagine, è possibile eseguire il sito Web localmente:
```bash
make docker-serve
make container-serve
```
Apri il tuo browser su http://localhost:1313 per visualizzare il sito Web. Mentre modifichi i file sorgenti, Hugo aggiorna automaticamente il sito Web e forza un aggiornamento della pagina visualizzata nel browser.


@ -41,13 +41,13 @@
도커 [동작 및 실행](https://www.docker.com/get-started) 환경이 있는 경우, 로컬에서 `kubernetes-hugo` 도커 이미지를 빌드 합니다:
```bash
make docker-image
make container-image
```
해당 이미지가 빌드 된 이후, 사이트를 로컬에서 실행할 수 있습니다:
```bash
make docker-serve
make container-serve
```
브라우저에서 http://localhost:1313 를 열어 사이트를 살펴봅니다. 소스 파일에 변경 사항이 있을 때, Hugo는 사이트를 업데이트하고 브라우저를 강제로 새로고침합니다.


@ -49,13 +49,13 @@ choco install make
Jeśli [zainstalowałeś i uruchomiłeś](https://www.docker.com/get-started) już Dockera, zbuduj obraz `kubernetes-hugo` lokalnie:
```bash
make docker-image
make container-image
```
Po zbudowaniu obrazu, możesz uruchomić serwis lokalnie:
```bash
make docker-serve
make container-serve
```
Aby obejrzeć zawartość serwisu otwórz w przeglądarce adres http://localhost:1313. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.


@ -35,13 +35,13 @@ A maneira recomendada de executar o site do Kubernetes localmente é executar um
Se você tiver o Docker [em funcionamento](https://www.docker.com/get-started), crie a imagem do Docker do `kubernetes-hugo` localmente:
```bash
make docker-image
make container-image
```
Depois que a imagem foi criada, você pode executar o site localmente:
```bash
make docker-serve
make container-serve
```
Abra seu navegador para http://localhost:1313 para visualizar o site. Conforme você faz alterações nos arquivos de origem, Hugo atualiza o site e força a atualização do navegador.


@ -38,8 +38,8 @@ hugo server --buildFuture
Узнать подробнее о том, как поучаствовать в документации Kubernetes, вы можете по ссылкам ниже:
* [Начните вносить свой вклад](https://kubernetes.io/docs/contribute/)
* [Использование шаблонов страниц](http://kubernetes.io/docs/contribute/style/page-templates/)
* [Руководство по оформлению документации](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Использование шаблонов страниц](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Руководство по оформлению документации](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Руководство по локализации Kubernetes](https://kubernetes.io/docs/contribute/localization/)
## Файл `README.md` на других языках


@ -31,13 +31,13 @@ Cách được đề xuất để chạy trang web Kubernetes cục bộ là dù
Nếu bạn có Docker đang [up và running](https://www.docker.com/get-started), build `kubernetes-hugo` Docker image cục bộ:
```bash
make docker-image
make container-image
```
Khi image đã được built, bạn có thể chạy website cục bộ:
```bash
make docker-serve
make container-serve
```
Mở trình duyệt và đến địa chỉ http://localhost:1313 để xem website. Khi bạn thay đổi các file nguồn, Hugo cập nhật website và buộc làm mới trình duyệt.


@ -101,7 +101,7 @@ Learn more about SIG Docs Kubernetes community and meetings on the [community pa
You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Slack](https://kubernetes.slack.com/messages/sig-docs) [Get an invite for this Slack](https://slack.k8s.io/)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
# Contributing to the docs


@ -10,6 +10,7 @@
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
irvifa
jimangel
kbarnard10
zacharysarah
sftim


@ -17,14 +17,22 @@ limitations under the License.
var Search = {
init: function () {
$(document).ready(function () {
// Fill the search input form with the current search keywords
const searchKeywords = new URLSearchParams(location.search).get('q');
if (searchKeywords !== null && searchKeywords !== '') {
const searchInput = document.querySelector('.td-search-input');
searchInput.focus();
searchInput.value = searchKeywords;
}
// Set a keydown event
$(document).on("keypress", ".td-search-input", function (e) {
if (e.keyCode !== 13) {
return;
}
var query = $(this).val();
var searchPage = "{{ "docs/search/" | absURL }}?q=" + query;
document.location = searchPage;
document.location = "{{ "search/" | absURL }}?q=" + query;
return false;
});


@ -42,6 +42,10 @@ $video-section-height: 200px;
body {
background-color: white;
a {
color: $blue;
}
}
section {
@ -71,6 +75,7 @@ footer {
background-color: $blue;
text-decoration: none;
font-size: 1rem;
border: 0px;
}
#cellophane {
@ -336,7 +341,6 @@ dd {
width: 100%;
height: 45px;
line-height: 45px;
font-family: "Roboto", sans-serif;
font-size: 20px;
color: $blue;
}
@ -612,7 +616,6 @@ section#cncf {
padding-top: 30px;
padding-bottom: 80px;
background-size: auto;
// font-family: "Roboto Mono", monospace !important;
font-size: 24px;
// font-weight: bold;


@ -20,6 +20,15 @@ $announcement-size-adjustment: 8px;
padding-top: 2rem !important;
}
}
.ui-widget {
font-family: inherit;
font-size: inherit;
}
.ui-widget-content a {
color: $blue;
}
}
section {
@ -44,6 +53,23 @@ section {
}
}
body.td-404 main .error-details {
max-width: 1100px;
margin-left: auto;
margin-right: auto;
margin-top: 4em;
margin-bottom: 0;
}
/* Global - Mermaid.js diagrams */
.mermaid {
overflow-x: auto;
max-width: 80%;
border: 1px solid rgb(222, 226, 230);
border-radius: 5px;
}
/* HEADER */
.td-navbar {
@ -268,22 +294,34 @@ main {
// blockquotes and callouts
blockquote {
padding: 0.4rem 0.4rem 0.4rem 1rem !important;
}
.td-content, body {
blockquote.callout {
padding: 0.4rem 0.4rem 0.4rem 1rem;
border: 1px solid #eee;
border-left-width: 0.5em;
background: #fff;
color: #000;
margin-top: 0.5em;
margin-bottom: 0.5em;
}
blockquote.callout {
border-radius: calc(1em/3);
}
.callout.caution {
border-left-color: #f0ad4e;
}
// callouts are contained in static CSS as well. these require override.
.callout.note {
border-left-color: #428bca;
}
.caution {
border-left-color: #f0ad4e !important;
}
.callout.warning {
border-left-color: #d9534f;
}
.note {
border-left-color: #428bca !important;
}
.warning {
border-left-color: #d9534f !important;
h1:first-of-type + blockquote.callout {
margin-top: 1.5em;
}
}
.deprecation-warning {
@ -393,6 +431,7 @@ body.cid-community > #deprecation-warning > .deprecation-warning > * {
}
color: $blue;
margin: 1rem;
}
}
}
@ -512,3 +551,4 @@ body.td-documentation {
}
}
}


@ -11,3 +11,5 @@ Add styles or override variables from the theme here. */
@import "base";
@import "tablet";
@import "desktop";
$primary: #3371e3;


@ -112,6 +112,8 @@ copyright_linux = "Copyright © 2020 The Linux Foundation ®."
version_menu = "Versions"
time_format_blog = "Monday, January 02, 2006"
time_format_default = "January 02, 2006 at 3:04 PM PST"
description = "Production-Grade Container Orchestration"
showedit = true
@ -124,9 +126,13 @@ docsbranch = "master"
deprecated = false
currentUrl = "https://kubernetes.io/docs/home/"
nextUrl = "https://kubernetes-io-vnext-staging.netlify.com/"
githubWebsiteRepo = "github.com/kubernetes/website"
# See codenew shortcode
githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website"
# GitHub repository link for editing a page and opening issues.
github_repo = "https://github.com/kubernetes/website"
# param for displaying an announcement block on every page.
# See /i18n/en.toml for message text and title.
announcement = true


@ -54,7 +54,8 @@ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view
# Zeigen Sie das Passwort für den e2e-Benutzer an
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
kubectl config view -o jsonpath='{.users[].name}' # eine Liste der Benutzer erhalten
kubectl config view -o jsonpath='{.users[].name}' # den ersten Benutzer anzeigen
kubectl config view -o jsonpath='{.users[*].name}' # eine Liste der Benutzer erhalten
kubectl config current-context # den aktuellen Kontext anzeigen
kubectl config use-context my-cluster-name # Setzen Sie den Standardkontext auf my-cluster-name


@ -21,7 +21,7 @@ weight: 20
<div class="katacoda__alert">
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/1" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/1" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
</div>
<div class="row">
<div class="col-md-12">


@ -23,7 +23,7 @@ weight: 20
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/7" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/7" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>


@ -24,7 +24,7 @@ weight: 20
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/4" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/4" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">


@ -21,7 +21,7 @@ weight: 20
<div class="katacoda__alert">
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/8" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/8" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">


@ -21,7 +21,7 @@ weight: 20
<div class="katacoda__alert">
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/5" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/5" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">


@ -21,7 +21,7 @@ weight: 20
<div class="katacoda__alert">
Um mit dem Terminal zu interagieren, verwenden Sie bitte die Desktop- / Tablet-Version
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/6" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/6" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">


@ -45,7 +45,7 @@ Support for [dynamic maximum volume count](https://github.com/kubernetes/feature
The StorageObjectInUseProtection feature is now stable and prevents the removal of both [Persistent Volumes](https://github.com/kubernetes/features/issues/499) that are bound to a Persistent Volume Claim, and [Persistent Volume Claims](https://github.com/kubernetes/features/issues/498) that are being used by a pod. This safeguard will help prevent issues from deleting a PV or a PVC that is currently tied to an active pod.
Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#111-release-notes).
Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the [release notes](https://github.com/kubernetes/kubernetes/blob/release-1.11/CHANGELOG-1.11.md#111-release-notes).
## Availability
@ -88,7 +88,7 @@ Is Kubernetes helping your team? Share your story with the community.
* The CNCF recently expanded its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual's ability to design, build, configure, and expose cloud native applications for Kubernetes. More information can be found [here](https://www.cncf.io/blog/2018/03/16/cncf-announces-ckad-exam/).
* The CNCF recently added a new partner category, Kubernetes Training Partners (KTP). KTPs are a tier of vetted training providers who have deep experience in cloud native technology training. View partners and learn more [here](https://www.cncf.io/certification/training/).
* CNCF also offers [online training](https://www.cncf.io/certification/training/) that teaches the skills needed to create and configure a real-world Kubernetes cluster.
* Kubernetes documentation now features [user journeys](https://k8s.io/docs/home/): specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.
* Kubernetes documentation now features [user journeys](https://k8s.io/docs/home/): specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.
## KubeCon


@ -61,7 +61,7 @@ mind.
## Avoiding permanent beta
For Kubernetes REST APIs, when a new feature's API reaches beta, that starts a countdown.
The beta-quality API now has **nine calendar months** to either:
The beta-quality API now has **three releases** (about nine calendar months) to either:
- reach GA, and deprecate the beta, or
- have a new beta version (_and deprecate the previous beta_).
@ -69,9 +69,10 @@ To be clear, at this point **only REST APIs are affected**. For example, _APILis
a beta feature but isn't itself a REST API. Right now there are no plans to automatically
deprecate _APIListChunking_ nor any other features that aren't REST APIs.
If a REST API reaches the end of that 9 month countdown, then the next Kubernetes release
will deprecate that API version. There's no option for the REST API to stay at the same
beta version beyond the first Kubernetes release to come out after the 9 month window.
If a beta API has not graduated to GA after three Kubernetes releases, then the
next Kubernetes release will deprecate that API version. There's no option for
the REST API to stay at the same beta version beyond the first Kubernetes
release to come out after the release window.
### What this means for you


@ -0,0 +1,30 @@
---
layout: blog
title: 'Increasing the Kubernetes Support Window to One Year'
date: 2020-08-31
slug: kubernetes-1-19-feature-one-year-support
---
**Authors:** Tim Pepper (VMware), Nick Young (VMware)
Starting with Kubernetes 1.19, the support window for Kubernetes versions [will increase from 9 months to one year](https://github.com/kubernetes/enhancements/issues/1498). The longer support window is intended to allow organizations to perform major upgrades at a time of the year that works the best for them.
This is a big change. For many years, the Kubernetes project has delivered a new minor release (e.g.: 1.13 or 1.14) every 3 months. The project provides bugfix support via patch releases (e.g.: 1.13.Y) for three parallel branches of the codebase. Combined, this led to each minor release (e.g.: 1.13) having a patch release stream of support for approximately 9 months. In the end, a cluster operator had to upgrade at least every 9 months to remain supported.
A survey conducted in early 2019 by the WG LTS showed that a significant subset of Kubernetes end-users fail to upgrade within the 9-month support period.
![Versions in Production](/images/blog/2020-08-31-increase-kubernetes-support-one-year/versions-in-production-text-2.png)
This, and other responses from the survey, suggest that a considerable portion of our community would better be able to manage their deployments on supported versions if the patch support period were extended to 12-14 months. It appears to be true regardless of whether the users are on DIY builds or commercially vendored distributions. An extension in the patch support length of time would thus lead to a larger percentage of our user base running supported versions compared to what we have now.
A yearly support period provides the cushion end-users appear to desire, and is more aligned with familiar annual planning cycles.
There are many unknowns about changing the support windows for a project with as many moving parts as Kubernetes. Keeping the change relatively small (relatively being the important word) gives us the chance to find out what those unknowns are in detail and address them.
From Kubernetes version 1.19 on, the support window will be extended to one year. For Kubernetes versions 1.16, 1.17, and 1.18, the story is more complicated.
All of these versions still fall under the older “three releases support” model, and will drop out of support when 1.19, 1.20 and 1.21 are respectively released. However, because the 1.19 release has been delayed due to the events of 2020, they will end up with close to a year of support (depending on their exact release dates).
For example, 1.19 was released on the 26th of August 2020, which is 11 months since the release of 1.16. Since 1.16 is still under the old release policy, this means that it is now out of support.
![Support Timeline](/images/blog/2020-08-31-increase-kubernetes-support-one-year/support-timeline.png)
If you've got thoughts or feedback, we'd love to hear them. Please contact us on [#wg-lts](https://kubernetes.slack.com/messages/wg-lts/) on the Kubernetes Slack, or via the [kubernetes-wg-lts mailing list](https://groups.google.com/g/kubernetes-wg-lts).


@ -0,0 +1,394 @@
---
layout: blog
title: 'Ephemeral volumes with storage capacity tracking: EmptyDir on steroids'
date: 2020-09-01
slug: ephemeral-volumes-with-storage-capacity-tracking
---
**Author:** Patrick Ohly (Intel)
Some applications need additional storage but don't care whether that
data is stored persistently across restarts. For example, caching
services are often limited by memory size and can move infrequently
used data into storage that is slower than memory with little impact
on overall performance. Other applications expect some read-only input
data to be present in files, like configuration data or secret keys.
Kubernetes already supports several kinds of such [ephemeral
volumes](/docs/concepts/storage/ephemeral-volumes), but the
functionality of those is limited to what is implemented inside
Kubernetes.
[CSI ephemeral volumes](https://kubernetes.io/blog/2020/01/21/csi-ephemeral-inline-volumes/)
made it possible to extend Kubernetes with CSI
drivers that provide light-weight, local volumes. These [*inject
arbitrary states, such as configuration, secrets, identity, variables
or similar
information*](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20190122-csi-inline-volumes.md#motivation).
CSI drivers must be modified to support this Kubernetes feature,
i.e. normal, standard-compliant CSI drivers will not work, and
by design such volumes are supposed to be usable on whatever node
is chosen for a pod.
This is problematic for volumes which consume significant resources on
a node or for special storage that is only available on some nodes.
Therefore, Kubernetes 1.19 introduces two new alpha features for
volumes that are conceptually more like the `EmptyDir` volumes:
- [*generic* ephemeral volumes](/docs/concepts/storage/ephemeral-volumes#generic-ephemeral-volumes) and
- [CSI storage capacity tracking](/docs/concepts/storage/storage-capacity).
The advantages of the new approach are:
- Storage can be local or network-attached.
- Volumes can have a fixed size that applications are never able to exceed.
- Works with any CSI driver that supports provisioning of persistent
volumes and (for capacity tracking) implements the CSI `GetCapacity` call.
- Volumes may have some initial data, depending on the driver and
parameters.
- All of the typical volume operations (snapshotting,
resizing, the future storage capacity tracking, etc.)
are supported.
- The volumes are usable with any app controller that accepts
a Pod or volume specification.
- The Kubernetes scheduler itself picks suitable nodes, i.e. there is
no need anymore to implement and configure scheduler extenders and
mutating webhooks.
This makes generic ephemeral volumes a suitable solution for several
use cases:
# Use cases
## Persistent Memory as DRAM replacement for memcached
Recent releases of memcached added [support for using Persistent
Memory](https://memcached.org/blog/persistent-memory/) (PMEM) instead
of standard DRAM. When deploying memcached through one of the app
controllers, generic ephemeral volumes make it possible to request a PMEM volume
of a certain size from a CSI driver like
[PMEM-CSI](https://intel.github.io/pmem-csi/).
## Local LVM storage as scratch space
Applications working with data sets that exceed the RAM size can
request local storage with performance characteristics or size that is
not met by the normal Kubernetes `EmptyDir` volumes. For example,
[TopoLVM](https://github.com/cybozu-go/topolvm) was written for that
purpose.
## Read-only access to volumes with data
Provisioning a volume might result in a non-empty volume:
- [restore a snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support)
- [cloning a volume](/docs/concepts/storage/volume-pvc-datasource)
- [generic data populators](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20200120-generic-data-populators.md)
Such volumes can be mounted read-only.
# How it works
## Generic ephemeral volumes
The key idea behind generic ephemeral volumes is that a new volume
source, the so-called
[`EphemeralVolumeSource`](/docs/reference/generated/kubernetes-api/#ephemeralvolumesource-v1alpha1-core)
contains all fields that are needed to create a volume claim
(historically called persistent volume claim, PVC). A new controller
in the `kube-controller-manager` waits for Pods which embed such a
volume source and then creates a PVC for that pod. To a CSI driver
deployment, that PVC looks like any other, so no special support is
needed.
As long as these PVCs exist, they can be used like any other volume claim. In
particular, they can be referenced as data source in volume cloning or
snapshotting. The PVC object also holds the current status of the
volume.
Naming of the automatically created PVCs is deterministic: the name is
a combination of Pod name and volume name, with a hyphen (`-`) in the
middle. This deterministic naming makes it easier to
interact with the PVC because one does not have to search for it once
the Pod name and volume name are known. The downside is that the name might
be in use already. This is detected by Kubernetes and then blocks Pod
startup.
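As a hedged illustration of both points, the predictable name also lets the claim be used like any other PVC, for example as the data source of a clone. A minimal sketch, assuming the Pod name, volume name, size and storage class from the example further below in this post:
```yaml
# Sketch only: clone the automatically created PVC whose name is
# "<pod name>-<volume name>" in the example shown later in this post.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-csi-volume-clone
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: pmem-csi-sc-late-binding
  dataSource:
    kind: PersistentVolumeClaim
    name: my-csi-app-inline-volume-my-csi-volume
```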
To ensure that the volume gets deleted together with the pod, the
controller makes the Pod the owner of the volume claim. When the Pod
gets deleted, the normal garbage-collection mechanism also removes the
claim and thus the volume.
Claims select the storage driver through the normal storage class
mechanism. Although storage classes with both immediate and late
binding (aka `WaitForFirstConsumer`) are supported, for ephemeral
volumes it makes more sense to use `WaitForFirstConsumer`: then Pod
scheduling can take into account both node utilization and
availability of storage when choosing a node. This is where the other
new feature comes in.
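For completeness, late binding is configured in the storage class itself via `volumeBindingMode`. A minimal sketch, assuming the PMEM-CSI provisioner and the class name used in the example later in this post:
```yaml
# Minimal sketch of a late-binding ("WaitForFirstConsumer") storage class;
# the provisioner and name mirror the PMEM-CSI example below.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pmem-csi-sc-late-binding
provisioner: pmem-csi.intel.com
volumeBindingMode: WaitForFirstConsumer
```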
## Storage capacity tracking
Normally, the Kubernetes scheduler has no information about where a
CSI driver might be able to create a volume. It also has no way of
talking directly to a CSI driver to retrieve that information. It
therefore tries different nodes until it finds one where all volumes
can be made available (late binding) or leaves it entirely to the
driver to choose a location (immediate binding).
The new [`CSIStorageCapacity` alpha
API](/docs/reference/generated/kubernetes-api/v1.19/#csistoragecapacity-v1alpha1-storage-k8s-io)
allows storing the necessary information in etcd where it is available to the
scheduler. In contrast to support for generic ephemeral volumes,
storage capacity tracking must be [enabled when deploying a CSI
driver](https://github.com/kubernetes-csi/external-provisioner/blob/master/README.md#capacity-support):
the `external-provisioner` must be told to publish capacity
information that it then retrieves from the CSI driver through the normal
`GetCapacity` call.
<!-- TODO: update the link with a revision once https://github.com/kubernetes-csi/external-provisioner/pull/450 is merged -->
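The objects themselves are small. A hedged sketch of what one published by the external-provisioner might look like (the namespace, labels and capacity value are illustrative and mirror the `kubectl describe` output shown later in this post):
```yaml
# Illustrative CSIStorageCapacity object; the external-provisioner creates
# one per storage class and topology segment reported by the driver.
apiVersion: storage.k8s.io/v1alpha1
kind: CSIStorageCapacity
metadata:
  generateName: csisc-
  namespace: default
storageClassName: pmem-csi-sc-late-binding
nodeTopology:
  matchLabels:
    pmem-csi.intel.com/node: pmem-csi-pmem-govm-worker1
capacity: 30716Mi
```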
When the Kubernetes scheduler needs to choose a node for a Pod with an
unbound volume that uses late binding and the CSI driver deployment
has opted into the feature by setting the [`CSIDriver.storageCapacity`
flag](/docs/reference/generated/kubernetes-api/v1.19/#csidriver-v1beta1-storage-k8s-io)
flag, the scheduler automatically filters out nodes that do not have
access to enough storage capacity. This works for generic ephemeral
and persistent volumes but *not* for CSI ephemeral volumes because the
parameters of those are opaque for Kubernetes.
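The opt-in itself lives in the driver's `CSIDriver` object. A minimal sketch, assuming the PMEM-CSI driver name used in the example below; the other fields are illustrative:
```yaml
# Sketch: a CSIDriver object that opts into storage capacity tracking.
# Requires the CSIStorageCapacity feature gate in Kubernetes 1.19.
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: pmem-csi.intel.com
spec:
  storageCapacity: true
  volumeLifecycleModes:
  - Persistent
```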
As usual, volumes with immediate binding get created before scheduling
pods, with their location chosen by the storage driver. Therefore, the
external-provisioner's default configuration skips storage
classes with immediate binding as the information wouldn't be used anyway.
Because the Kubernetes scheduler must act on potentially outdated
information, it cannot be ensured that the capacity is still available
when a volume is to be created. Still, the chances that it can be created
without retries should be higher.
# Security
## CSIStorageCapacity
CSIStorageCapacity objects are namespaced. When deploying each CSI
driver in its own namespace and, as recommended, limiting the RBAC
permissions for CSIStorageCapacity to that namespace, it is
always obvious where the data came from. However, Kubernetes does
not check that and typically drivers get installed in the same
namespace anyway, so ultimately drivers are *expected to behave* and
not publish incorrect data.
## Generic ephemeral volumes
If users have permission to create a Pod (directly or indirectly),
then they can also create generic ephemeral volumes even when they do
not have permission to create a volume claim. That's because RBAC
permission checks are applied to the controller which creates the
PVC, not the original user. This is a fundamental change that must be
[taken into
account](/docs/concepts/storage/ephemeral-volumes#security) before
enabling the feature in clusters where untrusted users are not
supposed to have permission to create volumes.
# Example
A [special branch](https://github.com/intel/pmem-csi/commits/kubernetes-1-19-blog-post)
in PMEM-CSI contains all the necessary changes to bring up a
Kubernetes 1.19 cluster inside QEMU VMs with both alpha features
enabled. The PMEM-CSI driver code is used unchanged, only the
deployment was updated.
On a suitable machine (Linux, non-root user can use Docker - see the
[QEMU and
Kubernetes](https://intel.github.io/pmem-csi/0.7/docs/autotest.html#qemu-and-kubernetes)
section in the PMEM-CSI documentation), the following commands bring
up a cluster and install the PMEM-CSI driver:
```console
git clone --branch=kubernetes-1-19-blog-post https://github.com/intel/pmem-csi.git
cd pmem-csi
export TEST_KUBERNETES_VERSION=1.19 TEST_FEATURE_GATES=CSIStorageCapacity=true,GenericEphemeralVolume=true TEST_PMEM_REGISTRY=intel
make start && echo && test/setup-deployment.sh
```
If all goes well, the output contains the following usage
instructions:
```
The test cluster is ready. Log in with [...]/pmem-csi/_work/pmem-govm/ssh.0, run
kubectl once logged in. Alternatively, use kubectl directly with the
following env variable:
KUBECONFIG=[...]/pmem-csi/_work/pmem-govm/kube.config
secret/pmem-csi-registry-secrets created
secret/pmem-csi-node-secrets created
serviceaccount/pmem-csi-controller created
...
To try out the pmem-csi driver ephemeral volumes:
cat deploy/kubernetes-1.19/pmem-app-ephemeral.yaml |
[...]/pmem-csi/_work/pmem-govm/ssh.0 kubectl create -f -
```
The CSIStorageCapacity objects are not meant to be human-readable, so
some post-processing is needed. The following Golang template filters
all objects by the storage class that the example uses and prints the
name, topology and capacity:
```console
kubectl get \
-o go-template='{{range .items}}{{if eq .storageClassName "pmem-csi-sc-late-binding"}}{{.metadata.name}} {{.nodeTopology.matchLabels}} {{.capacity}}
{{end}}{{end}}' \
csistoragecapacities
```
```
csisc-2js6n map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker2] 30716Mi
csisc-sqdnt map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker1] 30716Mi
csisc-ws4bv map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker3] 30716Mi
```
One individual object has the following content:
```console
kubectl describe csistoragecapacities/csisc-sqdnt
```
```
Name: csisc-sqdnt
Namespace: default
Labels: <none>
Annotations: <none>
API Version: storage.k8s.io/v1alpha1
Capacity: 30716Mi
Kind: CSIStorageCapacity
Metadata:
Creation Timestamp: 2020-08-11T15:41:03Z
Generate Name: csisc-
Managed Fields:
...
Owner References:
API Version: apps/v1
Controller: true
Kind: StatefulSet
Name: pmem-csi-controller
UID: 590237f9-1eb4-4208-b37b-5f7eab4597d1
Resource Version: 2994
Self Link: /apis/storage.k8s.io/v1alpha1/namespaces/default/csistoragecapacities/csisc-sqdnt
UID: da36215b-3b9d-404a-a4c7-3f1c3502ab13
Node Topology:
Match Labels:
pmem-csi.intel.com/node: pmem-csi-pmem-govm-worker1
Storage Class Name: pmem-csi-sc-late-binding
Events: <none>
```
Now let's create the example app with one generic ephemeral
volume. The `pmem-app-ephemeral.yaml` file contains:
```yaml
# This example Pod definition demonstrates
# how to use generic ephemeral inline volumes
# with a PMEM-CSI storage class.
kind: Pod
apiVersion: v1
metadata:
name: my-csi-app-inline-volume
spec:
containers:
- name: my-frontend
image: intel/pmem-csi-driver-test:v0.7.14
command: [ "sleep", "100000" ]
volumeMounts:
- mountPath: "/data"
name: my-csi-volume
volumes:
- name: my-csi-volume
ephemeral:
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
storageClassName: pmem-csi-sc-late-binding
```
After creating that as shown in the usage instructions above, we have one additional Pod and PVC:
```console
kubectl get pods/my-csi-app-inline-volume -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-csi-app-inline-volume 1/1 Running 0 6m58s 10.36.0.2 pmem-csi-pmem-govm-worker1 <none> <none>
```
```console
kubectl get pvc/my-csi-app-inline-volume-my-csi-volume
```
```
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-csi-app-inline-volume-my-csi-volume Bound pvc-c11eb7ab-a4fa-46fe-b515-b366be908823 4Gi RWO pmem-csi-sc-late-binding 9m21s
```
That PVC is owned by the Pod:
```console
kubectl get -o yaml pvc/my-csi-app-inline-volume-my-csi-volume
```
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: pmem-csi.intel.com
volume.kubernetes.io/selected-node: pmem-csi-pmem-govm-worker1
creationTimestamp: "2020-08-11T15:44:57Z"
finalizers:
- kubernetes.io/pvc-protection
managedFields:
...
name: my-csi-app-inline-volume-my-csi-volume
namespace: default
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: true
controller: true
kind: Pod
name: my-csi-app-inline-volume
uid: 75c925bf-ca8e-441a-ac67-f190b7a2265f
...
```
Eventually, the storage capacity information for `pmem-csi-pmem-govm-worker1` also gets updated:
```
csisc-2js6n map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker2] 30716Mi
csisc-sqdnt map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker1] 26620Mi
csisc-ws4bv map[pmem-csi.intel.com/node:pmem-csi-pmem-govm-worker3] 30716Mi
```
If another app needs more than 26620Mi, the Kubernetes
scheduler will not pick `pmem-csi-pmem-govm-worker1` anymore.
# Next steps
Both features are under development. Several open questions were
already raised during the alpha review process. The two enhancement
proposals document the work that will be needed for migration to beta and what
alternatives were already considered and rejected:
* [KEP-1698: generic ephemeral inline
volumes](https://github.com/kubernetes/enhancements/blob/9d7a75d/keps/sig-storage/1698-generic-ephemeral-volumes/README.md)
* [KEP-1472: Storage Capacity
Tracking](https://github.com/kubernetes/enhancements/tree/9d7a75d/keps/sig-storage/1472-storage-capacity-tracking)
Your feedback is crucial for driving that development. SIG-Storage
[meets
regularly](https://github.com/kubernetes/community/tree/master/sig-storage#meetings)
and can be reached via [Slack and a mailing
list](https://github.com/kubernetes/community/tree/master/sig-storage#contact).


@ -0,0 +1,46 @@
---
layout: blog
title: 'Scaling Kubernetes Networking With EndpointSlices'
date: 2020-09-02
slug: scaling-kubernetes-networking-with-endpointslices
---
**Author:** Rob Scott (Google)
EndpointSlices are an exciting new API that provides a scalable and extensible alternative to the Endpoints API. EndpointSlices track IP addresses, ports, readiness, and topology information for Pods backing a Service.
In Kubernetes 1.19 this feature is enabled by default with kube-proxy reading from [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) instead of Endpoints. Although this will mostly be an invisible change, it should result in noticeable scalability improvements in large clusters. It also enables significant new features in future Kubernetes releases like [Topology Aware Routing](/docs/concepts/services-networking/service-topology/).
## Scalability Limitations of the Endpoints API
With the Endpoints API, there was only one Endpoints resource for a Service. That meant that it needed to be able to store IP addresses and ports (network endpoints) for every Pod that was backing the corresponding Service. This resulted in huge API resources. To compound this problem, kube-proxy was running on every node and watching for any updates to Endpoints resources. If even a single network endpoint changed in an Endpoints resource, the whole object would have to be sent to each of those instances of kube-proxy.
A further limitation of the Endpoints API is that it limits the number of network endpoints that can be tracked for a Service. The default size limit for an object stored in etcd is 1.5MB. In some cases that can limit an Endpoints resource to 5,000 Pod IPs. This is not an issue for most users, but it becomes a significant problem for users with Services approaching this size.
To show just how significant these issues become at scale it helps to have a simple example. Think about a Service which has 5,000 Pods, it might end up with a 1.5MB Endpoints resource. If even a single network endpoint in that list changes, the full Endpoints resource will need to be distributed to each Node in the cluster. This becomes quite an issue in a large cluster with 3,000 Nodes. Each update would involve sending 4.5GB of data (1.5MB Endpoints * 3,000 Nodes) across the cluster. That's nearly enough to fill up a DVD, and it would happen for each Endpoints change. Imagine a rolling update that results in all 5,000 Pods being replaced - that's more than 22TB (or 5,000 DVDs) worth of data transferred.
## Splitting endpoints up with the EndpointSlice API
The EndpointSlice API was designed to address this issue with an approach similar to sharding. Instead of tracking all Pod IPs for a Service with a single Endpoints resource, we split them into multiple smaller EndpointSlices.
Consider an example where a Service is backed by 15 pods. We'd end up with a single Endpoints resource that tracked all of them. If EndpointSlices were configured to store 5 endpoints each, we'd end up with 3 different EndpointSlices:
![EndpointSlices](/images/blog/2020-09-02-scaling-kubernetes-networking-endpointslices/endpoint-slices.png)
By default, EndpointSlices store as many as 100 endpoints each, though this can be configured with the `--max-endpoints-per-slice` flag on kube-controller-manager.
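To make the shape of the new resource concrete, here is a hedged sketch of a small EndpointSlice as served by the v1beta1 API in Kubernetes 1.19; the Service name, addresses, port and topology labels are placeholders:
```yaml
# Illustrative EndpointSlice backing a Service named "example";
# all values are placeholders.
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: true
  topology:
    kubernetes.io/hostname: node-1
    topology.kubernetes.io/zone: us-west2-a
```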
## EndpointSlices provide 10x scalability improvements
This API dramatically improves networking scalability. Now when a Pod is added or removed, only 1 small EndpointSlice needs to be updated. This difference becomes quite noticeable when hundreds or thousands of Pods are backing a single Service.
Potentially more significant, now that all Pod IPs for a Service don't need to be stored in a single resource, we don't have to worry about the size limit for objects stored in etcd. EndpointSlices have already been used to scale Services beyond 100,000 network endpoints.
All of this is brought together with some significant performance improvements that have been made in kube-proxy. When using EndpointSlices at scale, significantly less data will be transferred for endpoints updates and kube-proxy should be faster to update iptables or ipvs rules. Beyond that, Services can now scale to at least 10 times beyond any previous limitations.
## EndpointSlices enable new functionality
Introduced as an alpha feature in Kubernetes v1.16, EndpointSlices were built to enable some exciting new functionality in future Kubernetes releases. This could include dual-stack Services, topology aware routing, and endpoint subsetting.
Dual-Stack Services are an exciting new feature that has been in development alongside EndpointSlices. They will utilize both IPv4 and IPv6 addresses for Services and rely on the addressType field on EndpointSlices to track these addresses by IP family.
Topology aware routing will update kube-proxy to prefer routing requests within the same zone or region. This makes use of the topology fields stored for each endpoint in an EndpointSlice. As a further refinement of that, we're exploring the potential of endpoint subsetting. This would allow kube-proxy to only watch a subset of EndpointSlices. For example, this might be combined with topology aware routing so that kube-proxy would only need to watch EndpointSlices containing endpoints within the same zone. This would provide another very significant scalability improvement.
## What does this mean for the Endpoints API?
Although the EndpointSlice API is providing a newer and more scalable alternative to the Endpoints API, the Endpoints API will continue to be considered generally available and stable. The most significant change planned for the Endpoints API will involve beginning to truncate Endpoints that would otherwise run into scalability issues.
The Endpoints API is not going away, but many new features will rely on the EndpointSlice API. To take advantage of the new scalability and functionality that EndpointSlices provide, applications that currently consume Endpoints will likely want to consider supporting EndpointSlices in the future.


@ -0,0 +1,278 @@
---
layout: blog
title: "Warning: Helpful Warnings Ahead"
date: 2020-09-03
slug: warnings
---
**Author**: Jordan Liggitt (Google)
As Kubernetes maintainers, we're always looking for ways to improve usability while preserving compatibility.
As we develop features, triage bugs, and answer support questions, we accumulate information that would be helpful for Kubernetes users to know.
In the past, sharing that information was limited to out-of-band methods like release notes, announcement emails, documentation, and blog posts.
Unless someone knew to seek out that information and managed to find it, they would not benefit from it.
In Kubernetes v1.19, we added a feature that allows the Kubernetes API server to
[send warnings to API clients](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1693-warnings).
The warning is sent using a [standard `Warning` response header](https://tools.ietf.org/html/rfc7234#section-5.5),
so it does not change the status code or response body in any way.
This allows the server to send warnings easily readable by any API client, while remaining compatible with previous client versions.
Warnings are surfaced by `kubectl` v1.19+ in `stderr` output, and by the `k8s.io/client-go` client library v0.19.0+ in log output.
The `k8s.io/client-go` behavior can be overridden [per-process](https://godoc.org/k8s.io/client-go/rest#SetDefaultWarningHandler)
or [per-client](https://godoc.org/k8s.io/client-go/rest#Config).
## Deprecation Warnings
The first way we are using this new capability is to send warnings for use of deprecated APIs.
Kubernetes is a [big, fast-moving project](https://www.cncf.io/cncf-kubernetes-project-journey/#development-velocity).
Keeping up with the [changes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#changelog-since-v1180)
in each release can be daunting, even for people who work on the project full-time. One important type of change is API deprecations.
As APIs in Kubernetes graduate to GA versions, pre-release API versions are deprecated and eventually removed.
Even though there is an [extended deprecation period](/docs/reference/using-api/deprecation-policy/),
and deprecations are [included in release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecation),
they can still be hard to track. During the deprecation period, the pre-release API remains functional,
allowing several releases to transition to the stable API version. However, we have found that users often don't even realize
they are depending on a deprecated API version until they upgrade to the release that stops serving it.
Starting in v1.19, whenever a request is made to a deprecated REST API, a warning is returned along with the API response.
This warning includes details about the release in which the API will no longer be available, and the replacement API version.
Because the warning originates at the server, and is intercepted at the client level, it works for all kubectl commands,
including high-level commands like `kubectl apply`, and low-level commands like `kubectl get --raw`:
<img alt="kubectl applying a manifest file, then displaying a warning message 'networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress'."
src="kubectl-warnings.png"
style="width:637px;max-width:100%;">
This helps people affected by the deprecation to know the request they are making is deprecated,
how long they have to address the issue, and what API they should use instead.
This is especially helpful when the user is applying a manifest they didn't create,
so they have time to reach out to the authors to ask for an updated version.
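As an illustration, a manifest like the following sketch uses the deprecated API version from the screenshot above; applying it to a v1.19 cluster with kubectl v1.19+ surfaces exactly that warning (the names and backend are placeholders):
```yaml
# Placeholder Ingress using the deprecated networking.k8s.io/v1beta1 API;
# requests to this API version return the deprecation warning shown above.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80
```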
We also realized that the person *using* a deprecated API is often not the same person responsible for upgrading the cluster,
so we added two administrator-facing tools to help track use of deprecated APIs and determine when upgrades are safe.
### Metrics
Starting in Kubernetes v1.19, when a request is made to a deprecated REST API endpoint,
an `apiserver_requested_deprecated_apis` gauge metric is set to `1` in the kube-apiserver process.
This metric has labels for the API `group`, `version`, `resource`, and `subresource`,
and a `removed_version` label that indicates the Kubernetes release in which the API will no longer be served.
This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json),
and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested
from the current instance of the API server:
```sh
kubectl get --raw /metrics | prom2json | jq '
.[] | select(.name=="apiserver_requested_deprecated_apis").metrics[].labels
'
```
Output:
```json
{
"group": "extensions",
"removed_release": "1.22",
"resource": "ingresses",
"subresource": "",
"version": "v1beta1"
}
{
"group": "rbac.authorization.k8s.io",
"removed_release": "1.22",
"resource": "clusterroles",
"subresource": "",
"version": "v1beta1"
}
```
This shows the deprecated `extensions/v1beta1` Ingress and `rbac.authorization.k8s.io/v1beta1` ClusterRole APIs
have been requested on this server, and will be removed in v1.22.
We can join that information with the `apiserver_request_total` metrics to get more details about the requests being made to these APIs:
```sh
kubectl get --raw /metrics | prom2json | jq '
# set $deprecated to a list of deprecated APIs
[
.[] |
select(.name=="apiserver_requested_deprecated_apis").metrics[].labels |
{group,version,resource}
] as $deprecated
|
# select apiserver_request_total metrics which are deprecated
.[] | select(.name=="apiserver_request_total").metrics[] |
select(.labels | {group,version,resource} as $key | $deprecated | index($key))
'
```
Output:
```json
{
"labels": {
"code": "0",
"component": "apiserver",
"contentType": "application/vnd.kubernetes.protobuf;stream=watch",
"dry_run": "",
"group": "extensions",
"resource": "ingresses",
"scope": "cluster",
"subresource": "",
"verb": "WATCH",
"version": "v1beta1"
},
"value": "21"
}
{
"labels": {
"code": "200",
"component": "apiserver",
"contentType": "application/vnd.kubernetes.protobuf",
"dry_run": "",
"group": "extensions",
"resource": "ingresses",
"scope": "cluster",
"subresource": "",
"verb": "LIST",
"version": "v1beta1"
},
"value": "1"
}
{
"labels": {
"code": "200",
"component": "apiserver",
"contentType": "application/json",
"dry_run": "",
"group": "rbac.authorization.k8s.io",
"resource": "clusterroles",
"scope": "cluster",
"subresource": "",
"verb": "LIST",
"version": "v1beta1"
},
"value": "1"
}
```
The output shows that only read requests are being made to these APIs, and that most of the requests have been made to watch the deprecated Ingress API.
You can also find that information through the following Prometheus query,
which returns information about requests made to deprecated APIs which will be removed in v1.22:
```promql
apiserver_requested_deprecated_apis{removed_version="1.22"} * on(group,version,resource,subresource)
group_right() apiserver_request_total
```
### Audit annotations
Metrics are a fast way to check whether deprecated APIs are being used, and at what rate,
but they don't include enough information to identify particular clients or API objects.
Starting in Kubernetes v1.19, [audit events](/docs/tasks/debug-application-cluster/audit/)
for requests to deprecated APIs include an audit annotation of `"k8s.io/deprecated":"true"`.
Administrators can use those audit events to identify specific clients or objects that need to be updated.
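For example, if your audit backend writes events as JSON lines to a log file, you can surface these requests with [jq](https://stedolan.github.io/jq/). This is a sketch only; the log path below is an assumption and depends on your audit policy and backend configuration:
```sh
# Select audit events for requests that hit deprecated APIs, and summarize
# who made them and what they asked for. The log path is a placeholder.
jq 'select(.annotations["k8s.io/deprecated"] == "true")
    | {user: .user.username, verb: .verb, requestURI: .requestURI}' \
  /var/log/kubernetes/audit.log
```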
## Custom Resource Definitions
Along with the API server's ability to warn about deprecated API use, starting in v1.19 a CustomResourceDefinition can indicate that a
[particular version of the resource it defines is deprecated](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation).
When API requests to a deprecated version of a custom resource are made, a warning message is returned, matching the behavior of built-in APIs.
The author of the CustomResourceDefinition can also customize the warning for each version if they want to.
This allows them to give a pointer to a migration guide or other information if needed.
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  versions:
  - name: v1alpha1
    # This indicates the v1alpha1 version of the custom resource is deprecated.
    # API requests to this version receive a warning in the server response.
    deprecated: true
    # This overrides the default warning returned to clients making v1alpha1 API requests.
    deprecationWarning: "example.com/v1alpha1 CronTab is deprecated; use example.com/v1 CronTab (see http://example.com/v1alpha1-v1)"
    ...
  - name: v1beta1
    # This indicates the v1beta1 version of the custom resource is deprecated.
    # API requests to this version receive a warning in the server response.
    # A default warning message is returned for this version.
    deprecated: true
    ...
  - name: v1
    ...
```
## Admission Webhooks
[Admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers)
are the primary way to integrate custom policies or validation with Kubernetes.
Starting in v1.19, admission webhooks can [return warning messages](/docs/reference/access-authn-authz/extensible-admission-controllers/#response)
that are passed along to the requesting API client. Warnings can be returned with allowed or rejected admission responses.
As an example, to allow a request but warn about a configuration known not to work well, an admission webhook could send this response:
```json
{
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": {
"uid": "<value from request.uid>",
"allowed": true,
"warnings": [
".spec.memory: requests >1GB do not work on Fridays"
]
}
}
```
If you are implementing a webhook that returns a warning message, here are some tips:
* Don't include a "Warning:" prefix in the message (that is added by clients on output)
* Use warning messages to describe problems the client making the API request should correct or be aware of
* Be brief; limit warnings to 120 characters if possible
There are many ways admission webhooks could use this new feature, and I'm looking forward to seeing what people come up with.
Here are a couple ideas to get you started:
* webhook implementations adding a "complain" mode, where they return warnings instead of rejections,
to allow trying out a policy to verify it is working as expected before starting to enforce it
* "lint" or "vet"-style webhooks, inspecting objects and surfacing warnings when best practices are not followed
## Kubectl strict mode
If you want to be sure you notice deprecations as soon as possible and get a jump start on addressing them,
use the `--warnings-as-errors` option added to `kubectl` in v1.19. When invoked with this option,
`kubectl` treats any warnings it receives from the server as errors and exits with a non-zero exit code:
<img alt="kubectl applying a manifest file with a --warnings-as-errors flag, displaying a warning message and exiting with a non-zero exit code."
src="kubectl-warnings-as-errors.png"
style="width:637px;max-width:100%;">
This could be used in a CI job that applies manifests to a current server,
requiring a zero exit code for the CI job to succeed.
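For example, a CI step along these lines (a sketch, assuming `kubectl` v1.19+ and a kubeconfig pointing at a representative cluster) uses a server-side dry run so nothing is persisted, while still failing the job on any deprecation warning:
```sh
# --dry-run=server sends the manifests to the API server without persisting them,
# and --warnings-as-errors turns any returned warning into a non-zero exit code.
if kubectl apply --dry-run=server --warnings-as-errors -f ./manifests/; then
  echo "No deprecation warnings"
else
  echo "Deprecated APIs detected; migrate these manifests before upgrading" >&2
  exit 1
fi
```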
## Future Possibilities
Now that we have a way to communicate helpful information to users in context,
we're already considering other ways we can use this to improve people's experience with Kubernetes.
A couple areas we're looking at next are warning about [known problematic values](http://issue.k8s.io/64841#issuecomment-395141013)
we cannot reject outright for compatibility reasons, and warning about use of deprecated fields or field values
(like selectors using beta os/arch node labels, [deprecated in v1.14](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated)).
I'm excited to see progress in this area, continuing to make it easier to use Kubernetes.
---
_[Jordan Liggitt](https://twitter.com/liggitt) is a software engineer at Google, and helps lead Kubernetes authentication, authorization, and API efforts._

Binary file not shown.

After

Width:  |  Height:  |  Size: 221 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 296 KiB

View File

@ -0,0 +1,56 @@
---
layout: blog
title: 'Introducing Structured Logs'
date: 2020-09-04
slug: kubernetes-1-19-Introducing-Structured-Logs
---
**Authors:** Marek Siarkowicz (Google), Nathan Beach (Google)
Logs are an essential aspect of observability and a critical tool for debugging. But Kubernetes logs have traditionally been unstructured strings, making any automated parsing difficult and any downstream processing, analysis, or querying challenging to do reliably.
In Kubernetes 1.19, we are adding support for structured logs, which natively support (key, value) pairs and object references. We have also updated many logging calls such that over 99% of logging volume in a typical deployment is now migrated to the structured format.
To maintain backwards compatibility, structured logs are still written as a string, where the string contains representations of those "key"="value" pairs. Starting as an alpha feature in 1.19, logs can also be written in JSON format using the `--logging-format=json` flag.
## Using Structured Logs
We've added two new methods to the klog library: InfoS and ErrorS. For example, this invocation of InfoS:
```golang
klog.InfoS("Pod status updated", "pod", klog.KObj(pod), "status", status)
```
will result in this log:
```
I1025 00:15:15.525108 1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
```
Or, if the `--logging-format=json` flag is set, it will result in this output:
```json
{
"ts": 1580306777.04728,
"msg": "Pod status updated",
"pod": {
"name": "coredns",
"namespace": "kube-system"
},
"status": "ready"
}
```
This means downstream logging tools can easily ingest structured logging data instead of using regular expressions to parse unstructured strings. This also makes processing logs easier, querying logs more robust, and analyzing logs much faster.
With structured logs, all references to Kubernetes objects are structured the same way, so you can filter the output to only those log entries that reference a particular pod. You can also find logs indicating how the scheduler was scheduling the pod, how the pod was created, the health probes of the pod, and all other changes in the lifecycle of the pod.
Suppose you are debugging an issue with a pod. With structured logs, you can filter to only those log entries referencing the pod of interest, rather than needing to scan through potentially thousands of log lines to find the relevant ones.
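For example, with JSON output that filtering can be done with `jq` instead of regular expressions. This is a minimal sketch; the log path is an assumption, and `-R` plus `fromjson?` simply skip any lines that are not valid JSON:
```sh
# Keep only structured entries that reference the kube-system/kubedns pod.
# /var/log/kube-controller-manager.log is a placeholder path.
jq -R 'fromjson? | select(.pod.name == "kubedns" and .pod.namespace == "kube-system")' \
  /var/log/kube-controller-manager.log
```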
Not only are structured logs more useful when manually debugging issues, they also enable richer features like automated pattern recognition within logs or tighter correlation of log and trace data.
Finally, structured logs can help reduce storage costs for logs because most storage systems can compress structured key=value data more efficiently than unstructured strings.
## Get Involved
While we have updated over 99% of the log entries by log volume in a typical deployment, there are still thousands of logs to be updated. Pick a file or directory that you would like to improve and [migrate existing log calls to use structured logs](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md). It's a great and easy way to make your first contribution to Kubernetes!

View File

@ -0,0 +1,118 @@
---
layout: blog
title: "GSoC 2020 - Building operators for cluster addons"
date: 2020-09-16
slug: gsoc20-building-operators-for-cluster-addons
---
**Author**: Somtochi Onyekwere
# Introduction
[Google Summer of Code](https://summerofcode.withgoogle.com/) is a global program that is geared towards introducing students to open source. Students are matched with open-source organizations to work with them for three months during the summer.
My name is Somtochi Onyekwere from the Federal University of Technology, Owerri (Nigeria) and this year, I was given the opportunity to work with Kubernetes (under the CNCF organization) and this led to an amazing summer spent learning, contributing and interacting with the community.
Specifically, I worked on the _Cluster Addons: Package all the things!_ project. The project focused on building operators for better management of various cluster addons, extending the tooling for building these operators and making the creation of these operators a smooth process.
# Background
Kubernetes has progressed greatly in the past few years with a flourishing community and a large number of contributors. The codebase is gradually moving away from the monolith structure where all the code resides in the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repository to being split into multiple sub-projects. Part of the focus of cluster-addons is to make some of these sub-projects work together in an easy to assemble, self-monitoring, self-healing and Kubernetes-native way. It enables them to work seamlessly without human intervention.
The community is exploring the use of operators as a mechanism to monitor various resources in the cluster and properly manage these resources. In addition to this, it provides self-healing and it is a kubernetes-native pattern that can encode how best these addons work and manage them properly.
What are cluster addons? Cluster addons are a collection of resources (like Services and Deployments) that are used to give a Kubernetes cluster additional functionality. They range from things as simple as the Kubernetes Dashboard (for visualization) to more complex ones like Calico (for networking). These addons are essential to different applications running in the cluster and to the cluster itself. The addon operator provides a nicer way of managing these addons and understanding the health and status of the various resources that comprise the addon. You can get a deeper overview in this [article](https://kubernetes.io/docs/concepts/overview/components/#addons).
Operators are custom controllers with custom resource definitions that encode application-specific knowledge and are used for managing complex stateful applications. It is a widely accepted pattern. Managing addons via operators, with these operators encoding knowledge of how best the addons work, introduces a lot of advantages while setting standards that will be easy to follow and scale. This [article](https://kubernetes.io/docs/concepts/extend-kubernetes/operator) does a good job of explaining operators.
The addon operators can solve a lot of problems, but they have their challenges. Those under the [cluster-addons project](https://github.com/kubernetes-sigs/cluster-addons) had missing pieces and were still a proof of concept. Generating the RBAC configuration for the operators was a pain and sometimes the operators were given too much privilege. The operators weren't very extensible, as they only pulled manifests from local filesystems or HTTP(S) servers, and a lot of simple addons were generating the same code.
I spent the summer working on these issues, looking at them with fresh eyes and coming up with solutions for both the known and unknown issues.
# Various additions to kubebuilder-declarative-pattern
The [kubebuilder-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern) (from here on referred to as KDP) repo is an extra layer of addon-specific tooling on top of the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) SDK that is enabled by passing the experimental `--pattern=addon` flag to the `kubebuilder create` command. Together, they create the base code for the addon operator. During the internship, I worked on a couple of features in KDP and cluster-addons.
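As a rough sketch of that flow (the domain, group, and kind below are placeholders; apart from the experimental `--pattern=addon` flag named above, these are standard kubebuilder options):
```sh
# Scaffold a new operator project, then generate an API using the addon pattern.
kubebuilder init --domain example.org
kubebuilder create api --group addons --version v1alpha1 --kind Dashboard --pattern=addon
```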
## Operator version checking
Enabling version checks for operators makes upgrades and downgrades to different versions of an addon safer, even when the operator has complex logic. It is a way of matching the version of an addon to the version of the operator that knows how to manage it well. Most addons have multiple versions, and these versions might need to be managed differently. This feature checks the custom resource for the `addons.k8s.io/min-operator-version` annotation, which states the minimum operator version needed to manage that version of the addon, and compares it against the operator's own version. If the operator version is below the required minimum, the operator pauses with an error telling the user that its version is too low. This helps ensure that the correct operator is being used for the addon.
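From the user's side, that might look like the following sketch (the resource kind and name are hypothetical; the annotation key is the one described above):
```sh
# Require at least operator version 0.2.0 to reconcile this addon instance.
kubectl annotate dashboard dashboard-sample addons.k8s.io/min-operator-version=0.2.0
```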
## Git repository for storing the manifests
Previously, there was support only for local file directories and HTTPS repositories for storing manifests. Giving creators of addon operators the ability to store manifests in a Git repository enables faster development and version control. When starting the controller, you can pass a flag to specify the location of your channels directory. The channels directory contains the manifests for different versions; the controller pulls the manifest from this directory and applies it to the cluster. During the internship period, I extended this to include Git repositories.
## Annotations to temporarily disable reconciliation
The reconciliation loop that ensures that the desired state matches the actual state prevents modification of objects in the cluster. This makes it hard to experiment or investigate what might be wrong in the cluster, as any changes made are promptly reverted. I resolved this by allowing users to place an `addons.k8s.io/ignore` annotation on a resource that they don't want the controller to reconcile. The controller checks for this annotation and doesn't reconcile that object. To resume reconciliation, the annotation can be removed from the resource.
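A sketch of that workflow, again with hypothetical resource names (the annotation value is an assumption, since the controller only looks for the annotation's presence):
```sh
# Pause reconciliation while experimenting with the object.
kubectl annotate dashboard dashboard-sample addons.k8s.io/ignore=true

# ...make manual changes, investigate...

# Remove the annotation to resume reconciliation.
kubectl annotate dashboard dashboard-sample addons.k8s.io/ignore-
```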
## Unstructured support in kubebuilder-declarative-pattern
One of the operators that I worked on is a generic controller that could manage more than one cluster addon that did not require extra configuration. To do this, the operator couldn't use a particular type and needed the kubebuilder-declarative-pattern repo to support the [unstructured.Unstructured](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured#Unstructured) type. There were various functions in the kubebuilder-declarative-pattern that couldn't handle this type and returned an error if the object passed in was not of type `addonsv1alpha1.CommonObject`. The functions were modified to handle both `unstructured.Unstructured` and `addonsv1alpha1.CommonObject`.
# Tools and CLI programs
There were also some command-line programs I wrote that could be used to make working with addon operators easier. Most of them have uses outside the addon operators as they try to solve a specific problem that could surface anywhere while working with Kubernetes. I encourage you to [check them out](https://github.com/kubernetes-sigs/cluster-addons/tree/master/tools) when you have the chance!
## RBAC Generator
One of the biggest concerns with the operator was RBAC. You had to manually look through the manifest and add the RBAC rule for each resource, as the operator needs RBAC permissions to create, get, update and delete the resources in the manifest when running in-cluster. Building the [RBAC generator](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/rbac-gen) automated the process of writing the RBAC roles and role bindings. The function of the RBAC generator is simple. It accepts the file name of the manifest as a flag. Then, it parses the manifest, gets the API group and resource name of each resource, and adds them to a role. It outputs the role and role binding to stdout, or to a file if the `--out` flag is passed.
Additionally, the tool enables you to split the RBAC by separating the cluster roles in the manifest. This lessened the security concern of an operator being over-privileged as it needed to have all the permissions that the clusterrole has. If you want to apply the clusterrole yourself and not give the operator these permissions, you can pass in a `--supervisory` boolean flag so that the generator does not add these permissions to the role. The CLI program resides [here](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/rbac-gen).
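A hypothetical invocation might look like this; `--out` and `--supervisory` are the flags described above, while the flag naming the input manifest and the file names are assumptions to check against the tool's help output:
```sh
# Generate a role and role binding covering the resources in manifest.yaml.
rbac-gen --yaml manifest.yaml --out rbac.yaml

# Same, but leave the cluster-scoped permissions out of the generated role.
rbac-gen --yaml manifest.yaml --out rbac.yaml --supervisory
```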
## Kubectl Ownerref
It is hard to find out at a glance which objects were created by an addon custom resource. This kubectl plugin alleviates that pain by displaying all the objects in the cluster that have owner references pointing to a given resource. You simply pass the kind and the name of the resource as arguments to the program, and it checks the cluster for the objects and gives the kind, name, and namespace of each such object. It can be useful to get a general overview of all the objects that the controller is reconciling by passing in the name and kind of the custom resource. The CLI program resides [here](https://github.com/kubernetes-sigs/cluster-addons/tree/master/tools/kubectl-ownerref).
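Based on that description, usage looks roughly like this (the kind and name are placeholders; see the linked README for the exact invocation):
```sh
# List every object in the cluster whose ownerReferences point at this resource.
kubectl ownerref Dashboard dashboard-sample
```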
# Addon Operators
To fully understand addon operators and make changes to how they are being created, you have to try creating and using them. Part of the summer was spent building operators for some popular addons like the Kubernetes Dashboard, flannel, NodeLocalDNS and so on. Please check the [cluster-addons](https://github.com/kubernetes-sigs/cluster-addons) repository for the different addon operators. In this section, I will just highlight one that is a little different from the others.
## Generic Controller
The generic controller can be shared between addons that don't require much configuration. This minimizes resource consumption on the cluster as it reduces the number of controllers that need to be run. Also, instead of building your own operator, you can just use the generic controller; whenever you feel that your needs have grown and you need a more complex operator, you can scaffold the code with kubebuilder and continue from where the generic operator stopped. To use the generic controller, you can generate the CustomResourceDefinition (CRD) using the [generic-addon](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/generic-addon/README.md) tool. You pass in the kind, group, and the location of your channels directory (it could be a Git repository too!). The tool generates the CRD, the RBAC manifest, and two custom resources for you.
The process is as follows:
- Create the Generic CRD
- Generate all the manifests needed with the [`generic-addon tool`](https://github.com/kubernetes-sigs/cluster-addons/blob/master/tools/generic-addon/README.md).
This tool creates:
1. The CRD for your addon
2. The RBAC rules for the CustomResourceDefinitions
3. The RBAC rules for applying the manifests
4. The custom resource for your addon
5. A Generic custom resource
The Generic custom resource looks like this:
```yaml
apiVersion: addons.x-k8s.io/v1alpha1
kind: Generic
metadata:
  name: generic-sample
spec:
  objectKind:
    kind: NodeLocalDNS
    version: "v1alpha1"
    group: addons.x-k8s.io
  channel: "../nodelocaldns/channels"
```
Apply these manifests, but make sure to apply the CRD before the CR.
Then, run the Generic controller, either on your machine or in-cluster.
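A minimal sketch of that ordering (the file names are placeholders, and `generics.addons.x-k8s.io` assumes `generics` is the plural resource name for the `Generic` kind):
```sh
# CRDs must be established before their custom resources can be created.
kubectl apply -f generic-crd.yaml -f nodelocaldns-crd.yaml
kubectl wait --for condition=established crd/generics.addons.x-k8s.io

# Then apply the RBAC rules and the custom resources.
kubectl apply -f rbac.yaml -f nodelocaldns-cr.yaml -f generic-sample.yaml
```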
If you are interested in building an operator, Please check out [this guide](https://github.com/kubernetes-sigs/cluster-addons/blob/master/dashboard/README.md).
# Relevant Links
- [Detailed breakdown of work done during the internship](https://github.com/SomtochiAma/gsoc-2020-meta-k8s)
- [Addon Operator (KEP)](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/addons/0035-20190128-addons-via-operators.md)
- [Original GSoC Issue](https://github.com/kubernetes-sigs/cluster-addons/issues/39)
- [Proposal Submitted for GSoC](https://github.com/SomtochiAma/gsoc-2020-meta-k8s/blob/master/GSoC%202020%20PROPOSAL%20-%20PACKAGE%20ALL%20THINGS.pdf)
- [All commits to kubernetes-sigs/cluster-addons](https://github.com/kubernetes-sigs/cluster-addons/commits?author=SomtochiAma)
- [All commits to kubernetes-sigs/kubebuidler-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/commits?author=SomtochiAma)
# Further Work
A lot of work was definitely done on the cluster addons during the GSoC period. But we need more people building operators and using them in the cluster. We need wider adoption in the community. Build operators for your favourite addons and tell us how it went and if you had any issues. Check out this [README.md](https://github.com/kubernetes-sigs/cluster-addons/blob/master/dashboard/README.md) to get started.
# Appreciation
I really want to appreciate my mentors [Justin Santa Barbara](https://github.com/justinsb) (Google) and [Leigh Capili](https://github.com/stealthybox) (Weaveworks). My internship was awesome because they were awesome. They set a golden standard for what mentorship should be. They were accessible and always available to clear any confusion. I think what I liked best was that they didn't just dish out tasks; instead, we had open discussions about what was wrong and what could be improved. They are really the best and I hope I get to work with them again!
Also, I want to say a huge thanks to [Lubomir I. Ivanov](https://github.com/neolit123) for reviewing this blog post!
# Conclusion
So far I have learnt a lot about Go, the internals of Kubernetes, and operators. I want to conclude by encouraging people to contribute to open-source (especially Kubernetes :)) regardless of your level of experience. It has been a well-rounded experience for me and I have come to love the community. It is a great initiative and it is a great way to learn and meet awesome people. Special shoutout to Google for organizing this program.
If you are interested in cluster addons and finding out more on addon operators, you are welcome to join our slack channel on the Kubernetes [#cluster-addons](https://kubernetes.slack.com/messages/cluster-addons).
---
_[Somtochi Onyekwere](https://twitter.com/SomtochiAma) is a software engineer that loves contributing to open-source and exploring cloud native solutions._

View File

@ -5,12 +5,12 @@ content_type: concept
<!-- overview -->
{{% thirdparty-content %}}
Add-ons extend the functionality of Kubernetes.
This page lists some of the available add-ons and links to their respective installation instructions.
Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
<!-- body -->
## Networking and Network Policy

View File

@ -1,362 +0,0 @@
---
title: Cloud Providers
content_type: concept
weight: 30
---
<!-- overview -->
This page explains how to manage Kubernetes running on a specific
cloud provider. There are many other third-party cloud provider projects, but this list is specific to projects embedded within, or relied upon by Kubernetes itself.
<!-- body -->
### kubeadm
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating kubernetes clusters.
kubeadm has configuration options to specify configuration information for cloud providers. For example a typical
in-tree cloud provider can be configured using kubeadm as shown below:
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
cloud-provider: "openstack"
cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
extraArgs:
cloud-provider: "openstack"
cloud-config: "/etc/kubernetes/cloud.conf"
extraVolumes:
- name: cloud
hostPath: "/etc/kubernetes/cloud.conf"
mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
extraArgs:
cloud-provider: "openstack"
cloud-config: "/etc/kubernetes/cloud.conf"
extraVolumes:
- name: cloud
hostPath: "/etc/kubernetes/cloud.conf"
mountPath: "/etc/kubernetes/cloud.conf"
```
The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines
for the [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/),
[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) and the
[kubelet](/docs/reference/command-line-tools-reference/kubelet/).
The contents of the file specified in `--cloud-config` for each provider are documented below as well.
For all external cloud providers, please follow the instructions on the individual repositories,
which are listed under their headings below, or one may view [the list of all repositories](https://github.com/kubernetes?q=cloud-provider-&type=&language=)
## AWS
This section describes all the possible configurations which can
be used when running Kubernetes on Amazon Web Services.
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-aws](https://github.com/kubernetes/cloud-provider-aws#readme)
### Node Name
The AWS cloud provider uses the private DNS name of the AWS instance as the name of the Kubernetes Node object.
### Load Balancers
You can set up [external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/)
to use specific features in AWS by configuring the annotations as shown below.
```yaml
apiVersion: v1
kind: Service
metadata:
name: example
namespace: kube-system
labels:
run: example
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx #replace this value
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
type: LoadBalancer
ports:
- port: 443
targetPort: 5556
protocol: TCP
selector:
app: example
```
Different settings can be applied to a load balancer service in AWS using _annotations_. The following describes the annotations supported on AWS ELBs:
* `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`: Used to specify access log emit interval.
* `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`: Used on the service to enable or disable access logs.
* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`: Used to specify access log s3 bucket name.
* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`: Used to specify access log s3 bucket prefix.
* `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags`: Used on the service to specify a comma-separated list of key-value pairs which will be recorded as additional tags in the ELB. For example: `"Key1=Val1,Key2=Val2,KeyNoVal1=,KeyNoVal2"`.
* `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If `http` (default) or `https`, an HTTPS listener that terminates the connection and parses headers is created. If set to `ssl` or `tcp`, a "raw" SSL listener is used. If set to `http` and `aws-load-balancer-ssl-cert` is not used then a HTTP listener is used.
* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html) CertARN is an IAM or CM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`.
* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`: Used on the service to enable or disable connection draining.
* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout`: Used on the service to specify a connection draining timeout.
* `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`: Used on the service to specify the idle connection timeout.
* `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled`: Used on the service to enable or disable cross-zone load balancing.
* `service.beta.kubernetes.io/aws-load-balancer-security-groups`: Used to specify the security groups to be added to ELB created. This replaces all other security groups previously assigned to the ELB. Security groups defined here should not be shared between services.
* `service.beta.kubernetes.io/aws-load-balancer-extra-security-groups`: Used on the service to specify additional security groups to be added to ELB created
* `service.beta.kubernetes.io/aws-load-balancer-internal`: Used on the service to indicate that we want an internal ELB.
* `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol`: Used on the service to enable the proxy protocol on an ELB. Right now we only accept the value `*` which means enabling the proxy protocol on all ELB backends. In the future we could adjust this to allow setting the proxy protocol only on certain backends.
* `service.beta.kubernetes.io/aws-load-balancer-ssl-ports`: Used on the service to specify a comma-separated list of ports that will use SSL/HTTPS listeners. Defaults to `*` (all)
The information for the annotations for AWS is taken from the comments on [aws.go](https://github.com/kubernetes/legacy-cloud-providers/blob/master/aws/aws.go)
## Azure
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-azure](https://github.com/kubernetes/cloud-provider-azure#readme)
### Node Name
The Azure cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
Note that the Kubernetes Node name must match the Azure VM name.
## GCE
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-gcp](https://github.com/kubernetes/cloud-provider-gcp#readme)
### Node Name
The GCE cloud provider uses the hostname of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
Note that the first segment of the Kubernetes Node name must match the GCE instance name (e.g. a Node named `kubernetes-node-2.c.my-proj.internal` must correspond to an instance named `kubernetes-node-2`).
## HUAWEI CLOUD
If you wish to use the external cloud provider, its repository is [kubernetes-sigs/cloud-provider-huaweicloud](https://github.com/kubernetes-sigs/cloud-provider-huaweicloud).
## OpenStack
This section describes all the possible configurations which can
be used when using OpenStack with Kubernetes.
If you wish to use the external cloud provider, its repository is [kubernetes/cloud-provider-openstack](https://github.com/kubernetes/cloud-provider-openstack#readme)
### Node Name
The OpenStack cloud provider uses the instance name (as determined from OpenStack metadata) as the name of the Kubernetes Node object.
Note that the instance name must be a valid Kubernetes Node name in order for the kubelet to successfully register its Node object.
### Services
The OpenStack cloud provider
implementation for Kubernetes supports the use of these OpenStack services from
the underlying cloud, where available:
| Service | API Version(s) | Required |
|--------------------------|----------------|----------|
| Block Storage (Cinder) | V1†, V2, V3 | No |
| Compute (Nova) | V2 | No |
| Identity (Keystone) | V2‡, V3 | Yes |
| Load Balancing (Neutron) | V1§, V2 | No |
| Load Balancing (Octavia) | V2 | No |
† Block Storage V1 API support is deprecated, Block Storage V3 API support was
added in Kubernetes 1.9.
‡ Identity V2 API support is deprecated and will be removed from the provider in
a future release. As of the "Queens" release, OpenStack will no longer expose the
Identity V2 API.
§ Load Balancing V1 API support was removed in Kubernetes 1.9.
Service discovery is achieved by listing the service catalog managed by
OpenStack Identity (Keystone) using the `auth-url` provided in the provider
configuration. The provider will gracefully degrade in functionality when
OpenStack services other than Keystone are not available and simply disclaim
support for impacted features. Certain features are also enabled or disabled
based on the list of extensions published by Neutron in the underlying cloud.
### cloud.conf
Kubernetes knows how to interact with OpenStack via the file cloud.conf. It is
the file that will provide Kubernetes with credentials and location for the OpenStack auth endpoint.
You can create a cloud.conf file by specifying the following details in it
#### Typical configuration
This is an example of a typical configuration that touches the values that most
often need to be set. It points the provider at the OpenStack cloud's Keystone
endpoint, provides details for how to authenticate with it, and configures the
load balancer:
```yaml
[Global]
username=user
password=pass
auth-url=https://<keystone_ip>/identity/v3
tenant-id=c869168a828847f39f7f06edd7305637
domain-id=2a73b8f597c04551a0fdc8e95544be8a
[LoadBalancer]
subnet-id=6937f8fa-858d-4bc9-a3a5-18d2c957166a
```
##### Global
These configuration options for the OpenStack provider pertain to its global
configuration and should appear in the `[Global]` section of the `cloud.conf`
file:
* `auth-url` (Required): The URL of the keystone API used to authenticate. On
OpenStack control panels, this can be found at Access and Security > API
Access > Credentials.
* `username` (Required): Refers to the username of a valid user set in keystone.
* `password` (Required): Refers to the password of a valid user set in keystone.
* `tenant-id` (Required): Used to specify the id of the project where you want
to create your resources.
* `tenant-name` (Optional): Used to specify the name of the project where you
want to create your resources.
* `trust-id` (Optional): Used to specify the identifier of the trust to use for
authorization. A trust represents a user's (the trustor) authorization to
delegate roles to another user (the trustee), and optionally allow the trustee
to impersonate the trustor. Available trusts are found under the
`/v3/OS-TRUST/trusts` endpoint of the Keystone API.
* `domain-id` (Optional): Used to specify the id of the domain your user belongs
to.
* `domain-name` (Optional): Used to specify the name of the domain your user
belongs to.
* `region` (Optional): Used to specify the identifier of the region to use when
running on a multi-region OpenStack cloud. A region is a general division of
an OpenStack deployment. Although a region does not have a strict geographical
connotation, a deployment can use a geographical name for a region identifier
such as `us-east`. Available regions are found under the `/v3/regions`
endpoint of the Keystone API.
* `ca-file` (Optional): Used to specify the path to your custom CA file.
When using Keystone V3 - which changes tenant to project - the `tenant-id` value
is automatically mapped to the project construct in the API.
##### Load Balancer
These configuration options for the OpenStack provider pertain to the load
balancer and should appear in the `[LoadBalancer]` section of the `cloud.conf`
file:
* `lb-version` (Optional): Used to override automatic version detection. Valid
values are `v1` or `v2`. Where no value is provided automatic detection will
select the highest supported version exposed by the underlying OpenStack
cloud.
* `use-octavia`(Optional): Whether or not to use Octavia for LoadBalancer type
of Service implementation instead of using Neutron-LBaaS. Default: true
Attention: the OpenStack CCM uses Octavia as the default load balancer implementation since v1.17.0.
* `subnet-id` (Optional): Used to specify the id of the subnet you want to
create your loadbalancer on. Can be found at Network > Networks. Click on the
respective network to get its subnets.
* `floating-network-id` (Optional): If specified, will create a floating IP for
the load balancer.
* `lb-method` (Optional): Used to specify an algorithm by which load will be
distributed amongst members of the load balancer pool. The value can be
`ROUND_ROBIN`, `LEAST_CONNECTIONS`, or `SOURCE_IP`. The default behavior if
none is specified is `ROUND_ROBIN`.
* `lb-provider` (Optional): Used to specify the provider of the load balancer.
If not specified, the default provider service configured in neutron will be
used.
* `create-monitor` (Optional): Indicates whether or not to create a health
monitor for the Neutron load balancer. Valid values are `true` and `false`.
The default is `false`. When `true` is specified then `monitor-delay`,
`monitor-timeout`, and `monitor-max-retries` must also be set.
* `monitor-delay` (Optional): The time between sending probes to
members of the load balancer. Ensure that you specify a valid time unit. The valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h"
* `monitor-timeout` (Optional): Maximum time for a monitor to wait
for a ping reply before it times out. The value must be less than the delay
value. Ensure that you specify a valid time unit. The valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h"
* `monitor-max-retries` (Optional): Number of permissible ping failures before
changing the load balancer member's status to INACTIVE. Must be a number
between 1 and 10.
* `manage-security-groups` (Optional): Determines whether or not the load
balancer should automatically manage the security group rules. Valid values
are `true` and `false`. The default is `false`. When `true` is specified
`node-security-group` must also be supplied.
* `node-security-group` (Optional): ID of the security group to manage.
##### Block Storage
These configuration options for the OpenStack provider pertain to block storage
and should appear in the `[BlockStorage]` section of the `cloud.conf` file:
* `bs-version` (Optional): Used to override automatic version detection. Valid
values are `v1`, `v2`, `v3` and `auto`. When `auto` is specified automatic
detection will select the highest supported version exposed by the underlying
OpenStack cloud. The default value if none is provided is `auto`.
* `trust-device-path` (Optional): In most scenarios the block device names
provided by Cinder (e.g. `/dev/vda`) can not be trusted. This boolean toggles
this behavior. Setting it to `true` results in trusting the block device names
provided by Cinder. The default value of `false` results in the discovery of
the device path based on its serial number and `/dev/disk/by-id` mapping and is
the recommended approach.
* `ignore-volume-az` (Optional): Used to influence availability zone use when
attaching Cinder volumes. When Nova and Cinder have different availability
zones, this should be set to `true`. This is most commonly the case where
there are many Nova availability zones but only one Cinder availability zone.
The default value is `false` to preserve the behavior used in earlier
releases, but may change in the future.
* `node-volume-attach-limit` (Optional): Maximum number of Volumes that can be
attached to the node, default is 256 for cinder.
If deploying Kubernetes versions <= 1.8 on an OpenStack deployment that uses
paths rather than ports to differentiate between endpoints it may be necessary
to explicitly set the `bs-version` parameter. A path based endpoint is of the
form `http://foo.bar/volume` while a port based endpoint is of the form
`http://foo.bar:xxx`.
In environments that use path-based endpoints while Kubernetes is using the older
auto-detection logic, a `BS API version autodetection failed.` error will be
returned when attempting volume detachment. To work around this issue, it is
possible to force the use of Cinder API version 2 by adding this to the cloud
provider configuration:
```yaml
[BlockStorage]
bs-version=v2
```
##### Metadata
These configuration options for the OpenStack provider pertain to metadata and
should appear in the `[Metadata]` section of the `cloud.conf` file:
* `search-order` (Optional): This configuration key influences the way that the
provider retrieves metadata relating to the instance(s) in which it runs. The
default value of `configDrive,metadataService` results in the provider
retrieving metadata relating to the instance from the config drive first if
available and then the metadata service. Alternative values are:
* `configDrive` - Only retrieve instance metadata from the configuration
drive.
* `metadataService` - Only retrieve instance metadata from the metadata
service.
* `metadataService,configDrive` - Retrieve instance metadata from the metadata
service first if available, then the configuration drive.
Influencing this behavior may be desirable as the metadata on the
configuration drive may grow stale over time, whereas the metadata service
always provides the most up to date view. Not all OpenStack clouds provide
both configuration drive and metadata service though and only one or the other
may be available which is why the default is to check both.
##### Route
These configuration options for the OpenStack provider pertain to the [kubenet]
Kubernetes network plugin and should appear in the `[Route]` section of the
`cloud.conf` file:
* `router-id` (Optional): If the underlying cloud's Neutron deployment supports
the `extraroutes` extension then use `router-id` to specify a router to add
routes to. The router chosen must span the private networks containing your
cluster nodes (typically there is only one node network, and this value should be
the default router for the node network). This value is required to use
[kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet)
on OpenStack.
[kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet
## vSphere
{{< tabs name="vSphere cloud provider" >}}
{{% tab name="vSphere >= 6.7U3" %}}
For all vSphere deployments on vSphere >= 6.7U3, the [external vSphere cloud provider](https://github.com/kubernetes/cloud-provider-vsphere), along with the [vSphere CSI driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) is recommended. See [Deploying a Kubernetes Cluster on vSphere with CSI and CPI](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html) for a quick start guide.
{{% /tab %}}
{{% tab name="vSphere < 6.7U3" %}}
If you are running vSphere < 6.7U3, the in-tree vSphere cloud provider is recommended. See [Running a Kubernetes Cluster on vSphere with kubeadm](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/k8s-vcp-on-vsphere-with-kubeadm.html) for a quick start guide.
{{% /tab %}}
{{< /tabs >}}
For in-depth documentation on the vSphere cloud provider, visit the [vSphere cloud provider docs site](https://cloud-provider-vsphere.sigs.k8s.io).

View File

@ -495,7 +495,7 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
system, system-nodes, 12, 0, system:node:127.0.0.1, 2020-07-23T15:26:57.179170694Z,
```
In addition to the queued requests, the output includeas one phantom line for each priority level that is exempt from limitation.
In addition to the queued requests, the output includes one phantom line for each priority level that is exempt from limitation.
You can get a more detailed listing with a command like this:
```shell

View File

@ -79,6 +79,8 @@ as an introduction to various technologies and serves as a jumping-off point.
The following networking options are sorted alphabetically - the order does not
imply any preferential status.
{{% thirdparty-content %}}
### ACI
[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers. [ACI](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI. An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf).
@ -112,7 +114,7 @@ Additionally, the CNI can be run alongside [Calico for network policy enforcemen
[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) is an [open source](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as VNet) providing network performance at par with VMs. Pods can connect to peered VNet and to on-premises over Express Route or site-to-site VPN and are also directly reachable from these networks. Pods can access Azure services, such as storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the Network Interface of a Kubernetes node.
Azure CNI is available natively in the [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).
### Big Cloud Fabric from Big Switch Networks
@ -313,5 +315,4 @@ to run, and in both cases, the network provides one IP address per pod - as is s
The early design of the networking model and its rationale, and some future
plans are described in more detail in the
[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).

View File

@ -213,6 +213,9 @@ when new keys are projected to the Pod can be as long as the kubelet sync period
propagation delay, where the cache propagation delay depends on the chosen cache type
(it equals the watch propagation delay, the TTL of the cache, or zero, respectively).
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
## Immutable ConfigMaps {#configmap-immutable}
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
@ -224,9 +227,10 @@ data has the following advantages:
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
closing watches for config maps marked as immutable.
To use this feature, enable the `ImmutableEphemeralVolumes`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set
your Secret or ConfigMap `immutable` field to `true`. For example:
This feature is controlled by the `ImmutableEphemeralVolumes` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/),
which is enabled by default since v1.19. You can create an immutable
ConfigMap by setting the `immutable` field to `true`. For example,
```yaml
apiVersion: v1
kind: ConfigMap

View File

@ -134,7 +134,6 @@ spec:
containers:
- name: app
image: images.my-company.example/app:v4
env:
resources:
requests:
memory: "64Mi"

View File

@ -37,6 +37,12 @@ its containers.
- As [container environment variable](#using-secrets-as-environment-variables).
- By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod.
The name of a Secret object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
The keys of `data` and `stringData` must consist of alphanumeric characters,
`-`, `_` or `.`.
### Built-in Secrets
#### Service accounts automatically create and attach Secrets with API credentials
@ -52,401 +58,15 @@ this is the recommended workflow.
See the [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/)
documentation for more information on how service accounts work.
### Creating your own Secrets
### Creating a Secret
#### Creating a Secret Using `kubectl`
There are several options to create a Secret:
Secrets can contain user credentials required by Pods to access a database.
For example, a database connection string
consists of a username and password. You can store the username in a file `./username.txt`
and the password in a file `./password.txt` on your local machine.
- [create Secret using `kubectl` command](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- [create Secret from config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- [create Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
```shell
# Create files needed for the rest of the example.
echo -n 'admin' > ./username.txt
echo -n '1f2d1e2e67df' > ./password.txt
```
The `kubectl create secret` command packages these files into a Secret and creates
the object on the API server.
The name of a Secret object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
```shell
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
```
The output is similar to:
```
secret "db-user-pass" created
```
Default key name is the filename. You may optionally set the key name using `[--from-file=[key=]source]`.
```shell
kubectl create secret generic db-user-pass --from-file=username=./username.txt --from-file=password=./password.txt
```
{{< note >}}
Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping.
In most shells, the easiest way to escape the password is to surround it with single quotes (`'`).
For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way:
```shell
kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb='
```
You do not need to escape special characters in passwords from files (`--from-file`).
{{< /note >}}
You can check that the secret was created:
```shell
kubectl get secrets
```
The output is similar to:
```
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
```
You can view a description of the secret:
```shell
kubectl describe secrets/db-user-pass
```
The output is similar to:
```
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password.txt: 12 bytes
username.txt: 5 bytes
```
{{< note >}}
The commands `kubectl get` and `kubectl describe` avoid showing the contents of a secret by
default. This is to protect the secret from being exposed accidentally to an onlooker,
or from being stored in a terminal log.
{{< /note >}}
See [decoding a secret](#decoding-a-secret) to learn how to view the contents of a secret.
#### Creating a Secret manually
You can also create a Secret in a file first, in JSON or YAML format,
and then create that object.
The name of a Secret object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
contains two maps:
`data` and `stringData`. The `data` field is used to store arbitrary data, encoded using
base64. The `stringData` field is provided for convenience, and allows you to provide
secret data as unencoded strings.
For example, to store two strings in a Secret using the `data` field, convert
the strings to base64 as follows:
```shell
echo -n 'admin' | base64
```
The output is similar to:
```
YWRtaW4=
```
```shell
echo -n '1f2d1e2e67df' | base64
```
The output is similar to:
```
MWYyZDFlMmU2N2Rm
```
Write a Secret that looks like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
```
Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply):
```shell
kubectl apply -f ./secret.yaml
```
The output is similar to:
```
secret "mysecret" created
```
For certain scenarios, you may wish to use the `stringData` field instead. This
field allows you to put a non-base64 encoded string directly into the Secret,
and the string will be encoded for you when the Secret is created or updated.
A practical example of this might be where you are deploying an application
that uses a Secret to store a configuration file, and you want to populate
parts of that configuration file during your deployment process.
For example, if your application uses the following configuration file:
```yaml
apiUrl: "https://my.api.com/api/v1"
username: "user"
password: "password"
```
You could store this in a Secret using the following definition:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
stringData:
config.yaml: |-
apiUrl: "https://my.api.com/api/v1"
username: {{username}}
password: {{password}}
```
Your deployment tool could then replace the `{{username}}` and `{{password}}`
template variables before running `kubectl apply`.
The `stringData` field is a write-only convenience field. It is never output when
retrieving Secrets. For example, if you run the following command:
```shell
kubectl get secret mysecret -o yaml
```
The output is similar to:
```yaml
apiVersion: v1
kind: Secret
metadata:
creationTimestamp: 2018-11-15T20:40:59Z
name: mysecret
namespace: default
resourceVersion: "7225"
uid: c280ad2e-e916-11e8-98f2-025000000001
type: Opaque
data:
config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19
```
If a field, such as `username`, is specified in both `data` and `stringData`,
the value from `stringData` is used. For example, the following Secret definition:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
stringData:
username: administrator
```
Results in the following Secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
creationTimestamp: 2018-11-15T20:46:46Z
name: mysecret
namespace: default
resourceVersion: "7579"
uid: 91460ecb-e917-11e8-98f2-025000000001
type: Opaque
data:
username: YWRtaW5pc3RyYXRvcg==
```
Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`.
The keys of `data` and `stringData` must consist of alphanumeric characters,
'-', '_' or '.'.
{{< note >}}
The serialized JSON and YAML values of secret data are
encoded as base64 strings. Newlines are not valid within these strings and must
be omitted. When using the `base64` utility on Darwin/macOS, users should avoid
using the `-b` option to split long lines. Conversely, Linux users *should* add
the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if
the `-w` option is not available.
{{< /note >}}
#### Creating a Secret from a generator
Since Kubernetes v1.14, `kubectl` supports [managing objects using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/). Kustomize provides resource Generators to
create Secrets and ConfigMaps. The Kustomize generators should be specified in a
`kustomization.yaml` file inside a directory. After generating the Secret,
you can create the Secret on the API server with `kubectl apply`.
#### Generating a Secret from files
You can generate a Secret by defining a `secretGenerator` from the
files ./username.txt and ./password.txt:
```shell
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
files:
- username.txt
- password.txt
EOF
```
Apply the directory, containing the `kustomization.yaml`, to create the Secret.
```shell
kubectl apply -k .
```
The output is similar to:
```
secret/db-user-pass-96mffmfh4k created
```
You can check that the secret was created:
```shell
kubectl get secrets
```
The output is similar to:
```
NAME TYPE DATA AGE
db-user-pass-96mffmfh4k Opaque 2 51s
```
You can view a description of the secret:
```shell
kubectl describe secrets/db-user-pass-96mffmfh4k
```
The output is similar to:
```
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password.txt: 12 bytes
username.txt: 5 bytes
```
#### Generating a Secret from string literals
You can create a Secret by defining a `secretGenerator`
from literals `username=admin` and `password=secret`:
```shell
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
literals:
- username=admin
- password=secret
EOF
```
Apply the directory, containing the `kustomization.yaml`, to create the Secret.
```shell
kubectl apply -k .
```
The output is similar to:
```
secret/db-user-pass-dddghtt9b5 created
```
{{< note >}}
When a Secret is generated, the Secret name is created by hashing
the Secret data and appending the resulting hash to the name. This ensures that
a new Secret is generated each time the data is modified.
{{< /note >}}
#### Decoding a Secret
Secrets can be retrieved by running `kubectl get secret`.
For example, you can view the Secret created in the previous section by
running the following command:
```shell
kubectl get secret mysecret -o yaml
```
The output is similar to:
```yaml
apiVersion: v1
kind: Secret
metadata:
creationTimestamp: 2016-01-22T18:41:56Z
name: mysecret
namespace: default
resourceVersion: "164619"
uid: cfee02d6-c137-11e5-8d73-42010af00002
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
```
Decode the `password` field:
```shell
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
```
The output is similar to:
```
1f2d1e2e67df
```
#### Editing a Secret
### Editing a Secret
An existing Secret may be edited with the following command:
@ -717,37 +337,6 @@ A container using a Secret as a
Secret updates.
{{< /note >}}
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets
(at least tens of thousands of unique Secret to Pod mounts), preventing changes to their
data has the following advantages:
- protects you from accidental (or unwanted) updates that could cause application outages
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
closing watches for secrets marked as immutable.
To use this feature, enable the `ImmutableEphemeralVolumes`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set
your Secret or ConfigMap `immutable` field to `true`. For example:
```yaml
apiVersion: v1
kind: Secret
metadata:
...
data:
...
immutable: true
```
{{< note >}}
Once a Secret or ConfigMap is marked as immutable, it is _not_ possible to revert this change
nor to mutate the contents of the `data` field. You can only delete and recreate the Secret.
Existing Pods maintain a mount point to the deleted Secret - it is recommended to recreate
these pods.
{{< /note >}}
### Using Secrets as environment variables
To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
@ -808,6 +397,40 @@ The output is similar to:
1f2d1e2e67df
```
## Immutable Secrets {#secret-immutable}
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
The Kubernetes beta feature _Immutable Secrets and ConfigMaps_ provides an option to set
individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets
(at least tens of thousands of unique Secret to Pod mounts), preventing changes to their
data has the following advantages:
- protects you from accidental (or unwanted) updates that could cause application outages
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
closing watches for secrets marked as immutable.
This feature is controlled by the `ImmutableEphemeralVolumes` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/),
which is enabled by default since v1.19. You can create an immutable
Secret by setting the `immutable` field to `true`. For example,
```yaml
apiVersion: v1
kind: Secret
metadata:
...
data:
...
immutable: true
```
{{< note >}}
Once a Secret or ConfigMap is marked as immutable, it is _not_ possible to revert this change
nor to mutate the contents of the `data` field. You can only delete and recreate the Secret.
Existing Pods maintain a mount point to the deleted Secret - it is recommended to recreate
these pods.
{{< /note >}}
### Using imagePullSecrets
The `imagePullSecrets` field is a list of references to secrets in the same namespace.
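As a sketch, a Pod that pulls from a private registry can reference such a secret as shown below; the image and secret names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod
spec:
  containers:
    - name: app
      # hypothetical image hosted on a private registry
      image: registry.example.com/app:v1
  imagePullSecrets:
    # name of an existing docker-registry type Secret in the same namespace
    - name: myregistrykey
```

The kubelet passes the referenced credentials to the container runtime when pulling the image.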
@ -1272,3 +895,11 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa
by impersonating the kubelet. It is a planned feature to only send secrets to
nodes that actually require them, to restrict the impact of a root exploit on a
single node.
## {{% heading "whatsnext" %}}
- Learn how to [manage Secret using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)

View File

@ -31,7 +31,7 @@ and default values for any essential settings.
By design, a container is immutable: you cannot change the code of a
container that is already running. If you have a containerized application
and want to make changes, you need to build a new container that includes
and want to make changes, you need to build a new image that includes
the change, then recreate the container to start from the updated image.
## Container runtimes

View File

@ -63,7 +63,7 @@ When `imagePullPolicy` is defined without a specific value, it is also set to `A
## Multi-architecture Images with Manifests
As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of an container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes.

View File

@ -16,30 +16,23 @@ card:
The core of Kubernetes' {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
is the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}. The API server
exposes an HTTP API that lets end users, different parts of your cluster, and external components
communicate with one another.
exposes an HTTP API that lets end users, different parts of your cluster, and
external components communicate with one another.
The Kubernetes API lets you query and manipulate the state of objects in the Kubernetes API
(for example: Pods, Namespaces, ConfigMaps, and Events).
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
Most operations can be performed through the
[kubectl](/docs/reference/kubectl/overview/) command-line interface or other
command-line tools, such as
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), which in turn use the
API. However, you can also access the API directly using REST calls.
Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/)
if you are writing an application using the Kubernetes API.
<!-- body -->
## API changes
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes has design features to allow the Kubernetes API to continuously change and grow.
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
## OpenAPI specification {#api-specification}
Complete API details are documented using [OpenAPI](https://www.openapis.org/).
@ -78,95 +71,58 @@ You can request the response format using request headers as follows:
<caption>Valid request header values for OpenAPI v2 queries</caption>
</table>
Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
Kubernetes implements an alternative Protobuf based serialization format that
is primarily intended for intra-cluster communication. For more information
about this format, see the [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go
packages that define the API objects.
## API versioning
## API changes
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports
multiple API versions, each at a different API path, such as `/api/v1` or
`/apis/rbac.authorization.k8s.io/v1alpha1`.
Any system that is successful needs to grow and change as new use cases emerge or existing ones change.
Therefore, Kubernetes has design features that allow the Kubernetes API to continuously change and grow.
The Kubernetes project aims to _not_ break compatibility with existing clients, and to maintain that
compatibility for a length of time so that other projects have an opportunity to adapt.
Versioning is done at the API level rather than at the resource or field level to ensure that the
API presents a clear, consistent view of system resources and behavior, and to enable controlling
access to end-of-life and/or experimental APIs.
In general, new API resources and new resource fields can be added often and frequently.
Elimination of resources or fields requires following the
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
The JSON and Protobuf serialization schemas follow the same guidelines for schema changes - all descriptions below cover both formats.
What constitutes a compatible change, and how to change the API, are detailed in
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
Note that API versioning and software versioning are only indirectly related. The
[Kubernetes Release Versioning](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)
proposal describes the relationship between API versioning and software versioning.
## API groups and versioning
Different API versions imply different levels of stability and support. The criteria for each level are described
in more detail in the
[API Changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)
documentation. They are summarized here:
To make it easier to eliminate fields or restructure resource representations,
Kubernetes supports multiple API versions, each at a different API path, such
as `/api/v1` or `/apis/rbac.authorization.k8s.io/v1alpha1`.
- Alpha level:
- The version names contain `alpha` (e.g. `v1alpha1`).
- May be buggy. Enabling the feature may expose bugs. Disabled by default.
- Support for feature may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
- Beta level:
- The version names contain `beta` (e.g. `v2beta3`).
- Code is well tested. Enabling the feature is considered safe. Enabled by default.
- Support for the overall feature will not be dropped, though details may change.
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
multiple clusters which can be upgraded independently, you may be able to relax this restriction.
- **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
- Stable level:
- The version name is `vX` where `X` is an integer.
- Stable versions of features will appear in released software for many subsequent versions.
Versioning is done at the API level rather than at the resource or field level
to ensure that the API presents a clear, consistent view of system resources
and behavior, and to enable controlling access to end-of-life and/or
experimental APIs.
## API groups
Refer to [API versions reference](/docs/reference/using-api/api-overview/#api-versioning)
for more details on the API version level definitions.
To make it easier to extend its API, Kubernetes implements [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
The API group is specified in a REST path and in the `apiVersion` field of a serialized object.
To make it easier to evolve and to extend its API, Kubernetes implements
[API groups](/docs/reference/using-api/api-overview/#api-groups) that can be
[enabled or disabled](/docs/reference/using-api/api-overview/#enabling-or-disabling).
There are several API groups in a cluster:
## API Extension
1. The *core* group, also referred to as the *legacy* group, is at the REST path `/api/v1` and uses `apiVersion: v1`.
1. *Named* groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION`
(e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a
full list of available API groups.
There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/):
1. [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
lets you declaratively define how the API server should provide your chosen resource API.
1. You can also [implement your own extension API server](/docs/tasks/extend-kubernetes/setup-extension-api-server/)
and use the [aggregator](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
to make it seamless for clients.
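For example, a minimal CustomResourceDefinition for the first path might look like the following sketch; the `example.com` group and `CronTab` kind are placeholders:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must match <plural>.<group>
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```

Once such a CustomResourceDefinition is created, the API server serves the new resource under `/apis/example.com/v1/`.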
## Enabling or disabling API groups
Certain resources and API groups are enabled by default. They can be enabled or disabled by setting `--runtime-config`
as a command line option to the kube-apiserver.
`--runtime-config` accepts comma separated values. For example: to disable batch/v1, set
`--runtime-config=batch/v1=false`; to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`.
The flag accepts a comma-separated set of key=value pairs describing the runtime configuration of the API server.
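On clusters where the API server runs as a static Pod (for example, clusters set up with kubeadm), the flag is typically added to the API server manifest. The fragment below is only a sketch; the file path and surrounding fields are assumptions:

```yaml
# fragment of an assumed /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --runtime-config=batch/v2alpha1
        # ... other existing flags unchanged ...
```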
{{< note >}}Enabling or disabling groups or resources requires restarting the kube-apiserver and the
kube-controller-manager to pick up the `--runtime-config` changes.{{< /note >}}
## Persistence
Kubernetes stores its serialized state in terms of the API resources by writing them into
{{< glossary_tooltip term_id="etcd" >}}.
The Kubernetes API can be extended in one of two ways:
1. [Custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
let you declaratively define how the API server should provide your chosen resource API.
1. You can also extend the Kubernetes API by implementing an
[aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
## {{% heading "whatsnext" %}}
[Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes
how the cluster manages authentication and authorization for API access.
Overall API conventions are described in the
[API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)
document.
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
- Learn how to extend the Kubernetes API by adding your own
[CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
- [Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) describes
how the cluster manages authentication and authorization for API access.
- Learn about API endpoints, resource types and samples by reading
[API Reference](/docs/reference/kubernetes-api/).

View File

@ -208,6 +208,19 @@ field in the quota spec.
A quota is matched and consumed only if `scopeSelector` in the quota spec selects the pod.
When quota is scoped for priority class using the `scopeSelector` field, the quota object is restricted to track only the following resources:
* `pods`
* `cpu`
* `memory`
* `ephemeral-storage`
* `limits.cpu`
* `limits.memory`
* `limits.ephemeral-storage`
* `requests.cpu`
* `requests.memory`
* `requests.ephemeral-storage`
This example creates a quota object and matches it with pods at specific priorities. The example
works as follows:

View File

@ -73,17 +73,7 @@ verify that it worked by running `kubectl get pods -o wide` and looking at the
## Interlude: built-in node labels {#built-in-node-labels}
In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated
with a standard set of labels. These labels are
* [`kubernetes.io/hostname`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname)
* [`failure-domain.beta.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone)
* [`failure-domain.beta.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion)
* [`topology.kubernetes.io/zone`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
* [`topology.kubernetes.io/region`](/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
* [`beta.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type)
* [`node.kubernetes.io/instance-type`](/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
* [`kubernetes.io/os`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os)
* [`kubernetes.io/arch`](/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch)
with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/) for a list of these.
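As a sketch, a Pod can select nodes using these built-in labels in its `nodeSelector`; the zone value below is only an example and varies by provider:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    # built-in labels; the zone value is illustrative
    kubernetes.io/os: linux
    topology.kubernetes.io/zone: us-east-1a
```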
{{< note >}}
The value of these labels is cloud provider specific and is not guaranteed to be reliable.

View File

@ -0,0 +1,23 @@
---
title: Eviction Policy
content_template: templates/concept
weight: 60
---
<!-- overview -->
This page is an overview of Kubernetes' policy for eviction.
<!-- body -->
## Eviction Policy
The {{< glossary_tooltip text="Kubelet" term_id="kubelet" >}} can proactively monitor for and prevent total starvation of a
compute resource. In those cases, the `kubelet` can reclaim the starved
resource by proactively failing one or more Pods. When the `kubelet` fails
a Pod, it terminates all of its containers and transitions its `PodPhase` to `Failed`.
If the evicted Pod is managed by a Deployment, the Deployment will create another Pod
to be scheduled by Kubernetes.
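Eviction thresholds are configured on the kubelet. As a sketch, assuming the kubelet is started with a configuration file, hard eviction thresholds can be set like this (the values shown are examples only):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# evict Pods when these signals cross the thresholds below
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
```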
## {{% heading "whatsnext" %}}
- Read [Configure out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) to learn more about eviction signals, thresholds, and handling.

View File

@ -33,7 +33,7 @@ kube-scheduler is designed so that, if you want and need to, you can
write your own scheduling component and use that instead.
For every newly created pod or other unscheduled pods, kube-scheduler
selects an optimal node for them to run on. However, every container in
selects an optimal node for them to run on. However, every container in
pods has different requirements for resources and every pod also has
different requirements. Therefore, existing nodes need to be filtered
according to the specific scheduling requirements.
@ -77,12 +77,9 @@ one of these at random.
There are two supported ways to configure the filtering and scoring behavior
of the scheduler:
1. [Scheduling Policies](/docs/reference/scheduling/policies) allow you to
configure _Predicates_ for filtering and _Priorities_ for scoring.
1. [Scheduling Profiles](/docs/reference/scheduling/config/#profiles) allow you
to configure Plugins that implement different scheduling stages, including:
`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You
can also configure the kube-scheduler to run different profiles.
1. [Scheduling Policies](/docs/reference/scheduling/policies) allow you to configure _Predicates_ for filtering and _Priorities_ for scoring.
1. [Scheduling Profiles](/docs/reference/scheduling/profiles) allow you to configure Plugins that implement different scheduling stages, including: `QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You can also configure the kube-scheduler to run different profiles.
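As a sketch of the second approach, a kube-scheduler configuration with two profiles might look like the following, assuming the `kubescheduler.config.k8s.io/v1beta1` API; the second profile name is illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  # the default profile, unchanged
  - schedulerName: default-scheduler
  # an illustrative second profile that disables all scoring plugins
  - schedulerName: no-scoring-scheduler
    plugins:
      score:
        disabled:
          - name: '*'
```

Pods choose a profile by setting `.spec.schedulerName` to the corresponding scheduler name.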
## {{% heading "whatsnext" %}}

View File

@ -3,7 +3,7 @@ reviewers:
- bsalamat
title: Scheduler Performance Tuning
content_type: concept
weight: 70
weight: 80
---
<!-- overview -->
@ -48,17 +48,13 @@ To change the value, edit the kube-scheduler configuration file (this is likely
to be `/etc/kubernetes/config/kube-scheduler.yaml`), then restart the scheduler.
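As a sketch, a configuration file of this kind sets the `percentageOfNodesToScore` option described on this page; the kubeconfig path below is an assumption:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  # assumed path to the scheduler's kubeconfig
  kubeconfig: /etc/kubernetes/scheduler.conf
percentageOfNodesToScore: 50
```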
After you have made this change, you can run
```bash
kubectl get componentstatuses
```
to verify that the kube-scheduler component is healthy. The output is similar to:
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
...
kubectl get pods -n kube-system | grep kube-scheduler
```
to verify that the kube-scheduler component is healthy.
## Node scoring threshold {#percentage-of-nodes-to-score}
To improve scheduling performance, the kube-scheduler can stop looking for

View File

@ -3,7 +3,7 @@ reviewers:
- ahg-g
title: Scheduling Framework
content_type: concept
weight: 60
weight: 70
---
<!-- overview -->

View File

@ -292,7 +292,7 @@ Containers at runtime. Security contexts are defined as part of the Pod and cont
in the Pod manifest, and represent parameters to the container runtime.
Security policies are control plane mechanisms to enforce specific settings in the Security Context,
as well as other parameters outside the Security Contex. As of February 2020, the current native
as well as other parameters outside the Security Context. As of February 2020, the current native
solution for enforcing these security policies is [Pod Security
Policy](/docs/concepts/policy/pod-security-policy/) - a mechanism for centrally enforcing security
policy on Pods across a cluster. Other alternatives for enforcing security policy are being

View File

@ -47,6 +47,7 @@ To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/
* kube-apiserver:
* `--feature-gates="IPv6DualStack=true"`
* `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
* kube-controller-manager:
* `--feature-gates="IPv6DualStack=true"`
* `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`

View File

@ -23,7 +23,7 @@ Endpoints.
The Endpoints API has provided a simple and straightforward way of
tracking network endpoints in Kubernetes. Unfortunately as Kubernetes clusters
and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle
and {{< glossary_tooltip text="Services" term_id="service" >}} have grown to handle and
send more traffic to more backend Pods, limitations of that original API became
more visible.
Most notably, those included challenges with scaling to larger numbers of
@ -114,8 +114,8 @@ of the labels with the same names on the corresponding Node.
Most often, the control plane (specifically, the endpoint slice
{{< glossary_tooltip text="controller" term_id="controller" >}}) creates and
manages EndpointSlice objects. There are a variety of other use cases for
EndpointSlices, such as service mesh implementations, that could result in othe
rentities or controllers managing additional sets of EndpointSlices.
EndpointSlices, such as service mesh implementations, that could result in other
entities or controllers managing additional sets of EndpointSlices.
To ensure that multiple entities can manage EndpointSlices without interfering
with each other, Kubernetes defines the

View File

@ -29,15 +29,26 @@ For clarity, this guide defines the following terms:
{{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
```none
internet
|
[ Ingress ]
--|-----|--
[ Services ]
```
Here is a simple example where an Ingress sends all its traffic to one Service:
{{< mermaid >}}
graph LR;
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress];
ingress-->|routing rule|service[Service];
subgraph cluster
ingress;
service-->pod1[Pod];
service-->pod2[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service,pod1,pod2 k8s;
class client plain;
class cluster cluster;
{{</ mermaid >}}
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) or
@ -45,7 +56,7 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin
## Prerequisites
You must have an [ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect.
You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect.
You may need to deploy an Ingress controller such as [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/). You can choose from a number of
[Ingress controllers](/docs/concepts/services-networking/ingress-controllers).
@ -107,7 +118,7 @@ routed to your default backend.
### Resource backends {#resource-backend}
A `Resource` backend is an ObjectRef to another Kubernetes resource within the
same namespace of the Ingress object. A `Resource` is a mutually exclusive
same namespace as the Ingress object. A `Resource` is a mutually exclusive
setting with Service, and will fail validation if both are specified. A common
usage for a `Resource` backend is to ingress data to an object storage backend
with static assets.
@ -235,7 +246,7 @@ IngressClass resource will ensure that new Ingresses without an
If you have more than one IngressClass marked as the default for your cluster,
the admission controller prevents creating new Ingress objects that don't have
an `ingressClassName` specified. You can resolve this by ensuring that at most 1
IngressClasses are marked as default in your cluster.
IngressClass is marked as default in your cluster.
{{< /caution >}}
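As a sketch, an IngressClass is marked as the cluster default with the `ingressclass.kubernetes.io/is-default-class` annotation; the class and controller names below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: example-class                   # placeholder name
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/ingress-controller   # placeholder controller name
```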
## Types of Ingress
@ -274,10 +285,25 @@ A fanout configuration routes traffic from a single IP address to more than one
based on the HTTP URI being requested. An Ingress allows you to keep the number of load balancers
down to a minimum. For example, a setup like:
```
foo.bar.com -> 178.91.123.132 -> / foo service1:4200
/ bar service2:8080
```
{{< mermaid >}}
graph LR;
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress, 178.91.123.132];
ingress-->|/foo|service1[Service service1:4200];
ingress-->|/bar|service2[Service service2:8080];
subgraph cluster
ingress;
service1-->pod1[Pod];
service1-->pod2[Pod];
service2-->pod3[Pod];
service2-->pod4[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s;
class client plain;
class cluster cluster;
{{</ mermaid >}}
would require an Ingress such as:
@ -321,11 +347,26 @@ you are using, you may need to create a default-http-backend
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
```none
foo.bar.com --| |-> foo.bar.com service1:80
| 178.91.123.132 |
bar.foo.com --| |-> bar.foo.com service2:80
```
{{< mermaid >}}
graph LR;
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress, 178.91.123.132];
ingress-->|Host: foo.bar.com|service1[Service service1:80];
ingress-->|Host: bar.foo.com|service2[Service service2:80];
subgraph cluster
ingress;
service1-->pod1[Pod];
service1-->pod2[Pod];
service2-->pod3[Pod];
service2-->pod4[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service1,service2,pod1,pod2,pod3,pod4 k8s;
class client plain;
class cluster cluster;
{{</ mermaid >}}
The following Ingress tells the backing load balancer to route requests based on
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
@ -491,9 +532,8 @@ You can achieve the same outcome by invoking `kubectl replace -f` on a modified
## Failing across availability zones
Techniques for spreading traffic across failure domains differs between cloud providers.
Techniques for spreading traffic across failure domains differ between cloud providers.
Please check the documentation of the relevant [Ingress controller](/docs/concepts/services-networking/ingress-controllers) for details.
for details on deploying Ingress in a federated cluster.
## Alternatives
@ -509,4 +549,3 @@ You can expose a Service in multiple ways that don't directly involve the Ingres
* Learn about the [Ingress API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1beta1-networking-k8s-io)
* Learn about [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/)
* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube/)

View File

@ -24,8 +24,8 @@ and can load-balance across them.
## Motivation
Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal.
They are born and when they die, they are not resurrected.
Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are created and destroyed
to match the state of your cluster. Pods are nonpermanent resources.
If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app,
it can create and destroy Pods dynamically.
@ -45,9 +45,9 @@ Enter _Services_.
In Kubernetes, a Service is an abstraction which defines a logical set of Pods
and a policy by which to access them (sometimes this pattern is called
a micro-service). The set of Pods targeted by a Service is usually determined
by a {{< glossary_tooltip text="selector" term_id="selector" >}}
(see [below](#services-without-selectors) for why you might want a Service
_without_ a selector).
by a {{< glossary_tooltip text="selector" term_id="selector" >}}.
To learn about other ways to define Service endpoints,
see [Services _without_ selectors](#services-without-selectors).
For example, consider a stateless image-processing backend which is running with
3 replicas. Those replicas are fungible&mdash;frontends do not care which backend
@ -129,12 +129,12 @@ Services most commonly abstract access to Kubernetes Pods, but they can also
abstract other kinds of backends.
For example:
* You want to have an external database cluster in production, but in your
test environment you use your own databases.
* You want to point your Service to a Service in a different
{{< glossary_tooltip term_id="namespace" >}} or on another cluster.
* You are migrating a workload to Kubernetes. Whilst evaluating the approach,
you run only a proportion of your backends in Kubernetes.
* You want to have an external database cluster in production, but in your
test environment you use your own databases.
* You want to point your Service to a Service in a different
{{< glossary_tooltip term_id="namespace" >}} or on another cluster.
* You are migrating a workload to Kubernetes. While evaluating the approach,
you run only a proportion of your backends in Kubernetes.
In any of these scenarios you can define a Service _without_ a Pod selector.
For example:
@ -151,7 +151,7 @@ spec:
targetPort: 9376
```
Because this Service has no selector, the corresponding Endpoint object is *not*
Because this Service has no selector, the corresponding Endpoint object is not
created automatically. You can manually map the Service to the network address and port
where it's running, by adding an Endpoint object manually:
@ -188,6 +188,7 @@ selectors and uses DNS names instead. For more information, see the
[ExternalName](#externalname) section later in this document.
### EndpointSlices
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
EndpointSlices are an API resource that can provide a more scalable alternative
@ -204,9 +205,8 @@ described in detail in [EndpointSlices](/docs/concepts/services-networking/endpo
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
The AppProtocol field provides a way to specify an application protocol to be
used for each Service port. The value of this field is mirrored by corresponding
Endpoints and EndpointSlice resources.
The `AppProtocol` field provides a way to specify an application protocol for each Service port.
The value of this field is mirrored by corresponding Endpoints and EndpointSlice resources.
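As a sketch, the field is set per port in the Service spec; the selector and port numbers here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      # application protocol hint, mirrored into Endpoints and EndpointSlices
      appProtocol: http
      port: 80
      targetPort: 9376
```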
## Virtual IPs and service proxies
@ -224,20 +224,19 @@ resolution?
There are a few reasons for using proxying for Services:
* There is a long history of DNS implementations not respecting record TTLs,
and caching the results of name lookups after they should have expired.
* Some apps do DNS lookups only once and cache the results indefinitely.
* Even if apps and libraries did proper re-resolution, the low or zero TTLs
on the DNS records could impose a high load on DNS that then becomes
difficult to manage.
* There is a long history of DNS implementations not respecting record TTLs,
and caching the results of name lookups after they should have expired.
* Some apps do DNS lookups only once and cache the results indefinitely.
* Even if apps and libraries did proper re-resolution, the low or zero TTLs
on the DNS records could impose a high load on DNS that then becomes
difficult to manage.
### User space proxy mode {#proxy-mode-userspace}
In this mode, kube-proxy watches the Kubernetes master for the addition and
removal of Service and Endpoint objects. For each Service it opens a
port (randomly chosen) on the local node. Any connections to this "proxy port"
are
proxied to one of the Service's backend Pods (as reported via
are proxied to one of the Service's backend Pods (as reported via
Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into
account when deciding which backend Pod to use.
@ -255,7 +254,7 @@ In this mode, kube-proxy watches the Kubernetes control plane for the addition a
removal of Service and Endpoint objects. For each Service, it installs
iptables rules, which capture traffic to the Service's `clusterIP` and `port`,
and redirect that traffic to one of the Service's
backend sets. For each Endpoint object, it installs iptables rules which
backend sets. For each Endpoint object, it installs iptables rules which
select a backend Pod.
By default, kube-proxy in iptables mode chooses a backend at random.
@ -298,12 +297,12 @@ higher throughput of network traffic.
IPVS provides more options for balancing traffic to backend Pods;
these are:
- `rr`: round-robin
- `lc`: least connection (smallest number of open connections)
- `dh`: destination hashing
- `sh`: source hashing
- `sed`: shortest expected delay
- `nq`: never queue
* `rr`: round-robin
* `lc`: least connection (smallest number of open connections)
* `dh`: destination hashing
* `sh`: source hashing
* `sed`: shortest expected delay
* `nq`: never queue
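As a sketch, when kube-proxy is driven by a configuration file, the proxy mode and IPVS scheduler can be selected like this (the values shown are examples):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # one of the scheduling algorithms listed above
  scheduler: "rr"
```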
{{< note >}}
To run kube-proxy in IPVS mode, you must make IPVS available on
@ -389,7 +388,7 @@ compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
For example, the Service `"redis-master"` which exposes TCP port 6379 and has been
For example, the Service `redis-master` which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11, produces the following environment
variables:
@ -423,19 +422,19 @@ Services and creates a set of DNS records for each one. If DNS has been enabled
throughout your cluster then all Pods should automatically be able to resolve
Services by their DNS name.
For example, if you have a Service called `"my-service"` in a Kubernetes
Namespace `"my-ns"`, the control plane and the DNS Service acting together
create a DNS record for `"my-service.my-ns"`. Pods in the `"my-ns"` Namespace
For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
should be able to find it by simply doing a name lookup for `my-service`
(`"my-service.my-ns"` would also work).
(`my-service.my-ns` would also work).
Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
will resolve to the cluster IP assigned for the Service.
Kubernetes also supports DNS SRV (Service) records for named ports. If the
`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to
`my-service.my-ns` Service has a port named `http` with the protocol set to
`TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover
the port number for `"http"`, as well as the IP address.
the port number for `http`, as well as the IP address.
The Kubernetes DNS server is the only way to access `ExternalName` Services.
You can find more information about `ExternalName` resolution in
@ -467,9 +466,9 @@ For headless Services that do not define selectors, the endpoints controller doe
not create `Endpoints` records. However, the DNS system looks for and configures
either:
* CNAME records for [`ExternalName`](#externalname)-type Services.
* A records for any `Endpoints` that share a name with the Service, for all
other types.
* CNAME records for [`ExternalName`](#externalname)-type Services.
* A records for any `Endpoints` that share a name with the Service, for all
other types.
## Publishing Services (ServiceTypes) {#publishing-services-service-types}
@ -481,26 +480,26 @@ The default is `ClusterIP`.
`Type` values and their behaviors are:
* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
makes the Service only reachable from within the cluster. This is the
default `ServiceType`.
* [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
(the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service
routes, is automatically created. You'll be able to contact the `NodePort` Service,
from outside the cluster,
by requesting `<NodeIP>:<NodePort>`.
* [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external
load balancer routes, are automatically created.
* [`ExternalName`](#externalname): Maps the Service to the contents of the
`externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
makes the Service only reachable from within the cluster. This is the
default `ServiceType`.
* [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
(the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service
routes, is automatically created. You'll be able to contact the `NodePort` Service,
from outside the cluster,
by requesting `<NodeIP>:<NodePort>`.
* [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external
load balancer routes, are automatically created.
* [`ExternalName`](#externalname): Maps the Service to the contents of the
`externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
with its value. No proxying of any kind is set up.
{{< note >}}You need either `kube-dns` version 1.7 or CoreDNS version 0.0.8 or higher
to use the `ExternalName` type.
{{< /note >}}
with its value. No proxying of any kind is set up.
{{< note >}}
You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type.
{{< /note >}}
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.
You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules
into a single resource as it can expose multiple services under the same IP address.
### Type NodePort {#nodeport}
@ -509,7 +508,6 @@ allocates a port from a range specified by `--service-node-port-range` flag (def
Each node proxies that port (the same port number on every Node) into your Service.
Your Service reports the allocated port in its `.spec.ports[*].nodePort` field.
If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10.
This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node.
@ -530,6 +528,7 @@ Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
For example:
```yaml
apiVersion: v1
kind: Service
@ -606,19 +605,21 @@ Specify the assigned IP address as loadBalancerIP. Ensure that you have updated
{{< /note >}}
#### Internal load balancer
In a mixed environment it is sometimes necessary to route traffic from Services inside the same
(virtual) network address block.
In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.
You can achieve this by adding one the following annotations to a Service.
The annotation to add depends on the cloud Service provider you're using.
To set an internal load balancer, add one of the following annotations to your Service
depending on the cloud Service provider you're using.
{{< tabs name="service_tabs" >}}
{{% tab name="Default" %}}
Select one of the tabs.
{{% /tab %}}
{{% tab name="GCP" %}}
```yaml
[...]
metadata:
@ -627,8 +628,10 @@ metadata:
cloud.google.com/load-balancer-type: "Internal"
[...]
```
{{% /tab %}}
{{% tab name="AWS" %}}
```yaml
[...]
metadata:
@ -637,8 +640,10 @@ metadata:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
[...]
```
{{% /tab %}}
{{% tab name="Azure" %}}
```yaml
[...]
metadata:
@ -647,8 +652,10 @@ metadata:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
[...]
```
{{% /tab %}}
{{% tab name="IBM Cloud" %}}
```yaml
[...]
metadata:
@ -657,8 +664,10 @@ metadata:
service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
[...]
```
{{% /tab %}}
{{% tab name="OpenStack" %}}
```yaml
[...]
metadata:
@ -667,8 +676,10 @@ metadata:
service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
[...]
```
{{% /tab %}}
{{% tab name="Baidu Cloud" %}}
```yaml
[...]
metadata:
@ -677,8 +688,10 @@ metadata:
service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
[...]
```
{{% /tab %}}
{{% tab name="Tencent Cloud" %}}
```yaml
[...]
metadata:
@ -686,8 +699,10 @@ metadata:
service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx
[...]
```
{{% /tab %}}
{{% tab name="Alibaba Cloud" %}}
```yaml
[...]
metadata:
@ -695,10 +710,10 @@ metadata:
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
[...]
```
{{% /tab %}}
{{< /tabs >}}
#### TLS support on AWS {#ssl-support-on-aws}
For partial TLS / SSL support on clusters running on AWS, you can add three
@ -823,7 +838,6 @@ to the value of `"true"`. The annotation
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
also be used to set the maximum time, in seconds, to keep the existing connections open before deregistering the instances.
```yaml
metadata:
name: my-service
@ -991,6 +1005,7 @@ spec:
type: ExternalName
externalName: my.database.example.com
```
{{< note >}}
ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
is intended to specify a canonical DNS name. To hardcode an IP address, consider using
@ -1045,7 +1060,7 @@ spec:
## Shortcomings
Using the userspace proxy for VIPs, work at small to medium scale, but will
Using the userspace proxy for VIPs works at small to medium scale, but will
not scale to very large clusters with thousands of Services. The
[original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107)
has more details on this.
@ -1173,12 +1188,12 @@ of the Service.
{{< note >}}
You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service
to expose HTTP / HTTPS Services.
to expose HTTP/HTTPS Services.
{{< /note >}}
### PROXY protocol
If your cloud provider supports it (eg, [AWS](/docs/concepts/cluster-administration/cloud-providers/#aws)),
If your cloud provider supports it,
you can use a Service in LoadBalancer mode to configure a load balancer outside
of Kubernetes itself, that will forward connections prefixed with
[PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
@ -1189,6 +1204,7 @@ incoming connection, similar to this example
```
PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n
```
followed by the data from the client.
### SCTP
@ -1227,13 +1243,8 @@ SCTP is not supported on Windows based nodes.
The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
{{< /warning >}}
## {{% heading "whatsnext" %}}
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)

View File

@ -39,11 +39,11 @@ simplifies application deployment and management.
Kubernetes supports several different kinds of ephemeral volumes for
different purposes:
- [emptyDir](/docs/concepts/volumes/#emptydir): empty at Pod startup,
- [emptyDir](/docs/concepts/storage/volumes/#emptydir): empty at Pod startup,
with storage coming locally from the kubelet base directory (usually
the root disk) or RAM
- [configMap](/docs/concepts/volumes/#configmap),
[downwardAPI](/docs/concepts/volumes/#downwardapi),
- [configMap](/docs/concepts/storage/volumes/#configmap),
[downwardAPI](/docs/concepts/storage/volumes/#downwardapi),
[secret](/docs/concepts/storage/volumes/#secret): inject different
kinds of Kubernetes data into a Pod
- [CSI ephemeral
@ -92,7 +92,7 @@ Conceptually, CSI ephemeral volumes are similar to `configMap`,
scheduled onto a node. At this stage, Kubernetes has no concept of
rescheduling Pods. Volume creation must be unlikely to fail,
otherwise Pod startup gets stuck. In particular, [storage capacity
aware Pod scheduling](/docs/concepts/storage-capacity/) is *not*
aware Pod scheduling](/docs/concepts/storage/storage-capacity/) is *not*
supported for these volumes. They are currently also not covered by
the storage resource usage limits of a Pod, because that is something
that kubelet can only enforce for storage that it manages itself.
@ -147,7 +147,7 @@ flexible:
([snapshotting](/docs/concepts/storage/volume-snapshots/),
[cloning](/docs/concepts/storage/volume-pvc-datasource/),
[resizing](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims),
and [storage capacity tracking](/docs/concepts/storage-capacity/).
and [storage capacity tracking](/docs/concepts/storage/storage-capacity/).
Example:

View File

@ -415,6 +415,21 @@ This internal provisioner of OpenStack is deprecated. Please use [the external c
### vSphere
There are two types of provisioners for vSphere storage classes:
- [CSI provisioner](#csi-provisioner): `csi.vsphere.vmware.com`
- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume`
In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi). For more information on the CSI provisioner, see [Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and [vSphereVolume CSI migration](/docs/concepts/storage/volumes/#csi-migration-5).
#### CSI Provisioner {#vsphere-provisioner-csi}
The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters. For an example, refer to the [vSphere CSI repository](https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/example/vanilla-k8s-file-driver/example-sc.yaml).
#### vCP Provisioner
The following examples use the VMware Cloud Provider (vCP) StorageClass provisioner.
1. Create a StorageClass with a user specified disk format.
```yaml
@ -819,4 +834,3 @@ Delaying volume binding allows the scheduler to consider all of a Pod's
scheduling constraints when choosing an appropriate PersistentVolume for a
PersistentVolumeClaim.

View File

@ -215,8 +215,7 @@ See the [CephFS example](https://github.com/kubernetes/examples/tree/{{< param "
### cinder {#cinder}
{{< note >}}
Prerequisite: Kubernetes with OpenStack Cloud Provider configured. For cloudprovider
configuration please refer [cloud provider openstack](/docs/concepts/cluster-administration/cloud-providers/#openstack).
Prerequisite: Kubernetes with OpenStack Cloud Provider configured.
{{< /note >}}
`cinder` is used to mount OpenStack Cinder Volume into your Pod.
@ -757,8 +756,8 @@ See the [NFS example](https://github.com/kubernetes/examples/tree/{{< param "git
### persistentVolumeClaim {#persistentvolumeclaim}
A `persistentVolumeClaim` volume is used to mount a
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumes are a
way for users to "claim" durable storage (such as a GCE PersistentDisk or an
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumeClaims
are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
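As a sketch, a Pod mounts a claim by referencing it in `.spec.volumes`; the claim name and mount path below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        # an existing PersistentVolumeClaim in the same namespace
        claimName: myclaim
```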
See the [PersistentVolumes example](/docs/concepts/storage/persistent-volumes/) for more

View File

@ -254,7 +254,7 @@ but cannot be controlled from there.
* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/).
* Learn about [PodPresets](/docs/concepts/workloads/pods/podpreset/).
* Lean about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
* Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to
configure different Pods with different container runtime configurations.
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
* Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions.

View File

@ -99,7 +99,7 @@ assigns a Pod to a Node, the kubelet starts creating containers for that Pod
using a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
There are three possible container states: `Waiting`, `Running`, and `Terminated`.
To the check state of a Pod's containers, you can use
To check the state of a Pod's containers, you can use
`kubectl describe pod <name-of-pod>`. The output shows the state for each container
within that Pod.

View File

@ -30,13 +30,23 @@ node4 Ready <none> 2m43s v1.16.0 node=node4,zone=zoneB
Then the cluster is logically viewed as below:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/kubernetes-api/labels-annotations-taints/) that are created and populated automatically on most clusters.
@ -80,17 +90,25 @@ You can read more about this field by running `kubectl explain Pod.spec.topology
### Example: One TopologySpreadConstraint
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
| P | P | P | |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as:
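The spec itself is elided from this hunk; a sketch of such a constraint, assuming the nodes carry a `zone` label as in the diagram above and the incoming Pod is named `mypod`, might be:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
    - maxSkew: 1                       # allow at most a difference of 1 Pod between zones
      topologyKey: zone                # spread across the node label "zone"
      whenUnsatisfiable: DoNotSchedule # keep the Pod Pending rather than violate the skew
      labelSelector:
        matchLabels:
          foo: bar
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
```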
@ -100,15 +118,46 @@ If we want an incoming Pod to be evenly spread with existing Pods across zones,
If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. In this example, the incoming Pod can only be placed onto "zoneB":
```
+---------------+---------------+ +---------------+---------------+
| zoneA | zoneB | | zoneA | zoneB |
+-------+-------+-------+-------+ +-------+-------+-------+-------+
| node1 | node2 | node3 | node4 | OR | node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+ +-------+-------+-------+-------+
| P | P | P | P | | P | P | P P | |
+-------+-------+-------+-------+ +-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
OR
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
p4(mypod) --> n3
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can tweak the Pod spec to meet various kinds of requirements:
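For example, one common tweak (a sketch of a Pod spec fragment, not part of this hunk) is to soften the constraint so that the scheduler only prefers, rather than requires, an even spread:

```yaml
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone
      # ScheduleAnyway turns the spread into a preference instead of a hard requirement.
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          foo: bar
```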
@ -118,17 +167,26 @@ You can tweak the Pod spec to meet various kinds of requirements:
### Example: Multiple TopologySpreadConstraints
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively (`P` represents Pod):
This builds upon the previous example. Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively:
```
+---------------+---------------+
| zoneA | zoneB |
+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 |
+-------+-------+-------+-------+
| P | P | P | |
+-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node:
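The combined spec is elided from this hunk; a sketch along the lines of the referenced "two-constraints.yaml", assuming `zone` and `node` node labels, might be:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone                # spread evenly across zones
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar
    - maxSkew: 1
      topologyKey: node                # and also evenly across individual nodes
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
```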
@ -138,15 +196,24 @@ In this case, to match the first constraint, the incoming Pod can only be placed
Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones:
```
+---------------+-------+
| zoneA | zoneB |
+-------+-------+-------+
| node1 | node2 | node3 |
+-------+-------+-------+
| P P | P | P P |
+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p4(Pod) --> n3(Node3)
p5(Pod) --> n3
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n1
p3(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s;
class zoneA,zoneB cluster;
{{< /mermaid >}}
If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only be put to "zoneB"; while in terms of the second constraint, "mypod" can only put to "node2". Then a joint result of "zoneB" and "node2" returns nothing.
@ -169,15 +236,37 @@ There are some implicit conventions worth noting here:
Suppose you have a 5-node cluster ranging from zoneA to zoneC:
```
+---------------+---------------+-------+
| zoneA | zoneB | zoneC |
+-------+-------+-------+-------+-------+
| node1 | node2 | node3 | node4 | node5 |
+-------+-------+-------+-------+-------+
| P | P | P | | |
+-------+-------+-------+-------+-------+
```
{{<mermaid>}}
graph BT
subgraph "zoneB"
p3(Pod) --> n3(Node3)
n4(Node4)
end
subgraph "zoneA"
p1(Pod) --> n1(Node1)
p2(Pod) --> n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4,p1,p2,p3 k8s;
class p4 plain;
class zoneA,zoneB cluster;
{{< /mermaid >}}
{{<mermaid>}}
graph BT
subgraph "zoneC"
n5(Node5)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n5 k8s;
class zoneC cluster;
{{< /mermaid >}}
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
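The manifest itself is not included in this hunk; a sketch of how the exclusion could be expressed, combining a spread constraint with node affinity (the label key `zone` and the values are assumptions), might be:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # Keep the Pod away from nodes labeled zone=zoneC.
              - key: zone
                operator: NotIn
                values:
                  - zoneC
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
```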

View File

@ -42,12 +42,9 @@ The English-language documentation uses U.S. English spelling and grammar.
## Documentation formatting standards
### Use camel case for API objects
### Use upper camel case for API objects
When you refer to an API object, use the same uppercase and lowercase letters
that are used in the actual object name. Typically, the names of API
objects use
[camel case](https://en.wikipedia.org/wiki/Camel_case).
When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.
@ -58,9 +55,9 @@ leads to an awkward construction.
{{< table caption = "Do and Don't - API objects" >}}
Do | Don't
:--| :-----
The Pod has two containers. | The pod has two containers.
The Deployment is responsible for ... | The Deployment object is responsible for ...
A PodList is a list of Pods. | A Pod List is a list of pods.
The pod has two containers. | The Pod has two containers.
The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ...
A PodList is a list of pods. | A Pod List is a list of pods.
The two ContainerPorts ... | The two ContainerPort objects ...
The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
{{< /table >}}
@ -71,7 +68,7 @@ The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds
Use angle brackets for placeholders. Tell the reader what a placeholder
represents.
1. Display information about a Pod:
1. Display information about a pod:
kubectl describe pod <pod-name> -n <namespace>
@ -116,7 +113,7 @@ The copy is called a "fork". | The copy is called a "fork."
## Inline code formatting
### Use code style for inline code and commands
### Use code style for inline code, commands, and API objects
For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``).
@ -124,7 +121,9 @@ document, use the backtick (`` ` ``).
{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
Do | Don't
:--| :-----
The `kubectl run`command creates a Pod. | The "kubectl run" command creates a Pod.
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
The kubelet on each node acquires a `Lease`… | The kubelet on each node acquires a lease…
A `PersistentVolume` represents durable storage… | A Persistent Volume represents durable storage…
For declarative management, use `kubectl apply`. | For declarative management, use "kubectl apply".
Enclose code samples with triple backticks. (\`\`\`)| Enclose code samples with any other syntax.
Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**.
@ -201,7 +200,7 @@ kubectl get pods | $ kubectl get pods
### Separate commands from output
Verify that the Pod is running on your chosen node:
Verify that the pod is running on your chosen node:
kubectl get pods --output=wide
@ -513,7 +512,7 @@ Do | Don't
:--| :-----
To create a ReplicaSet, ... | In order to create a ReplicaSet, ...
See the configuration file. | Please see the configuration file.
View the Pods. | With this next command, we'll view the Pods.
View the pods. | With this next command, we'll view the pods.
{{< /table >}}
### Address the reader as "you"
@ -552,7 +551,7 @@ Do | Don't
:--| :-----
Version 1.4 includes ... | In version 1.4, we have added ...
Kubernetes provides a new feature for ... | We provide a new feature ...
This page teaches you how to use Pods. | In this page, we are going to learn about Pods.
This page teaches you how to use pods. | In this page, we are going to learn about pods.
{{< /table >}}

View File

@ -24,7 +24,7 @@ of the Kubernetes community open a pull request with changes to resolve the issu
If you want to suggest improvements to existing content, or notice an error, then open an issue.
1. Go to the bottom of the page and click the **Create an Issue** button. This redirects you
1. Click the **Create an issue** link on the right sidebar. This redirects you
to a GitHub issue page pre-populated with some headers.
2. Describe the issue or suggestion for improvement. Provide as many details as you can.
3. Click **Submit new issue**.

View File

@ -1,2 +0,0 @@
Tools for Kubernetes docs contributors. View `README.md` files in
subdirectories for more info.

View File

@ -1,51 +0,0 @@
# Snippets for Atom
Snippets are bits of text that get inserted into your editor, to save typing and
reduce syntax errors. The snippets provided in `atom-snippets.cson` are scoped to
only work on Markdown files within Atom.
## Installation
Copy the contents of the `atom-snippets.cson` file into your existing
`~/.atom/snippets.cson`. **Do not replace your existing file.**
You do not need to restart Atom.
## Usage
Have a look through `atom-snippets.cson` and note the titles and `prefix` values
of the snippets.
You can trigger a given snippet in one of two ways:
- By typing the snippet's `prefix` and pressing the `<TAB>` key
- By searching for the snippet's title in **Packages / Snippets / Available**
For example, open a Markdown file and type `anote` and press `<TAB>`. A blank
note is added, with the correct Hugo shortcodes.
A snippet can insert a single line or multiple lines of text. Some snippets
have placeholder values. To get to the next placeholder, press `<TAB>` again.
Some of the snippets only insert partially-formed Markdown or Hugo syntax.
For instance, `coverview` inserts the start of a concept overview tag, while
`cclose` inserts a close-capture tag. This is because every type of capture
needs a capture-close tag.
## Creating new topics using snippets
To create a new concept, task, or tutorial from a blank file, use one of the
following:
- `newconcept`
- `newtask`
- `newtutorial`
Placeholder text is included.
## Submitting new snippets
1. Develop the snippet locally and verify that it works as expected.
2. Copy the template's code into the `atom-snippets.cson` file on GitHub. Raise a
pull request, and ask for review from another Atom user in `#sig-docs` on
Kubernetes Slack.

View File

@ -1,226 +0,0 @@
# Your snippets
#
# Atom snippets allow you to enter a simple prefix in the editor and hit tab to
# expand the prefix into a larger code block with templated values.
#
# You can create a new snippet in this file by typing "snip" and then hitting
# tab.
#
# An example CoffeeScript snippet to expand log to console.log:
#
# '.source.coffee':
# 'Console log':
# 'prefix': 'log'
# 'body': 'console.log $1'
#
# Each scope (e.g. '.source.coffee' above) can only be declared once.
#
# This file uses CoffeeScript Object Notation (CSON).
# If you are unfamiliar with CSON, you can read more about it in the
# Atom Flight Manual:
# http://flight-manual.atom.io/using-atom/sections/basic-customization/#_cson
'.source.gfm':
# Capture variables for concept template
# For full concept template see 'newconcept' below
'Insert concept template':
'prefix': 'ctemplate'
'body': 'content_template: templates/concept'
'Insert concept overview':
'prefix': 'coverview'
'body': '{{% capture overview %}}'
'Insert concept body':
'prefix': 'cbody'
'body': '{{% capture body %}}'
'Insert concept whatsnext':
'prefix': 'cnext'
'body': '{{% capture whatsnext %}}'
# Capture variables for task template
# For full task template see 'newtask' below
'Insert task template':
'prefix': 'ttemplate'
'body': 'content_template: templates/task'
'Insert task overview':
'prefix': 'toverview'
'body': '{{% capture overview %}}'
'Insert task prerequisites':
'prefix': 'tprereq'
'body': """
{{% capture prerequisites %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{% /capture %}}
"""
'Insert task steps':
'prefix': 'tsteps'
'body': '{{% capture steps %}}'
'Insert task discussion':
'prefix': 'tdiscuss'
'body': '{{% capture discussion %}}'
# Capture variables for tutorial template
# For full tutorial template see 'newtutorial' below
'Insert tutorial template':
'prefix': 'tutemplate'
'body': 'content_template: templates/tutorial'
'Insert tutorial overview':
'prefix': 'tuoverview'
'body': '{{% capture overview %}}'
'Insert tutorial prerequisites':
'prefix': 'tuprereq'
'body': '{{% capture prerequisites %}}'
'Insert tutorial objectives':
'prefix': 'tuobjectives'
'body': '{{% capture objectives %}}'
'Insert tutorial lesson content':
'prefix': 'tulesson'
'body': '{{% capture lessoncontent %}}'
'Insert tutorial whatsnext':
'prefix': 'tunext'
'body': '{{% capture whatsnext %}}'
'Close capture':
'prefix': 'ccapture'
'body': '{{% /capture %}}'
'Insert note':
'prefix': 'anote'
'body': """
{{< note >}}
$1
{{< /note >}}
"""
# Admonitions
'Insert caution':
'prefix': 'acaution'
'body': """
{{< caution >}}
$1
{{< /caution >}}
"""
'Insert warning':
'prefix': 'awarning'
'body': """
{{< warning >}}
$1
{{< /warning >}}
"""
# Misc one-liners
'Insert TOC':
'prefix': 'toc'
'body': '{{< toc >}}'
'Insert code from file':
'prefix': 'codefile'
'body': '{{< codenew file="$1" >}}'
'Insert feature state':
'prefix': 'fstate'
'body': '{{< feature-state for_k8s_version="$1" state="$2" >}}'
'Insert figure':
'prefix': 'fig'
'body': '{{< figure src="$1" title="$2" alt="$3" caption="$4" >}}'
'Insert Youtube link':
'prefix': 'yt'
'body': '{{< youtube $1 >}}'
# Full concept template
'Create new concept':
'prefix': 'newconcept'
'body': """
---
reviewers:
- ${1:"github-id-or-group"}
title: ${2:"topic-title"}
content_template: templates/concept
---
{{% capture overview %}}
${3:"overview-content"}
{{% /capture %}}
{{< toc >}}
{{% capture body %}}
${4:"h2-heading-per-subtopic"}
{{% /capture %}}
{{% capture whatsnext %}}
${5:"next-steps-or-delete"}
{{% /capture %}}
"""
# Full task template
'Create new task':
'prefix': 'newtask'
'body': """
---
reviewers:
- ${1:"github-id-or-group"}
title: ${2:"topic-title"}
content_template: templates/task
---
{{% capture overview %}}
${3:"overview-content"}
{{% /capture %}}
{{< toc >}}
{{% capture prerequisites %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
${4:"additional-prereqs-or-delete"}
{{% /capture %}}
{{% capture steps %}}
${5:"h2-heading-per-step"}
{{% /capture %}}
{{% capture discussion %}}
${6:"task-discussion-or-delete"}
{{% /capture %}}
"""
# Full tutorial template
'Create new tutorial':
'prefix': 'newtutorial'
'body': """
---
reviewers:
- ${1:"github-id-or-group"}
title: ${2:"topic-title"}
content_template: templates/tutorial
---
{{% capture overview %}}
${3:"overview-content"}
{{% /capture %}}
{{< toc >}}
{{% capture prerequisites %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
${4:"additional-prereqs-or-delete"}
{{% /capture %}}
{{% capture objectives %}}
${5:"tutorial-objectives"}
{{% /capture %}}
{{% capture lessoncontent %}}
${6:"lesson-content"}
{{% /capture %}}
{{% capture whatsnext %}}
${7:"next-steps-or-delete"}
{{% /capture %}}
"""

View File

@ -1,30 +1,12 @@
---
title: Supported Versions of the Kubernetes Documentation
content_type: concept
title: Available Documentation Versions
content_type: custom
layout: supported-versions
card:
name: about
weight: 10
title: Supported Versions of the Documentation
title: Available Documentation Versions
---
<!-- overview -->
This website contains documentation for the current version of Kubernetes
and the four previous versions of Kubernetes.
<!-- body -->
## Current version
The current version is
[{{< param "version" >}}](/).
## Previous versions
{{< versions-other >}}

View File

@ -163,10 +163,11 @@ storage classes and how to mark a storage class as default.
### DefaultTolerationSeconds {#defaulttolerationseconds}
This admission controller sets the default forgiveness toleration for pods to tolerate
the taints `notready:NoExecute` and `unreachable:NoExecute` for 5 minutes,
if the pods don't already have toleration for taints
`node.kubernetes.io/not-ready:NoExecute` or
`node.alpha.kubernetes.io/unreachable:NoExecute`.
the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters
`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
have toleration for taints `node.kubernetes.io/not-ready:NoExecute` or
`node.kubernetes.io/unreachable:NoExecute`.
The default value for `default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` is 5 minutes.
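As a sketch, a Pod admitted without tolerations of its own ends up with entries along these lines in its spec (300 seconds reflects the 5-minute defaults mentioned above):

```yaml
spec:
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
```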
### DenyExecOnPrivileged {#denyexeconprivileged}
@ -202,8 +203,6 @@ is recommended instead.
This admission controller mitigates the problem where the API server gets flooded by
event requests. The cluster admin can specify event rate limits by:
* Ensuring that `eventratelimit.admission.k8s.io/v1alpha1=true` is included in the
`--runtime-config` flag for the API server;
* Enabling the `EventRateLimit` admission controller;
* Referencing an `EventRateLimit` configuration file from the file provided to the API
server's command line flag `--admission-control-config-file`:
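The referenced configuration is not shown here; a minimal sketch of an admission configuration file and the limit configuration it points to (file names and limit values are assumptions) might be:

```yaml
# File passed to --admission-control-config-file.
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: EventRateLimit
    path: eventconfig.yaml
---
# Separate file: eventconfig.yaml, with illustrative per-namespace limits.
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  - type: Namespace
    qps: 50
    burst: 100
    cacheSize: 2000
```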

View File

@ -443,7 +443,7 @@ current-context: webhook
contexts:
- context:
cluster: name-of-remote-authn-service
user: name-of-api-sever
user: name-of-api-server
name: webhook
```

View File

@ -108,7 +108,6 @@ Kubernetes provides built-in signers that each have a well-known `signerName`:
1. `kubernetes.io/legacy-unknown`: has no guarantees for trust at all. Some distributions may honor these as client
certs, but that behavior is not standard Kubernetes behavior.
This signerName can only be requested in CertificateSigningRequests created via the `certificates.k8s.io/v1beta1` API version.
Never auto-approved by {{< glossary_tooltip term_id="kube-controller-manager" >}}.
1. Trust distribution: None. There is no standard trust or distribution for this signer in a Kubernetes cluster.
1. Permitted subjects - any
@ -245,7 +244,7 @@ Create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kub
```
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: john

View File

@ -46,7 +46,7 @@ This group and user name format match the identity created for each kubelet as p
[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/).
The value of `<nodeName>` **must** match precisely the name of the node as registered by the kubelet. By default, this is the host name as provided by `hostname`, or overridden via the [kubelet option](/docs/reference/command-line-tools-reference/kubelet/) `--hostname-override`. However, when using the `--cloud-provider` kubelet option, the specific hostname may be determined by the cloud provider, ignoring the local `hostname` and the `--hostname-override` option.
For specifics about how the kubelet determines the hostname, as well as cloud provider overrides, see the [kubelet options reference](/docs/reference/command-line-tools-reference/kubelet/) and the [cloud provider details](/docs/concepts/cluster-administration/cloud-providers/).
For specifics about how the kubelet determines the hostname, see the [kubelet options reference](/docs/reference/command-line-tools-reference/kubelet/).
To enable the Node authorizer, start the apiserver with `--authorization-mode=Node`.
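For clusters that run the API server as a static Pod, a hedged sketch of where that flag goes (the file path, and enabling RBAC alongside Node, are assumptions) is:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        # Enable the Node authorizer; RBAC is commonly enabled alongside it.
        - --authorization-mode=Node,RBAC
```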

View File

@ -891,6 +891,7 @@ rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["clusterroles"]
verbs: ["bind"]
# omit resourceNames to allow binding any ClusterRole
resourceNames: ["admin","edit","view"]
---
apiVersion: rbac.authorization.k8s.io/v1

View File

@ -423,7 +423,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
troubleshoot a running Pod.
- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/monitoring.md).
- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/).
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
based resource provisioning on nodes.
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
@ -475,9 +475,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`.
- `LocalStorageCapacityIsolation`: Enable the consumption of [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and also the `sizeLimit` property of an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation` is enabled for [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy.
[local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and the backing filesystem for
[emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor
[emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy.
- `MountContainers`: Enable using utility containers on host as the volume mounter.
- `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods.
For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation).

View File

@ -2,7 +2,6 @@
title: Cloud Provider
id: cloud-provider
date: 2018-04-12
full_link: /docs/concepts/cluster-administration/cloud-providers
short_description: >
An organization that offers a cloud computing platform.

View File

@ -14,8 +14,8 @@ tags:
<!--more-->
A platform developer may, for example, use [Custom Resources](/docs/concepts/extend-Kubernetes/api-extension/custom-resources/) or
[Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-Kubernetes/api-extension/apiserver-aggregation/)
A platform developer may, for example, use [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) or
[Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
to add functionality to their instance of Kubernetes, specifically for their application.
Some Platform Developers are also {{< glossary_tooltip text="contributors" term_id="contributor" >}} and
develop extensions which are contributed to the Kubernetes community.

View File

@ -42,6 +42,7 @@ kubectl create deployment --image=nginx nginx-app
# add env to nginx-app
kubectl set env deployment/nginx-app DOMAIN=cluster
```
```
deployment.apps/nginx-app created
```
@ -52,7 +53,6 @@ kubectl set env deployment/nginx-app DOMAIN=cluster
```
deployment.apps/nginx-app env updated
```
deployment.apps/nginx-app env updated
{{< note >}}
`kubectl` commands print the type and name of the resource created or mutated, which can then be used in subsequent commands. You can expose a new Service after a Deployment is created.

View File

@ -127,14 +127,15 @@ To learn more about command operations, see the [kubectl](/docs/reference/kubect
The following table includes a list of all the supported resource types and their abbreviated aliases.
(This output can be retrieved from `kubectl api-resources`, and was accurate as of Kubernetes 1.13.3.)
(This output can be retrieved from `kubectl api-resources`, and was accurate as of Kubernetes 1.19.1.)
| Resource Name | Short Names | API Group | Namespaced | Resource Kind |
| NAME | SHORTNAMES | APIGROUP | NAMESPACED | KIND |
|---|---|---|---|---|
| `bindings` | | | true | Binding|
| `bindings` | | | true | Binding |
| `componentstatuses` | `cs` | | false | ComponentStatus |
| `configmaps` | `cm` | | true | ConfigMap |
| `endpoints` | `ep` | | true | Endpoints |
| `events` | `ev` | | true | Event |
| `limitranges` | `limits` | | true | LimitRange |
| `namespaces` | `ns` | | false | Namespace |
| `nodes` | `no` | | false | Node |
@ -142,14 +143,14 @@ The following table includes a list of all the supported resource types and thei
| `persistentvolumes` | `pv` | | false | PersistentVolume |
| `pods` | `po` | | true | Pod |
| `podtemplates` | | | true | PodTemplate |
| `replicationcontrollers` | `rc` | | true| ReplicationController |
| `replicationcontrollers` | `rc` | | true | ReplicationController |
| `resourcequotas` | `quota` | | true | ResourceQuota |
| `secrets` | | | true | Secret |
| `serviceaccounts` | `sa` | | true | ServiceAccount |
| `services` | `svc` | | true | Service |
| `mutatingwebhookconfigurations` | | admissionregistration.k8s.io | false | MutatingWebhookConfiguration |
| `validatingwebhookconfigurations` | | admissionregistration.k8s.io | false | ValidatingWebhookConfiguration |
| `customresourcedefinitions` | `crd`, `crds` | apiextensions.k8s.io | false | CustomResourceDefinition |
| `customresourcedefinitions` | `crd,crds` | apiextensions.k8s.io | false | CustomResourceDefinition |
| `apiservices` | | apiregistration.k8s.io | false | APIService |
| `controllerrevisions` | | apps | true | ControllerRevision |
| `daemonsets` | `ds` | apps | true | DaemonSet |
@ -166,9 +167,15 @@ The following table includes a list of all the supported resource types and thei
| `jobs` | | batch | true | Job |
| `certificatesigningrequests` | `csr` | certificates.k8s.io | false | CertificateSigningRequest |
| `leases` | | coordination.k8s.io | true | Lease |
| `endpointslices` | | discovery.k8s.io | true | EndpointSlice |
| `events` | `ev` | events.k8s.io | true | Event |
| `ingresses` | `ing` | extensions | true | Ingress |
| `flowschemas` | | flowcontrol.apiserver.k8s.io | false | FlowSchema |
| `prioritylevelconfigurations` | | flowcontrol.apiserver.k8s.io | false | PriorityLevelConfiguration |
| `ingressclasses` | | networking.k8s.io | false | IngressClass |
| `ingresses` | `ing` | networking.k8s.io | true | Ingress |
| `networkpolicies` | `netpol` | networking.k8s.io | true | NetworkPolicy |
| `runtimeclasses` | | node.k8s.io | false | RuntimeClass |
| `poddisruptionbudgets` | `pdb` | policy | true | PodDisruptionBudget |
| `podsecuritypolicies` | `psp` | policy | false | PodSecurityPolicy |
| `clusterrolebindings` | | rbac.authorization.k8s.io | false | ClusterRoleBinding |
@ -178,7 +185,7 @@ The following table includes a list of all the supported resource types and thei
| `priorityclasses` | `pc` | scheduling.k8s.io | false | PriorityClass |
| `csidrivers` | | storage.k8s.io | false | CSIDriver |
| `csinodes` | | storage.k8s.io | false | CSINode |
| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass |
| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass |
| `volumeattachments` | | storage.k8s.io | false | VolumeAttachment |
## Output options

View File

@ -32,7 +32,7 @@ The following *predicates* implement filtering:
- `PodFitsResources`: Checks if the Node has free resources (eg, CPU and Memory)
to meet the requirement of the Pod.
- `PodMatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}}
- `MatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}}
matches the Node's {{< glossary_tooltip text="label(s)" term_id="label" >}}.
- `NoVolumeZoneConflict`: Evaluate if the {{< glossary_tooltip text="Volumes" term_id="volume" >}}

View File

@ -30,9 +30,12 @@ Kubernetes generally leverages standard RESTful terminology to describe the API
All resource types are either scoped by the cluster (`/apis/GROUP/VERSION/*`) or to a namespace (`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`). A namespace-scoped resource type will be deleted when its namespace is deleted and access to that resource type is controlled by authorization checks on the namespace scope. The following paths are used to retrieve collections and resources:
* Cluster-scoped resources:
* `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of resources of the resource type
* `GET /apis/GROUP/VERSION/RESOURCETYPE/NAME` - return the resource with NAME under the resource type
* Namespace-scoped resources:
* `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of all instances of the resource type across all namespaces
* `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE` - return collection of all instances of the resource type in NAMESPACE
* `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME` - return the instance of the resource type with NAME in NAMESPACE
@ -57,33 +60,39 @@ For example:
1. List all of the pods in a given namespace.
GET /api/v1/namespaces/test/pods
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {"resourceVersion":"10245"},
"items": [...]
}
```console
GET /api/v1/namespaces/test/pods
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {"resourceVersion":"10245"},
"items": [...]
}
```
2. Starting from resource version 10245, receive notifications of any creates, deletes, or updates as individual JSON objects.
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
{
"type": "MODIFIED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "11020", ...}, ...}
}
...
```
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
{
"type": "MODIFIED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "11020", ...}, ...}
}
...
```
A given Kubernetes server will only preserve a historical list of changes for a limited time. Clusters using etcd3 preserve changes in the last 5 minutes by default. When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code `410 Gone`, clearing their local cache, performing a list operation, and starting the watch from the `resourceVersion` returned by that new list operation. Most client libraries offer some form of standard tool for this logic. (In Go this is called a `Reflector` and is located in the `k8s.io/client-go/cache` package.)
@ -91,24 +100,28 @@ A given Kubernetes server will only preserve a historical list of changes for a
To mitigate the impact of the short history window, we introduced the concept of a `bookmark` watch event. It is a special kind of event to mark that all changes up to a given `resourceVersion` that the client is requesting have already been sent. The object returned in that event is of the type requested by the request, but only the `resourceVersion` field is set, e.g.:
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
...
{
"type": "BOOKMARK",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "12746"} }
}
```console
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
...
{
"type": "BOOKMARK",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "12746"} }
}
```
`Bookmark` events can be requested by `allowWatchBookmarks=true` option in watch requests, but clients shouldn't assume bookmarks are returned at any specific interval, nor may they assume the server will send any `bookmark` event.
## Retrieving large results sets in chunks
{{< feature-state for_k8s_version="v1.9" state="beta" >}}
On large clusters, retrieving the collection of some resource types may result in very large responses that can impact the server and client. For instance, a cluster may have tens of thousands of pods, each of which is 1-2kb of encoded JSON. Retrieving all pods across all namespaces may result in a very large response (10-20MB) and consume a large amount of server resources. Starting in Kubernetes 1.9, the server supports breaking a single large collection request into many smaller chunks while preserving the consistency of the total request. Each chunk can be returned sequentially, which both reduces the total size of the request and allows user-oriented clients to display results incrementally to improve responsiveness.
@ -121,54 +134,63 @@ For example, if there are 1,253 pods on the cluster and the client wants to rece
1. List all of the pods on a cluster, retrieving up to 500 pods each time.
GET /api/v1/pods?limit=500
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "ENCODED_CONTINUE_TOKEN",
...
},
"items": [...] // returns pods 1-500
}
```console
GET /api/v1/pods?limit=500
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "ENCODED_CONTINUE_TOKEN",
...
},
"items": [...] // returns pods 1-500
}
```
2. Continue the previous call, retrieving the next set of 500 pods.
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "ENCODED_CONTINUE_TOKEN_2",
...
},
"items": [...] // returns pods 501-1000
}
```console
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "ENCODED_CONTINUE_TOKEN_2",
...
},
"items": [...] // returns pods 501-1000
}
```
3. Continue the previous call, retrieving the last 253 pods.
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "", // continue token is empty because we have reached the end of the list
...
},
"items": [...] // returns pods 1001-1253
}
```console
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2
---
200 OK
Content-Type: application/json
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion":"10245",
"continue": "", // continue token is empty because we have reached the end of the list
...
},
"items": [...] // returns pods 1001-1253
}
```
Note that the `resourceVersion` of the list remains constant across each request, indicating the server is showing us a consistent snapshot of the pods. Pods that are created, updated, or deleted after version `10245` would not be shown unless the user makes a list request without the `continue` token. This allows clients to break large requests into smaller chunks and then perform a watch operation on the full set without missing any updates.
@ -180,52 +202,56 @@ A few limitations of that approach include non-trivial logic when dealing with c
In order to avoid potential limitations as described above, clients may request the Table representation of objects, delegating specific details of printing to the server. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header containing a value of `application/json;as=Table;g=meta.k8s.io;v=v1beta1` with a `GET` call will request that the server return objects in the Table content type.
For example:
For example, list all of the pods on a cluster in the Table format.
1. List all of the pods on a cluster in the Table format.
```console
GET /api/v1/pods
Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1
---
200 OK
Content-Type: application/json
GET /api/v1/pods
Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1
---
200 OK
Content-Type: application/json
{
"kind": "Table",
"apiVersion": "meta.k8s.io/v1beta1",
...
"columnDefinitions": [
...
]
}
{
"kind": "Table",
"apiVersion": "meta.k8s.io/v1beta1",
...
"columnDefinitions": [
...
]
}
```
For API resource types that do not have a custom Table definition on the server, a default Table response is returned by the server, consisting of the resource's `name` and `creationTimestamp` fields.
GET /apis/crd.example.com/v1alpha1/namespaces/default/resources
---
200 OK
Content-Type: application/json
...
```console
GET /apis/crd.example.com/v1alpha1/namespaces/default/resources
---
200 OK
Content-Type: application/json
...
{
"kind": "Table",
"apiVersion": "meta.k8s.io/v1beta1",
...
"columnDefinitions": [
{
"kind": "Table",
"apiVersion": "meta.k8s.io/v1beta1",
"name": "Name",
"type": "string",
...
},
{
"name": "Created At",
"type": "date",
...
"columnDefinitions": [
{
"name": "Name",
"type": "string",
...
},
{
"name": "Created At",
"type": "date",
...
}
]
}
]
}
```
Table responses are available beginning in version 1.10 of the kube-apiserver. As such, not all API resource types will support a Table response, specifically when using a client against older clusters. Clients that must work against all resource types, or can potentially deal with older clusters, should specify multiple content types in their `Accept` header to support fallback to non-Tabular JSON:
```
```console
Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1, application/json
```
@ -240,42 +266,47 @@ For example:
1. List all of the pods on a cluster in Protobuf format.
GET /api/v1/pods
Accept: application/vnd.kubernetes.protobuf
---
200 OK
Content-Type: application/vnd.kubernetes.protobuf
... binary encoded PodList object
```console
GET /api/v1/pods
Accept: application/vnd.kubernetes.protobuf
---
200 OK
Content-Type: application/vnd.kubernetes.protobuf
... binary encoded PodList object
```
2. Create a pod by sending Protobuf encoded data to the server, but request a response in JSON.
POST /api/v1/namespaces/test/pods
Content-Type: application/vnd.kubernetes.protobuf
Accept: application/json
... binary encoded Pod object
---
200 OK
Content-Type: application/json
{
"kind": "Pod",
"apiVersion": "v1",
...
}
```console
POST /api/v1/namespaces/test/pods
Content-Type: application/vnd.kubernetes.protobuf
Accept: application/json
... binary encoded Pod object
---
200 OK
Content-Type: application/json
{
"kind": "Pod",
"apiVersion": "v1",
...
}
```
Not all API resource types will support Protobuf, specifically those defined via Custom Resource Definitions or those that are API extensions. Clients that must work against all resource types should specify multiple content types in their `Accept` header to support fallback to JSON:
```
```console
Accept: application/vnd.kubernetes.protobuf, application/json
```
### Protobuf encoding
Kubernetes uses an envelope wrapper to encode Protobuf responses. That wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf (as opposed to JSON), and then is followed by a Protobuf encoded wrapper message, which describes the encoding and type of the underlying object and then contains the object.
The wrapper format is:
```
```console
A four byte magic number prefix:
Bytes 0-3: "k8s\x00" [0x6b, 0x38, 0x73, 0x00]
@ -362,9 +393,11 @@ Dry-run is triggered by setting the `dryRun` query parameter. This parameter is
For example:
POST /api/v1/namespaces/test/pods?dryRun=All
Content-Type: application/json
Accept: application/json
```console
POST /api/v1/namespaces/test/pods?dryRun=All
Content-Type: application/json
Accept: application/json
```
The response would look the same as for non-dry-run request, but the values of some generated fields may differ.
@ -400,7 +433,7 @@ Some values of an object are typically generated before the object is persisted.
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
{{< note >}}Starting from Kubernetes v1.18, if you have Server Side Apply enabled then the control plane tracks managed fields for all newly created objects.{{< /note >}}
Starting from Kubernetes v1.18, if you have Server Side Apply enabled then the control plane tracks managed fields for all newly created objects.
### Introduction
@ -484,10 +517,12 @@ data:
The above object contains a single manager in `metadata.managedFields`. The
manager consists of basic information about the managing entity itself, like
operation type, api version, and the fields managed by it.
operation type, API version, and the fields managed by it.
{{< note >}} This field is managed by the apiserver and should not be changed by
the user. {{< /note >}}
{{< note >}}
This field is managed by the API server and should not be changed by
the user.
{{< /note >}}
Nevertheless it is possible to change `metadata.managedFields` through an
`Update` operation. Doing so is highly discouraged, but might be a reasonable
@ -537,7 +572,7 @@ a little differently.
{{< note >}}
Whether you are submitting JSON data or YAML data, use `application/apply-patch+yaml` as the
Content-Type header value.
`Content-Type` header value.
All JSON documents are valid YAML.
{{< /note >}}
@ -580,8 +615,8 @@ In this example, a second operation was run as an `Update` by the manager called
`kube-controller-manager`. The update changed a value in the data field which
caused the field's management to change to the `kube-controller-manager`.
{{< note >}}If this update would have been an `Apply` operation, the operation
would have failed due to conflicting ownership.{{< /note >}}
If this update would have been an `Apply` operation, the operation
would have failed due to conflicting ownership.
### Merge strategy
@ -603,8 +638,8 @@ merging, see
A number of markers were added in Kubernetes 1.16 and 1.17, to allow API
developers to describe the merge strategy supported by lists, maps, and
structs. These markers can be applied to objects of the respective type,
in Go files or in the [OpenAPI schema definition of the
CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io):
in Go files or in the
[OpenAPI schema definition of the CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io):
| Golang marker | OpenAPI extension | Accepted values | Description | Introduced in |
|---|---|---|---|---|
@ -641,8 +676,8 @@ might not be able to resolve or act on these conflicts.
### Transferring Ownership
In addition to the concurrency controls provided by [conflict
resolution](#conflicts), Server Side Apply provides ways to perform coordinated
In addition to the concurrency controls provided by [conflict resolution](#conflicts),
Server Side Apply provides ways to perform coordinated
field ownership transfers from users to controllers.
This is best explained by example. Let's look at how to safely transfer
@ -657,12 +692,12 @@ Say a user has defined deployment with `replicas` set to the desired value:
And the user has created the deployment using server side apply like so:
```shell
kubectl apply -f application/ssa/nginx-deployment.yaml --server-side
kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side
```
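The referenced `nginx-deployment.yaml` is not part of this diff; a sketch of such a manifest (image tag and replica count are assumptions) might be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3           # the field whose ownership is transferred later
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```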
Then later, HPA is enabled for the deployment, e.g.:
```
```shell
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10
```
@ -691,7 +726,7 @@ First, the user defines a new configuration containing only the `replicas` field
The user applies that configuration using the field manager name `handover-to-hpa`:
```shell
kubectl apply -f application/ssa/nginx-deployment-replicas-only.yaml --server-side --field-manager=handover-to-hpa --validate=false
kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment-replicas-only.yaml --server-side --field-manager=handover-to-hpa --validate=false
```
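The `nginx-deployment-replicas-only.yaml` file is likewise not shown; a minimal sketch would contain only the field being handed over, so that the `handover-to-hpa` manager claims nothing else:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3   # only this field is applied, so only this field is claimed
```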
If the apply results in a conflict with the HPA controller, then do nothing. The
@ -737,7 +772,7 @@ case.
Client-side apply users who manage a resource with `kubectl apply` can start
using server-side apply with the following flag.
```
```shell
kubectl apply --server-side [--dry-run=server]
```
@ -745,7 +780,9 @@ By default, field management of the object transfers from client-side apply
to kubectl server-side apply without encountering conflicts.
{{< caution >}}
Keep the `last-applied-configuration` annotation up to date. The annotation infers client-side apply's managed fields. Any fields not managed by client-side apply raise conflicts.
Keep the `last-applied-configuration` annotation up to date.
The annotation infers client-side apply's managed fields.
Any fields not managed by client-side apply raise conflicts.
For example, if you used `kubectl scale` to update the replicas field after client-side apply,
then this field is not owned by client-side apply and creates conflicts on `kubectl apply --server-side`.
@ -756,7 +793,7 @@ an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example. The default field manager for kubectl
server-side apply is `kubectl`.
```
```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```
@ -774,7 +811,7 @@ an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example. The default field manager for kubectl
server-side apply is `kubectl`.
```
```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```
@ -793,14 +830,14 @@ using `MergePatch`, `StrategicMergePatch`, `JSONPatch` or `Update`, so every
non-apply operation. This can be done by overwriting the managedFields field
with an empty entry. Two examples are:
```json
```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Content-Type: application/merge-patch+json
Accept: application/json
Data: {"metadata":{"managedFields": [{}]}}
```
```json
```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Content-Type: application/json-patch+json
Accept: application/json
@ -818,10 +855,12 @@ the managedFields, this will result in the managedFields being reset first and
the other changes being processed afterwards. As a result the applier takes
ownership of any fields updated in the same request.
{{< caution >}} Server Side Apply does not correctly track ownership on
{{< caution >}}
Server Side Apply does not correctly track ownership on
sub-resources that don't receive the resource object type. If you are
using Server Side Apply with such a sub-resource, the changed fields
won't be tracked. {{< /caution >}}
won't be tracked.
{{< /caution >}}
### Disabling the feature
@ -859,12 +898,13 @@ For get and list, the semantics of resource version are:
**List:**
v1.19+ API servers and newer support the `resourceVersionMatch` parameter, which
v1.19+ API servers support the `resourceVersionMatch` parameter, which
determines how resourceVersion is applied to list calls. It is highly
recommended that `resourceVersionMatch` be set for list calls where
`resourceVersion` is set. If `resourceVersion` is unset, `resourceVersionMatch`
is not allowed. For backward compatibility, clients must tolerate the server
ignoring `resourceVersionMatch`:
- When using `resourceVersionMatch=NotOlderThan` and limit is set, clients must
handle HTTP 410 "Gone" responses. For example, the client might retry with a
newer `resourceVersion` or fall back to `resourceVersion=""`.
@ -878,27 +918,29 @@ a known `resourceVersion` is preferable since it can achieve better performance
of your cluster than leaving `resourceVersion` and `resourceVersionMatch` unset, which requires
quorum read to be served.
{{< table caption="resourceVersionMatch and paging parameters for list" >}}
| resourceVersionMatch param | paging params | resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" |
|---------------------------------------|-------------------------------|-----------------------|-------------------------------------------|----------------------------------------|
| resourceVersionMatch unset | limit unset | Most Recent | Any | Not older than |
| resourceVersionMatch unset | limit="n", continue unset | Most Recent | Any | Exact |
| resourceVersionMatch unset | limit="n", continue="<token>" | Continue Token, Exact | Invalid, treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` |
| resourceVersionMatch=Exact[^1] | limit unset | Invalid | Invalid | Exact |
| resourceVersionMatch=Exact[^1] | limit="n", continue unset | Invalid | Invalid | Exact |
| resourceVersionMatch=NotOlderThan[^1] | limit unset | Invalid | Any | Not older than |
| resourceVersionMatch=NotOlderThan[^1] | limit="n", continue unset | Invalid | Any | Not older than |
| resourceVersionMatch unset | limit=\<n\>, continue unset | Most Recent | Any | Exact |
| resourceVersionMatch unset | limit=\<n\>, continue=\<token\> | Continue Token, Exact | Invalid, treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` |
| resourceVersionMatch=Exact [1] | limit unset | Invalid | Invalid | Exact |
| resourceVersionMatch=Exact [1] | limit=\<n\>, continue unset | Invalid | Invalid | Exact |
| resourceVersionMatch=NotOlderThan [1] | limit unset | Invalid | Any | Not older than |
| resourceVersionMatch=NotOlderThan [1] | limit=\<n\>, continue unset | Invalid | Any | Not older than |
[^1]: If the server does not honor the `resourceVersionMatch` parameter, it is treated as if it is unset.
| paging | resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" |
|---------------------------------|-----------------------|------------------------------------------------|----------------------------------------|
| limit unset | Most Recent | Any | Not older than |
| limit="n", continue unset | Most Recent | Any | Exact |
| limit="n", continue="\<token\>" | Continue Token, Exact | Invalid, but treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` |
{{< /table >}}
**Footnotes:**
[1] If the server does not honor the `resourceVersionMatch` parameter, it is treated as if it is unset.
The meaning of the get and list semantics are:
- **Most Recent:** Return data at the most recent resource version. The returned data must be
consistent (i.e. served from etcd via a quorum read).
- **Any:** Return data at any resource version. The newest available resource version is preferred,
but strong consistency is not required; data at any resource version may be served. It is possible
for the request to return data at a much older resource version that the client has previously
@ -911,6 +953,7 @@ The meaning of the get and list semantics are:
but does not make any guarantee about the resourceVersion in the ObjectMeta of the list items
since ObjectMeta.resourceVersion tracks when an object was last updated, not how up-to-date the
object is when served.
- **Exact:** Return data at the exact resource version provided. If the provided resourceVersion is
unavailable, the server responds with HTTP 410 "Gone". For list requests to servers that honor the
resourceVersionMatch parameter, this guarantees that resourceVersion in the ListMeta is the same as
@ -925,10 +968,14 @@ For watch, the semantics of resource version are:
**Watch:**
{{< table caption="resourceVersion for watch" >}}
| resourceVersion unset | resourceVersion="0" | resourceVersion="{value other than 0}" |
|-------------------------------------|----------------------------|----------------------------------------|
| Get State and Start at Most Recent | Get State and Start at Any | Start at Exact |
{{< /table >}}
The meaning of the watch semantics are:
- **Get State and Start at Most Recent:** Start a watch at the most recent resource version, which must be consistent (i.e. served from etcd via a quorum read). To establish initial state, the watch begins with synthetic "Added" events of all resources instances that exist at the starting resource version. All following watch events are for all changes that occurred after the resource version the watch started at.

View File

@ -13,62 +13,60 @@ card:
---
<!-- overview -->
This page provides an overview of the Kubernetes API.
This page provides an overview of the Kubernetes API.
<!-- body -->
The REST API is the fundamental fabric of Kubernetes. All operations and communications between components, and external user commands are REST API calls that the API Server handles. Consequently, everything in the Kubernetes
The REST API is the fundamental fabric of Kubernetes. All operations and
communications between components, and external user commands are REST API
calls that the API Server handles. Consequently, everything in the Kubernetes
platform is treated as an API object and has a corresponding entry in the
[API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/).
Most operations can be performed through the
[kubectl](/docs/reference/kubectl/overview/) command-line interface or other
command-line tools, such as [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), which in turn use
the API. However, you can also access the API directly using REST calls.
Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/)
if you are writing an application using the Kubernetes API.
## API versioning
To eliminate fields or restructure resource representations, Kubernetes supports
multiple API versions, each at a different API path. For example: `/api/v1` or
`/apis/rbac.authorization.k8s.io/v1alpha1`.
The JSON and Protobuf serialization schemas follow the same guidelines for
schema changes. The following descriptions cover both formats.
The version is set at the API level rather than at the resource or field level to:
- Ensure that the API presents a clear and consistent view of system resources and behavior.
- Enable control access to end-of-life and/or experimental APIs.
{{< note >}}
The API versioning and software versioning are indirectly related. The [API and release
versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) describes the relationship between API versioning and software versioning.
{{< /note >}}
Different API versions indicate different levels of stability and support. You
can find more information about the criteria for each level in the
[API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions).
Here's a summary of each level:
- Alpha:
- The version names contain `alpha` (for example, `v1alpha1`).
- The software may contain bugs. Enabling a feature may expose bugs. A
feature may be disabled by default.
- The support for a feature may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- The software is recommended for use only in short-lived testing clusters,
due to increased risk of bugs and lack of long-term support.
- Beta:
- The version names contain `beta` (for example, `v2beta3`).
- The software is well tested. Enabling a feature is considered safe.
Features are enabled by default.
- The support for a feature will not be dropped, though the details may change.
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens, migration instructions are provided. This may require deleting, editing, and re-creating
API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
- The software is recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have multiple clusters which can be upgraded independently, you may be able to relax this restriction.
- The schema and/or semantics of objects may change in incompatible ways in
a subsequent beta or stable release. When this happens, migration
instructions are provided. Schema changes may require deleting, editing, and
re-creating API objects. The editing process may not be straightforward.
The migration may require downtime for applications that rely on the feature.
- The software is not recommended for production uses. Subsequent releases
may introduce incompatible changes. If you have multiple clusters which
can be upgraded independently, you may be able to relax this restriction.
{{< note >}}
Try the beta features and provide feedback. After the features exit beta, it
may not be practical to make more changes.
{{< /note >}}
- Stable:
- The version name is `vX` where `X` is an integer.
@ -76,33 +74,44 @@ Try the beta features and provide feedback. After the features exit beta, it may
## API groups
[API groups](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md)
make it easier to extend the Kubernetes API.
The API group is specified in a REST path and in the `apiVersion` field of a
serialized object.
Currently, there are several API groups in use:
* The *core* (also called *legacy*) group, which is at REST path `/api/v1` and is not specified as part of the `apiVersion` field, for example, `apiVersion: v1`.
* The named groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION`
(for example, `apiVersion: batch/v1`). You can find the full list of supported API groups in [Kubernetes API reference](/docs/reference/).
* The *core* (also called *legacy*) group is found at REST path `/api/v1`.
The core group is not specified as part of the `apiVersion` field, for
example, `apiVersion: v1`.
* The named groups are at REST path `/apis/$GROUP_NAME/$VERSION` and use
`apiVersion: $GROUP_NAME/$VERSION` (for example, `apiVersion: batch/v1`).
You can find the full list of supported API groups in
[Kubernetes API reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/).
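As a hedged illustration (the object names below are made up), the `apiVersion` field looks like this for a core-group object versus a named-group object:

```shell
# Core ("legacy") group: no group name in apiVersion.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config   # hypothetical name
data:
  greeting: hello
EOF

# Named group: apiVersion is <group>/<version>.
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job      # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: demo
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: Never
EOF
```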
The two paths that support extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) are:
## Enabling or disabling API groups {#enabling-or-disabling}
- [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
for basic CRUD needs.
- [aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md) for a full set of Kubernetes API semantics to implement their own apiserver.
Certain resources and API groups are enabled by default. You can enable or
disable them by setting `--runtime-config` on the API server. The
`--runtime-config` flag accepts comma separated `<key>=<value>` pairs
describing the runtime configuration of the API server. For example:
## Enabling or disabling API groups
Certain resources and API groups are enabled by default. You can enable or disable them by setting `--runtime-config`
on the apiserver. `--runtime-config` accepts comma separated values. For example:
- to disable batch/v1, set `--runtime-config=batch/v1=false`
- to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`
The flag accepts a comma-separated set of `key=value` pairs describing the runtime configuration of the apiserver.
- to disable `batch/v1`, set `--runtime-config=batch/v1=false`
- to enable `batch/v2alpha1`, set `--runtime-config=batch/v2alpha1`
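A minimal sketch of how this might look on the kube-apiserver command line (all other required flags are omitted; if the cluster was set up with kubeadm, the flag would instead be edited in the API server's static Pod manifest):

```shell
# Sketch only: a real invocation needs many more flags.
kube-apiserver \
  --runtime-config=batch/v2alpha1=true,batch/v1=false
```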
{{< note >}}
When you enable or disable groups or resources, you need to restart the apiserver and controller-manager
to pick up the `--runtime-config` changes.
When you enable or disable groups or resources, you need to restart the API
server and controller manager to pick up the `--runtime-config` changes.
{{< /note >}}
## Persistence
Kubernetes stores its serialized state in terms of the API resources by writing them into
{{< glossary_tooltip term_id="etcd" >}}.
## {{% heading "whatsnext" %}}
- Learn more about [API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)
- Read the design documentation for
[aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md)

View File

@ -40,6 +40,8 @@ The following client libraries are officially maintained by
## Community-maintained client libraries
{{% thirdparty-content %}}
The following Kubernetes API client libraries are provided and maintained by
their authors, not the Kubernetes team.

View File

@ -303,12 +303,12 @@ Starting in Kubernetes v1.19, making an API request to a deprecated REST API end
2. Adds a `"k8s.io/deprecated":"true"` annotation to the [audit event](/docs/tasks/debug-application-cluster/audit/) recorded for the request.
3. Sets an `apiserver_requested_deprecated_apis` gauge metric to `1` in the `kube-apiserver`
process. The metric has labels for `group`, `version`, `resource`, `subresource` that can be joined
to the `apiserver_request_total` metric, and a `removed_version` label that indicates the
to the `apiserver_request_total` metric, and a `removed_release` label that indicates the
Kubernetes release in which the API will no longer be served. The following Prometheus query
returns information about requests made to deprecated APIs which will be removed in v1.22:
```promql
apiserver_requested_deprecated_apis{removed_version="1.22"} * on(group,version,resource,subresource) group_right() apiserver_request_total
apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource) group_right() apiserver_request_total
```
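If you are not running Prometheus, a rough alternative is to read the gauge straight from the metrics endpoint (assuming your credentials are allowed to access `/metrics`):

```shell
# Dump the API server metrics and keep only the deprecation gauge.
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
```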
### Fields of REST resources

View File

@ -94,7 +94,7 @@ The output shows that the `etcd` check is excluded:
{{< feature-state state="alpha" >}}
Each individual health check exposes an HTTP endpoint and can be checked individually.
The schema for the individual health checks is `/livez/<healthcheck-name>` where `livez` and `readyz` and be used to indicate if you want to check thee liveness or the readiness of the API server.
The schema for the individual health checks is `/livez/<healthcheck-name>` where `livez` and `readyz` can be used to indicate if you want to check the liveness or the readiness of the API server.
The `<healthcheck-name>` path can be discovered using the `verbose` flag from above; use the name that appears between `[+]` and `ok`.
These individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system:
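For example, a small sketch (it assumes your credentials are permitted to reach these endpoints):

```shell
# Show every liveness check and its status; check names appear between [+] and ok.
kubectl get --raw '/livez?verbose'

# Query a single check by name, here the etcd check.
kubectl get --raw '/livez/etcd'
```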

View File

@ -124,8 +124,8 @@ Same considerations apply for the service account key pair:
| private key path | public key path | command | argument |
|------------------------------|-----------------------------|-------------------------|--------------------------------------|
| sa.key | | kube-controller-manager | service-account-private |
| | sa.pub | kube-apiserver | service-account-key |
| sa.key | | kube-controller-manager | --service-account-private-key-file |
| | sa.pub | kube-apiserver | --service-account-key-file |
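A minimal sketch of how the key pair is wired into the control-plane components (paths assume the `/etc/kubernetes/pki` layout that kubeadm uses; every other required flag is omitted):

```shell
# Sketch only: both commands need many more flags in a real cluster.
kube-controller-manager --service-account-private-key-file=/etc/kubernetes/pki/sa.key
kube-apiserver --service-account-key-file=/etc/kubernetes/pki/sa.pub
```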
## Configure certificates for user accounts

View File

@ -13,12 +13,6 @@ verification and functionality test for a node. The test validates whether the
node meets the minimum requirements for Kubernetes; a node that passes the test
is qualified to join a Kubernetes cluster.
## Limitations
In Kubernetes version 1.5, node conformance test has the following limitations:
* Node conformance test only supports Docker as the container runtime.
## Node Prerequisite
To run node conformance test, a node must satisfy the same prerequisites as a

View File

@ -452,11 +452,11 @@ Host folder sharing is not implemented in the KVM driver yet.
| Driver | OS | HostFolder | VM |
| --- | --- | --- | --- |
| VirtualBox | Linux | /home | /hosthome |
| VirtualBox | macOS | /Users | /Users |
| VirtualBox | Windows | C://Users | /c/Users |
| VMware Fusion | macOS | /Users | /mnt/hgfs/Users |
| Xhyve | macOS | /Users | /Users |
| VirtualBox | Linux | `/home` | `/hosthome` |
| VirtualBox | macOS | `/Users` | `/Users` |
| VirtualBox | Windows | `C://Users` | `/c/Users` |
| VMware Fusion | macOS | `/Users` | `/mnt/hgfs/Users` |
| Xhyve | macOS | `/Users` | `/Users` |
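Beyond these driver defaults, you can usually share an arbitrary host directory yourself; for example (both paths are placeholders):

```shell
# Mount a host directory into the running minikube VM (served over 9P).
minikube mount /host/source/path:/vm/target/path
```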
## Private Container Registries

View File

@ -471,7 +471,12 @@ Start-Service containerd
### systemd
To use the `systemd` cgroup driver, set `plugins.cri.systemd_cgroup = true` in `/etc/containerd/config.toml`.
To use the `systemd` cgroup driver, set the following in `/etc/containerd/config.toml`:
```
[plugins.cri]
systemd_cgroup = true
```
When using kubeadm, manually configure the
[cgroup driver for kubelet](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node)
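For example, a hedged sketch of aligning the kubelet with containerd's `systemd` driver through a kubeadm configuration file (the API versions shown were current around Kubernetes 1.19; verify them against your release):

```shell
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# Use the configuration when creating the control plane.
kubeadm init --config kubeadm-config.yaml
```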

View File

@ -1,4 +0,0 @@
---
title: On-Premises VMs
weight: 40
---

View File

@ -1,116 +0,0 @@
---
reviewers:
- thockin
title: Cloudstack
content_type: concept
---
<!-- overview -->
[CloudStack](https://cloudstack.apache.org/) is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, hence Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt-based recipes.
[CoreOS](https://coreos.com) templates for CloudStack are built [nightly](https://stable.release.core-os.net/amd64-usr/current/). CloudStack operators need to [register](https://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions.
This guide uses a single [Ansible playbook](https://github.com/apachecloudstack/k8s), which is completely automated and can deploy Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.
<!-- body -->
## Prerequisites
```shell
sudo apt-get install -y python-pip libssl-dev
sudo pip install cs
sudo pip install sshpubkeys
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
```
On the CloudStack server you also have to install libselinux-python:
```shell
yum install libselinux-python
```
[_cs_](https://github.com/exoscale/cs) is a python module for the CloudStack API.
Set your CloudStack endpoint, API keys and HTTP method used.
You can define them as environment variables: `CLOUDSTACK_ENDPOINT`, `CLOUDSTACK_KEY`, `CLOUDSTACK_SECRET` and `CLOUDSTACK_METHOD`.
Or create a `~/.cloudstack.ini` file:
```none
[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post
```
We need to use the HTTP POST method to pass the _large_ userdata to the CoreOS instances.
### Clone the playbook
```shell
git clone https://github.com/apachecloudstack/k8s
cd kubernetes-cloudstack
```
### Create a Kubernetes cluster
You simply need to run the playbook.
```shell
ansible-playbook k8s.yml
```
Some variables can be edited in the `k8s.yml` file.
```none
vars:
ssh_key: k8s
k8s_num_nodes: 2
k8s_security_group_name: k8s
k8s_node_prefix: k8s2
k8s_template: <templatename>
k8s_instance_type: <serviceofferingname>
```
This will start a Kubernetes master node and a number of compute nodes (by default 2).
The `instance_type` and `template` values are cloud-specific; edit them to specify the template and instance type (i.e. service offering) available in your CloudStack cloud.
Check the tasks and templates in `roles/k8s` if you want to modify anything.
Once the playbook has finished, it will print out the IP of the Kubernetes master:
```none
TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********
```
SSH to it using the key that was created and using the _core_ user.
```shell
ssh -i ~/.ssh/id_rsa_k8s core@<master IP>
```
And you can list the machines in your cluster:
```shell
fleetctl list-machines
```
```none
MACHINE IP METADATA
a017c422... <node #1 IP> role=node
ad13bf84... <master IP> role=master
e9af8293... <node #2 IP> role=node
```
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
CloudStack | Ansible | CoreOS | flannel | [docs](/docs/setup/production-environment/on-premises-vm/cloudstack/) | | Community ([@Guiques](https://github.com/ltupin/))

View File

@ -1,25 +0,0 @@
---
reviewers:
- smugcloud
title: Kubernetes on DC/OS
content_type: concept
---
<!-- overview -->
Mesosphere provides an easy option to provision Kubernetes onto [DC/OS](https://mesosphere.com/product/), offering:
* Pure upstream Kubernetes
* Single-click cluster provisioning
* Highly available and secure by default
* Kubernetes running alongside fast-data platforms (e.g. Akka, Cassandra, Kafka, Spark)
<!-- body -->
## Official Mesosphere Guide
The canonical source of getting started on DC/OS is located in the [quickstart repo](https://github.com/mesosphere/dcos-kubernetes-quickstart).

View File

@ -1,72 +0,0 @@
---
reviewers:
- caesarxuchao
- erictune
title: oVirt
content_type: concept
---
<!-- overview -->
oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.
<!-- body -->
## oVirt Cloud Provider Deployment
The oVirt cloud provider allows you to easily discover and automatically add new VM instances as nodes to your Kubernetes cluster.
At the moment there are no community-supported or pre-loaded VM images that include Kubernetes, but it is possible to [import] or [install] Project Atomic (or Fedora) in a VM to [generate a template]. Any other distribution that includes Kubernetes may work as well.
It is mandatory to [install the ovirt-guest-agent] in the guests for the VM IP address and hostname to be reported to ovirt-engine and ultimately to Kubernetes.
Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
[import]: https://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines
[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates
[install the ovirt-guest-agent]: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
## Using the oVirt Cloud Provider
The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the `ovirt-cloud.conf` file:
```none
[connection]
uri = https://localhost:8443/ovirt-engine/api
username = admin@internal
password = admin
```
In the same file it is possible to specify (using the `filters` section) what search query to use to identify the VMs to be reported to Kubernetes:
```none
[filters]
# Search query used to find nodes
vms = tag=kubernetes
```
In the above example all the VMs tagged with the `kubernetes` label will be reported as nodes to Kubernetes.
The `ovirt-cloud.conf` file then must be specified in kube-controller-manager:
```shell
kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...
```
## oVirt Cloud Provider Screencast
This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.
[![Screencast](https://img.youtube.com/vi/JyyST4ZKne8/0.jpg)](https://www.youtube.com/watch?v=JyyST4ZKne8)
## Support Level
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ---------| ----------------------------
oVirt | | | | [docs](/docs/setup/production-environment/on-premises-vm/ovirt/) | | Community ([@simon3z](https://github.com/simon3z))

View File

@ -36,7 +36,7 @@ LoadBalancer, or with dynamic PersistentVolumes.
For both methods you need this infrastructure:
- Three machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
the masters
the control-plane nodes
- Three machines that meet [kubeadm's minimum
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
- Full network connectivity between all machines in the cluster (public or
@ -224,7 +224,7 @@ in the kubeadm config file.
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
```
- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control plane machine.
- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane node.
### Set up the first control plane node
@ -372,4 +372,3 @@ SSH is required if you want to control all nodes from a single machine.
# Quote this line if you are using external etcd
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
```

View File

@ -407,4 +407,12 @@ be advised that this is modifying a design principle of the Linux distribution.
This error message is shown when upgrading a Kubernetes cluster with `kubeadm` in the case of running an external etcd. This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`.
This issue is fixed as of version 1.19.
## `kubeadm reset` unmounts `/var/lib/kubelet`
If `/var/lib/kubelet` is being mounted, performing a `kubeadm reset` will effectively unmount it.
To work around the issue, re-mount the `/var/lib/kubelet` directory after performing the `kubeadm reset` operation.
This is a regression introduced in kubeadm 1.15. The issue is fixed in 1.20.
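For example (this assumes `/var/lib/kubelet` is a dedicated mount declared in `/etc/fstab`; adjust if the mount is managed differently on your nodes):

```shell
kubeadm reset
# kubeadm reset has unmounted /var/lib/kubelet; mount it again from fstab.
mount /var/lib/kubelet
```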

View File

@ -13,12 +13,13 @@ Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [i
* a highly available cluster
* composable attributes
* support for most popular Linux distributions
* Container Linux by CoreOS
* Ubuntu 16.04, 18.04, 20.04
* CentOS/RHEL/Oracle Linux 7, 8
* Debian Buster, Jessie, Stretch, Wheezy
* Ubuntu 16.04, 18.04
* CentOS/RHEL/Oracle Linux 7
* Fedora 28
* Fedora 31, 32
* Fedora CoreOS
* openSUSE Leap 15
* Flatcar Container Linux by Kinvolk
* continuous integration tests
To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
@ -32,8 +33,8 @@ To choose a tool which best fits your use case, read [this comparison](https://g
Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):
* **Ansible v2.7.8 and python-netaddr is installed on the machine that will run Ansible commands**
* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
* **Ansible v2.9 and python-netaddr is installed on the machine that will run Ansible commands**
* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**
* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* The target servers are configured to allow **IPv4 forwarding**
* **Your ssh key must be copied** to all the servers part of your inventory
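A hedged sketch of the usual workflow for meeting these requirements and running the playbooks (the file names follow the layout of the kubespray repository; check them against the version you clone):

```shell
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray

# Installs Ansible and the other pinned Python dependencies.
sudo pip3 install -r requirements.txt

# Copy the sample inventory and edit it to describe your servers.
cp -rfp inventory/sample inventory/mycluster

# Deploy the cluster; --become is required for privileged tasks.
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
```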

Some files were not shown because too many files have changed in this diff.