Merge branch 'main' into ingressControllerNamespace

@@ -11,7 +11,7 @@
For overall help on editing and submitting pull requests, visit:
https://kubernetes.io/docs/contribute/start/#improve-existing-content

Use the default base branch, “master”, if you're documenting existing
Use the default base branch, “main”, if you're documenting existing
features in the English localization.

If you're working on a different localization (not English), see
Makefile (5 changes)

@@ -6,8 +6,9 @@ NETLIFY_FUNC = $(NODE_BIN)/netlify-lambda
# but this can be overridden when calling make, e.g.
# CONTAINER_ENGINE=podman make container-image
CONTAINER_ENGINE ?= docker
IMAGE_REGISTRY ?= gcr.io/k8s-staging-sig-docs
IMAGE_VERSION=$(shell scripts/hash-files.sh Dockerfile Makefile | cut -c 1-12)
CONTAINER_IMAGE = kubernetes-hugo:v$(HUGO_VERSION)-$(IMAGE_VERSION)
CONTAINER_IMAGE = $(IMAGE_REGISTRY)/k8s-website-hugo:v$(HUGO_VERSION)-$(IMAGE_VERSION)
CONTAINER_RUN = $(CONTAINER_ENGINE) run --rm --interactive --tty --volume $(CURDIR):/src

CCRED=\033[0;31m

@@ -95,4 +96,4 @@ clean-api-reference: ## Clean all directories in API reference directory, preser

api-reference: clean-api-reference ## Build the API reference pages. go needed
cd api-ref-generator/gen-resourcesdocs && \
go run cmd/main.go kwebsite --config-dir config/v1.21/ --file api/v1.21/swagger.json --output-dir ../../content/en/docs/reference/kubernetes-api --templates templates
go run cmd/main.go kwebsite --config-dir ../../api-ref-assets/config/ --file ../../api-ref-assets/api/swagger.json --output-dir ../../content/en/docs/reference/kubernetes-api --templates ../../api-ref-assets/templates
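The Makefile comments note that the container engine, and now the image registry, can be overridden on the `make` command line. A minimal sketch of what that looks like (the podman engine and the registry value below are only illustrative):

```bash
# Build the site image with an alternative container engine and registry,
# then serve the site from the image that was just built.
CONTAINER_ENGINE=podman IMAGE_REGISTRY=registry.example.com/sig-docs make container-image
CONTAINER_ENGINE=podman make container-serve
```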
OWNERS (2 changes)

@@ -8,7 +8,9 @@ approvers:

emeritus_approvers:
# - chenopis, commented out to disable PR assignments
# - irvifa, commented out to disable PR assignments
# - jaredbhatti, commented out to disable PR assignments
# - kbarnard10, commented out to disable PR assignments
# - steveperry-53, commented out to disable PR assignments
- stewart-yu
# - zacharysarah, commented out to disable PR assignments
@ -1,12 +1,8 @@
|
|||
aliases:
|
||||
sig-docs-blog-owners: # Approvers for blog content
|
||||
- castrojo
|
||||
- kbarnard10
|
||||
- onlydole
|
||||
- mrbobbytables
|
||||
sig-docs-blog-reviewers: # Reviewers for blog content
|
||||
- castrojo
|
||||
- kbarnard10
|
||||
- mrbobbytables
|
||||
- onlydole
|
||||
- sftim
|
||||
|
@ -22,31 +18,25 @@ aliases:
|
|||
- annajung
|
||||
- bradtopol
|
||||
- celestehorgan
|
||||
- irvifa
|
||||
- jimangel
|
||||
- kbarnard10
|
||||
- jlbutler
|
||||
- kbhawkey
|
||||
- onlydole
|
||||
- pi-victor
|
||||
- reylejano
|
||||
- savitharaghunathan
|
||||
- sftim
|
||||
- steveperry-53
|
||||
- tengqm
|
||||
- zparnold
|
||||
sig-docs-en-reviews: # PR reviews for English content
|
||||
- bradtopol
|
||||
- celestehorgan
|
||||
- daminisatya
|
||||
- jimangel
|
||||
- kbarnard10
|
||||
- kbhawkey
|
||||
- onlydole
|
||||
- rajeshdeshpande02
|
||||
- sftim
|
||||
- steveperry-53
|
||||
- tengqm
|
||||
- zparnold
|
||||
sig-docs-es-owners: # Admins for Spanish content
|
||||
- raelga
|
||||
- electrocucaracha
|
||||
|
@ -94,7 +84,6 @@ aliases:
|
|||
- danninov
|
||||
- girikuncoro
|
||||
- habibrosyad
|
||||
- irvifa
|
||||
- phanama
|
||||
- wahyuoi
|
||||
sig-docs-id-reviews: # PR reviews for Indonesian content
|
||||
|
@ -102,7 +91,6 @@ aliases:
|
|||
- danninov
|
||||
- girikuncoro
|
||||
- habibrosyad
|
||||
- irvifa
|
||||
- phanama
|
||||
- wahyuoi
|
||||
sig-docs-it-owners: # Admins for Italian content
|
||||
|
@ -138,14 +126,14 @@ aliases:
|
|||
- ClaudiaJKang
|
||||
- gochist
|
||||
- ianychoi
|
||||
- seokho-son
|
||||
- ysyukr
|
||||
- jihoon-seo
|
||||
- jmyung
|
||||
- pjhwa
|
||||
- seokho-son
|
||||
- yoonian
|
||||
- ysyukr
|
||||
sig-docs-leads: # Website chairs and tech leads
|
||||
- irvifa
|
||||
- jimangel
|
||||
- kbarnard10
|
||||
- kbhawkey
|
||||
- onlydole
|
||||
- sftim
|
||||
|
@ -163,8 +151,10 @@ aliases:
|
|||
# zhangxiaoyu-zidif
|
||||
sig-docs-zh-reviews: # PR reviews for Chinese content
|
||||
- chenrui333
|
||||
- chenxuc
|
||||
- howieyuen
|
||||
- idealhack
|
||||
- mengjiao-liu
|
||||
- pigletfly
|
||||
- SataQiu
|
||||
- tanjunchen
|
||||
|
@ -235,10 +225,12 @@ aliases:
|
|||
- parispittman
|
||||
# authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES
|
||||
sig-release-leads:
|
||||
- cpanato # SIG Technical Lead
|
||||
- hasheddan # SIG Technical Lead
|
||||
- jeremyrickard # SIG Technical Lead
|
||||
- justaugustus # SIG Chair
|
||||
- LappleApple # SIG Program Manager
|
||||
- puerco # SIG Technical Lead
|
||||
- saschagrunert # SIG Chair
|
||||
release-engineering-approvers:
|
||||
- cpanato # Release Manager
|
||||
|
@ -250,6 +242,7 @@ aliases:
|
|||
release-engineering-reviewers:
|
||||
- ameukam # Release Manager Associate
|
||||
- jimangel # Release Manager Associate
|
||||
- markyjackson-taulia # Release Manager Associate
|
||||
- mkorbi # Release Manager Associate
|
||||
- palnabarun # Release Manager Associate
|
||||
- onlydole # Release Manager Associate
|
||||
|
|
|
@@ -1,6 +1,6 @@
# Kubernetesのドキュメント

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

このリポジトリには、[KubernetesのWebサイトとドキュメント](https://kubernetes.io/)をビルドするために必要な全アセットが格納されています。貢献に興味を持っていただきありがとうございます!

@@ -1,6 +1,6 @@
# 쿠버네티스 문서화

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

이 저장소에는 [쿠버네티스 웹사이트 및 문서](https://kubernetes.io/)를 빌드하는 데 필요한 자산이 포함되어 있습니다. 기여해주셔서 감사합니다!

@@ -1,6 +1,6 @@
# Dokumentacja projektu Kubernetes

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

W tym repozytorium znajdziesz wszystko, czego potrzebujesz do zbudowania [strony internetowej Kubernetesa wraz z dokumentacją](https://kubernetes.io/). Bardzo nam miło, że chcesz wziąć udział w jej współtworzeniu!

@@ -18,7 +18,7 @@ Aby móc skorzystać z tego repozytorium, musisz lokalnie zainstalować:
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- Środowisko obsługi kontenerów, np. [Docker-a](https://www.docker.com/).
- Środowisko obsługi kontenerów, np. [Dockera](https://www.docker.com/).

Przed rozpoczęciem zainstaluj niezbędne zależności. Sklonuj repozytorium i przejdź do odpowiedniego katalogu:

@@ -43,7 +43,9 @@ make container-image
make container-serve
```

Aby obejrzeć zawartość serwisu otwórz w przeglądarce adres http://localhost:1313. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) i [Windows](https://docs.docker.com/docker-for-windows/#resources)).

Aby obejrzeć zawartość serwisu, otwórz w przeglądarce adres http://localhost:1313. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.

## Jak uruchomić lokalną kopię strony przy pomocy Hugo?

@@ -1,6 +1,6 @@
# A documentação do Kubernetes

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

Bem-vindos! Este repositório contém todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!

@@ -1,6 +1,6 @@
# Документация по Kubernetes

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

Данный репозиторий содержит все необходимые файлы для сборки [сайта Kubernetes и документации](https://kubernetes.io/). Мы благодарим вас за желание внести свой вклад!

@@ -1,7 +1,7 @@
<!-- # The Kubernetes documentation -->
# Документація Kubernetes

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

<!-- This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute! -->
Вітаємо! В цьому репозиторії міститься все необхідне для роботи над [сайтом і документацією Kubernetes](https://kubernetes.io/). Ми щасливі, що ви хочете зробити свій внесок!

@@ -4,7 +4,7 @@
# The Kubernetes documentation
-->

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

<!--
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
README.md (110 changes)

@@ -1,13 +1,13 @@
# The Kubernetes documentation

[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!

+ [Contributing to the docs](#contributing-to-the-docs)
+ [Localization ReadMes](#localization-readmemds)
- [Contributing to the docs](#contributing-to-the-docs)
- [Localization ReadMes](#localization-readmemds)

# Using this repository
## Using this repository

You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.

@@ -22,14 +22,14 @@ To use this repository, you need the following installed locally:

Before you start, install the dependencies. Clone the repository and navigate to the directory:

```
```bash
git clone https://github.com/kubernetes/website.git
cd website
```

The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:

```
```bash
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
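To confirm that the Docsy submodule actually came down, a quick check along these lines is usually enough (a sketch; the commit shown depends on your checkout):

```bash
# List registered submodules and the commit each one is pinned to;
# a leading "-" in the output means the submodule was not initialized.
git submodule status
```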

@@ -38,14 +38,14 @@ git submodule update --init --recursive --depth 1

To build the site in a container, run the following to build the container image and run it:

```
```bash
make container-image
make container-serve
```

If you see errors, it probably means that the hugo container did not have enough computing resources available. To solve it, increase the amount of allowed CPU and memory usage for Docker on your machine ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) and [Windows](https://docs.docker.com/docker-for-windows/#resources)).

Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
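Putting the steps above together, a full container-based run looks roughly like this (a sketch built from the commands in this README; adjust the clone location to taste):

```bash
# Clone the site, pull in the Docsy theme, then build and serve it in a container.
git clone https://github.com/kubernetes/website.git
cd website
git submodule update --init --recursive --depth 1
make container-image
make container-serve
# The site is then served at http://localhost:1313.
```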

## Running the website locally using Hugo

@@ -59,54 +59,47 @@ npm ci
make serve
```

This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
This will start the local Hugo server on port 1313. Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.

## Building the API reference pages

The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs.
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.

To update the reference pages for a new Kubernetes release (replace v1.20 in the following examples with the release to update to):

1. Pull the `kubernetes-resources-reference` submodule:

```
git submodule update --init --recursive --depth 1
```
```bash
git submodule update --init --recursive --depth 1
```

2. Create a new API revision into the submodule, and add the Swagger specification:
2. Update the Swagger specification:

```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/gen-resourcesdocs/api/v1.20/swagger.json
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```

3. Copy the table of contents and fields configuration for the new release from a previous one:
3. In `api-ref-assets/config/`, adapt the files `toc.yaml` and `fields.yaml` to reflect the changes of the new release.

```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
cp api-ref-generator/gen-resourcesdocs/api/v1.19/* api-ref-generator/gen-resourcesdocs/api/v1.20/
```
4. Next, build the pages:

4. Adapt the files `toc.yaml` and `fields.yaml` to reflect the changes between the two releases
```bash
make api-reference
```

5. Next, build the pages:
You can test the results locally by making and serving the site from a container image:

```
make api-reference
```
```bash
make container-image
make container-serve
```

You can test the results locally by making and serving the site from a container image:
In a web browser, go to <http://localhost:1313/docs/reference/kubernetes-api/> to view the API reference.

```
make container-image
make container-serve
```

In a web browser, go to http://localhost:1313/docs/reference/kubernetes-api/ to view the API reference.

6. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.
5. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.
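Taken together, the updated procedure amounts to roughly this sequence (a sketch assembled from the commands above; the release-specific edits to `toc.yaml` and `fields.yaml` under `api-ref-assets/config/` still have to be made by hand):

```bash
# Refresh the generator submodule and the Swagger specification.
git submodule update --init --recursive --depth 1
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json

# After adapting toc.yaml and fields.yaml, rebuild the pages and preview them
# at http://localhost:1313/docs/reference/kubernetes-api/.
make api-reference
make container-image
make container-serve
```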

## Troubleshooting

### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version

Hugo is shipped in two set of binaries for technical reasons. The current website runs based on the **Hugo Extended** version only. In the [release page](https://github.com/gohugoio/hugo/releases) look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.

@@ -115,7 +108,7 @@ Hugo is shipped in two set of binaries for technical reasons. The current websit

If you run `make serve` on macOS and receive the following error:

```
```bash
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```

@@ -124,7 +117,7 @@ Try checking the current limit for open files:

`launchctl limit maxfiles`

Then run the following commands (adapted from https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
Then run the following commands (adapted from <https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c>):

```shell
#!/bin/sh

@@ -147,8 +140,7 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist

This works for Catalina as well as Mojave macOS.

# Get involved with SIG Docs
## Get involved with SIG Docs

Learn more about SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).

@@ -157,39 +149,39 @@ You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs) [Get an invite for this Slack](https://slack.k8s.io/)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)

# Contributing to the docs
## Contributing to the docs

You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a _fork_. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.

Once your pull request is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.**

Also, note that you may end up having more than one Kubernetes reviewer provide you feedback or you may end up getting feedback from a Kubernetes reviewer that is different than the one initially assigned to provide you feedback.

Furthermore, in some cases, one of your reviewers might ask for a technical review from a Kubernetes tech reviewer when needed. Reviewers will do their best to provide feedback in a timely fashion but response time can vary based on circumstances.

For more information about contributing to the Kubernetes documentation, see:

* [Contribute to Kubernetes docs](https://kubernetes.io/docs/contribute/)
* [Page Content Types](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)
- [Contribute to Kubernetes docs](https://kubernetes.io/docs/contribute/)
- [Page Content Types](https://kubernetes.io/docs/contribute/style/page-content-types/)
- [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/)
- [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)

# Localization `README.md`'s
## Localization `README.md`'s

| Language | Language |
|---|---|
|[Chinese](README-zh.md)|[Korean](README-ko.md)|
|[French](README-fr.md)|[Polish](README-pl.md)|
|[German](README-de.md)|[Portuguese](README-pt.md)|
|[Hindi](README-hi.md)|[Russian](README-ru.md)|
|[Indonesian](README-id.md)|[Spanish](README-es.md)|
|[Italian](README-it.md)|[Ukrainian](README-uk.md)|
|[Japanese](README-ja.md)|[Vietnamese](README-vi.md)|
| Language | Language |
| -------------------------- | -------------------------- |
| [Chinese](README-zh.md) | [Korean](README-ko.md) |
| [French](README-fr.md) | [Polish](README-pl.md) |
| [German](README-de.md) | [Portuguese](README-pt.md) |
| [Hindi](README-hi.md) | [Russian](README-ru.md) |
| [Indonesian](README-id.md) | [Spanish](README-es.md) |
| [Italian](README-it.md) | [Ukrainian](README-uk.md) |
| [Japanese](README-ja.md) | [Vietnamese](README-vi.md) |

# Code of conduct
## Code of conduct

Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

# Thank you!
## Thank you

Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!

@@ -4,8 +4,6 @@

Join the [kubernetes-security-announce] group for security and vulnerability announcements.

You can also subscribe to an RSS feed of the above using [this link][kubernetes-security-announce-rss].

## Reporting a Vulnerability

Instructions for reporting a vulnerability can be found on the

@@ -17,6 +15,5 @@ Information about supported Kubernetes versions can be found on the
[Kubernetes version and version skew support policy] page on the Kubernetes website.

[kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce
[kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50
[Kubernetes version and version skew support policy]: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
[Kubernetes Security and Disclosure Information]: https://kubernetes.io/docs/reference/issues-security/security/#report-a-vulnerability

@@ -1,6 +1,6 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Committee to reach out
# They are the contact point for the Security Response Committee to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the

@@ -10,7 +10,5 @@
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/

irvifa
jimangel
kbarnard10
sftim
|
@ -0,0 +1,696 @@
|
|||
- definition: io.k8s.api.core.v1.PodSpec
|
||||
field_categories:
|
||||
- name: Containers
|
||||
fields:
|
||||
- containers
|
||||
- initContainers
|
||||
- imagePullSecrets
|
||||
- enableServiceLinks
|
||||
- name: Volumes
|
||||
fields:
|
||||
- volumes
|
||||
- name: Scheduling
|
||||
fields:
|
||||
- nodeSelector
|
||||
- nodeName
|
||||
- affinity
|
||||
- tolerations
|
||||
- schedulerName
|
||||
- runtimeClassName
|
||||
- priorityClassName
|
||||
- priority
|
||||
- topologySpreadConstraints
|
||||
- name: Lifecycle
|
||||
fields:
|
||||
- restartPolicy
|
||||
- terminationGracePeriodSeconds
|
||||
- activeDeadlineSeconds
|
||||
- readinessGates
|
||||
- name: Hostname and Name resolution
|
||||
fields:
|
||||
- hostname
|
||||
- setHostnameAsFQDN
|
||||
- subdomain
|
||||
- hostAliases
|
||||
- dnsConfig
|
||||
- dnsPolicy
|
||||
- name: Hosts namespaces
|
||||
fields:
|
||||
- hostNetwork
|
||||
- hostPID
|
||||
- hostIPC
|
||||
- shareProcessNamespace
|
||||
- name: Service account
|
||||
fields:
|
||||
- serviceAccountName
|
||||
- automountServiceAccountToken
|
||||
- name: Security context
|
||||
fields:
|
||||
- securityContext
|
||||
- name: Beta level
|
||||
fields:
|
||||
- preemptionPolicy
|
||||
- overhead
|
||||
- name: Alpha level
|
||||
fields:
|
||||
- ephemeralContainers
|
||||
- name: Deprecated
|
||||
fields:
|
||||
- serviceAccount
|
||||
|
||||
- definition: io.k8s.api.core.v1.PodSecurityContext
|
||||
field_categories:
|
||||
- fields:
|
||||
- runAsUser
|
||||
- runAsNonRoot
|
||||
- runAsGroup
|
||||
- supplementalGroups
|
||||
- fsGroup
|
||||
- fsGroupChangePolicy
|
||||
- seccompProfile
|
||||
- seLinuxOptions
|
||||
- sysctls
|
||||
- windowsOptions
|
||||
|
||||
- definition: io.k8s.api.core.v1.Toleration
|
||||
field_categories:
|
||||
- fields:
|
||||
- key
|
||||
- operator
|
||||
- value
|
||||
- effect
|
||||
- tolerationSeconds
|
||||
|
||||
- definition: io.k8s.api.core.v1.PodStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- nominatedNodeName
|
||||
- hostIP
|
||||
- startTime
|
||||
- phase
|
||||
- message
|
||||
- reason
|
||||
- podIP
|
||||
- podIPs
|
||||
- conditions
|
||||
- qosClass
|
||||
- initContainerStatuses
|
||||
- containerStatuses
|
||||
- ephemeralContainerStatuses
|
||||
|
||||
- definition: io.k8s.api.core.v1.Container
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- name: Image
|
||||
fields:
|
||||
- image
|
||||
- imagePullPolicy
|
||||
- name: Entrypoint
|
||||
fields:
|
||||
- command
|
||||
- args
|
||||
- workingDir
|
||||
- name: Ports
|
||||
fields:
|
||||
- ports
|
||||
- name: Environment variables
|
||||
fields:
|
||||
- env
|
||||
- envFrom
|
||||
- name: Volumes
|
||||
fields:
|
||||
- volumeMounts
|
||||
- volumeDevices
|
||||
- name: Resources
|
||||
fields:
|
||||
- resources
|
||||
- name: Lifecycle
|
||||
fields:
|
||||
- lifecycle
|
||||
- terminationMessagePath
|
||||
- terminationMessagePolicy
|
||||
- livenessProbe
|
||||
- readinessProbe
|
||||
- startupProbe
|
||||
- name: Security Context
|
||||
fields:
|
||||
- securityContext
|
||||
- name: Debugging
|
||||
fields:
|
||||
- stdin
|
||||
- stdinOnce
|
||||
- tty
|
||||
|
||||
- definition: io.k8s.api.core.v1.Probe
|
||||
field_categories:
|
||||
- fields:
|
||||
- exec
|
||||
- httpGet
|
||||
- tcpSocket
|
||||
- initialDelaySeconds
|
||||
- terminationGracePeriodSeconds
|
||||
- periodSeconds
|
||||
- timeoutSeconds
|
||||
- failureThreshold
|
||||
- successThreshold
|
||||
|
||||
- definition: io.k8s.api.core.v1.SecurityContext
|
||||
field_categories:
|
||||
- fields:
|
||||
- runAsUser
|
||||
- runAsNonRoot
|
||||
- runAsGroup
|
||||
- readOnlyRootFilesystem
|
||||
- procMount
|
||||
- privileged
|
||||
- allowPrivilegeEscalation
|
||||
- capabilities
|
||||
- seccompProfile
|
||||
- seLinuxOptions
|
||||
- windowsOptions
|
||||
|
||||
- definition: io.k8s.api.core.v1.ContainerStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- image
|
||||
- imageID
|
||||
- containerID
|
||||
- state
|
||||
- lastState
|
||||
- ready
|
||||
- restartCount
|
||||
- started
|
||||
|
||||
- definition: io.k8s.api.core.v1.ContainerStateTerminated
|
||||
field_categories:
|
||||
- fields:
|
||||
- containerID
|
||||
- exitCode
|
||||
- startedAt
|
||||
- finishedAt
|
||||
- message
|
||||
- reason
|
||||
- signal
|
||||
|
||||
- definition: io.k8s.api.core.v1.EphemeralContainer
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- targetContainerName
|
||||
- name: Image
|
||||
fields:
|
||||
- image
|
||||
- imagePullPolicy
|
||||
- name: Entrypoint
|
||||
fields:
|
||||
- command
|
||||
- args
|
||||
- workingDir
|
||||
- name: Environment variables
|
||||
fields:
|
||||
- env
|
||||
- envFrom
|
||||
- name: Volumes
|
||||
fields:
|
||||
- volumeMounts
|
||||
- volumeDevices
|
||||
- name: Lifecycle
|
||||
fields:
|
||||
- terminationMessagePath
|
||||
- terminationMessagePolicy
|
||||
- name: Debugging
|
||||
fields:
|
||||
- stdin
|
||||
- stdinOnce
|
||||
- tty
|
||||
- name: Not allowed
|
||||
fields:
|
||||
- ports
|
||||
- resources
|
||||
- lifecycle
|
||||
- livenessProbe
|
||||
- readinessProbe
|
||||
- securityContext
|
||||
- startupProbe
|
||||
|
||||
- definition: io.k8s.api.core.v1.ReplicationControllerSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- selector
|
||||
- template
|
||||
- replicas
|
||||
- minReadySeconds
|
||||
|
||||
- definition: io.k8s.api.core.v1.ReplicationControllerStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- replicas
|
||||
- availableReplicas
|
||||
- readyReplicas
|
||||
- fullyLabeledReplicas
|
||||
- conditions
|
||||
- observedGeneration
|
||||
|
||||
- definition: io.k8s.api.apps.v1.ReplicaSetSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- selector
|
||||
- template
|
||||
- replicas
|
||||
- minReadySeconds
|
||||
|
||||
- definition: io.k8s.api.apps.v1.ReplicaSetStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- replicas
|
||||
- availableReplicas
|
||||
- readyReplicas
|
||||
- fullyLabeledReplicas
|
||||
- conditions
|
||||
- observedGeneration
|
||||
|
||||
- definition: io.k8s.api.apps.v1.DeploymentSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- selector
|
||||
- template
|
||||
- replicas
|
||||
- minReadySeconds
|
||||
- strategy
|
||||
- revisionHistoryLimit
|
||||
- progressDeadlineSeconds
|
||||
- paused
|
||||
|
||||
- definition: io.k8s.api.apps.v1.DeploymentStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- replicas
|
||||
- availableReplicas
|
||||
- readyReplicas
|
||||
- unavailableReplicas
|
||||
- updatedReplicas
|
||||
- collisionCount
|
||||
- conditions
|
||||
- observedGeneration
|
||||
|
||||
- definition: io.k8s.api.apps.v1.DeploymentStrategy
|
||||
field_categories:
|
||||
- fields:
|
||||
- type
|
||||
- rollingUpdate
|
||||
|
||||
- definition: io.k8s.api.apps.v1.StatefulSetSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- serviceName
|
||||
- selector
|
||||
- template
|
||||
- replicas
|
||||
- updateStrategy
|
||||
- podManagementPolicy
|
||||
- revisionHistoryLimit
|
||||
- volumeClaimTemplates
|
||||
- minReadySeconds
|
||||
|
||||
- definition: io.k8s.api.apps.v1.StatefulSetUpdateStrategy
|
||||
field_categories:
|
||||
- fields:
|
||||
- type
|
||||
- rollingUpdate
|
||||
|
||||
- definition: io.k8s.api.apps.v1.StatefulSetStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- replicas
|
||||
- readyReplicas
|
||||
- currentReplicas
|
||||
- updatedReplicas
|
||||
- availableReplicas
|
||||
- collisionCount
|
||||
- conditions
|
||||
- currentRevision
|
||||
- updateRevision
|
||||
- observedGeneration
|
||||
|
||||
- definition: io.k8s.api.apps.v1.DaemonSetSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- selector
|
||||
- template
|
||||
- minReadySeconds
|
||||
- updateStrategy
|
||||
- revisionHistoryLimit
|
||||
|
||||
- definition: io.k8s.api.apps.v1.DaemonSetUpdateStrategy
|
||||
field_categories:
|
||||
- fields:
|
||||
- type
|
||||
- rollingUpdate
|
||||
|
||||
- definition: io.k8s.api.apps.v1.DaemonSetStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- numberReady
|
||||
- numberAvailable
|
||||
- numberUnavailable
|
||||
- numberMisscheduled
|
||||
- desiredNumberScheduled
|
||||
- currentNumberScheduled
|
||||
- updatedNumberScheduled
|
||||
- collisionCount
|
||||
- conditions
|
||||
- observedGeneration
|
||||
|
||||
- definition: io.k8s.api.batch.v1.JobSpec
|
||||
field_categories:
|
||||
- name: Replicas
|
||||
fields:
|
||||
- template
|
||||
- parallelism
|
||||
- name: Lifecycle
|
||||
fields:
|
||||
- completions
|
||||
- completionMode
|
||||
- backoffLimit
|
||||
- activeDeadlineSeconds
|
||||
- ttlSecondsAfterFinished
|
||||
- suspend
|
||||
- name: Selector
|
||||
fields:
|
||||
- selector
|
||||
- manualSelector
|
||||
|
||||
- definition: io.k8s.api.batch.v1.JobStatus
|
||||
field_categories:
|
||||
- fields:
|
||||
- startTime
|
||||
- completionTime
|
||||
- active
|
||||
- failed
|
||||
- succeeded
|
||||
- completedIndexes
|
||||
- conditions
|
||||
- uncountedTerminatedPods
|
||||
|
||||
- definition: io.k8s.api.batch.v1.CronJobSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- jobTemplate
|
||||
- schedule
|
||||
- concurrencyPolicy
|
||||
- startingDeadlineSeconds
|
||||
- suspend
|
||||
- successfulJobsHistoryLimit
|
||||
- failedJobsHistoryLimit
|
||||
|
||||
- definition: io.k8s.api.autoscaling.v2beta2.HorizontalPodAutoscalerSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- maxReplicas
|
||||
- scaleTargetRef
|
||||
- minReplicas
|
||||
- behavior
|
||||
- metrics
|
||||
|
||||
- definition: io.k8s.api.autoscaling.v2beta2.HPAScalingPolicy
|
||||
field_categories:
|
||||
- fields:
|
||||
- type
|
||||
- value
|
||||
- periodSeconds
|
||||
|
||||
- definition: io.k8s.api.core.v1.ServiceSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- selector
|
||||
- ports
|
||||
- type
|
||||
- ipFamilies
|
||||
- ipFamilyPolicy
|
||||
- clusterIP
|
||||
- clusterIPs
|
||||
- externalIPs
|
||||
- sessionAffinity
|
||||
- loadBalancerIP
|
||||
- loadBalancerSourceRanges
|
||||
- loadBalancerClass
|
||||
- externalName
|
||||
- externalTrafficPolicy
|
||||
- internalTrafficPolicy
|
||||
- healthCheckNodePort
|
||||
- publishNotReadyAddresses
|
||||
- sessionAffinityConfig
|
||||
- allocateLoadBalancerNodePorts
|
||||
|
||||
- definition: io.k8s.api.core.v1.ServicePort
|
||||
field_categories:
|
||||
- fields:
|
||||
- port
|
||||
- targetPort
|
||||
- protocol
|
||||
- name
|
||||
- nodePort
|
||||
- appProtocol
|
||||
|
||||
- definition: io.k8s.api.core.v1.EndpointSubset
|
||||
field_categories:
|
||||
- fields:
|
||||
- addresses
|
||||
- notReadyAddresses
|
||||
- ports
|
||||
|
||||
- definition: io.k8s.api.core.v1.EndpointPort
|
||||
field_categories:
|
||||
- fields:
|
||||
- port
|
||||
- protocol
|
||||
- name
|
||||
- appProtocol
|
||||
|
||||
- definition: io.k8s.api.discovery.v1.EndpointPort
|
||||
field_categories:
|
||||
- fields:
|
||||
- port
|
||||
- protocol
|
||||
- name
|
||||
- appProtocol
|
||||
|
||||
- definition: io.k8s.api.core.v1.Volume
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- name: Exposed Persistent volumes
|
||||
fields:
|
||||
- persistentVolumeClaim
|
||||
- name: Projections
|
||||
fields:
|
||||
- configMap
|
||||
- secret
|
||||
- downwardAPI
|
||||
- projected
|
||||
- name: Local / Temporary Directory
|
||||
fields:
|
||||
- emptyDir
|
||||
- hostPath
|
||||
- name: Persistent volumes
|
||||
fields:
|
||||
- awsElasticBlockStore
|
||||
- azureDisk
|
||||
- azureFile
|
||||
- cephfs
|
||||
- cinder
|
||||
- csi
|
||||
- fc
|
||||
- flexVolume
|
||||
- flocker
|
||||
- gcePersistentDisk
|
||||
- glusterfs
|
||||
- iscsi
|
||||
- nfs
|
||||
- photonPersistentDisk
|
||||
- portworxVolume
|
||||
- quobyte
|
||||
- rbd
|
||||
- scaleIO
|
||||
- storageos
|
||||
- vsphereVolume
|
||||
- name: Alpha level
|
||||
fields:
|
||||
- ephemeral
|
||||
- name: Deprecated
|
||||
fields:
|
||||
- gitRepo
|
||||
|
||||
- definition: io.k8s.api.core.v1.ConfigMapVolumeSource
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- optional
|
||||
- defaultMode
|
||||
- items
|
||||
|
||||
- definition: io.k8s.api.core.v1.SecretVolumeSource
|
||||
field_categories:
|
||||
- fields:
|
||||
- secretName
|
||||
- optional
|
||||
- defaultMode
|
||||
- items
|
||||
|
||||
- definition: io.k8s.api.core.v1.ConfigMapProjection
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- optional
|
||||
- items
|
||||
|
||||
- definition: io.k8s.api.core.v1.SecretProjection
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- optional
|
||||
- items
|
||||
|
||||
- definition: io.k8s.api.core.v1.ProjectedVolumeSource
|
||||
field_categories:
|
||||
- fields:
|
||||
- defaultMode
|
||||
- sources
|
||||
|
||||
- definition: io.k8s.api.core.v1.PersistentVolumeClaimSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- accessModes
|
||||
- selector
|
||||
- resources
|
||||
- volumeName
|
||||
- storageClassName
|
||||
- volumeMode
|
||||
- name: Alpha level
|
||||
fields:
|
||||
- dataSource
|
||||
- dataSourceRef
|
||||
|
||||
- definition: io.k8s.api.core.v1.PersistentVolumeSpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- accessModes
|
||||
- capacity
|
||||
- claimRef
|
||||
- mountOptions
|
||||
- nodeAffinity
|
||||
- persistentVolumeReclaimPolicy
|
||||
- storageClassName
|
||||
- volumeMode
|
||||
- name: Local
|
||||
fields:
|
||||
- hostPath
|
||||
- local
|
||||
- name: Persistent volumes
|
||||
fields:
|
||||
- awsElasticBlockStore
|
||||
- azureDisk
|
||||
- azureFile
|
||||
- cephfs
|
||||
- cinder
|
||||
- csi
|
||||
- fc
|
||||
- flexVolume
|
||||
- flocker
|
||||
- gcePersistentDisk
|
||||
- glusterfs
|
||||
- iscsi
|
||||
- nfs
|
||||
- photonPersistentDisk
|
||||
- portworxVolume
|
||||
- quobyte
|
||||
- rbd
|
||||
- scaleIO
|
||||
- storageos
|
||||
- vsphereVolume
|
||||
|
||||
- definition: io.k8s.api.rbac.v1.PolicyRule
|
||||
field_categories:
|
||||
- fields:
|
||||
- apiGroups
|
||||
- resources
|
||||
- verbs
|
||||
- resourceNames
|
||||
- nonResourceURLs
|
||||
|
||||
- definition: io.k8s.api.networking.v1.NetworkPolicySpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- podSelector
|
||||
- policyTypes
|
||||
- ingress
|
||||
- egress
|
||||
|
||||
- definition: io.k8s.api.networking.v1.NetworkPolicyEgressRule
|
||||
field_categories:
|
||||
- fields:
|
||||
- to
|
||||
- ports
|
||||
|
||||
- definition: io.k8s.api.networking.v1.NetworkPolicyPort
|
||||
field_categories:
|
||||
- fields:
|
||||
- port
|
||||
- endPort
|
||||
- protocol
|
||||
|
||||
- definition: io.k8s.api.policy.v1beta1.PodSecurityPolicySpec
|
||||
field_categories:
|
||||
- fields:
|
||||
- runAsUser
|
||||
- runAsGroup
|
||||
- fsGroup
|
||||
- supplementalGroups
|
||||
- seLinux
|
||||
- readOnlyRootFilesystem
|
||||
- privileged
|
||||
- allowPrivilegeEscalation
|
||||
- defaultAllowPrivilegeEscalation
|
||||
- allowedCSIDrivers
|
||||
- allowedCapabilities
|
||||
- requiredDropCapabilities
|
||||
- defaultAddCapabilities
|
||||
- allowedFlexVolumes
|
||||
- allowedHostPaths
|
||||
- allowedProcMountTypes
|
||||
- allowedUnsafeSysctls
|
||||
- forbiddenSysctls
|
||||
- hostIPC
|
||||
- hostNetwork
|
||||
- hostPID
|
||||
- hostPorts
|
||||
- runtimeClass
|
||||
- volumes
|
||||
|
||||
- definition: io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
|
||||
field_categories:
|
||||
- fields:
|
||||
- name
|
||||
- generateName
|
||||
- namespace
|
||||
- labels
|
||||
- annotations
|
||||
- name: System
|
||||
fields:
|
||||
- finalizers
|
||||
- managedFields
|
||||
- ownerReferences
|
||||
- name: Read-only
|
||||
fields:
|
||||
- creationTimestamp
|
||||
- deletionGracePeriodSeconds
|
||||
- deletionTimestamp
|
||||
- generation
|
||||
- resourceVersion
|
||||
- selfLink
|
||||
- uid
|
||||
- name: Ignored
|
||||
fields:
|
||||
- clusterName
|
|
@ -0,0 +1,267 @@
|
|||
# Copyright 2016 The Kubernetes Authors.
|
||||
# Copyright 2020 Philippe Martin
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
parts:
|
||||
- name: Workload Resources
|
||||
chapters:
|
||||
- name: Pod
|
||||
group: ""
|
||||
version: v1
|
||||
otherDefinitions:
|
||||
- PodSpec
|
||||
- Container
|
||||
- EphemeralContainer
|
||||
- Handler
|
||||
- NodeAffinity
|
||||
- PodAffinity
|
||||
- PodAntiAffinity
|
||||
- Probe
|
||||
- PodStatus
|
||||
- PodList
|
||||
- name: PodTemplate
|
||||
group: ""
|
||||
version: v1
|
||||
- name: ReplicationController
|
||||
group: ""
|
||||
version: v1
|
||||
- name: ReplicaSet
|
||||
group: apps
|
||||
version: v1
|
||||
- name: Deployment
|
||||
group: apps
|
||||
version: v1
|
||||
- name: StatefulSet
|
||||
group: apps
|
||||
version: v1
|
||||
- name: ControllerRevision
|
||||
group: apps
|
||||
version: v1
|
||||
- name: DaemonSet
|
||||
group: apps
|
||||
version: v1
|
||||
- name: Job
|
||||
group: batch
|
||||
version: v1
|
||||
- name: CronJob
|
||||
group: batch
|
||||
version: v1
|
||||
- name: HorizontalPodAutoscaler
|
||||
group: autoscaling
|
||||
version: v1
|
||||
- name: HorizontalPodAutoscaler
|
||||
group: autoscaling
|
||||
version: v2beta2
|
||||
- name: PriorityClass
|
||||
group: scheduling.k8s.io
|
||||
version: v1
|
||||
- name: Service Resources
|
||||
chapters:
|
||||
- name: Service
|
||||
group: ""
|
||||
version: v1
|
||||
- name: Endpoints
|
||||
group: ""
|
||||
version: v1
|
||||
- name: EndpointSlice
|
||||
group: discovery.k8s.io
|
||||
version: v1
|
||||
- name: Ingress
|
||||
group: networking.k8s.io
|
||||
version: v1
|
||||
otherDefinitions:
|
||||
- IngressSpec
|
||||
- IngressBackend
|
||||
- IngressStatus
|
||||
- IngressList
|
||||
- name: IngressClass
|
||||
group: networking.k8s.io
|
||||
version: v1
|
||||
- name: Config and Storage Resources
|
||||
chapters:
|
||||
- name: ConfigMap
|
||||
group: ""
|
||||
version: v1
|
||||
- name: Secret
|
||||
group: ""
|
||||
version: v1
|
||||
- name: Volume
|
||||
key: io.k8s.api.core.v1.Volume
|
||||
otherDefinitions:
|
||||
- DownwardAPIVolumeFile
|
||||
- KeyToPath
|
||||
- name: PersistentVolumeClaim
|
||||
group: ""
|
||||
version: v1
|
||||
- name: PersistentVolume
|
||||
group: ""
|
||||
version: v1
|
||||
- name: StorageClass
|
||||
group: storage.k8s.io
|
||||
version: v1
|
||||
- name: VolumeAttachment
|
||||
group: storage.k8s.io
|
||||
version: v1
|
||||
- name: CSIDriver
|
||||
group: storage.k8s.io
|
||||
version: v1
|
||||
- name: CSINode
|
||||
group: storage.k8s.io
|
||||
version: v1
|
||||
- name: CSIStorageCapacity
|
||||
group: storage.k8s.io
|
||||
version: v1beta1
|
||||
- name: Authentication Resources
|
||||
chapters:
|
||||
- name: ServiceAccount
|
||||
group: ""
|
||||
version: v1
|
||||
- name: TokenRequest
|
||||
group: authentication.k8s.io
|
||||
version: v1
|
||||
- name: TokenReview
|
||||
group: authentication.k8s.io
|
||||
version: v1
|
||||
- name: CertificateSigningRequest
|
||||
group: certificates.k8s.io
|
||||
version: v1
|
||||
- name: Authorization Resources
|
||||
chapters:
|
||||
- name: LocalSubjectAccessReview
|
||||
group: authorization.k8s.io
|
||||
version: v1
|
||||
- name: SelfSubjectAccessReview
|
||||
group: authorization.k8s.io
|
||||
version: v1
|
||||
- name: SelfSubjectRulesReview
|
||||
group: authorization.k8s.io
|
||||
version: v1
|
||||
- name: SubjectAccessReview
|
||||
group: authorization.k8s.io
|
||||
version: v1
|
||||
- name: ClusterRole
|
||||
group: rbac.authorization.k8s.io
|
||||
version: v1
|
||||
- name: ClusterRoleBinding
|
||||
group: rbac.authorization.k8s.io
|
||||
version: v1
|
||||
- name: Role
|
||||
group: rbac.authorization.k8s.io
|
||||
version: v1
|
||||
- name: RoleBinding
|
||||
group: rbac.authorization.k8s.io
|
||||
version: v1
|
||||
- name: Policy Resources
|
||||
chapters:
|
||||
- name: LimitRange
|
||||
group: ""
|
||||
version: v1
|
||||
- name: ResourceQuota
|
||||
group: ""
|
||||
version: v1
|
||||
- name: NetworkPolicy
|
||||
group: networking.k8s.io
|
||||
version: v1
|
||||
- name: PodDisruptionBudget
|
||||
group: policy
|
||||
version: v1
|
||||
- name: PodSecurityPolicy
|
||||
group: policy
|
||||
version: v1beta1
|
||||
- name: Extend Resources
|
||||
chapters:
|
||||
- name: CustomResourceDefinition
|
||||
group: apiextensions.k8s.io
|
||||
version: v1
|
||||
otherDefinitions:
|
||||
- CustomResourceDefinitionSpec
|
||||
- JSONSchemaProps
|
||||
- CustomResourceDefinitionStatus
|
||||
- CustomResourceDefinitionList
|
||||
- name: MutatingWebhookConfiguration
|
||||
group: admissionregistration.k8s.io
|
||||
version: v1
|
||||
- name: ValidatingWebhookConfiguration
|
||||
group: admissionregistration.k8s.io
|
||||
version: v1
|
||||
- name: Cluster Resources
|
||||
chapters:
|
||||
- name: Node
|
||||
group: ""
|
||||
version: v1
|
||||
- name: Namespace
|
||||
group: ""
|
||||
version: v1
|
||||
- name: Event
|
||||
group: events.k8s.io
|
||||
version: v1
|
||||
- name: APIService
|
||||
group: apiregistration.k8s.io
|
||||
version: v1
|
||||
- name: Lease
|
||||
group: coordination.k8s.io
|
||||
version: v1
|
||||
- name: RuntimeClass
|
||||
group: node.k8s.io
|
||||
version: v1
|
||||
- name: FlowSchema
|
||||
group: flowcontrol.apiserver.k8s.io
|
||||
version: v1beta1
|
||||
- name: PriorityLevelConfiguration
|
||||
group: flowcontrol.apiserver.k8s.io
|
||||
version: v1beta1
|
||||
- name: Binding
|
||||
group: ""
|
||||
version: v1
|
||||
- name: ComponentStatus
|
||||
group: ""
|
||||
version: v1
|
||||
- name: Common Definitions
|
||||
chapters:
|
||||
- name: DeleteOptions
|
||||
key: io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions
|
||||
- name: LabelSelector
|
||||
key: io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector
|
||||
- name: ListMeta
|
||||
key: io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta
|
||||
- name: LocalObjectReference
|
||||
key: io.k8s.api.core.v1.LocalObjectReference
|
||||
- name: NodeSelectorRequirement
|
||||
key: io.k8s.api.core.v1.NodeSelectorRequirement
|
||||
- name: ObjectFieldSelector
|
||||
key: io.k8s.api.core.v1.ObjectFieldSelector
|
||||
- name: ObjectMeta
|
||||
key: io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
|
||||
- name: ObjectReference
|
||||
key: io.k8s.api.core.v1.ObjectReference
|
||||
- name: Patch
|
||||
key: io.k8s.apimachinery.pkg.apis.meta.v1.Patch
|
||||
- name: Quantity
|
||||
key: "io.k8s.apimachinery.pkg.api.resource.Quantity"
|
||||
- name: ResourceFieldSelector
|
||||
key: io.k8s.api.core.v1.ResourceFieldSelector
|
||||
- name: Status
|
||||
key: io.k8s.apimachinery.pkg.apis.meta.v1.Status
|
||||
- name: TypedLocalObjectReference
|
||||
key: io.k8s.api.core.v1.TypedLocalObjectReference
|
||||
skippedResources:
|
||||
- APIGroup
|
||||
- APIGroupList
|
||||
- APIResourceList
|
||||
- APIVersions
|
||||
- Eviction
|
||||
- Scale
|
||||
- Status
|
||||
- StorageVersion
|
||||
- StorageVersionList
|
|
@ -0,0 +1,83 @@
|
|||
---
|
||||
api_metadata:
|
||||
apiVersion: "{{.ApiVersion}}"
|
||||
import: "{{.Import}}"
|
||||
kind: "{{.Kind}}"
|
||||
content_type: "api_reference"
|
||||
description: "{{.Metadata.Description}}"
|
||||
title: "{{.Metadata.Title}}"
|
||||
weight: {{.Metadata.Weight}}
|
||||
auto_generated: true
|
||||
---
|
||||
|
||||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
{{if .ApiVersion}}`apiVersion: {{.ApiVersion}}`{{end}}
|
||||
|
||||
{{if .Import}}`import "{{.Import}}"`{{end}}
|
||||
|
||||
{{range .Sections}}
|
||||
{{.Description | replace "<" "\\<" }}
|
||||
|
||||
<hr>
|
||||
{{range .Fields}}
|
||||
{{ "" | indent .Indent | indent .Indent}}- {{.Name}}{{if .Value}}: {{.Value}}{{end}}
|
||||
{{if .Description}}
|
||||
{{.Description | replace "<" "\\<" | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{- end}}
|
||||
{{if .TypeDefinition}}
|
||||
{{ "" | indent .Indent | indent .Indent}} <a name="{{.Type}}"></a>
|
||||
{{.TypeDefinition | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{end}}
|
||||
{{- end}}{{/* range .Fields */}}
|
||||
|
||||
{{range .FieldCategories}}
|
||||
### {{.Name}} {#{{"-" | regexReplaceAll "[^a-zA-Z0-9]+" .Name }}}{{/* explicitly set fragment to keep capitalization */}}
|
||||
|
||||
{{range .Fields}}
|
||||
{{ "" | indent .Indent | indent .Indent}}- {{.Name}}{{if .Value}}: {{.Value}}{{end}}
|
||||
{{if .Description}}
|
||||
{{.Description | replace "<" "\\<" | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{- end}}
|
||||
{{if .TypeDefinition}}
|
||||
{{ "" | indent .Indent | indent .Indent}} <a name="{{.Type}}"></a>
|
||||
{{.TypeDefinition | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{end}}
|
||||
{{- end}}{{/* range .Fields */}}
|
||||
|
||||
{{- end}}{{/* range .FieldCategories */}}
|
||||
|
||||
{{range .Operations}}
|
||||
|
||||
### `{{.Verb}}` {{.Title}}
|
||||
|
||||
#### HTTP Request
|
||||
|
||||
{{.RequestMethod}} {{.RequestPath}}
|
||||
|
||||
#### Parameters
|
||||
|
||||
{{range .Parameters}}
|
||||
- {{.Title}}
|
||||
|
||||
{{.Description | indent 2}}
|
||||
|
||||
{{end}}{{/* range .Parameters */}}
|
||||
|
||||
#### Response
|
||||
|
||||
{{range .Responses}}
|
||||
{{.Code}}{{if .Type}} ({{.Type}}){{end}}: {{.Description}}
|
||||
{{end}}{{/* range .Responses */}}
|
||||
|
||||
{{- end}}{{/* range .Operations */}}
|
||||
{{- end}}{{/* range .Sections */}}
|
|
@ -0,0 +1,85 @@
|
|||
---
|
||||
api_metadata:
|
||||
apiVersion: "{{.ApiVersion}}"
|
||||
import: "{{.Import}}"
|
||||
kind: "{{.Kind}}"
|
||||
content_type: "api_reference"
|
||||
description: "{{.Metadata.Description}}"
|
||||
title: "{{.Metadata.Title}}"
|
||||
weight: {{.Metadata.Weight}}
|
||||
auto_generated: true
|
||||
---
|
||||
|
||||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
||||
{{if .ApiVersion}}`apiVersion: {{.ApiVersion}}`{{end}}
|
||||
|
||||
{{if .Import}}`import "{{.Import}}"`{{end}}
|
||||
|
||||
{{range .Sections}}
|
||||
## {{.Name}} {#{{"-" | regexReplaceAll "[^a-zA-Z0-9]+" .Name }}}{{/* explicitly set fragment to keep capitalization */}}
|
||||
|
||||
{{.Description | replace "<" "\\<" }}
|
||||
|
||||
<hr>
|
||||
{{range .Fields}}
|
||||
{{ "" | indent .Indent | indent .Indent}}- {{.Name}}{{if .Value}}: {{.Value}}{{end}}
|
||||
{{if .Description}}
|
||||
{{.Description | replace "<" "\\<" | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{- end}}
|
||||
{{if .TypeDefinition}}
|
||||
{{ "" | indent .Indent | indent .Indent}} <a name="{{.Type}}"></a>
|
||||
{{.TypeDefinition | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{end}}
|
||||
{{- end}}{{/* range .Fields */}}
|
||||
|
||||
{{range .FieldCategories}}
|
||||
### {{.Name}}
|
||||
|
||||
{{range .Fields}}
|
||||
{{ "" | indent .Indent | indent .Indent}}- {{.Name}}{{if .Value}}: {{.Value}}{{end}}
|
||||
{{if .Description}}
|
||||
{{.Description | replace "<" "\\<" | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{- end}}
|
||||
{{if .TypeDefinition}}
|
||||
{{ "" | indent .Indent | indent .Indent}} <a name="{{.Type}}"></a>
|
||||
{{.TypeDefinition | indent 2 | indent .Indent | indent .Indent}}
|
||||
{{end}}
|
||||
{{- end}}{{/* range .Fields */}}
|
||||
|
||||
{{- end}}{{/* range .FieldCategories */}}
|
||||
|
||||
{{range .Operations}}
|
||||
|
||||
### `{{.Verb}}` {{.Title}}
|
||||
|
||||
#### HTTP Request
|
||||
|
||||
{{.RequestMethod}} {{.RequestPath}}
|
||||
|
||||
#### Parameters
|
||||
|
||||
{{range .Parameters}}
|
||||
- {{.Title}}
|
||||
|
||||
{{.Description | indent 2}}
|
||||
|
||||
{{end}}{{/* range .Parameters */}}
|
||||
|
||||
#### Response
|
||||
|
||||
{{range .Responses}}
|
||||
{{.Code}}{{if .Type}} ({{.Type}}){{end}}: {{.Description}}
|
||||
{{end}}{{/* range .Responses */}}
|
||||
|
||||
{{- end}}{{/* range .Operations */}}
|
||||
{{- end}}{{/* range .Sections */}}
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: "{{.Title}}"
|
||||
weight: {{.Weight}}
|
||||
auto_generated: true
|
||||
---
|
||||
|
||||
<!--
|
||||
The file is auto-generated from the Go source code of the component using a generic
|
||||
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
|
||||
to generate the reference documentation, please read
|
||||
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
|
||||
To update the reference content, please follow the
|
||||
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
|
||||
guide. You can file document formatting bugs against the
|
||||
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
|
||||
-->
|
||||
|
|
@@ -1 +1 @@
Subproject commit 78e64febda1b53cafc79979c5978b42162cea276
Subproject commit 55bce686224caba37f93e1e1eb53c0c9fc104ed4
@ -0,0 +1,61 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "{{ replace .Name "-" " " | title }}"
|
||||
date: {{ .Date }}
|
||||
draft: true
|
||||
slug: <seo-friendly-version-of-title-separated-by-dashes>
|
||||
---
|
||||
|
||||
**Author:** <your name> (<your organization name>), <another author's name> (<their organization>)
|
||||
|
||||
<!--
|
||||
Instructions:
|
||||
- Replace these instructions and the following text with your content.
|
||||
- Replace `<angle bracket placeholders>` with actual values. For example, you would update `date: <yyyy>-<mm>-<dd>` to look something like `date: 2021-10-21`.
|
||||
- For convenience, use third-party tools to author and collaborate on your content.
|
||||
- To save time and effort in reviews, check your content's spelling, grammar, and style before contributing.
|
||||
- Feel free to ask for assistance in the Kubernetes Slack channel, [#sig-docs-blog](https://kubernetes.slack.com/archives/CJDHVD54J).
|
||||
-->
|
||||
|
||||
Replace this first line of your content with one to three sentences that summarize the blog post.
|
||||
|
||||
## This is a section heading
|
||||
|
||||
To help the reader, organize your content into sections that contain about three to six paragraphs.
|
||||
|
||||
If you're documenting commands, separate the commands from the outputs, like this:
|
||||
|
||||
1. Verify that the Secret exists by running the following command:
|
||||
|
||||
```shell
|
||||
kubectl get secrets
|
||||
```
|
||||
|
||||
The response should be like this:
|
||||
|
||||
```shell
|
||||
NAME TYPE DATA AGE
|
||||
mysql-pass-c57bb4t7mf Opaque 1 9s
|
||||
```
|
||||
|
||||
You're free to create any sections you like. Below are a few common patterns we see at the end of blog posts.
|
||||
|
||||
## What’s next?
|
||||
|
||||
This optional section describes the future of the thing you've just described in the post.
|
||||
|
||||
## How can I learn more?
|
||||
|
||||
This optional section provides links to more information. Please avoid promoting or over-representing your organization.
|
||||
|
||||
## How do I get involved?
|
||||
|
||||
An optional section that links to resources for readers to get involved, and acknowledgments of individual contributors, such as:
|
||||
|
||||
* [The name of a channel on Slack, #a-channel](https://<a-workspace>.slack.com/messages/<a-channel>)
|
||||
|
||||
* [A link to a "contribute" page with more information](<https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact>).
|
||||
|
||||
* Acknowledgements and thanks to the contributors, for example: <person's name> ([<github id>](https://github.com/<github id>)), who did X, Y, and Z.
|
||||
|
||||
* If you're interested in getting involved with the design and development of <project>, join the [<name of the SIG>](https://github.com/project/community/tree/master/<sig-group>). We’re rapidly growing and always welcome new contributors.
|
|
@ -0,0 +1,25 @@
|
|||
# See https://cloud.google.com/cloud-build/docs/build-config
|
||||
|
||||
# this must be specified in seconds. If omitted, defaults to 600s (10 mins)
|
||||
timeout: 1200s
|
||||
# this prevents errors if you don't use both _GIT_TAG and _PULL_BASE_REF,
|
||||
# or any new substitutions added in the future.
|
||||
options:
|
||||
substitution_option: ALLOW_LOOSE
|
||||
steps:
|
||||
# It's fine to bump the tag to a recent version, as needed
|
||||
- name: "gcr.io/k8s-testimages/gcb-docker-gcloud:v20190906-745fed4"
|
||||
entrypoint: make
|
||||
env:
|
||||
- DOCKER_CLI_EXPERIMENTAL=enabled
|
||||
- TAG=$_GIT_TAG
|
||||
- BASE_REF=$_PULL_BASE_REF
|
||||
args:
|
||||
- container-image
|
||||
substitutions:
|
||||
# _GIT_TAG will be filled with a git-based tag for the image, of the form vYYYYMMDD-hash, and
|
||||
# can be used as a substitution
|
||||
_GIT_TAG: "12345"
|
||||
# _PULL_BASE_REF will contain the ref that was pushed to in order to trigger this build -
|
||||
# a branch like 'master' or 'release-0.2', or a tag like 'v0.2'.
|
||||
_PULL_BASE_REF: "master"
|
47
config.toml
|
@ -138,12 +138,12 @@ time_format_default = "January 02, 2006 at 3:04 PM PST"
|
|||
description = "Production-Grade Container Orchestration"
|
||||
showedit = true
|
||||
|
||||
latest = "v1.21"
|
||||
latest = "v1.22"
|
||||
|
||||
fullversion = "v1.21.0"
|
||||
version = "v1.21"
|
||||
githubbranch = "master"
|
||||
docsbranch = "master"
|
||||
fullversion = "v1.22.0"
|
||||
version = "v1.22"
|
||||
githubbranch = "main"
|
||||
docsbranch = "main"
|
||||
deprecated = false
|
||||
currentUrl = "https://kubernetes.io/docs/home/"
|
||||
nextUrl = "https://kubernetes-io-vnext-staging.netlify.com/"
|
||||
|
@ -178,45 +178,46 @@ js = [
|
|||
]
|
||||
|
||||
[[params.versions]]
|
||||
fullversion = "v1.21.0"
|
||||
version = "v1.21"
|
||||
githubbranch = "v1.21.0"
|
||||
docsbranch = "master"
|
||||
fullversion = "v1.22.0"
|
||||
version = "v1.22"
|
||||
githubbranch = "v1.22.0"
|
||||
docsbranch = "main"
|
||||
url = "https://kubernetes.io"
|
||||
|
||||
[[params.versions]]
|
||||
fullversion = "v1.20.5"
|
||||
fullversion = "v1.21.4"
|
||||
version = "v1.21"
|
||||
githubbranch = "v1.21.4"
|
||||
docsbranch = "release-1.21"
|
||||
url = "https://v1-21.docs.kubernetes.io"
|
||||
|
||||
[[params.versions]]
|
||||
fullversion = "v1.20.10"
|
||||
version = "v1.20"
|
||||
githubbranch = "v1.20.5"
|
||||
githubbranch = "v1.20.10"
|
||||
docsbranch = "release-1.20"
|
||||
url = "https://v1-20.docs.kubernetes.io"
|
||||
|
||||
[[params.versions]]
|
||||
fullversion = "v1.19.9"
|
||||
fullversion = "v1.19.14"
|
||||
version = "v1.19"
|
||||
githubbranch = "v1.19.9"
|
||||
githubbranch = "v1.19.14"
|
||||
docsbranch = "release-1.19"
|
||||
url = "https://v1-19.docs.kubernetes.io"
|
||||
|
||||
[[params.versions]]
|
||||
fullversion = "v1.18.17"
|
||||
fullversion = "v1.18.20"
|
||||
version = "v1.18"
|
||||
githubbranch = "v1.18.17"
|
||||
githubbranch = "v1.18.20"
|
||||
docsbranch = "release-1.18"
|
||||
url = "https://v1-18.docs.kubernetes.io"
|
||||
|
||||
[[params.versions]]
|
||||
fullversion = "v1.17.17"
|
||||
version = "v1.17"
|
||||
githubbranch = "v1.17.17"
|
||||
docsbranch = "release-1.17"
|
||||
url = "https://v1-17.docs.kubernetes.io"
|
||||
|
||||
|
||||
# User interface configuration
|
||||
[params.ui]
|
||||
# Enable to show the side bar menu in its compact state.
|
||||
sidebar_menu_compact = false
|
||||
# https://github.com/gohugoio/hugo/issues/8918#issuecomment-903314696
|
||||
sidebar_cache_limit = 1
|
||||
# Set to true to disable breadcrumb navigation.
|
||||
breadcrumb_disable = false
|
||||
# Set to true to hide the sidebar search box (the top nav search box will still be displayed if search is enabled)
|
||||
|
|
|
@ -9,7 +9,7 @@ cid: home
|
|||
{{% blocks/feature image="flower" %}}
|
||||
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) ist ein Open-Source-System zur Automatisierung der Bereitstellung, Skalierung und Verwaltung von containerisierten Anwendungen.
|
||||
|
||||
Es gruppiert Container, aus denen sich eine Anwendung zusammensetzt, in logische Einheiten, um die Verwaltung und Erkennung zu erleichtern. Kubernetes baut auf [15 Jahre Erfahrung in Bewältigung von Produktions-Workloads bei Google] (http://queue.acm.org/detail.cfm?id=2898444), kombiniert mit Best-of-Breed-Ideen und Praktiken aus der Community.
|
||||
Es gruppiert Container, aus denen sich eine Anwendung zusammensetzt, in logische Einheiten, um die Verwaltung und Erkennung zu erleichtern. Kubernetes baut auf [15 Jahre Erfahrung in Bewältigung von Produktions-Workloads bei Google](http://queue.acm.org/detail.cfm?id=2898444), kombiniert mit Best-of-Breed-Ideen und Praktiken aus der Community.
|
||||
{{% /blocks/feature %}}
|
||||
|
||||
{{% blocks/feature image="scalable" %}}
|
||||
|
@ -57,4 +57,4 @@ Kubernetes ist Open Source und bietet Dir die Freiheit, die Infrastruktur vor Or
|
|||
|
||||
{{< blocks/kubernetes-features >}}
|
||||
|
||||
{{< blocks/case-studies >}}
|
||||
{{< blocks/case-studies >}}
|
||||
|
|
|
@ -26,7 +26,7 @@ Die Add-Ons in den einzelnen Kategorien sind alphabetisch sortiert - Die Reihenf
|
|||
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) ermöglicht das nahtlose Verbinden von Kubernetes mit einer Reihe an CNI-Plugins wie z.B. Calico, Canal, Flannel, Romana, oder Weave.
|
||||
* [Contiv](http://contiv.github.io) bietet konfigurierbares Networking (Native L3 auf BGP, Overlay mit vxlan, Klassisches L2, Cisco-SDN/ACI) für verschiedene Anwendungszwecke und auch umfangreiches Policy-Framework. Das Contiv-Projekt ist vollständig [Open Source](http://github.com/contiv). Der [installer](http://github.com/contiv/install) bietet sowohl kubeadm als auch nicht-kubeadm basierte Installationen.
|
||||
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), basierend auf [Tungsten Fabric](https://tungsten.io), ist eine Open Source, multi-Cloud Netzwerkvirtualisierungs- und Policy-Management Plattform. Contrail und Tungsten Fabric sind mit Orchestratoren wie z.B. Kubernetes, OpenShift, OpenStack und Mesos integriert und bieten Isolationsmodi für Virtuelle Maschinen, Container (bzw. Pods) und Bare Metal workloads.
|
||||
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) ist ein Overlay-Network-Provider der mit Kubernetes genutzt werden kann.
|
||||
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) ist ein Overlay-Network-Provider der mit Kubernetes genutzt werden kann.
|
||||
* [Knitter](https://github.com/ZTE/Knitter/) ist eine Network-Lösung die Mehrfach-Network in Kubernetes ermöglicht.
|
||||
* [Multus](https://github.com/Intel-Corp/multus-cni) ist ein Multi-Plugin für Mehrfachnetzwerk-Unterstützung um alle CNI-Plugins (z.B. Calico, Cilium, Contiv, Flannel), zusätzlich zu SRIOV-, DPDK-, OVS-DPDK- und VPP-Basierten Workloads in Kubernetes zu unterstützen.
|
||||
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) bietet eine Integration zwischen VMware NSX-T und einem Orchestator wie z.B. Kubernetes. Außerdem bietet es eine Integration zwischen NSX-T und Containerbasierten CaaS/PaaS-Plattformen wie z.B. Pivotal Container Service (PKS) und OpenShift.
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 20
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="de">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 20
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="de">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 10
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="de">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 20
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="de">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 10
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="de">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 20
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="de">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -5,7 +5,7 @@ weight: 10
|
|||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="de">
|
||||
|
||||
<body>
|
||||
|
||||
|
|
|
@ -48,7 +48,7 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise
|
|||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu21" button id="desktopKCButton">Revisit KubeCon EU 2021</a>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Attend KubeCon Europe on May 17-20, 2022</a>
|
||||
</div>
|
||||
<div id="videoPlayer">
|
||||
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
|
||||
|
|
|
@ -125,7 +125,7 @@ You may wish to, but you cannot create a hierarchy of namespaces. Namespaces can
|
|||
|
||||
|
||||
|
||||
Namespaces are easy to create and use but it’s also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible and this will help. The other way to avoid using the wrong namespace is to set a [kubectl context](/docs/user-guide/kubectl/kubectl_config_set-context/).
|
||||
Namespaces are easy to create and use but it’s also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible and this will help. The other way to avoid using the wrong namespace is to set a [kubectl context](/docs/reference/generated/kubectl/kubectl-commands#-em-set-context-em-).
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -5,6 +5,11 @@ slug: visualize-kubelet-performance-with-node-dashboard
|
|||
url: /blog/2016/11/Visualize-Kubelet-Performance-With-Node-Dashboard
|
||||
---
|
||||
|
||||
_Since this article was published, the Node Performance Dashboard was retired and is no longer available._
|
||||
|
||||
_This retirement happened in early 2019, as part of the_ `kubernetes/contrib`
|
||||
_[repository deprecation](https://github.com/kubernetes-retired/contrib/issues/3007)_.
|
||||
|
||||
In Kubernetes 1.4, we introduced a new node performance analysis tool, called the _node performance dashboard_, to visualize and explore the behavior of the Kubelet in much richer details. This new feature will make it easy to understand and improve code performance for Kubelet developers, and lets cluster maintainer set configuration according to provided Service Level Objectives (SLOs).
|
||||
|
||||
**Background**
|
||||
|
|
|
@ -37,7 +37,7 @@ If you run your storage application on high-end hardware or extra-large instance
|
|||
[ZooKeeper](https://zookeeper.apache.org/doc/current/) is an interesting use case for StatefulSet for two reasons. First, it demonstrates that StatefulSet can be used to run a distributed, strongly consistent storage application on Kubernetes. Second, it's a prerequisite for running workloads like [Apache Hadoop](http://hadoop.apache.org/) and [Apache Kafka](https://kafka.apache.org/) on Kubernetes. An [in-depth tutorial](/docs/tutorials/stateful-application/zookeeper/) on deploying a ZooKeeper ensemble on Kubernetes is available in the Kubernetes documentation, and we’ll outline a few of the key features below.
|
||||
|
||||
**Creating a ZooKeeper Ensemble**
|
||||
Creating an ensemble is as simple as using [kubectl create](/docs/user-guide/kubectl/kubectl_create/) to generate the objects stored in the manifest.
|
||||
Creating an ensemble is as simple as using [kubectl create](/docs/reference/generated/kubectl/kubectl-commands#create) to generate the objects stored in the manifest.
|
||||
|
||||
|
||||
```
|
||||
|
@ -297,7 +297,7 @@ zk-0 0/1 Terminating 0 15m
|
|||
|
||||
|
||||
|
||||
You can use [kubectl apply](/docs/user-guide/kubectl/kubectl_apply/) to recreate the zk StatefulSet and redeploy the ensemble.
|
||||
You can use [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) to recreate the zk StatefulSet and redeploy the ensemble.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -140,7 +140,7 @@ The local persistent volume beta feature is not complete by far. Some notable en
|
|||
|
||||
## Complementary features
|
||||
|
||||
[Pod priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it.
|
||||
[Pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) is another Kubernetes feature that is complementary to local persistent volumes. When your application uses local storage, it must be scheduled to the specific node where the local volume resides. You can give your local storage workload high priority so if that node ran out of room to run your workload, Kubernetes can preempt lower priority workloads to make room for it.
|
||||
|
||||
[Pod disruption budget](/docs/concepts/workloads/pods/disruptions/) is also very important for those workloads that must maintain quorum. Setting a disruption budget for your workload ensures that it does not drop below quorum due to voluntary disruption events, such as node drains during upgrade.
|
||||
|
||||
|
|
|
@ -94,7 +94,7 @@ JOSH BERKUS: That goes into release notes. I mean, keep in mind that one of the
|
|||
|
||||
However, stuff happens, and we do occasionally have to do those. And so far, our main way to identify that to people actually is in the release notes. If you look at [the current release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#no-really-you-must-do-this-before-you-upgrade), there are actually two things in there right now that are sort of breaking changes.
|
||||
|
||||
One of them is the bit with [priority and preemption](/docs/concepts/configuration/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was...
|
||||
One of them is the bit with [priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) in that preemption being on by default now allows badly behaved users of the system to cause trouble in new ways. I'd actually have to look at the release notes to see what the second one was...
|
||||
|
||||
TIM PEPPER: The [JSON capitalization case sensitivity](https://github.com/kubernetes/kubernetes/issues/64612).
|
||||
|
||||
|
|
|
@ -104,7 +104,7 @@ Master and Worker nodes should be protected from overload and resource exhaustio
|
|||
|
||||
Resource consumption by the control plane will correlate with the number of pods and the pod churn rate. Very large and very small clusters will benefit from non-default [settings](/docs/reference/command-line-tools-reference/kube-apiserver/) of kube-apiserver request throttling and memory. Setting these too high can lead to exceeded request limits and out-of-memory errors.
|
||||
|
||||
On worker nodes, [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](/docs/tasks/administer-cluster/out-of-resource/) conditions can be configured.
|
||||
On worker nodes, [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](/docs/concepts/scheduling-eviction/node-pressure-eviction/) conditions can be configured.
|
||||
|
||||
## Security
|
||||
|
||||
|
@ -166,7 +166,7 @@ Some critical state is held outside etcd. Certificates, container images, and ot
|
|||
* Cloud provider specific account and configuration data
|
||||
|
||||
## Considerations for your production workloads
|
||||
Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](/docs/concepts/configuration/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads.
|
||||
Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](/docs/concepts/scheduling-eviction/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads.
|
||||
|
||||
For stateful services, external attached volume mounts are the standard Kubernetes recommendation for a non-clustered service (e.g., a typical SQL database). At this time Kubernetes managed snapshots of these external volumes is in the category of a [roadmap feature request](https://docs.google.com/presentation/d/1dgxfnroRAu0aF67s-_bmeWpkM1h2LCxe6lB1l1oS0EQ/edit#slide=id.g3ca07c98c2_0_47), likely to align with the Container Storage Interface (CSI) integration. Thus performing backups of such a service would involve application specific, in-pod activity that is beyond the scope of this document. While awaiting better Kubernetes support for a snapshot and backup workflow, running your database service in a VM rather than a container, and exposing it to your Kubernetes workload may be worth considering.
|
||||
|
||||
|
|
|
@ -8,7 +8,7 @@ date: 2019-04-16
|
|||
|
||||
Kubernetes is well-known for running scalable workloads. It scales your workloads based on their resource usage. When a workload is scaled up, more instances of the application get created. When the application is critical for your product, you want to make sure that these new instances are scheduled even when your cluster is under resource pressure. One obvious solution to this problem is to over-provision your cluster resources to have some amount of slack resources available for scale-up situations. This approach often works, but costs more as you would have to pay for the resources that are idle most of the time.
|
||||
|
||||
[Pod priority and preemption](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) is a scheduler feature made generally available in Kubernetes 1.14 that allows you to achieve high levels of scheduling confidence for your critical workloads without overprovisioning your clusters. It also provides a way to improve resource utilization in your clusters without sacrificing the reliability of your essential workloads.
|
||||
[Pod priority and preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) is a scheduler feature made generally available in Kubernetes 1.14 that allows you to achieve high levels of scheduling confidence for your critical workloads without overprovisioning your clusters. It also provides a way to improve resource utilization in your clusters without sacrificing the reliability of your essential workloads.
|
||||
|
||||
## Guaranteed scheduling with controlled cost
|
||||
|
||||
|
|
|
@ -55,7 +55,7 @@ The team has made progress in the last few months that is well worth celebrating
|
|||
|
||||
- The K8s-Infrastructure Working Group released an automated billing report that they start every meeting off by reviewing as a group.
|
||||
- DNS for k8s.io and kubernetes.io are also fully [community-owned](https://groups.google.com/g/kubernetes-dev/c/LZTYJorGh7c/m/u-ydk-yNEgAJ), with community members able to [file issues](https://github.com/kubernetes/k8s.io/issues/new?assignees=&labels=wg%2Fk8s-infra&template=dns-request.md&title=DNS+REQUEST%3A+%3Cyour-dns-record%3E) to manage records.
|
||||
- The container registry [k8s.gcr.io](https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io) is also fully community-owned and available for all Kubernetes subprojects to use.
|
||||
- The container registry [k8s.gcr.io](https://github.com/kubernetes/k8s.io/tree/main/k8s.gcr.io) is also fully community-owned and available for all Kubernetes subprojects to use.
|
||||
- The Kubernetes [publishing-bot](https://github.com/kubernetes/publishing-bot) responsible for keeping k8s.io/kubernetes/staging repositories published to their own top-level repos (For example: [kubernetes/api](https://github.com/kubernetes/api)) runs on a community-owned cluster.
|
||||
- The gcsweb.k8s.io service used to provide anonymous access to GCS buckets for kubernetes artifacts runs on a community-owned cluster.
|
||||
- There is also an automated process of promoting all our container images. This includes a fully documented infrastructure, managed by the Kubernetes community, with automated processes for provisioning permissions.
|
||||
|
|
|
@ -186,7 +186,7 @@ metadata:
|
|||
|
||||
### Role Oriented Design
|
||||
|
||||
When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API not only a more expressive API for advanced routing, but is also a role-oriented API, designed for multi-tenant infrastructure. Its extensibility ensures that it will evolve for future use-cases while preserving portability. Ultimately these characteristics will allow Gateway API to adapt to different organizational models and implementations well into the future.
|
||||
When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API is not only a more expressive API for advanced routing, but is also a role-oriented API, designed for multi-tenant infrastructure. Its extensibility ensures that it will evolve for future use-cases while preserving portability. Ultimately these characteristics will allow the Gateway API to adapt to different organizational models and implementations well into the future.
|
||||
|
||||
### Try it out and get involved
|
||||
|
||||
|
@ -194,4 +194,4 @@ There are many resources to check out to learn more.
|
|||
|
||||
* Check out the [user guides](https://gateway-api.sigs.k8s.io/guides/getting-started/) to see what use-cases can be addressed.
|
||||
* Try out one of the [existing Gateway controllers](https://gateway-api.sigs.k8s.io/references/implementations/).
|
||||
* Or [get involved](https://gateway-api.sigs.k8s.io/contributing/community/) and help design and influence the future of Kubernetes service networking!
|
||||
* Or [get involved](https://gateway-api.sigs.k8s.io/contributing/community/) and help design and influence the future of Kubernetes service networking!
|
||||
|
|
|
@ -0,0 +1,467 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Writing a Controller for Pod Labels"
|
||||
date: 2021-06-21
|
||||
slug: writing-a-controller-for-pod-labels
|
||||
---
|
||||
|
||||
**Authors**: Arthur Busser (Padok)
|
||||
|
||||
[Operators][what-is-an-operator] are proving to be an excellent solution to
|
||||
running stateful distributed applications in Kubernetes. Open source tools like
|
||||
the [Operator SDK][operator-sdk] provide ways to build reliable and maintainable
|
||||
operators, making it easier to extend Kubernetes and implement custom
|
||||
scheduling.
|
||||
|
||||
Kubernetes operators run complex software inside your cluster. The open source
|
||||
community has already built [many operators][operatorhub] for distributed
|
||||
applications like Prometheus, Elasticsearch, or Argo CD. Even outside of
|
||||
open source, operators can help to bring new functionality to your Kubernetes
|
||||
cluster.
|
||||
|
||||
An operator is a set of [custom resources][custom-resource-definitions] and a
|
||||
set of [controllers][controllers]. A controller watches for changes to specific
|
||||
resources in the Kubernetes API and reacts by creating, updating, or deleting
|
||||
resources.
|
||||
|
||||
The Operator SDK is best suited for building fully-featured operators.
|
||||
Nonetheless, you can use it to write a single controller. This post will walk
|
||||
you through writing a Kubernetes controller in Go that will add a `pod-name`
|
||||
label to pods that have a specific annotation.
|
||||
|
||||
## Why do we need a controller for this?
|
||||
|
||||
I recently worked on a project where we needed to create a Service that routed
|
||||
traffic to a specific Pod in a ReplicaSet. The problem is that a Service can
|
||||
only select pods by label, and all pods in a ReplicaSet have the same labels.
|
||||
There are two ways to solve this problem:
|
||||
|
||||
1. Create a Service without a selector and manage the Endpoints or
|
||||
EndpointSlices for that Service directly. We would need to write a custom
|
||||
controller to insert our Pod's IP address into those resources.
|
||||
2. Add a label to the Pod with a unique value. We could then use this label in
|
||||
our Service's selector. Again, we would need to write a custom controller to
|
||||
add this label.
|
||||
|
||||
A controller is a control loop that tracks one or more Kubernetes resource
|
||||
types. The controller from option n°2 above only needs to track pods, which
|
||||
makes it simpler to implement. This is the option we are going to walk through
|
||||
by writing a Kubernetes controller that adds a `pod-name` label to our pods.
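To make option n°2 concrete, here is a minimal sketch (not part of the original walkthrough) of the kind of Service this label enables: it selects a single Pod by the `padok.fr/pod-name` label that our controller will add. The Service name, namespace, Pod name, and ports are illustrative values.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A Service that routes traffic to exactly one Pod in a ReplicaSet,
	// by selecting on the dynamic pod-name label our controller will add.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "my-app-primary", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{
				"padok.fr/pod-name": "my-app-7d4b9c6f8d-abcde", // one specific Pod
			},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	fmt.Println(svc.Spec.Selector)
}
```

You could equally express this Service as plain YAML; the point is only that a unique, per-Pod label value makes such a selector possible.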
|
||||
|
||||
StatefulSets [do this natively][statefulset-pod-name-label] by adding a
|
||||
`pod-name` label to each Pod in the set. But what if we don't want to or can't
|
||||
use StatefulSets?
|
||||
|
||||
We rarely create pods directly; most often, we use a Deployment, ReplicaSet, or
|
||||
another high-level resource. We can specify labels to add to each Pod in the
|
||||
PodSpec, but not with dynamic values, so there is no way to replicate a StatefulSet's
|
||||
`pod-name` label.
|
||||
|
||||
We tried using a [mutating admission webhook][mutating-admission-webhook]. When
|
||||
anyone creates a Pod, the webhook patches the Pod with a label containing the
|
||||
Pod's name. Disappointingly, this does not work: not all pods have a name before
|
||||
being created. For instance, when the ReplicaSet controller creates a Pod, it
|
||||
sends a name prefix (the `generateName` field) to the Kubernetes API server and not a `name`. The API
|
||||
server generates a unique name before persisting the new Pod to etcd, but only
|
||||
after calling our admission webhook. So in most cases, we can't know a Pod's
|
||||
name with a mutating webhook.
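As a rough illustration of why the webhook approach falls short (this snippet is not from the original post; the names are made up), here is what a ReplicaSet-created Pod looks like at admission time, assuming only a name prefix has been set:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// What a mutating admission webhook sees for a Pod created by a ReplicaSet:
	// the final name is not set yet, only the prefix is.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Namespace:    "default",
			Name:         "",               // the API server fills this in after admission
			GenerateName: "my-replicaset-", // prefix provided by the ReplicaSet controller
		},
	}
	fmt.Println(pod.Name == "") // true: the webhook cannot compute a pod-name label yet
}
```

In other words, at admission time only `metadata.generateName` is populated, so there is nothing to copy into a label.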
|
||||
|
||||
Once a Pod exists in the Kubernetes API, it is mostly immutable, but we can
|
||||
still add a label. We can even do so from the command line:
|
||||
|
||||
```bash
|
||||
kubectl label pod my-pod my-label-key=my-label-value
|
||||
```
|
||||
|
||||
We need to watch for changes to any pods in the Kubernetes API and add the label
|
||||
we want. Rather than do this manually, we are going to write a controller that
|
||||
does it for us.
|
||||
|
||||
## Bootstrapping a controller with the Operator SDK
|
||||
|
||||
A controller is a reconciliation loop that reads the desired state of a resource
|
||||
from the Kubernetes API and takes action to bring the cluster's actual state
|
||||
closer to the desired state.
|
||||
|
||||
In order to write this controller as quickly as possible, we are going to use
|
||||
the Operator SDK. If you don't have it installed, follow the
|
||||
[official documentation][operator-sdk-installation].
|
||||
|
||||
```terminal
|
||||
$ operator-sdk version
|
||||
operator-sdk version: "v1.4.2", commit: "4b083393be65589358b3e0416573df04f4ae8d9b", kubernetes version: "v1.19.4", go version: "go1.15.8", GOOS: "darwin", GOARCH: "amd64"
|
||||
```
|
||||
|
||||
Let's create a new directory to write our controller in:
|
||||
|
||||
```bash
|
||||
mkdir label-operator && cd label-operator
|
||||
```
|
||||
|
||||
Next, let's initialize a new operator, to which we will add a single controller.
|
||||
To do this, you will need to specify a domain and a repository. The domain
|
||||
serves as a prefix for the group your custom Kubernetes resources will belong
|
||||
to. Because we are not going to be defining custom resources, the domain does
|
||||
not matter. The repository is going to be the name of the Go module we are going
|
||||
to write. By convention, this is the repository where you will be storing your
|
||||
code.
|
||||
|
||||
As an example, here is the command I ran:
|
||||
|
||||
```bash
|
||||
# Feel free to change the domain and repo values.
|
||||
operator-sdk init --domain=padok.fr --repo=github.com/busser/label-operator
|
||||
```
|
||||
|
||||
Next, we need to create a new controller. This controller will handle pods and
|
||||
not a custom resource, so no need to generate the resource code. Let's run this
|
||||
command to scaffold the code we need:
|
||||
|
||||
```bash
|
||||
operator-sdk create api --group=core --version=v1 --kind=Pod --controller=true --resource=false
|
||||
```
|
||||
|
||||
We now have a new file: `controllers/pod_controller.go`. This file contains a
|
||||
`PodReconciler` type with two methods that we need to implement. The first is
|
||||
`Reconcile`, and it looks like this for now:
|
||||
|
||||
```go
|
||||
func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
|
||||
_ = r.Log.WithValues("pod", req.NamespacedName)
|
||||
|
||||
// your logic here
|
||||
|
||||
return ctrl.Result{}, nil
|
||||
}
|
||||
```
|
||||
|
||||
The `Reconcile` method is called whenever a Pod is created, updated, or deleted.
|
||||
The name and namespace of the Pod are in the `ctrl.Request` the method receives
|
||||
as a parameter.
|
||||
|
||||
The second method is `SetupWithManager` and for now it looks like this:
|
||||
|
||||
```go
|
||||
func (r *PodReconciler) SetupWithManager(mgr ctrl.Manager) error {
|
||||
return ctrl.NewControllerManagedBy(mgr).
|
||||
// Uncomment the following line adding a pointer to an instance of the controlled resource as an argument
|
||||
// For().
|
||||
Complete(r)
|
||||
}
|
||||
```
|
||||
|
||||
The `SetupWithManager` method is called when the operator starts. It serves to
|
||||
tell the operator framework what types our `PodReconciler` needs to watch. To
|
||||
use the same `Pod` type used by Kubernetes internally, we need to import some of
|
||||
its code. All of the Kubernetes source code is open source, so you can import
|
||||
any part you like in your own Go code. You can find a complete list of available
|
||||
packages in the Kubernetes source code or [here on pkg.go.dev][pkg-go-dev]. To
|
||||
use pods, we need the `k8s.io/api/core/v1` package.
|
||||
|
||||
```go
|
||||
package controllers
|
||||
|
||||
import (
|
||||
// other imports...
|
||||
corev1 "k8s.io/api/core/v1"
|
||||
// other imports...
|
||||
)
|
||||
```
|
||||
|
||||
Let's use the `Pod` type in `SetupWithManager` to tell the operator framework we
|
||||
want to watch pods:
|
||||
|
||||
```go
|
||||
func (r *PodReconciler) SetupWithManager(mgr ctrl.Manager) error {
|
||||
return ctrl.NewControllerManagedBy(mgr).
|
||||
For(&corev1.Pod{}).
|
||||
Complete(r)
|
||||
}
|
||||
```
|
||||
|
||||
Before moving on, we should set the RBAC permissions our controller needs. Above
|
||||
the `Reconcile` method, we have some default permissions:
|
||||
|
||||
```go
|
||||
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete
|
||||
// +kubebuilder:rbac:groups=core,resources=pods/status,verbs=get;update;patch
|
||||
// +kubebuilder:rbac:groups=core,resources=pods/finalizers,verbs=update
|
||||
```
|
||||
|
||||
We don't need all of those. Our controller will never interact with a Pod's
|
||||
status or its finalizers. It only needs to read and update pods. Let's remove the
|
||||
unnecessary permissions and keep only what we need:
|
||||
|
||||
```go
|
||||
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;update;patch
|
||||
```
|
||||
|
||||
We are now ready to write our controller's reconciliation logic.
|
||||
|
||||
## Implementing reconciliation
|
||||
|
||||
Here is what we want our `Reconcile` method to do:
|
||||
|
||||
1. Use the Pod's name and namespace from the `ctrl.Request` to fetch the Pod
|
||||
from the Kubernetes API.
|
||||
2. If the Pod has an `add-pod-name-label` annotation, add a `pod-name` label to
|
||||
the Pod; if the annotation is missing, don't add the label.
|
||||
3. Update the Pod in the Kubernetes API to persist the changes made.
|
||||
|
||||
Let's define some constants for the annotation and label:
|
||||
|
||||
```go
|
||||
const (
|
||||
addPodNameLabelAnnotation = "padok.fr/add-pod-name-label"
|
||||
podNameLabel = "padok.fr/pod-name"
|
||||
)
|
||||
```
|
||||
|
||||
The first step in our reconciliation function is to fetch the Pod we are working
|
||||
on from the Kubernetes API:
|
||||
|
||||
```go
|
||||
// Reconcile handles a reconciliation request for a Pod.
|
||||
// If the Pod has the addPodNameLabelAnnotation annotation, then Reconcile
|
||||
// will make sure the podNameLabel label is present with the correct value.
|
||||
// If the annotation is absent, then Reconcile will make sure the label is too.
|
||||
func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
|
||||
log := r.Log.WithValues("pod", req.NamespacedName)
|
||||
|
||||
/*
|
||||
Step 0: Fetch the Pod from the Kubernetes API.
|
||||
*/
|
||||
|
||||
var pod corev1.Pod
|
||||
if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
|
||||
log.Error(err, "unable to fetch Pod")
|
||||
return ctrl.Result{}, err
|
||||
}
|
||||
|
||||
return ctrl.Result{}, nil
|
||||
}
|
||||
```
|
||||
|
||||
Our `Reconcile` method will be called when a Pod is created, updated, or
|
||||
deleted. In the deletion case, our call to `r.Get` will return a specific error.
|
||||
Let's import the package that defines this error:
|
||||
|
||||
```go
|
||||
package controllers
|
||||
|
||||
import (
|
||||
// other imports...
|
||||
apierrors "k8s.io/apimachinery/pkg/api/errors"
|
||||
// other imports...
|
||||
)
|
||||
```
|
||||
|
||||
We can now handle this specific error and — since our controller does not care
|
||||
about deleted pods — explicitly ignore it:
|
||||
|
||||
```go
|
||||
/*
|
||||
Step 0: Fetch the Pod from the Kubernetes API.
|
||||
*/
|
||||
|
||||
var pod corev1.Pod
|
||||
if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
|
||||
if apierrors.IsNotFound(err) {
|
||||
// we'll ignore not-found errors, since we can get them on deleted requests.
|
||||
return ctrl.Result{}, nil
|
||||
}
|
||||
log.Error(err, "unable to fetch Pod")
|
||||
return ctrl.Result{}, err
|
||||
}
|
||||
```
|
||||
|
||||
Next, let's edit our Pod so that our dynamic label is present if and only if our
|
||||
annotation is present:
|
||||
|
||||
```go
|
||||
/*
|
||||
Step 1: Add or remove the label.
|
||||
*/
|
||||
|
||||
labelShouldBePresent := pod.Annotations[addPodNameLabelAnnotation] == "true"
|
||||
labelIsPresent := pod.Labels[podNameLabel] == pod.Name
|
||||
|
||||
if labelShouldBePresent == labelIsPresent {
|
||||
// The desired state and actual state of the Pod are the same.
|
||||
// No further action is required by the operator at this moment.
|
||||
log.Info("no update required")
|
||||
return ctrl.Result{}, nil
|
||||
}
|
||||
|
||||
if labelShouldBePresent {
|
||||
// If the label should be set but is not, set it.
|
||||
if pod.Labels == nil {
|
||||
pod.Labels = make(map[string]string)
|
||||
}
|
||||
pod.Labels[podNameLabel] = pod.Name
|
||||
log.Info("adding label")
|
||||
} else {
|
||||
// If the label should not be set but is, remove it.
|
||||
delete(pod.Labels, podNameLabel)
|
||||
log.Info("removing label")
|
||||
}
|
||||
```
|
||||
|
||||
Finally, let's push our updated Pod to the Kubernetes API:
|
||||
|
||||
```go
|
||||
/*
|
||||
Step 2: Update the Pod in the Kubernetes API.
|
||||
*/
|
||||
|
||||
if err := r.Update(ctx, &pod); err != nil {
|
||||
log.Error(err, "unable to update Pod")
|
||||
return ctrl.Result{}, err
|
||||
}
|
||||
```
|
||||
|
||||
When writing our updated Pod to the Kubernetes API, there is a risk that the Pod
|
||||
has been updated or deleted since we first read it. When writing a Kubernetes
|
||||
controller, we should keep in mind that we are not the only actors in the
|
||||
cluster. When this happens, the best thing to do is start the reconciliation
|
||||
from scratch, by requeuing the event. Let's do exactly that:
|
||||
|
||||
```go
|
||||
/*
|
||||
Step 2: Update the Pod in the Kubernetes API.
|
||||
*/
|
||||
|
||||
if err := r.Update(ctx, &pod); err != nil {
|
||||
if apierrors.IsConflict(err) {
|
||||
// The Pod has been updated since we read it.
|
||||
// Requeue the Pod to try to reconcile again.
|
||||
return ctrl.Result{Requeue: true}, nil
|
||||
}
|
||||
if apierrors.IsNotFound(err) {
|
||||
// The Pod has been deleted since we read it.
|
||||
// Requeue the Pod to try to reconcile again.
|
||||
return ctrl.Result{Requeue: true}, nil
|
||||
}
|
||||
log.Error(err, "unable to update Pod")
|
||||
return ctrl.Result{}, err
|
||||
}
|
||||
```
|
||||
|
||||
Let's remember to return successfully at the end of the method:
|
||||
|
||||
```go
|
||||
return ctrl.Result{}, nil
|
||||
}
|
||||
```
|
||||
|
||||
And that's it! We are now ready to run the controller on our cluster.
|
||||
|
||||
## Run the controller on your cluster
|
||||
|
||||
To run our controller on your cluster, we need to run the operator. For that,
|
||||
all you will need is `kubectl`. If you don't have a Kubernetes cluster at hand,
|
||||
I recommend you start one locally with [KinD (Kubernetes in Docker)][kind].
|
||||
|
||||
All it takes to run the operator from your machine is this command:
|
||||
|
||||
```bash
|
||||
make run
|
||||
```
|
||||
|
||||
After a few seconds, you should see the operator's logs. Notice that our
|
||||
controller's `Reconcile` method was called for all pods already running in the
|
||||
cluster.
|
||||
|
||||
Let's keep the operator running and, in another terminal, create a new Pod:
|
||||
|
||||
```bash
|
||||
kubectl run --image=nginx my-nginx
|
||||
```
|
||||
|
||||
The operator should quickly print some logs, indicating that it reacted to the
|
||||
Pod's creation and subsequent changes in status:
|
||||
|
||||
```text
|
||||
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
|
||||
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
|
||||
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
|
||||
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
|
||||
```
|
||||
|
||||
Let's check the Pod's labels:
|
||||
|
||||
```terminal
|
||||
$ kubectl get pod my-nginx --show-labels
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
my-nginx 1/1 Running 0 11m run=my-nginx
|
||||
```
|
||||
|
||||
Let's add an annotation to the Pod so that our controller knows to add our
|
||||
dynamic label to it:
|
||||
|
||||
```bash
|
||||
kubectl annotate pod my-nginx padok.fr/add-pod-name-label=true
|
||||
```
|
||||
|
||||
Notice that the controller immediately reacted and produced a new line in its
|
||||
logs:
|
||||
|
||||
```text
|
||||
INFO controllers.Pod adding label {"pod": "default/my-nginx"}
|
||||
```
|
||||
|
||||
```terminal
|
||||
$ kubectl get pod my-nginx --show-labels
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
my-nginx 1/1 Running 0 13m padok.fr/pod-name=my-nginx,run=my-nginx
|
||||
```
|
||||
|
||||
Bravo! You just successfully wrote a Kubernetes controller capable of adding
|
||||
labels with dynamic values to resources in your cluster.
|
||||
|
||||
Controllers and operators, both big and small, can be an important part of your
|
||||
Kubernetes journey. Writing operators is easier now than it has ever been. The
|
||||
possibilities are endless.
|
||||
|
||||
## What next?
|
||||
|
||||
If you want to go further, I recommend starting by deploying your controller or
|
||||
operator inside a cluster. The `Makefile` generated by the Operator SDK will do
|
||||
most of the work.
|
||||
|
||||
When deploying an operator to production, it is always a good idea to implement
|
||||
robust testing. The first step in that direction is to write unit tests.
|
||||
[This documentation][operator-sdk-testing] will guide you in writing tests for
|
||||
your operator. I wrote tests for the operator we just wrote; you can find all of
|
||||
my code in [this GitHub repository][github-repo].
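As an example of what such a unit test can look like, here is a hedged sketch that exercises the `Reconcile` method with controller-runtime's fake client. The exact fake-client constructor and the `PodReconciler` field names depend on your controller-runtime version and scaffolding, so treat this as a starting point rather than a drop-in test:

```go
package controllers

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func TestReconcileAddsPodNameLabel(t *testing.T) {
	// A Pod that opts in to the dynamic label via the annotation.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "my-pod",
			Namespace:   "default",
			Annotations: map[string]string{addPodNameLabelAnnotation: "true"},
		},
	}

	// The fake client defaults to a scheme that already knows about core/v1 Pods.
	c := fake.NewClientBuilder().WithObjects(pod).Build()
	r := &PodReconciler{Client: c, Log: ctrl.Log}

	req := ctrl.Request{NamespacedName: types.NamespacedName{Namespace: "default", Name: "my-pod"}}
	if _, err := r.Reconcile(context.Background(), req); err != nil {
		t.Fatalf("Reconcile returned an error: %v", err)
	}

	var updated corev1.Pod
	if err := c.Get(context.Background(), req.NamespacedName, &updated); err != nil {
		t.Fatalf("failed to fetch Pod: %v", err)
	}
	if got := updated.Labels[podNameLabel]; got != "my-pod" {
		t.Errorf("expected label %q to be %q, got %q", podNameLabel, "my-pod", got)
	}
}
```

If the assertions or struct fields here don't match your scaffolding, adjust them to whatever `operator-sdk` generated for you.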
|
||||
|
||||
## How to learn more?
|
||||
|
||||
The [Operator SDK documentation][operator-sdk-docs] goes into detail on how you
|
||||
can go further and implement more complex operators.
|
||||
|
||||
When modeling a more complex use-case, a single controller acting on built-in
|
||||
Kubernetes types may not be enough. You may need to build a more complex
|
||||
operator with [Custom Resource Definitions (CRDs)][custom-resource-definitions]
|
||||
and multiple controllers. The Operator SDK is a great tool to help you do this.
|
||||
|
||||
If you want to discuss building an operator, join the [#kubernetes-operator][slack-channel]
|
||||
channel in the [Kubernetes Slack workspace][slack-workspace]!
|
||||
|
||||
<!-- Links -->
|
||||
|
||||
[controllers]: https://kubernetes.io/docs/concepts/architecture/controller/
|
||||
[custom-resource-definitions]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
|
||||
[kind]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation
|
||||
[github-repo]: https://github.com/busser/label-operator
|
||||
[mutating-admission-webhook]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook
|
||||
[operator-sdk]: https://sdk.operatorframework.io/
|
||||
[operator-sdk-docs]: https://sdk.operatorframework.io/docs/
|
||||
[operator-sdk-installation]: https://sdk.operatorframework.io/docs/installation/
|
||||
[operator-sdk-testing]: https://sdk.operatorframework.io/docs/building-operators/golang/testing/
|
||||
[operatorhub]: https://operatorhub.io/
|
||||
[pkg-go-dev]: https://pkg.go.dev/k8s.io/api
|
||||
[slack-channel]: https://kubernetes.slack.com/messages/kubernetes-operators
|
||||
[slack-workspace]: https://slack.k8s.io/
|
||||
[statefulset-pod-name-label]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label
|
||||
[what-is-an-operator]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
|
|
@ -0,0 +1,49 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Announcing Kubernetes Community Group Annual Reports"
|
||||
description: >
|
||||
Introducing brand new Kubernetes Community Group Annual Reports for
|
||||
Special Interest Groups and Working Groups.
|
||||
date: 2021-06-28T10:00:00-08:00
|
||||
slug: Announcing-Kubernetes-Community-Group-Annual-Reports
|
||||
---
|
||||
|
||||
**Authors:** Divya Mohan
|
||||
|
||||
{{< figure src="k8s_annual_report_2020.svg" alt="Community annual report 2020" link="https://www.cncf.io/reports/kubernetes-community-annual-report-2020/" >}}
|
||||
|
||||
Given the growth and scale of the Kubernetes project, the existing reporting mechanisms were proving to be inadequate and challenging.
|
||||
Kubernetes is a large open source project. With over 100,000 commits just to the main k/kubernetes repository, hundreds of other code
|
||||
repositories in the project, and thousands of contributors, there's a lot going on. In fact, there are 37 contributor groups at the time of
|
||||
writing. We also value all forms of contribution and not just code changes.
|
||||
|
||||
With that context in mind, the challenge of reporting on all this activity was a call to action for exploring better options. Therefore
|
||||
inspired by the Apache Software Foundation’s [open guide to PMC Reporting](https://www.apache.org/foundation/board/reporting) and the
|
||||
[CNCF project Annual Reporting](https://www.cncf.io/cncf-annual-report-2020/), the Kubernetes project is proud to announce the
|
||||
**Kubernetes Community Group Annual Reports for Special Interest Groups (SIGs) and Working Groups (WGs)**. In its flagship edition,
|
||||
the [2020 Summary report](https://www.cncf.io/reports/kubernetes-community-annual-report-2020/) focuses on bettering the
|
||||
Kubernetes ecosystem by assessing and promoting the healthiness of the groups within the upstream community.
|
||||
|
||||
Previously, the mechanisms for the Kubernetes project overall to report on groups and their activities were
|
||||
[devstats](https://k8s.devstats.cncf.io/), GitHub data, issues, to measure the healthiness of a given UG/WG/SIG/Committee. As a
|
||||
project spanning several diverse communities, it was essential to have something that captured the human side of things. With 50,000+
|
||||
contributors, it’s easy to assume that the project has enough help and this report surfaces more information than /help-wanted and
|
||||
/good-first-issue for end users. This is how we sustain the project. Paraphrasing one of the Steering Committee members,
|
||||
[Paris Pittman](https://github.com/parispittman), “There was a requirement for tighter feedback loops - ones that involved more than just
|
||||
GitHub data and issues. Given that Kubernetes, as a project, has grown in scale and number of contributors over the years, we have
|
||||
outgrown the existing reporting mechanisms."
|
||||
|
||||
The existing communication channels between the Steering Committee members and the folks leading the groups and committees also needed
|
||||
to be made as open and as bi-directional as possible. Towards achieving this very purpose, every group and committee has been assigned a
|
||||
liaison from among the steering committee members for kick off, help, or guidance needed throughout the process. According to
|
||||
[Davanum Srinivas a.k.a. dims](https://github.com/dims), “... That was one of the main motivations behind this report. People (leading the
|
||||
groups/committees) know that they can reach out to us and there’s a vehicle for them to reach out to us… This is our way of setting up a
|
||||
two-way feedback for them." The progress on these action items would be updated and tracked on the monthly Steering Committee meetings
|
||||
ensuring that this is not a one-off activity. Quoting [Nikhita Raghunath](https://github.com/nikhita), one of the Steering Committee members,
|
||||
“... Once we have a base, the liaisons will work with these groups to ensure that the problems are resolved. When we have a report next year,
|
||||
we’ll have a look at the progress made and how we could still do better. But the idea is definitely to not stop at the report.”
|
||||
|
||||
With this report, we hope to empower our end user communities with information that they can use to identify ways in which they can support
|
||||
the project as well as a sneak peek into the roadmap for upcoming features. As a community, we thrive on feedback and would love to hear your
|
||||
views about the report. You can get in touch with the [Steering Committee](https://github.com/kubernetes/steering#contact) via
|
||||
[Slack](https://kubernetes.slack.com/messages/steering-committee) or via the [mailing list](mailto:steering@kubernetes.io).
|
After Width: | Height: | Size: 1.2 MiB |
|
@ -0,0 +1,276 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes API and Feature Removals In 1.22: Here’s What You Need To Know"
|
||||
date: 2021-07-14
|
||||
slug: upcoming-changes-in-kubernetes-1-22
|
||||
---
|
||||
|
||||
**Authors**: Krishna Kilari (Amazon Web Services), Tim Bannister (The Scale Factory)
|
||||
|
||||
As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
|
||||
When APIs evolve, the old APIs they replace are deprecated, and eventually removed.
|
||||
See [Kubernetes API removals](#kubernetes-api-removals) to read more about Kubernetes'
|
||||
policy on removing APIs.
|
||||
|
||||
We want to make sure you're aware of some upcoming removals. These are
|
||||
beta APIs that you can use in current, supported Kubernetes versions,
|
||||
and they are already deprecated. The reason for all of these removals
|
||||
is that they have been superseded by a newer, stable (“GA”) API.
|
||||
|
||||
Kubernetes 1.22, due for release in August 2021, will remove a number of deprecated
|
||||
APIs.
|
||||
_Update_:
|
||||
[Kubernetes 1.22: Reaching New Peaks](/blog/2021/08/04/kubernetes-1-22-release-announcement/)
|
||||
has details on the v1.22 release.
|
||||
|
||||
## API removals for Kubernetes v1.22 {#api-changes}
|
||||
|
||||
The **v1.22** release will stop serving the API versions we've listed immediately below.
|
||||
These are all beta APIs that were previously deprecated in favor of newer and more stable
|
||||
API versions.
|
||||
<!-- sorted by API group -->
|
||||
|
||||
* Beta versions of the `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` API (the **admissionregistration.k8s.io/v1beta1** API versions)
|
||||
* The beta `CustomResourceDefinition` API (**apiextensions.k8s.io/v1beta1**)
|
||||
* The beta `APIService` API (**apiregistration.k8s.io/v1beta1**)
|
||||
* The beta `TokenReview` API (**authentication.k8s.io/v1beta1**)
|
||||
* Beta API versions of `SubjectAccessReview`, `LocalSubjectAccessReview`, `SelfSubjectAccessReview` (API versions from **authorization.k8s.io/v1beta1**)
|
||||
* The beta `CertificateSigningRequest` API (**certificates.k8s.io/v1beta1**)
|
||||
* The beta `Lease` API (**coordination.k8s.io/v1beta1**)
|
||||
* All beta `Ingress` APIs (the **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions)
|
||||
|
||||
The Kubernetes documentation covers these
|
||||
[API removals for v1.22](/docs/reference/using-api/deprecation-guide/#v1-22) and explains
|
||||
how each of those APIs change between beta and stable.
|
||||
|
||||
## What to do
|
||||
|
||||
We're going to run through each of the resources that are affected by these removals
|
||||
and explain the steps you'll need to take.
|
||||
|
||||
`Ingress`
|
||||
: Migrate to use the **networking.k8s.io/v1**
|
||||
[Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) API,
|
||||
[available since v1.19](/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/#ingress-graduates-to-general-availability).
|
||||
The related API [IngressClass](/docs/reference/kubernetes-api/service-resources/ingress-class-v1/)
|
||||
is designed to complement the [Ingress](/docs/concepts/services-networking/ingress/)
|
||||
concept, allowing you to configure multiple kinds of Ingress within one cluster.
|
||||
If you're currently using the deprecated
|
||||
[`kubernetes.io/ingress.class`](https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-ingress-class-deprecated)
|
||||
annotation, plan to switch to using the `.spec.ingressClassName` field instead.
|
||||
On any cluster running Kubernetes v1.19 or later, you can use the v1 API to
|
||||
retrieve or update existing Ingress objects, even if they were created using an
|
||||
older API version.
|
||||
|
||||
When you convert an Ingress to the v1 API, you should review each rule in that Ingress.
|
||||
Older Ingresses use the legacy `ImplementationSpecific` path type. Instead of `ImplementationSpecific`, switch [path matching](/docs/concepts/services-networking/ingress/#path-types) to either `Prefix` or `Exact`. One of the benefits of moving to these alternative path types is that it becomes easier to migrate between different Ingress classes.
|
||||
|
||||
**ⓘ** As well as upgrading _your_ own use of the Ingress API as a client, make sure that
|
||||
every ingress controller that you use is compatible with the v1 Ingress API.
|
||||
Read [Ingress Prerequisites](/docs/concepts/services-networking/ingress/#prerequisites)
|
||||
for more context about Ingress and ingress controllers.
|
||||
|
||||
`ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration`
|
||||
: Migrate to use the **admissionregistration.k8s.io/v1** API versions of
|
||||
[ValidatingWebhookConfiguration](/docs/reference/kubernetes-api/extend-resources/validating-webhook-configuration-v1/)
|
||||
and [MutatingWebhookConfiguration](/docs/reference/kubernetes-api/extend-resources/mutating-webhook-configuration-v1/),
|
||||
available since v1.16.
|
||||
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version.
|
||||
|
||||
`CustomResourceDefinition`
|
||||
: Migrate to use the [CustomResourceDefinition](/docs/reference/kubernetes-api/extend-resources/custom-resource-definition-v1/)
|
||||
**apiextensions.k8s.io/v1** API, available since v1.16.
|
||||
You can use the v1 API to retrieve or update existing objects, even if they were created
|
||||
using an older API version. If you defined any custom resources in your cluster, those
|
||||
are still served after you upgrade.
|
||||
|
||||
If you're using external CustomResourceDefinitions, you can use
|
||||
[`kubectl convert`](#kubectl-convert) to translate existing manifests to use the newer API.
|
||||
Because there are some functional differences between beta and stable CustomResourceDefinitions,
|
||||
our advice is to test out each one to make sure it works how you expect after the upgrade.
|
||||
|
||||
`APIService`
|
||||
: Migrate to use the **apiregistration.k8s.io/v1** [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/)
|
||||
API, available since v1.10.
|
||||
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version.
|
||||
If you already have API aggregation using an APIService object, this aggregation continues
|
||||
to work after you upgrade.
|
||||
|
||||
`TokenReview`
|
||||
: Migrate to use the **authentication.k8s.io/v1** [TokenReview](/docs/reference/kubernetes-api/authentication-resources/token-review-v1/)
|
||||
API, available since v1.10.
|
||||
|
||||
As well as serving this API via HTTP, the Kubernetes API server uses the same format to
|
||||
[send](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
|
||||
TokenReviews to webhooks. The v1.22 release continues to use the v1beta1 API for TokenReviews
|
||||
sent to webhooks by default. See [Looking ahead](#looking-ahead) for some specific tips about
|
||||
switching to the stable API.
|
||||
|
||||
`SubjectAccessReview`, `SelfSubjectAccessReview` and `LocalSubjectAccessReview`
|
||||
: Migrate to use the **authorization.k8s.io/v1** versions of those
|
||||
[authorization APIs](/docs/reference/kubernetes-api/authorization-resources/), available since v1.6.
|
||||
|
||||
`CertificateSigningRequest`
|
||||
: Migrate to use the **certificates.k8s.io/v1**
|
||||
[CertificateSigningRequest](/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/)
|
||||
API, available since v1.19.
|
||||
You can use the v1 API to retrieve or update existing objects, even if they were created
|
||||
using an older API version. Existing issued certificates retain their validity when you upgrade.
|
||||
|
||||
`Lease`
|
||||
: Migrate to use the **coordination.k8s.io/v1** [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/)
|
||||
API, available since v1.14.
|
||||
You can use the v1 API to retrieve or update existing objects, even if they were created
|
||||
using an older API version.
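To make the Ingress migration above more concrete, here's a minimal sketch of what a
converted rule can look like (the Ingress name, IngressClass, host, and backend Service
below are hypothetical placeholders):

```bash
# Apply a v1 Ingress that uses spec.ingressClassName and an explicit Prefix
# path type instead of the deprecated annotation and ImplementationSpecific matching.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress              # hypothetical name
spec:
  ingressClassName: example-class    # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix             # switched away from ImplementationSpecific
        backend:
          service:
            name: example-service    # hypothetical backend Service
            port:
              number: 80
EOF
```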
|
||||
|
||||
### `kubectl convert`
|
||||
|
||||
There is a plugin for `kubectl` that provides the `kubectl convert` subcommand.
|
||||
It's an official plugin that you can download as part of Kubernetes.
|
||||
See [Download Kubernetes](/releases/download/) for more details.
|
||||
|
||||
You can use `kubectl convert` to update manifest files to use a different API
|
||||
version. For example, if you have a manifest in source control that uses the beta
|
||||
Ingress API, you can check that definition out,
|
||||
and run
|
||||
`kubectl convert -f <manifest> --output-version <group>/<version>`
|
||||
to automatically convert that existing manifest.
|
||||
|
||||
For example, to convert an older Ingress definition to
|
||||
`networking.k8s.io/v1`, you can run:
|
||||
```bash
|
||||
kubectl convert -f ./legacy-ingress.yaml --output-version networking.k8s.io/v1
|
||||
```
|
||||
|
||||
The automatic conversion uses a similar technique to how the Kubernetes control plane
|
||||
updates objects that were originally created using an older API version. Because it's
|
||||
a mechanical conversion, you might need to go in and change the manifest to adjust
|
||||
defaults etc.
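The same approach works for the other resources covered above. For example, a hypothetical
CustomResourceDefinition manifest that still targets **apiextensions.k8s.io/v1beta1** could
be translated like this (the filenames are assumed):

```bash
# Rewrite a beta CustomResourceDefinition manifest to the v1 API, then review the
# output: apiextensions.k8s.io/v1 requires a structural schema and moves some fields.
kubectl convert -f ./crontabs-crd.yaml \
  --output-version apiextensions.k8s.io/v1 > crontabs-crd-v1.yaml
```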
|
||||
|
||||
### Rehearse for the upgrade
|
||||
|
||||
If you manage your cluster's API server component, you can try out these API
|
||||
removals before you upgrade to Kubernetes v1.22.
|
||||
|
||||
To do that, add the following to the kube-apiserver command line arguments:
|
||||
|
||||
`--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1=false,apiregistration.k8s.io/v1beta1=false,authentication.k8s.io/v1beta1=false,authorization.k8s.io/v1beta1=false,certificates.k8s.io/v1beta1=false,coordination.k8s.io/v1beta1=false,extensions/v1beta1/ingresses=false,networking.k8s.io/v1beta1=false`
|
||||
|
||||
(as a side effect, this also turns off v1beta1 of EndpointSlice - watch out for
|
||||
that when you're testing).
|
||||
|
||||
Once you've switched all the kube-apiservers in your cluster to use that setting,
|
||||
those beta APIs are removed. You can test that API clients (`kubectl`, deployment
|
||||
tools, custom controllers etc) still work how you expect, and you can revert if
|
||||
you need to without having to plan a more disruptive downgrade.
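One quick way to confirm that the rehearsal worked is to list the group/versions that the
API server still serves (a sketch using `kubectl api-versions`; the grep should print
nothing once the beta APIs are switched off):

```bash
# None of the group/versions disabled above should be listed any more.
kubectl api-versions | grep -E \
  '^(extensions|(admissionregistration|apiextensions|apiregistration|authentication|authorization|certificates|coordination|networking)\.k8s\.io)/v1beta1$'
```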
|
||||
|
||||
|
||||
|
||||
### Advice for software authors
|
||||
|
||||
Maybe you're reading this because you're a developer of an addon or other
|
||||
component that integrates with Kubernetes?
|
||||
|
||||
If you develop an Ingress controller, webhook authenticator, an API aggregation, or
|
||||
any other tool that relies on these deprecated APIs, you should already have started
|
||||
to switch your software over.
|
||||
|
||||
You can use the tips in
|
||||
[Rehearse for the upgrade](#rehearse-for-the-upgrade) to run your own Kubernetes
|
||||
cluster that only uses the new APIs, and make sure that your code works OK.
|
||||
For your documentation, make sure readers are aware of any steps they should take
|
||||
for the Kubernetes v1.22 upgrade.
|
||||
|
||||
Where possible, give your users a hand to adopt the new APIs early - perhaps in a
|
||||
test environment - so they can give you feedback about any problems.
|
||||
|
||||
There are some [more deprecations](#looking-ahead) coming in Kubernetes v1.25,
|
||||
so plan to have those covered too.
|
||||
|
||||
## Kubernetes API removals
|
||||
|
||||
Here's some background about why Kubernetes removes some APIs, and also a promise
|
||||
about _stable_ APIs in Kubernetes.
|
||||
|
||||
Kubernetes follows a defined
|
||||
[deprecation policy](/docs/reference/using-api/deprecation-policy/) for its
|
||||
features, including the Kubernetes API. That policy allows for replacing stable
|
||||
(“GA”) APIs from Kubernetes. Importantly, this policy means that a stable API can only
|
||||
be deprecated when a newer stable version of that same API is available.
|
||||
|
||||
That stability guarantee matters: if you're using a stable Kubernetes API, there
|
||||
won't ever be a new version released that forces you to switch to an alpha or beta
|
||||
feature.
|
||||
|
||||
Earlier stages are different. Alpha features are under test and potentially
|
||||
incomplete. Almost always, alpha features are disabled by default.
|
||||
Kubernetes releases can and do remove alpha features that haven't worked out.
|
||||
|
||||
After alpha comes beta. These features are typically enabled by default; if the
|
||||
testing works out, the feature can graduate to stable. If not, it might need
|
||||
a redesign.
|
||||
|
||||
Last year, Kubernetes officially
|
||||
[adopted](/blog/2020/08/21/moving-forward-from-beta/#avoiding-permanent-beta)
|
||||
a policy for APIs that have reached their beta phase:
|
||||
|
||||
> For Kubernetes REST APIs, when a new feature's API reaches beta, that starts
|
||||
> a countdown. The beta-quality API now has three releases …
|
||||
> to either:
|
||||
>
|
||||
> * reach GA, and deprecate the beta, or
|
||||
> * have a new beta version (and deprecate the previous beta).
|
||||
|
||||
_At the time of that article, three Kubernetes releases equated to roughly nine
|
||||
calendar months. Later that same month, Kubernetes
|
||||
adopted a new
|
||||
release cadence of three releases per calendar year, so the countdown period is
|
||||
now roughly twelve calendar months._
|
||||
|
||||
Whether an API removal is because of a beta feature graduating to stable, or
|
||||
because that API hasn't proved successful, Kubernetes will continue to remove
|
||||
APIs by following its deprecation policy and making sure that migration options
|
||||
are documented.
|
||||
|
||||
### Looking ahead
|
||||
|
||||
There's a setting that's relevant if you use webhook authentication checks.
|
||||
A future Kubernetes release will switch to sending TokenReview objects
|
||||
to webhooks using the `authentication.k8s.io/v1` API by default. At the moment,
|
||||
the default is to send `authentication.k8s.io/v1beta1` TokenReviews to webhooks,
|
||||
and that's still the default for Kubernetes v1.22.
|
||||
However, you can switch over to the stable API right now if you want:
|
||||
add `--authentication-token-webhook-version=v1` to the command line options for
|
||||
the kube-apiserver, and check that webhooks for authentication still work how you
|
||||
expect.
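For illustration, the relevant part of a kube-apiserver invocation might look like this
sketch (the webhook kubeconfig path is hypothetical, and every other flag you already pass
stays as it is):

```bash
# Hypothetical excerpt from a kube-apiserver static Pod manifest or unit file.
kube-apiserver \
  --authentication-token-webhook-config-file=/etc/kubernetes/authn-webhook.kubeconfig \
  --authentication-token-webhook-version=v1
```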
|
||||
|
||||
Once you're happy it works OK, you can leave the `--authentication-token-webhook-version=v1`
|
||||
option set across your control plane.
|
||||
|
||||
The **v1.25** release that's planned for next year will stop serving beta versions of
|
||||
several Kubernetes APIs that are stable right now and have been for some time.
|
||||
The same v1.25 release will **remove** PodSecurityPolicy, which is deprecated and won't
|
||||
graduate to stable. See
|
||||
[PodSecurityPolicy Deprecation: Past, Present, and Future](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)
|
||||
for more information.
|
||||
|
||||
The official [list of API removals](/docs/reference/using-api/deprecation-guide/#v1-25)
|
||||
planned for Kubernetes 1.25 is:
|
||||
|
||||
* The beta `CronJob` API (**batch/v1beta1**)
|
||||
* The beta `EndpointSlice` API (**discovery.k8s.io/v1beta1**)
|
||||
* The beta `PodDisruptionBudget` API (**policy/v1beta1**)
|
||||
* The beta `PodSecurityPolicy` API (**policy/v1beta1**)
|
||||
|
||||
## Want to know more?
|
||||
|
||||
Deprecations are announced in the Kubernetes release notes. You can see the announcements
|
||||
of pending deprecations in the release notes for
|
||||
[1.19](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecations),
|
||||
[1.20](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation),
|
||||
and [1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#deprecation).
|
||||
|
||||
For information on the process of deprecation and removal, check out the official Kubernetes
|
||||
[deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api)
|
||||
document.
|
|
@ -0,0 +1,65 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Spotlight on SIG Usability"
|
||||
date: 2021-07-15
|
||||
slug: sig-usability-spotlight-2021
|
||||
---
|
||||
|
||||
**Author:** Kunal Kushwaha, Civo
|
||||
|
||||
## Introduction
|
||||
|
||||
Are you interested in learning about what [SIG Usability](https://github.com/kubernetes/community/tree/master/sig-usability) does and how you can get involved? Well, you're in the right place. SIG Usability is all about making Kubernetes more accessible to new folks, and its main activity is conducting user research for the community. In this blog, we have summarized our conversation with [Gaby Moreno](https://twitter.com/morengab), who walks us through the various aspects of being a part of the SIG and shares some insights about how others can get involved.
|
||||
|
||||
Gaby is a co-lead for SIG Usability. She works as a Product Designer at IBM and enjoys working on the user experience of open, hybrid cloud technologies like Kubernetes, OpenShift, Terraform, and Cloud Foundry.
|
||||
|
||||
## A summary of our conversation
|
||||
|
||||
### Q. Could you tell us a little about what SIG Usability does?
|
||||
|
||||
A. SIG Usability, at a high level, started because there was no dedicated user experience team for Kubernetes. The SIG is focused on the end-user ease of use of the Kubernetes project. Its main activity is user research for the community, which includes speaking to Kubernetes users.
|
||||
|
||||
This covers areas like user experience and accessibility. The objectives of the SIG are to ensure that the Kubernetes project is usable by people from a wide range of backgrounds and abilities, for example by incorporating internationalization and ensuring the accessibility of documentation.
|
||||
|
||||
### Q. Why should new and existing contributors consider joining SIG Usability?
|
||||
|
||||
A. There are plenty of areas where new contributors can begin. For example:
|
||||
- User research projects, where people can help understand the usability of the end-user experiences, including error messages, end-to-end tasks, etc.
|
||||
- Accessibility guidelines for Kubernetes community artifacts, examples include: internationalization of documentation, color choices for people with color blindness, ensuring compatibility with screen reader technology, user interface design for core components with user interfaces, and more.
|
||||
|
||||
### Q. What do you do to help new contributors get started?
|
||||
|
||||
A. New contributors can get started by shadowing one of the user interviews, going through user interview transcripts, analyzing them, and designing surveys.
|
||||
|
||||
SIG Usability is also open to new project ideas. If you have an idea, we’ll do what we can to support it. There are regular SIG Meetings where people can ask their questions live. These meetings are also recorded for those who may not be able to attend. As always, you can reach out to us on Slack as well.
|
||||
|
||||
### Q. What does the survey include?
|
||||
|
||||
A. In simple terms, the survey gathers information about how people use Kubernetes, such as trends in learning to deploy a new system, error messages they receive, and workflows.
|
||||
|
||||
One of our goals is to standardize the responses accordingly. The ultimate goal is to analyze survey responses for important user stories whose needs aren't being met.
|
||||
|
||||
### Q. Are there any particular skills you’d like to recruit for? What skills are contributors to SIG Usability likely to learn?
|
||||
|
||||
A. Although contributing to SIG Usability does not have any pre-requisites as such, experience with user research, qualitative research, or prior experience with how to conduct an interview would be great plus points. Quantitative research, like survey design and screening, is also helpful and something that we expect contributors to learn.
|
||||
|
||||
### Q. What are you getting positive feedback on, and what’s coming up next for SIG Usability?
|
||||
|
||||
A. We have had new members joining, attending monthly meetings regularly, and showing interest in becoming contributors and helping the community. We have also had a lot of people reach out to us on Slack to express their interest in the SIG.
|
||||
|
||||
Currently, we are focused on finishing the study mentioned in our [talk](https://www.youtube.com/watch?v=Byn0N_ZstE0), also our project for this year. We are always happy to have new contributors join us.
|
||||
|
||||
### Q: Any closing thoughts/resources you’d like to share?
|
||||
|
||||
A. We love meeting new contributors and helping them explore different parts of the Kubernetes project. We will team up with other SIGs to facilitate engaging with end users, running studies, and helping them integrate accessible design practices into their development practices.
|
||||
|
||||
Here are some resources for you to get started:
|
||||
- [GitHub](https://github.com/kubernetes/community/tree/master/sig-usability)
|
||||
- [Mailing list](https://groups.google.com/g/kubernetes-sig-usability)
|
||||
- [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fusability)
|
||||
- [Slack](https://slack.k8s.io/)
|
||||
- [Slack channel #sig-usability](https://kubernetes.slack.com/archives/CLC5EF63T)
|
||||
|
||||
## Wrap Up
|
||||
|
||||
SIG Usability hosted a [KubeCon talk](https://www.youtube.com/watch?v=Byn0N_ZstE0) about studying Kubernetes users' experiences. The talk focuses on updates to the user study projects, understanding who is using Kubernetes, what they are trying to achieve, how the project is addressing their needs, and where we need to improve the project and the client experience. Join the SIG's update to find out about the most recent research results, what the plans are for the forthcoming year, and how to get involved in the upstream usability team as a contributor!
|
|
@ -0,0 +1,83 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes Release Cadence Change: Here’s What You Need To Know"
|
||||
date: 2021-07-20
|
||||
slug: new-kubernetes-release-cadence
|
||||
---
|
||||
|
||||
**Authors**: Celeste Horgan, Adolfo García Veytia, James Laverack, Jeremy Rickard
|
||||
|
||||
On April 23, 2021, the Release Team merged a Kubernetes Enhancement Proposal (KEP) changing the Kubernetes release cycle from four releases a year (once a quarter) to three releases a year.
|
||||
|
||||
This blog post provides a high level overview about what this means for the Kubernetes community's contributors and maintainers.
|
||||
|
||||
## What's changing and when
|
||||
|
||||
Starting with the [Kubernetes 1.22 release](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.22), a lightweight policy will drive the creation of each release schedule. This policy states:
|
||||
|
||||
* The first Kubernetes release of a calendar year should start at the second or third
|
||||
week of January to provide more time for contributors coming back from the
|
||||
end of year holidays.
|
||||
* The last Kubernetes release of a calendar year should be finished by the middle of
|
||||
December.
|
||||
* A Kubernetes release cycle has a length of approximately 15 weeks.
|
||||
* The week of KubeCon + CloudNativeCon is not considered a 'working week' for SIG Release. The Release Team will not hold meetings or make decisions in this period.
|
||||
* An explicit SIG Release break of at least two weeks between each cycle will
|
||||
be enforced.
|
||||
|
||||
As a result, Kubernetes will follow a three releases per year cadence. Kubernetes 1.23 will be the final release of the 2021 calendar year. This new policy results in a very predictable release schedule, allowing us to forecast upcoming release dates:
|
||||
|
||||
|
||||
*Proposed Kubernetes Release Schedule for the remainder of 2021*
|
||||
|
||||
| Week Number in Year | Release Number | Release Week | Note |
|
||||
| -------- | -------- | -------- | -------- |
|
||||
| 35 | 1.23 | 1 (August 23) | |
|
||||
| 50 | 1.23 | 16 (December 07) | KubeCon + CloudNativeCon NA Break (Oct 11-15) |
|
||||
|
||||
*Proposed Kubernetes Release Schedule for 2022*
|
||||
|
||||
| Week Number in Year | Release Number | Release Week | Note |
|
||||
| -------- | -------- | -------- | -------- |
|
||||
| 1 | 1.24 | 1 (January 03) | |
|
||||
| 15 | 1.24 | 15 (April 12) | |
|
||||
| 17 | 1.25 | 1 (April 26) | KubeCon + CloudNativeCon EU likely to occur |
|
||||
| 32 | 1.25 | 15 (August 09) | |
|
||||
| 34 | 1.26 | 1 (August 22) | KubeCon + CloudNativeCon NA likely to occur |
|
||||
| 49 | 1.26 | 14 (December 06) | |
|
||||
|
||||
These proposed dates reflect only the start and end dates, and they are subject to change. The Release Team will select dates for enhancement freeze, code freeze, and other milestones at the start of each release. For more information on these milestones, please refer to the [release phases](https://www.k8s.dev/resources/release/#phases) documentation. Feedback from prior releases will feed into this process.
|
||||
|
||||
## What this means for end users
|
||||
|
||||
The major change end users will experience is a slower release cadence and a slower rate of enhancement graduation. Kubernetes release artifacts, release notes, and all other aspects of any given release will stay the same.
|
||||
|
||||
Prior to this change an enhancement could graduate from alpha to stable in 9 months. With the change in cadence, this will stretch to 12 months. Additionally, graduation of features over the last few releases has in some part been driven by release team activities.
|
||||
|
||||
With fewer releases, users can expect to see the rate of feature graduation slow. Users can also expect releases to contain a larger number of enhancements that they need to be aware of during upgrades. However, with fewer releases to consume per year, it's intended that end user organizations will spend less time on upgrades and gain more time on supporting their Kubernetes clusters. It also means that Kubernetes releases are in support for a slightly longer period of time, so bug fixes and security patches will be available for releases for a longer period of time.
|
||||
|
||||
|
||||
## What this means for Kubernetes contributors
|
||||
|
||||
With a lower release cadence, contributors have more time for project enhancements, feature development, planning, and testing. A slower release cadence also gives contributors more room to maintain their mental health, prepare for events like KubeCon + CloudNativeCon, or work on downstream integrations.
|
||||
|
||||
|
||||
## Why we decided to change the release cadence
|
||||
|
||||
The Kubernetes 1.19 cycle was far longer than usual. SIG Release extended it to lessen the burden on both Kubernetes contributors and end users due to the COVID-19 pandemic. Following this extended release, the Kubernetes 1.20 release became the third, and final, release for 2020.
|
||||
|
||||
As the Kubernetes project matures, the number of enhancements per cycle grows, along with the burden on contributors and the Release Engineering team. Downstream consumers and integrators also face increased challenges keeping up with [ever more feature-packed releases](https://kubernetes.io/blog/2021/04/08/kubernetes-1-21-release-announcement/). Wider project adoption means the complexity of supporting a rapidly evolving platform affects a bigger downstream chain of consumers.
|
||||
|
||||
Changing the release cadence from four to three releases per year balances a variety of factors for stakeholders: while it's not strictly an LTS policy, consumers and integrators will get longer support terms for each minor version as the extended release cycles lead to the [previous three releases being supported](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/) for a longer period. Contributors get more time to [mature enhancements](https://www.cncf.io/blog/2021/04/12/enhancing-the-kubernetes-enhancements-process/) and [get them ready for production](https://github.com/kubernetes/community/blob/master/sig-architecture/production-readiness.md).
|
||||
|
||||
Finally, the management overhead for SIG Release and the Release Engineering team diminishes allowing the team to spend more time on improving the quality of the software releases and the tooling that drives them.
|
||||
|
||||
## How you can help
|
||||
|
||||
Join the [discussion](https://github.com/kubernetes/sig-release/discussions/1566) about communicating future release dates and be sure to be on the lookout for post release surveys.
|
||||
|
||||
## Where you can find out more
|
||||
|
||||
- Read the KEP [here](https://github.com/kubernetes/enhancements/tree/master/keps/sig-release/2572-release-cadence)
|
||||
- Join the [kubernetes-dev](https://groups.google.com/g/kubernetes-dev) mailing list
|
||||
- Join [Kubernetes Slack](https://slack.k8s.io) and follow the #announcements channel
|
|
@ -0,0 +1,71 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Updating NGINX-Ingress to use the stable Ingress API'
|
||||
date: 2021-07-26
|
||||
slug: update-with-ingress-nginx
|
||||
---
|
||||
|
||||
**Authors:** James Strong, Ricardo Katz
|
||||
|
||||
With all Kubernetes APIs, there is a process to creating, maintaining, and
|
||||
ultimately deprecating them once they become GA. The networking.k8s.io API group is no
|
||||
different. The upcoming Kubernetes 1.22 release will remove several deprecated APIs
|
||||
that are relevant to networking:
|
||||
|
||||
- the `networking.k8s.io/v1beta1` API version of [IngressClass](/docs/concepts/services-networking/ingress/#ingress-class)
|
||||
- all beta versions of [Ingress](/docs/concepts/services-networking/ingress/): `extensions/v1beta1` and `networking.k8s.io/v1beta1`
|
||||
|
||||
On a v1.22 Kubernetes cluster, you'll be able to access Ingress and IngressClass
|
||||
objects through the stable (v1) APIs, but access via their beta APIs won't be possible.
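For example, on a cluster that serves the stable APIs you can address the objects through
the v1 group/version explicitly (a quick sanity check, not a migration step):

```bash
# Read Ingress and IngressClass objects via networking.k8s.io/v1.
kubectl get ingresses.v1.networking.k8s.io --all-namespaces
kubectl get ingressclasses.v1.networking.k8s.io
```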
|
||||
This change has been in
|
||||
discussion since
|
||||
[2017](https://github.com/kubernetes/kubernetes/issues/43214),
|
||||
[2019](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) with
|
||||
1.16 Kubernetes API deprecations, and most recently in
|
||||
KEP-1453:
|
||||
[Graduate Ingress API to GA](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/1453-ingress-api#122).
|
||||
|
||||
During community meetings, the networking Special Interest Group has decided to continue
|
||||
supporting Kubernetes versions older than 1.22 with Ingress-NGINX version 0.47.0.
|
||||
Support for Ingress-NGINX will continue for six months after Kubernetes 1.22
|
||||
is released. Any additional bug fixes and CVEs for Ingress-NGINX will be
|
||||
addressed on an as-needed basis.
|
||||
|
||||
The Ingress-NGINX project will have separate branches and releases to
|
||||
support this model, mirroring the Kubernetes project process. Future
|
||||
releases of the Ingress-NGINX project will track and support the latest
|
||||
versions of Kubernetes.
|
||||
|
||||
{{< table caption="Ingress NGINX supported version with Kubernetes Versions" >}}
|
||||
Kubernetes version | Ingress-NGINX version | Notes
|
||||
:-------------------|:----------------------|:------------
|
||||
v1.22 | v1.0.0-alpha.2 | New features, plus bug fixes.
|
||||
v1.21 | v0.47.x | Bugfixes only, and just for security issues or crashes. No end-of-support date announced.
|
||||
v1.20 | v0.47.x | Bugfixes only, and just for security issues or crashes. No end-of-support date announced.
|
||||
v1.19 | v0.47.x | Bugfixes only, and just for security issues or crashes. Fixes only provided until 6 months after Kubernetes v1.22.0 is released.
|
||||
{{< /table >}}
|
||||
|
||||
Because of the updates in Kubernetes 1.22, **v0.47.0** will not work with
|
||||
Kubernetes 1.22.
|
||||
|
||||
## What you need to do
|
||||
|
||||
The team is currently in the process of upgrading ingress-nginx to support
|
||||
the v1 migration; you can track the progress
|
||||
[here](https://github.com/kubernetes/ingress-nginx/pull/7156).
|
||||
We're not making feature improvements to `ingress-nginx` until after the support for
|
||||
Ingress v1 is complete.
|
||||
|
||||
In the meantime, to ensure there are no compatibility issues:
|
||||
|
||||
* Update to the latest version of Ingress-NGINX; currently
|
||||
[v0.47.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v0.47.0)
|
||||
* After Kubernetes 1.22 is released, ensure you are using the latest version of
|
||||
Ingress-NGINX that supports the stable APIs for Ingress and IngressClass.
|
||||
* Test Ingress-NGINX version v1.0.0-alpha.2 with cluster versions >= 1.19
|
||||
and report any issues to the project's GitHub page.
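While you test, it can also help to audit which Ingress objects still rely on the deprecated
`kubernetes.io/ingress.class` annotation rather than `spec.ingressClassName` (a sketch that
assumes `jq` is installed):

```bash
# Print namespace/name for every Ingress that still carries the deprecated annotation.
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[]
      | select(.metadata.annotations["kubernetes.io/ingress.class"] != null)
      | "\(.metadata.namespace)/\(.metadata.name)"'
```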
|
||||
|
||||
The community’s feedback and support in this effort are welcome. The
|
||||
Ingress-NGINX Sub-project regularly holds community meetings where we discuss
|
||||
this and other issues facing the project. For more information on the sub-project,
|
||||
please see [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network).
|
|
@ -0,0 +1,231 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Roorkee robots, releases and racing: the Kubernetes 1.21 release interview"
|
||||
date: 2021-07-29
|
||||
---
|
||||
|
||||
**Author**: Craig Box (Google)
|
||||
|
||||
With Kubernetes 1.22 due out next week, now is a great time to look back on 1.21. The release team for that version was led by [Nabarun Pal](https://twitter.com/theonlynabarun) from VMware.
|
||||
|
||||
Back in April I [interviewed Nabarun](https://kubernetespodcast.com/episode/146-kubernetes-1.21/) on the weekly [Kubernetes Podcast from Google](https://kubernetespodcast.com/); the latest in a series of release lead conversations that started back with 1.11, not long after the show started back in 2018.
|
||||
|
||||
In these interviews we learn a little about the release, but also about the process behind it, and the story behind the person chosen to lead it. Getting to know a community member is my favourite part of the show each week, and so I encourage you to [subscribe wherever you get your podcasts](https://kubernetespodcast.com/subscribe/). With a release coming next week, you can probably guess what our next topic will be!
|
||||
|
||||
*This transcript has been edited and condensed for clarity.*
|
||||
|
||||
---
|
||||
|
||||
**CRAIG BOX: You have a Bachelor of Technology in Metallurgical and Materials Engineering. How are we doing at turning lead into gold?**
|
||||
|
||||
NABARUN PAL: Well, last I checked, we have yet to find the philosopher's stone!
|
||||
|
||||
**CRAIG BOX: One of the more important parts of the process?**
|
||||
|
||||
NABARUN PAL: We're not doing that well in terms of getting alchemists up and running. There is some improvement in nuclear technology, where you can turn lead into gold, but I would guess buying gold would be much more efficient.
|
||||
|
||||
**CRAIG BOX: Or Bitcoin? It depends what you want to do with the gold.**
|
||||
|
||||
NABARUN PAL: Yeah, seeing the increasing prices of Bitcoin, you'd probably prefer to bet on that. But, don't take this as a suggestion. I'm not a registered investment advisor, and I don't give investment advice!
|
||||
|
||||
**CRAIG BOX: But you are, of course, a trained materials engineer. How did you get into that line of education?**
|
||||
|
||||
NABARUN PAL: We had a graded and equated exam structure, where you sit a single exam, and then based on your performance in that exam, you can try any of the universities which take those scores into account. I went to the Indian Institute of Technology, Roorkee.
|
||||
|
||||
Materials engineering interested me a lot. I had a passion for computer science since childhood, but I also liked material science, so I wanted to explore that field. I did a lot of exploration around material science and metallurgy in my freshman and sophomore years, but then computing, since it was a passion, crept into the picture.
|
||||
|
||||
**CRAIG BOX: Let's dig in there a little bit. What did computing look like during your childhood?**
|
||||
|
||||
NABARUN PAL: It was a very interesting journey. I started exploring computers back when I was seven or eight. For my first programming language, if you call it a programming language, I explored LOGO.
|
||||
|
||||
You have a turtle on the screen, and you issue commands to it, like move forward or rotate or pen up or pen down. You basically draw geometric figures. I could visually see how I could draw a square and how I could draw a triangle. It was an interesting journey after that. I learned BASIC, then went to some amount of HTML, JavaScript.
|
||||
|
||||
**CRAIG BOX: It's interesting to me because Logo and BASIC were probably my first two programming languages, but I think there was probably quite a gap in terms of when HTML became a thing after those two! Did your love of computing always lead you down the path towards programming, or were you interested as a child in using computers for games or application software? What led you specifically into programming?**
|
||||
|
||||
NABARUN PAL: Programming came in late. Not just in computing, but in life, I'm curious with things. When my parents got me my first computer, I was curious. I was like, "how does this operating system work?" What even is running it? Using a television and using a computer is a different experience, but usability is kind of the same thing. The HCI device for a television is a remote, whereas with a computer, I had a keyboard and a mouse. I used to tinker with the box and reinstall operating systems.
|
||||
|
||||
We used to get magazines back then. They used to bundle OpenSuse or Debian, and I used to install them. It was an interesting experience, 15 years back, how Linux used to be. I have been a tinkerer all around, and that's what eventually led me to programming.
|
||||
|
||||
**CRAIG BOX: With an interest in both the physical and ethereal aspects of technology, you did a lot of robotics challenges during university. That's something that I am not surprised to hear from someone who has a background in Logo, to be honest. There's Mindstorms, and a lot of other technology that is based around robotics that a lot of LOGO people got into. How was that something that came about for you?**
|
||||
|
||||
NABARUN PAL: When I joined my university, apart from studying materials, one of the things they used to really encourage was to get involved in a lot of extracurricular activities. One which interested me was robotics. I joined [my college robotics team](https://github.com/marsiitr) and participated in a lot of challenges.
|
||||
|
||||
Predominantly, we used to participate in this competition called [ABU Robocon](https://en.wikipedia.org/wiki/ABU_Robocon), which is an event conducted by the Asia-Pacific Broadcasting Union. What they used to do was, every year, one of the participating countries in the contest would provide a problem statement. For example, one year, they asked us to build a badminton-playing robot. They asked us to build a rugby playing robot or a Frisbee thrower, and there are some interesting problem statements around the challenge: you can't do this. You can't do that. Weight has to be like this. Dimensions have to be like that.
|
||||
|
||||
I got involved in that, and most of my time at university, I used to spend there. Material science became kind of a backburner for me, and my hobby became my full time thing.
|
||||
|
||||
**CRAIG BOX: And you were not only involved there in terms of the project and contributions to it, but you got involved as a secretary of the team, effectively, doing a lot of the organization, which is a thread that will come up as we speak about Kubernetes.**
|
||||
|
||||
NABARUN PAL: Over the course of time, when I gained more knowledge into how the team works, it became very natural that I graduated up the ladder and then managed juniors. I became the joint secretary of the robotics club in our college. This was more of a broad, engaging role in evangelizing robotics at the university, to promote events, to help students to see the value in learning robotics - what you gain out of that mechanically or electronically, or how do you develop your logic by programming robots.
|
||||
|
||||
**CRAIG BOX: Your first job after graduation was working at a company called Algoshelf, but you were also an intern there while you were at school?**
|
||||
|
||||
NABARUN PAL: Algoshelf was known as Rorodata when I joined them as an intern. This was also an interesting opportunity for me in the sense that I was always interested in writing programs which people would use. One of the things that I did there was build an open source Function as a Service framework, if I may call it that - it was mostly turning Python functions into web servers without even writing any code. The interesting bit there was that it was targeted toward data scientists, and not towards programmers. We had to understand the pain of data scientists, that they had to learn a lot of programming in order to even deploy their machine learning models, and we wanted to solve that problem.
|
||||
|
||||
They offered me a job after my internship, and I kept on working for them after I graduated from university. There, I got introduced to Kubernetes, so we pivoted into a product structure where the very same thing I told you, the Functions as a Service thing, could be deployed in Kubernetes. I was exploring Kubernetes to use it as a scalable platform. Instead of managing pets, we wanted to manage cattle, as in, we wanted to have a very highly distributed architecture.
|
||||
|
||||
**CRAIG BOX: Not actual cattle. I've been to India. There are a lot of cows around.**
|
||||
|
||||
NABARUN PAL: Yeah, not actual cattle. That is a bit tough.
|
||||
|
||||
**CRAIG BOX: When Algoshelf was looking at picking up Kubernetes, what was the evaluation process like? Were you looking at other tools at the time? Or had enough time passed that Kubernetes was clearly the platform that everyone was going to use?**
|
||||
|
||||
NABARUN PAL: Algoshelf was a natural evolution. Before Kubernetes, we used to deploy everything on a single big AWS server, using systemd. Everything was a systemd service, and everything was deployed using Fabric. Fabric is a Python package which essentially is like Ansible, but much leaner, as it does not have all the shims and things that Ansible has.
|
||||
|
||||
Then we asked "what if we need to scale out to different machines?" Kubernetes was in the hype. We hopped onto the hype train to see whether Kubernetes was worth it for us. And that's where my journey started, exploring the ecosystem, exploring the community. How can we improve the community in essence?
|
||||
|
||||
**CRAIG BOX: A couple of times now you've mentioned as you've grown in a role, becoming part of the organization and the arranging of the group. You've talked about working in Python. You had submitted some talks to Pycon India. And I understand you're now a tech lead for that conference. What does the tech community look like in India and how do you describe your involvement in it?**
|
||||
|
||||
NABARUN PAL: My involvement with the community began when I was at university. When I was working as an intern at Algoshelf, I was introduced to this-- I never knew about PyCon India, or tech conferences in general.
|
||||
|
||||
The person that I was working with just asked me, like hey, did you submit a talk to PyCon India? It's very useful, the library that we were making. So I [submitted a talk](https://www.nabarun.in/talk/2017/pyconindia/#1) to PyCon India in 2017. Eventually the talk got selected. That was not my first speaking opportunity, it was my second. I also spoke at PyData Delhi on a similar thing that I worked on in my internship.
|
||||
|
||||
It has been a journey since then. I talked about the same thing at FOSSASIA Summit in Singapore, and got really involved with the Python community because it was what I used to work on back then.
|
||||
|
||||
After giving all those talks at conferences, I got also introduced to this amazing group called [dgplug](https://dgplug.org/), which is an acronym for the Durgapur Linux Users Group. It is a group started in-- I don't remember the exact year, but it was around 12 to 13 years back, by someone called Kushal Das, with the ideology of [training students into being better open source contributors](https://foss.training/).
|
||||
|
||||
I liked the idea and got involved in teaching last year. It is not limited to students. Professionals can also join in. It's about making anyone better at upstream contributions, making things sustainable. I started training people on Vim and on how to use text editors, so they are more efficient and productive. In general life, text editors are a really good tool.
|
||||
|
||||
The other thing was the shell. How do you navigate around the Linux shell and command line? That has been a fun experience.
|
||||
|
||||
**CRAIG BOX: It's very interesting to think about that, because my own involvement with a Linux User Group was probably around the year 2000. And back then we were teaching people how to install things-- Linux on CD was kinda new at that point in time. There was a lot more of, what is this new thing and how do we get involved? When the internet took off around that time, all of that stuff moved online - you no longer needed to go meet a group of people in a room to talk about Linux. And I haven't really given much thought to the concept of a LUG since then, but it's great to see it having turned into something that's now about contributing, rather than just about how you get things going for yourself.**
|
||||
|
||||
NABARUN PAL: Exactly. So as I mentioned earlier, my journey into Linux was installing SUSE from DVDs that came bundled with magazines. Back then it was a pain installing things because you did not get any instructions. There has certainly been a paradigm shift now. People are more open to reading instructions online, downloading ISOs, and then just installing them. So we really don't need to do that as part of LUGs.
|
||||
|
||||
We have shifted more towards enabling people to contribute to whichever project that they use. For example, if you're using Fedora, contribute to Fedora; make things better. It's just about giving back to the community in any way possible.
|
||||
|
||||
**CRAIG BOX: You're also involved in the [Kubernetes Bangalore meetup group](https://www.meetup.com/Bangalore-Kubernetes-Meetup/). Does that group have a similar mentality?**
|
||||
|
||||
NABARUN PAL: The Kubernetes Bangalore meetup group is essentially focused towards spreading the knowledge of Kubernetes and the aligned products in the ecosystem, whatever there is in the Cloud Native Landscape, in various ways. For example, to evangelize about using them in your company or how people use them in existing ways.
|
||||
|
||||
So a few months back in February, we did something like a [Kubernetes contributor workshop](https://www.youtube.com/watch?v=FgsXbHBRYIc). It was one of its kind in India. It was the first one if I recall correctly. We got a lot of traction and community members interested in contributing to Kubernetes and a lot of other projects. And this is becoming a really valuable thing.
|
||||
|
||||
I'm not much involved in the organization of the group. There are really great people already organizing it. I keep on being around and attending the meetups and trying to answer any questions if people have any.
|
||||
|
||||
**CRAIG BOX: One way that it is possible to contribute to the Kubernetes ecosystem is through the release process. You've [written a blog](https://blog.naba.run/posts/release-enhancements-journey/) which talks about your journey through that. It started in Kubernetes 1.17, where you took a shadow role for that release. Tell me about what it was like to first take that plunge.**
|
||||
|
||||
NABARUN PAL: Taking the plunge was a big step, I would say. It should not have been that way. After getting into the team, I saw that it is really encouraged that you should just apply to the team - but then write truthfully about yourself. What do you want? Write your passionate goal, why you want to be in the team.
|
||||
|
||||
So even right now the shadow applications are open for the next release. I wanted to give that a small shoutout. If you want to contribute to the Kubernetes release team, please do apply. The form is pretty simple. You just need to say why you want to contribute to the release team.
|
||||
|
||||
**CRAIG BOX: What was your answer to that question?**
|
||||
|
||||
NABARUN PAL: It was a bit tricky. I have this philosophy of contributing to projects that I use in my day-to-day life. I use a lot of open source projects daily, and I started contributing to Kubernetes primarily because I was using the Kubernetes Python client. That was one of my first contributions.
|
||||
|
||||
When I was contributing to that, I explored the release team and it interested me a lot, particularly how interesting and varied the mechanics of releasing Kubernetes are. For most software projects, it's usually whenever you decide that you have made meaningful progress in terms of features, you release it. But Kubernetes is not like that. We follow a regular release cadence. And all those aspects really interested me. I actually applied for the first time in Kubernetes 1.16, but got rejected.
|
||||
|
||||
But I still applied to Kubernetes 1.17, and I got into the enhancements team. That team was led by [MrBobbyTables, Bob Killen](https://kubernetespodcast.com/episode/126-research-steering-honking/), back then, and [Jeremy Rickard](https://kubernetespodcast.com/episode/131-kubernetes-1.20/) was one of my co-shadows in the team. I shadowed enhancements again. Then I led enhancements in 1.19. I then shadowed the lead in 1.20 and eventually led the 1.21 team. That's what my journey has been.
|
||||
|
||||
My suggestion to people is don't be afraid of failure. Even if you don't get selected, it's perfectly fine. You can still contribute to the release team. Just hop on the release calls, raise your hand, and introduce yourself.
|
||||
|
||||
**CRAIG BOX: Between the 1.20 and 1.21 releases, you moved to work on the upstream contribution team at VMware. I've noticed that VMware is hiring a lot of great upstream contributors at the moment. Is this something that [Stephen Augustus](https://kubernetespodcast.com/episode/130-kubecon-na-2020/) had his fingerprints all over? Is there something in the water?**
|
||||
|
||||
NABARUN PAL: A lot of people have fingerprints on this process. Stephen certainly had his fingerprints on it, I would say. We are expanding the team of upstream contributors primarily because the product that we are working for is based on Kubernetes. It helps us a lot in driving processes upstream and helping out the community as a whole, because everyone then gets enabled and benefits from what we contribute to the community.
|
||||
|
||||
**CRAIG BOX: I understand that the Tanzu team is being built out in India at the moment, but I guess you probably haven't been able to meet them in person yet?**
|
||||
|
||||
NABARUN PAL: Yes and no. I did not meet any of them after joining VMware, but I met a lot of my teammates, before I joined VMware, at KubeCons. For example, I met Nikhita, I met Dims, I met Stephen at KubeCon. I am yet to meet other members of the team and I'm really excited to catch up with them once everything comes out of lockdown and we go back to our normal lives.
|
||||
|
||||
**CRAIG BOX: Yes, everyone that I speak to who has changed jobs in the pandemic says it's a very odd experience, just nothing really being different. And the same perhaps for people who are working on open source moving companies as well. They're doing the same thing, perhaps just for a different employer.**
|
||||
|
||||
NABARUN PAL: As we say in the community, see you in another Slack in some time.
|
||||
|
||||
**CRAIG BOX: We now turn to the recent release of Kubernetes 1.21. First of all, congratulations on that.**
|
||||
|
||||
NABARUN PAL: Thank you.
|
||||
|
||||
**CRAIG BOX: [The announcement](https://kubernetes.io/blog/2021/04/08/kubernetes-1-21-release-announcement/) says the release consists of 51 enhancements, 13 graduating to stable, 16 moving to beta, 20 entering alpha, and then two features that have been deprecated. How would you summarize this release?**
|
||||
|
||||
NABARUN PAL: One of the big points for this release is that it is the largest release of all time.
|
||||
|
||||
**CRAIG BOX: Really?**
|
||||
|
||||
NABARUN PAL: Yep. 1.20 was the largest release back then, but 1.21 got more enhancements, primarily due to a lot of changes that we did to the process.
|
||||
|
||||
In the 1.21 release cycle, we did a few things differently compared to other release cycles-- for example, in the enhancement process. An enhancement, in the Kubernetes context, is basically a feature proposal. You will hear the terminology [Kubernetes Enhancement Proposals](https://github.com/kubernetes/enhancements/blob/master/keps/README.md), or KEP, a lot in the community. An enhancement is a broad thing encapsulated in a specific document.
|
||||
|
||||
**CRAIG BOX: I like to think of it as a thing that's worth having a heading in the release notes.**
|
||||
|
||||
NABARUN PAL: Indeed. Until the 1.20 release cycle, what we used to do was-- the release team has a vertical called enhancements. The enhancements team members used to ping each of the enhancement issues and ask whether they want to be part of the release cycle or not. The authors would decide, or talk to their SIG, and then come back with the answer, as to whether they wanted to be part of the cycle.
|
||||
|
||||
In this release, what we did was we eliminated that process and asked the SIGs proactively to discuss amongst themselves, what they wanted to pitch in for this release cycle. What set of features did they want to graduate this release? They may introduce things in alpha, graduate things to beta or stable, or they may also deprecate features.
|
||||
|
||||
What this did was promote a lot of async processes, and at the same time, give power back to the community. The community decides what they want in the release and then comes back collectively. It also reduces a lot of stress on the release team who previously had to ask people consistently what they wanted to pitch in for the release. You now have a deadline. You discuss amongst your SIG what your roadmap is and what it looks like for the near future. Maybe this release, and the next two. And you put all of those answers into a Google spreadsheet. Spreadsheets are still a thing.
|
||||
|
||||
**CRAIG BOX: The Kubernetes ecosystem runs entirely on Google Spreadsheets.**
|
||||
|
||||
NABARUN PAL: It does, and a lot of Google Docs for meeting notes! We did a lot of process improvements, which essentially led to a better release. This release cycle we had 13 enhancements graduating to stable, 16 which moved to beta, and 20 enhancements which were net new features into the ecosystem, and came in as alpha.
|
||||
|
||||
Along with that are features set for deprecation. One of them was PodSecurityPolicy. That has been a point of discussion in the Kubernetes user base and we also published [a blog post about it](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/). All credit to SIG Security, who have been on top of things, finding a replacement for PodSecurityPolicy even before this release cycle ended, so that they could at least have a proposal of what will happen next.
|
||||
|
||||
**CRAIG BOX: Let's talk about some old things and some new things. You mentioned PodSecurityPolicy there. That's a thing that's been around a long time and is being deprecated. Two things that have been around a long time and that are now being promoted to stable are CronJobs and PodDisruptionBudgets, both of which were introduced in Kubernetes 1.4, which came out in 2016. Why do you think it took so long for them both to go stable?**
|
||||
|
||||
NABARUN PAL: I might not have a definitive answer to your question. One of the things that I feel is they might be already so good that nobody saw that they were beta features, and just kept on using them.
|
||||
|
||||
One of the things that I noticed when reading for the CronJobs graduation from beta to stable was the new controller. Users might not see this, but there has been a drastic change in the CronJob controller v2. What it essentially does is goes from a poll-based method of checking what users have defined as CronJobs to a queue architecture, which is the modern method of defining controllers. That has been one of the really good improvements in the case of CronJobs. Instead of the controller working in O(N) time, you now have constant time complexity.
|
||||
|
||||
**CRAIG BOX: A lot of these features that have been in beta for a long time, like you say, people have an expectation that they are complete. With PodSecurityPolicy, it's being deprecated, which is allowed because it's a feature that never made it out of beta. But how do you think people will react to it going away? And does that say something about the need for the process to make sure that features don't just languish in beta forever, which has been introduced recently?**
|
||||
|
||||
NABARUN PAL: That's true. One of the driving factors, when contributors are thinking of graduating beta features has been the ["prevention of perma-beta" KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/1635-prevent-permabeta/README.md). Back in 1.19 we [introduced this process](https://kubernetes.io/blog/2020/08/21/moving-forward-from-beta/) where each of the beta resources were marked for deprecation and removal in a certain time frame-- three releases for deprecation and another release for removal. That's also a motivating factor for eventually rethinking as to how beta resources work for us in the community. That is also very effective, I would say.
|
||||
|
||||
**CRAIG BOX: Do remember that Gmail was in beta for eight years.**
|
||||
|
||||
NABARUN PAL: I did not know that!
|
||||
|
||||
**CRAIG BOX: Nothing in Kubernetes is quite that old yet, but we'll get there. Of the 20 new enhancements, do you have a favorite or any that you'd like to call out?**
|
||||
|
||||
NABARUN PAL: There are two specific features in 1.21 that I'm really interested in, and are coming as net new features. One of them is the [persistent volume health monitor](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor), which gives the users the capability to actually see whether the backing volumes, which power persistent volumes in Kubernetes, are deleted or not. For example, the volumes may get deleted due to an inadvertent event, or they may get corrupted. That information is basically surfaced out as a field so that the user can leverage it in any way.
|
||||
|
||||
The other feature is the proposal for [adding headers with the command name to kubectl requests](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/859-kubectl-headers). We have always set the user-agent information when doing those kind of requests, but the proposal is to add what command the user put in so that we can enable more telemetry, and cluster administrators can determine the usage patterns of how people are using the cluster. I'm really excited about these kind of features coming into play.
|
||||
|
||||
**CRAIG BOX: You're the first release lead from the Asia-Pacific region, or more accurately, outside of the US and Europe. Most meetings in the Kubernetes ecosystem are traditionally in the window of overlap between the US and Europe, in the morning in California and the evening here in the UK. What's it been like to work outside of the time zones that the community had previously been operating in?**
|
||||
|
||||
NABARUN PAL: It has been a fun and a challenging proposition, I would say. In the last two-ish years that I have been contributing to Kubernetes, the community has also transformed from a lot of early morning Pacific calls to more towards async processes. For example, we in the release team have transformed our processes so we don't do updates in the calls anymore. What we do is ask for updates ahead of time, and then in the call, we just discuss things which need to be discussed synchronously in the team.
|
||||
|
||||
We leverage the meetings right now more for discussions. But we also don't come to decisions in those discussions, because if any stakeholder is not present on the call, it puts them at a disadvantage. We are trying to talk more on Slack, publicly, or talk on mailing lists. That's where most of the discussion should happen, and also to gain lazy consensus. What I mean by lazy consensus is come up with a pre-decision kind of thing, but then also invite feedback from the broader community about what people would like them to see about that specific thing being discussed. This is where we as a community are also transforming a lot, but there is a lot more headroom to grow.
|
||||
|
||||
The release team also started to have EU/APAC burndown meetings. In addition to having one meeting focused towards the US and European time zones, we also do a meeting which is more suited towards European and Asia-Pacific time zones. One of the driving factors for those decisions was that the release team is seeing a lot of participation from a variety of time zones. To give you one metric, we had release team members this cycle from UTC+8 all through UTC-8 - 16 hours of span. It's really difficult to accommodate all of those zones in a single meeting. And it's not just those 16 hours of span - what about the other eight hours?
|
||||
|
||||
**CRAIG BOX: Yeah, you're missing New Zealand. You could add another 5 hours of span right there.**
|
||||
|
||||
NABARUN PAL: Exactly. So we will always miss people in meetings, and that's why we should also innovate more, have different kinds of meetings. But that also may not be very sustainable in the future. Will people attend duplicate meetings? Will people follow both of the meetings? More meetings is one of the solutions.
|
||||
|
||||
The other solution is to have threaded discussions on some medium, be it Slack or a mailing list. Then people can just pitch in whenever it is work time for them. Then, at the end of the day, a 24-hour rolling period, you digest it and push it out as meeting notes. That's what the Contributor Experience Special Interest Group is doing - shout-out to them for moving to that process. I may be wrong here, but I think once every two weeks, they do async updates on Slack. And that is a really nice thing to have, improving the variety of geographies that people can contribute from.
|
||||
|
||||
**CRAIG BOX: Once you've put everything together that you hope to be in your release, you create a release candidate build. How do you motivate people to test those?**
|
||||
|
||||
NABARUN PAL: That's a very interesting question. It is difficult for us to motivate people into trying out these candidates. It's mostly people who are passionate about Kubernetes who try out the release candidates and see for themselves what the bugs are. I remember [Dims tweeting out a call](https://twitter.com/dims/status/1377272238420934656) that if somebody tries out the release candidate and finds a good bug or caveat, they could get a callout in the KubeCon keynote. That's one of the incentives - if you want to be called out in a KubeCon keynote, please try our release candidates.
|
||||
|
||||
**CRAIG BOX: Or get a new pair of Kubernetes socks?**
|
||||
|
||||
NABARUN PAL: We would love to give out goodies to people who try out our release candidates and find bugs. For example, if you want the brand new release team logo as a sticker, just hit me up. If you find a bug in a 1.22 release candidate, I would love to be able to send you some coupon codes for the store. Don't quote me on this, but do reach out.
|
||||
|
||||
**CRAIG BOX: Now the release is out, is it time for you to put your feet up? What more things do you have to do, and how do you feel about the path ahead for yourself?**
|
||||
|
||||
NABARUN PAL: I was discussing this with the team yesterday. Even after the release, we had kind of a water-cooler conversation. I just pasted in a Zoom link to all the release team members and said, hey, do you want to chat? One of the things that I realized that I'm really missing is the daily burndowns right now. I will be around in the release team and the SIG Release meetings, helping out the new lead in transitioning. And even my job, right now, is not over. I'm working with Taylor, who is the emeritus advisor for 1.21, on figuring out some of the mechanics for the next release cycle. I'm also documenting what all we did as part of the process and as part of the process changes, and making sure the next release cycle is up and running.
|
||||
|
||||
**CRAIG BOX: We've done a lot of these release lead interviews now, and there's a question which we always like to ask, which is, what will you write down in the transition envelope? Savitha Raghunathan is the release lead for 1.22. What is the advice that you will pass on to her?**
|
||||
|
||||
NABARUN PAL: Three words-- **Do, Delegate, and Defer**. Categorize things into those three buckets as to what you should do right away, what you need to defer, and things that you can delegate to your shadows or other release team members. That's one of the mantras that works really well when leading a team. It is not just in the context of the release team, but it's in the context of managing any team.
|
||||
|
||||
The other bit is **over-communicate**. No amount of communication is enough. What I've realized is the community is always willing to help you. One of the big examples that I can give is the day before release was supposed to happen, we were seeing a lot of test failures, and then one of the community members had an idea-- why don't you just send an email? I was like, "that sounds good. We can send an email mentioning all the flakes and call out for help to the broader Kubernetes developer community." And eventually, once we sent out the email, lots of people came in to help us in de-flaking the tests and trying to find out the root cause as to why those tests were failing so often. Big shout out to Antonio and all the SIG Network folks who came to pitch in.
|
||||
|
||||
No matter how many names I mention, it will never be enough. A lot of people, even outside the release team, have helped us a lot with this release. And that's where the release theme comes in - **Power to the Community**. I'm really stoked by how this community behaves and how people are willing to help you all the time. It's not just about what they're being told to do, but about what they're also interested in and passionate about.
|
||||
|
||||
**CRAIG BOX: One of the things you're passionate about is Formula One. Do you think Lewis Hamilton is going to take it away this year?**
|
||||
|
||||
NABARUN PAL: It's a fair probability that Lewis will win the title this year as well.
|
||||
|
||||
**CRAIG BOX: Which would take him to eight all-time championship titles. And thus-- [he's currently tied with Michael Schumacher](https://www.nytimes.com/2020/11/15/sports/autoracing/lewis-hamilton-schumacher-formula-one-record.html)-- would pull him ahead.**
|
||||
|
||||
NABARUN PAL: Yes. Michael Schumacher was my first favorite F1 driver, I would say. It feels a bit heartbreaking to see someone break Michael's record.
|
||||
|
||||
**CRAIG BOX: How do you feel about [Michael Schumacher's son joining the contest?](https://www.formula1.com/en/latest/article.breaking-mick-schumacher-to-race-for-haas-in-2021-as-famous-surname-returns.66XTVfSt80GrZe91lvWVwJ.html)**
|
||||
|
||||
NABARUN PAL: I feel good. Mick Schumacher is in the fray right now. And I wish we could see him, in a few years, in a Ferrari. The Schumacher family back to Ferrari would be really great to see. But then, my fan favorite has always been McLaren, partly because I like the chemistry of Lando and Carlos over the last two years. It was heartbreaking to see Carlos go to Ferrari. But then we have Lando and Daniel Ricciardo in the team. They're also fun people.
|
||||
|
||||
---
|
||||
|
||||
_[Nabarun Pal](https://twitter.com/theonlynabarun) is on the Tanzu team at VMware and served as the Kubernetes 1.21 release team lead._
|
||||
|
||||
_You can find the [Kubernetes Podcast from Google](http://www.kubernetespodcast.com/) at [@KubernetesPod](https://twitter.com/KubernetesPod) on Twitter, and you can [subscribe](https://kubernetespodcast.com/subscribe/) so you never miss an episode._
|
|
@ -0,0 +1,157 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Kubernetes 1.22: Reaching New Peaks'
|
||||
date: 2021-08-04
|
||||
slug: kubernetes-1-22-release-announcement
|
||||
---
|
||||
|
||||
**Authors:** [Kubernetes 1.22 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.22/release-team.md)
|
||||
|
||||
We’re pleased to announce the release of Kubernetes 1.22, the second release of 2021!
|
||||
|
||||
This release consists of 53 enhancements: 13 enhancements have graduated to stable, 24 enhancements are moving to beta, and 16 enhancements are entering alpha. Also, three features have been deprecated.
|
||||
|
||||
In April of this year, the Kubernetes release cadence was officially changed from four to three releases yearly. This is the first longer-cycle release related to that change. As the Kubernetes project matures, the number of enhancements per cycle grows. This means more work, from version to version, for the contributor community and Release Engineering team, and it can put pressure on the end-user community to stay up-to-date with releases containing an increasing number of features.
|
||||
|
||||
Changing the release cadence from four to three releases yearly balances many aspects of the project, both in how contributions and releases are managed, and also in the community's ability to plan for upgrades and stay up to date.
|
||||
|
||||
You can read more in the official blog post [Kubernetes Release Cadence Change: Here’s What You Need To Know](https://kubernetes.io/blog/2021/07/20/new-kubernetes-release-cadence/).
|
||||
|
||||
|
||||
## Major Themes
|
||||
|
||||
### Server-side Apply graduates to GA
|
||||
|
||||
[Server-side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/) is a new field ownership and object merge algorithm running on the Kubernetes API server. Server-side Apply helps users and controllers manage their resources via declarative configurations. It allows them to create and/or modify their objects declaratively, simply by sending their fully specified intent. After being in beta for a couple releases, Server-side Apply is now generally available.
|
||||
|
||||
### External credential providers now stable
|
||||
|
||||
Support for Kubernetes client [credential plugins](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) has been in beta since 1.11, and with the release of Kubernetes 1.22 now graduates to stable. The GA feature set includes improved support for plugins that provide interactive login flows, as well as a number of bug fixes. Aspiring plugin authors can look at [sample-exec-plugin](https://github.com/ankeesler/sample-exec-plugin) to get started.
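As an illustration only, a kubeconfig user entry that delegates authentication to an exec credential plugin looks roughly like the sketch below; the command name and arguments are placeholders, not a real plugin, and the `client.authentication.k8s.io/v1` API is the one that graduates with this feature.

```yaml
# Illustrative sketch of a kubeconfig user entry that delegates
# authentication to an exec credential plugin. The command name and
# arguments are placeholders, not a real plugin.
users:
- name: example-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      command: example-login-plugin
      args:
        - get-token
      # installHint is shown to users if the plugin binary is missing.
      installHint: "Install example-login-plugin from your identity provider"
      # interactiveMode tells the client whether the plugin may prompt the
      # user, supporting the improved interactive login flows mentioned above.
      interactiveMode: IfAvailable
```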
|
||||
|
||||
### etcd moves to 3.5.0
|
||||
|
||||
Kubernetes' default backend storage, etcd, has a new release: 3.5.0. The new release comes with improvements to the security, performance, monitoring, and developer experience. There are numerous bug fixes and some critical new features like the migration to structured logging and built-in log rotation. The release comes with a detailed future roadmap to implement a solution to traffic overload. You can read a full and detailed list of changes in the [3.5.0 release announcement](https://etcd.io/blog/2021/announcing-etcd-3.5/).
|
||||
|
||||
### Quality of Service for memory resources
|
||||
|
||||
Originally, Kubernetes used the v1 cgroups API. With that design, the QoS class for a `Pod` only applied to CPU resources (such as `cpu_shares`). As an alpha feature, Kubernetes v1.22 can now use the cgroups v2 API to control memory allocation and isolation. This feature is designed to improve workload and node availability when there is contention for memory resources, and to improve the predictability of container lifecycle.
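If you want to experiment with it, the alpha behaviour sits behind a kubelet feature gate (assumed here to be named `MemoryQoS`) and only takes effect on nodes running with the cgroups v2 API; a minimal KubeletConfiguration sketch:

```yaml
# Sketch: enabling the alpha memory QoS feature on a kubelet.
# Assumes the feature gate is named MemoryQoS and that the node
# is running with the cgroups v2 API.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true
```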
|
||||
|
||||
### Node system swap support
|
||||
|
||||
Every system administrator or Kubernetes user has been in the same boat regarding setting up and using Kubernetes: disable swap space. With the release of Kubernetes 1.22, alpha support is available to run nodes with swap memory. This change lets administrators opt in to configuring swap on Linux nodes, treating a portion of block storage as additional virtual memory.
|
||||
|
||||
### Windows enhancements and capabilities
|
||||
|
||||
Continuing to support the growing developer community, SIG Windows has released their [Development Environment](https://github.com/kubernetes-sigs/sig-windows-dev-tools/). These new tools support multiple CNI providers and can run on multiple platforms. There is also a new way to run bleeding-edge Windows features from scratch by compiling the Windows kubelet and kube-proxy, then using them along with daily builds of other Kubernetes components.
|
||||
|
||||
CSI support for Windows nodes moves to GA in the 1.22 release. In Kubernetes v1.22, Windows privileged containers are an alpha feature. To allow using CSI storage on Windows nodes, [CSIProxy](https://github.com/kubernetes-csi/csi-proxy) enables CSI node plugins to be deployed as unprivileged pods, using the proxy to perform privileged storage operations on the node.
|
||||
|
||||
### Default profiles for seccomp
|
||||
|
||||
An alpha feature for default seccomp profiles has been added to the kubelet, along with a new command line flag and configuration. When in use, this new feature provides cluster-wide seccomp defaults, using the `RuntimeDefault` seccomp profile rather than `Unconfined` by default. This enhances the default security of a Kubernetes deployment. Security administrators will now sleep better knowing that workloads are more secure by default. To learn more about the feature, please refer to the official [seccomp tutorial](https://kubernetes.io/docs/tutorials/clusters/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads).
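As a sketch of how a node opts into this alpha behaviour, the kubelet configuration pairs the `SeccompDefault` feature gate with the matching `seccompDefault` setting (the tutorial linked above has the authoritative steps):

```yaml
# Sketch: make RuntimeDefault the default seccomp profile for all
# workloads on this node (alpha in Kubernetes 1.22).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SeccompDefault: true
seccompDefault: true
```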
|
||||
|
||||
### More secure control plane with kubeadm
|
||||
|
||||
A new alpha feature allows running the `kubeadm` control plane components as non-root users. This is a long-requested security measure in `kubeadm`. To try it you must enable the kubeadm-specific `RootlessControlPlane` feature gate. When you deploy a cluster using this alpha feature, your control plane runs with lower privileges.
|
||||
|
||||
For `kubeadm`, Kubernetes 1.22 also brings a new [v1beta3 configuration API](/docs/reference/config-api/kubeadm-config.v1beta3/). This iteration adds some long-requested features and deprecates some existing ones. The v1beta3 version is now the preferred API version; the v1beta2 API also remains available and is not yet deprecated.
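As a rough sketch of how the two items above fit together, a v1beta3 `ClusterConfiguration` that opts into the rootless control plane gate could look like this; the Kubernetes version shown is only illustrative and other cluster details are omitted:

```yaml
# Sketch: kubeadm ClusterConfiguration using the new v1beta3 API and
# opting into the alpha RootlessControlPlane feature gate.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
featureGates:
  RootlessControlPlane: true
```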
|
||||
|
||||
## Major Changes
|
||||
|
||||
### Removal of several deprecated beta APIs
|
||||
|
||||
A number of deprecated beta APIs have been removed in 1.22 in favor of the GA version of those same APIs. All existing objects can be interacted with via stable APIs. This removal includes beta versions of the `Ingress`, `IngressClass`, `Lease`, `APIService`, `ValidatingWebhookConfiguration`, `MutatingWebhookConfiguration`, `CustomResourceDefinition`, `TokenReview`, `SubjectAccessReview`, and `CertificateSigningRequest` APIs.
|
||||
|
||||
For the full list, check out the [Deprecated API Migration Guide](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22) as well as the blog post [Kubernetes API and Feature Removals In 1.22: Here’s What You Need To Know](https://blog.k8s.io/2021/07/14/upcoming-changes-in-kubernetes-1-22/).
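As one concrete example, manifests that still request the removed `networking.k8s.io/v1beta1` version of `Ingress` must move to the stable group/version; a minimal sketch of a v1 `Ingress` (names and host are placeholders) looks like:

```yaml
# Sketch of an Ingress written against the stable networking.k8s.io/v1 API,
# which replaces the removed v1beta1 version. Names and host are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```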
|
||||
|
||||
### API changes and improvements for ephemeral containers
|
||||
|
||||
The API used to create [Ephemeral Containers](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/) changes in 1.22. The Ephemeral Containers feature is alpha and disabled by default, and the new API does not work with clients that attempt to use the old API.
|
||||
|
||||
For stable features, the kubectl tool follows the Kubernetes [version skew policy](https://kubernetes.io/releases/version-skew-policy/); however, kubectl v1.21 and older do not support the new API for ephemeral containers. If you plan to use `kubectl debug` to create ephemeral containers, and your cluster is running Kubernetes v1.22, you cannot do so with kubectl v1.21 or earlier. Please update kubectl to 1.22 if you wish to use `kubectl debug` with a mix of cluster versions.
|
||||
|
||||
## Other Updates
|
||||
|
||||
### Graduated to Stable
|
||||
|
||||
* [Bound Service Account Token Volumes](https://github.com/kubernetes/enhancements/issues/542)
|
||||
* [CSI Service Account Token](https://github.com/kubernetes/enhancements/issues/2047)
|
||||
* [Windows Support for CSI Plugins](https://github.com/kubernetes/enhancements/issues/1122)
|
||||
* [Warning mechanism for deprecated API use](https://github.com/kubernetes/enhancements/issues/1693)
|
||||
* [PodDisruptionBudget Eviction](https://github.com/kubernetes/enhancements/issues/85)
|
||||
|
||||
### Notable Feature Updates
|
||||
|
||||
* A new [PodSecurity admission](https://github.com/kubernetes/enhancements/issues/2579) alpha feature is introduced, intended as a replacement for PodSecurityPolicy (see the namespace label sketch after this list)
|
||||
* [The Memory Manager](https://github.com/kubernetes/enhancements/issues/1769) moves to beta
|
||||
* A new alpha feature to enable [API Server Tracing](https://github.com/kubernetes/enhancements/issues/647)
|
||||
* A new v1beta3 version of the [kubeadm configuration](https://github.com/kubernetes/enhancements/issues/970) format
|
||||
* [Generic data populators](https://github.com/kubernetes/enhancements/issues/1495) for PersistentVolumes are now available in alpha
|
||||
* The Kubernetes control plane will now always use the [CronJobs v2 controller](https://github.com/kubernetes/enhancements/issues/19)
|
||||
* As an alpha feature, all Kubernetes node components (including the kubelet, kube-proxy, and container runtime) can be [run as a non-root user](https://github.com/kubernetes/enhancements/issues/2033)
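For the PodSecurity admission item above, enforcement is opted into per namespace using labels once the alpha feature is enabled on the API server; a sketch, where the namespace name and chosen levels are only illustrative:

```yaml
# Sketch: opting a namespace into the alpha PodSecurity admission checks.
# The namespace name and the chosen levels are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```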
|
||||
|
||||
# Release notes
|
||||
|
||||
You can check out the full details of the 1.22 release in the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md).
|
||||
|
||||
# Availability of release
|
||||
|
||||
Kubernetes 1.22 is [available for download](https://kubernetes.io/releases/download/) and also [on the GitHub project](https://github.com/kubernetes/kubernetes/releases/tag/v1.22.0).
|
||||
|
||||
There are some great resources out there for getting started with Kubernetes. You can check out some [interactive tutorials](https://kubernetes.io/docs/tutorials/) on the main Kubernetes site, or run a local cluster on your machine using Docker containers with [kind](https://kind.sigs.k8s.io). If you’d like to try building a cluster from scratch, check out the [Kubernetes the Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) tutorial by Kelsey Hightower.
|
||||
|
||||
# Release Team
|
||||
|
||||
This release was made possible by a very dedicated group of individuals, who came together as a team to deliver technical content, documentation, code, and a host of other components that go into every Kubernetes release.
|
||||
|
||||
A huge thank you to the release lead Savitha Raghunathan for leading us through a successful release cycle, and to everyone else on the release team for supporting each other, and working so hard to deliver the 1.22 release for the community.
|
||||
|
||||
We would also like to take this opportunity to remember Peeyush Gupta, a member of our team that we lost earlier this year. Peeyush was actively involved in SIG ContribEx and the Kubernetes Release Team, most recently serving as the 1.22 Communications lead. His contributions and efforts will continue to reflect in the community he helped build. A [CNCF memorial](https://github.com/cncf/memorials/blob/main/peeyush-gupta.md) page has been created where thoughts and memories can be shared by the community.
|
||||
|
||||
# Release Logo
|
||||
|
||||

|
||||
|
||||
Amidst the ongoing pandemic, natural disasters, and ever-present shadow of burnout, the 1.22 release of Kubernetes includes 53 enhancements. This makes it the largest release to date. This accomplishment was only made possible due to the hard-working and passionate Release Team members and the amazing contributors of the Kubernetes ecosystem. The release logo is our reminder to keep reaching for new milestones and setting new records. And it is dedicated to all the Release Team members, hikers, and stargazers!
|
||||
|
||||
The logo is designed by [Boris Zotkin](https://www.instagram.com/boris.z.man/). Boris is a Mac/Linux Administrator at the MathWorks. He enjoys simple things in life and loves spending time with his family. This tech-savvy individual is always up for a challenge and happy to help a friend!
|
||||
|
||||
# User Highlights
|
||||
|
||||
- In May, the CNCF welcomed 27 new organizations across the globe as members of the diverse cloud native ecosystem. These new [members](https://www.cncf.io/announcements/2021/05/05/27-new-members-join-the-cloud-native-computing-foundation/) will participate in CNCF events, including the upcoming [KubeCon + CloudNativeCon NA in Los Angeles](https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/) from October 12 – 15, 2021.
|
||||
- The CNCF granted Spotify the [Top End User Award](https://www.cncf.io/announcements/2021/05/05/cloud-native-computing-foundation-grants-spotify-the-top-end-user-award/) during [KubeCon + CloudNativeCon EU – Virtual 2021](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/).
|
||||
|
||||
# Project Velocity
|
||||
|
||||
The [CNCF K8s DevStats project](https://k8s.devstats.cncf.io/) aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing, and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.
|
||||
|
||||
In the v1.22 release cycle, which ran for 15 weeks (April 26 to August 4), we saw contributions from [1063 companies](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.21.0%20-%20now&var-metric=contributions) and [2054 individuals](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.21.0%20-%20now&var-metric=contributions&var-repogroup_name=Kubernetes&var-country_name=All&var-companies=All).
|
||||
|
||||
# Ecosystem Updates
|
||||
|
||||
- [KubeCon + CloudNativeCon Europe 2021](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/) was held in May, the third such event to be virtual. All talks are [now available on-demand](https://www.youtube.com/playlist?list=PLj6h78yzYM2MqBm19mRz9SYLsw4kfQBrC) for anyone that would like to catch up!
|
||||
- [Spring Term LFX Program](https://www.cncf.io/blog/2021/07/13/spring-term-lfx-program-largest-graduating-class-with-28-successful-cncf-interns) had the largest graduating class with 28 successful CNCF interns!
|
||||
- CNCF launched [livestreaming on Twitch](https://www.cncf.io/blog/2021/06/03/cloud-native-community-goes-live-with-10-shows-on-twitch/) at the beginning of the year, offering an interactive media experience for anyone wanting to learn, grow, and collaborate with others in the Cloud Native community from anywhere in the world.
|
||||
|
||||
# Event Updates
|
||||
|
||||
- [KubeCon + CloudNativeCon North America 2021](https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/) will take place in Los Angeles, October 12 – 15, 2021! You can find more information about the conference and registration on the event site.
|
||||
- [Kubernetes Community Days](https://community.cncf.io/kubernetes-community-days/about-kcd/) has upcoming events scheduled in Italy, the UK, and in Washington DC.
|
||||
|
||||
# Upcoming release webinar
|
||||
|
||||
Join members of the Kubernetes 1.22 release team on October 5, 2021 to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the [event page](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-122-release/) on the CNCF Online Programs site.
|
||||
|
||||
# Get Involved
|
||||
|
||||
If you’re interested in contributing to the Kubernetes community, Special Interest Groups (SIGs) are a great starting point. Many of them may align with your interests! If there are things you’d like to share with the community, you can join the weekly community meeting, or use any of the following channels:
|
||||
|
||||
* Find out more about contributing to Kubernetes at the [Kubernetes Contributors](https://www.kubernetes.dev/) website.
|
||||
* Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates
|
||||
* Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
|
||||
* Join the community on [Slack](http://slack.k8s.io/)
|
||||
* Share your Kubernetes [story](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
|
||||
* Read more about what’s happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
|
||||
* Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team)
|
||||
|
||||
|
|
@ -0,0 +1,177 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.22: Server Side Apply moves to GA"
|
||||
date: 2021-08-06
|
||||
slug: server-side-apply-ga
|
||||
---
|
||||
|
||||
**Authors:** Jeffrey Ying, Google & Joe Betz, Google
|
||||
|
||||
Server-side Apply (SSA) has been promoted to GA in the Kubernetes v1.22 release. The GA milestone means you can depend on the feature and its API, without fear of future backwards-incompatible changes. GA features are protected by the Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/).
|
||||
|
||||
## What is Server-side Apply?
|
||||
|
||||
Server-side Apply helps users and controllers manage their resources through declarative configurations. Server-side Apply replaces the client-side apply feature implemented by `kubectl apply` with a server-side implementation, permitting use by tools/clients other than kubectl. Server-side Apply is a new merging algorithm, as well as tracking of field ownership, running on the Kubernetes API server. Server-side Apply enables new features like conflict detection, so the system knows when two actors are trying to edit the same field. Refer to the [Server-side Apply Documentation](/docs/reference/using-api/server-side-apply/) and [Beta 2 release announcement](https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/) for more information.
|
||||
|
||||
## What’s new since Beta?
|
||||
|
||||
Since the [Beta 2 release](https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/), subresource support has been added, and both client-go and Kubebuilder have added comprehensive support for Server-side Apply. This completes the Server-side Apply functionality required to make controller development practical.
|
||||
|
||||
### Support for subresources
|
||||
|
||||
Server-side Apply now fully supports subresources like `status` and `scale`. This is particularly important for [controllers](/docs/concepts/architecture/controller/), which are often responsible for writing to subresources.
|
||||
|
||||
## Server-side Apply support in client-go
|
||||
|
||||
Previously, Server-side Apply could only be called from the client-go typed client using the `Patch` function, with `PatchType` set to `ApplyPatchType`. Now, `Apply` functions are included in the client to allow for a more direct and typesafe way of calling Server-side Apply. Each `Apply` function takes an "apply configuration" type as an argument, which is a structured representation of an Apply request. For example:
|
||||
|
||||
```go
|
||||
import (
|
||||
...
|
||||
v1ac "k8s.io/client-go/applyconfigurations/autoscaling/v1"
|
||||
)
|
||||
|
||||
hpaApplyConfig := v1ac.HorizontalPodAutoscaler(autoscalerName, ns).
|
||||
WithSpec(v1ac.HorizontalPodAutoscalerSpec().
|
||||
WithMinReplicas(0)
|
||||
)
|
||||
|
||||
return hpav1client.Apply(ctx, hpaApplyConfig, metav1.ApplyOptions{FieldManager: "mycontroller", Force: true})
|
||||
```
|
||||
|
||||
Note in this example that `HorizontalPodAutoscaler` is imported from an "applyconfigurations" package. Each "apply configuration" type represents the same Kubernetes object kind as the corresponding go struct, but where all fields are pointers to make them optional, allowing apply requests to be accurately represented. For example, when the apply configuration in the above example is marshalled to YAML, it produces:
|
||||
|
||||
```yaml
|
||||
apiVersion: autoscaling/v1
|
||||
kind: HorizontalPodAutoscaler
|
||||
metadata:
|
||||
name: myHPA
|
||||
namespace: myNamespace
|
||||
spec:
|
||||
minReplicas: 0
|
||||
```
|
||||
|
||||
To understand why this is needed, consider that the above YAML cannot be produced by the `v1.HorizontalPodAutoscaler` Go struct. Take for example:
|
||||
|
||||
```go
|
||||
hpa := v1.HorizontalPodAutoscaler{
|
||||
TypeMeta: metav1.TypeMeta{
|
||||
APIVersion: "autoscaling/v1",
|
||||
Kind: "HorizontalPodAutoscaler",
|
||||
},
|
||||
ObjectMeta: ObjectMeta{
|
||||
Namespace: ns,
|
||||
Name: autoscalerName,
|
||||
},
|
||||
Spec: v1.HorizontalPodAutoscalerSpec{
|
||||
MinReplicas: pointer.Int32Ptr(0),
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
The above code attempts to declare the same apply configuration as shown in the previous examples, but when marshalled to YAML, produces:
|
||||
|
||||
```yaml
|
||||
kind: HorizontalPodAutoscaler
|
||||
apiVersion: autoscaling/v1
|
||||
metadata:
|
||||
name: myHPA
|
||||
namespace: myNamespace
|
||||
creationTimestamp: null
|
||||
spec:
|
||||
scaleTargetRef:
|
||||
kind: ""
|
||||
name: ""
|
||||
minReplicas: 0
|
||||
maxReplicas: 0
|
||||
```
|
||||
|
||||
Which, among other things, contains `spec.maxReplicas` set to `0`. This is almost certainly not what the caller intended (the intended apply configuration says nothing about the `maxReplicas` field), and could have serious consequences on a production system: it directs the autoscaler to downscale to zero pods. The problem here originates from the fact that the Go structs contain required fields that are zero-valued if not set explicitly. The Go structs work as intended for create and update operations, but are fundamentally incompatible with apply, which is why we have introduced the generated "apply configuration" types.
|
||||
|
||||
The "apply configurations" also have convenience `With<FieldName>` functions that make it easier to build apply requests. This allows developers to set fields without having to deal with the fact that all the fields in the "apply configuration" types are pointers, and are inconvenient to set using go. For example `MinReplicas: &0` is not legal go code, so without the `With` functions, developers would work around this problem by using a library, e.g. `MinReplicas: pointer.Int32Ptr(0)`, but string enumerations like `corev1.Protocol` are still a problem since they cannot be supported by a general purpose library. In addition to the convenience, the `With` functions also isolate developers from the underlying representation, which makes it safer for the underlying representation to be changed to support additional features in the future.
|
||||
|
||||
## Using Server-side Apply in a controller
|
||||
|
||||
You can use the new support for Server-side Apply no matter how you implemented your controller. However, the new client-go support makes it easier to use Server-side Apply in controllers.
|
||||
|
||||
When authoring new controllers to use Server-side Apply, a good approach is to have the controller recreate the apply configuration for an object each time it reconciles that object. This ensures that the controller fully reconciles all the fields that it is responsible for. Controllers typically should unconditionally set all the fields they own by setting `Force: true` in the `ApplyOptions`. Controllers must also provide a `FieldManager` name that is unique to the reconciliation loop that apply is called from.
|
||||
|
||||
When upgrading existing controllers to use Server-side Apply, the same approach often works well: migrate the controllers to recreate the apply configuration each time they reconcile an object. Unfortunately, the controller might have multiple code paths that update different parts of an object depending on various conditions. Migrating a controller like this to Server-side Apply can be risky, because if the controller forgets to include any fields in an apply configuration that were included in a previous apply request, a field can be accidentally deleted. To ease this type of migration, client-go apply support provides a way to replace any controller reconciliation code that performs a "read/modify-in-place/update" (or patch) workflow with an "extract/modify-in-place/apply" workflow. Here's an example of the new workflow:
|
||||
|
||||
```go
|
||||
fieldMgr := "my-field-manager"
|
||||
deploymentClient := clientset.AppsV1().Deployments("default")
|
||||
|
||||
// read, could also be read from a shared informer
|
||||
deployment, err := deploymentClient.Get(ctx, "example-deployment", metav1.GetOptions{})
|
||||
if err != nil {
|
||||
// handle error
|
||||
}
|
||||
|
||||
// extract
|
||||
deploymentApplyConfig, err := appsv1ac.ExtractDeployment(deployment, fieldMgr)
|
||||
if err != nil {
|
||||
// handle error
|
||||
}
|
||||
|
||||
// modify-in-place
|
||||
deploymentApplyConfig.Spec.Template.Spec.WithContainers(corev1ac.Container().
|
||||
WithName("modify-slice").
|
||||
WithImage("nginx:1.14.2"),
|
||||
)
|
||||
|
||||
// apply
|
||||
applied, err := deploymentClient.Apply(ctx, deploymentApplyConfig, metav1.ApplyOptions{FieldManager: fieldMgr})
|
||||
```
|
||||
|
||||
For developers using Custom Resource Definitions (CRDs), the Kubebuilder apply support will provide the same capabilities. Documentation will be included in the Kubebuilder book when available.
|
||||
|
||||
## Server-side Apply and CustomResourceDefinitions
|
||||
|
||||
It is strongly recommended that all [Custom Resource Definitions](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRDs) have a schema. CRDs without a schema are treated as unstructured data by Server-side Apply. Keys are treated as fields in a struct and lists are assumed to be atomic.
|
||||
|
||||
CRDs that specify a schema are able to specify additional annotations in the schema. Please refer to the documentation for the full list of available annotations.
|
||||
|
||||
New annotations since beta:
|
||||
|
||||
**Defaulting:** Values for fields that appliers do not express explicit interest in should be defaulted. This prevents an applier from unintentionally owning a defaulted field that might cause conflicts with other appliers. If unspecified, the default value is nil or the nil equivalent for the corresponding type.
|
||||
|
||||
- Usage: see the [CRD Defaulting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting) documentation for more details.
|
||||
- Golang: `+default=<value>`
|
||||
- OpenAPI extension: `default: <value>`
|
||||
|
||||
|
||||
Atomic for maps and structs:
|
||||
|
||||
**Maps:** By default maps are granular. A different manager is able to manage each map entry. They can also be configured to be atomic such that a single manager owns the entire map.
|
||||
|
||||
- Usage: Refer to [Merge Strategy](/docs/reference/using-api/server-side-apply/#merge-strategy) for a more detailed overview
|
||||
- Golang: `+mapType=granular/atomic`
|
||||
- OpenAPI extension: `x-kubernetes-map-type: granular/atomic`
|
||||
|
||||
**Structs:** By default structs are granular and a separate applier may own each field. For certain kinds of structs, atomicity may be desired. This is most commonly seen in small coordinate-like structs such as Field/Object/Namespace Selectors, Object References, RGB values, Endpoints (Protocol/Port pairs), etc.
|
||||
|
||||
- Usage: Refer to [Merge Strategy](/docs/reference/using-api/server-side-apply/#merge-strategy) for a more detailed overview
|
||||
- Golang: `+structType=granular/atomic`
|
||||
- OpenAPI extension: `x-kubernetes-map-type: granular/atomic`
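Put together, a fragment of a CRD's OpenAPI v3 schema that uses the annotations described above might look like the following sketch; the field names are illustrative only.

```yaml
# Sketch of CRD schema properties using the extensions described above.
# The field names are illustrative only.
properties:
  replicas:
    type: integer
    default: 1                        # defaulting
  scaleTargetRef:
    type: object
    x-kubernetes-map-type: atomic     # the whole struct is owned by one manager
    properties:
      kind:
        type: string
      name:
        type: string
  labels:
    type: object
    additionalProperties:
      type: string
    x-kubernetes-map-type: granular   # each entry may have a different manager
```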
|
||||
|
||||
## What's Next?
|
||||
|
||||
After Server-side Apply, the next focus for the API Expression working group is improving the expressiveness and size of the published Kubernetes API schema. To see the full list of items we are working on, please join our working group and refer to the work items document.
|
||||
|
||||
## How to get involved?
|
||||
|
||||
The working group for apply is [wg-api-expression](https://github.com/kubernetes/community/tree/master/wg-api-expression). It is available on Slack at [#wg-api-expression](https://kubernetes.slack.com/archives/C0123CNN8F3) and through the [mailing list](https://groups.google.com/g/kubernetes-wg-api-expression), and we also meet every other Tuesday at 9:30 PT on Zoom.
|
||||
|
||||
We would also like to take this opportunity to thank all the contributors whose hard work made this promotion to GA possible:
|
||||
|
||||
- Andrea Nodari
|
||||
- Antoine Pelisse
|
||||
- Daniel Smith
|
||||
- Jeffrey Ying
|
||||
- Jenny Buckley
|
||||
- Joe Betz
|
||||
- Julian Modesto
|
||||
- Kevin Delgado
|
||||
- Kevin Wiesmüller
|
||||
- Maria Ntalla
|
|
@ -0,0 +1,142 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'New in Kubernetes v1.22: alpha support for using swap memory'
|
||||
date: 2021-08-09
|
||||
slug: run-nodes-with-swap-alpha
|
||||
---
|
||||
|
||||
**Author:** Elana Hashman (Red Hat)
|
||||
|
||||
The 1.22 release introduced alpha support for configuring swap memory usage for
|
||||
Kubernetes workloads on a per-node basis.
|
||||
|
||||
In prior releases, Kubernetes did not support the use of swap memory on Linux,
|
||||
as it is difficult to provide guarantees and account for pod memory utilization
|
||||
when swap is involved. As part of Kubernetes' earlier design, swap support was
|
||||
considered out of scope, and a kubelet would by default fail to start if swap
|
||||
was detected on a node.
|
||||
|
||||
However, there are a number of [use cases](https://github.com/kubernetes/enhancements/blob/9d127347773ad19894ca488ee04f1cd3af5774fc/keps/sig-node/2400-node-swap/README.md#user-stories)
|
||||
that would benefit from Kubernetes nodes supporting swap, including improved
|
||||
node stability, better support for applications with high memory overhead but
|
||||
smaller working sets, the use of memory-constrained devices, and memory
|
||||
flexibility.
|
||||
|
||||
Hence, over the past two releases, [SIG Node](https://github.com/kubernetes/community/tree/master/sig-node#readme) has
|
||||
been working to gather appropriate use cases and feedback, and propose a design
|
||||
for adding swap support to nodes in a controlled, predictable manner so that
|
||||
Kubernetes users can perform testing and provide data to continue building
|
||||
cluster capabilities on top of swap. The alpha graduation of swap memory
|
||||
support for nodes is our first milestone towards this goal!
|
||||
|
||||
## How does it work?
|
||||
|
||||
There are a number of possible ways that one could envision swap use on a node.
|
||||
To keep the scope manageable for this initial implementation, when swap is
|
||||
already provisioned and available on a node, [we have proposed](https://github.com/kubernetes/enhancements/blob/9d127347773ad19894ca488ee04f1cd3af5774fc/keps/sig-node/2400-node-swap/README.md#proposal)
|
||||
the kubelet should be able to be configured such that:
|
||||
|
||||
- It can start with swap on.
|
||||
- It will direct the Container Runtime Interface to allocate zero swap memory
|
||||
to Kubernetes workloads by default.
|
||||
- You can configure the kubelet to specify swap utilization for the entire
|
||||
node.
|
||||
|
||||
Swap configuration on a node is exposed to a cluster admin via the
|
||||
[`memorySwap` in the KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/).
|
||||
As a cluster administrator, you can specify the node's behaviour in the
|
||||
presence of swap memory by setting `memorySwap.swapBehavior`.
|
||||
|
||||
This is possible through the addition of a `memory_swap_limit_in_bytes` field
|
||||
to the container runtime interface (CRI). The kubelet's config will control how
|
||||
much swap memory the kubelet instructs the container runtime to allocate to
|
||||
each container via the CRI. The container runtime will then write the swap
|
||||
settings to the container level cgroup.
|
||||
|
||||
## How do I use it?
|
||||
|
||||
On a node where swap memory is already provisioned, Kubernetes use of swap on a
|
||||
node can be enabled by enabling the `NodeSwap` feature gate on the kubelet, and
|
||||
disabling the `failSwapOn` [configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
or the `--fail-swap-on` command line flag.
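Putting those two settings together, a kubelet configuration sketch for a node where you want to try this alpha feature might look like:

```yaml
# Sketch: enable the alpha NodeSwap feature gate and allow the kubelet
# to start on a node that already has swap provisioned.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true
failSwapOn: false
```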
|
||||
|
||||
You can also optionally configure `memorySwap.swapBehavior` in order to
|
||||
specify how a node will use swap memory. For example,
|
||||
|
||||
```yaml
|
||||
memorySwap:
|
||||
swapBehavior: LimitedSwap
|
||||
```
|
||||
|
||||
The available configuration options for `swapBehavior` are:
|
||||
|
||||
- `LimitedSwap` (default): Kubernetes workloads are limited in how much swap
|
||||
they can use. Workloads on the node not managed by Kubernetes can still swap.
|
||||
- `UnlimitedSwap`: Kubernetes workloads can use as much swap memory as they
|
||||
request, up to the system limit.
|
||||
|
||||
If configuration for `memorySwap` is not specified and the feature gate is
|
||||
enabled, by default the kubelet will apply the same behaviour as the
|
||||
`LimitedSwap` setting.
|
||||
|
||||
The behaviour of the `LimitedSwap` setting depends on whether the node is running with
|
||||
v1 or v2 of control groups (also known as "cgroups"):
|
||||
|
||||
- **cgroups v1:** Kubernetes workloads can use any combination of memory and
|
||||
swap, up to the pod's memory limit, if set.
|
||||
- **cgroups v2:** Kubernetes workloads cannot use swap memory.
|
||||
|
||||
### Caveats
|
||||
|
||||
Having swap available on a system reduces predictability. Swap's performance is
|
||||
worse than regular memory, sometimes by many orders of magnitude, which can
|
||||
cause unexpected performance regressions. Furthermore, swap changes a system's
|
||||
behaviour under memory pressure, and applications cannot directly control what
|
||||
portions of their memory usage are swapped out. Since enabling swap permits
|
||||
greater memory usage for workloads in Kubernetes that cannot be predictably
|
||||
accounted for, it also increases the risk of noisy neighbours and unexpected
|
||||
packing configurations, as the scheduler cannot account for swap memory usage.
|
||||
|
||||
The performance of a node with swap memory enabled depends on the underlying
|
||||
physical storage. When swap memory is in use, performance will be significantly
|
||||
worse in an I/O operations per second (IOPS) constrained environment, such as a
|
||||
cloud VM with I/O throttling, when compared to faster storage mediums like
|
||||
solid-state drives or NVMe.
|
||||
|
||||
Hence, we do not recommend the use of swap for certain performance-constrained
|
||||
workloads or environments. Cluster administrators and developers should
|
||||
benchmark their nodes and applications before using swap in production
|
||||
scenarios, and [we need your help](#how-do-i-get-involved) with that!
|
||||
|
||||
## Looking ahead
|
||||
|
||||
The Kubernetes 1.22 release introduces alpha support for swap memory on nodes,
|
||||
and we will continue to work towards beta graduation in the 1.23 release. This
|
||||
will include:
|
||||
|
||||
* Adding support for controlling swap consumption at the Pod level via cgroups.
|
||||
* This will include the ability to set a system-reserved quantity of swap
|
||||
from what kubelet detects on the host.
|
||||
* Determining a set of metrics for node QoS in order to evaluate the
|
||||
performance and stability of nodes with and without swap enabled.
|
||||
* Collecting feedback from test use cases.
|
||||
* We will consider introducing new configuration modes for swap, such as a
|
||||
node-wide swap limit for workloads.
|
||||
|
||||
## How can I learn more?
|
||||
|
||||
You can review the current [documentation](https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory)
|
||||
on the Kubernetes website.
|
||||
|
||||
For more information, and to assist with testing and provide feedback, please
|
||||
see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
|
||||
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
|
||||
|
||||
## How do I get involved?
|
||||
|
||||
Your feedback is always welcome! SIG Node [meets regularly](https://github.com/kubernetes/community/tree/master/sig-node#meetings)
|
||||
and [can be reached](https://github.com/kubernetes/community/tree/master/sig-node#contact)
|
||||
via [Slack](https://slack.k8s.io/) (channel **#sig-node**), or the SIG's
|
||||
[mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node).
|
||||
Feel free to reach out to me, Elana Hashman (**@ehashman** on Slack and GitHub)
|
||||
if you'd like to help.
|
|
@ -0,0 +1,76 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Kubernetes 1.22: CSI Windows Support (with CSI Proxy) reaches GA'
|
||||
date: 2021-08-09
|
||||
slug: csi-windows-support-with-csi-proxy-reaches-ga
|
||||
---
|
||||
|
||||
**Authors:** Mauricio Poppe (Google), Jing Xu (Google), and Deep Debroy (Apple)
|
||||
|
||||
*The stable version of CSI Proxy for Windows has been released alongside Kubernetes 1.22. CSI Proxy enables CSI Drivers running on Windows nodes to perform privileged storage operations.*
|
||||
|
||||
## Background
|
||||
|
||||
Container Storage Interface (CSI) for Kubernetes went GA in the Kubernetes 1.13 release. CSI has become the standard for exposing block and file storage to containerized workloads on Container Orchestration systems (COs) like Kubernetes. It enables third-party storage providers to write and deploy plugins without the need to alter the core Kubernetes codebase. Legacy in-tree drivers are deprecated and new storage features are introduced in CSI, therefore it is important to get CSI Drivers to work on Windows.
|
||||
|
||||
A CSI Driver in Kubernetes has two main components: a controller plugin which runs in the control plane and a node plugin which runs on every node.
|
||||
|
||||
- The controller plugin generally does not need direct access to the host and can perform all its operations through the Kubernetes API and external control plane services.
|
||||
|
||||
- The node plugin, however, requires direct access to the host for making block devices and/or file systems available to the Kubernetes kubelet. Because containers could not previously perform privileged operations on Windows nodes, [CSI Proxy was introduced as alpha in Kubernetes 1.18](https://kubernetes.io/blog/2020/04/03/kubernetes-1-18-feature-windows-csi-support-alpha/) as a way to enable containers to perform privileged storage operations. This enables containerized CSI Drivers to run on Windows nodes.
|
||||
|
||||
## What's CSI Proxy and how do CSI drivers interact with it?
|
||||
|
||||
When a workload that uses persistent volumes is scheduled, it'll go through a sequence of steps defined in the [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md). First, the workload will be scheduled to run on a node. Then the controller component of a CSI Driver will attach the persistent volume to the node. Finally the node component of a CSI Driver will mount the persistent volume on the node.
|
||||
|
||||
The node component of a CSI Driver needs to run on Windows nodes to support Windows workloads. Various privileged operations like scanning of disk devices, mounting of file systems, etc. cannot be done from a containerized application running on Windows nodes yet ([Windows HostProcess containers](https://github.com/kubernetes/enhancements/issues/1981), introduced in Kubernetes 1.22 as alpha, enable functionality that requires host access, like the operations mentioned before). However, we can perform these operations through a binary (CSI Proxy) that's pre-installed on the Windows nodes. CSI Proxy has a client-server architecture and allows CSI Drivers to issue privileged storage operations through a gRPC interface exposed over named pipes created during the startup of CSI Proxy.
|
||||
|
||||

|
||||
|
||||
## CSI Proxy reaches GA
|
||||
|
||||
The CSI Proxy development team has worked closely with storage vendors, many of whom started integrating CSI Proxy into their CSI Drivers and provided feedback as early as the CSI Proxy design proposal. This cooperation uncovered use cases where additional APIs were needed, found bugs, and identified areas for documentation improvement.
|
||||
|
||||
The CSI Proxy design [KEP](https://github.com/kubernetes/enhancements/pull/2737) has been updated to reflect the current CSI Proxy architecture. Additional [development documentation](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/DEVELOPMENT.md) is included for contributors interested in helping with new features or bug fixes.
|
||||
|
||||
Before we reached GA we wanted to make sure that our API is simple and consistent. We went through an extensive API review of the v1beta API groups where we made sure that the CSI Proxy API methods and messages are consistent with the naming conventions defined in the [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md). As part of this effort we're graduating the [Disk](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/disk_v1.md), [Filesystem](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/filesystem_v1.md), [SMB](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/smb_v1.md) and [Volume](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/volume_v1.md) API groups to v1.
|
||||
|
||||
Additional Windows system APIs to get information from the Windows nodes and support to mount iSCSI targets in Windows nodes, are available as alpha APIs in the [System API](https://github.com/kubernetes-csi/csi-proxy/tree/v1.0.0/client/api/system/v1alpha1) and the [iSCSI API](https://github.com/kubernetes-csi/csi-proxy/tree/v1.0.0/client/api/iscsi/v1alpha2). These APIs will continue to be improved before we graduate them to v1.
|
||||
|
||||
CSI Proxy v1 is compatible with all the previous v1betaX releases. The GA `csi-proxy.exe` binary can handle requests from v1betaX clients thanks to the autogenerated conversion layer that transforms any versioned client request to a version-agnostic request that the server can process. Several [integration tests](https://github.com/kubernetes-csi/csi-proxy/tree/v1.0.0/integrationtests) were added for all the API versions of the API groups that are graduating to v1 to ensure that CSI Proxy is backwards compatible.
|
||||
|
||||
Version drift between CSI Proxy and the CSI Drivers that interact with it was also carefully considered. A [connection fallback mechanism](https://github.com/kubernetes-csi/csi-proxy/pull/124) has been provided for CSI Drivers to handle multiple versions of CSI Proxy for a smooth upgrade to v1. This allows CSI Drivers, like the GCE PD CSI Driver, [to recognize which version of the CSI Proxy binary is running](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/738) and handle multiple versions of the CSI Proxy binary deployed on the node.
|
||||
|
||||
CSI Proxy v1 is already being used by many CSI Drivers, including the [AWS EBS CSI Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/966), [Azure Disk CSI Driver](https://github.com/kubernetes-sigs/azuredisk-csi-driver/pull/919), [GCE PD CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/738), and [SMB CSI Driver](https://github.com/kubernetes-csi/csi-driver-smb/pull/319).
|
||||
|
||||
## Future plans
|
||||
|
||||
We're very excited about the future of CSI Proxy. With the upcoming [Windows HostProcess containers](https://github.com/kubernetes/enhancements/issues/1981), we are considering converting CSI Proxy into a library consumed by CSI Drivers, in addition to the current client/server design. This will allow us to iterate faster on new features because the `csi-proxy.exe` binary will no longer be needed.
|
||||
|
||||
## How to get involved?
|
||||
|
||||
This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together. Those interested in getting involved with the design and development of CSI Proxy, or any part of the Kubernetes Storage system, may join the Kubernetes Storage Special Interest Group (SIG). We’re rapidly growing and always welcome new contributors.
|
||||
|
||||
For those interested in more details about CSI support in Windows, please reach out in the [#csi-windows](https://kubernetes.slack.com/messages/csi-windows) Kubernetes Slack channel.
|
||||
|
||||
## Acknowledgments
|
||||
|
||||
CSI-Proxy received many contributions from members of the Kubernetes community. We thank all of the people that contributed to CSI Proxy with design reviews, bug reports, bug fixes, and for their continuous support in reaching this milestone:
|
||||
|
||||
- [Andy Zhang](https://github.com/andyzhangx)
|
||||
- [Dan Ilan](https://github.com/jmpfar)
|
||||
- [Deep Debroy](https://github.com/ddebroy)
|
||||
- [Humble Devassy Chirammal](https://github.com/humblec)
|
||||
- [Jing Xu](https://github.com/jingxu97)
|
||||
- [Jean Rougé](https://github.com/wk8)
|
||||
- [Jordan Liggitt](https://github.com/liggitt)
|
||||
- [Kalya Subramanian](https://github.com/ksubrmnn)
|
||||
- [Krishnakumar R](https://github.com/kkmsft)
|
||||
- [Manuel Tellez](https://github.com/manueltellez)
|
||||
- [Mark Rossetti](https://github.com/marosset)
|
||||
- [Mauricio Poppe](https://github.com/mauriciopoppe)
|
||||
- [Matthew Wong](https://github.com/wongma7)
|
||||
- [Michelle Au](https://github.com/msau42)
|
||||
- [Patrick Lang](https://github.com/PatrickLang)
|
||||
- [Saad Ali](https://github.com/saad-ali)
|
||||
- [Yuju Hong](https://github.com/yujuhong)
|
|
@ -0,0 +1,144 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes Memory Manager moves to beta"
|
||||
date: 2021-08-11
|
||||
slug: kubernetes-1-22-feature-memory-manager-moves-to-beta
|
||||
---
|
||||
|
||||
**Authors:** Artyom Lukianov (Red Hat), Cezary Zukowski (Samsung)
|
||||
|
||||
This blog post explains some of the internals of the _Memory Manager_, a beta feature
|
||||
of Kubernetes 1.22. In Kubernetes, the Memory Manager is a
|
||||
[kubelet](https://kubernetes.io/docs/concepts/overview/components/#kubelet) subcomponent.
|
||||
The Memory Manager provides guaranteed memory (and hugepages)
|
||||
allocation for pods in the `Guaranteed` [QoS class](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes).
|
||||
|
||||
This blog post covers:
|
||||
|
||||
1. [Why do you need it?](#why-do-you-need-it)
|
||||
2. [The internal details of how the **MemoryManager** works](#how-does-it-work)
|
||||
3. [Current limitations of the **MemoryManager**](#current-limitations)
|
||||
4. [Future work for the **MemoryManager**](#future-work-for-the-memory-manager)
|
||||
|
||||
## Why do you need it?
|
||||
|
||||
Some Kubernetes workloads run on nodes with
|
||||
[non-uniform memory access](https://en.wikipedia.org/wiki/Non-uniform_memory_access) (NUMA).
|
||||
Suppose you have NUMA nodes in your cluster. In that case, you'll know about the potential for extra latency when
|
||||
compute resources need to access memory in a different NUMA locality.
|
||||
|
||||
To get the best performance and latency for your workload, container CPUs,
|
||||
peripheral devices, and memory should all be aligned to the same NUMA
|
||||
locality.
|
||||
Before Kubernetes v1.22, the kubelet already provided a set of managers to
|
||||
align CPUs and PCI devices, but you did not have a way to align memory.
|
||||
The Linux kernel was able to make best-effort attempts to allocate
|
||||
memory for tasks from the same NUMA node where the container is
|
||||
executing, but without any guarantee about that placement.
|
||||
|
||||
## How does it work?
|
||||
|
||||
The Memory Manager does two main things:
|
||||
- provides the topology hint to the Topology Manager
|
||||
- allocates the memory for containers and updates the state
|
||||
|
||||
The overall sequence of the Memory Manager under the kubelet is shown below.
|
||||
|
||||

|
||||
|
||||
During the Admission phase:
|
||||
|
||||
1. When first handling a new pod, the kubelet calls the TopologyManager's `Admit()` method.
|
||||
2. The Topology Manager calls `GetTopologyHints()` for every hint provider, including the Memory Manager.
|
||||
3. The Memory Manager calculates all possible NUMA nodes combinations for every container inside the pod and returns hints to the Topology Manager.
|
||||
4. The Topology Manager calls `Allocate()` for every hint provider, including the Memory Manager.
|
||||
5. The Memory Manager allocates memory and records the allocation in its state, according to the hint that the Topology Manager chose.
|
||||
|
||||
During Pod creation:
|
||||
|
||||
1. The kubelet calls `PreCreateContainer()`.
|
||||
2. For each container, the Memory Manager looks up the NUMA nodes where it allocated the
|
||||
memory for the container and then returns that information to the kubelet.
|
||||
3. The kubelet creates the container, via CRI, using a container specification
|
||||
that incorporates the information from the Memory Manager.
|
||||
|
||||
### Let's talk about the configuration
|
||||
|
||||
By default, the Memory Manager runs with the `None` policy, meaning it will just
|
||||
relax and not do anything. To make use of the Memory Manager, you should set
|
||||
two command line options for the kubelet:
|
||||
|
||||
- `--memory-manager-policy=Static`
|
||||
- `--reserved-memory="<numaNodeID>:<resourceName>=<quantity>"`
|
||||
|
||||
The value for `--memory-manager-policy` is straightforward: `Static`. Deciding what to specify for `--reserved-memory` takes more thought. To configure it correctly, you should follow two main rules:
|
||||
|
||||
- The amount of reserved memory for the `memory` resource must be greater than zero.
|
||||
- The amount of reserved memory for the resource type must be equal
|
||||
to the amount reserved via the [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
|
||||
configuration (`kube-reserved + system-reserved + eviction-hard`) for that resource.
|
||||
You can read more about memory reservations in [Reserve Compute Resources for System Daemons](/docs/tasks/administer-cluster/reserve-compute-resources/).
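As a sketch of how these rules fit together, here is a hypothetical set of kubelet flags; the quantities and the choice of NUMA node 0 are purely illustrative, not recommendations:

```bash
# Illustrative kubelet flags (values are examples only):
#   kube-reserved memory:   500Mi
#   system-reserved memory: 512Mi
#   eviction-hard memory:   100Mi
#   total to reserve:       1112Mi, assigned here to NUMA node 0
--kube-reserved=memory=500Mi
--system-reserved=memory=512Mi
--eviction-hard="memory.available<100Mi"
--memory-manager-policy=Static
--reserved-memory="0:memory=1112Mi"
```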
|
||||
|
||||

|
||||
|
||||
## Current limitations
|
||||
|
||||
The 1.22 release and promotion to beta brings along enhancements and fixes, but the Memory Manager still has several limitations.
|
||||
|
||||
### Single vs Cross NUMA node allocation
|
||||
|
||||
A NUMA node cannot have both single and cross NUMA node allocations. When container memory is pinned to two or more NUMA nodes, we cannot know from which NUMA node the container will consume the memory.
|
||||
|
||||

|
||||
|
||||
1. `container1` starts on NUMA node 0 and requests *5Gi* of memory, but is currently consuming only *3Gi*.
|
||||
2. For `container2`, the memory request is *10Gi*, and no single NUMA node can satisfy it.
|
||||
3. `container2` consumes *3.5Gi* of memory from NUMA node 0; once `container1` requires more memory, there will not be enough, and the kernel will kill one of the containers with an *OOM* error.
|
||||
|
||||
To prevent such issues, the Memory Manager fails the admission of `container2` until the machine has two NUMA nodes without any single NUMA node allocation.
|
||||
|
||||
### Works only for Guaranteed pods
|
||||
|
||||
The Memory Manager cannot guarantee memory allocation for Burstable pods,
|
||||
even when the Burstable pod has specified an equal memory limit and request.
|
||||
|
||||
Let's assume you have two Burstable pods: `pod1` has containers with
|
||||
equal memory request and limits, and `pod2` has containers only with a
|
||||
memory request set. You want to guarantee memory allocation for the `pod1`.
|
||||
To the Linux kernel, processes in either pod have the same *OOM score*;
|
||||
once the kernel finds that it does not have enough memory, it can kill
|
||||
processes that belong to pod `pod1`.
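For reference, a Pod only falls into the `Guaranteed` QoS class (and is therefore handled by the Memory Manager) when every container sets CPU and memory limits equal to its requests. A minimal sketch of such a pod, with illustrative names and sizes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example      # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "2"                # limits equal to requests => Guaranteed QoS
        memory: 2Gi
```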
|
||||
|
||||
### Memory fragmentation
|
||||
|
||||
The sequence of Pods and containers that start and stop can fragment the memory on NUMA nodes.
|
||||
The current implementation of the Memory Manager does not have any mechanism to rebalance pods and defragment memory.
|
||||
|
||||
## Future work for the Memory Manager
|
||||
|
||||
We do not want to stop with the current state of the Memory Manager and are looking to
|
||||
make improvements, including in the following areas.
|
||||
|
||||
### Make the Memory Manager allocation algorithm smarter
|
||||
|
||||
The current algorithm ignores distances between NUMA nodes during the
|
||||
calculation of the allocation. If same-node placement isn't available, we can still
|
||||
provide better performance compared to the current implementation, by changing the
|
||||
Memory Manager to prefer the closest NUMA nodes for cross-node allocation.
|
||||
|
||||
### Reduce the number of admission errors
|
||||
|
||||
The default Kubernetes scheduler is not aware of the node's NUMA topology, and that can be a reason for many admission errors during pod start.
|
||||
We're hoping to add a KEP (Kubernetes Enhancement Proposal) to cover improvements in this area.
|
||||
Follow [Topology aware scheduler plugin in kube-scheduler](https://github.com/kubernetes/enhancements/issues/2044) to see how this idea progresses.
|
||||
|
||||
|
||||
## Conclusion
|
||||
With the promotion of the Memory Manager to beta in 1.22, we encourage everyone to give it a try and look forward to any feedback you may have. While there are still several limitations, we have a set of enhancements planned to address them and look forward to providing you with many new features in upcoming releases.
|
||||
If you have ideas for additional enhancements or a desire for certain features, please let us know. The team is always open to suggestions to enhance and improve the Memory Manager.
|
||||
We hope you have found this blog informative and helpful! Let us know if you have any questions or comments.
|
||||
|
||||
You can contact us via:
|
||||
- The Kubernetes [#sig-node](https://kubernetes.slack.com/messages/sig-node)
|
||||
channel in Slack (visit https://slack.k8s.io/ for an invitation if you need one)
|
||||
- The SIG Node mailing list, [kubernetes-sig-node@googlegroups.com](https://groups.google.com/g/kubernetes-sig-node)
|
After Width: | Height: | Size: 71 KiB |
|
@ -0,0 +1,79 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Alpha in v1.22: Windows HostProcess Containers'
|
||||
date: 2021-08-16
|
||||
slug: windows-hostprocess-containers
|
||||
---
|
||||
|
||||
**Author:** Brandon Smith (Microsoft)
|
||||
|
||||
Kubernetes v1.22 introduced a new alpha feature for clusters that
|
||||
include Windows nodes: HostProcess containers.
|
||||
|
||||
HostProcess containers aim to extend the Windows container model to enable a wider
|
||||
range of Kubernetes cluster management scenarios. HostProcess containers run
|
||||
directly on the host and maintain behavior and access similar to that of a regular
|
||||
process. With HostProcess containers, users can package and distribute management
|
||||
operations and functionalities that require host access while retaining versioning
|
||||
and deployment methods provided by containers. This allows Windows containers to
|
||||
be used for a variety of device plugin, storage, and networking management scenarios
|
||||
in Kubernetes. With this comes the enablement of host network mode—allowing
|
||||
HostProcess containers to be created within the host's network namespace instead of
|
||||
their own. HostProcess containers can also be built on top of existing Windows Server
|
||||
2019 (or later) base images, managed through the Windows container runtime, and run
|
||||
as any user that is available on or in the domain of the host machine.
|
||||
|
||||
Linux privileged containers are currently used for a variety of key scenarios in
|
||||
Kubernetes, including kube-proxy (via kubeadm), storage, and networking scenarios.
|
||||
Support for these scenarios in Windows previously required workarounds via proxies
|
||||
or other implementations. Using HostProcess containers, cluster operators no longer
|
||||
need to log onto and individually configure each Windows node for administrative
|
||||
tasks and management of Windows services. Operators can now utilize the container
|
||||
model to deploy management logic to as many clusters as needed with ease.
|
||||
|
||||
## How does it work?
|
||||
|
||||
Windows HostProcess containers are implemented with Windows _Job Objects_, a break from the
|
||||
previous container model using server silos. Job objects are components of the Windows OS which offer the ability to
|
||||
manage a group of processes as a unit (a.k.a. a _job_) and assign resource constraints to the
|
||||
group as a whole. Job objects are specific to the Windows OS and are not associated with the Kubernetes [Job API](https://kubernetes.io/docs/concepts/workloads/controllers/job/). They have no process or file system isolation,
|
||||
enabling the privileged payload to view and edit the host file system with the
|
||||
correct permissions, among other host resources. The init process, and any processes
|
||||
it launches or that are explicitly launched by the user, are all assigned to the
|
||||
job object of that container. When the init process exits or is signaled to exit,
|
||||
all the processes in the job will be signaled to exit, the job handle will be
|
||||
closed and the storage will be unmounted.
|
||||
|
||||
HostProcess and Linux privileged containers enable similar scenarios but differ
|
||||
greatly in their implementation (hence the naming difference). HostProcess containers
|
||||
have their own pod security policies. Those used to configure Linux privileged
|
||||
containers **do not** apply. Enabling privileged access to a Windows host is a
|
||||
fundamentally different process than with Linux so the configuration and
|
||||
capabilities of each differ significantly. Below is a diagram detailing the
|
||||
overall architecture of Windows HostProcess containers:
|
||||
|
||||
{{< figure src="hostprocess-architecture.png" alt="HostProcess Architecture" >}}
|
||||
|
||||
## How do I use it?
|
||||
|
||||
HostProcess containers can be run from within a
|
||||
[HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod).
|
||||
With the feature enabled on Kubernetes version 1.22, a containerd container runtime of
|
||||
1.5.4 or higher, and the latest version of hcsshim, deploying a pod spec with the
|
||||
[correct HostProcess configuration](/docs/tasks/configure-pod-container/create-hostprocess-pod/#before-you-begin)
|
||||
will enable you to run HostProcess containers. To get started with running
|
||||
Windows containers, see the general guidance for [Windows in Kubernetes](/docs/setup/production-environment/windows/).
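As a rough sketch of what such a pod spec looks like, HostProcess mode is requested through the Windows-specific fields of the pod-level `securityContext`; the image, user, and command below are placeholders, and the linked task page is the authoritative reference:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example                  # illustrative name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                      # run the containers directly on the host
      runAsUserName: "NT AUTHORITY\\SYSTEM"  # any user available on the host or in its domain
  hostNetwork: true                          # HostProcess pods must use the host's network namespace
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: hostprocess-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # placeholder base image
    command: ["powershell.exe", "-Command", "Get-Service"]
```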
|
||||
|
||||
## How can I learn more?
|
||||
|
||||
- Work through [Create a Windows HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod/)
|
||||
|
||||
- Read about Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
|
||||
|
||||
- Read the enhancement proposal [Windows Privileged Containers and Host Networking Mode](https://github.com/kubernetes/enhancements/tree/master/keps/sig-windows/1981-windows-privileged-container-support) (KEP-1981)
|
||||
|
||||
## How do I get involved?
|
||||
|
||||
HostProcess containers are in active development. SIG Windows welcomes suggestions from the community.
|
||||
Get involved with [SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows)
|
||||
to contribute!
|
|
@ -0,0 +1,267 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Enable seccomp for all workloads with a new v1.22 alpha feature"
|
||||
date: 2021-08-25
|
||||
slug: seccomp-default
|
||||
---
|
||||
|
||||
**Author:** Sascha Grunert, Red Hat
|
||||
|
||||
This blog post is about a new Kubernetes feature introduced in v1.22, which adds
|
||||
an additional security layer on top of the existing seccomp support. Seccomp is
|
||||
a security mechanism for Linux processes to filter system calls (syscalls) based
|
||||
on a set of defined rules. Applying seccomp profiles to containerized workloads
|
||||
is one of the key tasks when it comes to enhancing the security of the
|
||||
application deployment. Developers, site reliability engineers and
|
||||
infrastructure administrators have to work hand in hand to create, distribute
|
||||
and maintain the profiles over the application's life-cycle.
|
||||
|
||||
You can use the [`securityContext`][seccontext] field of Pods and their
|
||||
containers to adjust security-related configuration of the
|
||||
workload. Kubernetes introduced dedicated [seccomp related API
|
||||
fields][seccontext] in this `SecurityContext` with the [graduation of seccomp to
|
||||
General Availability (GA)][ga] in v1.19.0. This enhancement allowed an easier
|
||||
way to specify if the whole pod or a specific container should run as:
|
||||
|
||||
[seccontext]: /docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1
|
||||
[ga]: https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/#graduated-to-stable
|
||||
|
||||
- `Unconfined`: seccomp will not be enabled
|
||||
- `RuntimeDefault`: the container runtime's default profile will be used
|
||||
- `Localhost`: a node-local profile will be applied, referenced
|
||||
by a relative path to the seccomp profile root (`<kubelet-root-dir>/seccomp`)
|
||||
of the kubelet
|
||||
|
||||
With the graduation of seccomp, nothing has changed from an overall security
|
||||
perspective, because `Unconfined` is still the default. This is totally fine if
|
||||
you consider this from the upgrade path and backwards compatibility perspective of
|
||||
Kubernetes releases. But it also means that it is more likely that a workload
|
||||
runs without seccomp at all, which should be fixed in the long term.
|
||||
|
||||
## `SeccompDefault` to the rescue
|
||||
|
||||
Kubernetes v1.22.0 introduces a new kubelet [feature gate][gate]
|
||||
`SeccompDefault`, which has been added in `alpha` state as every other new
|
||||
feature. This means that it is disabled by default and can be enabled manually
|
||||
for every single Kubernetes node.
|
||||
|
||||
[gate]: /docs/reference/command-line-tools-reference/feature-gates
|
||||
|
||||
What does the feature do? Well, it just changes the default seccomp profile from
|
||||
`Unconfined` to `RuntimeDefault`. If not specified differently in the pod
|
||||
manifest, then the feature will add a higher set of security constraints by
|
||||
using the default profile of the container runtime. These profiles may differ
|
||||
between runtimes like [CRI-O][crio] or [containerd][ctrd]. They also differ
|
||||
across hardware architectures. But generally speaking, those default profiles
|
||||
allow a common set of syscalls while blocking the more dangerous ones, which
|
||||
are unlikely or unsafe to be used in a containerized application.
|
||||
|
||||
[crio]: https://github.com/cri-o/cri-o/blob/fe30d62/vendor/github.com/containers/common/pkg/seccomp/default_linux.go#L45
|
||||
[ctrd]: https://github.com/containerd/containerd/blob/e1445df/contrib/seccomp/seccomp_default.go#L51
|
||||
|
||||
### Enabling the feature
|
||||
|
||||
Two kubelet configuration changes have to be made to enable the feature:
|
||||
|
||||
1. **Enable the feature gate** by setting `SeccompDefault=true` via the command
|
||||
line (`--feature-gates`) or the [kubelet configuration][kubelet] file.
|
||||
2. **Turn on the feature** by adding the
|
||||
`--seccomp-default` command line flag or via the [kubelet
|
||||
configuration][kubelet] file (`seccompDefault: true`).
|
||||
|
||||
[kubelet]: /docs/tasks/administer-cluster/kubelet-config-file
|
||||
|
||||
The kubelet will error on startup if only one of the above steps has been done.
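Assuming you configure the kubelet via its configuration file, a minimal sketch covering both steps could look like this (merge it into your existing configuration rather than replacing it):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SeccompDefault: true   # step 1: enable the feature gate
seccompDefault: true     # step 2: turn the feature on
```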
|
||||
|
||||
### Trying it out
|
||||
|
||||
If the feature is enabled on a node, then you can create a new workload like
|
||||
this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container
|
||||
image: nginx:1.21
|
||||
```
|
||||
|
||||
Now it is possible to inspect the used seccomp profile by using
|
||||
[`crictl`][crictl] while investigating the container's [runtime
|
||||
specification][rspec]:
|
||||
|
||||
[crictl]: https://github.com/kubernetes-sigs/cri-tools
|
||||
[rspec]: https://github.com/opencontainers/runtime-spec/blob/0c021c1/config-linux.md#seccomp
|
||||
|
||||
```bash
|
||||
CONTAINER_ID=$(sudo crictl ps -q --name=test-container)
|
||||
sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"defaultAction": "SCMP_ACT_ERRNO",
|
||||
"architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"],
|
||||
"syscalls": [
|
||||
{
|
||||
"names": ["_llseek", "_newselect", "accept", …, "write", "writev"],
|
||||
"action": "SCMP_ACT_ALLOW"
|
||||
},
|
||||
…
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
You can see that the lower level container runtime ([CRI-O][crio-home] and
|
||||
[runc][runc] in our case), successfully applied the default seccomp profile.
|
||||
This profile denies all syscalls by default, while allowing commonly used ones
|
||||
like [`accept`][accept] or [`write`][write].
|
||||
|
||||
[crio-home]: https://github.com/cri-o/cri-o
|
||||
[runc]: https://github.com/opencontainers/runc
|
||||
[accept]: https://man7.org/linux/man-pages/man2/accept.2.html
|
||||
[write]: https://man7.org/linux/man-pages/man2/write.2.html
|
||||
|
||||
Please note that the feature will not influence any Kubernetes API for now.
|
||||
Therefore, it is not possible to retrieve the used seccomp profile via `kubectl get`
|
||||
or `kubectl describe` if the [`SeccompProfile`][api] field is unset within the
|
||||
`SecurityContext`.
|
||||
|
||||
[api]: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1
|
||||
|
||||
The feature also works when using multiple containers within a pod, for example
|
||||
if you create a pod like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pod
|
||||
spec:
|
||||
containers:
|
||||
- name: test-container-nginx
|
||||
image: nginx:1.21
|
||||
securityContext:
|
||||
seccompProfile:
|
||||
type: Unconfined
|
||||
- name: test-container-redis
|
||||
image: redis:6.2
|
||||
```
|
||||
|
||||
then you should see that the `test-container-nginx` runs without a seccomp profile:
|
||||
|
||||
```bash
|
||||
sudo crictl inspect $(sudo crictl ps -q --name=test-container-nginx) |
|
||||
jq '.info.runtimeSpec.linux.seccomp == null'
|
||||
true
|
||||
```
|
||||
|
||||
Whereas the container `test-container-redis` runs with `RuntimeDefault`:
|
||||
|
||||
```bash
|
||||
sudo crictl inspect $(sudo crictl ps -q --name=test-container-redis) |
|
||||
jq '.info.runtimeSpec.linux.seccomp != null'
|
||||
true
|
||||
```
|
||||
|
||||
The same applies to the pod itself, which also runs with the default profile:
|
||||
|
||||
```bash
|
||||
sudo crictl inspectp $(sudo crictl pods -q --name test-pod) |
|
||||
jq '.info.runtimeSpec.linux.seccomp != null'
|
||||
true
|
||||
```
|
||||
|
||||
### Upgrade strategy
|
||||
|
||||
It is recommended to enable the feature in multiple steps, where different
|
||||
risks and mitigations exist for each one.
|
||||
|
||||
#### Feature gate enabling
|
||||
|
||||
Enabling the feature gate at the kubelet level will not turn on the feature, but
|
||||
will make it possible by using the `SeccompDefault` kubelet configuration or the
|
||||
`--seccomp-default` CLI flag. This can be done by an administrator for the whole
|
||||
cluster or only a set of nodes.
|
||||
|
||||
#### Testing the Application
|
||||
|
||||
If you're trying this within a dedicated test environment, you have to ensure
|
||||
that the application code does not trigger syscalls blocked by the
|
||||
`RuntimeDefault` profile before enabling the feature on a node. This can be done
|
||||
by:
|
||||
|
||||
- _Recommended_: Analyzing the code (manually or by running the application with
|
||||
[strace][strace]) for any executed syscalls which may be blocked by the
|
||||
default profiles. If that's the case, then you can override the default by
|
||||
explicitly setting the pod or container to run as `Unconfined`. Alternatively,
|
||||
you can create a custom seccomp profile based on the default by adding the
|
||||
additional syscalls to the `"action": "SCMP_ACT_ALLOW"` section (see the
|
||||
optional steps below).
|
||||
|
||||
- _Recommended_: Manually set the profile to the target workload and use a
|
||||
rolling upgrade to deploy into production. Rollback the deployment if the
|
||||
application does not work as intended.
|
||||
|
||||
- _Optional_: Run the application against an end-to-end test suite to trigger
|
||||
all relevant code paths with `RuntimeDefault` enabled. If a test fails, use
|
||||
the same mitigation as mentioned above.
|
||||
|
||||
- _Optional_: Create a custom seccomp profile based on the default and change
|
||||
its default action from `SCMP_ACT_ERRNO` to `SCMP_ACT_LOG`. This means that
|
||||
the seccomp filter for unknown syscalls will have no effect on the application
|
||||
at all, but the system logs will now indicate which syscalls may be blocked.
|
||||
This requires at least Linux kernel version 4.14 as well as a recent [runc][runc]
|
||||
release. Monitor the application host's audit logs (defaults to
|
||||
`/var/log/audit/audit.log`) or syslog entries (defaults to `/var/log/syslog`)
|
||||
for syscalls via `type=SECCOMP` (for audit) or `type=1326` (for syslog).
|
||||
Compare the syscall ID with those [listed in the Linux Kernel
|
||||
sources][syscalls] and add them to the custom profile. Be aware that custom
|
||||
audit policies may lead to missing syscalls, depending on the configuration
|
||||
of auditd.
|
||||
|
||||
- _Optional_: Use cluster additions like the [Security Profiles Operator][spo]
|
||||
for profiling the application via its [log enrichment][logs] capabilities or
|
||||
recording a profile by using its [recording feature][rec]. This makes the
|
||||
above mentioned manual log investigation obsolete.
|
||||
|
||||
[syscalls]: https://github.com/torvalds/linux/blob/7bb7f2a/arch/x86/entry/syscalls/syscall_64.tbl
|
||||
[spo]: https://github.com/kubernetes-sigs/security-profiles-operator
|
||||
[logs]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/c90ef3a/installation-usage.md#using-the-log-enricher
|
||||
[rec]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/c90ef3a/installation-usage.md#record-profiles-from-workloads-with-profilerecordings
|
||||
[strace]: https://man7.org/linux/man-pages/man1/strace.1.html
|
||||
|
||||
#### Deploying the modified application
|
||||
|
||||
Based on the outcome of the application tests, it may be required to change the
|
||||
application deployment by either specifying `Unconfined` or a custom seccomp
|
||||
profile. This is not the case if the application works as intended with
|
||||
`RuntimeDefault`.
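For example, pinning a workload to a custom node-local profile is done via the `securityContext`; the profile path below is an illustrative example, resolved relative to the kubelet's seccomp profile root:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-profile-pod      # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      # Illustrative path under <kubelet-root-dir>/seccomp on the node:
      localhostProfile: profiles/my-app.json
  containers:
  - name: app
    image: nginx:1.21
```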
|
||||
|
||||
#### Enable the kubelet configuration
|
||||
|
||||
If everything went well, then the feature is ready to be enabled by the kubelet
|
||||
configuration or its corresponding CLI flag. This should be done on a per-node
|
||||
basis to reduce the overall risk of missing a syscall during the investigations
|
||||
when running the application tests. If it's possible to monitor audit logs
|
||||
within the cluster, then it's recommended to do this for eventually missed
|
||||
seccomp events. If the application works as intended then the feature can be
|
||||
enabled for further nodes within the cluster.
|
||||
|
||||
## Conclusion
|
||||
|
||||
Thank you for reading this blog post! I hope you enjoyed seeing how the usage of
|
||||
seccomp profiles has evolved in Kubernetes over the past releases as much
|
||||
as I did. On your own cluster, change the default seccomp profile to
|
||||
`RuntimeDefault` (using this new feature) and see the security benefits, and, of
|
||||
course, feel free to reach out any time for feedback or questions.
|
||||
|
||||
---
|
||||
|
||||
_Editor's note: If you have any questions or feedback about this blog post, feel
|
||||
free to reach out via the [Kubernetes slack in #sig-node][slack]._
|
||||
|
||||
[slack]: https://kubernetes.slack.com/messages/sig-node
|
|
@ -0,0 +1,48 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Minimum Ready Seconds for StatefulSets'
|
||||
date: 2021-08-27
|
||||
slug: minreadyseconds-statefulsets
|
||||
---
|
||||
|
||||
**Authors:** Ravi Gudimetla (Red Hat), Maciej Szulik (Red Hat)
|
||||
|
||||
This blog describes the notion of Availability for `StatefulSet` workloads, and a new alpha feature in Kubernetes 1.22 which adds `minReadySeconds` configuration for `StatefulSets`.
|
||||
|
||||
## What problems does this solve?
|
||||
|
||||
Prior to the Kubernetes 1.22 release, once a `StatefulSet` `Pod` was in the `Ready` state it was considered `Available` to receive traffic. For some `StatefulSet` workloads, that may not be the case. For example, with a workload like Prometheus running multiple instances of Alertmanager, an Alertmanager instance should be considered `Available` only when its state transfer is complete, not as soon as the `Pod` is `Ready`. Since `minReadySeconds` adds a buffer, the state transfer may be complete before the `Pod` becomes `Available`. While this is not a foolproof way of identifying whether the state transfer is complete, it gives end users a way to express their intention of waiting for some time before the `Pod` is considered `Available` and ready to serve requests.
|
||||
|
||||
Another case where `minReadySeconds` helps is when using `LoadBalancer` `Services` with cloud providers. Since `minReadySeconds` adds latency after a `Pod` is `Ready`, it provides buffer time to prevent killing pods in rotation before new pods show up. Imagine a load balancer that, in an unhappy path, takes 10-15s to propagate changes. If you have 2 replicas, you'd want to kill the second replica only after the first one is truly up; but in reality, the first replica cannot yet be seen by the load balancer because it is not ready to serve requests.
|
||||
|
||||
So, in general, the notion of `Availability` in `StatefulSets` is pretty useful and this feature helps in solving the above problems. This is a feature that already exists for `Deployments` and `DaemonSets`, and we now have it for `StatefulSets` too, to give users a consistent workload experience.
|
||||
|
||||
|
||||
## How does it work?
|
||||
|
||||
The StatefulSet controller watches for both `StatefulSets` and the `Pods` associated with them. When the feature gate associated with this feature is enabled, the StatefulSet controller identifies how long a particular `Pod` associated with a `StatefulSet` has been in the `Running` state.
|
||||
|
||||
If this value is greater than or equal to the time specified by the end user in the `.spec.minReadySeconds` field, the StatefulSet controller updates a field called `availableReplicas` in the `StatefulSet`'s status subresource to include this `Pod`. The `status.availableReplicas` in `StatefulSet`'s status is an integer field which tracks the number of pods that are `Available`.
|
||||
|
||||
## How do I use it?
|
||||
|
||||
You are required to prepare the following things in order to try out the feature:
|
||||
|
||||
- Download and install kubectl version v1.22.0 or greater
|
||||
- Switch on the feature gate with the command line flag `--feature-gates=StatefulSetMinReadySeconds=true` on `kube-apiserver` and `kube-controller-manager`
|
||||
|
||||
After successfully starting `kube-apiserver` and `kube-controller-manager`, you will see `availableReplicas` in the StatefulSet's status and `minReadySeconds` in the spec (with a default value of 0).
|
||||
|
||||
Specify a value for `minReadySeconds` for any StatefulSet and you can check whether `Pods` are available by checking the `availableReplicas` field using:
|
||||
`kubectl get statefulset/<name_of_the_statefulset> -o yaml`
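As a minimal sketch, the field sits directly under the StatefulSet `spec`; the names and the 10-second value below are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                  # illustrative name
spec:
  serviceName: web
  replicas: 2
  minReadySeconds: 10        # Pods must stay Ready for 10s before they count as Available
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```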
|
||||
|
||||
## How can I learn more?
|
||||
|
||||
- Read the KEP: [minReadySeconds for StatefulSets](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2599-minreadyseconds-for-statefulsets#readme)
|
||||
- Read the documentation: [Minimum ready seconds](/docs/concepts/workloads/controllers/statefulset/#minimum-ready-seconds) for StatefulSet
|
||||
- Review the [API definition](/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/) for StatefulSet
|
||||
|
||||
## How do I get involved?
|
||||
|
||||
Please reach out to us in the [#sig-apps](https://kubernetes.slack.com/archives/C18NZM5K9) channel on Slack (visit https://slack.k8s.io/ for an invitation if you need one), or on the SIG Apps mailing list: kubernetes-sig-apps@googlegroups.com
|
||||
|
|
@ -0,0 +1,219 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.22: A New Design for Volume Populators"
|
||||
date: 2021-08-30
|
||||
slug: volume-populators-redesigned
|
||||
---
|
||||
|
||||
**Authors:**
|
||||
Ben Swartzlander (NetApp)
|
||||
|
||||
Kubernetes v1.22, released earlier this month, introduced a redesigned approach for volume
|
||||
populators. Originally implemented
|
||||
in v1.18, the API suffered from backwards compatibility issues. Kubernetes v1.22 includes a new API
|
||||
field called `dataSourceRef` that fixes these problems.
|
||||
|
||||
## Data sources
|
||||
|
||||
Earlier Kubernetes releases already added a `dataSource` field into the
|
||||
[PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) API,
|
||||
used for cloning volumes and creating volumes from snapshots. You could use the `dataSource` field when
|
||||
creating a new PVC, referencing either an existing PVC or a VolumeSnapshot in the same namespace.
|
||||
That also modified the normal provisioning process so that instead of yielding an empty volume, the
|
||||
new PVC contained the same data as either the cloned PVC or the cloned VolumeSnapshot.
|
||||
|
||||
Volume populators embrace the same design idea, but extend it to any type of object, as long
|
||||
as there exists a [custom resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
to define the data source, and a populator controller to implement the logic. Initially,
|
||||
the `dataSource` field was directly extended to allow arbitrary objects, if the `AnyVolumeDataSource`
|
||||
feature gate was enabled on a cluster. That change unfortunately caused backwards compatibility
|
||||
problems, and so the new `dataSourceRef` field was born.
|
||||
|
||||
In v1.22 if the `AnyVolumeDataSource` feature gate is enabled, the `dataSourceRef` field is
|
||||
added, which behaves similarly to the `dataSource` field except that it allows arbitrary
|
||||
objects to be specified. The API server ensures that the two fields always have the same
|
||||
contents, and neither of them is mutable. The difference is that at creation time
|
||||
`dataSource` allows only PVCs or VolumeSnapshots, and ignores all other values, while
|
||||
`dataSourceRef` allows most types of objects, and in the few cases it doesn't allow an
|
||||
object (core objects other than PVCs) a validation error occurs.
|
||||
|
||||
When this API change graduates to stable, we would deprecate using `dataSource` and recommend
|
||||
using the `dataSourceRef` field for all use cases.
|
||||
In the v1.22 release, `dataSourceRef` is available (as an alpha feature) specifically for cases
|
||||
where you want to use custom volume populators.
|
||||
|
||||
## Using populators
|
||||
|
||||
Every volume populator must have one or more CRDs that it supports. Administrators may
|
||||
install the CRD and the populator controller, and then any PVC whose `dataSourceRef` specifies
|
||||
a CR of a type that the populator supports will be handled by the populator controller
|
||||
rather than by the CSI driver directly.
|
||||
|
||||
Underneath the covers, the CSI driver is still invoked to create an empty volume, which
|
||||
the populator controller fills with the appropriate data. The PVC doesn't bind to the PV
|
||||
until it's fully populated, so it's safe to define a whole application manifest including
|
||||
pod and PVC specs and the pods won't begin running until everything is ready, just as if
|
||||
the PVC was a clone of another PVC or VolumeSnapshot.
|
||||
|
||||
## How it works
|
||||
|
||||
PVCs with data sources are still noticed by the external-provisioner sidecar for the
|
||||
related storage class (assuming a CSI provisioner is used), but because the sidecar
|
||||
doesn't understand the data source kind, it doesn't do anything. The populator controller
|
||||
is also watching for PVCs with data sources of a kind that it understands and when it
|
||||
sees one, it creates a temporary PVC of the same size, volume mode, storage class,
|
||||
and even on the same topology (if topology is used) as the original PVC. The populator
|
||||
controller creates a worker pod that attaches to the volume and writes the necessary
|
||||
data to it, then detaches from the volume and the populator controller rebinds the PV
|
||||
from the temporary PVC to the original PVC.
|
||||
|
||||
## Trying it out
|
||||
|
||||
The following things are required to use volume populators:
|
||||
* Enable the `AnyVolumeDataSource` feature gate
|
||||
* Install a CRD for the specific data source / populator
|
||||
* Install the populator controller itself
|
||||
|
||||
Populator controllers may use the [lib-volume-populator](https://github.com/kubernetes-csi/lib-volume-populator)
|
||||
library to do most of the Kubernetes API level work. Individual populators only need to
|
||||
provide logic for actually writing data into the volume based on a particular CR
|
||||
instance. This library provides a sample populator implementation.
|
||||
|
||||
These optional components improve user experience:
|
||||
* Install the VolumePopulator CRD
|
||||
* Create a VolumePopulator custom resource for each specific data source
|
||||
* Install the [volume data source validator](https://github.com/kubernetes-csi/volume-data-source-validator)
|
||||
controller (alpha)
|
||||
|
||||
The purpose of these components is to generate warning events on PVCs with data sources
|
||||
for which there is no populator.
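For illustration, the VolumePopulator object is essentially just a registration that maps a data source kind to a known populator. A sketch for a hypothetical populator handling `Hello` resources (like the sample used below) might look roughly like this, with the exact API version depending on the validator release you install:

```yaml
apiVersion: populator.storage.k8s.io/v1beta1   # check the version shipped with the validator
kind: VolumePopulator
metadata:
  name: hello-populator        # illustrative name
sourceKind:
  group: hello.k8s.io          # API group of the data source custom resource
  kind: Hello                  # kind of the data source custom resource
```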
|
||||
|
||||
## Putting it all together
|
||||
|
||||
To see how this works, you can install the sample "hello" populator and try it
|
||||
out.
|
||||
|
||||
First install the volume-data-source-validator controller.
|
||||
|
||||
```terminal
|
||||
kubectl apply -f https://github.com/kubernetes-csi/volume-data-source-validator/blob/master/deploy/kubernetes/rbac-data-source-validator.yaml
|
||||
kubectl apply -f https://github.com/kubernetes-csi/volume-data-source-validator/blob/master/deploy/kubernetes/setup-data-source-validator.yaml
|
||||
```
|
||||
|
||||
Next install the example populator.
|
||||
|
||||
```terminal
|
||||
kubectl apply -f https://github.com/kubernetes-csi/lib-volume-populator/blob/master/example/hello-populator/crd.yaml
|
||||
kubectl apply -f https://github.com/kubernetes-csi/lib-volume-populator/blob/master/example/hello-populator/deploy.yaml
|
||||
```
|
||||
|
||||
Create an instance of the `Hello` CR, with some text.
|
||||
|
||||
```yaml
|
||||
apiVersion: hello.k8s.io/v1alpha1
|
||||
kind: Hello
|
||||
metadata:
|
||||
name: example-hello
|
||||
spec:
|
||||
fileName: example.txt
|
||||
fileContents: Hello, world!
|
||||
```
|
||||
|
||||
Create a PVC that refers to that CR as its data source.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: example-pvc
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Mi
|
||||
dataSourceRef:
|
||||
apiGroup: hello.k8s.io
|
||||
kind: Hello
|
||||
name: example-hello
|
||||
volumeMode: Filesystem
|
||||
```
|
||||
|
||||
Next, run a job that reads the file in the PVC.
|
||||
|
||||
```yaml
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: example-job
|
||||
spec:
|
||||
template:
|
||||
spec:
|
||||
containers:
|
||||
- name: example-container
|
||||
image: busybox:latest
|
||||
command:
|
||||
- cat
|
||||
- /mnt/example.txt
|
||||
volumeMounts:
|
||||
- name: vol
|
||||
mountPath: /mnt
|
||||
restartPolicy: Never
|
||||
volumes:
|
||||
- name: vol
|
||||
persistentVolumeClaim:
|
||||
claimName: example-pvc
|
||||
```
|
||||
|
||||
Wait for the job to complete (including all of its dependencies).
|
||||
|
||||
```terminal
|
||||
kubectl wait --for=condition=Complete job/example-job
|
||||
```
|
||||
|
||||
Last, examine the log from the job.
|
||||
|
||||
```terminal
|
||||
kubectl logs job/example-job
|
||||
Hello, world!
|
||||
```
|
||||
|
||||
Note that the volume already contained a text file with the string contents from
|
||||
the CR. This is only the simplest example. Actual populators can set up the volume
|
||||
to contain arbitrary contents.
|
||||
|
||||
## How to write your own volume populator
|
||||
|
||||
Developers interested in writing new populators are encouraged to use the
|
||||
[lib-volume-populator](https://github.com/kubernetes-csi/lib-volume-populator) library
|
||||
and to only supply a small controller wrapper around the library, and a pod image
|
||||
capable of attaching to volumes and writing the appropriate data to the volume.
|
||||
|
||||
Individual populators can be extremely generic such that they work with every type
|
||||
of PVC, or they can do vendor specific things to rapidly fill a volume with data
|
||||
if the volume was provisioned by a specific CSI driver from the same vendor, for
|
||||
example, by communicating directly with the storage for that volume.
|
||||
|
||||
## The future
|
||||
|
||||
As this feature is still in alpha, we expect to update the out of tree controllers
|
||||
with more tests and documentation. The community plans to eventually re-implement
|
||||
the populator library as a sidecar, for ease of operations.
|
||||
|
||||
We hope to see some official community-supported populators for some widely-shared
|
||||
use cases. Also, we expect that volume populators will be used by backup vendors
|
||||
as a way to "restore" backups to volumes, and possibly a standardized API to do
|
||||
this will evolve.
|
||||
|
||||
## How can I learn more?
|
||||
|
||||
The enhancement proposal,
|
||||
[Volume Populators](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1495-volume-populators), includes lots of detail about the history and technical implementation
|
||||
of this feature.
|
||||
|
||||
[Volume populators and data sources](/docs/concepts/storage/persistent-volumes/#volume-populators-and-data-sources), within the documentation topic about persistent volumes,
|
||||
explains how to use this feature in your cluster.
|
||||
|
||||
Please get involved by joining the Kubernetes storage SIG to help us enhance this
|
||||
feature. There are a lot of good ideas already and we'd be thrilled to have more!
|
||||
|
|
@ -0,0 +1,67 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Alpha in Kubernetes v1.22: API Server Tracing'
|
||||
date: 2021-09-03
|
||||
slug: api-server-tracing
|
||||
---
|
||||
|
||||
**Author:** David Ashpole (Google)
|
||||
|
||||
In distributed systems, it can be hard to figure out where problems are. You grep through one component's logs just to discover that the source of your problem is in another component. You search there only to discover that you need to enable debug logs to figure out what really went wrong... And it goes on. The more complex the path your request takes, the harder it is to answer questions about where it went. I've personally spent many hours doing this dance with a variety of Kubernetes components. Distributed tracing is a tool which is designed to help in these situations, and the Kubernetes API Server is, perhaps, the most important Kubernetes component to be able to debug. At Kubernetes' SIG Instrumentation, our mission is to make it easier to understand what's going on in your cluster, and we are happy to announce that distributed tracing in the Kubernetes API Server reached alpha in 1.22.
|
||||
|
||||
## What is Tracing?
|
||||
|
||||
Distributed tracing links together a bunch of super-detailed information from multiple different sources, and structures that telemetry into a single tree for that request. Unlike logging, which limits the quantity of data ingested by using log levels, tracing collects all of the details and uses sampling to collect only a small percentage of requests. This means that once you have a trace which demonstrates an issue, you should have all the information you need to root-cause the problem--no grepping for object UID required! My favorite aspect, though, is how useful the visualizations of traces are. Even if you don't understand the inner workings of the API Server, or don't have a clue what an etcd "Transaction" is, I'd wager you (yes, you!) could tell me roughly what the order of events was, and which components were involved in the request. If some step takes a long time, it is easy to tell where the problem is.
|
||||
|
||||
## Why OpenTelemetry?
|
||||
|
||||
It's important that Kubernetes works well for everyone, regardless of who manages your infrastructure, or which vendors you choose to integrate with. That is particularly true for Kubernetes' integrations with telemetry solutions. OpenTelemetry, being a CNCF project, shares these core values, and is creating exactly what we need in Kubernetes: A set of open standards for Tracing client library APIs and a standard trace format. By using OpenTelemetry, we can ensure users have the freedom to choose their backend, and ensure vendors have a level playing field. The timing couldn't be better: the OpenTelemetry golang API and SDK are very close to their 1.0 release, and will soon offer backwards-compatibility for these open standards.
|
||||
|
||||
## Why instrument the API Server?
|
||||
|
||||
The Kubernetes API Server is a great candidate for tracing for a few reasons:
|
||||
|
||||
* It follows the standard "RPC" model (serve a request by making requests to downstream components), which makes it easy to instrument.
|
||||
* Users are latency-sensitive: If a request takes more than 10 seconds to complete, many clients will time-out.
|
||||
* It has a complex service topology: A single request could require consulting a dozen webhooks, or involve multiple requests to etcd.
|
||||
|
||||
## Trying out APIServer Tracing with a webhook
|
||||
|
||||
### Enabling API Server Tracing
|
||||
|
||||
1. Enable the APIServerTracing [feature-gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
|
||||
2. Set our configuration for tracing by pointing the `--tracing-config-file` flag on the kube-apiserver at our config file, which contains:
|
||||
|
||||
```yaml
|
||||
apiVersion: apiserver.config.k8s.io/v1alpha1
|
||||
kind: TracingConfiguration
|
||||
# 1% sampling rate
|
||||
samplingRatePerMillion: 10000
|
||||
```
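Putting both of the steps above together, the relevant kube-apiserver flags would look roughly like this; the config file path is an illustrative choice:

```bash
# Illustrative: add these two flags to your existing kube-apiserver invocation
kube-apiserver \
  --feature-gates=APIServerTracing=true \
  --tracing-config-file=/etc/kubernetes/tracing-config.yaml
```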
|
||||
|
||||
### Enabling Etcd Tracing
|
||||
|
||||
Add `--experimental-enable-distributed-tracing`, `--experimental-distributed-tracing-address=0.0.0.0:4317`, `--experimental-distributed-tracing-service-name=etcd` flags to etcd to enable tracing. Note that this traces every request, so it will probably generate a lot of traces if you enable it.
|
||||
|
||||
### Example Trace: List Nodes
|
||||
|
||||
I could've used any trace backend, but decided to use Jaeger, since it is one of the most popular open-source tracing projects. I deployed [the Jaeger All-in-one container](https://hub.docker.com/r/jaegertracing/all-in-one) in my cluster, deployed [the OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector) on my control-plane node ([example](https://github.com/dashpole/dashpole_demos/tree/master/otel/controlplane)), and captured traces like this one:
|
||||
|
||||

|
||||
|
||||
The teal lines are from the API Server, and include it serving a request to `/api/v1/nodes` and issuing a gRPC `Range` RPC to etcd. The yellow-ish line is from etcd handling the `Range` RPC.
|
||||
|
||||
### Example Trace: Create Pod with Mutating Webhook
|
||||
|
||||
I instrumented the [example webhook](https://github.com/kubernetes-sigs/controller-runtime/tree/master/examples/builtins) with OpenTelemetry (I had to [patch](https://github.com/dashpole/controller-runtime/commit/85fdda7ba03dd2c22ef62c1a3dbdf5aa651f90da) controller-runtime, but it makes a neat demo), and routed traces to Jaeger as well. I collected traces like this one:
|
||||
|
||||

|
||||
|
||||
Compared with the previous trace, there are two new spans: A teal span from the API Server making a request to the admission webhook, and a brown span from the admission webhook serving the request. Even if you didn't instrument your webhook, you would still get the span from the API Server making the request to the webhook.
|
||||
|
||||
## Get involved!
|
||||
|
||||
As this is our first attempt at adding distributed tracing to a Kubernetes component, there is probably a lot we can improve! If my struggles resonated with you, or if you just want to try out the latest Kubernetes has to offer, please give the feature a try and open issues with any problem you encountered and ways you think the feature could be improved.
|
||||
|
||||
This is just the very beginning of what we can do with distributed tracing in Kubernetes. If there are other components you think would benefit from distributed tracing, or want to help bring API Server Tracing to GA, join sig-instrumentation at our [regular meetings](https://github.com/kubernetes/community/tree/master/sig-instrumentation#instrumentation-special-interest-group) and get involved!
|
Before Width: | Height: | Size: 13 KiB |
Before Width: | Height: | Size: 30 KiB |
Before Width: | Height: | Size: 28 KiB |
Before Width: | Height: | Size: 19 KiB |
Before Width: | Height: | Size: 18 KiB |
Before Width: | Height: | Size: 19 KiB |
Before Width: | Height: | Size: 20 KiB |
|
@ -13,7 +13,7 @@ cid: community
|
|||
<div class="intro">
|
||||
<br class="mobile">
|
||||
<p>The Kubernetes community -- users, contributors, and the culture we've built together -- is one of the biggest reasons for the meteoric rise of this open source project. Our culture and values continue to grow and change as the project itself grows and changes. We all work together toward constant improvement of the project and the ways we work on it.
|
||||
<br><br>We are the people who file issues and pull requests, attend SIG meetings, Kubernetes meetups, and KubeCon, advocate for it's adoption and innovation, run <code>kubectl get pods</code>, and contribute in a thousand other vital ways. Read on to learn how you can get involved and become part of this amazing community.</p>
|
||||
<br><br>We are the people who file issues and pull requests, attend SIG meetings, Kubernetes meetups, and KubeCon, advocate for its adoption and innovation, run <code>kubectl get pods</code>, and contribute in a thousand other vital ways. Read on to learn how you can get involved and become part of this amazing community.</p>
|
||||
<br class="mobile">
|
||||
</div>
|
||||
|
||||
|
|
|
@ -210,7 +210,7 @@ To upgrade a HA control plane to use the cloud controller manager, see [Migrate
|
|||
|
||||
Want to know how to implement your own cloud controller manager, or extend an existing project?
|
||||
|
||||
The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.17/cloud.go#L42-L62) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider).
|
||||
The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.21/cloud.go#L42-L69) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider).
|
||||
|
||||
The implementation of the shared controllers highlighted in this document (Node, Route, and Service), and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. Implementations specific to cloud providers are outside the core of Kubernetes and implement the `CloudProvider` interface.
|
||||
|
||||
|
|
|
@ -159,11 +159,12 @@ You can run your own controller as a set of Pods,
|
|||
or externally to Kubernetes. What fits best will depend on what that particular
|
||||
controller does.
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read about the [Kubernetes control plane](/docs/concepts/overview/components/#control-plane-components)
|
||||
* Discover some of the basic [Kubernetes objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
|
||||
* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
|
||||
* If you want to write your own controller, see [Extension Patterns](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) in Extending Kubernetes.
|
||||
* If you want to write your own controller, see
|
||||
[Extension Patterns](/docs/concepts/extend-kubernetes/#extension-patterns)
|
||||
in Extending Kubernetes.
|
||||
|
||||
|
|
|
@ -0,0 +1,182 @@
|
|||
---
|
||||
title: Garbage Collection
|
||||
content_type: concept
|
||||
weight: 50
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
{{<glossary_definition term_id="garbage-collection" length="short">}} This
|
||||
allows the cleanup of resources like the following:
|
||||
|
||||
* [Failed pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)
|
||||
* [Completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
|
||||
* [Objects without owner references](#owners-dependents)
|
||||
* [Unused containers and container images](#containers-images)
|
||||
* [Dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete](/docs/concepts/storage/persistent-volumes/#delete)
|
||||
* [Stale or expired CertificateSigningRequests (CSRs)](/docs/reference/access-authn-authz/certificate-signing-requests/#request-signing-process)
|
||||
* {{<glossary_tooltip text="Nodes" term_id="node">}} deleted in the following scenarios:
|
||||
* On a cloud when the cluster uses a [cloud controller manager](/docs/concepts/architecture/cloud-controller/)
|
||||
* On-premises when the cluster uses an addon similar to a cloud controller
|
||||
manager
|
||||
* [Node Lease objects](/docs/concepts/architecture/nodes/#heartbeats)
|
||||
|
||||
## Owners and dependents {#owners-dependents}
|
||||
|
||||
Many objects in Kubernetes link to each other through [*owner references*](/docs/concepts/overview/working-with-objects/owners-dependents/).
|
||||
Owner references tell the control plane which objects are dependent on others.
|
||||
Kubernetes uses owner references to give the control plane, and other API
|
||||
clients, the opportunity to clean up related resources before deleting an
|
||||
object. In most cases, Kubernetes manages owner references automatically.
|
||||
|
||||
Ownership is different from the [labels and selectors](/docs/concepts/overview/working-with-objects/labels/)
|
||||
mechanism that some resources also use. For example, consider a
|
||||
{{<glossary_tooltip text="Service" term_id="service">}} that creates
|
||||
`EndpointSlice` objects. The Service uses *labels* to allow the control plane to
|
||||
determine which `EndpointSlice` objects are used for that Service. In addition
|
||||
to the labels, each `EndpointSlice` that is managed on behalf of a Service has
|
||||
an owner reference. Owner references help different parts of Kubernetes avoid
|
||||
interfering with objects they don’t control.
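For illustration, a dependent object records its owner in `metadata.ownerReferences`. A sketch of how an EndpointSlice managed for a Service might carry that reference; the names and UID are placeholders:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12                 # placeholder name
  namespace: default
  labels:
    kubernetes.io/service-name: my-service
  ownerReferences:
  - apiVersion: v1
    kind: Service
    name: my-service                     # placeholder owner
    uid: 00000000-0000-0000-0000-000000000000   # placeholder UID of the Service
    controller: true
    blockOwnerDeletion: true
addressType: IPv4
endpoints: []
```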
|
||||
|
||||
{{< note >}}
|
||||
Cross-namespace owner references are disallowed by design.
|
||||
Namespaced dependents can specify cluster-scoped or namespaced owners.
|
||||
A namespaced owner **must** exist in the same namespace as the dependent.
|
||||
If it does not, the owner reference is treated as absent, and the dependent
|
||||
is subject to deletion once all owners are verified absent.
|
||||
|
||||
Cluster-scoped dependents can only specify cluster-scoped owners.
|
||||
In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner,
|
||||
it is treated as having an unresolvable owner reference, and is not able to be garbage collected.
|
||||
|
||||
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
|
||||
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
|
||||
with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
|
||||
You can check for that kind of Event by running
|
||||
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
|
||||
{{< /note >}}
|
||||
|
||||
## Cascading deletion {#cascading-deletion}
|
||||
|
||||
Kubernetes checks for and deletes objects that no longer have owner
|
||||
references, like the pods left behind when you delete a ReplicaSet. When you
|
||||
delete an object, you can control whether Kubernetes deletes the object's
|
||||
dependents automatically, in a process called *cascading deletion*. There are
|
||||
two types of cascading deletion, as follows:
|
||||
|
||||
* Foreground cascading deletion
|
||||
* Background cascading deletion
|
||||
|
||||
You can also control how and when garbage collection deletes resources that have
|
||||
owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id="finalizer">}}.
|
||||
|
||||
### Foreground cascading deletion {#foreground-deletion}
|
||||
|
||||
In foreground cascading deletion, the owner object you're deleting first enters
|
||||
a *deletion in progress* state. In this state, the following happens to the
|
||||
owner object:
|
||||
|
||||
* The Kubernetes API server sets the object's `metadata.deletionTimestamp`
|
||||
field to the time the object was marked for deletion.
|
||||
* The Kubernetes API server also sets the `metadata.finalizers` field to
|
||||
`foregroundDeletion`.
|
||||
* The object remains visible through the Kubernetes API until the deletion
|
||||
process is complete.
|
||||
|
||||
After the owner object enters the deletion in progress state, the controller
|
||||
deletes the dependents. After deleting all the dependent objects, the controller
|
||||
deletes the owner object. At this point, the object is no longer visible in the
|
||||
Kubernetes API.
|
||||
|
||||
During foreground cascading deletion, the only dependents that block owner
|
||||
deletion are those that have the `ownerReference.blockOwnerDeletion=true` field.
|
||||
See [Use foreground cascading deletion](/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)
|
||||
to learn more.
|
||||
|
||||
### Background cascading deletion {#background-deletion}
|
||||
|
||||
In background cascading deletion, the Kubernetes API server deletes the owner
|
||||
object immediately and the controller cleans up the dependent objects in
|
||||
the background. By default, Kubernetes uses background cascading deletion unless
|
||||
you manually use foreground deletion or choose to orphan the dependent objects.
|
||||
|
||||
See [Use background cascading deletion](/docs/tasks/administer-cluster/use-cascading-deletion/#use-background-cascading-deletion)
|
||||
to learn more.
|
||||
|
||||
### Orphaned dependents
|
||||
|
||||
When Kubernetes deletes an owner object, the dependents left behind are called
|
||||
*orphan* objects. By default, Kubernetes deletes dependent objects. To learn how
|
||||
to override this behaviour, see [Delete owner objects and orphan dependents](/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy).
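When deleting with `kubectl`, the propagation policy can be chosen with the `--cascade` flag (shown here with the string values accepted by recent kubectl versions); the Deployment name is a placeholder:

```bash
# Foreground cascading deletion: dependents are removed first, then the owner.
kubectl delete deployment example-deployment --cascade=foreground

# Background cascading deletion (the default): the owner is deleted immediately,
# and the dependents are cleaned up afterwards.
kubectl delete deployment example-deployment --cascade=background

# Orphan the dependents instead of deleting them.
kubectl delete deployment example-deployment --cascade=orphan
```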
|
||||
|
||||
## Garbage collection of unused containers and images {#containers-images}
|
||||
|
||||
The {{<glossary_tooltip text="kubelet" term_id="kubelet">}} performs garbage
|
||||
collection on unused images every five minutes and on unused containers every
|
||||
minute. You should avoid using external garbage collection tools, as these can
|
||||
break the kubelet behavior and remove containers that should exist.
|
||||
|
||||
To configure options for unused container and image garbage collection, tune the
|
||||
kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
|
||||
and change the parameters related to garbage collection using the
|
||||
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
resource type.
|
||||
|
||||
### Container image lifecycle
|
||||
|
||||
Kubernetes manages the lifecycle of all images through its *image manager*,
|
||||
which is part of the kubelet, with the cooperation of cadvisor. The kubelet
|
||||
considers the following disk usage limits when making garbage collection
|
||||
decisions:
|
||||
|
||||
* `HighThresholdPercent`
|
||||
* `LowThresholdPercent`
|
||||
|
||||
Disk usage above the configured `HighThresholdPercent` value triggers garbage
|
||||
collection, which deletes images in order based on the last time they were used,
|
||||
starting with the oldest first. The kubelet deletes images
|
||||
until disk usage reaches the `LowThresholdPercent` value.
|
||||
|
||||
### Container garbage collection {#container-image-garbage-collection}
|
||||
|
||||
The kubelet garbage collects unused containers based on the following variables,
|
||||
which you can define:
|
||||
|
||||
* `MinAge`: the minimum age at which the kubelet can garbage collect a
|
||||
container. Disable by setting to `0`.
|
||||
* `MaxPerPodContainer`: the maximum number of dead containers each Pod
|
||||
can have. Disable by setting to less than `0`.
|
||||
* `MaxContainers`: the maximum number of dead containers the cluster can have.
|
||||
Disable by setting to less than `0`.
|
||||
|
||||
In addition to these variables, the kubelet garbage collects unidentified and
|
||||
deleted containers, typically starting with the oldest first.
|
||||
|
||||
`MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other
|
||||
in situations where retaining the maximum number of containers per Pod
|
||||
(`MaxPerPodContainer`) would go outside the allowable total of global dead
|
||||
containers (`MaxContainers`). In this situation, the kubelet adjusts
|
||||
`MaxPerPodContainer` to address the conflict. A worst-case scenario would be to
|
||||
downgrade `MaxPerPodContainer` to `1` and evict the oldest containers.
|
||||
Additionally, containers owned by pods that have been deleted are removed once
|
||||
they are older than `MinAge`.
|
||||
|
||||
{{<note>}}
|
||||
The kubelet only garbage collects the containers it manages.
|
||||
{{</note>}}
|
||||
|
||||
## Configuring garbage collection {#configuring-gc}
|
||||
|
||||
You can tune garbage collection of resources by configuring options specific to
|
||||
the controllers managing those resources. The following pages show you how to
|
||||
configure garbage collection:
|
||||
|
||||
* [Configuring cascading deletion of Kubernetes objects](/docs/tasks/administer-cluster/use-cascading-deletion/)
|
||||
* [Configuring cleanup of finished Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
|
||||
|
||||
<!-- * [Configuring unused container and image garbage collection](/docs/tasks/administer-cluster/reconfigure-kubelet/) -->
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Learn more about [ownership of Kubernetes objects](/docs/concepts/overview/working-with-objects/owners-dependents/).
|
||||
* Learn more about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
|
||||
* Learn about the [TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) (beta) that cleans up finished Jobs.
|
|
@ -14,7 +14,7 @@ A node may be a virtual or physical machine, depending on the cluster. Each node
|
|||
is managed by the
|
||||
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
|
||||
and contains the services necessary to run
|
||||
{{< glossary_tooltip text="Pods" term_id="pod" >}}
|
||||
{{< glossary_tooltip text="Pods" term_id="pod" >}}.
|
||||
|
||||
Typically you have several nodes in a cluster; in a learning or resource-limited
|
||||
environment, you might have only one node.
|
||||
|
@ -122,6 +122,9 @@ To mark a Node unschedulable, run:
|
|||
kubectl cordon $NODENAME
|
||||
```
|
||||
|
||||
See [Safely Drain a Node](/docs/tasks/administer-cluster/safely-drain-node/)
|
||||
for more details.
|
||||
|
||||
{{< note >}}
|
||||
Pods that are part of a {{< glossary_tooltip term_id="daemonset" >}} tolerate
|
||||
being run on an unschedulable Node. DaemonSets typically provide node-local services
|
||||
|
@ -162,8 +165,8 @@ The `conditions` field describes the status of all `Running` nodes. Examples of
|
|||
| Node Condition | Description |
|
||||
|----------------------|-------------|
|
||||
| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
|
||||
| `DiskPressure` | `True` if pressure exists on the disk size--that is, if the disk capacity is low; otherwise `False` |
|
||||
| `MemoryPressure` | `True` if pressure exists on the node memory--that is, if the node memory is low; otherwise `False` |
|
||||
| `DiskPressure` | `True` if pressure exists on the disk size—that is, if the disk capacity is low; otherwise `False` |
|
||||
| `MemoryPressure` | `True` if pressure exists on the node memory—that is, if the node memory is low; otherwise `False` |
|
||||
| `PIDPressure` | `True` if pressure exists on the processes—that is, if there are too many processes on the node; otherwise `False` |
|
||||
| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
|
||||
{{< /table >}}
|
||||
|
@ -174,7 +177,8 @@ If you use command-line tools to print details of a cordoned Node, the Condition
|
|||
cordoned nodes are marked Unschedulable in their spec.
|
||||
{{< /note >}}
|
||||
|
||||
The node condition is represented as a JSON object. For example, the following structure describes a healthy node:
|
||||
In the Kubernetes API, a node's condition is represented as part of the `.status`
|
||||
of the Node resource. For example, the following JSON structure describes a healthy node:
|
||||
|
||||
```json
|
||||
"conditions": [
|
||||
|
@ -189,7 +193,17 @@ The node condition is represented as a JSON object. For example, the following s
|
|||
]
|
||||
```
|
||||
|
||||
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
|
||||
If the `status` of the Ready condition remains `Unknown` or `False` for longer
|
||||
than the `pod-eviction-timeout` (an argument passed to the
|
||||
{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager"
|
||||
>}}), then the [node controller](#node-controller) triggers
|
||||
{{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
|
||||
for all Pods assigned to that node. The default eviction timeout duration is
|
||||
**five minutes**.
|
||||
In some cases when the node is unreachable, the API server is unable to communicate
|
||||
with the kubelet on the node. The decision to delete the pods cannot be communicated to
|
||||
the kubelet until communication with the API server is re-established. In the meantime,
|
||||
the pods that are scheduled for deletion may continue to run on the partitioned node.
|
||||
|
||||
The node controller does not force delete pods until it is confirmed that they have stopped
|
||||
running in the cluster. You can see the pods that might be running on an unreachable node as
|
||||
|
@ -199,10 +213,12 @@ may need to delete the node object by hand. Deleting the node object from Kubern
|
|||
all the Pod objects running on the node to be deleted from the API server and frees up their
|
||||
names.
|
||||
|
||||
The node lifecycle controller automatically creates
|
||||
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that represent conditions.
|
||||
When problems occur on nodes, the Kubernetes control plane automatically creates
|
||||
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
|
||||
affecting the node.
|
||||
The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
|
||||
Pods can also have tolerations which let them tolerate a Node's taints.
|
||||
Pods can also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
|
||||
them run on a Node even though it has a specific taint.
|
||||
|
||||
See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
|
||||
for more details.
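For example, a Pod that should keep running for a while on a node that has become unreachable can declare a toleration like the following fragment of a Pod `spec` (a sketch; the five minute value is only an illustration):

```yaml
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300   # tolerate the taint for up to 5 minutes before eviction
```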
|
||||
|
@ -222,10 +238,43 @@ on a Node.
|
|||
|
||||
### Info
|
||||
|
||||
Describes general information about the node, such as kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), and OS name.
|
||||
This information is gathered by Kubelet from the node.
|
||||
Describes general information about the node, such as kernel version, Kubernetes
|
||||
version (kubelet and kube-proxy version), container runtime details, and which
|
||||
operating system the node uses.
|
||||
The kubelet gathers this information from the node and publishes it into
|
||||
the Kubernetes API.
|
||||
|
||||
### Node controller
|
||||
## Heartbeats
|
||||
|
||||
Heartbeats, sent by Kubernetes nodes, help your cluster determine the
|
||||
availability of each node, and take action when failures are detected.
|
||||
|
||||
For nodes there are two forms of heartbeats:
|
||||
|
||||
* updates to the `.status` of a Node
|
||||
* [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects
|
||||
within the `kube-node-lease`
|
||||
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
|
||||
Each Node has an associated Lease object.
|
||||
|
||||
Compared to updates to `.status` of a Node, a Lease is a lightweight resource.
|
||||
Using Leases for heartbeats reduces the performance impact of these updates
|
||||
for large clusters.
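As a rough sketch of what such an object looks like (the node name and timestamps are illustrative), each Lease records which kubelet holds it and when it was last renewed:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-node            # matches the Node name; hypothetical here
  namespace: kube-node-lease
spec:
  holderIdentity: example-node
  leaseDurationSeconds: 40
  renewTime: "2021-07-01T12:00:00.000000Z"
```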
|
||||
|
||||
The kubelet is responsible for creating and updating the `.status` of Nodes,
|
||||
and for updating their related Leases.
|
||||
|
||||
- The kubelet updates the node's `.status` either when there is change in status
|
||||
or if there has been no update for a configured interval. The default interval
|
||||
for `.status` updates to Nodes is 5 minutes, which is much longer than the 40
|
||||
second default timeout for unreachable nodes.
|
||||
- The kubelet creates and then updates its Lease object every 10 seconds
|
||||
(the default update interval). Lease updates occur independently from
|
||||
updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
|
||||
using exponential backoff that starts at 200 milliseconds and is capped at 7 seconds.
|
||||
|
||||
|
||||
## Node controller
|
||||
|
||||
The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a
|
||||
Kubernetes control plane component that manages various aspects of nodes.
|
||||
|
@ -241,39 +290,18 @@ controller deletes the node from its list of nodes.
|
|||
|
||||
The third is monitoring the nodes' health. The node controller is
|
||||
responsible for:
|
||||
- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node
|
||||
becomes unreachable, as the node controller stops receiving heartbeats for some
|
||||
reason such as the node being down.
|
||||
- Evicting all the pods from the node using graceful termination if
|
||||
the node continues to be unreachable. The default timeouts are 40s to start
|
||||
reporting ConditionUnknown and 5m after that to start evicting pods.
|
||||
- In the case that a node becomes unreachable, updating the NodeReady condition
|
||||
within the Node's `.status`. In this case the node controller sets the
|
||||
NodeReady condition to `ConditionUnknown`.
|
||||
- If a node remains unreachable: triggering
|
||||
[API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)
|
||||
for all of the Pods on the unreachable node. By default, the node controller
|
||||
waits 5 minutes between marking the node as `ConditionUnknown` and submitting
|
||||
the first eviction request.
|
||||
|
||||
The node controller checks the state of each node every `--node-monitor-period` seconds.
|
||||
|
||||
#### Heartbeats
|
||||
|
||||
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
|
||||
|
||||
There are two forms of heartbeats: updates of `NodeStatus` and the
|
||||
[Lease object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io).
|
||||
Each Node has an associated Lease object in the `kube-node-lease`
|
||||
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
|
||||
Lease is a lightweight resource, which improves the performance
|
||||
of the node heartbeats as the cluster scales.
|
||||
|
||||
The kubelet is responsible for creating and updating the `NodeStatus` and
|
||||
a Lease object.
|
||||
|
||||
- The kubelet updates the `NodeStatus` either when there is change in status
|
||||
or if there has been no update for a configured interval. The default interval
|
||||
for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default
|
||||
timeout for unreachable nodes.
|
||||
- The kubelet creates and then updates its Lease object every 10 seconds
|
||||
(the default update interval). Lease updates occur independently from the
|
||||
`NodeStatus` updates. If the Lease update fails, the kubelet retries with
|
||||
exponential backoff starting at 200 milliseconds and capped at 7 seconds.
|
||||
|
||||
#### Reliability
|
||||
### Rate limits on eviction
|
||||
|
||||
In most cases, the node controller limits the eviction rate to
|
||||
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
|
||||
|
@ -281,9 +309,9 @@ from more than 1 node per 10 seconds.
|
|||
|
||||
The node eviction behavior changes when a node in a given availability zone
|
||||
becomes unhealthy. The node controller checks what percentage of nodes in the zone
|
||||
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
|
||||
are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
|
||||
the same time:
|
||||
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
|
||||
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
|
||||
(default 0.55), then the eviction rate is reduced.
|
||||
- If the cluster is small (i.e. has less than or equal to
|
||||
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
|
||||
|
@ -293,15 +321,17 @@ the same time:
|
|||
The reason these policies are implemented per availability zone is because one
|
||||
availability zone might become partitioned from the master while the others remain
|
||||
connected. If your cluster does not span multiple cloud provider availability zones,
|
||||
then there is only one availability zone (i.e. the whole cluster).
|
||||
then the eviction mechanism does not take per-zone unavailability into account.
|
||||
|
||||
A key reason for spreading your nodes across availability zones is so that the
|
||||
workload can be shifted to healthy zones when one entire zone goes down.
|
||||
Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
|
||||
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
|
||||
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
|
||||
case, the node controller assumes that there is some problem with master
|
||||
connectivity and stops all evictions until some connectivity is restored.
|
||||
completely unhealthy (none of the nodes in the cluster are healthy). In such a
|
||||
case, the node controller assumes that there is some problem with connectivity
|
||||
between the control plane and the nodes, and doesn't perform any evictions.
|
||||
(If there has been an outage and some nodes reappear, the node controller does
|
||||
evict pods from the remaining nodes that are unhealthy or unreachable).
|
||||
|
||||
The node controller is also responsible for evicting pods running on nodes with
|
||||
`NoExecute` taints, unless those pods tolerate that taint.
|
||||
|
@ -309,7 +339,7 @@ The node controller also adds {{< glossary_tooltip text="taints" term_id="taint"
|
|||
corresponding to node problems like node unreachable or not ready. This means
|
||||
that the scheduler won't place Pods onto unhealthy nodes.
|
||||
|
||||
### Node capacity
|
||||
## Resource capacity tracking {#node-capacity}
|
||||
|
||||
Node objects track information about the Node's resource capacity: for example, the amount
|
||||
of memory available and the number of CPUs.
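As a sketch (the quantities are illustrative), this information is published under `.status.capacity` and `.status.allocatable` of the Node:

```yaml
status:
  capacity:
    cpu: "4"
    ephemeral-storage: 100Gi
    memory: 16Gi
    pods: "110"
  allocatable:          # capacity minus resources reserved for system daemons
    cpu: 3920m
    ephemeral-storage: 95Gi
    memory: 15Gi
    pods: "110"
```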
|
||||
|
@ -377,6 +407,64 @@ For example, if `ShutdownGracePeriod=30s`, and
|
|||
for gracefully terminating normal pods, and the last 10 seconds would be
|
||||
reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
|
||||
|
||||
{{< note >}}
|
||||
When pods are evicted during the graceful node shutdown, they are marked as failed.
|
||||
Running `kubectl get pods` shows the status of the evicted pods as `Shutdown`.
|
||||
And `kubectl describe pod` indicates that the pod was evicted because of node shutdown:
|
||||
|
||||
```
|
||||
Status: Failed
|
||||
Reason: Shutdown
|
||||
Message: Node is shutting, evicting pods
|
||||
```
|
||||
|
||||
Failed pod objects will be preserved until explicitly deleted or [cleaned up by the GC](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
|
||||
This is a change of behavior compared to abrupt node termination.
|
||||
{{< /note >}}
|
||||
|
||||
## Swap memory management {#swap-memory}
|
||||
|
||||
{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
|
||||
|
||||
Prior to Kubernetes 1.22, nodes did not support the use of swap memory, and a
|
||||
kubelet would by default fail to start if swap was detected on a node. From 1.22
|
||||
onwards, swap memory support can be enabled on a per-node basis.
|
||||
|
||||
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
|
||||
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
|
||||
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
must be set to false.
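Putting those two requirements together, a minimal kubelet configuration sketch might look like the following; this is not a recommendation to enable swap, and the `memorySwap` setting is shown separately below:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true    # alpha feature gate
failSwapOn: false   # allow the kubelet to start on a node with swap enabled
```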
|
||||
|
||||
A user can also optionally configure `memorySwap.swapBehavior` in order to
|
||||
specify how a node will use swap memory. For example,
|
||||
|
||||
```yaml
|
||||
memorySwap:
|
||||
swapBehavior: LimitedSwap
|
||||
```
|
||||
|
||||
The available configuration options for `swapBehavior` are:
|
||||
|
||||
- `LimitedSwap`: Kubernetes workloads are limited in how much swap they can
|
||||
use. Workloads on the node not managed by Kubernetes can still swap.
|
||||
- `UnlimitedSwap`: Kubernetes workloads can use as much swap memory as they
|
||||
request, up to the system limit.
|
||||
|
||||
If configuration for `memorySwap` is not specified and the feature gate is
|
||||
enabled, by default the kubelet will apply the same behaviour as the
|
||||
`LimitedSwap` setting.
|
||||
|
||||
The behaviour of the `LimitedSwap` setting depends on whether the node is running with
|
||||
v1 or v2 of control groups (also known as "cgroups"):
|
||||
|
||||
- **cgroupsv1:** Kubernetes workloads can use any combination of memory and
|
||||
swap, up to the pod's memory limit, if set.
|
||||
- **cgroupsv2:** Kubernetes workloads cannot use swap memory.
|
||||
|
||||
For more information, and to assist with testing and provide feedback, please
|
||||
see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
|
||||
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
|
||||
|
|
|
@ -33,8 +33,6 @@ the `--max-requests-inflight` flag without the API Priority and
|
|||
Fairness feature enabled.
|
||||
{{< /caution >}}
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Enabling/Disabling API Priority and Fairness
|
||||
|
@ -65,6 +63,7 @@ The command-line flag `--enable-priority-and-fairness=false` will disable the
|
|||
API Priority and Fairness feature, even if other flags have enabled it.
|
||||
|
||||
## Concepts
|
||||
|
||||
There are several distinct features involved in the API Priority and Fairness
|
||||
feature. Incoming requests are classified by attributes of the request using
|
||||
_FlowSchemas_, and assigned to priority levels. Priority levels add a degree of
|
||||
|
@ -75,12 +74,13 @@ each other, and allows for requests to be queued to prevent bursty traffic from
|
|||
causing failed requests when the average load is acceptably low.
|
||||
|
||||
### Priority Levels
|
||||
Without APF enabled, overall concurrency in
|
||||
the API server is limited by the `kube-apiserver` flags
|
||||
`--max-requests-inflight` and `--max-mutating-requests-inflight`. With APF
|
||||
enabled, the concurrency limits defined by these flags are summed and then the sum is divided up
|
||||
among a configurable set of _priority levels_. Each incoming request is assigned
|
||||
to a single priority level, and each priority level will only dispatch as many
|
||||
|
||||
Without APF enabled, overall concurrency in the API server is limited by the
|
||||
`kube-apiserver` flags `--max-requests-inflight` and
|
||||
`--max-mutating-requests-inflight`. With APF enabled, the concurrency limits
|
||||
defined by these flags are summed and then the sum is divided up among a
|
||||
configurable set of _priority levels_. Each incoming request is assigned to a
|
||||
single priority level, and each priority level will only dispatch as many
|
||||
concurrent requests as its configuration allows.
|
||||
|
||||
The default configuration, for example, includes separate priority levels for
|
||||
|
@ -90,6 +90,7 @@ requests cannot prevent leader election or actions by the built-in controllers
|
|||
from succeeding.
|
||||
|
||||
### Queuing
|
||||
|
||||
Even within a priority level there may be a large number of distinct sources of
|
||||
traffic. In an overload situation, it is valuable to prevent one stream of
|
||||
requests from starving others (in particular, in the relatively common case of a
|
||||
|
@ -114,15 +115,18 @@ independent flows will all make progress when total traffic exceeds capacity),
|
|||
tolerance for bursty traffic, and the added latency induced by queuing.
|
||||
|
||||
### Exempt requests
|
||||
|
||||
Some requests are considered sufficiently important that they are not subject to
|
||||
any of the limitations imposed by this feature. These exemptions prevent an
|
||||
improperly-configured flow control configuration from totally disabling an API
|
||||
server.
|
||||
|
||||
## Defaults
|
||||
|
||||
The Priority and Fairness feature ships with a suggested configuration that
|
||||
should suffice for experimentation; if your cluster is likely to
|
||||
experience heavy load then you should consider what configuration will work best. The suggested configuration groups requests into five priority
|
||||
experience heavy load then you should consider what configuration will work
|
||||
best. The suggested configuration groups requests into five priority
|
||||
classes:
|
||||
|
||||
* The `system` priority level is for requests from the `system:nodes` group,
|
||||
|
@ -180,19 +184,18 @@ If you add the following additional FlowSchema, this exempts those
|
|||
requests from rate limiting.
|
||||
|
||||
{{< caution >}}
|
||||
|
||||
Making this change also allows any hostile party to then send
|
||||
health-check requests that match this FlowSchema, at any volume they
|
||||
like. If you have a web traffic filter or similar external security
|
||||
mechanism to protect your cluster's API server from general internet
|
||||
traffic, you can configure rules to block any health check requests
|
||||
that originate from outside your cluster.
|
||||
|
||||
{{< /caution >}}
|
||||
|
||||
{{< codenew file="priority-and-fairness/health-for-strangers.yaml" >}}
|
||||
|
||||
## Resources
|
||||
|
||||
The flow control API involves two kinds of resources.
|
||||
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1beta1-flowcontrol-apiserver-k8s-io)
|
||||
define the available isolation classes, the share of the available concurrency
|
||||
|
@ -204,6 +207,7 @@ of the same API group, and it has the same Kinds with the same syntax and
|
|||
semantics.
|
||||
|
||||
### PriorityLevelConfiguration
|
||||
|
||||
A PriorityLevelConfiguration represents a single isolation class. Each
|
||||
PriorityLevelConfiguration has an independent limit on the number of outstanding
|
||||
requests, and limitations on the number of queued requests.
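As a sketch of the shape of such an object (the name and the numbers are illustrative, and the API version may differ between releases), a limited priority level declares its concurrency shares and how excess requests are handled; the queuing fields are explained below:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level   # hypothetical name
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 20 # share of the server-wide concurrency limit
    limitResponse:
      type: Queue                # queue excess requests instead of rejecting them
      queuing:
        queues: 64
        handSize: 6
        queueLengthLimit: 50
```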
|
||||
|
@ -217,6 +221,7 @@ server by restarting `kube-apiserver` with a different value for
|
|||
`--max-requests-inflight` (or `--max-mutating-requests-inflight`), and all
|
||||
PriorityLevelConfigurations will see their maximum allowed concurrency go up (or
|
||||
down) by the same fraction.
|
||||
|
||||
{{< caution >}}
|
||||
With the Priority and Fairness feature enabled, the total concurrency limit for
|
||||
the server is set to the sum of `--max-requests-inflight` and
|
||||
|
@ -235,8 +240,8 @@ above the threshold will be queued, with the shuffle sharding and fair queuing t
|
|||
to balance progress between request flows.
|
||||
|
||||
The queuing configuration allows tuning the fair queuing algorithm for a
|
||||
priority level. Details of the algorithm can be read in the [enhancement
|
||||
proposal](#whats-next), but in short:
|
||||
priority level. Details of the algorithm can be read in the
|
||||
[enhancement proposal](#whats-next), but in short:
|
||||
|
||||
* Increasing `queues` reduces the rate of collisions between different flows, at
|
||||
the cost of increased memory usage. A value of 1 here effectively disables the
|
||||
|
@ -249,15 +254,15 @@ proposal](#whats-next), but in short:
|
|||
* Changing `handSize` allows you to adjust the probability of collisions between
|
||||
different flows and the overall concurrency available to a single flow in an
|
||||
overload situation.
|
||||
{{< note >}}
|
||||
A larger `handSize` makes it less likely for two individual flows to collide
|
||||
(and therefore for one to be able to starve the other), but more likely that
|
||||
a small number of flows can dominate the apiserver. A larger `handSize` also
|
||||
potentially increases the amount of latency that a single high-traffic flow
|
||||
can cause. The maximum number of queued requests possible from a
|
||||
single flow is `handSize * queueLengthLimit`.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
A larger `handSize` makes it less likely for two individual flows to collide
|
||||
(and therefore for one to be able to starve the other), but more likely that
|
||||
a small number of flows can dominate the apiserver. A larger `handSize` also
|
||||
potentially increases the amount of latency that a single high-traffic flow
|
||||
can cause. The maximum number of queued requests possible from a
|
||||
single flow is `handSize * queueLengthLimit`.
|
||||
{{< /note >}}
|
||||
|
||||
Following is a table showing an interesting collection of shuffle
|
||||
sharding configurations, showing for each the probability that a
|
||||
|
@ -319,6 +324,7 @@ considered part of a single flow. The correct choice for a given FlowSchema
|
|||
depends on the resource and your particular environment.
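As a sketch of how a FlowSchema ties these choices together (all names here are hypothetical, and the API version may differ between releases), a FlowSchema matches requests by subject, assigns them to a priority level, and picks a flow distinguisher:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: example-flow-schema        # hypothetical name
spec:
  matchingPrecedence: 1000         # lower values are evaluated first
  priorityLevelConfiguration:
    name: example-priority-level   # refers to a PriorityLevelConfiguration like the sketch above
  distinguisherMethod:
    type: ByUser                   # each requesting user becomes a separate flow
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: example-sa
        namespace: example-ns
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      namespaces: ["*"]
```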
|
||||
|
||||
## Diagnostics
|
||||
|
||||
Every HTTP response from an API server with the priority and fairness feature
|
||||
enabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and
|
||||
`X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request
|
||||
|
@ -356,13 +362,14 @@ poorly-behaved workloads that may be harming system health.
|
|||
matched the request), `priority_level` (indicating the one to which
|
||||
the request was assigned), and `reason`. The `reason` label will be
|
||||
one of the following values:
|
||||
* `queue-full`, indicating that too many requests were already
|
||||
queued,
|
||||
* `concurrency-limit`, indicating that the
|
||||
PriorityLevelConfiguration is configured to reject rather than
|
||||
queue excess requests, or
|
||||
* `time-out`, indicating that the request was still in the queue
|
||||
when its queuing time limit expired.
|
||||
|
||||
* `queue-full`, indicating that too many requests were already
|
||||
queued,
|
||||
* `concurrency-limit`, indicating that the
|
||||
PriorityLevelConfiguration is configured to reject rather than
|
||||
queue excess requests, or
|
||||
* `time-out`, indicating that the request was still in the queue
|
||||
when its queuing time limit expired.
|
||||
|
||||
* `apiserver_flowcontrol_dispatched_requests_total` is a counter
|
||||
vector (cumulative since server start) of requests that began
|
||||
|
@ -405,6 +412,10 @@ poorly-behaved workloads that may be harming system health.
|
|||
queue) requests, broken down by the labels `priority_level` and
|
||||
`flow_schema`.
|
||||
|
||||
* `apiserver_flowcontrol_request_concurrency_in_use` is a gauge vector
|
||||
holding the instantaneous number of occupied seats, broken down by
|
||||
the labels `priority_level` and `flow_schema`.
|
||||
|
||||
* `apiserver_flowcontrol_priority_level_request_count_samples` is a
|
||||
histogram vector of observations of the then-current number of
|
||||
requests broken down by the labels `phase` (which takes on the
|
||||
|
@ -430,14 +441,15 @@ poorly-behaved workloads that may be harming system health.
|
|||
sample to its histogram, reporting the length of the queue immediately
|
||||
after the request was added. Note that this produces different
|
||||
statistics than an unbiased survey would.
|
||||
{{< note >}}
|
||||
An outlier value in a histogram here means it is likely that a single flow
|
||||
(i.e., requests by one user or for one namespace, depending on
|
||||
configuration) is flooding the API server, and being throttled. By contrast,
|
||||
if one priority level's histogram shows that all queues for that priority
|
||||
level are longer than those for other priority levels, it may be appropriate
|
||||
to increase that PriorityLevelConfiguration's concurrency shares.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
An outlier value in a histogram here means it is likely that a single flow
|
||||
(i.e., requests by one user or for one namespace, depending on
|
||||
configuration) is flooding the API server, and being throttled. By contrast,
|
||||
if one priority level's histogram shows that all queues for that priority
|
||||
level are longer than those for other priority levels, it may be appropriate
|
||||
to increase that PriorityLevelConfiguration's concurrency shares.
|
||||
{{< /note >}}
|
||||
|
||||
* `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector
|
||||
holding the computed concurrency limit (based on the API server's
|
||||
|
@ -450,12 +462,13 @@ poorly-behaved workloads that may be harming system health.
|
|||
`priority_level` (indicating the one to which the request was
|
||||
assigned), and `execute` (indicating whether the request started
|
||||
executing).
|
||||
{{< note >}}
|
||||
Since each FlowSchema always assigns requests to a single
|
||||
PriorityLevelConfiguration, you can add the histograms for all the
|
||||
FlowSchemas for one priority level to get the effective histogram for
|
||||
requests assigned to that priority level.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
Since each FlowSchema always assigns requests to a single
|
||||
PriorityLevelConfiguration, you can add the histograms for all the
|
||||
FlowSchemas for one priority level to get the effective histogram for
|
||||
requests assigned to that priority level.
|
||||
{{< /note >}}
|
||||
|
||||
* `apiserver_flowcontrol_request_execution_seconds` is a histogram
|
||||
vector of how long requests took to actually execute, broken down by
|
||||
|
@ -465,14 +478,19 @@ poorly-behaved workloads that may be harming system health.
|
|||
|
||||
### Debug endpoints
|
||||
|
||||
When you enable the API Priority and Fairness feature, the kube-apiserver serves the following additional paths at its HTTP[S] ports.
|
||||
When you enable the API Priority and Fairness feature, the `kube-apiserver`
|
||||
serves the following additional paths at its HTTP[S] ports.
|
||||
|
||||
- `/debug/api_priority_and_fairness/dump_priority_levels` - a listing of
|
||||
all the priority levels and the current state of each. You can fetch like this:
|
||||
|
||||
- `/debug/api_priority_and_fairness/dump_priority_levels` - a listing of all the priority levels and the current state of each. You can fetch like this:
|
||||
```shell
|
||||
kubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
|
||||
```none
|
||||
PriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests,
|
||||
workload-low, 0, true, false, 0, 0,
|
||||
global-default, 0, true, false, 0, 0,
|
||||
|
@ -483,12 +501,16 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
|
|||
workload-high, 0, true, false, 0, 0,
|
||||
```
|
||||
|
||||
- `/debug/api_priority_and_fairness/dump_queues` - a listing of all the queues and their current state. You can fetch like this:
|
||||
- `/debug/api_priority_and_fairness/dump_queues` - a listing of all the
|
||||
queues and their current state. You can fetch like this:
|
||||
|
||||
```shell
|
||||
kubectl get --raw /debug/api_priority_and_fairness/dump_queues
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
|
||||
```none
|
||||
PriorityLevelName, Index, PendingRequests, ExecutingRequests, VirtualStart,
|
||||
workload-high, 0, 0, 0, 0.0000,
|
||||
workload-high, 1, 0, 0, 0.0000,
|
||||
|
@ -498,25 +520,33 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
|
|||
leader-election, 15, 0, 0, 0.0000,
|
||||
```
|
||||
|
||||
- `/debug/api_priority_and_fairness/dump_requests` - a listing of all the requests that are currently waiting in a queue. You can fetch like this:
|
||||
- `/debug/api_priority_and_fairness/dump_requests` - a listing of all the requests
|
||||
that are currently waiting in a queue. You can fetch like this:
|
||||
|
||||
```shell
|
||||
kubectl get --raw /debug/api_priority_and_fairness/dump_requests
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
|
||||
```none
|
||||
PriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime,
|
||||
exempt, <none>, <none>, <none>, <none>, <none>,
|
||||
system, system-nodes, 12, 0, system:node:127.0.0.1, 2020-07-23T15:26:57.179170694Z,
|
||||
```
|
||||
|
||||
In addition to the queued requests, the output includes one phantom line for each priority level that is exempt from limitation.
|
||||
In addition to the queued requests, the output includes one phantom line
|
||||
for each priority level that is exempt from limitation.
|
||||
|
||||
You can get a more detailed listing with a command like this:
|
||||
|
||||
```shell
|
||||
kubectl get --raw '/debug/api_priority_and_fairness/dump_requests?includeRequestDetails=1'
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
|
||||
```none
|
||||
PriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, UserName, Verb, APIPath, Namespace, Name, APIVersion, Resource, SubResource,
|
||||
system, system-nodes, 12, 0, system:node:127.0.0.1, 2020-07-23T15:31:03.583823404Z, system:node:127.0.0.1, create, /api/v1/namespaces/scaletest/configmaps,
|
||||
system, system-nodes, 12, 1, system:node:127.0.0.1, 2020-07-23T15:31:03.594555947Z, system:node:127.0.0.1, create, /api/v1/namespaces/scaletest/configmaps,
|
||||
|
@ -528,4 +558,4 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
|
|||
For background information on design details for API priority and fairness, see
|
||||
the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1040-priority-and-fairness).
|
||||
You can make suggestions and feature requests via [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
|
||||
or the feature's [slack channel](http://kubernetes.slack.com/messages/api-priority-and-fairness).
|
||||
or the feature's [slack channel](https://kubernetes.slack.com/messages/api-priority-and-fairness).
|
||||
|
|
|
@ -1,86 +0,0 @@
|
|||
---
|
||||
reviewers:
|
||||
title: Garbage collection for container images
|
||||
content_type: concept
|
||||
weight: 70
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Garbage collection is a helpful function of kubelet that will clean up unused [images](/docs/concepts/containers/#container-images) and unused [containers](/docs/concepts/containers/). Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
|
||||
|
||||
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Image Collection
|
||||
|
||||
Kubernetes manages lifecycle of all images through imageManager, with the cooperation
|
||||
of cadvisor.
|
||||
|
||||
The policy for garbage collecting images takes two factors into consideration:
|
||||
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
|
||||
will trigger garbage collection. The garbage collection will delete least recently used images until the low
|
||||
threshold has been met.
|
||||
|
||||
## Container Collection
|
||||
|
||||
The policy for garbage collecting containers considers three user-defined variables. `MinAge` is the minimum age at which a container can be garbage collected. `MaxPerPodContainer` is the maximum number of dead containers every single
|
||||
pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting `MinAge` to zero and setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
|
||||
|
||||
Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. `MaxPerPodContainer` and `MaxContainer` may potentially conflict with each other in situations where retaining the maximum number of containers per pod (`MaxPerPodContainer`) would go outside the allowable range of global dead containers (`MaxContainers`). `MaxPerPodContainer` would be adjusted in this situation: A worst case scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
|
||||
|
||||
Containers that are not managed by kubelet are not subject to container garbage collection.
|
||||
|
||||
## User Configuration
|
||||
|
||||
You can adjust the following thresholds to tune image garbage collection with the following kubelet flags :
|
||||
|
||||
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
|
||||
Default is 85%.
|
||||
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
|
||||
to free. Default is 80%.
|
||||
|
||||
You can customize the garbage collection policy through the following kubelet flags:
|
||||
|
||||
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
|
||||
garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
|
||||
2. `maximum-dead-containers-per-container`, maximum number of old instances to be retained
|
||||
per container. Default is 1.
|
||||
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
|
||||
Default is -1, which means there is no global limit.
|
||||
|
||||
Containers can potentially be garbage collected before their usefulness has expired. These containers
|
||||
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
|
||||
`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
|
||||
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
|
||||
similar reason.
|
||||
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
|
||||
|
||||
|
||||
## Deprecation
|
||||
|
||||
Some kubelet Garbage Collection features in this doc will be replaced by kubelet eviction in the future.
|
||||
|
||||
Including:
|
||||
|
||||
| Existing Flag | New Flag | Rationale |
|
||||
| ------------- | -------- | --------- |
|
||||
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
|
||||
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
|
||||
| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
|
||||
| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
|
||||
| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
|
||||
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
|
||||
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details.
|
||||
|
|
@ -81,10 +81,13 @@ rotate an application's logs automatically.
|
|||
|
||||
As an example, you can find detailed information about how `kube-up.sh` sets
|
||||
up logging for COS image on GCP in the corresponding
|
||||
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
|
||||
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh).
|
||||
|
||||
When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. The kubelet
|
||||
sends this information to the CRI container runtime and the runtime writes the container logs to the given location. The two kubelet flags `container-log-max-size` and `container-log-max-files` can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
|
||||
When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure.
|
||||
The kubelet sends this information to the CRI container runtime and the runtime writes the container logs to the given location.
|
||||
The two kubelet parameters [`containerLogMaxSize` and `containerLogMaxFiles`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
in [kubelet config file](/docs/tasks/administer-cluster/kubelet-config-file/)
|
||||
can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
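For example, a kubelet configuration file fragment that sets both parameters could look like this sketch (the values shown match the documented defaults):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # maximum size of a container log file before it is rotated
containerLogMaxFiles: 5     # maximum number of rotated log files kept per container
```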
|
||||
|
||||
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
|
||||
the basic logging example, the kubelet on the node handles the request and
|
||||
|
|
|
@ -50,7 +50,7 @@ It is a recommended practice to put resources related to the same microservice o
|
|||
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/nginx/nginx-deployment.yaml
|
||||
```
|
||||
|
||||
```shell
|
||||
|
@ -160,7 +160,7 @@ If you're interested in learning more about `kubectl`, go ahead and read [kubect
|
|||
|
||||
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
|
||||
|
||||
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
|
||||
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
|
|
|
@ -0,0 +1,89 @@
|
|||
---
|
||||
title: Traces For Kubernetes System Components
|
||||
reviewers:
|
||||
- logicalhan
|
||||
- lilic
|
||||
content_type: concept
|
||||
weight: 60
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
|
||||
|
||||
System component traces record the latency of and relationships between operations in the cluster.
|
||||
|
||||
Kubernetes components emit traces using the
|
||||
[OpenTelemetry Protocol](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#opentelemetry-protocol-specification)
|
||||
with the gRPC exporter and can be collected and routed to tracing backends using an
|
||||
[OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector).
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Trace Collection
|
||||
|
||||
For a complete guide to collecting traces and using the collector, see
|
||||
[Getting Started with the OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/).
|
||||
However, there are a few things to note that are specific to Kubernetes components.
|
||||
|
||||
By default, Kubernetes components export traces using the grpc exporter for OTLP on the
|
||||
[IANA OpenTelemetry port](https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=opentelemetry), 4317.
|
||||
As an example, if the collector is running as a sidecar to a Kubernetes component,
|
||||
the following receiver configuration will collect spans and log them to standard output:
|
||||
|
||||
```yaml
|
||||
receivers:
|
||||
otlp:
|
||||
protocols:
|
||||
grpc:
|
||||
exporters:
|
||||
# Replace this exporter with the exporter for your backend
|
||||
logging:
|
||||
logLevel: debug
|
||||
service:
|
||||
pipelines:
|
||||
traces:
|
||||
receivers: [otlp]
|
||||
exporters: [logging]
|
||||
```
|
||||
|
||||
## Component traces
|
||||
|
||||
### kube-apiserver traces
|
||||
|
||||
The kube-apiserver generates spans for incoming HTTP requests, and for outgoing requests
|
||||
to webhooks, etcd, and re-entrant requests. It propagates the
|
||||
[W3C Trace Context](https://www.w3.org/TR/trace-context/) with outgoing requests
|
||||
but does not make use of the trace context attached to incoming requests,
|
||||
as the kube-apiserver is often a public endpoint.
|
||||
|
||||
#### Enabling tracing in the kube-apiserver
|
||||
|
||||
To enable tracing, enable the `APIServerTracing`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
on the kube-apiserver. Also, provide the kube-apiserver with a tracing configuration file
|
||||
with `--tracing-config-file=<path-to-config>`. This is an example config that records
|
||||
spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:
|
||||
|
||||
```yaml
|
||||
apiVersion: apiserver.config.k8s.io/v1alpha1
|
||||
kind: TracingConfiguration
|
||||
# default value
|
||||
#endpoint: localhost:4317
|
||||
samplingRatePerMillion: 100
|
||||
```
|
||||
|
||||
For more information about the `TracingConfiguration` struct, see
|
||||
[API server config API (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration).
|
||||
|
||||
## Stability
|
||||
|
||||
Tracing instrumentation is still under active development, and may change
|
||||
in a variety of ways. This includes span names, attached attributes,
|
||||
instrumented endpoints, etc. Until this feature graduates to stable,
|
||||
there are no guarantees of backwards compatibility for tracing instrumentation.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read about [Getting Started with the OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/)
|
||||
|
|
@ -61,6 +61,11 @@ You can write a Pod `spec` that refers to a ConfigMap and configures the contain
|
|||
in that Pod based on the data in the ConfigMap. The Pod and the ConfigMap must be in
|
||||
the same {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
|
||||
|
||||
{{< note >}}
|
||||
The `spec` of a {{< glossary_tooltip text="static Pod" term_id="static-pod" >}} cannot refer to a ConfigMap
|
||||
or any other API objects.
|
||||
{{< /note >}}
|
||||
|
||||
Here's an example ConfigMap that has some keys with single values,
|
||||
and other keys where the value looks like a fragment of a configuration
|
||||
format.
|
||||
|
|
|
@ -115,7 +115,7 @@ CPU is always requested as an absolute quantity, never as a relative quantity;
|
|||
|
||||
Limits and requests for `memory` are measured in bytes. You can express memory as
|
||||
a plain integer or as a fixed-point number using one of these suffixes:
|
||||
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
|
||||
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
|
||||
Mi, Ki. For example, the following represent roughly the same value:
|
||||
|
||||
```shell
|
||||
|
@ -181,8 +181,9 @@ When using Docker:
|
|||
flag in the `docker run` command.
|
||||
|
||||
- The `spec.containers[].resources.limits.cpu` is converted to its millicore value and
|
||||
multiplied by 100. The resulting value is the total amount of CPU time that a container can use
|
||||
every 100ms. A container cannot use more than its share of CPU time during this interval.
|
||||
multiplied by 100. The resulting value is the total amount of CPU time in microseconds
|
||||
that a container can use every 100ms. A container cannot use more than its share of
|
||||
CPU time during this interval.
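As a worked example of that conversion, a sketch assuming the default 100ms quota period:

```yaml
# limits.cpu: 500m -> millicore value 500, multiplied by 100 = 50000
# microseconds of CPU time per 100ms period, i.e. at most half a CPU.
resources:
  limits:
    cpu: 500m
```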
|
||||
|
||||
{{< note >}}
|
||||
The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
|
||||
|
@ -337,6 +338,9 @@ spec:
|
|||
ephemeral-storage: "2Gi"
|
||||
limits:
|
||||
ephemeral-storage: "4Gi"
|
||||
volumeMounts:
|
||||
- name: ephemeral
|
||||
mountPath: "/tmp"
|
||||
- name: log-aggregator
|
||||
image: images.my-company.example/log-aggregator:v6
|
||||
resources:
|
||||
|
@ -344,6 +348,12 @@ spec:
|
|||
ephemeral-storage: "2Gi"
|
||||
limits:
|
||||
ephemeral-storage: "4Gi"
|
||||
volumeMounts:
|
||||
- name: ephemeral
|
||||
mountPath: "/tmp"
|
||||
volumes:
|
||||
- name: ephemeral
|
||||
emptyDir: {}
|
||||
```
|
||||
|
||||
### How Pods with ephemeral-storage requests are scheduled
|
||||
|
|
|
@ -17,6 +17,11 @@ a *kubeconfig file*. This is a generic way of referring to configuration files.
|
|||
It does not mean that there is a file named `kubeconfig`.
|
||||
{{< /note >}}
|
||||
|
||||
{{< warning >}}
|
||||
Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure.
|
||||
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.
|
||||
{{< /warning>}}
|
||||
|
||||
By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory.
|
||||
You can specify other kubeconfig files by setting the `KUBECONFIG` environment
|
||||
variable or by setting the
|
||||
|
@ -154,4 +159,3 @@ are stored absolutely.
|
|||
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -21,7 +21,7 @@ This is a living document. If you think of something that is not on this list bu
|
|||
|
||||
- Write your configuration files using YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly.
|
||||
|
||||
- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
|
||||
- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
|
||||
|
||||
- Note also that many `kubectl` commands can be called on a directory. For example, you can call `kubectl apply` on a directory of config files.
|
||||
|
||||
|
@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN
|
|||
|
||||
## Using Labels
|
||||
|
||||
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) app for examples of this approach.
|
||||
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.
|
||||
|
||||
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).
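For example, a Service that selects only the stable labels can keep serving traffic across releases; a minimal sketch with hypothetical names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend    # hypothetical name
spec:
  selector:
    app: myapp
    tier: frontend        # no release- or deployment-specific labels here
  ports:
  - port: 80
    targetPort: 8080
```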
|
||||
|
||||
|
@ -73,32 +73,6 @@ A desired state of an object is described by a Deployment, and if changes to tha
|
|||
|
||||
- You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).
|
||||
|
||||
## Container Images
|
||||
|
||||
The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull the specified image.
|
||||
|
||||
- `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally.
|
||||
|
||||
- `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.
|
||||
|
||||
- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `imagePullPolicy` is automatically set to `Always`. Note that this will _not_ be updated to `IfNotPresent` if the tag changes value.
|
||||
|
||||
- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `imagePullPolicy` is automatically set to `IfNotPresent`. Note that this will _not_ be updated to `Always` if the tag is later removed or changed to `:latest`.
|
||||
|
||||
- `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image.
|
||||
|
||||
{{< note >}}
|
||||
To make sure the container always uses the same version of the image, you can specify its [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier); replace `<image-name>:<tag>` with `<image-name>@<digest>` (for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`). The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
You should avoid using the `:latest` tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient, as long as the registry is reliably accessible. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
|
||||
{{< /note >}}
|
||||
|
||||
## Using kubectl
|
||||
|
||||
- Use `kubectl apply -f <directory>`. This looks for Kubernetes configuration in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes it to `apply`.
|
||||
|
|
|
@ -12,26 +12,33 @@ weight: 30
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
Kubernetes Secrets let you store and manage sensitive information, such
|
||||
as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret
|
||||
is safer and more flexible than putting it verbatim in a
|
||||
{{< glossary_tooltip term_id="pod" >}} definition or in a
|
||||
{{< glossary_tooltip text="container image" term_id="image" >}}.
|
||||
See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
|
||||
|
||||
A Secret is an object that contains a small amount of sensitive data such as
|
||||
a password, a token, or a key. Such information might otherwise be put in a
|
||||
Pod specification or in an image. Users can create Secrets and the system
|
||||
also creates some Secrets.
|
||||
{{< glossary_tooltip term_id="pod" >}} specification or in a
|
||||
{{< glossary_tooltip text="container image" term_id="image" >}}. Using a
|
||||
Secret means that you don't need to include confidential data in your
|
||||
application code.
|
||||
|
||||
Because Secrets can be created independently of the Pods that use them, there
|
||||
is less risk of the Secret (and its data) being exposed during the workflow of
|
||||
creating, viewing, and editing Pods. Kubernetes, and applications that run in
|
||||
your cluster, can also take additional precautions with Secrets, such as
|
||||
avoiding writing confidential data to nonvolatile storage.
|
||||
|
||||
Secrets are similar to {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}}
|
||||
but are specifically intended to hold confidential data.
|
||||
|
||||
{{< caution >}}
|
||||
Kubernetes Secrets are, by default, stored as unencrypted base64-encoded
|
||||
strings. By default they can be retrieved - as plain text - by anyone with API
|
||||
access, or anyone with access to Kubernetes' underlying data store, etcd. In
|
||||
order to safely use Secrets, it is recommended you (at a minimum):
|
||||
Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd.
|
||||
Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
|
||||
|
||||
In order to safely use Secrets, take at least the following steps:
|
||||
|
||||
1. [Enable Encryption at Rest](/docs/tasks/administer-cluster/encrypt-data/) for Secrets; a sketch of such a configuration appears after this caution.
|
||||
2. [Enable or configure RBAC rules](/docs/reference/access-authn-authz/authorization/) that restrict reading and writing the Secret. Be aware that secrets can be obtained implicitly by anyone with the permission to create a Pod.
|
||||
2. Enable or configure [RBAC rules](/docs/reference/access-authn-authz/authorization/) that
|
||||
restrict reading data in Secrets (including via indirect means).
|
||||
3. Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed to create new Secrets or replace existing ones.
|
||||
|
||||
{{< /caution >}}
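For step 1 above, the API server reads an encryption configuration file. The following is only a minimal sketch, assuming an AES-CBC provider and a placeholder key; see the linked task page for the full set of options:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                 # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder, not a real key
      - identity: {}            # fallback so existing plaintext data stays readable
```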
|
||||
|
||||
<!-- body -->
|
||||
|
@ -47,6 +54,10 @@ A Secret can be used with a Pod in three ways:
|
|||
- As [container environment variable](#using-secrets-as-environment-variables).
|
||||
- By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod.
|
||||
|
||||
The Kubernetes control plane also uses Secrets; for example,
|
||||
[bootstrap token Secrets](#bootstrap-token-secrets) are a mechanism to
|
||||
help automate node registration.
|
||||
|
||||
The name of a Secret object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
You can specify the `data` and/or the `stringData` field when creating a
|
||||
|
@ -64,9 +75,9 @@ precedence.
|
|||
## Types of Secret {#secret-types}
|
||||
|
||||
When creating a Secret, you can specify its type using the `type` field of
|
||||
the [`Secret`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
|
||||
resource, or certain equivalent `kubectl` command line flags (if available).
|
||||
The Secret type is used to facilitate programmatic handling of the Secret data.
|
||||
a Secret resource, or certain equivalent `kubectl` command line flags (if available).
|
||||
The `type` of a Secret is used to facilitate programmatic handling of different
|
||||
kinds of confidential data.
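For example, a minimal manifest for a generic (`Opaque`) Secret might look like the sketch below; the name and keys are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials     # placeholder name
type: Opaque                    # the default type for arbitrary user-defined data
stringData:                     # convenience field; values are encoded into data on write
  username: admin               # placeholder values, not real credentials
  password: "t0p-Secret"
```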
|
||||
|
||||
Kubernetes provides several builtin types for some common usage scenarios.
|
||||
These types vary in terms of the validations performed and the constraints
|
||||
|
@ -822,7 +833,10 @@ are obtained from the API server.
|
|||
This includes any Pods created using `kubectl`, or indirectly via a replication
|
||||
controller. It does not include Pods created as a result of the kubelet
|
||||
`--manifest-url` flag, its `--config` flag, or its REST API (these are
|
||||
not common ways to create Pods.)
|
||||
not common ways to create Pods).
|
||||
The `spec` of a {{< glossary_tooltip text="static Pod" term_id="static-pod" >}} cannot refer to a Secret
|
||||
or any other API objects.
|
||||
|
||||
|
||||
Secrets must be created before they are consumed in Pods as environment
|
||||
variables unless they are marked as optional. References to secrets that do
|
||||
|
@ -1164,7 +1178,7 @@ limit access using [authorization policies](
|
|||
Secrets often hold values that span a spectrum of importance, many of which can
|
||||
cause escalations within Kubernetes (e.g. service account tokens) and to
|
||||
external systems. Even if an individual app can reason about the power of the
|
||||
secrets it expects to interact with, other apps within the same namespace can
|
||||
Secrets it expects to interact with, other apps within the same namespace can
|
||||
render those assumptions invalid.
|
||||
|
||||
For these reasons `watch` and `list` requests for secrets within a namespace are
|
||||
|
@ -1235,15 +1249,10 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa
|
|||
- A user who can create a Pod that uses a secret can also see the value of that secret. Even
|
||||
if the API server policy does not allow that user to read the Secret, the user could
|
||||
run a Pod which exposes the secret.
|
||||
- Currently, anyone with root permission on any node can read _any_ secret from the API server,
|
||||
by impersonating the kubelet. It is a planned feature to only send secrets to
|
||||
nodes that actually require them, to restrict the impact of a root exploit on a
|
||||
single node.
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
- Learn how to [manage Secret using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
|
||||
- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
|
||||
- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
|
||||
|
||||
- Read the [API reference](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) for `Secret`
|
||||
|
|
|
@ -52,7 +52,7 @@ FOO_SERVICE_PORT=<the port the service is running on>
|
|||
```
|
||||
|
||||
Services have dedicated IP addresses and are available to the Container via DNS,
|
||||
if [DNS addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled.
|
||||
if the [DNS addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/) is enabled.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -39,14 +39,6 @@ There are additional rules about where you can place the separator
|
|||
characters (`_`, `-`, and `.`) inside an image tag.
|
||||
If you don't specify a tag, Kubernetes assumes you mean the tag `latest`.
|
||||
|
||||
{{< caution >}}
|
||||
You should avoid using the `latest` tag when deploying containers in production,
|
||||
as it is harder to track which version of the image is running and more difficult
|
||||
to roll back to a working version.
|
||||
|
||||
Instead, specify a meaningful tag such as `v1.42.0`.
|
||||
{{< /caution >}}
|
||||
|
||||
## Updating images
|
||||
|
||||
When you first create a {{< glossary_tooltip text="Deployment" term_id="deployment" >}},
|
||||
|
@ -57,13 +49,68 @@ specified. This policy causes the
|
|||
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip pulling an
|
||||
image if it already exists.
|
||||
|
||||
If you would like to always force a pull, you can do one of the following:
|
||||
### Image pull policy
|
||||
|
||||
- set the `imagePullPolicy` of the container to `Always`.
|
||||
- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use;
|
||||
Kubernetes will set the policy to `Always`.
|
||||
- omit the `imagePullPolicy` and the tag for the image to use.
|
||||
- enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
|
||||
The `imagePullPolicy` for a container and the tag of the image affect when the
|
||||
[kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull (download) the specified image.
|
||||
|
||||
Here's a list of the values you can set for `imagePullPolicy` and the effects
|
||||
these values have:
|
||||
|
||||
`IfNotPresent`
|
||||
: the image is pulled only if it is not already present locally.
|
||||
|
||||
`Always`
|
||||
: every time the kubelet launches a container, the kubelet queries the container
|
||||
image registry to resolve the name to an image
|
||||
[digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier). If the kubelet has a
|
||||
container image with that exact digest cached locally, the kubelet uses its cached
|
||||
image; otherwise, the kubelet pulls the image with the resolved digest,
|
||||
and uses that image to launch the container.
|
||||
|
||||
`Never`
|
||||
: the kubelet does not try fetching the image. If the image is somehow already present
|
||||
locally, the kubelet attempts to start the container; otherwise, startup fails.
|
||||
See [pre-pulled images](#pre-pulled-images) for more details.
|
||||
|
||||
The caching semantics of the underlying image provider make even
|
||||
`imagePullPolicy: Always` efficient, as long as the registry is reliably accessible.
|
||||
Your container runtime can notice that the image layers already exist on the node
|
||||
so that they don't need to be downloaded again.
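You set the policy per container in the Pod spec. A minimal sketch (the image reference is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-example               # placeholder Pod name
spec:
  containers:
    - name: app
      image: registry.example/app:v1.42.0 # placeholder image reference
      imagePullPolicy: IfNotPresent       # pull only if the image is not already present locally
```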
|
||||
|
||||
{{< note >}}
|
||||
You should avoid using the `:latest` tag when deploying containers in production as
|
||||
it is harder to track which version of the image is running and more difficult to
|
||||
roll back properly.
|
||||
|
||||
Instead, specify a meaningful tag such as `v1.42.0`.
|
||||
{{< /note >}}
|
||||
|
||||
To make sure the Pod always uses the same version of a container image, you can specify
|
||||
the image's digest;
|
||||
replace `<image-name>:<tag>` with `<image-name>@<digest>`
|
||||
(for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`).
|
||||
|
||||
When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs the same code every time it starts a container with that image name and digest specified. Specifying an image by digest pins the code that you run, so that a change at the registry cannot lead to that mix of versions.
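In a Pod spec, pinning by digest looks like the following sketch; the repository is a placeholder and the digest is the sample value shown above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: digest-pinned-example   # placeholder Pod name
spec:
  containers:
    - name: app
      # <image-name>@<digest> instead of <image-name>:<tag>
      image: registry.example/app@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```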
|
||||
|
||||
There are third-party [admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
|
||||
that mutate Pods (and pod templates) when they are created, so that the
|
||||
running workload is defined based on an image digest rather than a tag.
|
||||
That might be useful if you want to make sure that all your workload is
|
||||
running the same code no matter what tag changes happen at the registry.
|
||||
|
||||
#### Default image pull policy {#imagepullpolicy-defaulting}
|
||||
|
||||
When you (or a controller) submit a new Pod to the API server, your cluster sets the
|
||||
`imagePullPolicy` field when specific conditions are met:
|
||||
|
||||
- if you omit the `imagePullPolicy` field, and the tag for the container image is
|
||||
`:latest`, `imagePullPolicy` is automatically set to `Always`;
|
||||
- if you omit the `imagePullPolicy` field, and you don't specify the tag for the
|
||||
container image, `imagePullPolicy` is automatically set to `Always`;
|
||||
- if you omit the `imagePullPolicy` field, and you specify a tag for the
|
||||
container image that isn't `:latest`, the `imagePullPolicy` is automatically set to
|
||||
`IfNotPresent`.
|
||||
|
||||
{{< note >}}
|
||||
The value of `imagePullPolicy` of the container is always set when the object is
|
||||
|
@ -75,7 +122,31 @@ For example, if you create a Deployment with an image whose tag is _not_
|
|||
the pull policy of any object after its initial creation.
|
||||
{{< /note >}}
|
||||
|
||||
When `imagePullPolicy` is defined without a specific value, it is also set to `Always`.
|
||||
#### Required image pull
|
||||
|
||||
If you would like to always force a pull, you can do one of the following:
|
||||
|
||||
- Set the `imagePullPolicy` of the container to `Always`.
|
||||
- Omit the `imagePullPolicy` and use `:latest` as the tag for the image to use;
|
||||
Kubernetes will set the policy to `Always` when you submit the Pod.
|
||||
- Omit the `imagePullPolicy` and the tag for the image to use;
|
||||
Kubernetes will set the policy to `Always` when you submit the Pod.
|
||||
- Enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
|
||||
|
||||
|
||||
### ImagePullBackOff
|
||||
|
||||
When a kubelet starts creating containers for a Pod using a container runtime,
|
||||
a container might end up in the [Waiting](/docs/concepts/workloads/pods/pod-lifecycle/#container-state-waiting)
|
||||
state because of `ImagePullBackOff`.
|
||||
|
||||
The status `ImagePullBackOff` means that a container could not start because Kubernetes
|
||||
could not pull a container image (for reasons such as invalid image name, or pulling
|
||||
from a private registry without `imagePullSecret`). The `BackOff` part indicates
|
||||
that Kubernetes will keep trying to pull the image, with an increasing back-off delay.
|
||||
|
||||
Kubernetes raises the delay between each attempt until it reaches a compiled-in limit,
|
||||
which is 300 seconds (5 minutes).
|
||||
|
||||
## Multi-architecture images with image indexes
|
||||
|
||||
|
@ -314,6 +385,8 @@ common use cases and suggested solutions.
|
|||
If you need access to multiple registries, you can create one secret for each registry.
|
||||
The kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`.
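For example (the Secret and image names below are placeholders), a Pod that pulls from two private registries references both Secrets:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-registry-example        # placeholder Pod name
spec:
  imagePullSecrets:
    - name: registry-one-creds        # one registry-credential Secret per registry
    - name: registry-two-creds
  containers:
    - name: app
      image: registry-one.example/team/app:v1   # placeholder image reference
```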
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md)
|
||||
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md).
|
||||
* Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection).
|
||||
|
|
|
@ -51,7 +51,7 @@ heterogeneous node configurations, see [Scheduling](#scheduling) below.
|
|||
{{< /note >}}
|
||||
|
||||
The configurations have a corresponding `handler` name, referenced by the RuntimeClass. The
|
||||
handler must be a valid DNS 1123 label (alpha-numeric + `-` characters).
|
||||
handler must be a valid [DNS label name](/docs/concepts/overview/working-with-objects/names/#dns-label-names).
|
||||
|
||||
### 2. Create the corresponding RuntimeClass resources
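As a sketch, assuming a handler named `myconfiguration` was configured in step 1, the corresponding RuntimeClass might look like:

```yaml
apiVersion: node.k8s.io/v1      # RuntimeClass is a cluster-scoped (non-namespaced) resource
kind: RuntimeClass
metadata:
  name: myclass                 # the name Pods reference via spec.runtimeClassName
handler: myconfiguration        # must match a handler configured in the CRI runtime
```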
|
||||
|
||||
|
@ -118,7 +118,7 @@ Runtime handlers are configured through containerd's configuration at
|
|||
`/etc/containerd/config.toml`. Valid handlers are configured under the runtimes section:
|
||||
|
||||
```
|
||||
[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
|
||||
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
|
||||
```
|
||||
|
||||
See containerd's config documentation for more details:
|
||||
|
@ -135,7 +135,7 @@ table](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md#crioruntim
|
|||
runtime_path = "${PATH_TO_BINARY}"
|
||||
```
|
||||
|
||||
See CRI-O's [config documentation](https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md) for more details.
|
||||
See CRI-O's [config documentation](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md) for more details.
|
||||
|
||||
## Scheduling
|
||||
|
||||
|
@ -179,4 +179,4 @@ are accounted for in Kubernetes.
|
|||
- [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
|
||||
- [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
|
||||
- Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept
|
||||
- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
|
||||
- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Extending the Kubernetes API with the aggregation layer
|
||||
title: Kubernetes API Aggregation Layer
|
||||
reviewers:
|
||||
- lavalamp
|
||||
- cheftako
|
||||
|
@ -34,7 +34,7 @@ If your extension API server cannot achieve that latency requirement, consider m
|
|||
|
||||
* To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/).
|
||||
* Then, [set up an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer.
|
||||
* Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
|
||||
* Read the specification for [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io)
|
||||
* Read about [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) in the API reference
|
||||
|
||||
Alternatively: learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
|
||||
|
||||
|
|
|
@ -167,7 +167,7 @@ CRDs are easier to create than Aggregated APIs.
|
|||
|
||||
| CRDs | Aggregated API |
|
||||
| --------------------------- | -------------- |
|
||||
| Do not require programming. Users can choose any language for a CRD controller. | Requires programming in Go and building binary and image. |
|
||||
| Do not require programming. Users can choose any language for a CRD controller. | Requires programming and building binary and image. |
|
||||
| No additional service to run; CRDs are handled by API server. | An additional service to create, which could fail. |
|
||||
| No ongoing support once the CRD is created. Any bug fixes are picked up as part of normal Kubernetes Master upgrades. | May need to periodically pickup bug fixes from upstream and rebuild and update the Aggregated API server. |
|
||||
| No need to handle multiple versions of your API; for example, when you control the client for this resource, you can upgrade it in sync with the API. | You need to handle multiple versions of your API; for example, when developing an extension to share with the world. |
|
||||
|
|
|
@ -199,7 +199,7 @@ service PodResourcesLister {
|
|||
|
||||
The `List` endpoint provides information on resources of running pods, with details such as the
|
||||
id of exclusively allocated CPUs, device id as it was reported by device plugins and id of
|
||||
the NUMA node where these devices are allocated.
|
||||
the NUMA node where these devices are allocated. For NUMA-based machines, it also includes information about the memory and hugepages reserved for a container.
|
||||
|
||||
```gRPC
|
||||
// ListPodResourcesResponse is the response returned by List function
|
||||
|
@ -219,6 +219,14 @@ message ContainerResources {
|
|||
string name = 1;
|
||||
repeated ContainerDevices devices = 2;
|
||||
repeated int64 cpu_ids = 3;
|
||||
repeated ContainerMemory memory = 4;
|
||||
}
|
||||
|
||||
// ContainerMemory contains information about memory and hugepages assigned to a container
|
||||
message ContainerMemory {
|
||||
string memory_type = 1;
|
||||
uint64 size = 2;
|
||||
TopologyInfo topology = 3;
|
||||
}
|
||||
|
||||
// Topology describes hardware topology of the resource
|
||||
|
@ -247,6 +255,7 @@ It provides more information than kubelet exports to APIServer.
|
|||
message AllocatableResourcesResponse {
|
||||
repeated ContainerDevices devices = 1;
|
||||
repeated int64 cpu_ids = 2;
|
||||
repeated ContainerMemory memory = 3;
|
||||
}
|
||||
|
||||
```
|
||||
|
|
|
@ -51,8 +51,7 @@ Some of the things that you can use an operator to automate include:
|
|||
* choosing a leader for a distributed application without an internal
|
||||
member election process
|
||||
|
||||
What might an Operator look like in more detail? Here's an example in more
|
||||
detail:
|
||||
What might an Operator look like in more detail? Here's an example:
|
||||
|
||||
1. A custom resource named SampleDB, that you can configure into the cluster.
|
||||
2. A Deployment that makes sure a Pod is running that contains the
|
||||
|
@ -115,8 +114,9 @@ Operator.
|
|||
|
||||
* [Charmed Operator Framework](https://juju.is/)
|
||||
* [kubebuilder](https://book.kubebuilder.io/)
|
||||
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK)
|
||||
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
|
||||
* [Metacontroller](https://metacontroller.app/) along with WebHooks that
|
||||
* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) along with WebHooks that
|
||||
you implement yourself
|
||||
* [Operator Framework](https://operatorframework.io)
|
||||
* [shell-operator](https://github.com/flant/shell-operator)
|
||||
|
@ -124,6 +124,7 @@ Operator.
|
|||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Read the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} [Operator White Paper](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md).
|
||||
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
* Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case
|
||||
* [Publish](https://operatorhub.io/) your operator for other people to use
|
||||
|
|
|
@ -16,7 +16,7 @@ card:
|
|||
When you deploy Kubernetes, you get a cluster.
|
||||
{{< glossary_definition term_id="cluster" length="all" prepend="A Kubernetes cluster consists of">}}
|
||||
|
||||
This document outlines the various components you need to have
|
||||
This document outlines the various components you need to have for
|
||||
a complete and working Kubernetes cluster.
|
||||
|
||||
Here's the diagram of a Kubernetes cluster with all the components tied together.
|
||||
|
|
|
@ -45,7 +45,7 @@ Containers have become popular because they provide extra benefits, such as:
|
|||
* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
|
||||
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability).
|
||||
* Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
|
||||
* Observability not only surfaces OS-level information and metrics, but also application health and other signals.
|
||||
* Observability: not only surfaces OS-level information and metrics, but also application health and other signals.
|
||||
* Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
|
||||
* Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
|
||||
* Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
|
||||
|
|
|
@ -30,6 +30,11 @@ Annotations, like labels, are key/value maps:
|
|||
}
|
||||
```
|
||||
|
||||
{{<note>}}
|
||||
The keys and the values in the map must be strings. In other words, you cannot use
|
||||
numeric, boolean, list or other types for either the keys or the values.
|
||||
{{</note>}}
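For example (hypothetical keys and values), numeric- or boolean-looking values must be quoted so they are stored as strings:

```yaml
apiVersion: v1
kind: ConfigMap                  # annotations work the same way on any object kind
metadata:
  name: annotations-example      # placeholder name
  annotations:
    example.com/build-number: "42"        # quoted; an unquoted 42 would not be a string
    example.com/rollout-approved: "true"  # booleans are also written as strings
data: {}
```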
|
||||
|
||||
Here are some examples of information that could be recorded in annotations:
|
||||
|
||||
* Fields managed by a declarative configuration layer. Attaching these fields
|
||||
|
|
|
@ -48,7 +48,7 @@ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Alway
|
|||
|
||||
## Multiple resource types
|
||||
|
||||
You use field selectors across multiple resource types. This `kubectl` command selects all Statefulsets and Services that are not in the `default` namespace:
|
||||
You can use field selectors across multiple resource types. This `kubectl` command selects all StatefulSets and Services that are not in the `default` namespace:
|
||||
|
||||
```shell
|
||||
kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default
|
||||
|
|
|
@ -0,0 +1,80 @@
|
|||
---
|
||||
title: Finalizers
|
||||
content_type: concept
|
||||
weight: 60
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{<glossary_definition term_id="finalizer" length="long">}}
|
||||
|
||||
You can use finalizers to control {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}
|
||||
of resources by alerting {{<glossary_tooltip text="controllers" term_id="controller">}} to perform specific cleanup tasks before
|
||||
deleting the target resource.
|
||||
|
||||
Finalizers don't usually specify the code to execute. Instead, they are
|
||||
typically lists of keys on a specific resource similar to annotations.
|
||||
Kubernetes specifies some finalizers automatically, but you can also specify
|
||||
your own.
|
||||
|
||||
## How finalizers work
|
||||
|
||||
When you create a resource using a manifest file, you can specify finalizers in
|
||||
the `metadata.finalizers` field. When you attempt to delete the resource, the
|
||||
controller that manages it notices the values in the `finalizers` field and does
|
||||
the following:
|
||||
|
||||
* Modifies the object to add a `metadata.deletionTimestamp` field with the
|
||||
time you started the deletion.
|
||||
* Marks the object as read-only until its `metadata.finalizers` field is empty.
|
||||
|
||||
The controller then attempts to satisfy the requirements of the finalizers
|
||||
specified for that resource. Each time a finalizer condition is satisfied, the
|
||||
controller removes that key from the resource's `finalizers` field. When the
|
||||
field is empty, garbage collection continues. You can also use finalizers to
|
||||
prevent deletion of unmanaged resources.
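As a sketch (the finalizer key and object below are made up for illustration), a finalizer is listed in the object's metadata:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: finalizer-example                 # placeholder object
  finalizers:
    - example.com/cleanup-external-state  # hypothetical key; a controller must remove it before deletion completes
data: {}
```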
|
||||
|
||||
A common example of a finalizer is `kubernetes.io/pv-protection`, which prevents
|
||||
accidental deletion of `PersistentVolume` objects. When a `PersistentVolume`
|
||||
object is in use by a Pod, Kubernetes adds the `pv-protection` finalizer. If you
|
||||
try to delete the `PersistentVolume`, it enters a `Terminating` status, but the
|
||||
controller can't delete it because the finalizer exists. When the Pod stops
|
||||
using the `PersistentVolume`, Kubernetes clears the `pv-protection` finalizer,
|
||||
and the controller deletes the volume.
|
||||
|
||||
## Owner references, labels, and finalizers {#owners-labels-finalizers}
|
||||
|
||||
Like {{<glossary_tooltip text="labels" term_id="label">}}, [owner references](/docs/concepts/overview/working-with-objects/owners-dependents/)
|
||||
describe the relationships between objects in Kubernetes, but are used for a
|
||||
different purpose. When a
|
||||
{{<glossary_tooltip text="controller" term_id="controller">}} manages objects
|
||||
like Pods, it uses labels to track changes to groups of related objects. For
|
||||
example, when a {{<glossary_tooltip text="Job" term_id="job">}} creates one or
|
||||
more Pods, the Job controller applies labels to those pods and tracks changes to
|
||||
any Pods in the cluster with the same label.
|
||||
|
||||
The Job controller also adds *owner references* to those Pods, pointing at the
|
||||
Job that created the Pods. If you delete the Job while these Pods are running,
|
||||
Kubernetes uses the owner references (not labels) to determine which Pods in the
|
||||
cluster need cleanup.
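For illustration (the names and UID are placeholders), the owner reference that the Job controller adds to such a Pod looks roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pi-7xk9q                 # placeholder name for a Pod created by a Job named "pi"
  ownerReferences:
    - apiVersion: batch/v1
      kind: Job
      name: pi                   # the owning Job
      uid: 0a1b2c3d-0000-0000-0000-000000000000   # placeholder UID
      controller: true
      blockOwnerDeletion: true
```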
|
||||
|
||||
Kubernetes also processes finalizers when it identifies owner references on a
|
||||
resource targeted for deletion.
|
||||
|
||||
In some situations, finalizers can block the deletion of dependent objects,
|
||||
which can cause the targeted owner object to remain in a read-only state for
|
||||
longer than expected without being fully deleted. In these situations, you
|
||||
should check finalizers and owner references on the target owner and dependent
|
||||
objects to troubleshoot the cause.
|
||||
|
||||
{{<note>}}
|
||||
In cases where objects are stuck in a deleting state, try to avoid manually
|
||||
removing finalizers to allow deletion to continue. Finalizers are usually added
|
||||
to resources for a reason, so forcefully removing them can lead to issues in
|
||||
your cluster.
|
||||
{{</note>}}
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/)
|
||||
on the Kubernetes blog.
|