diff --git a/.gitignore b/.gitignore index 6a629010d04..a1bead7d255 100644 --- a/.gitignore +++ b/.gitignore @@ -29,6 +29,7 @@ nohup.out # Hugo output public/ resources/ +.hugo_build.lock # Netlify Functions build output package-lock.json diff --git a/Makefile b/Makefile index 4b3ca99864f..dedd387eb6a 100644 --- a/Makefile +++ b/Makefile @@ -9,7 +9,7 @@ CONTAINER_ENGINE ?= docker IMAGE_REGISTRY ?= gcr.io/k8s-staging-sig-docs IMAGE_VERSION=$(shell scripts/hash-files.sh Dockerfile Makefile | cut -c 1-12) CONTAINER_IMAGE = $(IMAGE_REGISTRY)/k8s-website-hugo:v$(HUGO_VERSION)-$(IMAGE_VERSION) -CONTAINER_RUN = $(CONTAINER_ENGINE) run --rm --interactive --tty --volume $(CURDIR):/src +CONTAINER_RUN = "$(CONTAINER_ENGINE)" run --rm --interactive --tty --volume "$(CURDIR):/src" CCRED=\033[0;31m CCEND=\033[0m @@ -95,7 +95,7 @@ docker-internal-linkcheck: container-internal-linkcheck: link-checker-image-pull $(CONTAINER_RUN) $(CONTAINER_IMAGE) hugo --config config.toml,linkcheck-config.toml --buildFuture --environment test - $(CONTAINER_ENGINE) run --mount type=bind,source=$(CURDIR),target=/test --rm wjdp/htmltest htmltest + $(CONTAINER_ENGINE) run --mount "type=bind,source=$(CURDIR),target=/test" --rm wjdp/htmltest htmltest clean-api-reference: ## Clean all directories in API reference directory, preserve _index.md rm -rf content/en/docs/reference/kubernetes-api/*/ diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 296e743a845..443410ae561 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -200,6 +200,7 @@ aliases: - devlware - jhonmike - rikatz + - stormqueen1990 - yagonobre sig-docs-vi-owners: # Admins for Vietnamese content - huynguyennovem diff --git a/README-ko.md b/README-ko.md index c3e1068b2e5..fd17e2b6559 100644 --- a/README-ko.md +++ b/README-ko.md @@ -4,6 +4,9 @@ 이 저장소에는 [쿠버네티스 웹사이트 및 문서](https://kubernetes.io/)를 빌드하는 데 필요한 자산이 포함되어 있습니다. 기여해주셔서 감사합니다! 
+- [문서에 기여하기](#contributing-to-the-docs) +- [`README.md`에 대한 쿠버네티스 문서 현지화](#localization-readmemds) + # 저장소 사용하기 Hugo(확장 버전)를 사용하여 웹사이트를 로컬에서 실행하거나, 컨테이너 런타임에서 실행할 수 있습니다. 라이브 웹사이트와의 배포 일관성을 제공하므로, 컨테이너 런타임을 사용하는 것을 적극 권장합니다. @@ -40,6 +43,8 @@ make container-image make container-serve ``` +에러가 발생한다면, Hugo 컨테이너를 위한 컴퓨팅 리소스가 충분하지 않기 때문일 수 있습니다. 이를 해결하려면, 머신에서 도커에 허용할 CPU 및 메모리 사용량을 늘립니다([MacOSX](https://docs.docker.com/docker-for-mac/#resources) / [Windows](https://docs.docker.com/docker-for-windows/#resources)). + 웹사이트를 보려면 브라우저를 http://localhost:1313 으로 엽니다. 소스 파일을 변경하면 Hugo가 웹사이트를 업데이트하고 브라우저를 강제로 새로 고칩니다. ## Hugo를 사용하여 로컬에서 웹사이트 실행하기 @@ -56,7 +61,45 @@ make serve 그러면 포트 1313에서 로컬 Hugo 서버가 시작됩니다. 웹사이트를 보려면 http://localhost:1313 으로 브라우저를 엽니다. 소스 파일을 변경하면, Hugo가 웹사이트를 업데이트하고 브라우저를 강제로 새로 고칩니다. +## API 레퍼런스 페이지 빌드하기 + +`content/en/docs/reference/kubernetes-api`에 있는 API 레퍼런스 페이지는 를 사용하여 Swagger 명세로부터 빌드되었습니다. + +새로운 쿠버네티스 릴리스를 위해 레퍼런스 페이지를 업데이트하려면 다음 단계를 수행합니다. + +1. `api-ref-generator` 서브모듈을 받아옵니다. + + ```bash + git submodule update --init --recursive --depth 1 + ``` + +2. Swagger 명세를 업데이트합니다. + + ```bash + curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json + ``` + +3. `api-ref-assets/config/`에서, 새 릴리스의 변경 사항이 반영되도록 `toc.yaml` 및 `fields.yaml` 파일을 업데이트합니다. + +4. 다음으로, 페이지를 빌드합니다. + + ```bash + make api-reference + ``` + + 로컬에서 결과를 테스트하기 위해 컨테이너 이미지를 이용하여 사이트를 빌드 및 실행합니다. + + ```bash + make container-image + make container-serve + ``` + + 웹 브라우저에서, 로 이동하여 API 레퍼런스를 확인합니다. + +5. 모든 API 변경사항이 `toc.yaml` 및 `fields.yaml` 구성 파일에 반영되었다면, 새로 생성된 API 레퍼런스 페이지에 대한 PR을 엽니다. + ## 문제 해결 + ### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version Hugo는 기술적인 이유로 2개의 바이너리 세트로 제공됩니다. 현재 웹사이트는 **Hugo 확장** 버전 기반에서만 실행됩니다. 
[릴리스 페이지](https://github.com/gohugoio/hugo/releases)에서 이름에 `extended` 가 포함된 아카이브를 찾습니다. 확인하려면, `hugo version` 을 실행하고 `extended` 라는 단어를 찾습니다. @@ -97,17 +140,17 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist 이 내용은 Catalina와 Mojave macOS에서 작동합니다. - # SIG Docs에 참여하기 [커뮤니티 페이지](https://github.com/kubernetes/community/tree/master/sig-docs#meetings)에서 SIG Docs 쿠버네티스 커뮤니티 및 회의에 대한 자세한 내용을 확인합니다. 이 프로젝트의 메인테이너에게 연락을 할 수도 있습니다. -- [슬랙](https://kubernetes.slack.com/messages/sig-docs) [슬랙에 초대 받기](https://slack.k8s.io/) +- [슬랙](https://kubernetes.slack.com/messages/sig-docs) + - [슬랙에 초대 받기](https://slack.k8s.io/) - [메일링 리스트](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) -# 문서에 기여하기 +# 문서에 기여하기 {#contributing-to-the-docs} 이 저장소에 대한 복제본을 여러분의 GitHub 계정에 생성하기 위해 화면 오른쪽 위 영역에 있는 **Fork** 버튼을 클릭하면 됩니다. 이 복제본은 *fork* 라고 부릅니다. 여러분의 fork에서 원하는 임의의 변경 사항을 만들고, 해당 변경 사항을 보낼 준비가 되었다면, 여러분의 fork로 이동하여 새로운 풀 리퀘스트를 만들어 우리에게 알려주시기 바랍니다. @@ -124,7 +167,15 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist * [문서화 스타일 가이드](http://kubernetes.io/docs/contribute/style/style-guide/) * [쿠버네티스 문서 현지화](https://kubernetes.io/docs/contribute/localization/) -# `README.md`에 대한 쿠버네티스 문서 현지화(localization) +### 신규 기여자 대사(ambassadors) + +기여 과정에서 도움이 필요하다면, [신규 기여자 대사](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador)에게 연락하는 것이 좋습니다. 이들은 신규 기여자를 멘토링하고 첫 PR 과정에서 도움을 주는 역할도 담당하는 SIG Docs 승인자입니다. 신규 기여자 대사에게 문의할 가장 좋은 곳은 [쿠버네티스 슬랙](https://slack.k8s.io/)입니다. 현재 SIG Docs 신규 기여자 대사는 다음과 같습니다. 
+ +| Name | Slack | GitHub | +| -------------------------- | -------------------------- | -------------------------- | +| Arsh Sharma | @arsh | @RinkiyaKeDad | + +# `README.md`에 대한 쿠버네티스 문서 현지화(localization) {#localization-readmemds} ## 한국어 @@ -135,6 +186,7 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist * 손석호 ([GitHub - @seokho-son](https://github.com/seokho-son)) * [슬랙 채널](https://kubernetes.slack.com/messages/kubernetes-docs-ko) + # 행동 강령 쿠버네티스 커뮤니티 참여는 [CNCF 행동 강령](https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/ko.md)을 따릅니다. diff --git a/README-zh.md b/README-zh.md index e3ac58fb935..0b9dfc1ce70 100644 --- a/README-zh.md +++ b/README-zh.md @@ -80,7 +80,7 @@ To build the site in a container, run the following to build the container image 要在容器中构建网站,请通过以下命令来构建容器镜像并运行: ```bash -make container-image +# 你可以将 $CONTAINER_ENGINE 设置为任何 Docker 类容器工具的名称 make container-serve ``` @@ -257,6 +257,51 @@ This works for Catalina as well as Mojave macOS. --> 这适用于 Catalina 和 Mojave macOS。 +### 对执行 make container-image 命令部分地区访问超时的故障排除 + +现象如下: + +```shell +langs/language.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout +langs/language.go:24:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout +common/text/transform.go:21:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout +common/text/transform.go:22:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout +common/text/transform.go:23:2: golang.org/x/text@v0.3.7: Get "https://proxy.golang.org/golang.org/x/text/@v/v0.3.7.zip": dial tcp 142.251.43.17:443: i/o timeout +hugolib/integrationtest_builder.go:29:2: golang.org/x/tools@v0.1.11: Get 
"https://proxy.golang.org/golang.org/x/tools/@v/v0.1.11.zip": dial tcp 142.251.42.241:443: i/o timeout +deploy/google.go:24:2: google.golang.org/api@v0.76.0: Get "https://proxy.golang.org/google.golang.org/api/@v/v0.76.0.zip": dial tcp 142.251.43.17:443: i/o timeout +parser/metadecoders/decoder.go:32:2: gopkg.in/yaml.v2@v2.4.0: Get "https://proxy.golang.org/gopkg.in/yaml.v2/@v/v2.4.0.zip": dial tcp 142.251.42.241:443: i/o timeout +The command '/bin/sh -c mkdir $HOME/src && cd $HOME/src && curl -L https://github.com/gohugoio/hugo/archive/refs/tags/v${HUGO_VERSION}.tar.gz | tar -xz && cd "hugo-${HUGO_VERS ION}" && go install --tags extended' returned a non-zero code: 1 +make: *** [Makefile:69:container-image] error 1 +``` + +请修改 `Dockerfile` 文件,为其添加网络代理。修改内容如下: + +```dockerfile +... +FROM golang:1.18-alpine + +LABEL maintainer="Luc Perkins " + +ENV GO111MODULE=on # 需要添加内容1 + +ENV GOPROXY=https://proxy.golang.org,direct # 需要添加内容2 + +RUN apk add --no-cache \ + curl \ + gcc \ + g++ \ + musl-dev \ + build-base \ + libc6-compat + +ARG HUGO_VERSION +... +``` + +将 "https://proxy.golang.org" 替换为本地可以使用的代理地址。 + +**注意:** 此部分仅适用于中国大陆 + -Der Horizontal Pod Autoscaler skaliert automatisch die Anzahl der Pods eines Replication Controller, Deployment oder Replikat Set basierend auf der beobachteten CPU-Auslastung (oder, mit Unterstützung von [benutzerdefinierter Metriken](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md), von der Anwendung bereitgestellten Metriken). Beachte, dass die horizontale Pod Autoskalierung nicht für Objekte gilt, die nicht skaliert werden können, z. B. DaemonSets. 
+Der Horizontal Pod Autoscaler skaliert automatisch die Anzahl der Pods eines Replication Controller, Deployment oder Replikat Set basierend auf der beobachteten CPU-Auslastung (oder, mit Unterstützung von [benutzerdefinierter Metriken](https://git.k8s.io/design-proposals-archive/instrumentation/custom-metrics-api.md), von der Anwendung bereitgestellten Metriken). Beachte, dass die horizontale Pod Autoskalierung nicht für Objekte gilt, die nicht skaliert werden können, z. B. DaemonSets. Der Horizontal Pod Autoscaler ist als Kubernetes API-Ressource und einem Controller implementiert. Die Ressource bestimmt das Verhalten des Controllers. @@ -46,7 +46,7 @@ Das Verwenden von Metriken aus Heapster ist seit der Kubernetes Version 1.11 ver Siehe [Unterstützung der Metrik APIs](#unterstützung-der-metrik-apis) für weitere Details. -Der Autoscaler greift über die Scale Sub-Ressource auf die entsprechenden skalierbaren Controller (z.B. Replication Controller, Deployments und Replika Sets) zu. Scale ist eine Schnittstelle, mit der Sie die Anzahl der Replikate dynamisch einstellen und jeden ihrer aktuellen Zustände untersuchen können. Weitere Details zu der Scale Sub-Ressource findest du [hier](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource). +Der Autoscaler greift über die Scale Sub-Ressource auf die entsprechenden skalierbaren Controller (z.B. Replication Controller, Deployments und Replika Sets) zu. Scale ist eine Schnittstelle, mit der Sie die Anzahl der Replikate dynamisch einstellen und jeden ihrer aktuellen Zustände untersuchen können. Weitere Details zu der Scale Sub-Ressource findest du [hier](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#scale-subresource). 
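The HPA hunks above only repoint design-proposal links, but for reviewers skimming the patch it may help to recall the scaling rule those linked proposals describe. The following is an illustrative sketch only (Python for readability, not the controller's actual Go code); the 10% tolerance default is an assumption based on the upstream documentation:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     tolerance: float = 0.1) -> int:
    """Sketch of the HPA rule from the linked design proposal:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric),
    with no action while the ratio stays inside the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: do nothing
    return math.ceil(current_replicas * ratio)

# e.g. 4 replicas observing 200m CPU against a 100m target -> scale to 8
print(desired_replicas(4, 200, 100))
```

Running the sketch with 4 replicas at double the target metric doubles the replica count, while a 5% deviation is absorbed by the tolerance band.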
### Details zum Algorithmus @@ -90,7 +90,7 @@ Die aktuelle stabile Version, die nur die Unterstützung für die automatische S Die Beta-Version, welche die Skalierung des Speichers und benutzerdefinierte Metriken unterstützt, befindet sich unter `autoscaling/v2beta2`. Die in `autoscaling/v2beta2` neu eingeführten Felder bleiben bei der Arbeit mit `autoscaling/v1` als Anmerkungen erhalten. -Weitere Details über das API Objekt kann unter dem [HorizontalPodAutoscaler Objekt](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object) gefunden werden. +Weitere Details über das API Objekt kann unter dem [HorizontalPodAutoscaler Objekt](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object) gefunden werden. ## Unterstützung des Horizontal Pod Autoscaler in kubectl @@ -166,7 +166,7 @@ Standardmäßig ruft der HorizontalPodAutoscaler Controller Metriken aus einer R ## {{% heading "whatsnext" %}} -* Design Dokument [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md). +* Design Dokument [Horizontal Pod Autoscaling](https://git.k8s.io/design-proposals-archive/autoscaling/horizontal-pod-autoscaler.md). * kubectl autoscale Befehl: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). * Verwenden des [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). diff --git a/content/en/_index.html b/content/en/_index.html index 7557fe8f993..fbdd5d57208 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -16,7 +16,7 @@ It groups containers that make up an application into logical units for easy man {{% blocks/feature image="scalable" %}} #### Planet Scale -Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team. 
+Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team. {{% /blocks/feature %}} @@ -43,12 +43,12 @@ Kubernetes is open source giving you the freedom to take advantage of on-premise

- Attend KubeCon Europe on May 17-20, 2022
+ Attend KubeCon North America on October 24-28, 2022



- Attend KubeCon North America on October 24-28, 2022
+ Attend KubeCon Europe on April 17-21, 2023
diff --git a/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md b/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md index 38fd24372c3..4c9af11a9eb 100644 --- a/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md +++ b/content/en/blog/_posts/2018-11-08-kubernetes-docs-update-i18n.md @@ -41,7 +41,7 @@ These repo labels let reviewers filter for PRs and issues by language. For examp ### Team review -L10n teams can now review and approve their own PRs. For example, review and approval permissions for English are [assigned in an OWNERS file](https://github.com/kubernetes/website/blob/master/content/en/OWNERS) in the top subfolder for English content. +L10n teams can now review and approve their own PRs. For example, review and approval permissions for English are [assigned in an OWNERS file](https://github.com/kubernetes/website/blob/main/content/en/OWNERS) in the top subfolder for English content. Adding `OWNERS` files to subdirectories lets localization teams review and approve changes without requiring a rubber stamp approval from reviewers who may lack fluency. diff --git a/content/en/blog/_posts/2020-05-05-introducing-podtopologyspread.md b/content/en/blog/_posts/2020-05-05-introducing-podtopologyspread.md index dfed13ca43e..eab08ed299c 100644 --- a/content/en/blog/_posts/2020-05-05-introducing-podtopologyspread.md +++ b/content/en/blog/_posts/2020-05-05-introducing-podtopologyspread.md @@ -67,7 +67,7 @@ Let's see an example of a cluster to understand this API. As the feature name "PodTopologySpread" implies, the basic usage of this feature is to run your workload with an absolute even manner (maxSkew=1), or relatively even manner (maxSkew>=2). See the [official -document](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +document](/docs/concepts/scheduling-eviction/topology-spread-constraints/) for more details. 
In addition to this basic usage, there are some advanced usage examples that diff --git a/content/en/blog/_posts/2020-09-30-writing-crl-scheduler/index.md b/content/en/blog/_posts/2020-09-30-writing-crl-scheduler/index.md index 0fab185f986..dc4b97db2a9 100644 --- a/content/en/blog/_posts/2020-09-30-writing-crl-scheduler/index.md +++ b/content/en/blog/_posts/2020-09-30-writing-crl-scheduler/index.md @@ -70,7 +70,7 @@ To correct the latter issue, we now employ a "hunt and peck" approach to removin ### 1. Upgrade to kubernetes 1.18 and make use of Pod Topology Spread Constraints While this seems like it could have been the perfect solution, at the time of writing Kubernetes 1.18 was unavailable on the two most common managed Kubernetes services in public cloud, EKS and GKE. -Furthermore, [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) were still a [beta feature in 1.18](https://v1-18.docs.kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available. +Furthermore, [pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) were still a beta feature in 1.18 which meant that it [wasn't guaranteed to be available in managed clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#kubernetes_feature_choices) even when v1.18 became available. The entire endeavour was concerningly reminiscent of checking [caniuse.com](https://caniuse.com/) when Internet Explorer 8 was still around. ### 2. Deploy a statefulset _per zone_. 
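Both posts above now link to the relocated topology-spread-constraints page. The "skew" that `maxSkew` bounds can be made concrete with a small sketch (helper names are made up for illustration; this is not scheduler code):

```python
from collections import Counter

def skew(pod_zones: list, all_zones: list) -> int:
    """Skew = (max pods in any topology domain) - (min pods in any domain).
    Domains with zero matching pods still count, so seed every zone."""
    counts = Counter({z: 0 for z in all_zones})
    counts.update(pod_zones)
    return max(counts.values()) - min(counts.values())

def placement_allowed(pod_zones, all_zones, candidate_zone, max_skew=1):
    """A scheduler honoring the constraint only places a new pod where the
    resulting skew stays within maxSkew."""
    return skew(pod_zones + [candidate_zone], all_zones) <= max_skew

zones = ["zone-a", "zone-b", "zone-c"]
pods = ["zone-a", "zone-a", "zone-b"]
print(skew(pods, zones))                         # 2 - 0 = 2
print(placement_allowed(pods, zones, "zone-c"))  # evens things out -> True
print(placement_allowed(pods, zones, "zone-a"))  # worsens the skew -> False
```

With `maxSkew=1`, the imbalanced zone-a placement is rejected while zone-c is allowed, which is the "absolute even manner" the post refers to.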
diff --git a/content/en/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md b/content/en/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md index 4c969ff50c2..a1ac132c482 100644 --- a/content/en/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md +++ b/content/en/blog/_posts/2022-01-10-meet-our-contributors-APAC-India-region-01.md @@ -1,9 +1,9 @@ --- layout: blog title: "Meet Our Contributors - APAC (India region)" -date: 2022-01-10T12:00:00+0000 +date: 2022-01-10 slug: meet-our-contributors-india-ep-01 -canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/ +canonicalUrl: https://www.kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/ --- **Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde) @@ -19,7 +19,7 @@ Welcome to the first episode of the APAC edition of the "Meet Our Contributors" In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes projects in a variety of ways, as well as being the leaders or maintainers of numerous community initiatives. -💫 *Let's get started, so without further ado…* +💫 *Let's get started, so without further ado…* ## [Arsh Sharma](https://github.com/RinkiyaKeDad) @@ -39,7 +39,7 @@ To the newcomers, Arsh helps plan their early contributions sustainably. Kunal Kushwaha is a core member of the Kubernetes marketing council. 
He is also a CNCF ambassador and one of the founders of the [CNCF Students Program](https://community.cncf.io/cloud-native-students/).. He also served as a Communications role shadow during the 1.22 release cycle. -At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in. +At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in. As an open-source enthusiast, he believes that diverse participation in the community is beneficial since it introduces new perspectives and opinions and respect for one's peers. He has worked on various open-source projects, and his participation in communities has considerably assisted his development as a developer. @@ -103,4 +103,3 @@ If you have any recommendations/suggestions for who we should interview next, pl We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋 - diff --git a/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md index 74e61b117da..4950f880973 100644 --- a/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md +++ b/content/en/blog/_posts/2022-02-17-updated-dockershim-faq.md @@ -86,7 +86,7 @@ in Kubernetes 1.24. If you're running Kubernetes v1.24 or later, see [Can I still use Docker Engine as my container runtime?](#can-i-still-use-docker-engine-as-my-container-runtime). 
(Remember, you can switch away from the dockershim if you're using any supported Kubernetes release; from release v1.24, you -**must** switch as Kubernetes no longer incluides the dockershim). +**must** switch as Kubernetes no longer includes the dockershim). [kubelet]: /docs/reference/command-line-tools-reference/kubelet/ diff --git a/content/en/blog/_posts/2022-03-15-meet-our-contributors-APAC-AU-NZ-region-01.md b/content/en/blog/_posts/2022-03-15-meet-our-contributors-APAC-AU-NZ-region-02.md similarity index 98% rename from content/en/blog/_posts/2022-03-15-meet-our-contributors-APAC-AU-NZ-region-01.md rename to content/en/blog/_posts/2022-03-15-meet-our-contributors-APAC-AU-NZ-region-02.md index 5a8a4a2989b..e8d9cfdf89d 100644 --- a/content/en/blog/_posts/2022-03-15-meet-our-contributors-APAC-AU-NZ-region-01.md +++ b/content/en/blog/_posts/2022-03-15-meet-our-contributors-APAC-AU-NZ-region-02.md @@ -1,7 +1,7 @@ --- layout: blog title: "Meet Our Contributors - APAC (Aus-NZ region)" -date: 2022-03-16T12:00:00+0000 +date: 2022-03-16 slug: meet-our-contributors-au-nz-ep-02 canonicalUrl: https://www.kubernetes.dev/blog/2022/03/14/meet-our-contributors-au-nz-ep-02/ --- @@ -60,19 +60,13 @@ Nick Young works at VMware as a technical lead for Contour, a CNCF ingress contr His contribution path was notable in that he began working on major areas of the Kubernetes project early on, skewing his trajectory. -He asserts the best thing a new contributor can do is to "start contributing". Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions. +He asserts the best thing a new contributor can do is to "start contributing". 
Naturally, if it is relevant to their employment, that is excellent; however, investing non-work time in contributing can pay off in the long run in terms of work. He believes that new contributors, particularly those who are currently Kubernetes users, should be encouraged to participate in higher-level project discussions. > _Just being active and contributing will get you a long way. Once you've been active for a while, you'll find that you're able to answer questions, which will mean you're asked questions, and before you know it you are an expert._ - - - --- If you have any recommendations/suggestions for who we should interview next, please let us know in #sig-contribex. Your suggestions would be much appreciated. We're thrilled to have additional folks assisting us in reaching out to even more wonderful individuals of the community. We'll see you all in the next one. Everyone, till then, have a happy contributing! 👋 - - - diff --git a/content/en/blog/_posts/2022-07-13-gateway-api-in-beta.md b/content/en/blog/_posts/2022-07-13-gateway-api-in-beta.md new file mode 100644 index 00000000000..54457cd3366 --- /dev/null +++ b/content/en/blog/_posts/2022-07-13-gateway-api-in-beta.md @@ -0,0 +1,178 @@ +--- +layout: blog +title: Kubernetes Gateway API Graduates to Beta +date: 2022-07-13 +slug: gateway-api-graduates-to-beta +canonicalUrl: https://gateway-api.sigs.k8s.io/blog/2022/graduating-to-beta/ +--- + +**Authors:** Shane Utt (Kong), Rob Scott (Google), Nick Young (VMware), Jeff Apple (HashiCorp) + +We are excited to announce the v0.5.0 release of Gateway API. For the first +time, several of our most important Gateway API resources are graduating to +beta. Additionally, we are starting a new initiative to explore how Gateway API +can be used for mesh and introducing new experimental concepts such as URL +rewrites. We'll cover all of this and more below. + +## What is Gateway API? 
+ +Gateway API is a collection of resources centered around [Gateway][gw] resources +(which represent the underlying network gateways / proxy servers) to enable +robust Kubernetes service networking through expressive, extensible and +role-oriented interfaces that are implemented by many vendors and have broad +industry support. + +Originally conceived as a successor to the well known [Ingress][ing] API, the +benefits of Gateway API include (but are not limited to) explicit support for +many commonly used networking protocols (e.g. `HTTP`, `TLS`, `TCP`, `UDP`) as +well as tightly integrated support for Transport Layer Security (TLS). The +`Gateway` resource in particular enables implementations to manage the lifecycle +of network gateways as a Kubernetes API. + +If you're an end-user interested in some of the benefits of Gateway API we +invite you to jump in and find an implementation that suits you. At the time of +this release there are over a dozen [implementations][impl] for popular API +gateways and service meshes and guides are available to start exploring quickly. + +[gw]:https://gateway-api.sigs.k8s.io/api-types/gateway/ +[ing]:https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/ +[impl]:https://gateway-api.sigs.k8s.io/implementations/ + +### Getting started + +Gateway API is an official Kubernetes API like +[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/). +Gateway API represents a superset of Ingress functionality, enabling more +advanced concepts. Similar to Ingress, there is no default implementation of +Gateway API built into Kubernetes. Instead, there are many different +[implementations][impl] available, providing significant choice in terms of underlying +technologies while providing a consistent and portable experience. + +Take a look at the [API concepts documentation][concepts] and check out some of +the [Guides][guides] to start familiarizing yourself with the APIs and how they +work. 
When you're ready for a practical application open the [implementations +page][impl] and select an implementation that belongs to an existing technology +you may already be familiar with or the one your cluster provider uses as a +default (if applicable). Gateway API is a [Custom Resource Definition +(CRD)][crd] based API so you'll need to [install the CRDs][install-crds] onto a +cluster to use the API. + +If you're specifically interested in helping to contribute to Gateway API, we +would love to have you! Please feel free to [open a new issue][issue] on the +repository, or join in the [discussions][disc]. Also check out the [community +page][community] which includes links to the Slack channel and community meetings. + +[crd]:https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/ +[concepts]:https://gateway-api.sigs.k8s.io/concepts/api-overview/ +[guides]:https://gateway-api.sigs.k8s.io/guides/getting-started/ +[impl]:https://gateway-api.sigs.k8s.io/implementations/ +[install-crds]:https://gateway-api.sigs.k8s.io/guides/getting-started/#install-the-crds +[issue]:https://github.com/kubernetes-sigs/gateway-api/issues/new/choose +[disc]:https://github.com/kubernetes-sigs/gateway-api/discussions +[community]:https://gateway-api.sigs.k8s.io/contributing/community/ + +## Release highlights + +### Graduation to beta + +The `v0.5.0` release is particularly historic because it marks the growth in +maturity to a beta API version (`v1beta1`) release for some of the key APIs: + +- [GatewayClass](https://gateway-api.sigs.k8s.io/api-types/gatewayclass/) +- [Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) +- [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) + +This achievement was marked by the completion of several graduation criteria: + +- API has been [widely implemented][impl]. +- Conformance tests provide basic coverage for all resources and have multiple implementations passing tests. 
+- Most of the API surface is actively being used. +- Kubernetes SIG Network API reviewers have approved graduation to beta. + +For more information on Gateway API versioning, refer to the [official +documentation](https://gateway-api.sigs.k8s.io/concepts/versioning/). To see +what's in store for future releases check out the [next steps](#next-steps) +section. + +[impl]:https://gateway-api.sigs.k8s.io/implementations/ + +### Release channels + +This release introduces the `experimental` and `standard` [release channels][ch] +which enable a better balance of maintaining stability while still enabling +experimentation and iterative development. + +The `standard` release channel includes: + +- resources that have graduated to beta +- fields that have graduated to standard (no longer considered experimental) + +The `experimental` release channel includes everything in the `standard` release +channel, plus: + +- `alpha` API resources +- fields that are considered experimental and have not graduated to `standard` channel + +Release channels are used internally to enable iterative development with +quick turnaround, and externally to indicate feature stability to implementors +and end-users. + +For this release we've added the following experimental features: + +- [Routes can attach to Gateways by specifying port numbers](https://gateway-api.sigs.k8s.io/geps/gep-957/) +- [URL rewrites and path redirects](https://gateway-api.sigs.k8s.io/geps/gep-726/) + +[ch]:https://gateway-api.sigs.k8s.io/concepts/versioning/#release-channels-eg-experimental-standard + +### Other improvements + +For an exhaustive list of changes included in the `v0.5.0` release, please see +the [v0.5.0 release notes](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.5.0). + +## Gateway API for service mesh: the GAMMA Initiative +Some service mesh projects have [already implemented support for the Gateway +API](https://gateway-api.sigs.k8s.io/implementations/). 
Significant overlap +between the Service Mesh Interface (SMI) APIs and the Gateway API has [inspired +discussion in the SMI +community](https://github.com/servicemeshinterface/smi-spec/issues/249) about +possible integration. + +We are pleased to announce that the service mesh community, including +representatives from Cilium Service Mesh, Consul, Istio, Kuma, Linkerd, NGINX +Service Mesh and Open Service Mesh, is coming together to form the [GAMMA +Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/), a dedicated +workstream within the Gateway API subproject focused on Gateway API for Mesh +Management and Administration. + +This group will deliver [enhancement +proposals](https://gateway-api.sigs.k8s.io/v1beta1/contributing/gep/) consisting +of resources, additions, and modifications to the Gateway API specification for +mesh and mesh-adjacent use-cases. + +This work has begun with [an exploration of using Gateway API for +service-to-service +traffic](https://docs.google.com/document/d/1T_DtMQoq2tccLAtJTpo3c0ohjm25vRS35MsestSL9QU/edit#heading=h.jt37re3yi6k5) +and will continue with enhancement in areas such as authentication and +authorization policy. + +## Next steps + +As we continue to mature the API for production use cases, here are some of the highlights of what we'll be working on for the next Gateway API releases: + +- [GRPCRoute][gep1016] for [gRPC][grpc] traffic routing +- [Route delegation][pr1085] +- Layer 4 API maturity: Graduating [TCPRoute][tcpr], [UDPRoute][udpr] and + [TLSRoute][tlsr] to beta +- [GAMMA Initiative](https://gateway-api.sigs.k8s.io/contributing/gamma/) - Gateway API for Service Mesh + +If there's something on this list you want to get involved in, or there's +something not on this list that you want to advocate for to get on the roadmap +please join us in the #sig-network-gateway-api channel on Kubernetes Slack or our weekly [community calls](https://gateway-api.sigs.k8s.io/contributing/community/#meetings). 
+ +[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/master/site-src/geps/gep-1016.md +[grpc]:https://grpc.io/ +[pr1085]:https://github.com/kubernetes-sigs/gateway-api/pull/1085 +[tcpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tcproute_types.go +[udpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/udproute_types.go +[tlsr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tlsroute_types.go +[community]:https://gateway-api.sigs.k8s.io/contributing/community/ diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index df384800e96..785040cda31 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -33,7 +33,7 @@ are allowed. Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the API server along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See -[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) +[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. Pods that wish to connect to the API server can do so securely by leveraging a service account so diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md index 2321fc6474f..f57179df8ea 100644 --- a/content/en/docs/concepts/architecture/nodes.md +++ b/content/en/docs/concepts/architecture/nodes.md @@ -479,29 +479,24 @@ these pods will be stuck in terminating status on the shutdown node forever. 
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
or `NoSchedule` effect to a Node marking it out-of-service.
-If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/
-command-line-tools-reference/feature-gates/) is enabled on
-`kube-controller-manager`, and a Node is marked out-of-service with this taint, the
-pods on the node will be forcefully deleted if there are no matching tolerations on
-it and volume detach operations for the pods terminating on the node will happen
-immediately. This allows the Pods on the out-of-service node to recover quickly on a
-different node.
+If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the
+pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
+detach operations for the pods terminating on the node will happen immediately. This allows the
+Pods on the out-of-service node to recover quickly on a different node.

During a non-graceful shutdown, Pods are terminated in two phases:

1. Force delete the Pods that do not have matching `out-of-service` tolerations.
2. Immediately perform detach volume operation for such pods.

- {{< note >}}
- Before adding the taint `node.kubernetes.io/out-of-service`, it should be verified
-that the node is already in shutdown or power off state (not in the middle of
-restarting).
+ that the node is already in shutdown or power off state (not in the middle of
+ restarting).

- The user is required to manually remove the out-of-service taint after the pods are
-moved to a new node and the user has checked that the shutdown node has been
-recovered since the user was the one who originally added the taint.
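As a side note, the out-of-service taint described above is normally applied with `kubectl taint`. A sketch of the command (the node name `worker-1` is a placeholder, and the taint value is illustrative; only the key and effect are significant here):

```bash
# Hypothetical node name; first verify the node is genuinely powered off,
# not merely restarting.
NODE="worker-1"
TAINT="node.kubernetes.io/out-of-service=nodeshutdown:NoExecute"

# The real call would be:
#   kubectl taint nodes "$NODE" "$TAINT"
# Shown here as the composed command line:
echo "kubectl taint nodes $NODE $TAINT"
```

Remember to remove the taint again (`kubectl taint nodes "$NODE" node.kubernetes.io/out-of-service-`) once the pods have moved and the node has recovered.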
- - + moved to a new node and the user has checked that the shutdown node has been + recovered since the user was the one who originally added the taint. {{< /note >}} ### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown} diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md index ace5297b330..d5d6a273e23 100644 --- a/content/en/docs/concepts/cluster-administration/_index.md +++ b/content/en/docs/concepts/cluster-administration/_index.md @@ -11,31 +11,37 @@ no_list: true --- + The cluster administration overview is for anyone creating or administering a Kubernetes cluster. It assumes some familiarity with core Kubernetes [concepts](/docs/concepts/). - + ## Planning a cluster -See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure Kubernetes clusters. The solutions listed in this article are called *distros*. +See the guides in [Setup](/docs/setup/) for examples of how to plan, set up, and configure +Kubernetes clusters. The solutions listed in this article are called *distros*. - {{< note >}} - Not all distros are actively maintained. Choose distros which have been tested with a recent version of Kubernetes. - {{< /note >}} +{{< note >}} +Not all distros are actively maintained. Choose distros which have been tested with a recent +version of Kubernetes. +{{< /note >}} Before choosing a guide, here are some considerations: - - Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, multi-node cluster? Choose distros best suited for your needs. - - Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**? - - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters. 
- - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. - - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? - - Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the - latter, choose an actively-developed distro. Some distros only use binary releases, but - offer a greater variety of choices. - - Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster. - +- Do you want to try out Kubernetes on your computer, or do you want to build a high-availability, + multi-node cluster? Choose distros best suited for your needs. +- Will you be using **a hosted Kubernetes cluster**, such as + [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**? +- Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly + support hybrid clusters. Instead, you can set up multiple clusters. +- **If you are configuring Kubernetes on-premises**, consider which + [networking model](/docs/concepts/cluster-administration/networking/) fits best. +- Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? +- Do you **want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? + If the latter, choose an actively-developed distro. Some distros only use binary releases, but + offer a greater variety of choices. +- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster. ## Managing a cluster @@ -45,29 +51,43 @@ Before choosing a guide, here are some considerations: ## Securing a cluster -* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to generate certificates using different tool chains. 
+* [Generate Certificates](/docs/tasks/administer-cluster/certificates/) describes the steps to
+  generate certificates using different tool chains.

-* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes the environment for Kubelet managed containers on a Kubernetes node.
+* [Kubernetes Container Environment](/docs/concepts/containers/container-environment/) describes
+  the environment for Kubelet managed containers on a Kubernetes node.

-* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes how Kubernetes implements access control for its own API.
+* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access) describes
+  how Kubernetes implements access control for its own API.

-* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in Kubernetes, including the various authentication options.
+* [Authenticating](/docs/reference/access-authn-authz/authentication/) explains authentication in
+  Kubernetes, including the various authentication options.

-* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from authentication, and controls how HTTP calls are handled.
+* [Authorization](/docs/reference/access-authn-authz/authorization/) is separate from
+  authentication, and controls how HTTP calls are handled.

-* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins which intercepts requests to the Kubernetes API server after authentication and authorization.
+* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
+  explains plug-ins which intercept requests to the Kubernetes API server after authentication
+  and authorization.
-* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters .
+* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/)
+  describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.

-* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes' audit logs.
+* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes'
+  audit logs.

### Securing the kubelet

-  * [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
-  * [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
-  * [Kubelet authentication/authorization](/docs/reference/acess-authn-authz/kubelet-authn-authz/)
+
+* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
+* [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
+* [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)

## Optional Cluster Services

-* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
+* [DNS Integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve
+  a DNS name directly to a Kubernetes service.
+
+* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/)
+  explains how logging in Kubernetes works and how to implement it.

-* [Logging and Monitoring Cluster Activity](/docs/concepts/cluster-administration/logging/) explains how logging in Kubernetes works and how to implement it.
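A small aside on the sysctl entry above: sysctl parameter names are dot-separated, and each maps to a path under `/proc/sys` with the dots replaced by slashes. A sketch of that mapping (the parameter name is just an example):

```bash
# sysctl parameter names use dots; the kernel exposes the same parameter
# as a file under /proc/sys with slashes instead.
name="net.ipv4.ip_forward"
path="/proc/sys/$(echo "$name" | tr . /)"
echo "$path"   # /proc/sys/net/ipv4/ip_forward
```

Reading that file (or running `sysctl net.ipv4.ip_forward`) returns the current value; on a node, the kubelet applies pod-level sysctls through the same interface.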
diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index 2cb2ee1b7f2..9428da09e31 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -332,7 +332,7 @@ container of a Pod can specify either or both of the following: Limits and requests for `ephemeral-storage` are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: -E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, +E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following quantities all represent roughly the same value: - `128974848` @@ -340,6 +340,10 @@ Mi, Ki. For example, the following quantities all represent roughly the same val - `129M` - `123Mi` +Pay attention to the case of the suffixes. If you request `400m` of ephemeral-storage, this is a request +for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`) +or 400 megabytes (`400M`). + In the following example, the Pod has two containers. Each container has a request of 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and @@ -799,7 +803,7 @@ memory limit (and possibly request) for that container. * Get hands-on experience [assigning Memory resources to containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/). * Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/). 
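The suffix caution added above (`400m` versus `400Mi` for ephemeral-storage) is easy to sanity-check with shell arithmetic; a quick sketch:

```bash
# Decimal (SI) suffixes: k=10^3, M=10^6, ...
# Binary suffixes:       Ki=2^10, Mi=2^20, ...
M=$((10**6))
Mi=$((2**20))

echo $((129 * M))    # 129M  -> 129000000 bytes
echo $((123 * Mi))   # 123Mi -> 128974848 bytes (roughly the same value)
echo $((400 * Mi))   # 400Mi -> 419430400 bytes

# "400m", by contrast, uses the milli- suffix: 0.4 of a byte,
# almost certainly not what was intended.
```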
* Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container) - and its [resource requirements](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources) + and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources) * Read about [project quotas](https://xfs.org/index.php/XFS_FAQ#Q:_Quota:_Do_quotas_work_on_XFS.3F) in XFS * Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md index a4dd5c59014..5dc5a64826b 100644 --- a/content/en/docs/concepts/configuration/overview.md +++ b/content/en/docs/concepts/configuration/overview.md @@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN ## Using Labels -- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach. +- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app.kubernetes.io/name: MyApp`. 
See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach. A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/). diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index 6366ee05519..cd8f74aa292 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -116,7 +116,7 @@ Runtime handlers are configured through containerd's configuration at [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}] ``` -See containerd's [config documentation](https://github.com/containerd/cri/blob/master/docs/config.md) +See containerd's [config documentation](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) for more details: #### {{< glossary_tooltip term_id="cri-o" >}} diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index bcfd32ef7d7..8e4214991c7 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -8,21 +8,29 @@ card: --- -This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in `.yaml` format. - +This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can +express them in `.yaml` format. ## Understanding Kubernetes objects {#kubernetes-objects} -*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. 
Specifically, they can describe: +*Kubernetes objects* are persistent entities in the Kubernetes system. Kubernetes uses these +entities to represent the state of your cluster. Specifically, they can describe: * What containerized applications are running (and on which nodes) * The resources available to those applications * The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance -A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's *desired state*. +A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system +will constantly work to ensure that object exists. By creating an object, you're effectively +telling the Kubernetes system what you want your cluster's workload to look like; this is your +cluster's *desired state*. -To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use the Kubernetes API directly in your own programs using one of the [Client Libraries](/docs/reference/using-api/client-libraries/). +To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the +[Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line +interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use +the Kubernetes API directly in your own programs using one of the +[Client Libraries](/docs/reference/using-api/client-libraries/). ### Object Spec and Status @@ -48,11 +56,17 @@ the status to match your spec. 
If any of those instances should fail between spec and status by making a correction--in this case, starting a replacement instance. -For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). +For more information on the object spec, status, and metadata, see the +[Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). ### Describing a Kubernetes object -When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via `kubectl`), that API request must include that information as JSON in the request body. **Most often, you provide the information to `kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API request. +When you create an object in Kubernetes, you must provide the object spec that describes its +desired state, as well as some basic information about the object (such as a name). When you use +the Kubernetes API to create the object (either directly or via `kubectl`), that API request must +include that information as JSON in the request body. **Most often, you provide the information to +`kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API +request. 
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment: @@ -81,7 +95,9 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to * `metadata` - Data that helps uniquely identify the object, including a `name` string, `UID`, and optional `namespace` * `spec` - What state you desire for the object -The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/) can help you find the spec format for all of the objects you can create using Kubernetes. +The precise format of the object `spec` is different for every Kubernetes object, and contains +nested fields specific to that object. The [Kubernetes API Reference](/docs/reference/kubernetes-api/) +can help you find the spec format for all of the objects you can create using Kubernetes. For example, see the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) for the Pod API reference. @@ -103,5 +119,3 @@ detail the structure of that `.status` field, and its content for each different * Learn about [controllers](/docs/concepts/architecture/controller/) in Kubernetes. * [Using the Kubernetes API](/docs/reference/using-api/) explains some more API concepts. 
- - diff --git a/content/en/docs/concepts/overview/working-with-objects/object-management.md b/content/en/docs/concepts/overview/working-with-objects/object-management.md index b85c6228231..10b6dacf856 100644 --- a/content/en/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/en/docs/concepts/overview/working-with-objects/object-management.md @@ -169,9 +169,9 @@ Disadvantages compared to imperative object configuration: ## {{% heading "whatsnext" %}} - [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/) -- [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/) -- [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/) -- [Managing Kubernetes Objects Using Kustomize (Declarative)](/docs/tasks/manage-kubernetes-objects/kustomization/) +- [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/) +- [Declarative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/declarative-config/) +- [Declarative Management of Kubernetes Objects Using Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/) - [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/) - [Kubectl Book](https://kubectl.docs.kubernetes.io) - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) diff --git a/content/en/docs/concepts/scheduling-eviction/_index.md b/content/en/docs/concepts/scheduling-eviction/_index.md index 21e9371f037..fd1c0bbf00b 100644 --- a/content/en/docs/concepts/scheduling-eviction/_index.md +++ b/content/en/docs/concepts/scheduling-eviction/_index.md @@ -23,6 +23,7 @@ of terminating one or more Pods on Nodes. 
* [Kubernetes Scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) * [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/) * [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) +* [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) * [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) * [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework) * [Scheduler Performance Tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index db9f1d900d6..9cd70f9e976 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -11,24 +11,27 @@ weight: 20 -You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of -{{< glossary_tooltip text="node(s)" term_id="node" >}}. +You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it is +_restricted_ to run on particular {{< glossary_tooltip text="node(s)" term_id="node" >}}, +or to _prefer_ to run on particular nodes. There are several ways to do this and the recommended approaches all use [label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection. -Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement +Often, you do not need to set any such constraints; the +{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} will automatically do a reasonable placement (for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources). 
However, there are some circumstances where you may want to control which node -the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different -services that communicate a lot into the same availability zone. +the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, +or to co-locate Pods from two different services that communicate a lot into the same availability zone. You can use any of the following methods to choose where Kubernetes schedules -specific Pods: +specific Pods: * [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels) * [Affinity and anti-affinity](#affinity-and-anti-affinity) * [nodeName](#nodename) field + * [Pod topology spread constraints](#pod-topology-spread-constraints) ## Node labels {#built-in-node-labels} @@ -170,7 +173,7 @@ For example, consider the following Pod spec: {{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}} If there are two possible nodes that match the -`requiredDuringSchedulingIgnoredDuringExecution` rule, one with the +`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the `label-1:key-1` label and another with the `label-2:key-2` label, the scheduler considers the `weight` of each node and adds the weight to the other scores for that node, and schedules the Pod onto the node with the highest final score. @@ -337,13 +340,15 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi Inter-pod affinity and anti-affinity can be even more useful when they are used with higher level collections such as ReplicaSets, StatefulSets, Deployments, etc. These rules allow you to configure that a set of workloads should -be co-located in the same defined topology, eg., the same node. +be co-located in the same defined topology; for example, preferring to place two related +Pods onto the same node. 
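To make the `preferredDuringSchedulingIgnoredDuringExecution` weighting described above concrete, here is a toy calculation (the base score of 75 and the weights 1 and 50 are invented for illustration; real scores come from the scheduler's scoring plugins):

```bash
# Assume both candidate nodes score 75 from the scheduler's other
# scoring plugins (made-up value).
base=75

node1=$((base + 1))    # matches the preference with weight 1  (label-1:key-1)
node2=$((base + 50))   # matches the preference with weight 50 (label-2:key-2)

# The Pod is scheduled onto the node with the highest final score.
echo "node1=$node1 node2=$node2"
```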
-Take, for example, a three-node cluster running a web application with an -in-memory cache like redis. You could use inter-pod affinity and anti-affinity -to co-locate the web servers with the cache as much as possible. +For example: imagine a three-node cluster. You use the cluster to run a web application +and also an in-memory cache (such as Redis). For this example, also assume that latency between +the web application and the memory cache should be as low as is practical. You could use inter-pod +affinity and anti-affinity to co-locate the web servers with the cache as much as possible. -In the following example Deployment for the redis cache, the replicas get the label `app=store`. The +In the following example Deployment for the Redis cache, the replicas get the label `app=store`. The `podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas with the `app=store` label on a single node. This creates each cache in a separate node. @@ -378,10 +383,10 @@ spec: image: redis:3.2-alpine ``` -The following Deployment for the web servers creates replicas with the label `app=web-store`. The -Pod affinity rule tells the scheduler to place each replica on a node that has a -Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler -to avoid placing multiple `app=web-store` servers on a single node. +The following example Deployment for the web servers creates replicas with the label `app=web-store`. +The Pod affinity rule tells the scheduler to place each replica on a node that has a Pod +with the label `app=store`. The Pod anti-affinity rule tells the scheduler never to place +multiple `app=web-store` servers on a single node. ```yaml apiVersion: apps/v1 @@ -430,6 +435,10 @@ where each web server is co-located with a cache, on three separate nodes. 
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1* | *cache-2* | *cache-3* |

+The overall effect is that each cache instance is likely to be accessed by a single client that
+is running on the same node. This approach aims to minimize both skew (imbalanced load) and latency.
+
+You might have other reasons to use Pod anti-affinity.
See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
for an example of a StatefulSet configured with anti-affinity for high availability, using the
same technique as this example.
@@ -468,6 +477,16 @@ spec:

The above Pod will only run on the node `kube-01`.

+## Pod topology spread constraints
+
+You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}}
+are spread across your cluster among failure-domains such as regions, zones, nodes, or among any other
+topology domains that you define. You might do this to improve performance, expected availability, or
+overall utilization.
+
+Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+to learn more about how these work.
+
## {{% heading "whatsnext" %}}

* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) .
diff --git a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md index df688ded9a5..c27013f6b7a 100644 --- a/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md +++ b/content/en/docs/concepts/scheduling-eviction/kube-scheduler.md @@ -83,7 +83,7 @@ of the scheduler: ## {{% heading "whatsnext" %}} * Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/) -* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) * Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler * Read the [kube-scheduler config (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) reference * Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/) diff --git a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md index 6142c050c6b..244298d1504 100644 --- a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md +++ b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md @@ -66,8 +66,8 @@ the signal. The value for `memory.available` is derived from the cgroupfs instead of tools like `free -m`. 
This is important because `free -m` does not work in a -container, and if users use the [node -allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) feature, out of resource decisions +container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) +feature, out of resource decisions are made local to the end user Pod part of the cgroup hierarchy as well as the root node. This [script](/examples/admin/resource/memory-available.sh) reproduces the same set of steps that the kubelet performs to calculate @@ -85,10 +85,15 @@ The kubelet supports the following filesystem partitions: Kubelet auto-discovers these filesystems and ignores other filesystems. Kubelet does not support other configurations. -{{}} -Some kubelet garbage collection features are deprecated in favor of eviction. -For a list of the deprecated features, see [kubelet garbage collection deprecation](/docs/concepts/cluster-administration/kubelet-garbage-collection/#deprecation). 
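The `memory.available` calculation described above is plain arithmetic over three inputs. The sketch below mirrors the logic of the linked script — it is an illustration only, not the kubelet's actual code; on a real node, `capacity` comes from `/proc/meminfo` and the usage figures from the cgroupfs:

```shell
# memory.available = capacity - working_set, where
# working_set = cgroup memory usage - inactive_file (reclaimable page cache).
memory_available() {
  capacity=$1
  usage=$2
  inactive_file=$3
  working_set=$(( usage - inactive_file ))
  echo $(( capacity - working_set ))
}

# Example in bytes: 8 GiB capacity, 5 GiB cgroup usage, 2 GiB inactive file cache.
memory_available 8589934592 5368709120 2147483648   # prints 5368709120 (5 GiB)
```

Note how the inactive file cache is subtracted back out: memory that the kernel can reclaim on demand does not count against the node.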
-{{}} +Some kubelet garbage collection features are deprecated in favor of eviction: + +| Existing Flag | New Flag | Rationale | +| ------------- | -------- | --------- | +| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection | +| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior | +| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context | +| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context | +| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context | ### Eviction thresholds @@ -211,7 +216,7 @@ the kubelet frees up disk space in the following order: If the kubelet's attempts to reclaim node-level resources don't bring the eviction signal below the threshold, the kubelet begins to evict end-user pods. -The kubelet uses the following parameters to determine pod eviction order: +The kubelet uses the following parameters to determine the pod eviction order: 1. Whether the pod's resource usage exceeds requests 1. [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/) @@ -314,7 +319,7 @@ The kubelet sets an `oom_score_adj` value for each container based on the QoS fo {{}} The kubelet also sets an `oom_score_adj` value of `-997` for containers in Pods that have -`system-node-critical` {{}} +`system-node-critical` {{}}. {{}} If the kubelet can't reclaim memory before a node experiences OOM, the @@ -396,7 +401,7 @@ counted as `active_file`. If enough of these kernel block buffers are on the active LRU list, the kubelet is liable to observe this as high resource use and taint the node as experiencing memory pressure - triggering pod eviction. 
-For more more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916) +For more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916) You can work around that behavior by setting the memory limit and memory request the same for containers likely to perform intensive I/O activity. You will need diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index ff7718a1f3f..82e812c53b9 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -15,14 +15,15 @@ is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attrac a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods. -_Tolerations_ are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also [evaluates other parameters](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/) as part of its function. +_Tolerations_ are applied to pods. Tolerations allow the scheduler to schedule pods with matching +taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also +[evaluates other parameters](/docs/concepts/scheduling-eviction/pod-priority-preemption/) +as part of its function. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. 
- - ## Concepts @@ -266,7 +267,8 @@ This ensures that DaemonSet pods are never evicted due to these problems. ## Taint Nodes by Condition The control plane, using the node {{}}, -automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions). +automatically creates taints with a `NoSchedule` effect for +[node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions). The scheduler checks taints, not node conditions, when it makes scheduling decisions. This ensures that node conditions don't directly affect scheduling. @@ -297,7 +299,7 @@ arbitrary tolerations to DaemonSets. ## {{% heading "whatsnext" %}} -* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) and how you can configure it +* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) + and how you can configure it * Read about [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/) - diff --git a/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md new file mode 100644 index 00000000000..77f4d1ea553 --- /dev/null +++ b/content/en/docs/concepts/scheduling-eviction/topology-spread-constraints.md @@ -0,0 +1,570 @@ +--- +title: Pod Topology Spread Constraints +content_type: concept +weight: 40 +--- + + + + +You can use _topology spread constraints_ to control how +{{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster +among failure-domains such as regions, zones, nodes, and other user-defined topology +domains. This can help to achieve high availability as well as efficient resource +utilization. + +You can set [cluster-level constraints](#cluster-level-default-constraints) as a default, +or configure topology spread constraints for individual workloads. 
+
+
+
+## Motivation
+
+Imagine that you have a cluster of up to twenty nodes, and you want to run a
+{{< glossary_tooltip text="workload" term_id="workload" >}}
+that automatically scales how many replicas it uses. There could be as few as
+two Pods or as many as fifteen.
+When there are only two Pods, you'd prefer not to have both of those Pods run on the
+same node: you would run the risk that a single node failure takes your workload
+offline.
+
+In addition to this basic usage, there are some advanced usage examples that
+enable your workloads to benefit from high availability and efficient cluster utilization.
+
+As you scale up and run more Pods, a different concern becomes important. Imagine
+that you have three nodes running five Pods each. The nodes have enough capacity
+to run that many replicas; however, the clients that interact with this workload
+are split across three different datacenters (or infrastructure zones). Now you
+have less concern about a single node failure, but you notice that latency is
+higher than you'd like, and you are paying for network costs associated with
+sending network traffic between the different zones.
+
+You decide that under normal operation you'd prefer to have a similar number of replicas
+[scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone,
+and you'd like the cluster to self-heal in the case that there is a problem.
+
+Pod topology spread constraints offer you a declarative way to configure that.
+
+
+## `topologySpreadConstraints` field
+
+The Pod API includes a field, `spec.topologySpreadConstraints`.
Here is an example:
+
+```yaml
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-pod
+spec:
+  # Configure a topology spread constraint
+  topologySpreadConstraints:
+    - maxSkew: <integer>
+      minDomains: <integer> # optional; alpha since v1.24
+      topologyKey: <string>
+      whenUnsatisfiable: <string>
+      labelSelector: <object>
+  ### other Pod fields go here
+```
+
+You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
+
+### Spread constraint definition
+
+You can define one or multiple `topologySpreadConstraints` entries to instruct the
+kube-scheduler how to place each incoming Pod in relation to the existing Pods across
+your cluster. Those fields are:
+
+- **maxSkew** describes the degree to which Pods may be unevenly distributed. You must
+  specify this field and the number must be greater than zero. Its semantics differ
+  according to the value of `whenUnsatisfiable`:
+
+  - if you select `whenUnsatisfiable: DoNotSchedule`, then `maxSkew` defines the
+    maximum permitted difference between the number of matching pods in the target
+    topology and the _global minimum_
+    (the minimum number of pods that match the label selector in a topology domain).
+    For example, if you have 3 zones with 2, 4 and 5 matching pods respectively,
+    then the global minimum is 2 and `maxSkew` is compared relative to that number.
+  - if you select `whenUnsatisfiable: ScheduleAnyway`, the scheduler gives higher
+    precedence to topologies that would help reduce the skew.
+
+- **minDomains** indicates a minimum number of eligible domains. This field is optional.
+  A domain is a particular instance of a topology. An eligible domain is a domain whose
+  nodes match the node selector.
+
+  {{< note >}}
+  The `minDomains` field is an alpha field added in 1.24. You have to enable the
+  `MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+  in order to use it.
+  {{< /note >}}
+
+  - The value of `minDomains` must be greater than 0, when specified.
+    You can only specify `minDomains` in conjunction with `whenUnsatisfiable: DoNotSchedule`.
+  - When the number of eligible domains with matching topology keys is less than `minDomains`,
+    Pod topology spread treats the global minimum as 0, and then the calculation of `skew` is performed.
+    The global minimum is the minimum number of matching Pods in an eligible domain,
+    or zero if the number of eligible domains is less than `minDomains`.
+  - When the number of eligible domains with matching topology keys equals or is greater than
+    `minDomains`, this value has no effect on scheduling.
+  - If you do not specify `minDomains`, the constraint behaves as if `minDomains` is 1.
+
+- **topologyKey** is the key of [node labels](#node-labels). If two Nodes are labelled
+  with this key and have identical values for that label, the scheduler treats both
+  Nodes as being in the same topology. The scheduler tries to place a balanced number
+  of Pods into each topology domain.
+
+- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
+  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
+  - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
+
+- **labelSelector** is used to find matching Pods. Pods
+  that match this label selector are counted to determine the
+  number of Pods in their corresponding topology domain.
+  See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
+  for more details.
+
+When a Pod defines more than one `topologySpreadConstraint`, those constraints are
+combined using a logical AND operation: the kube-scheduler looks for a node for the incoming Pod
+that satisfies all the configured constraints.
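Putting these fields together, a filled-in constraint might look like the sketch below. The label `app: foo`, the container name, and the image are illustrative placeholders, not values from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: foo            # placeholder label, referenced by the selector below
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                # tolerate a count difference of at most 1
      topologyKey: topology.kubernetes.io/zone  # balance across zones
      whenUnsatisfiable: DoNotSchedule          # keep the Pod pending rather than violate the skew
      labelSelector:
        matchLabels:
          app: foo      # count Pods carrying this label in each zone
  containers:
    - name: app
      image: example.com/app:1.0    # placeholder image
```

Because the constraint counts the Pods matched by `labelSelector`, the Pod's own labels should normally match that selector, as they do here.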
+ +### Node labels + +Topology spread constraints rely on node labels to identify the topology +domain(s) that each {{< glossary_tooltip text="node" term_id="node" >}} is in. +For example, a node might have labels: +```yaml + region: us-east-1 + zone: us-east-1a +``` + +{{< note >}} +For brevity, this example doesn't use the +[well-known](/docs/reference/labels-annotations-taints/) label keys +`topology.kubernetes.io/zone` and `topology.kubernetes.io/region`. However, +those registered label keys are nonetheless recommended rather than the private +(unqualified) label keys `region` and `zone` that are used here. + +You can't make a reliable assumption about the meaning of a private label key +between different contexts. +{{< /note >}} + + +Suppose you have a 4-node cluster with the following labels: + +``` +NAME STATUS ROLES AGE VERSION LABELS +node1 Ready 4m26s v1.16.0 node=node1,zone=zoneA +node2 Ready 3m58s v1.16.0 node=node2,zone=zoneA +node3 Ready 3m17s v1.16.0 node=node3,zone=zoneB +node4 Ready 2m43s v1.16.0 node=node4,zone=zoneB +``` + +Then the cluster is logically viewed as below: + +{{}} +graph TB + subgraph "zoneB" + n3(Node3) + n4(Node4) + end + subgraph "zoneA" + n1(Node1) + n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +## Consistency + +You should set the same Pod topology spread constraints on all pods in a group. + +Usually, if you are using a workload controller such as a Deployment, the pod template +takes care of this for you. If you mix different spread constraints then Kubernetes +follows the API definition of the field; however, the behavior is more likely to become +confusing and troubleshooting is less straightforward. 
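As a sketch of that pattern (the names and image are placeholders), a Deployment that declares the constraint once in its pod template gives every replica identical spread rules:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      # Every replica stamped out from this template carries the same constraint.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: example
      containers:
        - name: app
          image: example.com/app:1.0    # placeholder image
```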
+
+You need a mechanism to ensure that all the nodes in a topology domain (such as a
+cloud provider region) are labelled consistently.
+To avoid you needing to manually label nodes, most clusters automatically
+populate well-known labels such as `kubernetes.io/hostname`. Check whether
+your cluster supports this.
+
+## Topology spread constraint examples
+
+### Example: one topology spread constraint {#example-one-topologyspreadconstraint}
+
+Suppose you have a 4-node cluster where 3 Pods labelled `foo: bar` are located in
+node1, node2 and node3 respectively:
+
+{{< mermaid >}}
+graph BT
+    subgraph "zoneB"
+        p3(Pod) --> n3(Node3)
+        n4(Node4)
+    end
+    subgraph "zoneA"
+        p1(Pod) --> n1(Node1)
+        p2(Pod) --> n2(Node2)
+    end
+
+    classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
+    classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
+    classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
+    class n1,n2,n3,n4,p1,p2,p3 k8s;
+    class zoneA,zoneB cluster;
+{{< /mermaid >}}
+
+If you want an incoming Pod to be evenly spread with existing Pods across zones, you
+can use a manifest similar to:
+
+{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
+
+From that manifest, `topologyKey: zone` implies the even distribution will only be applied
+to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
+are skipped). The field `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let the
+incoming Pod stay pending if the scheduler can't find a way to satisfy the constraint.
+
+If the scheduler placed this incoming Pod into zone `A`, the distribution of Pods would
+become `[3, 1]`. That means the actual skew is then 2 (calculated as `3 - 1`), which
+violates `maxSkew: 1`.
To satisfy the constraints and context for this example, the +incoming Pod can only be placed onto a node in zone `B`: + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + p4(mypod) --> n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +OR + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + p4(mypod) --> n3 + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +You can tweak the Pod spec to meet various kinds of requirements: + +- Change `maxSkew` to a bigger value - such as `2` - so that the incoming Pod can + be placed into zone `A` as well. +- Change `topologyKey` to `node` so as to distribute the Pods evenly across nodes + instead of zones. In the above example, if `maxSkew` remains `1`, the incoming + Pod can only be placed onto the node `node4`. +- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` + to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs + are satisfied). However, it's preferred to be placed into the topology domain which + has fewer matching Pods. (Be aware that this preference is jointly normalized + with other internal scheduling priorities such as resource usage ratio). 
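The skew arithmetic from this example can be sketched as a tiny script — an illustration of the calculation only, not the scheduler's implementation:

```shell
# For two topology domains, skew = (larger matching-Pod count) - (global minimum).
# A DoNotSchedule placement is feasible only if the resulting skew <= maxSkew.
skew() {
  if [ "$1" -ge "$2" ]; then
    echo $(( $1 - $2 ))
  else
    echo $(( $2 - $1 ))
  fi
}

# Zones A and B currently hold 2 and 1 matching Pods; the constraint uses maxSkew: 1.
skew 3 1   # placing the incoming Pod in zone A gives skew 2, violating maxSkew: 1
skew 2 2   # placing it in zone B gives skew 0, which is allowed
```

With more than two domains the same idea applies: the count in each candidate domain, after the placement, is compared against the global minimum across all domains.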
+ +### Example: multiple topology spread constraints {#example-multiple-topologyspreadconstraints} + +This builds upon the previous example. Suppose you have a 4-node cluster where 3 +existing Pods labeled `foo: bar` are located on node1, node2 and node3 respectively: + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3 k8s; + class p4 plain; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +You can combine two topology spread constraints to control the spread of Pods both +by node and by zone: + +{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}} + +In this case, to match the first constraint, the incoming Pod can only be placed onto +nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be +scheduled to the node `node4`. The scheduler only considers options that satisfy all +defined constraints, so the only valid placement is onto node `node4`. + +### Example: conflicting topology spread constraints {#example-conflicting-topologyspreadconstraints} + +Multiple constraints can lead to conflicts. 
Suppose you have a 3-node cluster across 2 zones: + +{{}} +graph BT + subgraph "zoneB" + p4(Pod) --> n3(Node3) + p5(Pod) --> n3 + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n1 + p3(Pod) --> n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s; + class zoneA,zoneB cluster; +{{< /mermaid >}} + +If you were to apply +[`two-constraints.yaml`](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/topology-spread-constraints/two-constraints.yaml) +(the manifest from the previous example) +to **this** cluster, you would see that the Pod `mypod` stays in the `Pending` state. +This happens because: to satisfy the first constraint, the Pod `mypod` can only +be placed into zone `B`; while in terms of the second constraint, the Pod `mypod` +can only schedule to node `node2`. The intersection of the two constraints returns +an empty set, and the scheduler cannot place the Pod. + +To overcome this situation, you can either increase the value of `maxSkew` or modify +one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`. Depending on +circumstances, you might also decide to delete an existing Pod manually - for example, +if you are troubleshooting why a bug-fix rollout is not making progress. + +#### Interaction with node affinity and node selectors + +The scheduler will skip the non-matching nodes from the skew calculations if the +incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined. 
+ +### Example: topology spread constraints with node affinity {#example-topologyspreadconstraints-with-nodeaffinity} + +Suppose you have a 5-node cluster ranging across zones A to C: + +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end + +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class n1,n2,n3,n4,p1,p2,p3 k8s; +class p4 plain; +class zoneA,zoneB cluster; +{{< /mermaid >}} + +{{}} +graph BT + subgraph "zoneC" + n5(Node5) + end + +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class n5 k8s; +class zoneC cluster; +{{< /mermaid >}} + +and you know that zone `C` must be excluded. In this case, you can compose a manifest +as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`. +Similarly, Kubernetes also respects `spec.nodeSelector`. + +{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} + +## Implicit conventions + +There are some implicit conventions worth noting here: + +- Only the Pods holding the same namespace as the incoming Pod can be matching candidates. + +- The scheduler bypasses any nodes that don't have any `topologySpreadConstraints[*].topologyKey` + present. This implies that: + + 1. any Pods located on those bypassed nodes do not impact `maxSkew` calculation - in the + above example, suppose the node `node1` does not have a label "zone", then the 2 Pods will + be disregarded, hence the incoming Pod will be scheduled into zone `A`. + 2. 
the incoming Pod has no chances to be scheduled onto this kind of nodes - + in the above example, suppose a node `node5` has the **mistyped** label `zone-typo: zoneC` + (and no `zone` label set). After node `node5` joins the cluster, it will be bypassed and + Pods for this workload aren't scheduled there. + +- Be aware of what will happen if the incoming Pod's + `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the + above example, if you remove the incoming Pod's labels, it can still be placed onto + nodes in zone `B`, since the constraints are still satisfied. However, after that + placement, the degree of imbalance of the cluster remains unchanged - it's still zone `A` + having 2 Pods labelled as `foo: bar`, and zone `B` having 1 Pod labelled as + `foo: bar`. If this is not what you expect, update the workload's + `topologySpreadConstraints[*].labelSelector` to match the labels in the pod template. + +## Cluster-level default constraints + +It is possible to set default topology spread constraints for a cluster. Default +topology spread constraints are applied to a Pod if, and only if: + +- It doesn't define any constraints in its `.spec.topologySpreadConstraints`. +- It belongs to a Service, ReplicaSet, StatefulSet or ReplicationController. + +Default constraints can be set as part of the `PodTopologySpread` plugin +arguments in a [scheduling profile](/docs/reference/scheduling/config/#profiles). +The constraints are specified with the same [API above](#api), except that +`labelSelector` must be empty. The selectors are calculated from the Services, +ReplicaSets, StatefulSets or ReplicationControllers that the Pod belongs to. 
+
+An example configuration might look as follows:
+
+```yaml
+apiVersion: kubescheduler.config.k8s.io/v1beta3
+kind: KubeSchedulerConfiguration
+
+profiles:
+  - schedulerName: default-scheduler
+    pluginConfig:
+      - name: PodTopologySpread
+        args:
+          defaultConstraints:
+            - maxSkew: 1
+              topologyKey: topology.kubernetes.io/zone
+              whenUnsatisfiable: ScheduleAnyway
+          defaultingType: List
+```
+
+{{< note >}}
+The [`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
+is disabled by default. The Kubernetes project recommends using `PodTopologySpread`
+to achieve similar behavior.
+{{< /note >}}
+
+### Built-in default constraints {#internal-default-constraints}
+
+{{< feature-state for_k8s_version="v1.24" state="stable" >}}
+
+If you don't configure any cluster-level default constraints for pod topology spreading,
+then kube-scheduler acts as if you specified the following default topology constraints:
+
+```yaml
+defaultConstraints:
+  - maxSkew: 3
+    topologyKey: "kubernetes.io/hostname"
+    whenUnsatisfiable: ScheduleAnyway
+  - maxSkew: 5
+    topologyKey: "topology.kubernetes.io/zone"
+    whenUnsatisfiable: ScheduleAnyway
+```
+
+Also, the legacy `SelectorSpread` plugin, which provides an equivalent behavior,
+is disabled by default.
+
+{{< note >}}
+The `PodTopologySpread` plugin does not score the nodes that don't have
+the topology keys specified in the spreading constraints. This might result
+in a different default behavior compared to the legacy `SelectorSpread` plugin when
+using the default topology constraints.
+
+If your nodes are not expected to have **both** `kubernetes.io/hostname` and
+`topology.kubernetes.io/zone` labels set, define your own constraints
+instead of using the Kubernetes defaults.
+{{< /note >}} + +If you don't want to use the default Pod spreading constraints for your cluster, +you can disable those defaults by setting `defaultingType` to `List` and leaving +empty `defaultConstraints` in the `PodTopologySpread` plugin configuration: + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta3 +kind: KubeSchedulerConfiguration + +profiles: + - schedulerName: default-scheduler + pluginConfig: + - name: PodTopologySpread + args: + defaultConstraints: [] + defaultingType: List +``` + +## Comparison with podAffinity and podAntiAffinity {#comparison-with-podaffinity-podantiaffinity} + +In Kubernetes, [inter-Pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) +control how Pods are scheduled in relation to one another - either more packed +or more scattered. + +`podAffinity` +: attracts Pods; you can try to pack any number of Pods into qualifying + topology domain(s) +`podAntiAffinity` +: repels Pods. If you set this to `requiredDuringSchedulingIgnoredDuringExecution` mode then + only a single Pod can be scheduled into a single topology domain; if you choose + `preferredDuringSchedulingIgnoredDuringExecution` then you lose the ability to enforce the + constraint. + +For finer control, you can specify topology spread constraints to distribute +Pods across different topology domains - to achieve either high availability or +cost-saving. This can also help on rolling update workloads and scaling out +replicas smoothly. + +For more context, see the +[Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation) +section of the enhancement proposal about Pod topology spread constraints. + +## Known limitations + +- There's no guarantee that the constraints remain satisfied when Pods are removed. For + example, scaling down a Deployment may result in imbalanced Pods distribution. 
+
+  You can use a tool such as the [Descheduler](https://github.com/kubernetes-sigs/descheduler)
+  to rebalance the Pods distribution.
+- Pods matched on tainted nodes are respected.
+  See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
+- The scheduler doesn't have prior knowledge of all the zones or other topology
+  domains that a cluster has. They are determined from the existing nodes in the
+  cluster. This could lead to a problem in autoscaled clusters, when a node pool (or
+  node group) is scaled to zero nodes, and you're expecting the cluster to scale up,
+  because, in this case, those topology domains won't be considered until there is
+  at least one node in them.
+  You can work around this by using a cluster autoscaling tool that is aware of
+  Pod topology spread constraints and is also aware of the overall set of topology
+  domains.
+
+
+## {{% heading "whatsnext" %}}
+
+- The blog article [Introducing PodTopologySpread](/blog/2020/05/introducing-podtopologyspread/)
+  explains `maxSkew` in some detail, as well as covering some advanced usage examples.
+- Read the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of
+  the API reference for Pod.
diff --git a/content/en/docs/concepts/security/controlling-access.md b/content/en/docs/concepts/security/controlling-access.md
index 04b13a82c5f..1718aa4a54a 100644
--- a/content/en/docs/concepts/security/controlling-access.md
+++ b/content/en/docs/concepts/security/controlling-access.md
@@ -23,10 +23,11 @@ following diagram:
 
 ## Transport security
 
-In a typical Kubernetes cluster, the API serves on port 443, protected by TLS.
+By default, the Kubernetes API server listens on port 6443 on the first non-localhost network interface, protected by TLS. In a typical production Kubernetes cluster, the API serves on port 443. The port can be changed with the `--secure-port` flag, and the listening IP address with the `--bind-address` flag.
+ The API server presents a certificate. This certificate may be signed using a private certificate authority (CA), or based on a public key infrastructure linked -to a generally recognized CA. +to a generally recognized CA. The certificate and corresponding private key can be set by using the `--tls-cert-file` and `--tls-private-key-file` flags. If your cluster uses a private certificate authority, you need a copy of that CA certificate configured into your `~/.kube/config` on the client, so that you can @@ -137,34 +138,6 @@ The cluster audits the activities generated by users, by applications that use t For more information, see [Auditing](/docs/tasks/debug/debug-cluster/audit/). -## API server ports and IPs - -The previous discussion applies to requests sent to the secure port of the API server -(the typical case). The API server can actually serve on 2 ports: - -By default, the Kubernetes API server serves HTTP on 2 ports: - - 1. `localhost` port: - - - is intended for testing and bootstrap, and for other components of the master node - (scheduler, controller-manager) to talk to the API - - no TLS - - default is port 8080 - - default IP is localhost, change with `--insecure-bind-address` flag. - - request **bypasses** authentication and authorization modules. - - request handled by admission control module(s). - - protected by need to have host access - - 2. “Secure port”: - - - use whenever possible - - uses TLS. Set cert with `--tls-cert-file` and key with `--tls-private-key-file` flag. - - default is port 6443, change with `--secure-port` flag. - - default IP is first non-localhost network interface, change with `--bind-address` flag. - - request handled by authentication and authorization modules. - - request handled by admission control module(s). - - authentication and authorization modules run. 
- ## {{% heading "whatsnext" %}} Read more documentation on authentication, authorization and API access control: diff --git a/content/en/docs/concepts/security/multi-tenancy.md b/content/en/docs/concepts/security/multi-tenancy.md index 8db2ad87c1c..0ba9eb8d10b 100755 --- a/content/en/docs/concepts/security/multi-tenancy.md +++ b/content/en/docs/concepts/security/multi-tenancy.md @@ -126,10 +126,10 @@ Pod-to-pod communication can be controlled using [Network Policies](/docs/conce Namespace management tools may simplify the creation of default or common network policies. In addition, some of these tools allow you to enforce a consistent set of namespace labels across your cluster, ensuring that they are a trusted basis for your policies. {{< warning >}} -Network policies require a [CNI plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) that supports the implementation of network policies. Otherwise, NetworkPolicy resources will be ignored. +Network policies require a [CNI plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni) that supports the implementation of network policies. Otherwise, NetworkPolicy resources will be ignored. {{< /warning >}} -More advanced network isolation may be provided by service meshes, which provide OSI Layer 7 policies based on workload identity, in addition to namespaces. These higher-level policies can make it easier to manage namespaced based multi-tenancy, especially when multiple namespaces are dedicated to a single tenant. They frequently also offer encryption using mutual TLS, protecting your data even in the presence of a compromised node, and work across dedicated or virtual clusters. However, they can be significantly more complex to manage and may not be appropriate for all users. +More advanced network isolation may be provided by service meshes, which provide OSI Layer 7 policies based on workload identity, in addition to namespaces. 
These higher-level policies can make it easier to manage namespace-based multi-tenancy, especially when multiple namespaces are dedicated to a single tenant. They frequently also offer encryption using mutual TLS, protecting your data even in the presence of a compromised node, and work across dedicated or virtual clusters. However, they can be significantly more complex to manage and may not be appropriate for all users.

### Storage isolation

@@ -165,7 +165,7 @@ Although workloads from different tenants are running on different nodes, it is

Node isolation is a little easier to reason about from a billing standpoint than sandboxing containers since you can charge back per node rather than per pod. It also has fewer compatibility and performance issues and may be easier to implement than sandboxing containers. For example, nodes for each tenant can be configured with taints so that only pods with the corresponding toleration can run on them. A mutating webhook could then be used to automatically add tolerations and node affinities to pods deployed into tenant namespaces so that they run on a specific set of nodes designated for that tenant.

-Node isolation can be implemented using an [pod node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) or a [Virtual Kubelet](https://github.com/virtual-kubelet).
+Node isolation can be implemented using [pod node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/) or a [Virtual Kubelet](https://github.com/virtual-kubelet).

## Additional Considerations

diff --git a/content/en/docs/concepts/security/pod-security-policy.md b/content/en/docs/concepts/security/pod-security-policy.md
index cc0acc410dc..c296f31cc2c 100644
--- a/content/en/docs/concepts/security/pod-security-policy.md
+++ b/content/en/docs/concepts/security/pod-security-policy.md
@@ -214,6 +214,9 @@ controller selects policies according to the following criteria:

2.
If the pod must be defaulted or mutated, the first PodSecurityPolicy (ordered by name) to allow the pod is selected. +When a Pod is validated against a PodSecurityPolicy, [a `kubernetes.io/psp` annotation](/docs/reference/labels-annotations-taints/#kubernetes-io-psp) +is added to the Pod, with the name of the PodSecurityPolicy as the annotation value. + {{< note >}} During update operations (during which mutations to pod specs are disallowed) only non-mutating PodSecurityPolicies are used to validate the pod. @@ -245,8 +248,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n ### Create a policy and a pod -Define the example PodSecurityPolicy object in a file. This is a policy that -prevents the creation of privileged pods. +This is a policy that prevents the creation of privileged pods. The name of a PodSecurityPolicy object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). @@ -255,7 +257,7 @@ The name of a PodSecurityPolicy object must be a valid And create it with kubectl: ```shell -kubectl-admin create -f example-psp.yaml +kubectl-admin create -f https://k8s.io/examples/policy/example-psp.yaml ``` Now, as the unprivileged user, try to create a simple pod: @@ -284,6 +286,11 @@ pod's service account nor `fake-user` have permission to use the new policy: ```shell kubectl-user auth can-i use podsecuritypolicy/example +``` + +The output is similar to this: + +``` no ``` @@ -300,14 +307,27 @@ kubectl-admin create role psp:unprivileged \ --verb=use \ --resource=podsecuritypolicy \ --resource-name=example -role "psp:unprivileged" created +``` +``` +role "psp:unprivileged" created +``` + +```shell kubectl-admin create rolebinding fake-user:psp:unprivileged \ --role=psp:unprivileged \ --serviceaccount=psp-example:fake-user -rolebinding "fake-user:psp:unprivileged" created +``` +``` +rolebinding "fake-user:psp:unprivileged" created +``` + +```shell kubectl-user auth can-i use 
podsecuritypolicy/example +``` + +``` yes ``` @@ -332,7 +352,20 @@ The output is similar to this pod "pause" created ``` -It works as expected! But any attempts to create a privileged pod should still +It works as expected! You can verify that the pod was validated against the +newly created PodSecurityPolicy: + +```shell +kubectl-user get pod pause -o yaml | grep kubernetes.io/psp +``` + +The output is similar to this + +``` +kubernetes.io/psp: example +``` + +But any attempts to create a privileged pod should still be denied: ```shell diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md index 47e93d3e9ea..d60f3b0ae1e 100644 --- a/content/en/docs/concepts/security/pod-security-standards.md +++ b/content/en/docs/concepts/security/pod-security-standards.md @@ -462,11 +462,11 @@ of individual policies are not defined here. {{% thirdparty-content %}} Other alternatives for enforcing policies are being developed in the Kubernetes ecosystem, such as: + - [Kubewarden](https://github.com/kubewarden) - [Kyverno](https://kyverno.io/policies/pod-security/) - [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) - ## FAQ ### Why isn't there a profile between privileged and baseline? @@ -493,9 +493,9 @@ built-in [Pod Security Admission Controller](/docs/concepts/security/pod-securit ### What profiles should I apply to my Windows Pods? Windows in Kubernetes has some limitations and differentiators from standard Linux-based -workloads. Specifically, many of the Pod SecurityContext fields [have no effect on -Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext). As -such, no standardized Pod Security profiles currently exist. +workloads. Specifically, many of the Pod SecurityContext fields +[have no effect on Windows](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext). 
+As such, no standardized Pod Security profiles currently exist. If you apply the restricted profile for a Windows pod, this **may** have an impact on the pod at runtime. The restricted profile requires enforcing Linux-specific restrictions (such as seccomp @@ -504,7 +504,9 @@ these Linux-specific values, then the Windows pod should still work normally wit profile. However, the lack of enforcement means that there is no additional restriction, for Pods that use Windows containers, compared to the baseline profile. -The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy. Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies, so any HostProcess pod should be considered privileged. +The use of the HostProcess flag to create a HostProcess pod should only be done in alignment with the privileged policy. +Creation of a Windows HostProcess pod is blocked under the baseline and restricted policies, +so any HostProcess pod should be considered privileged. ### What about sandboxed Pods? @@ -518,3 +520,4 @@ kernel. This allows for workloads requiring heightened permissions to still be i Additionally, the protection of sandboxed workloads is highly dependent on the method of sandboxing. As such, no single recommended profile is recommended for all sandboxed workloads. + diff --git a/content/en/docs/concepts/security/rbac-good-practices.md b/content/en/docs/concepts/security/rbac-good-practices.md index cfcc8b3cb93..fe858bba72b 100644 --- a/content/en/docs/concepts/security/rbac-good-practices.md +++ b/content/en/docs/concepts/security/rbac-good-practices.md @@ -15,7 +15,8 @@ execute their roles. It is important to ensure that, when designing permissions users, the cluster administrator understands the areas where privilge escalation could occur, to reduce the risk of excessive access leading to security incidents. 
-The good practices laid out here should be read in conjunction with the general [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).
+The good practices laid out here should be read in conjunction with the general
+[RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).

@@ -23,18 +24,19 @@ The good practices laid out here should be read in conjunction with the general

### Least privilege

-Ideally minimal RBAC rights should be assigned to users and service accounts. Only permissions
-explicitly required for their operation should be used. Whilst each cluster will be different,
+Ideally, minimal RBAC rights should be assigned to users and service accounts. Only permissions
+explicitly required for their operation should be used. While each cluster will be different,
some general rules that can be applied are :

- Assign permissions at the namespace level where possible. Use RoleBindings as opposed
  to ClusterRoleBindings to give users rights only within a specific namespace.
- Avoid providing wildcard permissions when possible, especially to all resources.
  As Kubernetes is an extensible system, providing wildcard access gives rights
-  not just to all object types presently in the cluster, but also to all future object types
+  not just to all object types that currently exist in the cluster, but also to all object types
  which are created in the future.
- Administrators should not use `cluster-admin` accounts except where specifically needed.
-  Providing a low privileged account with [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
+  Providing a low privileged account with
+  [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
  can avoid accidental modification of cluster resources.
- Avoid adding users to the `system:masters` group.
Any user who is a member of this group bypasses all RBAC rights checks and will always have unrestricted superuser access, which cannot be
@@ -44,15 +46,17 @@ some general rules that can be applied are :

### Minimize distribution of privileged tokens

-Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions (for example, any of the rights listed under
-[privilege escalation risks](#privilege-escalation-risks)).
+Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions
+(for example, any of the rights listed under [privilege escalation risks](#privilege-escalation-risks)).
In cases where a workload requires powerful permissions, consider the following practices:

- Limit the number of nodes running powerful pods. Ensure that any DaemonSets you run
  are necessary and are run with least privilege to limit the blast radius of container escapes.
- Avoid running powerful pods alongside untrusted or publicly-exposed ones. Consider using
-  [Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/), [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) to ensure
-  pods don't run alongside untrusted or less-trusted Pods. Pay especial attention to
+  [Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/),
+  [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or
+  [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
+  to ensure pods don't run alongside untrusted or less-trusted Pods. Pay special attention to
  situations where less-trustworthy Pods are not meeting the **Restricted** Pod Security Standard.
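The taint, toleration, and node-affinity guidance above can be sketched as a Pod spec. This is a minimal illustration only: the `tenant` taint key, node label, namespace, and image are hypothetical values, not anything defined by Kubernetes itself.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-workload
  namespace: tenant-a        # hypothetical tenant namespace
spec:
  # Tolerate the taint that keeps other tenants' pods off these nodes
  tolerations:
    - key: "tenant"
      operator: "Equal"
      value: "tenant-a"
      effect: "NoSchedule"
  # Require scheduling onto nodes labeled for this tenant
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: tenant
                operator: In
                values: ["tenant-a"]
  containers:
    - name: app
      image: registry.example/app:1.0   # hypothetical image
```

In practice, a mutating webhook (as described above) would inject these fields rather than each tenant writing them by hand.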
### Hardening @@ -62,7 +66,7 @@ the RBAC rights provided by default can provide opportunities for security harde In general, changes should not be made to rights provided to `system:` accounts some options to harden cluster rights exist: -- Review bindings for the `system:unauthenticated` group and remove where possible, as this gives +- Review bindings for the `system:unauthenticated` group and remove them where possible, as this gives access to anyone who can contact the API server at a network level. - Avoid the default auto-mounting of service account tokens by setting `automountServiceAccountToken: false`. For more details, see @@ -107,7 +111,7 @@ with the ability to create suitably secure and isolated Pods, you should enforce You can use [Pod Security admission](/docs/concepts/security/pod-security-admission/) or other (third party) mechanisms to implement that enforcement. -You can also use the deprecated [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) mechanism +You can also use the deprecated [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) mechanism to restrict users' abilities to create privileged Pods (N.B. PodSecurityPolicy is scheduled for removal in version 1.25). @@ -117,25 +121,27 @@ Secrets they would not have through RBAC directly. ### Persistent volume creation -As noted in the [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems) documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host. Where access to persistent storage is required trusted administrators should create +As noted in the [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/#volumes-and-file-systems) +documentation, access to create PersistentVolumes can allow for escalation of access to the underlying host. 
+Where access to persistent storage is required, trusted administrators should create
PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage.

### Access to `proxy` subresource of Nodes

Users with access to the proxy sub-resource of node objects have rights to the Kubelet API,
-which allows for command execution on every pod on the node(s) which they have rights to.
+which allows for command execution on every pod on the node(s) to which they have rights.
This access bypasses audit logging and admission control, so care should be taken before
granting rights to this resource.

### Escalate verb

-Generally the RBAC system prevents users from creating clusterroles with more rights than
-they possess. The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update),
+Generally, the RBAC system prevents users from creating clusterroles with more rights than the user possesses.
+The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update),
users with this right can effectively escalate their privileges.

### Bind verb

-Similar to the `escalate` verb, granting users this right allows for bypass of Kubernetes
+Similar to the `escalate` verb, granting users this right allows for the bypass of Kubernetes
in-built protections against privilege escalation, allowing users to create bindings to
roles with rights they do not already have.

@@ -173,8 +179,11 @@ objects to create a denial of service condition either based on the size or numb
specifically relevant in multi-tenant clusters if semi-trusted or untrusted users are
allowed limited access to a system.
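One mitigation for the object-creation denial-of-service concern described above is an object-count quota per namespace. A minimal sketch, assuming a hypothetical `tenant-a` namespace and example limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: tenant-a         # hypothetical namespace for a semi-trusted user
spec:
  hard:
    # Cap how many of each object kind can exist in the namespace
    count/secrets: "50"
    count/configmaps: "50"
    count/persistentvolumeclaims: "10"
```

The actual limits should be tuned to the workload; the point is only to bound how many objects an untrusted user can create.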
-One option for mitigation of this issue would be to use [resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota) +One option for mitigation of this issue would be to use +[resource quotas](/docs/concepts/policy/resource-quotas/#object-count-quota) to limit the quantity of objects which can be created. ## {{% heading "whatsnext" %}} + * To learn more about RBAC, see the [RBAC documentation](/docs/reference/access-authn-authz/rbac/). + diff --git a/content/en/docs/concepts/security/windows-security.md b/content/en/docs/concepts/security/windows-security.md index 8c0704ac2ba..c6523e3a43a 100644 --- a/content/en/docs/concepts/security/windows-security.md +++ b/content/en/docs/concepts/security/windows-security.md @@ -22,34 +22,41 @@ storage (as compared to using tmpfs / in-memory filesystems on Linux). As a clus operator, you should take both of the following additional measures: 1. Use file ACLs to secure the Secrets' file location. -1. Apply volume-level encryption using [BitLocker](https://docs.microsoft.com/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server). +1. Apply volume-level encryption using + [BitLocker](https://docs.microsoft.com/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server). ## Container users [RunAsUsername](/docs/tasks/configure-pod-container/configure-runasusername) can be specified for Windows Pods or containers to execute the container processes as specific user. This is roughly equivalent to -[RunAsUser](/docs/concepts/policy/pod-security-policy/#users-and-groups). +[RunAsUser](/docs/concepts/security/pod-security-policy/#users-and-groups). Windows containers offer two default user accounts, ContainerUser and ContainerAdministrator. 
The differences between these two user accounts are covered in -[When to use ContainerAdmin and ContainerUser user accounts](https://docs.microsoft.com/virtualization/windowscontainers/manage-containers/container-security#when-to-use-containeradmin-and-containeruser-user-accounts) within Microsoft's _Secure Windows containers_ documentation. +[When to use ContainerAdmin and ContainerUser user accounts](https://docs.microsoft.com/virtualization/windowscontainers/manage-containers/container-security#when-to-use-containeradmin-and-containeruser-user-accounts) +within Microsoft's _Secure Windows containers_ documentation. Local users can be added to container images during the container build process. {{< note >}} -* [Nano Server](https://hub.docker.com/_/microsoft-windows-nanoserver) based images run as `ContainerUser` by default -* [Server Core](https://hub.docker.com/_/microsoft-windows-servercore) based images run as `ContainerAdministrator` by default +* [Nano Server](https://hub.docker.com/_/microsoft-windows-nanoserver) based images run as + `ContainerUser` by default +* [Server Core](https://hub.docker.com/_/microsoft-windows-servercore) based images run as + `ContainerAdministrator` by default {{< /note >}} -Windows containers can also run as Active Directory identities by utilizing [Group Managed Service Accounts](/docs/tasks/configure-pod-container/configure-gmsa/) +Windows containers can also run as Active Directory identities by utilizing +[Group Managed Service Accounts](/docs/tasks/configure-pod-container/configure-gmsa/) ## Pod-level security isolation Linux-specific pod security context mechanisms (such as SELinux, AppArmor, Seccomp, or custom POSIX capabilities) are not supported on Windows nodes. -Privileged containers are [not supported](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext) on Windows. 
-Instead [HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod) can be used on Windows to perform many of the tasks performed by privileged containers on Linux.
+Privileged containers are [not supported](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext)
+on Windows.
+Instead, [HostProcess containers](/docs/tasks/configure-pod-container/create-hostprocess-pod)
+can be used on Windows to perform many of the tasks performed by privileged containers on Linux.

diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md
index 921d69e8fbf..1716d7483b2 100644
--- a/content/en/docs/concepts/services-networking/dual-stack.md
+++ b/content/en/docs/concepts/services-networking/dual-stack.md
@@ -37,7 +37,7 @@ IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:

The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:

-* Kubernetes 1.20 or later 
+* Kubernetes 1.20 or later

For information about using dual-stack services with earlier
Kubernetes versions, refer to the documentation for that version

@@ -95,7 +95,7 @@ set the `.spec.ipFamilyPolicy` field to one of the following values:

If you would like to define which IP family to use for single stack or define the order of
IP families for dual-stack, you can choose the address families by setting an optional field,
-`.spec.ipFamilies`, on the Service. 
+`.spec.ipFamilies`, on the Service.
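Putting `.spec.ipFamilyPolicy` and `.spec.ipFamilies` together, a dual-stack Service that prefers IPv6 might look like the following sketch (the name, selector, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack
  # Order matters: the first family listed determines .spec.clusterIP
  ipFamilies:
    - IPv6
    - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
```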
- + * For the `.spec.ClusterIP` field, the control plane records the IP address that is from the - same address family as the first service cluster IP range. + same address family as the first service cluster IP range. * On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list - one address. + one address. * On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy` behaves the same as `PreferDualStack`. @@ -174,7 +174,7 @@ dual-stack.) kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: 10.0.197.123 @@ -188,7 +188,7 @@ dual-stack.) protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp type: ClusterIP status: loadBalancer: {} @@ -214,7 +214,7 @@ dual-stack.) kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: None @@ -228,7 +228,7 @@ dual-stack.) protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp ``` #### Switching Services between single-stack and dual-stack diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index 5516306ffa7..f2d593448e0 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -46,6 +46,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet is an [Istio](https://istio.io/) based ingress controller. * The [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme) is an ingress controller driving [Kong Gateway](https://konghq.com/kong/). +* [Kusk Gateway](https://kusk.kubeshop.io/) is an OpenAPI-driven ingress controller based on [Envoy](https://www.envoyproxy.io). 
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/) works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy). * The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy. diff --git a/content/en/docs/concepts/services-networking/service-traffic-policy.md b/content/en/docs/concepts/services-networking/service-traffic-policy.md index b9abe34b3fc..8755b5298b5 100644 --- a/content/en/docs/concepts/services-networking/service-traffic-policy.md +++ b/content/en/docs/concepts/services-networking/service-traffic-policy.md @@ -43,7 +43,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index ad5af427ec3..eda3b6b9b33 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -75,7 +75,7 @@ The name of a Service object must be a valid [RFC 1035 label name](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names). For example, suppose you have a set of Pods where each listens on TCP port 9376 -and contains a label `app=MyApp`: +and contains a label `app.kubernetes.io/name=MyApp`: ```yaml apiVersion: v1 @@ -84,7 +84,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 @@ -92,7 +92,7 @@ spec: ``` This specification creates a new Service object named "my-service", which -targets TCP port 9376 on any Pod with the `app=MyApp` label. +targets TCP port 9376 on any Pod with the `app.kubernetes.io/name=MyApp` label. 
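For reference, a Pod matched by the `my-service` selector above could be declared as follows (the Pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    # Matches the selector of the "my-service" Service
    app.kubernetes.io/name: MyApp
spec:
  containers:
    - name: my-app
      image: registry.example/my-app:1.0   # hypothetical image
      ports:
        - containerPort: 9376
```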
Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"), which is used by the Service proxies @@ -126,7 +126,7 @@ spec: ports: - containerPort: 80 name: http-web-svc - + --- apiVersion: v1 kind: Service @@ -144,9 +144,9 @@ spec: This works even if there is a mixture of Pods in the Service using a single -configured name, with the same network protocol available via different -port numbers. This offers a lot of flexibility for deploying and evolving -your Services. For example, you can change the port numbers that Pods expose +configured name, with the same network protocol available via different +port numbers. This offers a lot of flexibility for deploying and evolving +your Services. For example, you can change the port numbers that Pods expose in the next version of your backend software, without breaking clients. The default protocol for Services is TCP; you can also use any other @@ -159,7 +159,7 @@ Each port definition can have the same `protocol`, or a different one. ### Services without selectors Services most commonly abstract access to Kubernetes Pods thanks to the selector, -but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends, +but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends, including ones that run outside the cluster. For example: * You want to have an external database cluster in production, but in your @@ -222,10 +222,10 @@ In the example above, traffic is routed to the single endpoint defined in the YAML: `192.0.2.42:9376` (TCP). {{< note >}} -The Kubernetes API server does not allow proxying to endpoints that are not mapped to -pods. Actions such as `kubectl proxy ` where the service has no -selector will fail due to this constraint. This prevents the Kubernetes API server -from being used as a proxy to endpoints the caller may not be authorized to access. 
+The Kubernetes API server does not allow proxying to endpoints that are not mapped to +pods. Actions such as `kubectl proxy ` where the service has no +selector will fail due to this constraint. This prevents the Kubernetes API server +from being used as a proxy to endpoints the caller may not be authorized to access. {{< /note >}} An ExternalName Service is a special case of Service that does not have @@ -289,7 +289,7 @@ There are a few reasons for using proxying for Services: Later in this page you can read about various kube-proxy implementations work. Overall, you should note that, when running `kube-proxy`, kernel level rules may be -modified (for example, iptables rules might get created), which won't get cleaned up, +modified (for example, iptables rules might get created), which won't get cleaned up, in some cases until you reboot. Thus, running kube-proxy is something that should only be done by an administrator which understands the consequences of having a low level, privileged network proxying service on a computer. Although the `kube-proxy` @@ -299,9 +299,14 @@ thus is only available to use as-is. ### Configuration Note that the kube-proxy starts up in different modes, which are determined by its configuration. -- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy effectively deprecates the behaviour for almost all of the flags for the kube-proxy. +- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy + effectively deprecates the behaviour for almost all of the flags for the kube-proxy. - The ConfigMap for the kube-proxy does not support live reloading of configuration. -- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup. For example, if your operating system doesn't allow you to run iptables commands, the standard kernel kube-proxy implementation will not work. 
Likewise, if you have an operating system which doesn't support `netsh`, it will not run in Windows userspace mode. +- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup. + For example, if your operating system doesn't allow you to run iptables commands, + the standard kernel kube-proxy implementation will not work. + Likewise, if you have an operating system which doesn't support `netsh`, + it will not run in Windows userspace mode. ### User space proxy mode {#proxy-mode-userspace} @@ -418,7 +423,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - name: http protocol: TCP @@ -492,7 +497,11 @@ variables and DNS. ### Environment variables When a Pod is run on a Node, the kubelet adds a set of environment variables -for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature. +for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, +where the Service name is upper-cased and dashes are converted to underscores. +It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) +that are compatible with Docker Engine's +"_[legacy container links](https://docs.docker.com/network/links/)_" feature. For example, the Service `redis-master` which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment @@ -604,8 +613,10 @@ The default is `ClusterIP`. 
to use the `ExternalName` type.
{{< /note >}}

-You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules
-into a single resource as it can expose multiple services under the same IP address.
+You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service.
+Ingress is not a Service type, but it acts as the entry point for your cluster.
+It lets you consolidate your routing rules into a single resource as it can expose multiple
+services under the same IP address.

### Type NodePort {#type-nodeport}

@@ -620,9 +631,14 @@ field of the
[kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
to particular IP block(s).

-This flag takes a comma-delimited list of IP blocks (e.g. `10.0.0.0/8`, `192.0.2.0/25`) to specify IP address ranges that kube-proxy should consider as local to this node.
+This flag takes a comma-delimited list of IP blocks (e.g. `10.0.0.0/8`, `192.0.2.0/25`)
+to specify IP address ranges that kube-proxy should consider as local to this node.

-For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag, kube-proxy only selects the loopback interface for NodePort Services. The default for `--nodeport-addresses` is an empty list. This means that kube-proxy should consider all available network interfaces for NodePort. (That's also compatible with earlier Kubernetes releases).
+For example, if you start kube-proxy with the `--nodeport-addresses=127.0.0.0/8` flag,
+kube-proxy only selects the loopback interface for NodePort Services.
+The default for `--nodeport-addresses` is an empty list.
+This means that kube-proxy should consider all available network interfaces for NodePort.
+(That's also compatible with earlier Kubernetes releases).
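The `--nodeport-addresses=127.0.0.0/8` example above can also be expressed through the kube-proxy configuration file; a sketch, assuming the `nodePortAddresses` field of the `v1alpha1` `KubeProxyConfiguration` API:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Only treat loopback addresses as local for NodePort Services
nodePortAddresses:
  - 127.0.0.0/8
```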
If you want a specific port number, you can specify a value in the `nodePort` field. The control plane will either allocate you that port or report that @@ -650,7 +666,7 @@ metadata: spec: type: NodePort selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: # By default and for convenience, the `targetPort` is set to the same value as the `port` field. - port: 80 @@ -676,7 +692,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 @@ -689,7 +705,8 @@ status: - ip: 192.0.2.127 ``` -Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced. +Traffic from the external load balancer is directed at the backend Pods. +The cloud provider decides how it is load balanced. Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified, @@ -704,7 +721,11 @@ to create a static type public IP address resource. This public IP address resou be in the same resource group of the other automatically created resources of the cluster. For example, `MC_myResourceGroup_myAKSCluster_eastus`. -Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the securityGroupName in the cloud provider configuration file. For information about troubleshooting `CreatingLoadBalancerFailed` permission issues see, [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip) or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357). +Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the +`securityGroupName` in the cloud provider configuration file. 
+For information about troubleshooting `CreatingLoadBalancerFailed` permission issues, see
+[Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip)
+or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357).

{{< /note >}}

@@ -744,13 +765,13 @@ You must explicitly remove the `nodePorts` entry in every Service port to de-all

`spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation if the cluster is configured with
-a cloud provider using the `--cloud-provider` component flag. 
+a cloud provider using the `--cloud-provider` component flag.
If `spec.loadBalancerClass` is specified, it is assumed that a load balancer
implementation that matches the specified class is watching for Services.
Any default load balancer implementation (for example, the one provided by
the cloud provider) will ignore Services that have this field set.
`spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only.
-Once set, it cannot be changed. 
+Once set, it cannot be changed.
The value of `spec.loadBalancerClass` must be a label-style identifier,
with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`".
Unprefixed names are reserved for end-users.
@@ -760,7 +781,8 @@ Unprefixed names are reserved for end-users.

In a mixed environment it is sometimes necessary to route traffic from Services inside the same
(virtual) network address block.

-In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.
+In a split-horizon DNS environment you would need two Services to be able to route both external
+and internal traffic to your endpoints.
To set an internal load balancer, add one of the following annotations to your Service depending on the cloud Service provider you're using. @@ -925,7 +947,9 @@ you can use the following annotations: In the above example, if the Service contained three ports, `80`, `443`, and `8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP. -From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services. +From Kubernetes v1.9 onwards you can use +[predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) +with HTTPS or SSL listeners for your Services. To see which policies are available for use, you can use the `aws` command line tool: ```bash @@ -981,14 +1005,17 @@ specifies the logical hierarchy you created for your Amazon S3 bucket. metadata: name: my-service annotations: - service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" # Specifies whether access logs are enabled for the load balancer - service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60" + service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" + # The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes). 
- service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket" + service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60" + # The name of the Amazon S3 bucket where the access logs are stored - service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod" + service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket" + # The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod` + service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod" ``` #### Connection Draining on AWS @@ -997,7 +1024,8 @@ Connection draining for Classic ELBs can be managed with the annotation `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set to the value of `"true"`. The annotation `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can -also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances. +also be used to set maximum time, in seconds, to keep the existing connections open before +deregistering the instances. 
```yaml metadata: @@ -1015,50 +1043,56 @@ There are other annotations to manage Classic Elastic Load Balancers that are de metadata: name: my-service annotations: + # The time, in seconds, that the connection is allowed to be idle (no data has been sent + # over the connection) before it is closed by the load balancer service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60" - # The time, in seconds, that the connection is allowed to be idle (no data has been sent over the connection) before it is closed by the load balancer - service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true" # Specifies whether cross-zone load balancing is enabled for the load balancer + service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true" - service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops" # A comma-separated list of key-value pairs which will be recorded as # additional tags in the ELB. + service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops" - service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "" # The number of successive successful health checks required for a backend to # be considered healthy for traffic. Defaults to 2, must be between 2 and 10 + service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "" - service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" # The number of unsuccessful health checks required for a backend to be # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10 + service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" - service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20" # The approximate interval, in seconds, between health checks of an # individual instance. 
Defaults to 10, must be between 5 and 300 + service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20" - service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5" # The amount of time, in seconds, during which no response means a failed # health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval # value. Defaults to 5, must be between 2 and 60 + service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5" - service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f" # A list of existing security groups to be configured on the ELB created. Unlike the annotation - # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB and also overrides the creation + # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other + # security groups previously assigned to the ELB and also overrides the creation # of a uniquely generated security group for this ELB. - # The first security group ID on this list is used as a source to permit incoming traffic to target worker nodes (service traffic and health checks). - # If multiple ELBs are configured with the same security group ID, only a single permit line will be added to the worker node security groups, that means if you delete any + # The first security group ID on this list is used as a source to permit incoming traffic to + # target worker nodes (service traffic and health checks). + # If multiple ELBs are configured with the same security group ID, only a single permit line + # will be added to the worker node security groups, that means if you delete any # of those ELBs it will remove the single permit line and block access for all ELBs that shared the same security group ID. 
# This can cause a cross-service outage if not used properly + service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f" + # A list of additional security groups to be added to the created ELB, this leaves the uniquely + # generated security group in place, this ensures that every ELB + # has a unique security group ID and a matching permit line to allow traffic to the target worker nodes + # (service traffic and health checks). + # Security groups defined here can be shared between services. service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e" - # A list of additional security groups to be added to the created ELB, this leaves the uniquely generated security group in place, this ensures that every ELB - # has a unique security group ID and a matching permit line to allow traffic to the target worker nodes (service traffic and health checks). - # Security groups defined here can be shared between services. - service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api" # A comma separated list of key-value pairs which are used # to select the target nodes for the load balancer + service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api" ``` #### Network Load Balancer support on AWS {#aws-nlb-support} @@ -1075,7 +1109,8 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet ``` {{< note >}} -NLB only works with certain instance classes; see the [AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) +NLB only works with certain instance classes; see the +[AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets) on Elastic Load Balancing for a list of supported instance types. 
{{< /note >}} @@ -1182,7 +1217,8 @@ spec: ``` {{< note >}} -ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName +ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. +ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName is intended to specify a canonical DNS name. To hardcode an IP address, consider using [headless Services](#headless-services). {{< /note >}} @@ -1196,9 +1232,13 @@ can start its Pods, add appropriate selectors or endpoints, and change the Service's `type`. {{< warning >}} -You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. If you use ExternalName then the hostname used by clients inside your cluster is different from the name that the ExternalName references. +You may have trouble using ExternalName for some common protocols, including HTTP and HTTPS. +If you use ExternalName then the hostname used by clients inside your cluster is different from +the name that the ExternalName references. -For protocols that use hostnames this difference may lead to errors or unexpected responses. HTTP requests will have a `Host:` header that the origin server does not recognize; TLS servers will not be able to provide a certificate matching the hostname that the client connected to. +For protocols that use hostnames this difference may lead to errors or unexpected responses. +HTTP requests will have a `Host:` header that the origin server does not recognize; +TLS servers will not be able to provide a certificate matching the hostname that the client connected to. 
{{< /warning >}} {{< note >}} @@ -1223,7 +1263,7 @@ metadata: name: my-service spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - name: http protocol: TCP @@ -1357,12 +1397,15 @@ through a load-balancer, though in those cases the client IP does get altered. #### IPVS iptables operations slow down dramatically in large scale cluster e.g 10,000 Services. -IPVS is designed for load balancing and based on in-kernel hash tables. So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy. Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms (least conns, locality, weighted, persistence). +IPVS is designed for load balancing and based on in-kernel hash tables. +So you can achieve performance consistency in large number of Services from IPVS-based kube-proxy. +Meanwhile, IPVS-based kube-proxy has more sophisticated load balancing algorithms +(least conns, locality, weighted, persistence). ## API Object Service is a top-level resource in the Kubernetes REST API. You can find more details -about the API object at: [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). +about the [Service API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core). ## Supported protocols {#protocol-support} @@ -1388,7 +1431,8 @@ provider offering this facility. (Most do not). ##### Support for multihomed SCTP associations {#caveat-sctp-multihomed} {{< warning >}} -The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod. +The support of multihomed SCTP associations requires that the CNI plugin can support the +assignment of multiple interfaces and IP addresses to a Pod. NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules. 
{{< /warning >}} diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md index 63263fb3708..6da1850adaf 100644 --- a/content/en/docs/concepts/storage/dynamic-provisioning.md +++ b/content/en/docs/concepts/storage/dynamic-provisioning.md @@ -116,7 +116,7 @@ can enable this behavior by: is enabled on the API server. An administrator can mark a specific `StorageClass` as default by adding the -`storageclass.kubernetes.io/is-default-class` annotation to it. +`storageclass.kubernetes.io/is-default-class` [annotation](/docs/reference/labels-annotations-taints/#storageclass-kubernetes-io-is-default-class) to it. When a default `StorageClass` exists in a cluster and a user creates a `PersistentVolumeClaim` with `storageClassName` unspecified, the `DefaultStorageClass` admission controller automatically adds the diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md index a51984885e9..045bcafe768 100644 --- a/content/en/docs/concepts/storage/ephemeral-volumes.md +++ b/content/en/docs/concepts/storage/ephemeral-volumes.md @@ -76,8 +76,8 @@ is managed by kubelet, or injecting different data. {{< feature-state for_k8s_version="v1.16" state="beta" >}} -This feature requires the `CSIInlineVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It -is enabled by default starting with Kubernetes 1.16. +This feature requires the `CSIInlineVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +to be enabled. It is enabled by default starting with Kubernetes 1.16. {{< note >}} CSI ephemeral volumes are only supported by a subset of CSI drivers. @@ -136,8 +136,11 @@ should not be exposed to users through the use of inline ephemeral volumes. 
Cluster administrators who need to restrict the CSI drivers that are allowed to be used as inline volumes within a Pod spec may do so by: -- Removing `Ephemeral` from `volumeLifecycleModes` in the CSIDriver spec, which prevents the driver from being used as an inline ephemeral volume. -- Using an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/) to restrict how this driver is used. + +- Removing `Ephemeral` from `volumeLifecycleModes` in the CSIDriver spec, which prevents the + driver from being used as an inline ephemeral volume. +- Using an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/) + to restrict how this driver is used. ### Generic ephemeral volumes @@ -207,7 +210,7 @@ because then the scheduler is free to choose a suitable node for the Pod. With immediate binding, the scheduler is forced to select a node that has access to the volume once it is available. -In terms of [resource ownership](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents), +In terms of [resource ownership](/docs/concepts/architecture/garbage-collection/#owners-dependents), a Pod that has generic ephemeral storage is the owner of the PersistentVolumeClaim(s) that provide that ephemeral storage. When the Pod is deleted, the Kubernetes garbage collector deletes the PVC, which then usually @@ -252,10 +255,11 @@ Enabling the GenericEphemeralVolume feature allows users to create PVCs indirectly if they can create Pods, even if they do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit their security model, they should -use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/) that rejects objects like Pods that have a generic ephemeral volume. 
+use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/) +that rejects objects like Pods that have a generic ephemeral volume. -The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota) still applies, so -even if users are allowed to use this new mechanism, they cannot use +The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota) +still applies, so even if users are allowed to use this new mechanism, they cannot use it to circumvent other policies. ## {{% heading "whatsnext" %}} @@ -266,11 +270,13 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont ### CSI ephemeral volumes -- For more information on the design, see the [Ephemeral Inline CSI - volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md). -- For more information on further development of this feature, see the [enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596). +- For more information on the design, see the + [Ephemeral Inline CSI volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md). +- For more information on further development of this feature, see the + [enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596). ### Generic ephemeral volumes - For more information on the design, see the -[Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md). + [Generic ephemeral inline volumes KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1698-generic-ephemeral-volumes/README.md). 
+ diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 66aa8f7f8a8..07521f42eec 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -558,7 +558,7 @@ If the access modes are specified as ReadWriteOncePod, the volume is constrained | AzureFile | ✓ | ✓ | ✓ | - | | AzureDisk | ✓ | - | - | - | | CephFS | ✓ | ✓ | ✓ | - | -| Cinder | ✓ | - | - | - | +| Cinder | ✓ | - | ([if multi-attach volumes are available](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/features.md#multi-attach-volumes)) | - | | CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver | | FC | ✓ | ✓ | - | - | | FlexVolume | ✓ | ✓ | depends on the driver | - | diff --git a/content/en/docs/concepts/storage/projected-volumes.md b/content/en/docs/concepts/storage/projected-volumes.md index df67132cf59..a404a26e5e0 100644 --- a/content/en/docs/concepts/storage/projected-volumes.md +++ b/content/en/docs/concepts/storage/projected-volumes.md @@ -73,7 +73,7 @@ volume mount will not receive updates for those volume sources. ## SecurityContext interactions -The [proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the the correct owner permissions set. +The [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set. ### Linux @@ -99,6 +99,7 @@ into their own volume mount outside of `C:\`. 
By default, the projected files will have the following ownership as shown for an example projected volume file: + ```powershell PS C:\> Get-Acl C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt | Format-List @@ -111,6 +112,7 @@ Access : NT AUTHORITY\SYSTEM Allow FullControl Audit : Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU) ``` + This implies all administrator users like `ContainerAdministrator` will have read, write and execute access while, non-administrator users will have read and execute access. diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md index 91e9412759f..f3867ea86cd 100644 --- a/content/en/docs/concepts/windows/intro.md +++ b/content/en/docs/concepts/windows/intro.md @@ -121,7 +121,7 @@ section refers to several key workload abstractions and how they map to Windows. In the above list, wildcards (`*`) indicate all elements in a list. For example, `spec.containers[*].securityContext` refers to the SecurityContext object for all containers. If any of these fields is specified, the Pod will - not be admited by the API server. + not be admitted by the API server. * [Workload resources](/docs/concepts/workloads/controllers/) including: * ReplicaSet @@ -132,7 +132,7 @@ section refers to several key workload abstractions and how they map to Windows. * CronJob * ReplicationController * {{< glossary_tooltip text="Services" term_id="service" >}} - See [Load balancing and Services](#load-balancing-and-services) for more details. + See [Load balancing and Services](/docs/concepts/services-networking/windows-networking/#load-balancing-and-services) for more details. Pods, workload resources, and Services are critical elements to managing Windows workloads on Kubernetes. 
However, on their own they are not enough to enable diff --git a/content/en/docs/concepts/windows/user-guide.md b/content/en/docs/concepts/windows/user-guide.md index 450a1bb5e07..da5b8a6fec7 100644 --- a/content/en/docs/concepts/windows/user-guide.md +++ b/content/en/docs/concepts/windows/user-guide.md @@ -105,12 +105,12 @@ port 80 of the container directly to the Service. * Node-to-pod communication across the network, `curl` port 80 of your pod IPs from the Linux control plane node to check for a web server response * Pod-to-pod communication, ping between pods (and across hosts, if you have more than one Windows node) - using docker exec or kubectl exec + using `docker exec` or `kubectl exec` * Service-to-pod communication, `curl` the virtual service IP (seen under `kubectl get services`) from the Linux control plane node and from individual pods * Service discovery, `curl` the service name with the Kubernetes [default DNS suffix](/docs/concepts/services-networking/dns-pod-service/#services) * Inbound connectivity, `curl` the NodePort from the Linux control plane node or machines outside of the cluster - * Outbound connectivity, `curl` external IPs from inside the pod using kubectl exec + * Outbound connectivity, `curl` external IPs from inside the pod using `kubectl exec` {{< note >}} Windows container hosts are not able to access the IP of services scheduled on them due to current platform limitations of the Windows networking stack. @@ -307,4 +307,4 @@ spec: app: iis-2019 ``` -[RuntimeClass]: https://kubernetes.io/docs/concepts/containers/runtime-class/ +[RuntimeClass]: /docs/concepts/containers/runtime-class/ diff --git a/content/en/docs/concepts/workloads/_index.md b/content/en/docs/concepts/workloads/_index.md index 2c9dd8aa8eb..dffd727505d 100644 --- a/content/en/docs/concepts/workloads/_index.md +++ b/content/en/docs/concepts/workloads/_index.md @@ -70,7 +70,7 @@ visit [Configuration](/docs/concepts/configuration/). 
There are two supporting concepts that provide backgrounds about how Kubernetes manages pods for applications: -* [Garbage collection](/docs/concepts/workloads/controllers/garbage-collection/) tidies up objects +* [Garbage collection](/docs/concepts/architecture/garbage-collection/) tidies up objects from your cluster after their _owning resource_ has been removed. * The [_time-to-live after finished_ controller](/docs/concepts/workloads/controllers/ttlafterfinished/) removes Jobs once a defined time has passed since they completed. diff --git a/content/en/docs/concepts/workloads/controllers/job.md b/content/en/docs/concepts/workloads/controllers/job.md index 9d917505dee..b4ab6f72717 100644 --- a/content/en/docs/concepts/workloads/controllers/job.md +++ b/content/en/docs/concepts/workloads/controllers/job.md @@ -71,7 +71,7 @@ Pod Template: job-name=pi Containers: pi: - Image: perl + Image: perl:5.34.0 Port: Host Port: Command: @@ -125,7 +125,7 @@ spec: - -Mbignum=bpi - -wle - print bpi(2000) - image: perl + image: perl:5.34.0 imagePullPolicy: Always name: pi resources: {} @@ -356,7 +356,7 @@ spec: spec: containers: - name: pi - image: perl + image: perl:5.34.0 command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never ``` @@ -402,7 +402,7 @@ spec: spec: containers: - name: pi - image: perl + image: perl:5.34.0 command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never ``` diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 470a5e50241..a282b8455a4 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -13,9 +13,6 @@ weight: 20 A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. 
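For orientation, a minimal ReplicaSet manifest looks like the sketch below; the name, label values, and image are illustrative:

```yaml
# A minimal ReplicaSet; the controller keeps three Pods matching the
# selector running at all times, creating them from the template.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: frontend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: frontend   # must match .spec.selector
    spec:
      containers:
      - name: app
        image: nginx:1.21
```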
- 
- 
- 

## How a ReplicaSet works

@@ -26,14 +23,14 @@ it should create to meet the number of replicas criteria. A ReplicaSet then fulf
and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create
new Pods, it uses its Pod template.

-A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
+A ReplicaSet is linked to its Pods via the Pods' [metadata.ownerReferences](/docs/concepts/architecture/garbage-collection/#owners-dependents)
field, which specifies what resource the current object is owned by.
All Pods acquired by a ReplicaSet have their owning ReplicaSet's identifying information within their
ownerReferences field. It's through this link that the ReplicaSet knows of the state of the Pods it is
maintaining and plans accordingly.

-A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
-OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it matches a ReplicaSet's selector, it will be immediately acquired by said
-ReplicaSet.
+A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no
+OwnerReference or the OwnerReference is not a {{< glossary_tooltip term_id="controller" >}} and it
+matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet.

## When to use a ReplicaSet

@@ -253,7 +250,9 @@ In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`,
be rejected by the API.

{{< note >}}
-For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
+For 2 ReplicaSets specifying the same `.spec.selector` but different +`.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the +Pods created by the other ReplicaSet. {{< /note >}} ### Replicas @@ -267,11 +266,14 @@ If you do not specify `.spec.replicas`, then it defaults to 1. ### Deleting a ReplicaSet and its Pods -To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the dependent Pods by default. +To delete a ReplicaSet and all of its Pods, use +[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The +[Garbage collector](/docs/concepts/architecture/garbage-collection/) automatically deletes all of +the dependent Pods by default. + +When using the REST API or the `client-go` library, you must set `propagationPolicy` to +`Background` or `Foreground` in the `-d` option. For example: -When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in -the -d option. -For example: ```shell kubectl proxy --port=8080 curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \ @@ -281,9 +283,12 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron ### Deleting just a ReplicaSet -You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option. +You can delete a ReplicaSet without affecting any of its Pods using +[`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) +with the `--cascade=orphan` option. When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`. 
For example: + ```shell kubectl proxy --port=8080 curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \ @@ -295,7 +300,8 @@ Once the original is deleted, you can create a new ReplicaSet to replace it. As as the old and new `.spec.selector` are the same, then the new one will adopt the old Pods. However, it will not make any effort to make existing Pods match a new, different pod template. To update Pods to a new spec in a controlled way, use a -[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as ReplicaSets do not support a rolling update directly. +[Deployment](/docs/concepts/workloads/controllers/deployment/#creating-a-deployment), as +ReplicaSets do not support a rolling update directly. ### Isolating Pods from a ReplicaSet @@ -310,17 +316,19 @@ ensures that a desired number of Pods with a matching label selector are availab When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the available pods to prioritize scaling down pods based on the following general algorithm: - 1. Pending (and unschedulable) pods are scaled down first - 2. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then - the pod with the lower value will come first. - 3. Pods on nodes with more replicas come before pods on nodes with fewer replicas. - 4. If the pods' creation times differ, the pod that was created more recently - comes before the older pod (the creation times are bucketed on an integer log scale - when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled) + +1. Pending (and unschedulable) pods are scaled down first +1. If `controller.kubernetes.io/pod-deletion-cost` annotation is set, then + the pod with the lower value will come first. +1. Pods on nodes with more replicas come before pods on nodes with fewer replicas. +1. 
If the pods' creation times differ, the pod that was created more recently + comes before the older pod (the creation times are bucketed on an integer log scale + when the `LogarithmicScaleDown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled) If all of the above match, then selection is random. ### Pod deletion cost + {{< feature-state for_k8s_version="v1.22" state="beta" >}} Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost) @@ -344,6 +352,7 @@ This feature is beta and enabled by default. You can disable it using the {{< /note >}} #### Example Use Case + The different pods of an application could have different utilization levels. On scale down, the application may prefer to remove the pods with lower utilization. To avoid frequently updating the pods, the application should update `controller.kubernetes.io/pod-deletion-cost` once before issuing a scale down (setting the @@ -387,12 +396,17 @@ As such, it is recommended to use Deployments when you want ReplicaSets. ### Bare Pods -Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node such as Kubelet. +Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or +terminated for any reason, such as in the case of node failure or disruptive node maintenance, +such as a kernel upgrade. 
For this reason, we recommend that you use a ReplicaSet even if your +application requires only a single Pod. Think of it similarly to a process supervisor, only it +supervises multiple Pods across multiple nodes instead of individual processes on a single node. A +ReplicaSet delegates local container restarts to some agent on the node such as Kubelet. ### Job -Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are expected to terminate on their own -(that is, batch jobs). +Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicaSet for Pods that are +expected to terminate on their own (that is, batch jobs). ### DaemonSet @@ -402,12 +416,12 @@ to a machine lifetime: the Pod needs to be running on the machine before other P safe to terminate when the machine is otherwise ready to be rebooted/shutdown. ### ReplicationController -ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/). + +ReplicaSets are the successors to [ReplicationControllers](/docs/concepts/workloads/controllers/replicationcontroller/). The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors). As such, ReplicaSets are preferred over ReplicationControllers - ## {{% heading "whatsnext" %}} * Learn about [Pods](/docs/concepts/workloads/pods). @@ -419,3 +433,4 @@ As such, ReplicaSets are preferred over ReplicationControllers object definition to understand the API for replica sets. * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions. 
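As a closing illustration of the set-based selector requirements mentioned in the ReplicationController comparison above, a ReplicaSet can select Pods with `matchExpressions`. This fragment is illustrative only; the object name and image are placeholders, not taken from the examples on this page:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend            # placeholder name
spec:
  replicas: 3
  selector:
    # Set-based requirements such as In/NotIn/Exists are supported by
    # ReplicaSets but not by ReplicationControllers.
    matchExpressions:
      - key: tier
        operator: In
        values:
          - frontend
  template:
    metadata:
      labels:
        tier: frontend      # must satisfy the selector above
    spec:
      containers:
        - name: nginx
          image: nginx      # placeholder image
```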
+ diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md index 2ae481a4311..1687399abd3 100644 --- a/content/en/docs/concepts/workloads/controllers/statefulset.md +++ b/content/en/docs/concepts/workloads/controllers/statefulset.md @@ -39,10 +39,18 @@ that provides a set of stateless replicas. ## Limitations -* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin. -* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources. -* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service. -* StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is possible to scale the StatefulSet down to 0 prior to deletion. +* The storage for a given Pod must either be provisioned by a + [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/README.md) + based on the requested `storage class`, or pre-provisioned by an admin. +* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the + StatefulSet. This is done to ensure data safety, which is generally more valuable than an + automatic purge of all related StatefulSet resources. 
+* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) + to be responsible for the network identity of the Pods. You are responsible for creating this + Service. +* StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is + deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is + possible to scale the StatefulSet down to 0 prior to deletion. * When using [Rolling Updates](#rolling-updates) with the default [Pod Management Policy](#pod-management-policies) (`OrderedReady`), it's possible to get into a broken state that requires @@ -108,18 +116,24 @@ In the above example: * A Headless Service, named `nginx`, is used to control the network domain. * The StatefulSet, named `web`, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods. -* The `volumeClaimTemplates` will provide stable storage using [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a PersistentVolume Provisioner. +* The `volumeClaimTemplates` will provide stable storage using + [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a + PersistentVolume Provisioner. The name of a StatefulSet object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). ### Pod Selector -You must set the `.spec.selector` field of a StatefulSet to match the labels of its `.spec.template.metadata.labels`. Failing to specify a matching Pod Selector will result in a validation error during StatefulSet creation. +You must set the `.spec.selector` field of a StatefulSet to match the labels of its +`.spec.template.metadata.labels`. Failing to specify a matching Pod Selector will result in a +validation error during StatefulSet creation. 
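For instance, a matching selector and template labels look like this. The fragment is for illustration only and omits the rest of the spec shown in the full example above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx        # must match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx      # must match .spec.selector.matchLabels
    spec:
      containers:
        - name: nginx
          image: nginx  # placeholder image
```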
### Volume Claim Templates -You can set the `.spec.volumeClaimTemplates` which can provide stable storage using [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a PersistentVolume Provisioner. +You can set the `.spec.volumeClaimTemplates` which can provide stable storage using +[PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a PersistentVolume +Provisioner. ### Minimum ready seconds @@ -128,9 +142,11 @@ You can set the `.spec.volumeClaimTemplates` which can provide stable storage u `.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. -Please note that this feature is beta and enabled by default. Please opt out by unsetting the StatefulSetMinReadySeconds flag, if you don't +Please note that this feature is beta and enabled by default. Please opt out by unsetting the +StatefulSetMinReadySeconds flag, if you don't want this feature to be enabled. This field defaults to 0 (the Pod will be considered -available as soon as it is ready). To learn more about when a Pod is considered ready, see [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes). +available as soon as it is ready). To learn more about when a Pod is considered ready, see +[Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes). ## Pod Identity @@ -166,8 +182,8 @@ remembered and reused, even after the Pod is running, for at least a few seconds If you need to discover Pods promptly after they are created, you have a few options: - Query the Kubernetes API directly (for example, using a watch) rather than relying on DNS lookups. -- Decrease the time of caching in your Kubernetes DNS provider (typically this means editing the config map for CoreDNS, which currently caches for 30 seconds). 
- +- Decrease the time of caching in your Kubernetes DNS provider (typically this means editing the + config map for CoreDNS, which currently caches for 30 seconds). As mentioned in the [limitations](#limitations) section, you are responsible for creating the [Headless Service](/docs/concepts/services-networking/service/#headless-services) @@ -189,7 +205,9 @@ Cluster Domain will be set to `cluster.local` unless ### Stable Storage -For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of `my-storage-class` and 1 Gib of provisioned storage. If no StorageClass +For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one +PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume +with a StorageClass of `my-storage-class` and 1 Gib of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its `volumeMounts` mount the PersistentVolumes associated with its PersistentVolume Claims. Note that, the PersistentVolumes associated with the @@ -210,7 +228,9 @@ the StatefulSet. * Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready. * Before a Pod is terminated, all of its successors must be completely shutdown. -The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/). +The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. This practice +is unsafe and strongly discouraged. For further explanation, please refer to +[force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/). 
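As a sketch of the advice above, keep the grace period at a non-zero value in the Pod template. Only the termination-related field matters here; everything else in this fragment is a placeholder:

```yaml
# StatefulSet fragment; only terminationGracePeriodSeconds is relevant here.
spec:
  template:
    spec:
      # 30 seconds is the Kubernetes default; do not set this to 0.
      terminationGracePeriodSeconds: 30
      containers:
        - name: nginx
          image: nginx   # placeholder image
```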
When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is @@ -256,7 +276,8 @@ annotations for the Pods in a StatefulSet. There are two possible values: create new Pods that reflect modifications made to a StatefulSet's `.spec.template`. `RollingUpdate` -: The `RollingUpdate` update strategy implements automated, rolling update for the Pods in a StatefulSet. This is the default update strategy. +: The `RollingUpdate` update strategy implements automated, rolling update for the Pods in a + StatefulSet. This is the default update strategy. ## Rolling Updates @@ -299,7 +320,7 @@ unavailable Pod in the range `0` to `replicas - 1`, it will be counted towards {{< note >}} The `maxUnavailable` field is in Alpha stage and it is honored only by API servers that are running with the `MaxUnavailableStatefulSet` -[feature gate](/docs/reference/commmand-line-tools-reference/feature-gates/) +[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) enabled. {{< /note >}} @@ -375,8 +396,8 @@ spec: ... ``` -The StatefulSet {{}} adds [owner -references](/docs/concepts/overview/working-with-objects/owners-dependents/#owner-references-in-object-specifications) +The StatefulSet {{}} adds +[owner references](/docs/concepts/overview/working-with-objects/owners-dependents/#owner-references-in-object-specifications) to its PVCs, which are then deleted by the {{}} after the Pod is terminated. This enables the Pod to cleanly unmount all volumes before the PVCs are deleted (and before the backing PV and diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md index e7b5b4dc86e..77994f754f9 100644 --- a/content/en/docs/concepts/workloads/pods/_index.md +++ b/content/en/docs/concepts/workloads/pods/_index.md @@ -320,12 +320,12 @@ in the Pod Lifecycle documentation. 
* Learn about the [lifecycle of a Pod](/docs/concepts/workloads/pods/pod-lifecycle/). * Learn about [RuntimeClass](/docs/concepts/containers/runtime-class/) and how you can use it to configure different Pods with different container runtime configurations. -* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how you can use it to manage application availability during disruptions. * Pod is a top-level resource in the Kubernetes REST API. The {{< api-reference page="workload-resources/pod-v1" >}} object definition describes the object in detail. * [The Distributed System Toolkit: Patterns for Composite Containers](/blog/2015/06/the-distributed-system-toolkit-patterns/) explains common layouts for Pods with more than one container. +* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}), you can read about the prior art, including: diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 61b90d17d0b..85c5ec413b0 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -28,7 +28,7 @@ Init containers are exactly like regular containers, except: * Init containers always run to completion. * Each init container must complete successfully before the next one starts. -If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds. +If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds. 
However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed. To specify an init container for a Pod, add the `initContainers` field into @@ -115,7 +115,7 @@ kind: Pod metadata: name: myapp-pod labels: - app: myapp + app.kubernetes.io/name: MyApp spec: containers: - name: myapp-container @@ -159,7 +159,7 @@ The output is similar to this: Name: myapp-pod Namespace: default [...] -Labels: app=myapp +Labels: app.kubernetes.io/name=MyApp Status: Pending [...] Init Containers: diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md deleted file mode 100644 index fe50cf7e2d4..00000000000 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ /dev/null @@ -1,421 +0,0 @@ ---- -title: Pod Topology Spread Constraints -content_type: concept -weight: 40 ---- - - - - -You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization. - - - - -## Prerequisites - -### Node Labels - -Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in. 
For example, a Node might have labels: `node=node1,zone=us-east-1a,region=us-east-1` - -Suppose you have a 4-node cluster with the following labels: - -``` -NAME STATUS ROLES AGE VERSION LABELS -node1 Ready 4m26s v1.16.0 node=node1,zone=zoneA -node2 Ready 3m58s v1.16.0 node=node2,zone=zoneA -node3 Ready 3m17s v1.16.0 node=node3,zone=zoneB -node4 Ready 2m43s v1.16.0 node=node4,zone=zoneB -``` - -Then the cluster is logically viewed as below: - -{{}} -graph TB - subgraph "zoneB" - n3(Node3) - n4(Node4) - end - subgraph "zoneA" - n1(Node1) - n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4 k8s; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -Instead of manually applying labels, you can also reuse the [well-known labels](/docs/reference/labels-annotations-taints/) that are created and populated automatically on most clusters. - -## Spread Constraints for Pods - -### API - -The API field `pod.spec.topologySpreadConstraints` is defined as below: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - topologySpreadConstraints: - - maxSkew: - minDomains: - topologyKey: - whenUnsatisfiable: - labelSelector: -``` - -You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are: - -- **maxSkew** describes the degree to which Pods may be unevenly distributed. - It must be greater than zero. Its semantics differs according to the value of `whenUnsatisfiable`: - - - when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum - permitted difference between the number of matching pods in the target - topology and the global minimum - (the minimum number of pods that match the label selector in a topology domain. 
- For example, if you have 3 zones with 0, 2 and 3 matching pods respectively, - The global minimum is 0). - - when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher - precedence to topologies that would help reduce the skew. - -- **minDomains** indicates a minimum number of eligible domains. - A domain is a particular instance of a topology. An eligible domain is a domain whose - nodes match the node selector. - - - The value of `minDomains` must be greater than 0, when specified. - - When the number of eligible domains with match topology keys is less than `minDomains`, - Pod topology spread treats "global minimum" as 0, and then the calculation of `skew` is performed. - The "global minimum" is the minimum number of matching Pods in an eligible domain, - or zero if the number of eligible domains is less than `minDomains`. - - When the number of eligible domains with matching topology keys equals or is greater than - `minDomains`, this value has no effect on scheduling. - - When `minDomains` is nil, the constraint behaves as if `minDomains` is 1. - - When `minDomains` is not nil, the value of `whenUnsatisfiable` must be "`DoNotSchedule`". - - {{< note >}} - The `minDomains` field is an alpha field added in 1.24. You have to enable the - `MinDomainsInPodToplogySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) - in order to use it. - {{< /note >}} - -- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain. - -- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint: - - `DoNotSchedule` (default) tells the scheduler not to schedule it. - - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew. 
- -- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details. - -When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints. - -You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`. - -### Example: One TopologySpreadConstraint - -Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -If we want an incoming Pod to be evenly spread with existing Pods across zones, the spec can be given as: - -{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}} - -`topologyKey: zone` implies the even distribution will only be applied to the nodes which have label pair "zone:<any value>" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let it stay pending if the incoming Pod can't satisfy the constraint. - -If the scheduler placed this incoming Pod into "zoneA", the Pods distribution would become [3, 1], hence the actual skew is 2 (3 - 1) - which violates `maxSkew: 1`. 
In this example, the incoming Pod can only be placed into "zoneB": - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - p4(mypod) --> n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class p4 plain; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -OR - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - p4(mypod) --> n3 - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class p4 plain; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -You can tweak the Pod spec to meet various kinds of requirements: - -- Change `maxSkew` to a bigger value like "2" so that the incoming Pod can be placed into "zoneA" as well. -- Change `topologyKey` to "node" so as to distribute the Pods evenly across nodes instead of zones. In the above example, if `maxSkew` remains "1", the incoming Pod can only be placed onto "node4". -- Change `whenUnsatisfiable: DoNotSchedule` to `whenUnsatisfiable: ScheduleAnyway` to ensure the incoming Pod to be always schedulable (suppose other scheduling APIs are satisfied). However, it's preferred to be placed onto the topology domain which has fewer matching Pods. (Be aware that this preferability is jointly normalized with other internal scheduling priorities like resource usage ratio, etc.) - -### Example: Multiple TopologySpreadConstraints - -This builds upon the previous example. 
Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar` are located in node1, node2 and node3 respectively: - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class p4 plain; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -You can use 2 TopologySpreadConstraints to control the Pods spreading on both zone and node: - -{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}} - -In this case, to match the first constraint, the incoming Pod can only be placed into "zoneB"; while in terms of the second constraint, the incoming Pod can only be placed onto "node4". Then the results of 2 constraints are ANDed, so the only viable option is to place on "node4". - -Multiple constraints can lead to conflicts. Suppose you have a 3-node cluster across 2 zones: - -{{}} -graph BT - subgraph "zoneB" - p4(Pod) --> n3(Node3) - p5(Pod) --> n3 - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n1 - p3(Pod) --> n2(Node2) - end - - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3,p4,p5 k8s; - class zoneA,zoneB cluster; -{{< /mermaid >}} - -If you apply "two-constraints.yaml" to this cluster, you will notice "mypod" stays in `Pending` state. This is because: to satisfy the first constraint, "mypod" can only placed into "zoneB"; while in terms of the second constraint, "mypod" can only be placed onto "node2". Then a joint result of "zoneB" and "node2" returns nothing. 
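The spec referenced by "two-constraints.yaml" has roughly the following shape. This is a sketch reconstructed from the description above (zone and node constraints, `maxSkew: 1`, `DoNotSchedule`, matching label `foo: bar`); the file in the examples repository is authoritative:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone            # spread across zones
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar
    - maxSkew: 1
      topologyKey: node            # spread across nodes
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          foo: bar
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.1   # placeholder image
```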
- -To overcome this situation, you can either increase the `maxSkew` or modify one of the constraints to use `whenUnsatisfiable: ScheduleAnyway`. - -### Interaction With Node Affinity and Node Selectors - -The scheduler will skip the non-matching nodes from the skew calculations if the incoming Pod has `spec.nodeSelector` or `spec.affinity.nodeAffinity` defined. - -### Example: TopologySpreadConstraints with NodeAffinity - -Suppose you have a 5-node cluster ranging from zoneA to zoneC: - -{{}} -graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end - -classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; -classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; -classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; -class n1,n2,n3,n4,p1,p2,p3 k8s; -class p4 plain; -class zoneA,zoneB cluster; -{{< /mermaid >}} - -{{}} -graph BT - subgraph "zoneC" - n5(Node5) - end - -classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; -classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; -classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; -class n5 k8s; -class zoneC cluster; -{{< /mermaid >}} - -and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed into "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected. - -{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} - -The scheduler doesn't have prior knowledge of all the zones or other topology domains that a cluster has. They are determined from the existing nodes in the cluster. 
This could lead to a problem in autoscaled clusters, when a node pool (or node group) is scaled to zero nodes and the user is expecting them to scale up, because, in this case, those topology domains won't be considered until there is at least one node in them. - -### Other Noticeable Semantics - -There are some implicit conventions worth noting here: - -- Only the Pods holding the same namespace as the incoming Pod can be matching candidates. - -- The scheduler will bypass the nodes without `topologySpreadConstraints[*].topologyKey` present. This implies that: - - 1. the Pods located on those nodes do not impact `maxSkew` calculation - in the above example, suppose "node1" does not have label "zone", then the 2 Pods will be disregarded, hence the incoming Pod will be scheduled into "zoneA". - 2. the incoming Pod has no chances to be scheduled onto such nodes - in the above example, suppose a "node5" carrying label `{zone-typo: zoneC}` joins the cluster, it will be bypassed due to the absence of label key "zone". - -- Be aware of what will happen if the incoming Pod's `topologySpreadConstraints[*].labelSelector` doesn't match its own labels. In the above example, if we remove the incoming Pod's labels, it can still be placed into "zoneB" since the constraints are still satisfied. However, after the placement, the degree of imbalance of the cluster remains unchanged - it's still zoneA having 2 Pods which hold label {foo:bar}, and zoneB having 1 Pod which holds label {foo:bar}. So if this is not what you expect, we recommend the workload's `topologySpreadConstraints[*].labelSelector` to match its own labels. - -### Cluster-level default constraints - -It is possible to set default topology spread constraints for a cluster. Default -topology spread constraints are applied to a Pod if, and only if: - -- It doesn't define any constraints in its `.spec.topologySpreadConstraints`. -- It belongs to a service, replication controller, replica set or stateful set. 
-
-Default constraints can be set as part of the `PodTopologySpread` plugin args
-in a [scheduling profile](/docs/reference/scheduling/config/#profiles).
-The constraints are specified with the same [API above](#api), except that
-`labelSelector` must be empty. The selectors are calculated from the services,
-replication controllers, replica sets or stateful sets that the Pod belongs to.
-
-An example configuration might look like the following:
-
-```yaml
-apiVersion: kubescheduler.config.k8s.io/v1beta3
-kind: KubeSchedulerConfiguration
-
-profiles:
-  - schedulerName: default-scheduler
-    pluginConfig:
-      - name: PodTopologySpread
-        args:
-          defaultConstraints:
-            - maxSkew: 1
-              topologyKey: topology.kubernetes.io/zone
-              whenUnsatisfiable: ScheduleAnyway
-          defaultingType: List
-```
-
-{{< note >}}
-[`SelectorSpread` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
-is disabled by default. It's recommended to use `PodTopologySpread` to achieve similar
-behavior.
-{{< /note >}}
-
-#### Built-in default constraints {#internal-default-constraints}
-
-{{< feature-state for_k8s_version="v1.24" state="stable" >}}
-
-If you don't configure any cluster-level default constraints for pod topology spreading,
-then kube-scheduler acts as if you specified the following default topology constraints:
-
-```yaml
-defaultConstraints:
-  - maxSkew: 3
-    topologyKey: "kubernetes.io/hostname"
-    whenUnsatisfiable: ScheduleAnyway
-  - maxSkew: 5
-    topologyKey: "topology.kubernetes.io/zone"
-    whenUnsatisfiable: ScheduleAnyway
-```
-
-Also, the legacy `SelectorSpread` plugin, which provides an equivalent behavior,
-is disabled by default.
-
-{{< note >}}
-The `PodTopologySpread` plugin does not score the nodes that don't have
-the topology keys specified in the spreading constraints. This might result
-in a different default behavior compared to the legacy `SelectorSpread` plugin when
-using the default topology constraints.
-
-If your nodes are not expected to have **both** `kubernetes.io/hostname` and
-`topology.kubernetes.io/zone` labels set, define your own constraints
-instead of using the Kubernetes defaults.
-{{< /note >}}
-
-If you don't want to use the default Pod spreading constraints for your cluster,
-you can disable those defaults by setting `defaultingType` to `List` and leaving
-`defaultConstraints` empty in the `PodTopologySpread` plugin configuration:
-
-```yaml
-apiVersion: kubescheduler.config.k8s.io/v1beta3
-kind: KubeSchedulerConfiguration
-
-profiles:
-  - schedulerName: default-scheduler
-    pluginConfig:
-      - name: PodTopologySpread
-        args:
-          defaultConstraints: []
-          defaultingType: List
-```
-
-## Comparison with PodAffinity/PodAntiAffinity
-
-In Kubernetes, directives related to "Affinity" control how Pods are
-scheduled - more packed or more scattered.
-
-- For `PodAffinity`, you can try to pack any number of Pods into qualifying
-  topology domain(s)
-- For `PodAntiAffinity`, only one Pod can be scheduled into a
-  single topology domain.
-
-For finer control, you can specify topology spread constraints to distribute
-Pods across different topology domains - to achieve either high availability or
-cost-saving. This can also help with rolling updates of workloads and smooth
-scale-out of replicas. See
-[Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation)
-for more details.
-
-## Known Limitations
-
-- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in an imbalanced Pod distribution.
-  You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pods distribution.
-- Pods matched on tainted nodes are respected.
See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)
-
-## {{% heading "whatsnext" %}}
-
-- [Blog: Introducing PodTopologySpread](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/)
-  explains `maxSkew` in detail, as well as bringing up some advanced usage examples.
diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md
index 0358005d268..1c741b32e57 100644
--- a/content/en/docs/contribute/localization.md
+++ b/content/en/docs/contribute/localization.md
@@ -278,7 +278,7 @@ For an example of adding a new localization, see the PR to enable
 To guide other localization contributors, add a new
 [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of
-[k/website](https://github.com/kubernetes/website/), where `**` is the two-letter language code.
+[kubernetes/website](https://github.com/kubernetes/website/), where `**` is the two-letter language code.
 For example, a German README file would be `README-de.md`.
 
 Provide guidance to localization contributors in the localized `README-**.md` file.
@@ -299,7 +299,7 @@ Once a localization meets requirements for workflow and minimum output, SIG Docs
 - Enable language selection on the website
 - Publicize the localization's availability through
   [Cloud Native Computing Foundation](https://www.cncf.io/about/)(CNCF) channels, including the
-  [Kubernetes blog](https://kubernetes.io/blog/).
+  [Kubernetes blog](/blog/).
 
 ## Translating content
 
@@ -418,7 +418,7 @@ To collaborate on a localization branch:
   `dev-<source version>-<language code>.<team milestone>` For example, an approver on a German localization team opens the localization branch
-  `dev-1.12-de.1` directly against the k/website repository, based on the source branch for
+  `dev-1.12-de.1` directly against the `kubernetes/website` repository, based on the source branch for
   Kubernetes v1.12.
 
 2. Individual contributors open feature branches based on the localization branch.
diff --git a/content/en/docs/contribute/new-content/new-features.md b/content/en/docs/contribute/new-content/new-features.md index 268c447402e..d40195c32d1 100644 --- a/content/en/docs/contribute/new-content/new-features.md +++ b/content/en/docs/contribute/new-content/new-features.md @@ -37,7 +37,7 @@ the techniques described in ### Find out about upcoming features To find out about upcoming features, attend the weekly SIG Release meeting (see -the [community](https://kubernetes.io/community/) page for upcoming meetings) +the [community](/community/) page for upcoming meetings) and monitor the release-specific documentation in the [kubernetes/sig-release](https://github.com/kubernetes/sig-release/) repository. Each release has a sub-directory in the [/sig-release/tree/master/releases/](https://github.com/kubernetes/sig-release/tree/master/releases) diff --git a/content/en/docs/contribute/new-content/open-a-pr.md b/content/en/docs/contribute/new-content/open-a-pr.md index 666dfdce965..50b5a00608e 100644 --- a/content/en/docs/contribute/new-content/open-a-pr.md +++ b/content/en/docs/contribute/new-content/open-a-pr.md @@ -15,13 +15,15 @@ upcoming Kubernetes release, see [Document a new feature](/docs/contribute/new-content/new-features/). {{< /note >}} -To contribute new content pages or improve existing content pages, open a pull request (PR). Make sure you follow all the requirements in the [Before you begin](/docs/contribute/new-content/overview/#before-you-begin) section. - -If your change is small, or you're unfamiliar with git, read [Changes using GitHub](#changes-using-github) to learn how to edit a page. - -If your changes are large, read [Work from a local fork](#fork-the-repo) to learn how to make changes locally on your computer. +To contribute new content pages or improve existing content pages, open a pull request (PR). +Make sure you follow all the requirements in the +[Before you begin](/docs/contribute/new-content/) section. 
+If your change is small, or you're unfamiliar with git, read +[Changes using GitHub](#changes-using-github) to learn how to edit a page. +If your changes are large, read [Work from a local fork](#fork-the-repo) to learn how to make +changes locally on your computer. @@ -63,38 +65,39 @@ class id1 k8s Figure 1. Steps for opening a PR using GitHub. -1. On the page where you see the issue, select the pencil icon at the top right. - You can also scroll to the bottom of the page and select **Edit this page**. +1. On the page where you see the issue, select the pencil icon at the top right. + You can also scroll to the bottom of the page and select **Edit this page**. -2. Make your changes in the GitHub markdown editor. +1. Make your changes in the GitHub markdown editor. -3. Below the editor, fill in the **Propose file change** - form. In the first field, give your commit message a title. In - the second field, provide a description. +1. Below the editor, fill in the **Propose file change** form. + In the first field, give your commit message a title. + In the second field, provide a description. - {{< note >}} - Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in your commit message. You can add those to the pull request - description later. - {{< /note >}} + {{< note >}} + Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) + in your commit message. You can add those to the pull request description later. + {{< /note >}} -4. Select **Propose file change**. +1. Select **Propose file change**. -5. Select **Create pull request**. +1. Select **Create pull request**. -6. The **Open a pull request** screen appears. Fill in the form: +1. The **Open a pull request** screen appears. 
Fill in the form:
 
-   - The **Subject** field of the pull request defaults to the commit summary.
-     You can change it if needed.
-   - The **Body** contains your extended commit message, if you have one,
-     and some template text. Add the
-     details the template text asks for, then delete the extra template text.
-   - Leave the **Allow edits from maintainers** checkbox selected.
+   - The **Subject** field of the pull request defaults to the commit summary.
+     You can change it if needed.
+   - The **Body** contains your extended commit message, if you have one,
+     and some template text. Add the
+     details the template text asks for, then delete the extra template text.
+   - Leave the **Allow edits from maintainers** checkbox selected.
 
-   {{< note >}}
-   PR descriptions are a great way to help reviewers understand your change. For more information, see [Opening a PR](#open-a-pr).
-   {{< /note >}}
+   {{< note >}}
+   PR descriptions are a great way to help reviewers understand your change.
+   For more information, see [Opening a PR](#open-a-pr).
+   {{< /note >}}
 
-7. Select **Create pull request**.
+1. Select **Create pull request**.
 
 ### Addressing feedback in GitHub
 
@@ -106,12 +109,12 @@ leave a comment with their GitHub username in it.
 If a reviewer asks you to make changes:
 
 1. Go to the **Files changed** tab.
-2. Select the pencil (edit) icon on any files changed by the
-pull request.
-3. Make the changes requested.
-4. Commit the changes.
+1. Select the pencil (edit) icon on any files changed by the pull request.
+1. Make the changes requested.
+1. Commit the changes.
 
-If you are waiting on a reviewer, reach out once every 7 days. You can also post a message in the `#sig-docs` Slack channel.
+If you are waiting on a reviewer, reach out once every 7 days. You can also post a message in the
+`#sig-docs` Slack channel.
 
 When your review is complete, a reviewer merges your PR and your changes go live a few minutes later.
@@ -120,7 +123,8 @@ When your review is complete, a reviewer merges your PR and your changes go live If you're more experienced with git, or if your changes are larger than a few lines, work from a local fork. -Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed on your computer. You can also use a git UI application. +Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) installed +on your computer. You can also use a git UI application. Figure 2 shows the steps to follow when you work from a local fork. The details for each step follow. @@ -157,75 +161,80 @@ Figure 2. Working from a local fork to make your changes. ### Fork the kubernetes/website repository 1. Navigate to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository. -2. Select **Fork**. +1. Select **Fork**. ### Create a local clone and set the upstream -3. In a terminal window, clone your fork and update the [Docsy Hugo theme](https://github.com/google/docsy#readme): +1. In a terminal window, clone your fork and update the [Docsy Hugo theme](https://github.com/google/docsy#readme): - ```bash - git clone git@github.com//website - cd website - git submodule update --init --recursive --depth 1 - ``` + ```shell + git clone git@github.com//website + cd website + git submodule update --init --recursive --depth 1 + ``` -4. Navigate to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote: +1. Navigate to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote: - ```bash - cd website + ```shell + cd website - git remote add upstream https://github.com/kubernetes/website.git - ``` + git remote add upstream https://github.com/kubernetes/website.git + ``` -5. Confirm your `origin` and `upstream` repositories: +1. 
Confirm your `origin` and `upstream` repositories: - ```bash - git remote -v - ``` + ```shell + git remote -v + ``` - Output is similar to: + Output is similar to: - ```bash - origin git@github.com:/website.git (fetch) - origin git@github.com:/website.git (push) - upstream https://github.com/kubernetes/website.git (fetch) - upstream https://github.com/kubernetes/website.git (push) - ``` + ```none + origin git@github.com:/website.git (fetch) + origin git@github.com:/website.git (push) + upstream https://github.com/kubernetes/website.git (fetch) + upstream https://github.com/kubernetes/website.git (push) + ``` -6. Fetch commits from your fork's `origin/main` and `kubernetes/website`'s `upstream/main`: +1. Fetch commits from your fork's `origin/main` and `kubernetes/website`'s `upstream/main`: - ```bash - git fetch origin - git fetch upstream - ``` + ```shell + git fetch origin + git fetch upstream + ``` - This makes sure your local repository is up to date before you start making changes. + This makes sure your local repository is up to date before you start making changes. - {{< note >}} - This workflow is different than the [Kubernetes Community GitHub Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). You do not need to merge your local copy of `main` with `upstream/main` before pushing updates to your fork. - {{< /note >}} + {{< note >}} + This workflow is different than the + [Kubernetes Community GitHub Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). + You do not need to merge your local copy of `main` with `upstream/main` before pushing updates + to your fork. + {{< /note >}} ### Create a branch 1. Decide which branch base to your work on: - - For improvements to existing content, use `upstream/main`. - - For new content about existing features, use `upstream/main`. - - For localized content, use the localization's conventions. 
For more information, see [localizing Kubernetes documentation](/docs/contribute/localization/).
-   - For new features in an upcoming Kubernetes release, use the feature branch. For more information, see [documenting for a release](/docs/contribute/new-content/new-features/).
-   - For long-running efforts that multiple SIG Docs contributors collaborate on,
-     like content reorganization, use a specific feature branch created for that
-     effort.
+   - For improvements to existing content, use `upstream/main`.
+   - For new content about existing features, use `upstream/main`.
+   - For localized content, use the localization's conventions. For more information, see
+     [localizing Kubernetes documentation](/docs/contribute/localization/).
+   - For new features in an upcoming Kubernetes release, use the feature branch. For more
+     information, see [documenting for a release](/docs/contribute/new-content/new-features/).
+   - For long-running efforts that multiple SIG Docs contributors collaborate on,
+     like content reorganization, use a specific feature branch created for that effort.
 
-   If you need help choosing a branch, ask in the `#sig-docs` Slack channel.
+   If you need help choosing a branch, ask in the `#sig-docs` Slack channel.
 
-2. Create a new branch based on the branch identified in step 1. This example assumes the base branch is `upstream/main`:
+1. Create a new branch based on the branch identified in step 1. This example assumes the base
+   branch is `upstream/main`:
 
-   ```bash
-   git checkout -b <my_new_branch> upstream/main
-   ```
+   ```shell
+   git checkout -b <my_new_branch> upstream/main
+   ```
 
-3. Make your changes using a text editor.
+1. Make your changes using a text editor.
 
 At any time, use the `git status` command to see what files you've changed.
 
@@ -235,109 +244,116 @@ When you are ready to submit a pull request, commit your changes.
 
1.
In your local repository, check which files you need to commit: - ```bash - git status - ``` + ```shell + git status + ``` - Output is similar to: + Output is similar to: - ```bash - On branch - Your branch is up to date with 'origin/'. + ```none + On branch + Your branch is up to date with 'origin/'. - Changes not staged for commit: - (use "git add ..." to update what will be committed) - (use "git checkout -- ..." to discard changes in working directory) + Changes not staged for commit: + (use "git add ..." to update what will be committed) + (use "git checkout -- ..." to discard changes in working directory) - modified: content/en/docs/contribute/new-content/contributing-content.md + modified: content/en/docs/contribute/new-content/contributing-content.md - no changes added to commit (use "git add" and/or "git commit -a") - ``` + no changes added to commit (use "git add" and/or "git commit -a") + ``` -2. Add the files listed under **Changes not staged for commit** to the commit: +1. Add the files listed under **Changes not staged for commit** to the commit: - ```bash - git add - ``` + ```shell + git add + ``` - Repeat this for each file. + Repeat this for each file. -3. After adding all the files, create a commit: +1. After adding all the files, create a commit: - ```bash - git commit -m "Your commit message" - ``` + ```shell + git commit -m "Your commit message" + ``` - {{< note >}} - Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) in your commit message. You can add those to the pull request - description later. - {{< /note >}} + {{< note >}} + Do not use any [GitHub Keywords](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) + in your commit message. You can add those to the pull request + description later. + {{< /note >}} -4. 
Push your local branch and its new commit to your remote fork: +1. Push your local branch and its new commit to your remote fork: - ```bash - git push origin - ``` + ```shell + git push origin + ``` ### Preview your changes locally {#preview-locally} -It's a good idea to preview your changes locally before pushing them or opening a pull request. A preview lets you catch build errors or markdown formatting problems. +It's a good idea to preview your changes locally before pushing them or opening a pull request. +A preview lets you catch build errors or markdown formatting problems. -You can either build the website's container image or run Hugo locally. Building the container image is slower but displays [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/), which can be useful for debugging. +You can either build the website's container image or run Hugo locally. Building the container +image is slower but displays [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/), which can +be useful for debugging. {{< tabs name="tab_with_hugo" >}} {{% tab name="Hugo in a container" %}} {{< note >}} -The commands below use Docker as default container engine. Set the `CONTAINER_ENGINE` environment variable to override this behaviour. +The commands below use Docker as default container engine. Set the `CONTAINER_ENGINE` environment +variable to override this behaviour. {{< /note >}} 1. Build the container image locally _You only need this step if you are testing a change to the Hugo tool itself_ - ```bash + + ```shell # Run this in a terminal (if required) make container-image ``` 1. Start Hugo in a container: - ```bash + ```shell # Run this in a terminal make container-serve ``` -1. In a web browser, navigate to `https://localhost:1313`. Hugo watches the - changes and rebuilds the site as needed. +1. In a web browser, navigate to `https://localhost:1313`. Hugo watches the + changes and rebuilds the site as needed. -1. 
To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, - or close the terminal window. +1. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, + or close the terminal window. {{% /tab %}} {{% tab name="Hugo on the command line" %}} Alternately, install and use the `hugo` command on your computer: -1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/main/netlify.toml). +1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in + [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/main/netlify.toml). -2. If you have not updated your website repository, the `website/themes/docsy` directory is empty. - The site cannot build without a local copy of the theme. To update the website theme, run: +1. If you have not updated your website repository, the `website/themes/docsy` directory is empty. + The site cannot build without a local copy of the theme. To update the website theme, run: - ```bash - git submodule update --init --recursive --depth 1 - ``` + ```shell + git submodule update --init --recursive --depth 1 + ``` -3. In a terminal, go to your Kubernetes website repository and start the Hugo server: +1. In a terminal, go to your Kubernetes website repository and start the Hugo server: - ```bash - cd /website - hugo server --buildFuture - ``` + ```shell + cd /website + hugo server --buildFuture + ``` -4. In a web browser, navigate to `https://localhost:1313`. Hugo watches the - changes and rebuilds the site as needed. +1. In a web browser, navigate to `https://localhost:1313`. Hugo watches the + changes and rebuilds the site as needed. -5. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, - or close the terminal window. +1. 
To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`, + or close the terminal window. {{% /tab %}} {{< /tabs >}} @@ -345,6 +361,7 @@ Alternately, install and use the `hugo` command on your computer: ### Open a pull request from your fork to kubernetes/website {#open-a-pr} Figure 3 shows the steps to open a PR from your fork to the K8s/website. The details follow. + @@ -374,47 +391,55 @@ class first,second white Figure 3. Steps to open a PR from your fork to the K8s/website. 1. In a web browser, go to the [`kubernetes/website`](https://github.com/kubernetes/website/) repository. -2. Select **New Pull Request**. -3. Select **compare across forks**. -4. From the **head repository** drop-down menu, select your fork. -5. From the **compare** drop-down menu, select your branch. -6. Select **Create Pull Request**. -7. Add a description for your pull request: +1. Select **New Pull Request**. +1. Select **compare across forks**. +1. From the **head repository** drop-down menu, select your fork. +1. From the **compare** drop-down menu, select your branch. +1. Select **Create Pull Request**. +1. Add a description for your pull request: + - **Title** (50 characters or less): Summarize the intent of the change. - **Description**: Describe the change in more detail. - - If there is a related GitHub issue, include `Fixes #12345` or `Closes #12345` in the description. GitHub's automation closes the mentioned issue after merging the PR if used. If there are other related PRs, link those as well. - - If you want advice on something specific, include any questions you'd like reviewers to think about in your description. -8. Select the **Create pull request** button. + - If there is a related GitHub issue, include `Fixes #12345` or `Closes #12345` in the + description. GitHub's automation closes the mentioned issue after merging the PR if used. + If there are other related PRs, link those as well. 
+ - If you want advice on something specific, include any questions you'd like reviewers to + think about in your description. + +1. Select the **Create pull request** button. Congratulations! Your pull request is available in [Pull requests](https://github.com/kubernetes/website/pulls). +After opening a PR, GitHub runs automated tests and tries to deploy a preview using +[Netlify](https://www.netlify.com/). -After opening a PR, GitHub runs automated tests and tries to deploy a preview using [Netlify](https://www.netlify.com/). +- If the Netlify build fails, select **Details** for more information. +- If the Netlify build succeeds, select **Details** opens a staged version of the Kubernetes + website with your changes applied. This is how reviewers check your changes. - - If the Netlify build fails, select **Details** for more information. - - If the Netlify build succeeds, select **Details** opens a staged version of the Kubernetes website with your changes applied. This is how reviewers check your changes. - -GitHub also automatically assigns labels to a PR, to help reviewers. You can add them too, if needed. For more information, see [Adding and removing issue labels](/docs/contribute/review/for-approvers/#adding-and-removing-issue-labels). +GitHub also automatically assigns labels to a PR, to help reviewers. You can add them too, if +needed. For more information, see [Adding and removing issue labels](/docs/contribute/review/for-approvers/#adding-and-removing-issue-labels). ### Addressing feedback locally 1. After making your changes, amend your previous commit: - ```bash - git commit -a --amend - ``` + ```shell + git commit -a --amend + ``` - - `-a`: commits all changes - - `--amend`: amends the previous commit, rather than creating a new one + - `-a`: commits all changes + - `--amend`: amends the previous commit, rather than creating a new one -2. Update your commit message if needed. +1. Update your commit message if needed. -3. 
Use `git push origin ` to push your changes and re-run the Netlify tests. +1. Use `git push origin ` to push your changes and re-run the Netlify tests. - {{< note >}} - If you use `git commit -m` instead of amending, you must [squash your commits](#squashing-commits) before merging. - {{< /note >}} + {{< note >}} + If you use `git commit -m` instead of amending, you must [squash your commits](#squashing-commits) + before merging. + {{< /note >}} #### Changes from reviewers @@ -422,89 +447,97 @@ Sometimes reviewers commit to your pull request. Before making any other changes 1. Fetch commits from your remote fork and rebase your working branch: - ```bash - git fetch origin - git rebase origin/ - ``` + ```shell + git fetch origin + git rebase origin/ + ``` -2. After rebasing, force-push new changes to your fork: +1. After rebasing, force-push new changes to your fork: - ```bash - git push --force-with-lease origin - ``` + ```shell + git push --force-with-lease origin + ``` #### Merge conflicts and rebasing {{< note >}} -For more information, see [Git Branching - Basic Branching and Merging](https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging#_basic_merge_conflicts), [Advanced Merging](https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging), or ask in the `#sig-docs` Slack channel for help. +For more information, see [Git Branching - Basic Branching and Merging](https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging#_basic_merge_conflicts), +[Advanced Merging](https://git-scm.com/book/en/v2/Git-Tools-Advanced-Merging), or ask in the +`#sig-docs` Slack channel for help. {{< /note >}} -If another contributor commits changes to the same file in another PR, it can create a merge conflict. You must resolve all merge conflicts in your PR. +If another contributor commits changes to the same file in another PR, it can create a merge +conflict. You must resolve all merge conflicts in your PR. 1. 
Update your fork and rebase your local branch: - ```bash - git fetch origin - git rebase origin/ - ``` + ```shell + git fetch origin + git rebase origin/ + ``` - Then force-push the changes to your fork: + Then force-push the changes to your fork: - ```bash - git push --force-with-lease origin - ``` + ```shell + git push --force-with-lease origin + ``` -2. Fetch changes from `kubernetes/website`'s `upstream/main` and rebase your branch: +1. Fetch changes from `kubernetes/website`'s `upstream/main` and rebase your branch: - ```bash - git fetch upstream - git rebase upstream/main - ``` + ```shell + git fetch upstream + git rebase upstream/main + ``` -3. Inspect the results of the rebase: +1. Inspect the results of the rebase: - ```bash - git status - ``` + ```shell + git status + ``` - This results in a number of files marked as conflicted. + This results in a number of files marked as conflicted. -4. Open each conflicted file and look for the conflict markers: `>>>`, `<<<`, and `===`. Resolve the conflict and delete the conflict marker. +1. Open each conflicted file and look for the conflict markers: `>>>`, `<<<`, and `===`. + Resolve the conflict and delete the conflict marker. - {{< note >}} - For more information, see [How conflicts are presented](https://git-scm.com/docs/git-merge#_how_conflicts_are_presented). - {{< /note >}} + {{< note >}} + For more information, see [How conflicts are presented](https://git-scm.com/docs/git-merge#_how_conflicts_are_presented). + {{< /note >}} -5. Add the files to the changeset: +1. Add the files to the changeset: - ```bash - git add - ``` -6. Continue the rebase: + ```shell + git add + ``` - ```bash - git rebase --continue - ``` +1. Continue the rebase: -7. Repeat steps 2 to 5 as needed. + ```shell + git rebase --continue + ``` - After applying all commits, the `git status` command shows that the rebase is complete. +1. Repeat steps 2 to 5 as needed. -8. 
Force-push the branch to your fork:
 
-   ```bash
-   git push --force-with-lease origin
-   ```
-
-   The pull request no longer shows any conflicts.
+   ```shell
+   git push --force-with-lease origin
+   ```
+
+   The pull request no longer shows any conflicts.
 
 ### Squashing commits
 
 {{< note >}}
-For more information, see [Git Tools - Rewriting History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History), or ask in the `#sig-docs` Slack channel for help.
+For more information, see [Git Tools - Rewriting History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History),
+or ask in the `#sig-docs` Slack channel for help.
 {{< /note >}}
 
-If your PR has multiple commits, you must squash them into a single commit before merging your PR. You can check the number of commits on your PR's **Commits** tab or by running the `git log` command locally.
+If your PR has multiple commits, you must squash them into a single commit before merging your PR.
+You can check the number of commits on your PR's **Commits** tab or by running the `git log`
+command locally.
 
 {{< note >}}
 This topic assumes `vim` as the command line text editor.
@@ -512,79 +545,83 @@ This topic assumes `vim` as the command line text editor.
 
 1. Start an interactive rebase:
 
-   ```bash
-   git rebase -i HEAD~<n_of_commits>
-   ```
+   ```shell
+   git rebase -i HEAD~<n_of_commits>
+   ```
 
-   Squashing commits is a form of rebasing. The `-i` switch tells git you want to rebase interactively. `HEAD~<n_of_commits>` indicates how many commits to look at for the rebase.
+   Squashing commits is a form of rebasing. The `-i` switch tells git you want to rebase
+   interactively. `HEAD~<n_of_commits>` indicates how many commits to look at for the rebase.
 
-   {{< note >}}
-   For more information, see [Interactive Mode](https://git-scm.com/docs/git-rebase#_interactive_mode).
-   {{< /note >}}
+   {{< note >}}
+   For more information, see [Interactive Mode](https://git-scm.com/docs/git-rebase#_interactive_mode).
+   {{< /note >}}
 
-2. Start editing the file.
+1. Start editing the file.
- Change the original text: + Change the original text: - ```bash - pick d875112ca Original commit - pick 4fa167b80 Address feedback 1 - pick 7d54e15ee Address feedback 2 - ``` + ```none + pick d875112ca Original commit + pick 4fa167b80 Address feedback 1 + pick 7d54e15ee Address feedback 2 + ``` - To: + To: - ```bash - pick d875112ca Original commit - squash 4fa167b80 Address feedback 1 - squash 7d54e15ee Address feedback 2 - ``` + ```none + pick d875112ca Original commit + squash 4fa167b80 Address feedback 1 + squash 7d54e15ee Address feedback 2 + ``` - This squashes commits `4fa167b80 Address feedback 1` and `7d54e15ee Address feedback 2` into `d875112ca Original commit`, leaving only `d875112ca Original commit` as a part of the timeline. + This squashes commits `4fa167b80 Address feedback 1` and `7d54e15ee Address feedback 2` into + `d875112ca Original commit`, leaving only `d875112ca Original commit` as a part of the timeline. -3. Save and exit your file. +1. Save and exit your file. -4. Push your squashed commit: +1. Push your squashed commit: - ```bash - git push --force-with-lease origin - ``` + ```shell + git push --force-with-lease origin + ``` ## Contribute to other repos -The [Kubernetes project](https://github.com/kubernetes) contains 50+ repositories. Many of these repositories contain documentation: user-facing help text, error messages, API references or code comments. +The [Kubernetes project](https://github.com/kubernetes) contains 50+ repositories. Many of these +repositories contain documentation: user-facing help text, error messages, API references or code +comments. -If you see text you'd like to improve, use GitHub to search all repositories in the Kubernetes organization. -This can help you figure out where to submit your issue or PR. +If you see text you'd like to improve, use GitHub to search all repositories in the Kubernetes +organization. This can help you figure out where to submit your issue or PR. 
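An organization-scoped search like the one described above can be sketched as a query in GitHub's search bar; the quoted phrase here is a made-up example, not text from any specific repository:

```none
org:kubernetes "text you want to improve"
```

The `org:kubernetes` qualifier limits results to repositories in the Kubernetes organization, which helps you find the repository that owns the text before you file an issue or PR.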
-Each repository has its own processes and procedures. Before you file an -issue or submit a PR, read that repository's `README.md`, `CONTRIBUTING.md`, and -`code-of-conduct.md`, if they exist. +Each repository has its own processes and procedures. Before you file an issue or submit a PR, +read that repository's `README.md`, `CONTRIBUTING.md`, and `code-of-conduct.md`, if they exist. -Most repositories use issue and PR templates. Have a look through some open -issues and PRs to get a feel for that team's processes. Make sure to fill out -the templates with as much detail as possible when you file issues or PRs. +Most repositories use issue and PR templates. Have a look through some open issues and PRs to get +a feel for that team's processes. Make sure to fill out the templates with as much detail as +possible when you file issues or PRs. ## {{% heading "whatsnext" %}} - - Read [Reviewing](/docs/contribute/review/reviewing-prs) to learn more about the review process. diff --git a/content/en/docs/contribute/review/for-approvers.md b/content/en/docs/contribute/review/for-approvers.md index 9f137c23b53..5d0606d60df 100644 --- a/content/en/docs/contribute/review/for-approvers.md +++ b/content/en/docs/contribute/review/for-approvers.md @@ -141,7 +141,7 @@ To add a label, leave a comment in one of the following formats: To remove a label, leave a comment in one of the following formats: - `/remove-` (for example, `/remove-help`) -- `/remove- ` (for example, `/remove-triage needs-information`)` +- `/remove- ` (for example, `/remove-triage needs-information`) In both cases, the label must already exist. If you try to add a label that does not exist, the command is silently ignored. @@ -181,7 +181,7 @@ If the dead link issue is in the API or `kubectl` documentation, assign them `/p ### Blog issues -We expect [Kubernetes Blog](https://kubernetes.io/blog/) entries to become +We expect [Kubernetes Blog](/blog/) entries to become outdated over time. 
Therefore, we only maintain blog entries less than a year old. If an issue is related to a blog entry that is more than one year old, close the issue without fixing. diff --git a/content/en/docs/contribute/review/reviewing-prs.md b/content/en/docs/contribute/review/reviewing-prs.md index 54b209dc0e2..41aaecea171 100644 --- a/content/en/docs/contribute/review/reviewing-prs.md +++ b/content/en/docs/contribute/review/reviewing-prs.md @@ -10,9 +10,8 @@ weight: 10 Anyone can review a documentation pull request. Visit the [pull requests](https://github.com/kubernetes/website/pulls) section in the Kubernetes website repository to see open pull requests. -Reviewing documentation pull requests is a -great way to introduce yourself to the Kubernetes community. -It helps you learn the code base and build trust with other contributors. +Reviewing documentation pull requests is a great way to introduce yourself to the Kubernetes +community. It helps you learn the code base and build trust with other contributors. Before reviewing, it's a good idea to: @@ -28,7 +27,6 @@ Before reviewing, it's a good idea to: Before you start a review: - - Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md) and ensure that you abide by it at all times. - Be polite, considerate, and helpful. @@ -73,6 +71,7 @@ class third,fourth white Figure 1. Review process steps. + 1. Go to [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). You see a list of every open pull request against the Kubernetes website and docs. @@ -103,12 +102,20 @@ Figure 1. Review process steps. 4. Go to the **Files changed** tab to start your review. 1. Click on the `+` symbol beside the line you want to comment on. - 1. Fill in any comments you have about the line and click either **Add single comment** (if you - have only one comment to make) or **Start a review** (if you have multiple comments to make). + 1. 
Fill in any comments you have about the line and click either **Add single comment**
+      (if you have only one comment to make) or **Start a review** (if you have multiple comments to make).

   1. When finished, click **Review changes** at the top of the page. Here, you can add
-      a summary of your review (and leave some positive comments for the contributor!),
-      approve the PR, comment or request changes as needed. New contributors should always
-      choose **Comment**.
+      a summary of your review (and leave some positive comments for the contributor!).
+      Please always use the "Comment" option.
+
+      - Avoid clicking the "Request changes" button when finishing your review.
+        If you want to block a PR from being merged before some further changes are made,
+        you can leave a "/hold" comment.
+        Mention why you are setting a hold, and optionally specify the conditions under
+        which the hold can be removed by you or other reviewers.
+
+      - Avoid clicking the "Approve" button when finishing your review.
+        Leaving a "/approve" comment is recommended most of the time.

## Reviewing checklist

diff --git a/content/en/docs/contribute/style/diagram-guide.md b/content/en/docs/contribute/style/diagram-guide.md
index b31c2e190ea..ac3a4fd529b 100644
--- a/content/en/docs/contribute/style/diagram-guide.md
+++ b/content/en/docs/contribute/style/diagram-guide.md
@@ -438,7 +438,7 @@ Note that the live editor doesn't recognize Hugo shortcodes.
### Example 1 - Pod topology spread constraints

Figure 6 shows the diagram appearing in the
-[Pod topology pread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#node-labels)
+[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/#node-labels)
page.
{{< mermaid >}} diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 73119e0375e..960efc01a79 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -46,10 +46,6 @@ When you refer specifically to interacting with an API object, use [UpperCamelCa When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization). -You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence. - -Don't split an API object name into separate words. For example, use PodTemplateList, not Pod Template List. - The following examples focus on capitalization. For more information about formatting API object names, review the related guidance on [Code Style](#code-style-inline-code). {{< table caption = "Do and Don't - Use Pascal case for API objects" >}} @@ -187,6 +183,36 @@ Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.1 Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`. {{< /table >}} +## Referring to Kubernetes API resources + +This section talks about how we reference API resources in the documentation. + +### Clarification about "resource" + +Kubernetes uses the word "resource" to refer to API resources, such as `pod`, `deployment`, and so on. We also use "resource" to talk about CPU and memory requests and limits. Always refer to API resources as "API resources" to avoid confusion with CPU and memory resources. + +### When to use Kubernetes API terminologies + +The different Kubernetes API terminologies are: + +- Resource type: the name used in the API URL (such as `pods`, `namespaces`) +- Resource: a single instance of a resource type (such as `pod`, `secret`) +- Object: a resource that serves as a "record of intent". 
An object is a desired state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
+
+Always use "resource" or "object" when referring to an API resource in docs. For example, use "a `Secret` object" over just "a `Secret`".
+
+### API resource names
+
+Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as PascalCase, and code formatting.
+
+For inline code in an HTML document, use the `<code>` tag. In a Markdown document, use the backtick (`` ` ``).
+
+Don't split an API object name into separate words. For example, use `PodTemplateList`, not Pod Template List.
+
+For more information about PascalCase and code formatting, please review the related guidance on [Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects) and [Use code style for inline code, commands, and API objects](/docs/contribute/style/style-guide/#code-style-inline-code).
+
+For more information about Kubernetes API terminologies, please review the related guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/#standard-api-terminology).
+
## Code snippet formatting

### Don't include the command prompt

@@ -361,7 +387,7 @@ Beware.

### Katacoda Embedded Live Environment

-This button lets users run Minikube in their browser using the [Katacoda Terminal](https://www.katacoda.com/embed/panel).
+This button lets users run Minikube in their browser using the Katacoda Terminal.
It lowers the barrier of entry by allowing users to use Minikube with one click instead of going through the complete Minikube and Kubectl installation process locally.

diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md
index 0ebd59c4068..2ed1396c8fa 100644
--- a/content/en/docs/reference/_index.md
+++ b/content/en/docs/reference/_index.md
@@ -77,7 +77,7 @@ operator to use or manage a cluster.
* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) * [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/) -* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1/) +* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/) * [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/) * [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index 3e7d71977c3..ea6147fcbad 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -74,6 +74,10 @@ PUT | update PATCH | patch DELETE | delete (for individual resources), deletecollection (for collections) +{{< caution >}} +The `get`, `list` and `watch` verbs can all return the full details of a resource. In terms of the returned data they are equivalent. For example, `list` on `secrets` will still reveal the `data` attributes of any returned resources. +{{< /caution >}} + Kubernetes sometimes checks authorization for additional permissions using specialized verbs. 
For example: * [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md index 6f7154cc8a1..05f9b8369c0 100644 --- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md @@ -16,8 +16,8 @@ In addition to [compiled-in admission plugins](/docs/reference/access-authn-auth admission plugins can be developed as extensions and run as webhooks configured at runtime. This page describes how to build, configure, use, and monitor admission webhooks. - + ## What are admission webhooks? Admission webhooks are HTTP callbacks that receive admission requests and do @@ -37,29 +37,25 @@ should use a validating admission webhook, since objects can be modified after b ## Experimenting with admission webhooks Admission webhooks are essentially part of the cluster control-plane. You should -write and deploy them with great caution. Please read the [user -guides](/docs/reference/access-authn-authz/extensible-admission-controllers/#write-an-admission-webhook-server) for -instructions if you intend to write/deploy production-grade admission webhooks. +write and deploy them with great caution. Please read the +[user guides](/docs/reference/access-authn-authz/extensible-admission-controllers/#write-an-admission-webhook-server) +for instructions if you intend to write/deploy production-grade admission webhooks. In the following, we describe how to quickly experiment with admission webhooks. ### Prerequisites -* Ensure that the Kubernetes cluster is at least as new as v1.16 (to use `admissionregistration.k8s.io/v1`), - or v1.9 (to use `admissionregistration.k8s.io/v1beta1`). - * Ensure that MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers are enabled. 
[Here](/docs/reference/access-authn-authz/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use) is a recommended set of admission controllers to enable in general. -* Ensure that the `admissionregistration.k8s.io/v1` or `admissionregistration.k8s.io/v1beta1` API is enabled. +* Ensure that the `admissionregistration.k8s.io/v1` API is enabled. ### Write an admission webhook server -Please refer to the implementation of the [admission webhook -server](https://github.com/kubernetes/kubernetes/blob/release-1.21/test/images/agnhost/webhook/main.go) +Please refer to the implementation of the [admission webhook server](https://github.com/kubernetes/kubernetes/blob/release-1.21/test/images/agnhost/webhook/main.go) that is validated in a Kubernetes e2e test. The webhook handles the -`AdmissionReview` request sent by the apiservers, and sends back its decision +`AdmissionReview` request sent by the API servers, and sends back its decision as an `AdmissionReview` object in the same version it received. See the [webhook request](#request) section for details on the data sent to webhooks. @@ -69,9 +65,9 @@ See the [webhook response](#response) section for the data expected from webhook The example admission webhook server leaves the `ClientAuth` field [empty](https://github.com/kubernetes/kubernetes/blob/v1.22.0/test/images/agnhost/webhook/config.go#L38-L39), which defaults to `NoClientCert`. This means that the webhook server does not -authenticate the identity of the clients, supposedly apiservers. If you need +authenticate the identity of the clients, supposedly API servers. If you need mutual TLS or other ways to authenticate the clients, see -how to [authenticate apiservers](#authenticate-apiservers). +how to [authenticate API servers](#authenticate-apiservers). ### Deploy the admission webhook service @@ -95,8 +91,6 @@ or The following is an example `ValidatingWebhookConfiguration`, a mutating webhook configuration is similar. 
See the [webhook configuration](#webhook-configuration) section for details about each config field.

-{{< tabs name="ValidatingWebhookConfiguration_example_1" >}}
-{{% tab name="admissionregistration.k8s.io/v1" %}}
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
@@ -114,39 +108,18 @@ webhooks:
    service:
      namespace: "example-namespace"
      name: "example-service"
-    caBundle: "Ci0tLS0tQk...<`caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.>...tLS0K"
-  admissionReviewVersions: ["v1", "v1beta1"]
+    caBundle: <CA_BUNDLE>
+  admissionReviewVersions: ["v1"]
  sideEffects: None
  timeoutSeconds: 5
```
-{{% /tab %}}
-{{% tab name="admissionregistration.k8s.io/v1beta1" %}}
-```yaml
-# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1
-apiVersion: admissionregistration.k8s.io/v1beta1
-kind: ValidatingWebhookConfiguration
-metadata:
-  name: "pod-policy.example.com"
-webhooks:
-- name: "pod-policy.example.com"
-  rules:
-  - apiGroups: [""]
-    apiVersions: ["v1"]
-    operations: ["CREATE"]
-    resources: ["pods"]
-    scope: "Namespaced"
-  clientConfig:
-    service:
-      namespace: "example-namespace"
-      name: "example-service"
-    caBundle: "Ci0tLS0tQk...<`caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate>...tLS0K"
-  admissionReviewVersions: ["v1beta1"]
-  timeoutSeconds: 5
-```
-{{% /tab %}}
-{{< /tabs >}}
-The scope field specifies if only cluster-scoped resources ("Cluster") or namespace-scoped
+{{< note >}}
+You must replace the `<CA_BUNDLE>` in the above example by a valid CA bundle
+which is a PEM-encoded CA bundle for validating the webhook's server certificate.
+{{< /note >}}
+
+The `scope` field specifies if only cluster-scoped resources ("Cluster") or namespace-scoped
resources ("Namespaced") will match this rule. "*" means that there are no scope restrictions.
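One way to produce a single-line value for the `caBundle` field in the configuration above is to base64-encode the CA certificate. This is a sketch that assumes the PEM-encoded CA certificate sits in a local file named `ca.crt` (a hypothetical path):

```shell
# Base64-encode the CA certificate and strip line breaks so the
# result can be pasted into the caBundle field as a single line.
CA_BUNDLE=$(base64 < ca.crt | tr -d '\n')
echo "${CA_BUNDLE}"
```

The `tr -d '\n'` step matters because some `base64` implementations wrap output at 76 columns, while `caBundle` expects one unbroken string.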
{{< note >}}
@@ -155,27 +128,26 @@ When using `clientConfig.service`, the server cert must be valid for
{{< /note >}}

{{< note >}}
-Default timeout for a webhook call is 10 seconds for webhooks registered created using `admissionregistration.k8s.io/v1`,
-and 30 seconds for webhooks created using `admissionregistration.k8s.io/v1beta1`. Starting in kubernetes 1.14 you
-can set the timeout and it is encouraged to use a small timeout for webhooks.
+The default timeout for a webhook call is 10 seconds.
+You can set the `timeout` and it is encouraged to use a short timeout for webhooks.
If the webhook call times out, the request is handled according to the webhook's
failure policy.
{{< /note >}}

-When an apiserver receives a request that matches one of the `rules`, the
-apiserver sends an `admissionReview` request to webhook as specified in the
+When an API server receives a request that matches one of the `rules`, the
+API server sends an `admissionReview` request to the webhook as specified in the
`clientConfig`.

After you create the webhook configuration, the system will take a few seconds
to honor the new configuration.

-### Authenticate apiservers
+### Authenticate API servers {#authenticate-apiservers}

If your admission webhooks require authentication, you can configure the
-apiservers to use basic auth, bearer token, or a cert to authenticate itself to
+API servers to use basic auth, bearer token, or a cert to authenticate itself to
the webhooks. There are three steps to complete the configuration.

-* When starting the apiserver, specify the location of the admission control
+* When starting the API server, specify the location of the admission control
  configuration file via the `--admission-control-config-file` flag.

* In the admission control configuration file, specify where the
@@ -228,55 +200,55 @@ For more information about `AdmissionConfiguration`, see the
[AdmissionConfiguration (v1) reference](/docs/reference/config-api/apiserver-webhookadmission.v1/).
See the [webhook configuration](#webhook-configuration) section for details about each config field. -* In the kubeConfig file, provide the credentials: +In the kubeConfig file, provide the credentials: - ```yaml - apiVersion: v1 - kind: Config - users: - # name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to. - # If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers. - # - # For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service: - # - name: webhook1.ns1.svc - # user: ... - # - # For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+: - # - name: webhook1.ns1.svc:8443 - # user: ... - # and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers: - # - name: webhook1.ns1.svc - # user: ... - # - # For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. Examples: - # A webhook with `url: https://www.example.com`: - # - name: www.example.com - # user: ... - # - # A webhook with `url: https://www.example.com:443`: - # - name: www.example.com:443 - # user: ... - # - # A webhook with `url: https://www.example.com:8443`: - # - name: www.example.com:8443 - # user: ... - # - - name: 'webhook1.ns1.svc' - user: - client-certificate-data: "" - client-key-data: "" - # The `name` supports using * to wildcard-match prefixing segments. - - name: '*.webhook-company.org' - user: - password: "" - username: "" - # '*' is the default match. - - name: '*' - user: - token: "" - ``` +```yaml +apiVersion: v1 +kind: Config +users: +# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to. 
+# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers. +# +# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the service: +# - name: webhook1.ns1.svc +# user: ... +# +# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+: +# - name: webhook1.ns1.svc:8443 +# user: ... +# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers: +# - name: webhook1.ns1.svc +# user: ... +# +# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. Examples: +# A webhook with `url: https://www.example.com`: +# - name: www.example.com +# user: ... +# +# A webhook with `url: https://www.example.com:443`: +# - name: www.example.com:443 +# user: ... +# +# A webhook with `url: https://www.example.com:8443`: +# - name: www.example.com:8443 +# user: ... +# +- name: 'webhook1.ns1.svc' + user: + client-certificate-data: "" + client-key-data: "" +# The `name` supports using * to wildcard-match prefixing segments. +- name: '*.webhook-company.org' + user: + password: "" + username: "" +# '*' is the default match. +- name: '*' + user: + token: "" +``` -Of course you need to set up the webhook server to handle these authentications. +Of course you need to set up the webhook server to handle these authentication requests. ## Webhook request and response @@ -289,39 +261,17 @@ serialized to JSON as the body. Webhooks can specify what versions of `AdmissionReview` objects they accept with the `admissionReviewVersions` field in their configuration: -{{< tabs name="ValidatingWebhookConfiguration_admissionReviewVersions" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... 
webhooks: - name: my-webhook.example.com admissionReviewVersions: ["v1", "v1beta1"] - ... ``` -`admissionReviewVersions` is a required field when creating -`admissionregistration.k8s.io/v1` webhook configurations. +`admissionReviewVersions` is a required field when creating webhook configurations. Webhooks are required to support at least one `AdmissionReview` version understood by the current and previous API server. -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - admissionReviewVersions: ["v1beta1"] - ... -``` - -If no `admissionReviewVersions` are specified, the default when creating -`admissionregistration.k8s.io/v1beta1` webhook configurations is `v1beta1`. -{{% /tab %}} -{{< /tabs >}} API servers send the first `AdmissionReview` version in the `admissionReviewVersions` list they support. If none of the versions in the list are supported by the API server, the configuration will not be allowed to be created. 
@@ -331,154 +281,100 @@ versions the API server knows how to send, attempts to call to the webhook will This example shows the data contained in an `AdmissionReview` object for a request to update the `scale` subresource of an `apps/v1` `Deployment`: - -{{< tabs name="AdmissionReview_request" >}} -{{% tab name="admission.k8s.io/v1" %}} ```yaml -{ - "apiVersion": "admission.k8s.io/v1", - "kind": "AdmissionReview", - "request": { - # Random uid uniquely identifying this admission call - "uid": "705ab4f5-6393-11e8-b7cc-42010a800002", +apiVersion: admission.k8s.io/v1 +kind: AdmissionReview +request: + # Random uid uniquely identifying this admission call + uid: 705ab4f5-6393-11e8-b7cc-42010a800002 - # Fully-qualified group/version/kind of the incoming object - "kind": {"group":"autoscaling","version":"v1","kind":"Scale"}, - # Fully-qualified group/version/kind of the resource being modified - "resource": {"group":"apps","version":"v1","resource":"deployments"}, - # subresource, if the request is to a subresource - "subResource": "scale", + # Fully-qualified group/version/kind of the incoming object + kind: + group: autoscaling + version: v1 + kind: Scale - # Fully-qualified group/version/kind of the incoming object in the original request to the API server. - # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the - # original request to the API server was converted to a version the webhook registered for. - "requestKind": {"group":"autoscaling","version":"v1","kind":"Scale"}, - # Fully-qualified group/version/kind of the resource being modified in the original request to the API server. - # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the - # original request to the API server was converted to a version the webhook registered for. 
- "requestResource": {"group":"apps","version":"v1","resource":"deployments"}, - # subresource, if the request is to a subresource - # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the - # original request to the API server was converted to a version the webhook registered for. - "requestSubResource": "scale", + # Fully-qualified group/version/kind of the resource being modified + resource: + group: apps + version: v1 + resource: deployments - # Name of the resource being modified - "name": "my-deployment", - # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object) - "namespace": "my-namespace", + # subresource, if the request is to a subresource + subResource: scale - # operation can be CREATE, UPDATE, DELETE, or CONNECT - "operation": "UPDATE", + # Fully-qualified group/version/kind of the incoming object in the original request to the API server. + # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the + # original request to the API server was converted to a version the webhook registered for. + requestKind: + group: autoscaling + version: v1 + kind: Scale - "userInfo": { - # Username of the authenticated user making the request to the API server - "username": "admin", - # UID of the authenticated user making the request to the API server - "uid": "014fbff9a07c", - # Group memberships of the authenticated user making the request to the API server - "groups": ["system:authenticated","my-admin-group"], - # Arbitrary extra info associated with the user making the request to the API server. - # This is populated by the API server authentication layer and should be included - # if any SubjectAccessReview checks are performed by the webhook. - "extra": { - "some-key":["some-value1", "some-value2"] - } - }, + # Fully-qualified group/version/kind of the resource being modified in the original request to the API server. 
+ # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the + # original request to the API server was converted to a version the webhook registered for. + requestResource: + group: apps + version: v1 + resource: deployments - # object is the new object being admitted. - # It is null for DELETE operations. - "object": {"apiVersion":"autoscaling/v1","kind":"Scale",...}, - # oldObject is the existing object. - # It is null for CREATE and CONNECT operations. - "oldObject": {"apiVersion":"autoscaling/v1","kind":"Scale",...}, - # options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions. - # It is null for CONNECT operations. - "options": {"apiVersion":"meta.k8s.io/v1","kind":"UpdateOptions",...}, + # subresource, if the request is to a subresource + # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the + # original request to the API server was converted to a version the webhook registered for. + requestSubResource: scale - # dryRun indicates the API request is running in dry run mode and will not be persisted. - # Webhooks with side effects should avoid actuating those side effects when dryRun is true. - # See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details. 
- "dryRun": false - } -} + # Name of the resource being modified + name: my-deployment + + # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object) + namespace: my-namespace + + # operation can be CREATE, UPDATE, DELETE, or CONNECT + operation: UPDATE + + userInfo: + # Username of the authenticated user making the request to the API server + username: admin + + # UID of the authenticated user making the request to the API server + uid: 014fbff9a07c + + # Group memberships of the authenticated user making the request to the API server + groups: + - system:authenticated + - my-admin-group + # Arbitrary extra info associated with the user making the request to the API server. + # This is populated by the API server authentication layer and should be included + # if any SubjectAccessReview checks are performed by the webhook. + extra: + some-key: + - some-value1 + - some-value2 + + # object is the new object being admitted. + # It is null for DELETE operations. + object: + apiVersion: autoscaling/v1 + kind: Scale + + # oldObject is the existing object. + # It is null for CREATE and CONNECT operations. + oldObject: + apiVersion: autoscaling/v1 + kind: Scale + + # options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions. + # It is null for CONNECT operations. + options: + apiVersion: meta.k8s.io/v1 + kind: UpdateOptions + + # dryRun indicates the API request is running in dry run mode and will not be persisted. + # Webhooks with side effects should avoid actuating those side effects when dryRun is true. + # See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details. 
+  dryRun: false
```
-{{% /tab %}}
-{{% tab name="admission.k8s.io/v1beta1" %}}
-```yaml
-{
-  # Deprecated in v1.16 in favor of admission.k8s.io/v1
-  "apiVersion": "admission.k8s.io/v1beta1",
-  "kind": "AdmissionReview",
-  "request": {
-    # Random uid uniquely identifying this admission call
-    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
-
-    # Fully-qualified group/version/kind of the incoming object
-    "kind": {"group":"autoscaling","version":"v1","kind":"Scale"},
-    # Fully-qualified group/version/kind of the resource being modified
-    "resource": {"group":"apps","version":"v1","resource":"deployments"},
-    # subresource, if the request is to a subresource
-    "subResource": "scale",
-
-    # Fully-qualified group/version/kind of the incoming object in the original request to the API server.
-    # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the
-    # original request to the API server was converted to a version the webhook registered for.
-    # Only sent by v1.15+ API servers.
-    "requestKind": {"group":"autoscaling","version":"v1","kind":"Scale"},
-    # Fully-qualified group/version/kind of the resource being modified in the original request to the API server.
-    # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the
-    # original request to the API server was converted to a version the webhook registered for.
-    # Only sent by v1.15+ API servers.
-    "requestResource": {"group":"apps","version":"v1","resource":"deployments"},
-    # subresource, if the request is to a subresource
-    # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the
-    # original request to the API server was converted to a version the webhook registered for.
-    # Only sent by v1.15+ API servers.
- "requestSubResource": "scale", - - # Name of the resource being modified - "name": "my-deployment", - # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object) - "namespace": "my-namespace", - - # operation can be CREATE, UPDATE, DELETE, or CONNECT - "operation": "UPDATE", - - "userInfo": { - # Username of the authenticated user making the request to the API server - "username": "admin", - # UID of the authenticated user making the request to the API server - "uid": "014fbff9a07c", - # Group memberships of the authenticated user making the request to the API server - "groups": ["system:authenticated","my-admin-group"], - # Arbitrary extra info associated with the user making the request to the API server. - # This is populated by the API server authentication layer and should be included - # if any SubjectAccessReview checks are performed by the webhook. - "extra": { - "some-key":["some-value1", "some-value2"] - } - }, - - # object is the new object being admitted. - # It is null for DELETE operations. - "object": {"apiVersion":"autoscaling/v1","kind":"Scale",...}, - # oldObject is the existing object. - # It is null for CREATE and CONNECT operations (and for DELETE operations in API servers prior to v1.15.0) - "oldObject": {"apiVersion":"autoscaling/v1","kind":"Scale",...}, - # options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions. - # It is null for CONNECT operations. - # Only sent by v1.15+ API servers. - "options": {"apiVersion":"meta.k8s.io/v1","kind":"UpdateOptions",...}, - - # dryRun indicates the API request is running in dry run mode and will not be persisted. - # Webhooks with side effects should avoid actuating those side effects when dryRun is true. - # See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details. 
- "dryRun": false - } -} -``` -{{% /tab %}} -{{< /tabs >}} ### Response @@ -492,8 +388,7 @@ At a minimum, the `response` stanza must contain the following fields: * `allowed`, either set to `true` or `false` Example of a minimal response from a webhook to allow a request: -{{< tabs name="AdmissionReview_response_allow" >}} -{{% tab name="admission.k8s.io/v1" %}} + ```json { "apiVersion": "admission.k8s.io/v1", @@ -504,55 +399,26 @@ Example of a minimal response from a webhook to allow a request: } } ``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1beta1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": true - } -} -``` -{{% /tab %}} -{{< /tabs >}} Example of a minimal response from a webhook to forbid a request: -{{< tabs name="AdmissionReview_response_forbid_minimal" >}} -{{% tab name="admission.k8s.io/v1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": false - } -} -``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1beta1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": false - } -} -``` -{{% /tab %}} -{{< /tabs >}} -When rejecting a request, the webhook can customize the http code and message returned to the user using the `status` field. -The specified status object is returned to the user. -See the [API documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#status-v1-meta) for details about the status type. +```json +{ + "apiVersion": "admission.k8s.io/v1", + "kind": "AdmissionReview", + "response": { + "uid": "", + "allowed": false + } +} +``` + +When rejecting a request, the webhook can customize the http code and message returned to the user +using the `status` field. The specified status object is returned to the user. 
+See the [API documentation](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#status-v1-meta) +for details about the `status` type. Example of a response to forbid a request, customizing the HTTP status code and message presented to the user: -{{< tabs name="AdmissionReview_response_forbid_details" >}} -{{% tab name="admission.k8s.io/v1" %}} + ```json { "apiVersion": "admission.k8s.io/v1", @@ -567,24 +433,6 @@ Example of a response to forbid a request, customizing the HTTP status code and } } ``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1beta1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": false, - "status": { - "code": 403, - "message": "You cannot do this because it is Tuesday and your name starts with A" - } - } -} -``` -{{% /tab %}} -{{< /tabs >}} When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the `patch` and `patchType` fields in the response. @@ -592,13 +440,13 @@ The only currently supported `patchType` is `JSONPatch`. See [JSON patch](https://jsonpatch.com/) documentation for more details. For `patchType: JSONPatch`, the `patch` field contains a base64-encoded array of JSON patch operations. 
-As an example, a single patch operation that would set `spec.replicas` would be `[{"op": "add", "path": "/spec/replicas", "value": 3}]`
+As an example, a single patch operation that would set `spec.replicas` would be
+`[{"op": "add", "path": "/spec/replicas", "value": 3}]`

Base64-encoded, this would be `W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=`

So a webhook response that applies that patch would be:
-{{< tabs name="AdmissionReview_response_modify" >}}
-{{% tab name="admission.k8s.io/v1" %}}
+
```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "",
    "allowed": true,
@@ -611,27 +459,12 @@ So a webhook response to add that label would be:
  }
}
```
-{{% /tab %}}
-{{% tab name="admission.k8s.io/v1beta1" %}}
-```json
-{
-  "apiVersion": "admission.k8s.io/v1beta1",
-  "kind": "AdmissionReview",
-  "response": {
-    "uid": "",
-    "allowed": true,
-    "patchType": "JSONPatch",
-    "patch": "W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0="
-  }
-}
-```
-{{% /tab %}}
-{{< /tabs >}}
-Starting in v1.19, admission webhooks can optionally return warning messages that are returned to the requesting client
+Admission webhooks can optionally return warning messages that are returned to the requesting client
in HTTP `Warning` headers with a warning code of 299.
Warnings can be sent with allowed or rejected admission responses.

If you're implementing a webhook that returns a warning:
+
* Don't include a "Warning:" prefix in the message
* Use warning messages to describe problems the client making the API request should correct or be aware of
* Limit warnings to 120 characters if possible
@@ -641,8 +474,6 @@ Individual warning messages over 256 characters may be truncated by the API serv
If more than 4096 characters of warning messages are added (from all sources), additional warning messages are ignored.
{{< /caution >}} -{{< tabs name="AdmissionReview_response_warning" >}} -{{% tab name="admission.k8s.io/v1" %}} ```json { "apiVersion": "admission.k8s.io/v1", @@ -657,24 +488,6 @@ If more than 4096 characters of warning messages are added (from all sources), a } } ``` -{{% /tab %}} -{{% tab name="admission.k8s.io/v1beta1" %}} -```json -{ - "apiVersion": "admission.k8s.io/v1beta1", - "kind": "AdmissionReview", - "response": { - "uid": "", - "allowed": true, - "warnings": [ - "duplicate envvar entries specified with name MY_ENV", - "memory request less than 4MB specified for container mycontainer, which will not start successfully" - ] - } -} -``` -{{% /tab %}} -{{< /tabs >}} ## Webhook configuration @@ -683,9 +496,9 @@ The name of a `MutatingWebhookConfiguration` or a `ValidatingWebhookConfiguratio [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). Each configuration can contain one or more webhooks. -If multiple webhooks are specified in a single configuration, each should be given a unique name. -This is required in `admissionregistration.k8s.io/v1`, but strongly recommended when using `admissionregistration.k8s.io/v1beta1`, -in order to make resulting audit logs and metrics easier to match up to active configurations. +If multiple webhooks are specified in a single configuration, each must be given a unique name. +This is required in order to make resulting audit logs and metrics easier to match up to active +configurations. Each webhook defines the following things. @@ -694,27 +507,31 @@ Each webhook defines the following things. Each webhook must specify a list of rules used to determine if a request to the API server should be sent to the webhook. Each rule specifies one or more operations, apiGroups, apiVersions, and resources, and a resource scope: -* `operations` lists one or more operations to match. Can be `"CREATE"`, `"UPDATE"`, `"DELETE"`, `"CONNECT"`, or `"*"` to match all. 
+* `operations` lists one or more operations to match. Can be `"CREATE"`, `"UPDATE"`, `"DELETE"`, `"CONNECT"`, + or `"*"` to match all. * `apiGroups` lists one or more API groups to match. `""` is the core API group. `"*"` matches all API groups. * `apiVersions` lists one or more API versions to match. `"*"` matches all API versions. * `resources` lists one or more resources to match. - * `"*"` matches all resources, but not subresources. - * `"*/*"` matches all resources and subresources. - * `"pods/*"` matches all subresources of pods. - * `"*/status"` matches all status subresources. -* `scope` specifies a scope to match. Valid values are `"Cluster"`, `"Namespaced"`, and `"*"`. Subresources match the scope of their parent resource. Supported in v1.14+. Default is `"*"`, matching pre-1.14 behavior. - * `"Cluster"` means that only cluster-scoped resources will match this rule (Namespace API objects are cluster-scoped). - * `"Namespaced"` means that only namespaced resources will match this rule. - * `"*"` means that there are no scope restrictions. -If an incoming request matches one of the specified operations, groups, versions, resources, and scope for any of a webhook's rules, the request is sent to the webhook. + * `"*"` matches all resources, but not subresources. + * `"*/*"` matches all resources and subresources. + * `"pods/*"` matches all subresources of pods. + * `"*/status"` matches all status subresources. + +* `scope` specifies a scope to match. Valid values are `"Cluster"`, `"Namespaced"`, and `"*"`. + Subresources match the scope of their parent resource. Default is `"*"`. + + * `"Cluster"` means that only cluster-scoped resources will match this rule (Namespace API objects are cluster-scoped). + * `"Namespaced"` means that only namespaced resources will match this rule. + * `"*"` means that there are no scope restrictions. 
+ +If an incoming request matches one of the specified `operations`, `groups`, `versions`, +`resources`, and `scope` for any of a webhook's `rules`, the request is sent to the webhook. Here are other examples of rules that could be used to specify which resources should be intercepted. Match `CREATE` or `UPDATE` requests to `apps/v1` and `apps/v1beta1` `deployments` and `replicasets`: -{{< tabs name="ValidatingWebhookConfiguration_rules_1" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration @@ -729,123 +546,56 @@ webhooks: scope: "Namespaced" ... ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - rules: - - operations: ["CREATE", "UPDATE"] - apiGroups: ["apps"] - apiVersions: ["v1", "v1beta1"] - resources: ["deployments", "replicasets"] - scope: "Namespaced" - ... -``` -{{% /tab %}} -{{< /tabs >}} Match create requests for all resources (but not subresources) in all API groups and versions: -{{< tabs name="ValidatingWebhookConfiguration_rules_2" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "*" - ... + - name: my-webhook.example.com + rules: + - operations: ["CREATE"] + apiGroups: ["*"] + apiVersions: ["*"] + resources: ["*"] + scope: "*" ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... 
-webhooks: -- name: my-webhook.example.com - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "*" - ... -``` -{{% /tab %}} -{{< /tabs >}} Match update requests for all `status` subresources in all API groups and versions: -{{< tabs name="ValidatingWebhookConfiguration_rules_3" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - rules: - - operations: ["UPDATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*/status"] - scope: "*" - ... + - name: my-webhook.example.com + rules: + - operations: ["UPDATE"] + apiGroups: ["*"] + apiVersions: ["*"] + resources: ["*/status"] + scope: "*" ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - rules: - - operations: ["UPDATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*/status"] - scope: "*" - ... -``` -{{% /tab %}} -{{< /tabs >}} ### Matching requests: objectSelector -In v1.15+, webhooks may optionally limit which requests are intercepted based on the labels of the +Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an `objectSelector`. If specified, the objectSelector is evaluated against both the object and oldObject that would be sent to the webhook, and is considered to match if either object matches the selector. 
-A null object (oldObject in the case of create, or newObject in the case of delete), +A null object (`oldObject` in the case of create, or `newObject` in the case of delete), or an object that cannot have labels (like a `DeploymentRollback` or a `PodProxyOptions` object) is not considered to match. -Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. +Use the object selector only if the webhook is opt-in, because end users may skip +the admission webhook by setting the labels. This example shows a mutating webhook that would match a `CREATE` of any resource with the label `foo: bar`: -{{< tabs name="objectSelector_example" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration -... webhooks: - name: my-webhook.example.com objectSelector: @@ -857,32 +607,10 @@ webhooks: apiVersions: ["*"] resources: ["*"] scope: "*" - ... ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - objectSelector: - matchLabels: - foo: bar - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "*" - ... -``` -{{% /tab %}} -{{< /tabs >}} -See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels for more examples of label selectors. +See [labels concept](/docs/concepts/overview/working-with-objects/labels) +for more examples of label selectors. 
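The selector semantics above (match if either the new or the old object satisfies the selector; a null object never matches) can be sketched in Python. This is an illustration covering only the `matchLabels` form, not the API server's actual implementation:

```python
def matches_object_selector(match_labels, obj, old_obj):
    """Illustrative objectSelector evaluation (matchLabels form only).

    Returns True if either the new object or the old object carries all
    of the selector's labels. A null object (e.g. oldObject on CREATE)
    never matches; an empty selector matches any labeled object.
    """
    def matches(candidate):
        if candidate is None:
            return False
        labels = candidate.get("metadata", {}).get("labels", {})
        return all(labels.get(k) == v for k, v in match_labels.items())

    return matches(obj) or matches(old_obj)

# A CREATE of an object labeled foo=bar (oldObject is null) matches:
print(matches_object_selector({"foo": "bar"},
                              {"metadata": {"labels": {"foo": "bar"}}},
                              None))  # True
```

The real selector type also supports `matchExpressions`; a webhook configuration relies on the API server's evaluation rather than reimplementing it.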
### Matching requests: namespaceSelector @@ -897,128 +625,75 @@ If the object is a cluster scoped resource other than a Namespace, `namespaceSel This example shows a mutating webhook that matches a `CREATE` of any namespaced resource inside a namespace that does not have a "runlevel" label of "0" or "1": -{{< tabs name="MutatingWebhookConfiguration_namespaceSelector_1" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - namespaceSelector: - matchExpressions: - - key: runlevel - operator: NotIn - values: ["0","1"] - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "Namespaced" - ... + - name: my-webhook.example.com + namespaceSelector: + matchExpressions: + - key: runlevel + operator: NotIn + values: ["0","1"] + rules: + - operations: ["CREATE"] + apiGroups: ["*"] + apiVersions: ["*"] + resources: ["*"] + scope: "Namespaced" ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - namespaceSelector: - matchExpressions: - - key: runlevel - operator: NotIn - values: ["0","1"] - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "Namespaced" - ... 
-``` -{{% /tab %}} -{{< /tabs >}} -This example shows a validating webhook that matches a `CREATE` of any namespaced resource inside a namespace -that is associated with the "environment" of "prod" or "staging": +This example shows a validating webhook that matches a `CREATE` of any namespaced resource inside +a namespace that is associated with the "environment" of "prod" or "staging": -{{< tabs name="ValidatingWebhookConfiguration_namespaceSelector_2" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - namespaceSelector: - matchExpressions: - - key: environment - operator: In - values: ["prod","staging"] - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "Namespaced" - ... + - name: my-webhook.example.com + namespaceSelector: + matchExpressions: + - key: environment + operator: In + values: ["prod","staging"] + rules: + - operations: ["CREATE"] + apiGroups: ["*"] + apiVersions: ["*"] + resources: ["*"] + scope: "Namespaced" ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - namespaceSelector: - matchExpressions: - - key: environment - operator: In - values: ["prod","staging"] - rules: - - operations: ["CREATE"] - apiGroups: ["*"] - apiVersions: ["*"] - resources: ["*"] - scope: "Namespaced" - ... -``` -{{% /tab %}} -{{< /tabs >}} -See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels for more examples of label selectors. +See [labels concept](/docs/concepts/overview/working-with-objects/labels) +for more examples of label selectors. 
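The `namespaceSelector` examples above use `matchExpressions` with the `In` and `NotIn` operators. How such expressions evaluate against a namespace's labels can be sketched as follows (an illustration only; the real implementation also supports `Exists`/`DoesNotExist` and combines this with `matchLabels`):

```python
def namespace_selector_matches(match_expressions, namespace_labels):
    """Illustrative matchExpressions evaluation (In / NotIn only).

    Every expression must hold. A namespace lacking the key fails In
    but passes NotIn, which is why the runlevel example above still
    matches namespaces that carry no "runlevel" label at all.
    """
    for expr in match_expressions:
        value = namespace_labels.get(expr["key"])
        if expr["operator"] == "In" and value not in expr["values"]:
            return False
        if expr["operator"] == "NotIn" and value in expr["values"]:
            return False
    return True

runlevel_not_0_1 = [{"key": "runlevel", "operator": "NotIn", "values": ["0", "1"]}]
print(namespace_selector_matches(runlevel_not_0_1, {"runlevel": "0"}))  # False
print(namespace_selector_matches(runlevel_not_0_1, {"team": "infra"}))  # True
```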
### Matching requests: matchPolicy

API servers can make objects available via multiple API groups or versions.
-For example, the Kubernetes API server may allow creating and modifying `Deployment` objects
-via `extensions/v1beta1`, `apps/v1beta1`, `apps/v1beta2`, and `apps/v1` APIs.
-For example, if a webhook only specified a rule for some API groups/versions (like `apiGroups:["apps"], apiVersions:["v1","v1beta1"]`),
+For example, if a webhook only specified a rule for some API groups/versions
+(like `apiGroups:["apps"], apiVersions:["v1","v1beta1"]`),
and a request was made to modify the resource via another API group/version (like `extensions/v1beta1`),
the request would not be sent to the webhook.

-In v1.15+, `matchPolicy` lets a webhook define how its `rules` are used to match incoming requests.
+The `matchPolicy` lets a webhook define how its `rules` are used to match incoming requests.
Allowed values are `Exact` or `Equivalent`.

* `Exact` means a request should be intercepted only if it exactly matches a specified rule.
-* `Equivalent` means a request should be intercepted if modifies a resource listed in `rules`, even via another API group or version.
+* `Equivalent` means a request should be intercepted if it modifies a resource listed in `rules`,
+  even via another API group or version.
In the example given above, the webhook that only registered for `apps/v1` could use `matchPolicy`: * `matchPolicy: Exact` would mean the `extensions/v1beta1` request would not be sent to the webhook -* `matchPolicy: Equivalent` means the `extensions/v1beta1` request would be sent to the webhook (with the objects converted to a version the webhook had specified: `apps/v1`) +* `matchPolicy: Equivalent` means the `extensions/v1beta1` request would be sent to the webhook + (with the objects converted to a version the webhook had specified: `apps/v1`) Specifying `Equivalent` is recommended, and ensures that webhooks continue to intercept the resources they expect when upgrades enable new versions of the resource in the API server. -When a resource stops being served by the API server, it is no longer considered equivalent to other versions of that resource that are still served. +When a resource stops being served by the API server, it is no longer considered equivalent to +other versions of that resource that are still served. For example, `extensions/v1beta1` deployments were first deprecated and then removed (in Kubernetes v1.16). Since that removal, a webhook with a `apiGroups:["extensions"], apiVersions:["v1beta1"], resources:["deployments"]` rule @@ -1028,12 +703,9 @@ for stable versions of resources. This example shows a validating webhook that intercepts modifications to deployments (no matter the API group or version), and is always sent an `apps/v1` `Deployment` object: -{{< tabs name="ValidatingWebhookConfiguration_matchPolicy" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... webhooks: - name: my-webhook.example.com matchPolicy: Equivalent @@ -1043,32 +715,9 @@ webhooks: apiVersions: ["v1"] resources: ["deployments"] scope: "Namespaced" - ... ``` -Admission webhooks created using `admissionregistration.k8s.io/v1` default to `Equivalent`. 
-{{% /tab %}}
-{{% tab name="admissionregistration.k8s.io/v1beta1" %}}
-```yaml
-# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1
-apiVersion: admissionregistration.k8s.io/v1beta1
-kind: ValidatingWebhookConfiguration
-...
-webhooks:
-- name: my-webhook.example.com
-  matchPolicy: Equivalent
-  rules:
-  - operations: ["CREATE","UPDATE","DELETE"]
-    apiGroups: ["apps"]
-    apiVersions: ["v1"]
-    resources: ["deployments"]
-    scope: "Namespaced"
-  ...
-```
-
-Admission webhooks created using `admissionregistration.k8s.io/v1beta1` default to `Exact`.
-{{% /tab %}}
-{{< /tabs >}}
+The `matchPolicy` for an admission webhook defaults to `Equivalent`.

### Contacting the webhook
@@ -1086,51 +735,32 @@ and can optionally include a custom CA bundle to use to verify the TLS connectio
The `host` should not refer to a service running in the cluster; use
a service reference by specifying the `service` field instead.

-The host might be resolved via external DNS in some apiservers
+The host might be resolved via external DNS in some API servers
(e.g., `kube-apiserver` cannot resolve in-cluster DNS as that
would be a layering violation). `host` may also be an IP address.

Please note that using `localhost` or `127.0.0.1` as a `host` is
risky unless you take great care to run this webhook on all hosts
-which run an apiserver which might need to make calls to this
+which run an API server which might need to make calls to this
webhook. Such installations are likely to be non-portable or not
readily run in a new cluster.

The scheme must be "https"; the URL must begin with "https://".

-Attempting to use a user or basic auth (for example "user:password@") is not allowed.
-Fragments ("#...") and query parameters ("?...") are also not allowed.
+Attempting to use a user or basic auth (for example `user:password@`) is not allowed.
+Fragments (`#...`) and query parameters (`?...`) are also not allowed.
Here is an example of a mutating webhook configured to call a URL
(and expects the TLS certificate to be verified using system trust roots, so does not specify a caBundle):

-{{< tabs name="MutatingWebhookConfiguration_url" >}}
-{{% tab name="admissionregistration.k8s.io/v1" %}}
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
-...
webhooks:
- name: my-webhook.example.com
  clientConfig:
    url: "https://my-webhook.example.com:9443/my-webhook-path"
-  ...
```
-{{% /tab %}}
-{{% tab name="admissionregistration.k8s.io/v1beta1" %}}
-```yaml
-# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1
-apiVersion: admissionregistration.k8s.io/v1beta1
-kind: MutatingWebhookConfiguration
-...
-webhooks:
-- name: my-webhook.example.com
-  clientConfig:
-    url: "https://my-webhook.example.com:9443/my-webhook-path"
-  ...
-```
-{{% /tab %}}
-{{< /tabs >}}

#### Service reference
@@ -1143,43 +773,24 @@ Here is an example of a mutating webhook configured to call a service on port "1
at the subpath "/my-path", and to verify the TLS connection against the ServerName
`my-service-name.my-service-namespace.svc` using a custom CA bundle:

-{{< tabs name="MutatingWebhookConfiguration_service" >}}
-{{% tab name="admissionregistration.k8s.io/v1" %}}
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
-...
webhooks:
- name: my-webhook.example.com
  clientConfig:
-    caBundle: "Ci0tLS0tQk......tLS0K"
+    caBundle: <CA_BUNDLE>
    service:
      namespace: my-service-namespace
      name: my-service-name
      path: /my-path
      port: 1234
-  ...
```
-{{% /tab %}}
-{{% tab name="admissionregistration.k8s.io/v1beta1" %}}
-```yaml
-# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1
-apiVersion: admissionregistration.k8s.io/v1beta1
-kind: MutatingWebhookConfiguration
-...
-webhooks:
-- name: my-webhook.example.com
-  clientConfig:
-    caBundle: "Ci0tLS0tQk...<`caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate>...tLS0K"
-    service:
-      namespace: my-service-namespace
-      name: my-service-name
-      path: /my-path
-      port: 1234
-  ...
-```
-{{% /tab %}}
-{{< /tabs >}}
+
+{{< note >}}
+You must replace the `<CA_BUNDLE>` in the above example with a valid,
+PEM-encoded CA bundle that is used to validate the webhook's server certificate.
+{{< /note >}}

### Side effects
@@ -1199,46 +810,20 @@ or the dry-run request will not be sent to the webhook and the API request will
Webhooks indicate whether they have side effects using the `sideEffects` field in the webhook configuration:

-* `Unknown`: no information is known about the side effects of calling the webhook.
-If a request with `dryRun: true` would trigger a call to this webhook, the request will instead fail, and the webhook will not be called.
* `None`: calling the webhook will have no side effects.
-* `Some`: calling the webhook will possibly have side effects.
-If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail, and the webhook will not be called.
-* `NoneOnDryRun`: calling the webhook will possibly have side effects,
-but if a request with `dryRun: true` is sent to the webhook, the webhook will suppress the side effects (the webhook is `dryRun`-aware).
-
-Allowed values:
-
-* In `admissionregistration.k8s.io/v1beta1`, `sideEffects` may be set to `Unknown`, `None`, `Some`, or `NoneOnDryRun`, and defaults to `Unknown`.
-* In `admissionregistration.k8s.io/v1`, `sideEffects` must be set to `None` or `NoneOnDryRun`.
+* `NoneOnDryRun`: calling the webhook will possibly have side effects, but if a request with
+  `dryRun: true` is sent to the webhook, the webhook will suppress the side effects (the webhook
+  is `dryRun`-aware).
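A webhook that declares `sideEffects: NoneOnDryRun` must itself check the request's `dryRun` field before actuating anything. A minimal Python sketch of such a handler; the `record_audit_entry` side effect is hypothetical:

```python
def record_audit_entry(request):
    # Hypothetical side effect, e.g. writing to an external audit store.
    print("audited", request["uid"])

def handle_admission_review(review):
    """Sketch of a dryRun-aware handler (sideEffects: NoneOnDryRun).
    The side effect is suppressed whenever the request carries dryRun: true."""
    request = review["request"]
    if not request.get("dryRun", False):
        record_audit_entry(request)
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": request["uid"], "allowed": True},
    }

# Dry-run request: allowed, but no audit entry is written.
response = handle_admission_review({"request": {"uid": "705ab4f5", "dryRun": True}})
```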
Here is an example of a validating webhook indicating it has no side effects on `dryRun: true` requests: -{{< tabs name="ValidatingWebhookConfiguration_sideEffects" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - sideEffects: NoneOnDryRun - ... + - name: my-webhook.example.com + sideEffects: NoneOnDryRun ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - sideEffects: NoneOnDryRun - ... -``` -{{% /tab %}} -{{< /tabs >}} ### Timeouts @@ -1253,35 +838,15 @@ The timeout value must be between 1 and 30 seconds. Here is an example of a validating webhook with a custom timeout of 2 seconds: -{{< tabs name="ValidatingWebhookConfiguration_timeoutSeconds" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration -... webhooks: -- name: my-webhook.example.com - timeoutSeconds: 2 - ... + - name: my-webhook.example.com + timeoutSeconds: 2 ``` -Admission webhooks created using `admissionregistration.k8s.io/v1` default timeouts to 10 seconds. -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: ValidatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - timeoutSeconds: 2 - ... -``` - -Admission webhooks created using `admissionregistration.k8s.io/v1` default timeouts to 30 seconds. -{{% /tab %}} -{{< /tabs >}} +The timeout for an admission webhook defaults to 10 seconds. 
### Reinvocation policy @@ -1290,50 +855,35 @@ A single ordering of mutating admissions plugins (including webhooks) does not w to the object (like adding a `container` to a `pod`), and other mutating plugins which have already run may have opinions on those new structures (like setting an `imagePullPolicy` on all containers). -In v1.15+, to allow mutating admission plugins to observe changes made by other plugins, +To allow mutating admission plugins to observe changes made by other plugins, built-in mutating admission plugins are re-run if a mutating webhook modifies an object, and mutating webhooks can specify a `reinvocationPolicy` to control whether they are reinvoked as well. `reinvocationPolicy` may be set to `Never` or `IfNeeded`. It defaults to `Never`. -* `Never`: the webhook must not be called more than once in a single admission evaluation +* `Never`: the webhook must not be called more than once in a single admission evaluation. * `IfNeeded`: the webhook may be called again as part of the admission evaluation if the object -being admitted is modified by other admission plugins after the initial webhook call. + being admitted is modified by other admission plugins after the initial webhook call. The important elements to note are: * The number of additional invocations is not guaranteed to be exactly one. -* If additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. +* If additional invocations result in further modifications to the object, webhooks are not + guaranteed to be invoked again. * Webhooks that use this option may be reordered to minimize the number of additional invocations. -* To validate an object after all mutations are guaranteed complete, use a validating admission webhook instead (recommended for webhooks with side-effects). 
+* To validate an object after all mutations are guaranteed complete, use a validating admission + webhook instead (recommended for webhooks with side-effects). -Here is an example of a mutating webhook opting into being re-invoked if later admission plugins modify the object: +Here is an example of a mutating webhook opting into being re-invoked if later admission plugins +modify the object: -{{< tabs name="MutatingWebhookConfiguration_reinvocationPolicy" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration -... webhooks: - name: my-webhook.example.com reinvocationPolicy: IfNeeded - ... ``` -{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - reinvocationPolicy: IfNeeded - ... -``` -{{% /tab %}} -{{< /tabs >}} Mutating webhooks must be [idempotent](#idempotence), able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission webhooks, since any change they can make @@ -1349,35 +899,15 @@ are handled. Allowed values are `Ignore` or `Fail`. Here is a mutating webhook configured to reject an API request if errors are encountered calling the admission webhook: -{{< tabs name="MutatingWebhookConfiguration_failurePolicy" >}} -{{% tab name="admissionregistration.k8s.io/v1" %}} ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration -... webhooks: - name: my-webhook.example.com failurePolicy: Fail - ... ``` -Admission webhooks created using `admissionregistration.k8s.io/v1` default `failurePolicy` to `Fail`. 
-{{% /tab %}} -{{% tab name="admissionregistration.k8s.io/v1beta1" %}} -```yaml -# Deprecated in v1.16 in favor of admissionregistration.k8s.io/v1 -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -... -webhooks: -- name: my-webhook.example.com - failurePolicy: Fail - ... -``` - -Admission webhooks created using `admissionregistration.k8s.io/v1beta1` default `failurePolicy` to `Ignore`. -{{% /tab %}} -{{< /tabs >}} +The default `failurePolicy` for an admission webhook is `Fail`. ## Monitoring admission webhooks @@ -1388,121 +918,123 @@ monitoring mechanisms help cluster admins to answer questions like: 2. What change did the mutating webhook applied to the object? -3. Which webhooks are frequently rejecting API requests? What's the reason for a - rejection? +3. Which webhooks are frequently rejecting API requests? What's the reason for a rejection? ### Mutating webhook auditing annotations Sometimes it's useful to know which mutating webhook mutated the object in a API request, and what change did the webhook apply. -In v1.16+, kube-apiserver performs [auditing](/docs/tasks/debug/debug-cluster/audit/) on each mutating webhook -invocation. Each invocation generates an auditing annotation -capturing if a request object is mutated by the invocation, and optionally generates an annotation capturing the applied -patch from the webhook admission response. The annotations are set in the audit event for given request on given stage of -its execution, which is then pre-processed according to a certain policy and written to a backend. +The Kubernetes API server performs [auditing](/docs/tasks/debug/debug-cluster/audit/) on each +mutating webhook invocation. Each invocation generates an auditing annotation +capturing if a request object is mutated by the invocation, and optionally generates an annotation +capturing the applied patch from the webhook admission response. 
The annotations are set in the +audit event for given request on given stage of its execution, which is then pre-processed +according to a certain policy and written to a backend. The audit level of a event determines which annotations get recorded: - At `Metadata` audit level or higher, an annotation with key -`mutation.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON payload indicating -a webhook gets invoked for given request and whether it mutated the object or not. + `mutation.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON + payload indicating a webhook gets invoked for given request and whether it mutated the object or not. -For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is ordered the third in the -mutating webhook chain, and didn't mutated the request object during the invocation. + For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is + ordered the third in the mutating webhook chain, and didn't mutate the request object during the + invocation. -```yaml -# the audit event recorded -{ - "kind": "Event", - "apiVersion": "audit.k8s.io/v1", - "annotations": { - "mutation.webhook.admission.k8s.io/round_1_index_2": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook.example.com\",\"mutated\": false}" - # other annotations - ... - } - # other fields - ... -} -``` + ```yaml + # the audit event recorded + { + "kind": "Event", + "apiVersion": "audit.k8s.io/v1", + "annotations": { + "mutation.webhook.admission.k8s.io/round_1_index_2": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook.example.com\",\"mutated\": false}" + # other annotations + ... + } + # other fields + ... 
+ } + ``` + + ```yaml + # the annotation value deserialized + { + "configuration": "my-mutating-webhook-configuration.example.com", + "webhook": "my-webhook.example.com", + "mutated": false + } + ``` + + The following annotation gets recorded for a webhook being invoked in the first round. The webhook + is ordered the first in the mutating webhook chain, and mutated the request object during the + invocation. -```yaml -# the annotation value deserialized -{ - "configuration": "my-mutating-webhook-configuration.example.com", - "webhook": "my-webhook.example.com", - "mutated": false -} -``` - -The following annotation gets recorded for a webhook being invoked in the first round. The webhook is ordered the first in\ -the mutating webhook chain, and mutated the request object during the invocation. - -```yaml -# the audit event recorded -{ - "kind": "Event", - "apiVersion": "audit.k8s.io/v1", - "annotations": { - "mutation.webhook.admission.k8s.io/round_0_index_0": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"mutated\": true}" - # other annotations - ... - } - # other fields - ... -} -``` - -```yaml -# the annotation value deserialized -{ - "configuration": "my-mutating-webhook-configuration.example.com", - "webhook": "my-webhook-always-mutate.example.com", - "mutated": true -} -``` + ```yaml + # the audit event recorded + { + "kind": "Event", + "apiVersion": "audit.k8s.io/v1", + "annotations": { + "mutation.webhook.admission.k8s.io/round_0_index_0": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"mutated\": true}" + # other annotations + ... + } + # other fields + ... 
+ } + ``` + + ```yaml + # the annotation value deserialized + { + "configuration": "my-mutating-webhook-configuration.example.com", + "webhook": "my-webhook-always-mutate.example.com", + "mutated": true + } + ``` - At `Request` audit level or higher, an annotation with key -`patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON payload indicating -a webhook gets invoked for given request and what patch gets applied to the request object. + `patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON payload indicating + a webhook gets invoked for given request and what patch gets applied to the request object. -For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is ordered the fourth in the -mutating webhook chain, and responded with a JSON patch which got applied to the request object. - -```yaml -# the audit event recorded -{ - "kind": "Event", - "apiVersion": "audit.k8s.io/v1", - "annotations": { - "patch.webhook.admission.k8s.io/round_1_index_3": "{\"configuration\":\"my-other-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"patch\":[{\"op\":\"add\",\"path\":\"/data/mutation-stage\",\"value\":\"yes\"}],\"patchType\":\"JSONPatch\"}" - # other annotations - ... - } - # other fields - ... -} -``` - -```yaml -# the annotation value deserialized -{ - "configuration": "my-other-mutating-webhook-configuration.example.com", - "webhook": "my-webhook-always-mutate.example.com", - "patchType": "JSONPatch", - "patch": [ - { - "op": "add", - "path": "/data/mutation-stage", - "value": "yes" - } - ] -} -``` + For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is ordered the fourth in the + mutating webhook chain, and responded with a JSON patch which got applied to the request object. 
+ + ```yaml + # the audit event recorded + { + "kind": "Event", + "apiVersion": "audit.k8s.io/v1", + "annotations": { + "patch.webhook.admission.k8s.io/round_1_index_3": "{\"configuration\":\"my-other-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"patch\":[{\"op\":\"add\",\"path\":\"/data/mutation-stage\",\"value\":\"yes\"}],\"patchType\":\"JSONPatch\"}" + # other annotations + ... + } + # other fields + ... + } + ``` + + ```yaml + # the annotation value deserialized + { + "configuration": "my-other-mutating-webhook-configuration.example.com", + "webhook": "my-webhook-always-mutate.example.com", + "patchType": "JSONPatch", + "patch": [ + { + "op": "add", + "path": "/data/mutation-stage", + "value": "yes" + } + ] + } + ``` ### Admission webhook metrics -Kube-apiserver exposes Prometheus metrics from the `/metrics` endpoint, which can be used for monitoring and +The API server exposes Prometheus metrics from the `/metrics` endpoint, which can be used for monitoring and diagnosing API server status. The following metrics record status related to admission webhooks. #### API server admission webhook rejection count @@ -1510,7 +1042,7 @@ diagnosing API server status. The following metrics record status related to adm Sometimes it's useful to know which admission webhooks are frequently rejecting API requests, and the reason for a rejection. -In v1.16+, kube-apiserver exposes a Prometheus counter metric recording admission webhook rejections. The +The API server exposes a Prometheus counter metric recording admission webhook rejections. The metrics are labelled to identify the causes of webhook rejection(s): - `name`: the name of the webhook that rejected a request. @@ -1519,11 +1051,13 @@ metrics are labelled to identify the causes of webhook rejection(s): - `type`: the admission webhook type, can be one of `admit` and `validating`. 
- `error_type`: identifies if an error occurred during the webhook invocation that caused the rejection. Its value can be one of: - - `calling_webhook_error`: unrecognized errors or timeout errors from the admission webhook happened and the - webhook's [Failure policy](#failure-policy) is set to `Fail`. - - `no_error`: no error occurred. The webhook rejected the request with `allowed: false` in the admission - response. The metrics label `rejection_code` records the `.status.code` set in the admission response. - - `apiserver_internal_error`: an API server internal error happened. + + - `calling_webhook_error`: unrecognized errors or timeout errors from the admission webhook happened and the + webhook's [Failure policy](#failure-policy) is set to `Fail`. + - `no_error`: no error occurred. The webhook rejected the request with `allowed: false` in the admission + response. The metrics label `rejection_code` records the `.status.code` set in the admission response. + - `apiserver_internal_error`: an API server internal error happened. + - `rejection_code`: the HTTP status code set in the admission response when a webhook rejected a request. @@ -1553,7 +1087,8 @@ the initial application. 2. For a `CREATE` pod request, if the field `.spec.containers[].resources.limits` of a container is not set, set default resource limits. -3. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar` if no container with the name `foo-sidecar` already exists. +3. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar` if no container + with the name `foo-sidecar` already exists. In the cases above, the webhook can be safely reinvoked, or admit an object that already has the fields set. @@ -1587,21 +1122,25 @@ versions. 
See [Matching requests: matchPolicy](#matching-requests-matchpolicy) f ### Availability -It is recommended that admission webhooks should evaluate as quickly as possible (typically in milliseconds), since they add to API request latency. +It is recommended that admission webhooks should evaluate as quickly as possible (typically in +milliseconds), since they add to API request latency. It is encouraged to use a small timeout for webhooks. See [Timeouts](#timeouts) for more detail. -It is recommended that admission webhooks should leverage some format of load-balancing, to provide high availability and -performance benefits. If a webhook is running within the cluster, you can run multiple webhook backends behind a service -to leverage the load-balancing that service supports. +It is recommended that admission webhooks should leverage some format of load-balancing, to +provide high availability and performance benefits. If a webhook is running within the cluster, +you can run multiple webhook backends behind a service to leverage the load-balancing that service +supports. ### Guaranteeing the final state of the object is seen Admission webhooks that need to guarantee they see the final state of the object in order to enforce policy should use a validating admission webhook, since objects can be modified after being seen by mutating webhooks. -For example, a mutating admission webhook is configured to inject a sidecar container with name "foo-sidecar" on every -`CREATE` pod request. If the sidecar *must* be present, a validating admisson webhook should also be configured to intercept `CREATE` pod requests, and validate -that a container with name "foo-sidecar" with the expected configuration exists in the to-be-created object. +For example, a mutating admission webhook is configured to inject a sidecar container with name +"foo-sidecar" on every `CREATE` pod request. 
If the sidecar *must* be present, a validating +admission webhook should also be configured to intercept `CREATE` pod requests, and validate that a +container with name "foo-sidecar" with the expected configuration exists in the to-be-created +object. ### Avoiding deadlocks in self-hosted webhooks @@ -1614,7 +1153,8 @@ When a node that runs the webhook server pods becomes unhealthy, the webhook deployment will try to reschedule the pods to another node. However the requests will get rejected by the existing webhook server since the `"env"` label is unset, and the migration cannot happen. -It is recommended to exclude the namespace where your webhook is running with a [namespaceSelector](#matching-requests-namespaceselector). +It is recommended to exclude the namespace where your webhook is running with a +[namespaceSelector](#matching-requests-namespaceselector). ### Side effects @@ -1636,4 +1176,3 @@ If your admission webhooks don't intend to modify the behavior of the Kubernetes plane, exclude the `kube-system` namespace from being intercepted using a [`namespaceSelector`](#matching-requests-namespaceselector). - diff --git a/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md b/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md index 6c820a6e99c..82394b363af 100644 --- a/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md +++ b/content/en/docs/reference/access-authn-authz/psp-to-pod-security-standards.md @@ -9,7 +9,7 @@ weight: 95 The tables below enumerate the configuration parameters on -[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) objects, whether the field mutates +[PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) objects, whether the field mutates and/or validates pods, and how the configuration values map to the [Pod Security Standards](/docs/concepts/security/pod-security-standards/). 
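Once the equivalent Pod Security Standards level for a PodSecurityPolicy has been identified from these tables, it is typically enforced per namespace through the built-in Pod Security admission labels. A minimal, hypothetical sketch (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace                              # placeholder
  labels:
    # The warn and audit modes can be set with the analogous
    # pod-security.kubernetes.io/warn and .../audit labels.
    pod-security.kubernetes.io/enforce: baseline
```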
@@ -31,9 +31,9 @@ The fields enumerated in this table are part of the `PodSecurityPolicySpec`, whi under the `.spec` field path. - - - + + + @@ -54,19 +54,19 @@ under the `.spec` field path. @@ -236,9 +236,9 @@ The [annotations](/docs/concepts/overview/working-with-objects/annotations/) enu table can be specified under `.metadata.annotations` on the PodSecurityPolicy object.
Mapping PodSecurityPolicySpec fields to Pod Security Standards
Mapping PodSecurityPolicySpec fields to Pod Security Standards
PodSecurityPolicySpec Type Pod Security Standards Equivalent

Baseline: subset of

    -
  • AUDIT_WRITE
  • -
  • CHOWN
  • -
  • DAC_OVERRIDE
  • -
  • FOWNER
  • -
  • FSETID
  • -
  • KILL
  • -
  • MKNOD
  • -
  • NET_BIND_SERVICE
  • -
  • SETFCAP
  • -
  • SETGID
  • -
  • SETPCAP
  • -
  • SETUID
  • -
  • SYS_CHROOT
  • +
  • AUDIT_WRITE
  • +
  • CHOWN
  • +
  • DAC_OVERRIDE
  • +
  • FOWNER
  • +
  • FSETID
  • +
  • KILL
  • +
  • MKNOD
  • +
  • NET_BIND_SERVICE
  • +
  • SETFCAP
  • +
  • SETGID
  • +
  • SETPCAP
  • +
  • SETUID
  • +
  • SYS_CHROOT

Restricted: empty / undefined / nil OR a list containing only NET_BIND_SERVICE

- - - + + + diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index d085251e433..a681164fe9d 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -54,8 +54,8 @@ it can't be both. ClusterRoles have several uses. You can use a ClusterRole to: -1. define permissions on namespaced resources and be granted within individual namespace(s) -1. define permissions on namespaced resources and be granted across all namespaces +1. define permissions on namespaced resources and be granted access within individual namespace(s) +1. define permissions on namespaced resources and be granted access across all namespaces 1. define permissions on cluster-scoped resources If you want to define a role within a namespace, use a Role; if you want to define diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 9d1e67b3c05..d2d740c1930 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -808,7 +808,7 @@ Each feature gate is designed for enabling/disabling a specific feature: availability during update per node. See [Perform a Rolling Update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/). - `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do - [default spreading](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints). + [default spreading](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints). 
- `DelegateFSGroupToCSIDriver`: If supported by the CSI driver, delegates the role of applying `fsGroup` from a Pod's `securityContext` to the driver by passing `fsGroup` through the NodeStageVolume and NodePublishVolume CSI calls. @@ -854,7 +854,7 @@ Each feature gate is designed for enabling/disabling a specific feature: {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}} to running pods. - `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See - [Pod Topology Spread Constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). + [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/). - `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts. This feature gate exists in case any of your existing workloads depend on a now-corrected fault where Kubernetes ignored exec probe timeouts. See @@ -995,7 +995,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `MemoryQoS`: Enable memory protection and usage throttle on pod / container using cgroup v2 memory controller. - `MinDomainsInPodTopologySpread`: Enable `minDomains` in Pod - [topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). + [topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/). - `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type Service instance. - `MountContainers`: Enable using utility containers on host as the volume mounter. 
diff --git a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md index ed0980e2994..84d1d84b153 100644 --- a/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/en/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -27,7 +27,7 @@ each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node. Multiple different schedulers may be used within a cluster; kube-scheduler is the reference implementation. -See [scheduling](https://kubernetes.io/docs/concepts/scheduling-eviction/) +See [scheduling](/docs/concepts/scheduling-eviction/) for more information about scheduling and the kube-scheduler component. ``` diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index b10e1b70574..f2a80315021 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -90,7 +90,7 @@ kubelet [flags] - + @@ -187,27 +187,6 @@ kubelet [flags] - - - - - - - - - - - - - - - - - - - - - @@ -230,20 +209,19 @@ kubelet [flags] - + - + - + - + - @@ -276,7 +254,7 @@ kubelet [flags] - + @@ -286,20 +264,6 @@ kubelet [flags] - - - - - - - - - - - - - - @@ -398,13 +362,6 @@ kubelet [flags] - - - - - - - @@ -412,13 +369,6 @@ kubelet [flags] - - - - - - @@ -445,83 +395,76 @@ APIServerIdentity=true|false (ALPHA - default=false)
APIServerTracing=true|false (ALPHA - default=false)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
-AnyVolumeDataSource=true|false (ALPHA - default=false)
+AnyVolumeDataSource=true|false (BETA - default=true)
AppArmor=true|false (BETA - default=true)
CPUManager=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
-CPUManagerPolicyOptions=true|false (ALPHA - default=false)
+CPUManagerPolicyOptions=true|false (BETA - default=true)
CSIInlineVolume=true|false (BETA - default=true)
CSIMigration=true|false (BETA - default=true)
-CSIMigrationAWS=true|false (BETA - default=false)
-CSIMigrationAzureDisk=true|false (BETA - default=true)
-CSIMigrationAzureFile=true|false (BETA - default=false)
+CSIMigrationAWS=true|false (BETA - default=true)
+CSIMigrationAzureFile=true|false (BETA - default=true)
CSIMigrationGCE=true|false (BETA - default=true)
-CSIMigrationOpenStack=true|false (BETA - default=true)
CSIMigrationPortworx=true|false (ALPHA - default=false)
+CSIMigrationRBD=true|false (ALPHA - default=false)
CSIMigrationvSphere=true|false (BETA - default=false)
-CSIStorageCapacity=true|false (BETA - default=true)
CSIVolumeHealth=true|false (ALPHA - default=false)
-CSRDuration=true|false (BETA - default=true)
-ControllerManagerLeaderMigration=true|false (BETA - default=true)
+ContextualLogging=true|false (ALPHA - default=false)
+CronJobTimeZone=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (ALPHA - default=false)
DaemonSetUpdateSurge=true|false (BETA - default=true)
-DefaultPodTopologySpread=true|false (BETA - default=true)
DelegateFSGroupToCSIDriver=true|false (BETA - default=true)
DevicePlugins=true|false (BETA - default=true)
DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DownwardAPIHugePages=true|false (BETA - default=true)
-EfficientWatchResumption=true|false (BETA - default=true)
EndpointSliceTerminatingCondition=true|false (BETA - default=true)
EphemeralContainers=true|false (BETA - default=true)
-ExpandCSIVolumes=true|false (BETA - default=true)
-ExpandInUsePersistentVolumes=true|false (BETA - default=true)
-ExpandPersistentVolumes=true|false (BETA - default=true)
ExpandedDNSConfig=true|false (ALPHA - default=false)
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
-GRPCContainerProbe=true|false (ALPHA - default=false)
+GRPCContainerProbe=true|false (BETA - default=true)
GracefulNodeShutdown=true|false (BETA - default=true)
-GracefulNodeShutdownBasedOnPodPriority=true|false (ALPHA - default=false)
+GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (ALPHA - default=false)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
-IdentifyPodOS=true|false (ALPHA - default=false)
+IdentifyPodOS=true|false (BETA - default=true)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
-InTreePluginRBDUnregister=true|false (ALPHA - default=false)
+InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
-IndexedJob=true|false (BETA - default=true)
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)
-JobReadyPods=true|false (ALPHA - default=false)
-JobTrackingWithFinalizers=true|false (BETA - default=true)
-KubeletCredentialProviders=true|false (ALPHA - default=false)
+JobReadyPods=true|false (BETA - default=true)
+JobTrackingWithFinalizers=true|false (BETA - default=false)
+KubeletCredentialProviders=true|false (BETA - default=true)
KubeletInUserNamespace=true|false (ALPHA - default=false)
KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
+LegacyServiceAccountTokenNoAutoGeneration=true|false (BETA - default=true)
LocalStorageCapacityIsolation=true|false (BETA - default=true)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
+MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
-MixedProtocolLBService=true|false (ALPHA - default=false)
+MinDomainsInPodTopologySpread=true|false (ALPHA - default=false)
+MixedProtocolLBService=true|false (BETA - default=true)
NetworkPolicyEndPort=true|false (BETA - default=true)
+NetworkPolicyStatus=true|false (ALPHA - default=false)
+NodeOutOfServiceVolumeDetach=true|false (ALPHA - default=false)
NodeSwap=true|false (ALPHA - default=false)
-NonPreemptingPriority=true|false (BETA - default=true)
-OpenAPIEnums=true|false (ALPHA - default=false)
-OpenAPIV3=true|false (ALPHA - default=false)
-PodAffinityNamespaceSelector=true|false (BETA - default=true)
+OpenAPIEnums=true|false (BETA - default=true)
+OpenAPIV3=true|false (BETA - default=true)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
-PodOverhead=true|false (BETA - default=true)
PodSecurity=true|false (BETA - default=true)
-PreferNominatedNode=true|false (BETA - default=true)
ProbeTerminationGracePeriod=true|false (BETA - default=false)
ProcMountType=true|false (ALPHA - default=false)
ProxyTerminatingEndpoints=true|false (ALPHA - default=false)
@@ -529,25 +472,22 @@ QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (ALPHA - default=false)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
-RemoveSelfLink=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SeccompDefault=true|false (ALPHA - default=false)
+ServerSideFieldValidation=true|false (ALPHA - default=false)
+ServiceIPStaticSubrange=true|false (ALPHA - default=false)
ServiceInternalTrafficPolicy=true|false (BETA - default=true)
-ServiceLBNodePortControl=true|false (BETA - default=true)
-ServiceLoadBalancerClass=true|false (BETA - default=true)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)
StatefulSetMinReadySeconds=true|false (BETA - default=true)
StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
-SuspendJob=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManager=true|false (BETA - default=true)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
WinOverlay=true|false (BETA - default=true)
WindowsHostProcessContainers=true|false (BETA - default=true)
-csiMigrationRBD=true|false (ALPHA - default=false)
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) @@ -628,18 +568,11 @@ csiMigrationRBD=true|false (ALPHA - default=false)
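Consistent with the deprecation note above, kubelet feature gates are set through the `featureGates` map in the kubelet configuration file rather than via the command-line flag. A minimal sketch — the two gates shown are taken from the table above, and the exact set available is version-dependent:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SeccompDefault: true  # ALPHA in the table above, default=false
  NodeSwap: false       # explicit value matching the ALPHA default
```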
@@ -1237,7 +1142,7 @@ TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_E
diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md
index 377ac021b67..a6e2bd98cd3 100644
--- a/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md
+++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta2.md
@@ -113,7 +113,7 @@ components by adding customized setting or overriding kubeadm default settings.<

The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or -https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration +https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration for kube proxy official documentation.
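As a sketch of what the paragraph above describes, a partial KubeProxyConfiguration can be included in the kubeadm configuration file; kubeadm defaults any field left unset (the `mode` value here is illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Only the fields you want to override need to be present;
# kubeadm fills in defaults for the rest.
mode: "ipvs"
```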

apiVersion: kubelet.config.k8s.io/v1beta1
 kind: KubeletConfiguration
@@ -121,7 +121,7 @@ for kube proxy official documentation.

The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or -https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration +https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration for kubelet official documentation.

Here is a fully populated example of a single YAML file containing multiple configuration types to be used during a kubeadm init run.
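The fully populated example itself is not reproduced in this diff; as a hedged sketch, such a file concatenates several configuration kinds with YAML document separators:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
# init-time options (node registration, bootstrap tokens, ...)
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# cluster-wide options (networking, etcd, API server extra args, ...)
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet settings applied to all nodes
```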

@@ -1307,4 +1307,4 @@ current node is registered.

Mapping PodSecurityPolicy annotations to Pod Security Standards
Mapping PodSecurityPolicy annotations to Pod Security Standards
PSP Annotation Type Pod Security Standards Equivalent
--authorization-mode string     Default: AlwaysAllow
Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--cluster-domain string
Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--cni-bin-dir string     Default: /opt/cni/bin
A comma-separated list of full paths of directories in which to search for CNI plugin binaries. This docker-specific flag only works when container-runtime is set to docker. (DEPRECATED: will be removed along with dockershim.)
--cni-cache-dir string     Default: /var/lib/cni/cache
The full path of the directory in which CNI should store cache files. This docker-specific flag only works when container-runtime is set to docker. (DEPRECATED: will be removed along with dockershim.)
--cni-conf-dir string     Default: /etc/cni/net.d
<Warning: Alpha feature> The full path of the directory in which to search for CNI config files. This docker-specific flag only works when container-runtime is set to docker. (DEPRECATED: will be removed along with dockershim.)
--config string
--container-runtime string     Default: docker--container-runtime string     Default: remote
The container runtime to use. Possible values: docker, remote.The container runtime to use. Possible values: docker, remote. (DEPRECATED: will be removed in 1.27 as the only valid value is 'remote')
--container-runtime-endpoint string     Default: unix:///var/run/dockershim.sock--container-runtime-endpoint string
[Experimental] The endpoint of remote runtime service. Currently unix socket endpoint is supported on Linux, while npipe and tcp endpoints are supported on windows. Examples: unix:///var/run/dockershim.sock, npipe:////./pipe/dockershim.The endpoint of remote runtime service. Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on windows. Examples: unix:///var/run/dockershim.sock, npipe:////./pipe/dockershim.
--contention-profiling
--cpu-manager-policy-options mapStringString
Comma-separated list of options to fine-tune the behavior of the selected CPU Manager policy. If not supplied, keep the default behaviour. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)A set of key=value CPU Manager policy options to use, to fine tune their behaviour. If not supplied, keep the default behaviour. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
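Per the deprecation note, these key=value options belong in the kubelet configuration file as the `cpuManagerPolicyOptions` map; a sketch assuming the `static` policy and its `full-pcpus-only` option:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  full-pcpus-only: "true"  # option names are policy-specific
```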
--cpu-manager-reconcile-period duration
<Warning: Alpha feature> CPU Manager reconciliation period. Examples: 10s, or 1m. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--docker-endpoint string     Default: unix:///var/run/docker.sock
Use this for the docker endpoint to communicate with. This docker-specific flag only works when container-runtime is set to docker. (DEPRECATED: will be removed along with dockershim.)
--dynamic-config-dir string
The Kubelet will use this directory for checkpointing downloaded configurations and tracking configuration health. The Kubelet will create this directory if it does not already exist. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Providing this flag enables dynamic Kubelet configuration. The DynamicKubeletConfig feature gate must be enabled to pass this flag. (DEPRECATED: Feature DynamicKubeletConfig is deprecated in 1.22 and will not move to GA. It is planned to be removed from Kubernetes in the version 1.24 or later. Please use alternative ways to update kubelet configuration.)
--enable-controller-attach-detach     Default: true
--experimental-allocatable-ignore-eviction
When set to true, hard eviction thresholds will be ignored while calculating node allocatable. See here for more details. (DEPRECATED: will be removed in 1.24 or later)
--experimental-check-node-capabilities-before-mount
[Experimental] if set to true, the kubelet will check the underlying node for required components (binaries, etc.) before performing the mount (DEPRECATED: will be removed in 1.24 or later, in favor of using CSI.)
--experimental-kernel-memcg-notification
Use kernelMemcgNotification configuration, this flag will be removed in 1.24 or later. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--experimental-log-sanitization bool
[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--experimental-mounter-path string     Default: mount
--image-gc-low-threshold int32
The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of --image-gc-high-threshold. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--image-pull-progress-deadline duration     Default: 1m0s
If no pulling progress is made before this deadline, the image pulling will be cancelled. This docker-specific flag only works when container-runtime is set to docker. (DEPRECATED: will be removed along with dockershim.)
--image-service-endpoint string
[Experimental] The endpoint of remote image service. If not specified, it will be the same with --container-runtime-endpoint by default. Currently UNIX socket endpoint is supported on Linux, while npipe and TCP endpoints are supported on Windows. Examples: unix:///var/run/dockershim.sock, npipe:////./pipe/dockershim[Experimental] The endpoint of remote image service. If not specified, it will be the same with --container-runtime-endpoint by default. Unix Domain Sockets are supported on Linux, while npipe and TCP endpoints are supported on Windows. Examples: unix:///var/run/dockershim.sock, npipe:////./pipe/dockershim
--minimum-image-ttl-duration duration
Minimum age for an unused image before it is garbage collected. Examples: '300ms', '10s' or '2h45m'. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--network-plugin string
The name of the network plugin to be invoked for various events in kubelet/pod lifecycle. This docker-specific flag only works when container-runtime is set to docker. (DEPRECATED: will be removed along with dockershim.)
--network-plugin-mtu int32
The MTU to be passed to the network plugin, to override the default. Set to 0 to use the default 1460 MTU. This docker-specific flag only works when container-runtime is set to docker. (DEPRECATED: will be removed along with dockershim.)
--node-ip string
--node-status-update-frequency duration
Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in Node controller. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--non-masquerade-cidr string     Default: 10.0.0.0/8
Traffic to IPs outside this range will use IP masquerade. Set to '0.0.0.0/0' to never masquerade. (DEPRECATED: will be removed in a future version)
--one-output
--read-only-port int32
The read-only port for the kubelet to serve on with no authentication/authorization (set to 0 to disable). (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.)
--really-crash-for-testing
If true, crash when panics occur. Intended for testing. (DEPRECATED: will be removed in a future version.)
--register-node     Default: true
--seccomp-default RuntimeDefault--seccomp-default string
<Warning: Alpha feature> Enable the use of RuntimeDefault as the default seccomp profile for all workloads. The SeccompDefault feature gate must be enabled to allow this flag, which is disabled by default.
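In configuration-file form, the flag above corresponds to the `seccompDefault` field, gated by the `SeccompDefault` feature gate; a sketch:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SeccompDefault: true  # required while the feature is alpha
seccompDefault: true    # use RuntimeDefault for workloads without a profile
```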
--tls-cipher-suites string
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: -TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384
-Insecure values: -TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. -(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See kubelet-config-file for more information.) +`TLS_AES_128_GCM_SHA256`, `TLS_AES_256_GCM_SHA384`, `TLS_CHACHA20_POLY1305_SHA256`, `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA`, `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`, `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305`, `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256`, `TLS_RSA_WITH_AES_128_CBC_SHA`, `TLS_RSA_WITH_AES_128_GCM_SHA256`, `TLS_RSA_WITH_AES_256_CBC_SHA`, `TLS_RSA_WITH_AES_256_GCM_SHA384`
+Insecure values:
+`TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_ECDSA_WITH_RC4_128_SHA`, `TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_RSA_WITH_RC4_128_SHA`, `TLS_RSA_WITH_3DES_EDE_CBC_SHA`, `TLS_RSA_WITH_AES_128_CBC_SHA256`, `TLS_RSA_WITH_RC4_128_SHA`.
+(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See kubelet-config-file for more information.)
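As the deprecation note says, this list is better expressed in the kubelet configuration file via `tlsCipherSuites`; a sketch using two of the preferred values listed above:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCipherSuites:
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_AES_128_GCM_SHA256
```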
--vmodule <A list of 'pattern=N' string>--vmodule <A list of 'pattern=N' strings>
Comma-separated list of pattern=N settings for file-filtered logging
- + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md index 75fc7c1ecfd..20f5e44d93a 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md @@ -122,7 +122,7 @@ components by adding customized setting or overriding kubeadm default settings.<

The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or -https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration +https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration for kube-proxy official documentation.

apiVersion: kubelet.config.k8s.io/v1beta1
 kind: KubeletConfiguration
@@ -130,7 +130,7 @@ for kube-proxy official documentation.

The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or -https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration +https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration for kubelet official documentation.

Here is a fully populated example of a single YAML file containing multiple configuration types to be used during a kubeadm init run.

@@ -555,7 +555,7 @@ by kubeadm during kubeadm join.

int32 -

bindPorti sets the secure port for the API Server to bind to. +

bindPort sets the secure port for the API Server to bind to. Defaults to 6443.
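The `bindPort` field described above sits under `localAPIEndpoint` in the kubeadm InitConfiguration; a minimal sketch (the address is illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.0.10"  # illustrative address
  bindPort: 6443                    # the default noted above
```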

@@ -1159,7 +1159,7 @@ This information will be annotated to the Node API object, for later re-use

[]core/v1.Taint -

tains specifies the taints the Node API object should be registered with. +

taints specifies the taints the Node API object should be registered with. If this field is unset, i.e. nil, in the kubeadm init process it will be defaulted with a control-plane taint for control-plane nodes. If you don't want to taint your control-plane node, set this field to an empty list, diff --git a/content/en/docs/reference/glossary/api-eviction.md b/content/en/docs/reference/glossary/api-eviction.md index d450f907439..e6db5624614 100644 --- a/content/en/docs/reference/glossary/api-eviction.md +++ b/content/en/docs/reference/glossary/api-eviction.md @@ -22,6 +22,6 @@ When an `Eviction` object is created, the API server terminates the Pod. API-initiated evictions respect your configured [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/) and [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination). -API-initiated eviction is not the same as [node-pressure eviction](/docs/concepts/scheduling-eviction/eviction/#kubelet-eviction). +API-initiated eviction is not the same as [node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/). * See [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/) for more information. diff --git a/content/en/docs/reference/glossary/extensions.md b/content/en/docs/reference/glossary/extensions.md index 4f5c5ebd787..c994b601a1a 100644 --- a/content/en/docs/reference/glossary/extensions.md +++ b/content/en/docs/reference/glossary/extensions.md @@ -2,9 +2,10 @@ title: Extensions id: Extensions date: 2019-02-01 -full_link: /docs/concepts/extend-kubernetes/extend-cluster/#extensions +full_link: /docs/concepts/extend-kubernetes/#extensions short_description: > - Extensions are software components that extend and deeply integrate with Kubernetes to support new types of hardware. + Extensions are software components that extend and deeply integrate with Kubernetes to support + new types of hardware. 
aka: tags: @@ -15,4 +16,6 @@ tags: -Many cluster administrators use a hosted or distribution instance of Kubernetes. These clusters come with extensions pre-installed. As a result, most Kubernetes users will not need to install [extensions](/docs/concepts/extend-kubernetes/extend-cluster/#extensions) and even fewer users will need to author new ones. +Many cluster administrators use a hosted or distribution instance of Kubernetes. These clusters +come with extensions pre-installed. As a result, most Kubernetes users will not need to install +[extensions](/docs/concepts/extend-kubernetes/) and even fewer users will need to author new ones. diff --git a/content/en/docs/reference/glossary/garbage-collection.md b/content/en/docs/reference/glossary/garbage-collection.md index ec2fe19af7c..1d4b9b57852 100644 --- a/content/en/docs/reference/glossary/garbage-collection.md +++ b/content/en/docs/reference/glossary/garbage-collection.md @@ -2,7 +2,7 @@ title: Garbage Collection id: garbage-collection date: 2021-07-07 -full_link: /docs/concepts/workloads/controllers/garbage-collection/ +full_link: /docs/concepts/architecture/garbage-collection/ short_description: > A collective term for the various mechanisms Kubernetes uses to clean up cluster resources. @@ -12,13 +12,16 @@ tags: - fundamental - operation --- - Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up - cluster resources. + +Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up +cluster resources. 
-Kubernetes uses garbage collection to clean up resources like [unused containers and images](/docs/concepts/workloads/controllers/garbage-collection/#containers-images), +Kubernetes uses garbage collection to clean up resources like +[unused containers and images](/docs/concepts/architecture/garbage-collection/#containers-images), [failed Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection), [objects owned by the targeted resource](/docs/concepts/overview/working-with-objects/owners-dependents/), [completed Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/), and resources -that have expired or failed. \ No newline at end of file +that have expired or failed. + diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index 693c66d1fbe..fa7e9bffe0b 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -68,6 +68,11 @@ kubectl config get-contexts # display list of contexts kubectl config current-context # display the current-context kubectl config use-context my-cluster-name # set the default context to my-cluster-name +kubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig + +# configure the URL to a proxy server to use for requests made by this client in the kubeconfig +kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url + # add a new user to your kubeconf that supports basic auth kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword @@ -182,6 +187,9 @@ kubectl get pods --selector=app=cassandra -o \ kubectl get configmap myconfig \ -o jsonpath='{.data.ca\.crt}' +# Retrieve a base64 encoded value with dashes instead of underscores. 
+kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}' + # Get all worker nodes (use a selector to exclude results that have a label # named 'node-role.kubernetes.io/control-plane') kubectl get node --selector='!node-role.kubernetes.io/control-plane' @@ -381,6 +389,9 @@ kubectl cluster-info # Display kubectl cluster-info dump # Dump current cluster state to stdout kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state +# View existing taints which exist on current nodes. +kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect + # If a taint with that key and effect already exists, its value is replaced as specified. kubectl taint nodes foo dedicated=special-user:NoSchedule ``` diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md index 4b74774ea78..d132642e2c5 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -618,6 +618,16 @@ or updating objects that contain Pod templates, such as Deployments, Jobs, State See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information. +### kubernetes.io/psp (deprecated) {#kubernetes-io-psp} + +Example: `kubernetes.io/psp: restricted` + +This annotation is only relevant if you are using [PodSecurityPolicies](/docs/concepts/security/pod-security-policy/). + +When the PodSecurityPolicy admission controller admits a Pod, the admission controller +modifies the Pod to have this annotation. +The value of the annotation is the name of the PodSecurityPolicy that was used for validation.
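The `kubernetes.io/psp` annotation described above is written onto the admitted Pod by the admission controller; assuming a PodSecurityPolicy named `restricted`, the resulting metadata sketches as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod  # illustrative name
  annotations:
    # added by the PodSecurityPolicy admission controller, not by the user
    kubernetes.io/psp: restricted
```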
+ ### seccomp.security.alpha.kubernetes.io/pod (deprecated) {#seccomp-security-alpha-kubernetes-io-pod} This annotation has been deprecated since Kubernetes v1.19 and will become non-functional in v1.25. diff --git a/content/en/docs/reference/scheduling/config.md b/content/en/docs/reference/scheduling/config.md index 502ffcb61e4..0911058ad46 100644 --- a/content/en/docs/reference/scheduling/config.md +++ b/content/en/docs/reference/scheduling/config.md @@ -123,7 +123,7 @@ extension points: and [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity). Extension points: `filter`, `score`. - `PodTopologySpread`: Implements - [Pod topology spread](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). + [Pod topology spread](/docs/concepts/scheduling-eviction/topology-spread-constraints/). Extension points: `preFilter`, `filter`, `preScore`, `score`. - `NodeUnschedulable`: Filters out nodes that have `.spec.unschedulable` set to true. diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md index 9cd82c43167..4bb4c30f76d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md @@ -17,7 +17,7 @@ Generate keys and certificate signing requests Generates keys and certificate signing requests (CSRs) for all the certificates required to run the control plane. This command also generates partial kubeconfig files with private key data in the "users > user > client-key-data" field, and for each kubeconfig file an accompanying ".csr" file is created. -This command is designed for use in [Kubeadm External CA Mode](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). 
It generates CSRs which you can then submit to your external certificate authority for signing. +This command is designed for use in [Kubeadm External CA Mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing. The PEM encoded signed certificates should then be saved alongside the key files, using ".crt" as the file extension, or in the case of kubeconfig files, the PEM encoded signed certificate should be base64 encoded and added to the kubeconfig file in the "users > user > client-certificate-data" field. diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index fdb117c5d54..124d1ddfd22 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -6,7 +6,9 @@ title: kubeadm init content_type: concept weight: 20 --- + + This command initializes a Kubernetes control-plane node. @@ -26,12 +28,12 @@ following steps: 1. Generates a self-signed CA to set up identities for each component in the cluster. The user can provide their own CA cert and/or key by dropping it in the cert directory configured via `--cert-dir` (`/etc/kubernetes/pki` by default). - The APIServer certs will have additional SAN entries for any `--apiserver-cert-extra-sans` arguments, lowercased if necessary. + The APIServer certs will have additional SAN entries for any `--apiserver-cert-extra-sans` + arguments, lowercased if necessary. -1. Writes kubeconfig files in `/etc/kubernetes/` for - the kubelet, the controller-manager and the scheduler to use to connect to the - API server, each with its own identity, as well as an additional - kubeconfig file for administration named `admin.conf`. +1. 
Writes kubeconfig files in `/etc/kubernetes/` for the kubelet, the controller-manager and the + scheduler to use to connect to the API server, each with its own identity, as well as an + additional kubeconfig file for administration named `admin.conf`. 1. Generates static Pod manifests for the API server, controller-manager and scheduler. In case an external etcd is not provided, @@ -76,10 +78,12 @@ following steps: Kubeadm allows you to create a control-plane node in phases using the `kubeadm init phase` command. -To view the ordered list of phases and sub-phases you can call `kubeadm init --help`. The list will be located at the top of the help screen and each phase will have a description next to it. +To view the ordered list of phases and sub-phases you can call `kubeadm init --help`. The list +will be located at the top of the help screen and each phase will have a description next to it. Note that by calling `kubeadm init` all of the phases and sub-phases will be executed in this exact order. -Some phases have unique flags, so if you want to have a look at the list of available options add `--help`, for example: +Some phases have unique flags, so if you want to have a look at the list of available options add +`--help`, for example: ```shell sudo kubeadm init phase control-plane controller-manager --help @@ -91,7 +95,8 @@ You can also use `--help` to see the list of sub-phases for a certain parent pha sudo kubeadm init phase control-plane --help ``` -`kubeadm init` also exposes a flag called `--skip-phases` that can be used to skip certain phases. The flag accepts a list of phase names and the names can be taken from the above ordered list. +`kubeadm init` also exposes a flag called `--skip-phases` that can be used to skip certain phases. +The flag accepts a list of phase names and the names can be taken from the above ordered list. 
An example: @@ -102,7 +107,10 @@ sudo kubeadm init phase etcd local --config=configfile.yaml sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml ``` -What this example would do is write the manifest files for the control plane and etcd in `/etc/kubernetes/manifests` based on the configuration in `configfile.yaml`. This allows you to modify the files and then skip these phases using `--skip-phases`. By calling the last command you will create a control plane node with the custom manifest files. +What this example would do is write the manifest files for the control plane and etcd in +`/etc/kubernetes/manifests` based on the configuration in `configfile.yaml`. This allows you to +modify the files and then skip these phases using `--skip-phases`. By calling the last command you +will create a control plane node with the custom manifest files. {{< feature-state for_k8s_version="v1.22" state="beta" >}} @@ -249,7 +257,7 @@ To set a custom image for these you need to configure this in your to use the image. Consult the documentation for your container runtime to find out how to change this setting; for selected container runtimes, you can also find advice within the -[Container Runtimes]((/docs/setup/production-environment/container-runtimes/) topic. +[Container Runtimes](/docs/setup/production-environment/container-runtimes/) topic. ### Uploading control-plane certificates to the cluster @@ -284,30 +292,35 @@ and certificate renewal. ### Managing the kubeadm drop-in file for the kubelet {#kubelet-drop-in} -The `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`. Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm DEB/RPM package. +The `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`. +Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm +DEB/RPM package. 
-For further information, see [Managing the kubeadm drop-in file for systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd). +For further information, see +[Managing the kubeadm drop-in file for systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd). ### Use kubeadm with CRI runtimes -By default kubeadm attempts to detect your container runtime. For more details on this detection, see -the [kubeadm CRI installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime). +By default kubeadm attempts to detect your container runtime. For more details on this detection, +see the [kubeadm CRI installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime). ### Setting the node name -By default, `kubeadm` assigns a node name based on a machine's host address. You can override this setting with the `--node-name` flag. +By default, `kubeadm` assigns a node name based on a machine's host address. +You can override this setting with the `--node-name` flag. The flag passes the appropriate [`--hostname-override`](/docs/reference/command-line-tools-reference/kubelet/#options) value to the kubelet. -Be aware that overriding the hostname can [interfere with cloud providers](https://github.com/kubernetes/website/pull/8873). +Be aware that overriding the hostname can +[interfere with cloud providers](https://github.com/kubernetes/website/pull/8873). ### Automating kubeadm Rather than copying the token you obtained from `kubeadm init` to each node, as -in the [basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), you can parallelize the -token distribution for easier automation. 
To implement this automation, you must -know the IP address that the control-plane node will have after it is started, -or use a DNS name or an address of a load balancer. +in the [basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), +you can parallelize the token distribution for easier automation. To implement this automation, +you must know the IP address that the control-plane node will have after it is started, or use a +DNS name or an address of a load balancer. 1. Generate a token. This token must have the form `<6 character string>.<16 character string>`. More formally, it must match the regex: @@ -341,7 +354,11 @@ provisioned). For details, see the [kubeadm join](/docs/reference/setup-tools/ku ## {{% heading "whatsnext" %}} * [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) to understand more about -`kubeadm init` phases -* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes worker node and join it to the cluster -* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes cluster to a newer version -* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join` + `kubeadm init` phases +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes + worker node and join it to the cluster +* [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes + cluster to a newer version +* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made + to this host by `kubeadm init` or `kubeadm join` + diff --git a/content/en/docs/reference/using-api/_index.md b/content/en/docs/reference/using-api/_index.md index 6592deb3c71..182bbd4fe63 100644 --- a/content/en/docs/reference/using-api/_index.md +++ 
b/content/en/docs/reference/using-api/_index.md @@ -39,7 +39,7 @@ The JSON and Protobuf serialization schemas follow the same guidelines for schema changes. The following descriptions cover both formats. The API versioning and software versioning are indirectly related. -The [API and release versioning proposal](https://git.k8s.io/design-proposals-archive/release/versioning.md) +The [API and release versioning proposal](https://git.k8s.io/sig-release/release-engineering/versioning.md) describes the relationship between API versioning and software versioning. Different API versions indicate different levels of stability and support. You diff --git a/content/en/docs/reference/using-api/deprecation-guide.md b/content/en/docs/reference/using-api/deprecation-guide.md index d448344504a..0b5969e9f18 100644 --- a/content/en/docs/reference/using-api/deprecation-guide.md +++ b/content/en/docs/reference/using-api/deprecation-guide.md @@ -330,7 +330,7 @@ For example: ### Locate use of deprecated APIs -Use [client warnings, metrics, and audit information available in 1.19+](https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings) +Use [client warnings, metrics, and audit information available in 1.19+](/blog/2020/09/03/warnings/#deprecation-warnings) to locate use of deprecated APIs. ### Migrate to non-deprecated APIs @@ -340,11 +340,11 @@ to locate use of deprecated APIs. You can use the `kubectl-convert` command (`kubectl convert` prior to v1.20) to automatically convert an existing object: - + `kubectl-convert -f <file> --output-version <group>/<version>`. For example, to convert an older Deployment to `apps/v1`, you can run: - + `kubectl-convert -f ./my-deployment.yaml --output-version apps/v1` Note that this may use non-ideal default values. 
To learn more about a specific diff --git a/content/en/docs/setup/best-practices/multiple-zones.md b/content/en/docs/setup/best-practices/multiple-zones.md index 8f51a3bd060..3bce1669372 100644 --- a/content/en/docs/setup/best-practices/multiple-zones.md +++ b/content/en/docs/setup/best-practices/multiple-zones.md @@ -63,7 +63,7 @@ These labels can include If your cluster spans multiple zones or regions, you can use node labels in conjunction with -[Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/) +[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) to control how Pods are spread across your cluster among fault domains: regions, zones, and even specific nodes. These hints enable the diff --git a/content/en/docs/setup/production-environment/_index.md b/content/en/docs/setup/production-environment/_index.md index 32c91d4f752..f1c6587c752 100644 --- a/content/en/docs/setup/production-environment/_index.md +++ b/content/en/docs/setup/production-environment/_index.md @@ -55,7 +55,7 @@ are influenced by the following issues: Before building a Kubernetes production environment on your own, consider handing off some or all of this job to [Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/) -providers or other [Kubernetes Partners](https://kubernetes.io/partners/). +providers or other [Kubernetes Partners](/partners/). Options include: - *Serverless*: Just run workloads on third-party equipment without managing @@ -288,7 +288,7 @@ needs of your cluster's workloads: - Decide if you want to build your own production Kubernetes or obtain one from available [Turnkey Cloud Solutions](/docs/setup/production-environment/turnkey-solutions/) - or [Kubernetes Partners](https://kubernetes.io/partners/). + or [Kubernetes Partners](/partners/). 
- If you choose to build your own cluster, plan how you want to handle [certificates](/docs/setup/best-practices/certificates/) and set up high availability for features such as diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index dd432afd3de..44f098ccfcf 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -179,9 +179,9 @@ Follow the instructions for [getting started with containerd](https://github.com {{% tab name="Linux" %}} You can find this file under the path `/etc/containerd/config.toml`. {{% /tab %}} -{{< tab name="Windows" >}} +{{% tab name="Windows" %}} You can find this file under the path `C:\Program Files\containerd\config.toml`. -{{< /tab >}} +{{% /tab %}} {{< /tabs >}} On Linux the default CRI socket for containerd is `/run/containerd/containerd.sock`. @@ -217,7 +217,7 @@ When using kubeadm, manually configure the #### Overriding the sandbox (pause) image {#override-pause-image-containerd} -In your [containerd config](https://github.com/containerd/cri/blob/master/docs/config.md) you can overwrite the +In your [containerd config](https://github.com/containerd/containerd/blob/main/docs/cri/config.md) you can overwrite the sandbox image by setting the following config: ```toml diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index eedee3b5a30..9aa1dfbff5b 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -11,7 +11,7 @@ weight: 30 Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices. 
In fact, you can use `kubeadm` to set up a cluster that will pass the -[Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). +[Kubernetes Conformance tests](/blog/2017/10/software-conformance-certification/). `kubeadm` also supports other cluster lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades. @@ -76,8 +76,9 @@ Install a {{< glossary_tooltip term_id="container-runtime" text="container runti For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/). {{< note >}} -If you have already installed kubeadm, run `apt-get update && -apt-get upgrade` or `yum update` to get the latest version of kubeadm. +If you have already installed kubeadm, run +`apt-get update && apt-get upgrade` or +`yum update` to get the latest version of kubeadm. When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for kubeadm to tell it what to do. This crashloop is expected and normal. @@ -582,7 +583,7 @@ Example for `kubeadm upgrade`: or {{< skew currentVersion >}} To learn more about the version skew between the different Kubernetes component see -the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/). +the [Version Skew Policy](/releases/version-skew-policy/). 
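The version skew guidance above can be sanity-checked with a small script before an upgrade. This is a minimal sketch, assuming the rule that a kubelet may be up to two MINOR versions older than the API server (verify the exact limit in the Version Skew Policy for your release); the `minor` and `skew_ok` helper names are illustrative, not part of kubeadm:

```shell
#!/bin/sh
# Sketch: compare the MINOR versions of the API server and a kubelet.
# Assumes the kubelet may lag the API server by at most two minor
# versions; check the Version Skew Policy for the current limit.

minor() {
  # v1.24.3 -> 24
  echo "${1#v}" | cut -d. -f2
}

skew_ok() {
  api_minor=$(minor "$1")
  kubelet_minor=$(minor "$2")
  diff=$((api_minor - kubelet_minor))
  # kubelet must not be newer than the API server, nor too old
  [ "$diff" -ge 0 ] && [ "$diff" -le 2 ]
}

skew_ok v1.24.1 v1.22.5 && echo "skew ok"
skew_ok v1.24.1 v1.21.0 || echo "kubelet too old"
```

In practice you would feed in `kubeadm version -o short` and the `kubelet --version` reported by each node.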
## Limitations {#limitations} diff --git a/content/en/docs/setup/production-environment/tools/kubespray.md b/content/en/docs/setup/production-environment/tools/kubespray.md index e0525562c65..9877088a5b1 100644 --- a/content/en/docs/setup/production-environment/tools/kubespray.md +++ b/content/en/docs/setup/production-environment/tools/kubespray.md @@ -8,19 +8,24 @@ weight: 30 This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray). -Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. Kubespray provides: +Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks. -* a highly available cluster -* composable attributes -* support for most popular Linux distributions - * Ubuntu 16.04, 18.04, 20.04 - * CentOS/RHEL/Oracle Linux 7, 8 - * Debian Buster, Jessie, Stretch, Wheezy - * Fedora 31, 32 - * Fedora CoreOS - * openSUSE Leap 15 - * Flatcar Container Linux by Kinvolk -* continuous integration tests +Kubespray provides: +* Highly available cluster. +* Composable (Choice of the network plugin for instance). 
+* Supports most popular Linux distributions: + - Flatcar Container Linux by Kinvolk + - Debian Bullseye, Buster, Jessie, Stretch + - Ubuntu 16.04, 18.04, 20.04, 22.04 + - CentOS/RHEL 7, 8 + - Fedora 34, 35 + - Fedora CoreOS + - openSUSE Leap 15.x/Tumbleweed + - Oracle Linux 7, 8 + - Alma Linux 8 + - Rocky Linux 8 + - Amazon Linux 2 +* Continuous integration tests. To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/reference/setup-tools/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/). @@ -33,13 +38,13 @@ To choose a tool which best fits your use case, read [this comparison](https://g Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements): -* **Ansible v2.9 and python-netaddr are installed on the machine that will run Ansible commands** -* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks** -* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)) -* The target servers are configured to allow **IPv4 forwarding** -* **Your ssh key must be copied** to all the servers in your inventory -* **Firewalls are not managed by kubespray**. You'll need to implement appropriate rules as needed. 
You should disable your firewall in order to avoid any issues during deployment -* If kubespray is run from a non-root user account, correct privilege escalation method should be configured in the target servers and the `ansible_become` flag or command parameters `--become` or `-b` should be specified +* **Minimum required version of Kubernetes is v1.22** +* **Ansible v2.11+, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands** +* The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (see [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md)). +* The target servers are configured to allow **IPv4 forwarding**. +* If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**. +* The **firewalls are not managed**: you'll need to implement your own rules as needed. To avoid any issues during deployment, you should disable your firewall. +* If kubespray is run from a non-root user account, a correct privilege escalation method should be configured in the target servers. The `ansible_become` flag or the command parameters `--become` or `-b` should then be specified. Kubespray provides the following utilities to help provision your environment: @@ -110,11 +115,10 @@ When running the reset playbook, be sure not to accidentally target your production cluster! ## Feedback -* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)) -* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues) +* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)). +* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues).
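The IPv4 forwarding requirement listed above can be preflighted with a short script. This is a sketch under stated assumptions: the `ipv4_forwarding_enabled` helper name is made up, and on a real target server you would pass `/proc/sys/net/ipv4/ip_forward`:

```shell
#!/bin/sh
# Sketch: check the "IPv4 forwarding" requirement on a target server.
# The sysctl value is read from a file path passed as an argument so
# the check can also be exercised against an ordinary test file.

ipv4_forwarding_enabled() {
  [ "$(cat "$1" 2>/dev/null)" = "1" ]
}

if ipv4_forwarding_enabled /proc/sys/net/ipv4/ip_forward; then
  echo "IPv4 forwarding is enabled"
else
  echo "enable it first, e.g.: sysctl -w net.ipv4.ip_forward=1"
fi
```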
## {{% heading "whatsnext" %}} - -Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md). - +* Check out planned work on Kubespray's [roadmap](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/roadmap.md). +* Learn more about [Kubespray](https://github.com/kubernetes-sigs/kubespray). diff --git a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md index 95066ac6121..897d44a54f9 100644 --- a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md +++ b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md @@ -127,7 +127,7 @@ the shared Volume is lost. ## {{% heading "whatsnext" %}} -* Learn more about [patterns for composite containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns). +* Learn more about [patterns for composite containers](/blog/2015/06/the-distributed-system-toolkit-patterns/). * Learn about [composite containers for modular architecture](https://www.slideshare.net/Docker/slideshare-burns). diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md index 5732aed3af8..3d620e4b63f 100644 --- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md +++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md @@ -226,7 +226,7 @@ mvn install See [https://github.com/kubernetes-client/java/releases](https://github.com/kubernetes-client/java/releases) to see which versions are supported. The Java client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) -as the kubectl CLI does to locate and authenticate to the API server. 
See this [example](https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java): +as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java): ```java package io.kubernetes.client.examples; diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md index 44effe93403..f48ddaac662 100644 --- a/content/en/docs/tasks/administer-cluster/certificates.md +++ b/content/en/docs/tasks/administer-cluster/certificates.md @@ -4,124 +4,156 @@ content_type: task weight: 20 --- - When using client certificate authentication, you can generate certificates manually through `easyrsa`, `openssl` or `cfssl`. - - - ### easyrsa **easyrsa** can manually generate certificates for your cluster. -1. Download, unpack, and initialize the patched version of easyrsa3. +1. Download, unpack, and initialize the patched version of `easyrsa3`. - curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz - tar xzf easy-rsa.tar.gz - cd easy-rsa-master/easyrsa3 - ./easyrsa init-pki -1. Generate a new certificate authority (CA). `--batch` sets automatic mode; - `--req-cn` specifies the Common Name (CN) for the CA's new root certificate. + ```shell + curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + tar xzf easy-rsa.tar.gz + cd easy-rsa-master/easyrsa3 + ./easyrsa init-pki + ``` +1. Generate a new certificate authority (CA). `--batch` sets automatic mode; + `--req-cn` specifies the Common Name (CN) for the CA's new root certificate. - ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass -1. Generate server certificate and key. 
- The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will - be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR - that is specified as the `--service-cluster-ip-range` argument for both the API server and - the controller manager component. The argument `--days` is used to set the number of days - after which the certificate expires. - The sample below also assumes that you are using `cluster.local` as the default - DNS domain name. + ```shell + ./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass + ``` - ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ - "IP:${MASTER_CLUSTER_IP},"\ - "DNS:kubernetes,"\ - "DNS:kubernetes.default,"\ - "DNS:kubernetes.default.svc,"\ - "DNS:kubernetes.default.svc.cluster,"\ - "DNS:kubernetes.default.svc.cluster.local" \ - --days=10000 \ - build-server-full server nopass -1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory. -1. Fill in and add the following parameters into the API server start parameters: +1. Generate server certificate and key. - --client-ca-file=/yourdirectory/ca.crt - --tls-cert-file=/yourdirectory/server.crt - --tls-private-key-file=/yourdirectory/server.key + The argument `--subject-alt-name` sets the possible IPs and DNS names the API server will + be accessed with. The `MASTER_CLUSTER_IP` is usually the first IP from the service CIDR + that is specified as the `--service-cluster-ip-range` argument for both the API server and + the controller manager component. The argument `--days` is used to set the number of days + after which the certificate expires. + The sample below also assumes that you are using `cluster.local` as the default + DNS domain name. 
+ + ```shell + ./easyrsa --subject-alt-name="IP:${MASTER_IP},"\ + "IP:${MASTER_CLUSTER_IP},"\ + "DNS:kubernetes,"\ + "DNS:kubernetes.default,"\ + "DNS:kubernetes.default.svc,"\ + "DNS:kubernetes.default.svc.cluster,"\ + "DNS:kubernetes.default.svc.cluster.local" \ + --days=10000 \ + build-server-full server nopass + ``` + +1. Copy `pki/ca.crt`, `pki/issued/server.crt`, and `pki/private/server.key` to your directory. + +1. Fill in and add the following parameters into the API server start parameters: + + ```shell + --client-ca-file=/yourdirectory/ca.crt + --tls-cert-file=/yourdirectory/server.crt + --tls-private-key-file=/yourdirectory/server.key + ``` ### openssl **openssl** can manually generate certificates for your cluster. -1. Generate a ca.key with 2048bit: +1. Generate a ca.key with 2048bit: - openssl genrsa -out ca.key 2048 -1. According to the ca.key generate a ca.crt (use -days to set the certificate effective time): + ```shell + openssl genrsa -out ca.key 2048 + ``` - openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt -1. Generate a server.key with 2048bit: +1. According to the ca.key generate a ca.crt (use `-days` to set the certificate effective time): - openssl genrsa -out server.key 2048 -1. Create a config file for generating a Certificate Signing Request (CSR). - Be sure to substitute the values marked with angle brackets (e.g. ``) - with real values before saving this to a file (e.g. `csr.conf`). - Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the - API server as described in previous subsection. - The sample below also assumes that you are using `cluster.local` as the default - DNS domain name. + ```shell + openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt + ``` - [ req ] - default_bits = 2048 - prompt = no - default_md = sha256 - req_extensions = req_ext - distinguished_name = dn +1. 
Generate a server.key with 2048bit: - [ dn ] - C = - ST = - L = - O = - OU = - CN = + ```shell + openssl genrsa -out server.key 2048 + ``` - [ req_ext ] - subjectAltName = @alt_names +1. Create a config file for generating a Certificate Signing Request (CSR). - [ alt_names ] - DNS.1 = kubernetes - DNS.2 = kubernetes.default - DNS.3 = kubernetes.default.svc - DNS.4 = kubernetes.default.svc.cluster - DNS.5 = kubernetes.default.svc.cluster.local - IP.1 = - IP.2 = + Be sure to substitute the values marked with angle brackets (e.g. ``) + with real values before saving this to a file (e.g. `csr.conf`). + Note that the value for `MASTER_CLUSTER_IP` is the service cluster IP for the + API server as described in previous subsection. + The sample below also assumes that you are using `cluster.local` as the default + DNS domain name. - [ v3_ext ] - authorityKeyIdentifier=keyid,issuer:always - basicConstraints=CA:FALSE - keyUsage=keyEncipherment,dataEncipherment - extendedKeyUsage=serverAuth,clientAuth - subjectAltName=@alt_names -1. Generate the certificate signing request based on the config file: + ```ini + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn - openssl req -new -key server.key -out server.csr -config csr.conf -1. Generate the server certificate using the ca.key, ca.crt and server.csr: + [ dn ] + C = + ST = + L = + O = + OU = + CN = - openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ - -CAcreateserial -out server.crt -days 10000 \ - -extensions v3_ext -extfile csr.conf -1. View the certificate signing request: + [ req_ext ] + subjectAltName = @alt_names - openssl req -noout -text -in ./server.csr -1. 
View the certificate: + [ alt_names ] + DNS.1 = kubernetes + DNS.2 = kubernetes.default + DNS.3 = kubernetes.default.svc + DNS.4 = kubernetes.default.svc.cluster + DNS.5 = kubernetes.default.svc.cluster.local + IP.1 = + IP.2 = - openssl x509 -noout -text -in ./server.crt + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth,clientAuth + subjectAltName=@alt_names + ``` + +1. Generate the certificate signing request based on the config file: + + ```shell + openssl req -new -key server.key -out server.csr -config csr.conf + ``` + +1. Generate the server certificate using the ca.key, ca.crt and server.csr: + + ```shell + openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out server.crt -days 10000 \ + -extensions v3_ext -extfile csr.conf + ``` + +1. View the certificate signing request: + + ```shell + openssl req -noout -text -in ./server.csr + ``` + +1. View the certificate: + + ```shell + openssl x509 -noout -text -in ./server.crt + ``` Finally, add the same parameters into the API server start parameters. @@ -129,101 +161,121 @@ Finally, add the same parameters into the API server start parameters. **cfssl** is another tool for certificate generation. -1. Download, unpack and prepare the command line tools as shown below. - Note that you may need to adapt the sample commands based on the hardware - architecture and cfssl version you are using. +1. Download, unpack and prepare the command line tools as shown below. - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl - chmod +x cfssl - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson - chmod +x cfssljson - curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo - chmod +x cfssl-certinfo -1. 
Create a directory to hold the artifacts and initialize cfssl: + Note that you may need to adapt the sample commands based on the hardware + architecture and cfssl version you are using. - mkdir cert - cd cert - ../cfssl print-defaults config > config.json - ../cfssl print-defaults csr > csr.json -1. Create a JSON config file for generating the CA file, for example, `ca-config.json`: + ```shell + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o cfssl + chmod +x cfssl + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o cfssljson + chmod +x cfssljson + curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64 -o cfssl-certinfo + chmod +x cfssl-certinfo + ``` - { - "signing": { - "default": { - "expiry": "8760h" - }, - "profiles": { - "kubernetes": { - "usages": [ - "signing", - "key encipherment", - "server auth", - "client auth" - ], - "expiry": "8760h" - } - } - } - } -1. Create a JSON config file for CA certificate signing request (CSR), for example, - `ca-csr.json`. Be sure to replace the values marked with angle brackets with - real values you want to use. +1. Create a directory to hold the artifacts and initialize cfssl: - { - "CN": "kubernetes", - "key": { - "algo": "rsa", - "size": 2048 - }, - "names":[{ - "C": "", - "ST": "", - "L": "", - "O": "", - "OU": "" - }] - } -1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`): + ```shell + mkdir cert + cd cert + ../cfssl print-defaults config > config.json + ../cfssl print-defaults csr > csr.json + ``` - ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca -1. Create a JSON config file for generating keys and certificates for the API - server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with - real values you want to use. 
The `MASTER_CLUSTER_IP` is the service cluster - IP for the API server as described in previous subsection. - The sample below also assumes that you are using `cluster.local` as the default - DNS domain name. +1. Create a JSON config file for generating the CA file, for example, `ca-config.json`: - { - "CN": "kubernetes", - "hosts": [ - "127.0.0.1", - "", - "", - "kubernetes", - "kubernetes.default", - "kubernetes.default.svc", - "kubernetes.default.svc.cluster", - "kubernetes.default.svc.cluster.local" - ], - "key": { - "algo": "rsa", - "size": 2048 - }, - "names": [{ - "C": "", - "ST": "", - "L": "", - "O": "", - "OU": "" - }] - } -1. Generate the key and certificate for the API server, which are by default - saved into file `server-key.pem` and `server.pem` respectively: + ```json + { + "signing": { + "default": { + "expiry": "8760h" + }, + "profiles": { + "kubernetes": { + "usages": [ + "signing", + "key encipherment", + "server auth", + "client auth" + ], + "expiry": "8760h" + } + } + } + } + ``` - ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ +1. Create a JSON config file for CA certificate signing request (CSR), for example, + `ca-csr.json`. Be sure to replace the values marked with angle brackets with + real values you want to use. + + ```json + { + "CN": "kubernetes", + "key": { + "algo": "rsa", + "size": 2048 + }, + "names":[{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } + ``` + +1. Generate CA key (`ca-key.pem`) and certificate (`ca.pem`): + + ```shell + ../cfssl gencert -initca ca-csr.json | ../cfssljson -bare ca + ``` + +1. Create a JSON config file for generating keys and certificates for the API + server, for example, `server-csr.json`. Be sure to replace the values in angle brackets with + real values you want to use. The `` is the service cluster + IP for the API server as described in previous subsection. + The sample below also assumes that you are using `cluster.local` as the default + DNS domain name. 
+ + ```json + { + "CN": "kubernetes", + "hosts": [ + "127.0.0.1", + "", + "", + "kubernetes", + "kubernetes.default", + "kubernetes.default.svc", + "kubernetes.default.svc.cluster", + "kubernetes.default.svc.cluster.local" + ], + "key": { + "algo": "rsa", + "size": 2048 + }, + "names": [{ + "C": "", + "ST": "", + "L": "", + "O": "", + "OU": "" + }] + } + ``` + +1. Generate the key and certificate for the API server, which are by default + saved into file `server-key.pem` and `server.pem` respectively: + + ```shell + ../cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \ --config=ca-config.json -profile=kubernetes \ server-csr.json | ../cfssljson -bare server - + ``` ## Distributing Self-Signed CA Certificate @@ -234,12 +286,12 @@ refresh the local list for valid certificates. On each client, perform the following operations: -```bash +```shell sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt sudo update-ca-certificates ``` -``` +```none Updating certificates in /etc/ssl/certs... 1 added, 0 removed; done. Running hooks in /etc/ca-certificates/update.d.... @@ -250,6 +302,6 @@ done. You can use the `certificates.k8s.io` API to provision x509 certificates to use for authentication as documented -[here](/docs/tasks/tls/managing-tls-in-a-cluster). - +in the [Managing TLS in a cluster](/docs/tasks/tls/managing-tls-in-a-cluster) +task page. 
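The certificate steps above describe the master cluster IP as the first IP from the service CIDR passed via `--service-cluster-ip-range`. A minimal sketch of that derivation, assuming an IPv4 CIDR written with its network address (the `first_service_ip` helper is illustrative only):

```shell
#!/bin/sh
# Sketch: derive the service cluster IP used in the certificate SANs
# as the first usable address of the service CIDR, e.g. 10.96.0.0/12.

first_service_ip() {
  net=${1%/*}                      # strip the prefix length
  o1=$(echo "$net" | cut -d. -f1)
  o2=$(echo "$net" | cut -d. -f2)
  o3=$(echo "$net" | cut -d. -f3)
  o4=$(echo "$net" | cut -d. -f4)
  echo "$o1.$o2.$o3.$((o4 + 1))"   # network address + 1
}

first_service_ip 10.96.0.0/12   # prints 10.96.0.1
```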
diff --git a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md index 45abdde6aca..7d0890197bc 100644 --- a/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md +++ b/content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md @@ -15,7 +15,7 @@ content_type: task ## Background -As part of the [cloud provider extraction effort](https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/), all cloud specific controllers must be moved out of the `kube-controller-manager`. All existing clusters that run cloud controllers in the `kube-controller-manager` must migrate to instead run the controllers in a cloud provider specific `cloud-controller-manager`. +As part of the [cloud provider extraction effort](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/), all cloud specific controllers must be moved out of the `kube-controller-manager`. All existing clusters that run cloud controllers in the `kube-controller-manager` must migrate to instead run the controllers in a cloud provider specific `cloud-controller-manager`. Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud specific" controllers between the `kube-controller-manager` and the `cloud-controller-manager` via a shared resource lock between the two components while upgrading the replicated control plane. For a single-node control plane, or if unavailability of controller managers can be tolerated during the upgrade, Leader Migration is not needed and this guide can be ignored. 
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 783b1079271..66f59a3dadb 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -11,7 +11,7 @@ weight: 20 This page explains how to upgrade a Kubernetes cluster created with kubeadm from version {{< skew currentVersionAddMinor -1 >}}.x to version {{< skew currentVersion >}}.x, and from version {{< skew currentVersion >}}.x to {{< skew currentVersion >}}.y (where `y > x`). Skipping MINOR versions -when upgrading is unsupported. For more details, please visit [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/). +when upgrading is unsupported. For more details, please visit [Version Skew Policy](/releases/version-skew-policy/). To see information about upgrading clusters created using older versions of kubeadm, please refer to following pages instead: diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index 668f4532a51..2ed522628d3 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -27,12 +27,12 @@ The configuration file must be a JSON or YAML representation of the parameters in this struct. Make sure the Kubelet has read permissions on the file. 
Here is an example of what this file might look like: -``` +```yaml apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration -address: "192.168.0.8", -port: 20250, -serializeImagePulls: false, +address: "192.168.0.8" +port: 20250 +serializeImagePulls: false evictionHard: memory.available: "200Mi" ``` diff --git a/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md b/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md index 2b088b7a5dc..90e4a11f631 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-in-userns.md @@ -41,13 +41,12 @@ See [Running kind with Rootless Docker](https://kind.sigs.k8s.io/docs/user/rootl ### minikube -[minikube](https://minikube.sigs.k8s.io/) also supports running Kubernetes inside Rootless Docker. +[minikube](https://minikube.sigs.k8s.io/) also supports running Kubernetes inside Rootless Docker or Rootless Podman. -See the page about the [docker](https://minikube.sigs.k8s.io/docs/drivers/docker/) driver in the Minikube documentation. +See the Minikube documentation: -Rootless Podman is not supported. - - +* [Rootless Docker](https://minikube.sigs.k8s.io/docs/drivers/docker/) +* [Rootless Podman](https://minikube.sigs.k8s.io/docs/drivers/podman/) ## Running Kubernetes inside Unprivileged Containers diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md index 8c5c8b3a724..5b6afe04e5a 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd.md @@ -5,8 +5,8 @@ content_type: task --- This task outlines the steps needed to update your container runtime to containerd from Docker. 
It -is applicable for cluster operators running Kubernetes 1.23 or earlier. Also this covers an -example scenario for migrating from dockershim to containerd and alternative container runtimes +is applicable for cluster operators running Kubernetes 1.23 or earlier. This also covers an +example scenario for migrating from dockershim to containerd. Alternative container runtimes can be picked from this [page](/docs/setup/production-environment/container-runtimes/). ## {{% heading "prerequisites" %}} @@ -100,7 +100,7 @@ then run the following commands: Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtime to the flags. `--container-runtime=remote` and -`--container-runtime-endpoint=unix:///run/containerd/containerd.sock"`. +`--container-runtime-endpoint=unix:///run/containerd/containerd.sock`. Users using kubeadm should be aware that the `kubeadm` tool stores the CRI socket for each host as an annotation in the Node object for that host. To change it you can execute the following command diff --git a/content/en/docs/tasks/administer-cluster/nodelocaldns.md b/content/en/docs/tasks/administer-cluster/nodelocaldns.md index c953196fa19..ba019080690 100644 --- a/content/en/docs/tasks/administer-cluster/nodelocaldns.md +++ b/content/en/docs/tasks/administer-cluster/nodelocaldns.md @@ -3,7 +3,7 @@ reviewers: - bowei - zihongz - sftim -title: Using NodeLocal DNSCache in Kubernetes clusters +title: Using NodeLocal DNSCache in Kubernetes Clusters content_type: task --- @@ -40,7 +40,7 @@ hostnames ("`cluster.local`" suffix by default). [conntrack races](https://github.com/kubernetes/kubernetes/issues/56903) and avoid UDP DNS entries filling up conntrack table. -* Connections from local caching agent to kube-dns service can be upgraded to TCP. +* Connections from the local caching agent to kube-dns service can be upgraded to TCP. 
TCP conntrack entries will be removed on connection close in contrast with UDP entries that have to timeout ([default](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt) @@ -52,7 +52,7 @@ hostnames ("`cluster.local`" suffix by default). * Metrics & visibility into DNS requests at a node level. -* Negative caching can be re-enabled, thereby reducing number of queries to kube-dns service. +* Negative caching can be re-enabled, thereby reducing the number of queries for the kube-dns service. ## Architecture Diagram @@ -66,7 +66,7 @@ This is the path followed by DNS Queries after NodeLocal DNSCache is enabled: {{< note >}} The local listen IP address for NodeLocal DNSCache can be any address that can be guaranteed to not collide with any existing IP in your cluster. -It's recommended to use an address with a local scope, per example, +It's recommended to use an address with a local scope, for example, from the 'link-local' range '169.254.0.0/16' for IPv4 or from the 'Unique Local Address' range in IPv6 'fd00::/8'. {{< /note >}} @@ -77,9 +77,9 @@ This feature can be enabled using the following steps: [`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml) and save it as `nodelocaldns.yaml.` -* If using IPv6, the CoreDNS configuration file need to enclose all the IPv6 addresses +* If using IPv6, the CoreDNS configuration file needs to enclose all the IPv6 addresses into square brackets if used in 'IP:Port' format. 
- If you are using the sample manifest from the previous point, this will require to modify + If you are using the sample manifest from the previous point, this will require you to modify [the configuration line L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70) like this: "`health [__PILLAR__LOCAL__DNS__]:8080`" @@ -103,7 +103,7 @@ This feature can be enabled using the following steps: `__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` will be populated by the `node-local-dns` pods. In this mode, the `node-local-dns` pods listen on both the kube-dns service IP - as well as ``, so pods can lookup DNS records using either IP address. + as well as ``, so pods can look up DNS records using either IP address. * If kube-proxy is running in IPVS mode: diff --git a/content/en/docs/tasks/administer-cluster/verify-signed-images.md b/content/en/docs/tasks/administer-cluster/verify-signed-images.md index 5ae1db11348..074a9f3410e 100644 --- a/content/en/docs/tasks/administer-cluster/verify-signed-images.md +++ b/content/en/docs/tasks/administer-cluster/verify-signed-images.md @@ -68,5 +68,5 @@ e.g. [conformance image](https://github.com/kubernetes/kubernetes/blob/master/te admission controller. 
To get started with `cosigned` here are a few helpful resources: -* [Installation](https://github.com/sigstore/helm-charts/tree/main/charts/cosigned) -* [Configuration Options](https://github.com/sigstore/cosign/tree/main/config) +* [Installation](https://github.com/sigstore/cosign#installation) +* [Configuration Options](https://github.com/sigstore/cosign/blob/main/USAGE.md#detailed-usage) diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 6fb5cdca3d1..e3299ec8434 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -13,65 +13,70 @@ description: Creating Secret objects using resource configuration file. -## Create the Config file +## Create the Secret {#create-the-config-file} -You can create a Secret in a file first, in JSON or YAML format, and then -create that object. The +You can define the `Secret` object in a manifest first, in JSON or YAML format, +and then create that object. The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) resource contains two maps: `data` and `stringData`. The `data` field is used to store arbitrary data, encoded using base64. The `stringData` field is provided for convenience, and it allows you to provide -Secret data as unencoded strings. +the same data as unencoded strings. The keys of `data` and `stringData` must consist of alphanumeric characters, `-`, `_` or `.`. -For example, to store two strings in a Secret using the `data` field, convert -the strings to base64 as follows: +The following example stores two strings in a Secret using the `data` field. -```shell -echo -n 'admin' | base64 -``` +1. 
Convert the strings to base64: -The output is similar to: + ```shell + echo -n 'admin' | base64 + echo -n '1f2d1e2e67df' | base64 + ``` -``` -YWRtaW4= -``` + {{< note >}} + The serialized JSON and YAML values of Secret data are encoded as base64 strings. Newlines are not valid within these strings and must be omitted. When using the `base64` utility on Darwin/macOS, users should avoid using the `-b` option to split long lines. Conversely, Linux users *should* add the option `-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` option is not available. + {{< /note >}} -```shell -echo -n '1f2d1e2e67df' | base64 -``` + The output is similar to: -The output is similar to: + ``` + YWRtaW4= + MWYyZDFlMmU2N2Rm + ``` -``` -MWYyZDFlMmU2N2Rm -``` +1. Create the manifest: -Write a Secret config file that looks like this: + ```yaml + apiVersion: v1 + kind: Secret + metadata: + name: mysecret + type: Opaque + data: + username: YWRtaW4= + password: MWYyZDFlMmU2N2Rm + ``` -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: mysecret -type: Opaque -data: - username: YWRtaW4= - password: MWYyZDFlMmU2N2Rm -``` + Note that the name of a Secret object must be a valid + [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). -Note that the name of a Secret object must be a valid -[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). +1. Create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): -{{< note >}} -The serialized JSON and YAML values of Secret data are encoded as base64 -strings. Newlines are not valid within these strings and must be omitted. When -using the `base64` utility on Darwin/macOS, users should avoid using the `-b` -option to split long lines. Conversely, Linux users *should* add the option -`-w 0` to `base64` commands or the pipeline `base64 | tr -d '\n'` if the `-w` -option is not available. 
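The encode step above is easy to verify locally; this quick round-trip sketch uses only plain `base64` (no cluster needed) on the page's own example values:

```shell
# Encode the example strings, then decode one back to confirm the roundtrip.
echo -n 'admin' | base64               # YWRtaW4=
echo -n '1f2d1e2e67df' | base64        # MWYyZDFlMmU2N2Rm
echo -n 'YWRtaW4=' | base64 --decode   # admin
```

Note the `-n` on `echo`: without it, a trailing newline is encoded into the value and the stored Secret data will not match what you expect.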
-{{< /note >}} + ```shell + kubectl apply -f ./secret.yaml + ``` + + The output is similar to: + + ``` + secret/mysecret created + ``` + +To verify that the Secret was created and to decode the Secret data, refer to +[Managing Secrets using kubectl](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#verify-the-secret). + +### Specify unencoded data when creating a Secret For certain scenarios, you may wish to use the `stringData` field instead. This field allows you to put a non-base64 encoded string directly into the Secret, @@ -103,25 +108,10 @@ stringData: username: password: ``` +When you retrieve the Secret data, the command returns the encoded values, +and not the plaintext values you provided in `stringData`. -## Create the Secret object - -Now create the Secret using [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply): - -```shell -kubectl apply -f ./secret.yaml -``` - -The output is similar to: - -``` -secret/mysecret created -``` - -## Check the Secret - -The `stringData` field is a write-only convenience field. It is never output when -retrieving Secrets. For example, if you run the following command: +For example, if you run the following command: ```shell kubectl get secret mysecret -o yaml @@ -143,14 +133,11 @@ metadata: type: Opaque ``` -The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by -default. This is to protect the `Secret` from being exposed accidentally to an onlooker, -or from being stored in a terminal log. -To check the actual content of the encoded data, please refer to -[decoding secret](/docs/tasks/configmap-secret/managing-secret-using-kubectl/#decoding-secret). +### Specifying both `data` and `stringData` -If a field, such as `username`, is specified in both `data` and `stringData`, -the value from `stringData` is used. For example, the following Secret definition: +If you specify a field in both `data` and `stringData`, the value from `stringData` is used. 
+ +For example, if you define the following Secret: ```yaml apiVersion: v1 @@ -164,7 +151,7 @@ stringData: username: administrator ``` -Results in the following Secret: +The `Secret` object is created as follows: ```yaml apiVersion: v1 @@ -180,7 +167,7 @@ metadata: type: Opaque ``` -Where `YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. +`YWRtaW5pc3RyYXRvcg==` decodes to `administrator`. ## Clean Up diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md index 7e607b9b799..72ec2a7bc30 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-kubectl.md @@ -113,6 +113,8 @@ The commands `kubectl get` and `kubectl describe` avoid showing the contents of a `Secret` by default. This is to protect the `Secret` from being exposed accidentally, or from being stored in a terminal log. +To check the actual content of the encoded data, refer to [Decoding the Secret](#decoding-secret). + ## Decoding the Secret {#decoding-secret} To view the contents of the Secret you created, run the following command: diff --git a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 54a9da1c08a..8148a465f04 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -88,7 +88,7 @@ would provision a network resource like a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume. 
Cluster administrators can also use [StorageClasses](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageclass-v1-storage) to set up -[dynamic provisioning](https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes). +[dynamic provisioning](/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes). Here is the configuration file for the hostPath PersistentVolume: diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index a4882ff8829..02329f931ca 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -10,7 +10,7 @@ card: Many applications rely on configuration which is used during either application initialization or runtime. Most of the times there is a requirement to adjust values assigned to configuration parameters. -ConfigMaps is the kubernetes way to inject application pods with configuration data. +ConfigMaps are the Kubernetes way to inject application pods with configuration data. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps. @@ -623,24 +623,6 @@ Like before, all previous files in the `/etc/config/` directory will be deleted. You can project keys to specific paths and specific permissions on a per-file basis. The [Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod) user guide explains the syntax. -### Optional References - -A ConfigMap reference may be marked "optional". If the ConfigMap is non-existent, the mounted volume will be empty. If the ConfigMap exists, but the referenced -key is non-existent the path will be absent beneath the mount point. 
- -### Mounted ConfigMaps are updated automatically - -When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into -existence after a pod has started. - -Kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the -ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as -kubelet sync period (1 minute by default) + TTL of ConfigMaps cache (1 minute by default) in kubelet. - -{{< note >}} -A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. -{{< /note >}} - @@ -675,7 +657,7 @@ data: ### Restrictions -- You must create a ConfigMap before referencing it in a Pod specification (unless you mark the ConfigMap as "optional"). If you reference a ConfigMap that doesn't exist, the Pod won't start. Likewise, references to keys that don't exist in the ConfigMap will prevent the pod from starting. +- You must create the `ConfigMap` object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as `optional` in the Pod spec (see [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist and you don't mark the reference as `optional`, the Pod won't start. Similarly, references to keys that don't exist in the ConfigMap will also prevent the Pod from starting, unless you mark the key references as `optional`. - If you use `envFrom` to define environment variables from ConfigMaps, keys that are considered invalid will be skipped. The pod will be allowed to start, but the invalid names will be recorded in the event log (`InvalidVariableNames`). The log message lists each skipped key. 
For example: @@ -693,7 +675,75 @@ data: - You can't use ConfigMaps for {{< glossary_tooltip text="static pods" term_id="static-pod" >}}, because the Kubelet does not support this. +### Optional ConfigMaps +You can mark a reference to a ConfigMap as _optional_ in a Pod specification. +If the ConfigMap doesn't exist, the configuration for which it provides data in the Pod (e.g. environment variable, mounted volume) will be empty. +If the ConfigMap exists, but the referenced key is non-existent the data is also empty. + +For example, the following Pod specification marks an environment variable from a ConfigMap as optional: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: gcr.io/google_containers/busybox + command: [ "/bin/sh", "-c", "env" ] + env: + - name: SPECIAL_LEVEL_KEY + valueFrom: + configMapKeyRef: + name: a-config + key: akey + optional: true # mark the variable as optional + restartPolicy: Never +``` + +If you run this pod, and there is no ConfigMap named `a-config`, the output is empty. +If you run this pod, and there is a ConfigMap named `a-config` but that ConfigMap doesn't have +a key named `akey`, the output is also empty. If you do set a value for `akey` in the `a-config` +ConfigMap, this pod prints that value and then terminates. + +You can also mark the volumes and files provided by a ConfigMap as optional. Kubernetes always creates the mount paths for the volume, even if the referenced ConfigMap or key doesn't exist. 
For example, the following +Pod specification marks a volume that references a ConfigMap as optional: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: dapi-test-pod +spec: + containers: + - name: test-container + image: gcr.io/google_containers/busybox + command: [ "/bin/sh", "-c", "ls /etc/config" ] + volumeMounts: + - name: config-volume + mountPath: /etc/config + volumes: + - name: config-volume + configMap: + name: no-config + optional: true # mark the source ConfigMap as optional + restartPolicy: Never +``` + +### Mounted ConfigMaps are updated automatically + +When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into +existence after a pod has started. + +The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the +ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as +kubelet sync period (1 minute by default) + TTL of ConfigMaps cache (1 minute by default) in kubelet. + +{{< note >}} +A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. +{{< /note >}} ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 70122389448..b241eb9d126 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -29,7 +29,7 @@ When they do, they are authenticated as a particular Service Account (for exampl -## Use the Default Service Account to access the API server. 
+## Use the Default Service Account to access the API server When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. @@ -68,7 +68,7 @@ spec: The pod spec takes precedence over the service account if both specify a `automountServiceAccountToken` value. -## Use Multiple Service Accounts. +## Use Multiple Service Accounts Every namespace has a default service account resource called `default`. You can list this and any other serviceAccount resources in the namespace with this command: @@ -136,7 +136,7 @@ You can clean up the service account from this example like this: kubectl delete serviceaccount/build-robot ``` -## Manually create a service account API token. +## Manually create a service account API token Suppose we have an existing service account named "build-robot" as mentioned above, and we create a new secret manually. diff --git a/content/en/docs/tasks/debug/debug-application/debug-statefulset.md b/content/en/docs/tasks/debug/debug-application/debug-statefulset.md index 73c0d0c78ad..428b8d0ee56 100644 --- a/content/en/docs/tasks/debug/debug-application/debug-statefulset.md +++ b/content/en/docs/tasks/debug/debug-application/debug-statefulset.md @@ -24,11 +24,11 @@ This task shows you how to debug a StatefulSet. 
## Debugging a StatefulSet -In order to list all the pods which belong to a StatefulSet, which have a label `app=myapp` set on them, +In order to list all the pods which belong to a StatefulSet, which have a label `app.kubernetes.io/name=MyApp` set on them, you can use the following: ```shell -kubectl get pods -l app=myapp +kubectl get pods -l app.kubernetes.io/name=MyApp ``` If you find that any Pods listed are in `Unknown` or `Terminating` state for an extended period of time, diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index 7766bcedf3b..812c723a51b 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -362,9 +362,9 @@ and create it: kubectl create --validate=false -f my-crontab.yaml -o yaml ``` -your output is similar to: +Your output is similar to: -```console +```yaml apiVersion: stable.example.com/v1 kind: CronTab metadata: @@ -836,7 +836,7 @@ Validation Rules Examples: | `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration | | `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' | | `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 | -| `type(self) == string ? self == '100%' : self == 1000` | Validate an int-or-string field for both the the int and string cases | +| `type(self) == string ? 
self == '100%' : self == 1000` | Validate an int-or-string field for both the int and string cases | | `self.metadata.name.startsWith(self.prefix)` | Validate that an object's name has the prefix of another field value | | `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint | | `size(self.names) == size(self.details) && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet | @@ -844,7 +844,6 @@ Validation Rules Examples: Xref: [Supported evaluation on CEL](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#evaluation) - - If the Rule is scoped to the root of a resource, it may make field selection into any fields declared in the OpenAPIv3 schema of the CRD as well as `apiVersion`, `kind`, `metadata.name` and `metadata.generateName`. This includes selection of fields in both the `spec` and `status` in the diff --git a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md index e99ea5473c5..ca514103406 100644 --- a/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/en/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -26,21 +26,16 @@ Therefore, jobs should be idempotent. For more limitations, see [CronJobs](/docs/concepts/workloads/controllers/cron-jobs). - - ## {{% heading "prerequisites" %}} - * {{< include "task-tutorial-prereqs.md" >}} - - -## Creating a Cron Job +## Creating a CronJob {#creating-a-cron-job} Cron jobs require a config file. 
-This example cron job config `.spec` file prints the current time and a hello message every minute: +Here is a manifest for a CronJob that runs a simple demonstration task every minute: {{< codenew file="application/job/cronjob.yaml" >}} @@ -60,6 +55,7 @@ After creating the cron job, get its status using this command: ```shell kubectl get cronjob hello ``` + The output is similar to this: ``` @@ -102,14 +98,14 @@ You should see that the cron job `hello` successfully scheduled a job at the tim Now, find the pods that the last scheduled job created and view the standard output of one of the pods. {{< note >}} -The job name and pod name are different. +The job name is different from the pod name. {{< /note >}} ```shell # Replace "hello-4111706356" with the job name in your system pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items[*].metadata.name}) ``` -Show pod log: +Show the pod log: ```shell kubectl logs $pods @@ -121,7 +117,7 @@ Fri Feb 22 11:02:09 UTC 2019 Hello from the Kubernetes cluster ``` -## Deleting a Cron Job +## Deleting a CronJob {#deleting-a-cron-job} When you don't need a cron job any more, delete it with `kubectl delete cronjob `: @@ -132,16 +128,20 @@ kubectl delete cronjob hello Deleting the cron job removes all the jobs and pods it created and stops it from creating additional jobs. You can read more about removing jobs in [garbage collection](/docs/concepts/architecture/garbage-collection/). -## Writing a Cron Job Spec +## Writing a CronJob Spec {#writing-a-cron-job-spec} -As with all other Kubernetes configs, a cron job needs `apiVersion`, `kind`, and `metadata` fields. For general -information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), +As with all other Kubernetes objects, a CronJob must have `apiVersion`, `kind`, and `metadata` fields. 
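Those required fields can be seen together in a minimal manifest. The following sketch mirrors the upstream `application/job/cronjob.yaml` example referenced earlier (reproduced from memory of that example, so verify the details against your release):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"        # run once every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```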
+For more information about working with Kubernetes objects and their
+{{< glossary_tooltip text="manifests" term_id="manifest" >}}, see the
+[managing resources](/docs/concepts/cluster-administration/manage-deployment/),
and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.

-A cron job config also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
+Each manifest for a CronJob also needs a [`.spec`](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status) section.

 {{< note >}}
-All modifications to a cron job, especially its `.spec`, are applied only to the following runs.
+If you modify a CronJob, the changes you make will apply to new jobs that start to run after your modification
+is complete. Jobs (and their Pods) that have already started continue to run without changes.
+That is, the CronJob does _not_ update existing jobs, even if those remain running.
 {{< /note >}}

 ### Schedule

@@ -153,11 +153,11 @@
 as schedule time of its jobs to be created and executed.

 The format also includes extended "Vixie cron" step values. As explained in the
 [FreeBSD manual](https://www.freebsd.org/cgi/man.cgi?crontab%285%29):

-> Step values can be used in conjunction with ranges. Following a range
-> with `/` specifies skips of the number's value through the
-> range. For example, `0-23/2` can be used in the hours field to specify
-> command execution every other hour (the alternative in the V7 standard is
-> `0,2,4,6,8,10,12,14,16,18,20,22`). Steps are also permitted after an
+> Step values can be used in conjunction with ranges. Following a range
+> with `/` specifies skips of the number's value through the
+> range. For example, `0-23/2` can be used in the hours field to specify
+> command execution every other hour (the alternative in the V7 standard is
+> `0,2,4,6,8,10,12,14,16,18,20,22`). 
Steps are also permitted after an > asterisk, so if you want to say "every two hours", just use `*/2`. {{< note >}} @@ -221,5 +221,3 @@ The `.spec.successfulJobsHistoryLimit` and `.spec.failedJobsHistoryLimit` fields These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to `0` corresponds to keeping none of the corresponding kind of jobs after they finish. - - diff --git a/content/en/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md b/content/en/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md index 70e514b4732..16547f0bf45 100644 --- a/content/en/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md +++ b/content/en/docs/tasks/kubelet-credential-provider/kubelet-credential-provider.md @@ -7,7 +7,7 @@ description: Configure the kubelet's image credential provider plugin content_type: task --- -{{< feature-state for_k8s_version="v1.20" state="alpha" >}} +{{< feature-state for_k8s_version="v1.24" state="beta" >}} diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md index 8e0670a89f1..07c631bc3a6 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md @@ -165,8 +165,8 @@ kubectl create --edit -f /tmp/srv.yaml ## {{% heading "whatsnext" %}} -* [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/) -* [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/) +* [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/) +* [Declarative Management of Kubernetes Objects Using Configuration 
Files](/docs/tasks/manage-kubernetes-objects/declarative-config/) * [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/) * [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md index 87cc423da7f..4e59491c62f 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-config.md @@ -161,7 +161,7 @@ template: * [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/) -* [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/) +* [Declarative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/declarative-config/) * [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/) * [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) diff --git a/content/en/docs/tasks/network/validate-dual-stack.md b/content/en/docs/tasks/network/validate-dual-stack.md index 33a8d390914..a27fc8050de 100644 --- a/content/en/docs/tasks/network/validate-dual-stack.md +++ b/content/en/docs/tasks/network/validate-dual-stack.md @@ -134,7 +134,7 @@ spec: protocol: TCP targetPort: 9376 selector: - app: MyApp + app.kubernetes.io/name: MyApp sessionAffinity: None type: ClusterIP status: @@ -158,7 +158,7 @@ apiVersion: v1 kind: Service metadata: labels: - app: MyApp + app.kubernetes.io/name: MyApp name: my-service spec: clusterIP: fd00::5118 @@ -172,7 +172,7 @@ spec: protocol: TCP targetPort: 80 selector: - app: MyApp + app.kubernetes.io/name: MyApp sessionAffinity: None type: ClusterIP status: @@ -187,7 +187,7 @@ Create the following Service that explicitly 
defines `PreferDualStack` in `.spec The `kubectl get svc` command will only show the primary IP in the `CLUSTER-IP` field. ```shell -kubectl get svc -l app=MyApp +kubectl get svc -l app.kubernetes.io/name=MyApp NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service ClusterIP 10.0.216.242 80/TCP 5s @@ -197,15 +197,15 @@ my-service ClusterIP 10.0.216.242 80/TCP 5s Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using `kubectl describe`. You may then validate access to the service via the IPs and ports. ```shell -kubectl describe svc -l app=MyApp +kubectl describe svc -l app.kubernetes.io/name=MyApp ``` ``` Name: my-service Namespace: default -Labels: app=MyApp +Labels: app.kubernetes.io/name=MyApp Annotations: -Selector: app=MyApp +Selector: app.kubernetes.io/name=MyApp Type: ClusterIP IP Family Policy: PreferDualStack IP Families: IPv4,IPv6 @@ -220,14 +220,14 @@ Events: ### Create a dual-stack load balanced Service -If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with `PreferDualStack` in `.spec.ipFamilyPolicy`, `IPv6` as the first element of the `.spec.ipFamilies` array and the `type` field set to `LoadBalancer`. +If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with `PreferDualStack` in `.spec.ipFamilyPolicy`, `IPv6` as the first element of the `.spec.ipFamilies` array and the `type` field set to `LoadBalancer`. {{< codenew file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" >}} Check the Service: ```shell -kubectl get svc -l app=MyApp +kubectl get svc -l app.kubernetes.io/name=MyApp ``` Validate that the Service receives a `CLUSTER-IP` address from the IPv6 address block along with an `EXTERNAL-IP`. You may then validate access to the service via the IP and port. 
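The dual-stack validation steps above check that a Service receives cluster IPs from both the IPv4 and IPv6 address blocks. As a rough local sketch (illustrative only, not part of the docs change), Python's standard `ipaddress` module can classify the addresses that `kubectl describe svc` reports:

```python
import ipaddress

def ip_family(addr: str) -> str:
    """Return 'IPv4' or 'IPv6' for a cluster IP string."""
    return f"IPv{ipaddress.ip_address(addr).version}"

def is_dual_stack(cluster_ips: list[str]) -> bool:
    """A Service is dual-stack when its cluster IPs span both families."""
    return {ip_family(ip) for ip in cluster_ips} == {"IPv4", "IPv6"}

# Addresses taken from the example output above.
assert ip_family("10.0.216.242") == "IPv4"
assert ip_family("fd00::5118") == "IPv6"
print(is_dual_stack(["10.0.216.242", "fd00::5118"]))  # True
```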
diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md index eff3aaee176..a867b73a617 100644 --- a/content/en/docs/tasks/run-application/delete-stateful-set.md +++ b/content/en/docs/tasks/run-application/delete-stateful-set.md @@ -50,10 +50,10 @@ For example: kubectl delete -f --cascade=orphan ``` -By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app=myapp`, you can then delete them as follows: +By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows: ```shell -kubectl delete pods -l app=myapp +kubectl delete pods -l app.kubernetes.io/name=MyApp ``` ### Persistent Volumes @@ -70,13 +70,13 @@ To delete everything in a StatefulSet, including the associated pods, you can ru ```shell grace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}') -kubectl delete statefulset -l app=myapp +kubectl delete statefulset -l app.kubernetes.io/name=MyApp sleep $grace -kubectl delete pvc -l app=myapp +kubectl delete pvc -l app.kubernetes.io/name=MyApp ``` -In the example above, the Pods have the label `app=myapp`; substitute your own label as appropriate. +In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; substitute your own label as appropriate. 
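The `-l app.kubernetes.io/name=MyApp` flags used above select objects by equality on their labels. A minimal sketch of that matching logic (a simplification for illustration; kubectl's real selector syntax also supports set-based expressions):

```python
def matches(labels: dict, selector: str) -> bool:
    """Equality-based label selector, e.g. 'app.kubernetes.io/name=MyApp'."""
    key, _, value = selector.partition("=")
    return labels.get(key) == value

# Hypothetical pod list standing in for `kubectl get pods` output.
pods = [
    {"name": "web-0", "labels": {"app.kubernetes.io/name": "MyApp"}},
    {"name": "db-0", "labels": {"app.kubernetes.io/name": "OtherApp"}},
]
selected = [p["name"] for p in pods if matches(p["labels"], "app.kubernetes.io/name=MyApp")]
print(selected)  # ['web-0']
```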
### Force deletion of StatefulSet pods diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md index 8d9e754c2e3..89127691e99 100644 --- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md @@ -206,7 +206,7 @@ Ready before starting Pod `N+1`. After the init containers complete successfully, the regular containers run. The MySQL Pods consist of a `mysql` container that runs the actual `mysqld` server, and an `xtrabackup` container that acts as a -[sidecar](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns). +[sidecar](/blog/2015/06/the-distributed-system-toolkit-patterns). The `xtrabackup` sidecar looks at the cloned data files and determines if it's necessary to initialize MySQL replication on the replica. diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index 2ca497842d0..bca428d7501 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -18,7 +18,7 @@ draft](https://github.com/ietf-wg-acme/acme/). {{< note >}} Certificates created using the `certificates.k8s.io` API are signed by a -[dedicated CA](#a-note-to-cluster-administrators). It is possible to configure your cluster to use the cluster root +[dedicated CA](#configuring-your-cluster-to-provide-signing). It is possible to configure your cluster to use the cluster root CA for this purpose, but you should never rely on this. Do not assume that these certificates will validate against the cluster root CA. 
{{< /note >}} @@ -42,7 +42,7 @@ install it via your operating system's software sources, or fetch it from ## Trusting TLS in a cluster -Trusting the [custom CA](#a-note-to-cluster-administrators) from an application running as a pod usually requires +Trusting the [custom CA](#configuring-your-cluster-to-provide-signing) from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md index d027ab647d8..77af85d887c 100644 --- a/content/en/docs/tasks/tools/install-kubectl-linux.md +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -110,9 +110,19 @@ For example, to download version {{< param "fullversion" >}} on Linux, type: ```shell sudo apt-get update - sudo apt-get install -y apt-transport-https ca-certificates curl + sudo apt-get install -y ca-certificates curl ``` - + + {{< note >}} + + If you use Debian 9 (stretch) or earlier you would also need to install `apt-transport-https`: + + ```shell + sudo apt-get install -y apt-transport-https + ``` + + {{< /note >}} + 2. Download the Google Cloud public signing key: ```shell diff --git a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md index dccf214c473..8055f6774eb 100644 --- a/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md +++ b/content/en/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md @@ -6,34 +6,63 @@ weight: 10 -In this tutorial you will learn how and why to externalize your microservice’s configuration. 
Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment variables and then consume them using MicroProfile Config. +In this tutorial you will learn how and why to externalize your microservice’s configuration. +Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment +variables and then consume them using MicroProfile Config. ## {{% heading "prerequisites" %}} ### Creating Kubernetes ConfigMaps & Secrets -There are several ways to set environment variables for a Docker container in Kubernetes, including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In the tutorial, you will learn how to use the latter two for setting your environment variables whose values will be injected into your microservices. One of the benefits for using ConfigMaps and Secrets is that they can be re-used across multiple containers, including being assigned to different environment variables for the different containers. -ConfigMaps are API Objects that store non-confidential key-value pairs. In the Interactive Tutorial you will learn how to use a ConfigMap to store the application's name. For more information regarding ConfigMaps, you can find the documentation [here](/docs/tasks/configure-pod-container/configure-pod-configmap/). +There are several ways to set environment variables for a Docker container in Kubernetes, +including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In the +tutorial, you will learn how to use the latter two for setting your environment variables whose +values will be injected into your microservices. One of the benefits for using ConfigMaps and +Secrets is that they can be re-used across multiple containers, including being assigned to +different environment variables for the different containers. 
-Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that they're intended for confidential/sensitive information and are stored using Base64 encoding. This makes secrets the appropriate choice for storing such things as credentials, keys, and tokens, the former of which you'll do in the Interactive Tutorial. For more information on Secrets, you can find the documentation [here](/docs/concepts/configuration/secret/). +ConfigMaps are API Objects that store non-confidential key-value pairs. In the Interactive +Tutorial you will learn how to use a ConfigMap to store the application's name. For more +information regarding ConfigMaps, you can find the documentation +[here](/docs/tasks/configure-pod-container/configure-pod-configmap/). + +Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that +they're intended for confidential/sensitive information and are stored using Base64 encoding. +This makes secrets the appropriate choice for storing such things as credentials, keys, and +tokens, the former of which you'll do in the Interactive Tutorial. For more information on +Secrets, you can find the documentation [here](/docs/concepts/configuration/secret/). ### Externalizing Config from Code -Externalized application configuration is useful because configuration usually changes depending on your environment. In order to accomplish this, we'll use Java's Contexts and Dependency Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set of open Java technologies for developing and deploying cloud-native microservices. -CDI provides a standard dependency injection capability enabling an application to be assembled from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a standard way to obtain config properties from various sources, including the application, runtime, and environment. 
Based on the source's defined priority, the properties are automatically combined into a single set of properties that the application can access via an API. Together, CDI & MicroProfile will be used in the Interactive Tutorial to retrieve the externally provided properties from the Kubernetes ConfigMaps and Secrets and get injected into your application code. +Externalized application configuration is useful because configuration usually changes depending +on your environment. In order to accomplish this, we'll use Java's Contexts and Dependency +Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of MicroProfile, a set +of open Java technologies for developing and deploying cloud-native microservices. -Many open source frameworks and runtimes implement and support MicroProfile Config. Throughout the interactive tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for building and running cloud-native apps and microservices. However, any MicroProfile compatible runtime could be used instead. +CDI provides a standard dependency injection capability enabling an application to be assembled +from collaborating, loosely-coupled beans. MicroProfile Config provides apps and microservices a +standard way to obtain config properties from various sources, including the application, runtime, +and environment. Based on the source's defined priority, the properties are automatically +combined into a single set of properties that the application can access via an API. Together, +CDI & MicroProfile will be used in the Interactive Tutorial to retrieve the externally provided +properties from the Kubernetes ConfigMaps and Secrets and get injected into your application code. + +Many open source frameworks and runtimes implement and support MicroProfile Config. Throughout +the interactive tutorial, you'll be using Open Liberty, a flexible open-source Java runtime for +building and running cloud-native apps and microservices. 
However, any MicroProfile compatible +runtime could be used instead. ## {{% heading "objectives" %}} * Create a Kubernetes ConfigMap and Secret * Inject microservice configuration using MicroProfile Config - ## Example: Externalizing config using MicroProfile, ConfigMaps and Secrets -### [Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) + +[Start Interactive Tutorial](/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/) + diff --git a/content/en/docs/tutorials/security/cluster-level-pss.md b/content/en/docs/tutorials/security/cluster-level-pss.md index 4da0502aca0..3b662efc602 100644 --- a/content/en/docs/tutorials/security/cluster-level-pss.md +++ b/content/en/docs/tutorials/security/cluster-level-pss.md @@ -17,14 +17,18 @@ created. This tutorial shows you how to enforce the `baseline` Pod Security Standard at the cluster level which applies a standard configuration to all namespaces in a cluster. -To apply Pod Security Standards to specific namespaces, refer to [Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss). +To apply Pod Security Standards to specific namespaces, refer to +[Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss). + +If you are running a version of Kubernetes other than v{{< skew currentVersion >}}, +check the documentation for that version. ## {{% heading "prerequisites" %}} Install the following on your workstation: - [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) -- [kubectl](https://kubernetes.io/docs/tasks/tools/) +- [kubectl](/docs/tasks/tools/) ## Choose the right Pod Security Standard to apply @@ -38,12 +42,12 @@ that are most appropriate for your configuration, do the following: 1. 
Create a cluster with no Pod Security Standards applied: ```shell - kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.23.0 + kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.24.0 ``` The output is similar to this: ``` Creating cluster "psa-wo-cluster-pss" ... - ✓ Ensuring node image (kindest/node:v1.23.0) 🖼 + ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ @@ -245,12 +249,12 @@ following: these Pod Security Standards: ```shell - kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.23.0 --config /tmp/pss/cluster-config.yaml + kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml ``` The output is similar to this: ``` Creating cluster "psa-with-cluster-pss" ... - ✓ Ensuring node image (kindest/node:v1.23.0) 🖼 + ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ diff --git a/content/en/docs/tutorials/security/ns-level-pss.md b/content/en/docs/tutorials/security/ns-level-pss.md index 4a20895df73..43a48d0932d 100644 --- a/content/en/docs/tutorials/security/ns-level-pss.md +++ b/content/en/docs/tutorials/security/ns-level-pss.md @@ -17,7 +17,7 @@ one namespace at a time. You can also apply Pod Security Standards to multiple namespaces at once at the cluster level. For instructions, refer to -[Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss). +[Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss/). 
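Namespace-level Pod Security admission, referenced above, is driven by namespace labels such as `pod-security.kubernetes.io/enforce`. A small sketch of how an enforcement level might be read from namespace metadata (a simplification; the real admission controller also handles `audit` and `warn` modes and per-mode version labels):

```python
def enforce_level(namespace_labels: dict) -> str:
    """Read the enforced Pod Security Standard from namespace labels.

    Unlabeled namespaces have no enforcement, which is equivalent
    to the 'privileged' (unrestricted) level.
    """
    return namespace_labels.get("pod-security.kubernetes.io/enforce", "privileged")

assert enforce_level({"pod-security.kubernetes.io/enforce": "baseline"}) == "baseline"
assert enforce_level({}) == "privileged"
```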
## {{% heading "prerequisites" %}} diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md index cc2bd853f6e..3f57ab3ad47 100644 --- a/content/en/docs/tutorials/stateful-application/zookeeper.md +++ b/content/en/docs/tutorials/stateful-application/zookeeper.md @@ -123,7 +123,7 @@ zk-2 1/1 Running 0 40s ``` The StatefulSet controller creates three Pods, and each Pod has a container with -a [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) server. +a [ZooKeeper](https://archive.apache.org/dist/zookeeper/stable/) server. ### Facilitating leader election @@ -305,7 +305,7 @@ numChildren = 0 ### Providing durable storage -As mentioned in the [ZooKeeper Basics](#zookeeper-basics) section, +As mentioned in the [ZooKeeper Basics](#zookeeper) section, ZooKeeper commits all entries to a durable WAL, and periodically writes snapshots in memory state, to storage media. Using WALs to provide durability is a common technique for applications that use consensus protocols to achieve a replicated diff --git a/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml index c85f25ea515..631e6cc2686 100644 --- a/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml +++ b/content/en/examples/admin/konnectivity/egress-selector-configuration.yaml @@ -2,7 +2,7 @@ apiVersion: apiserver.k8s.io/v1beta1 kind: EgressSelectorConfiguration egressSelections: # Since we want to control the egress traffic to the cluster, we use the -# "cluster" as the name. Other supported values are "etcd", and "master". +# "cluster" as the name. Other supported values are "etcd", and "controlplane". 
- name: cluster connection: # This controls the protocol between the API Server and the Konnectivity diff --git a/content/en/examples/controllers/job.yaml b/content/en/examples/controllers/job.yaml index a6e40bc778d..d8befe89dba 100644 --- a/content/en/examples/controllers/job.yaml +++ b/content/en/examples/controllers/job.yaml @@ -7,7 +7,7 @@ spec: spec: containers: - name: pi - image: perl:5.34 + image: perl:5.34.0 command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never backoffLimit: 4 diff --git a/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml index 5dcc7693b67..10d3056f2d2 100644 --- a/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml +++ b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml @@ -8,11 +8,10 @@ spec: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - - key: topology.kubernetes.io/zone + - key: kubernetes.io/os operator: In values: - - antarctica-east1 - - antarctica-west1 + - linux preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: diff --git a/content/en/examples/pods/pod-with-node-affinity.yaml b/content/en/examples/pods/pod-with-node-affinity.yaml index e077f79883e..ebc6f144903 100644 --- a/content/en/examples/pods/pod-with-node-affinity.yaml +++ b/content/en/examples/pods/pod-with-node-affinity.yaml @@ -8,10 +8,11 @@ spec: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - - key: kubernetes.io/os + - key: topology.kubernetes.io/zone operator: In values: - - linux + - antarctica-east1 + - antarctica-west1 preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: diff --git a/content/en/examples/service/networking/dual-stack-default-svc.yaml b/content/en/examples/service/networking/dual-stack-default-svc.yaml index 86eadd5478a..a42c7d8a251 100644 --- 
a/content/en/examples/service/networking/dual-stack-default-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-default-svc.yaml @@ -3,10 +3,10 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml b/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml index 7c7239cae6c..77949c883f0 100644 --- a/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml +++ b/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml @@ -3,12 +3,12 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilies: - IPv6 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml b/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml index 2aa0725059b..feb12f61a91 100644 --- a/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-ipv6-svc.yaml @@ -5,8 +5,8 @@ metadata: spec: ipFamily: IPv6 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 - targetPort: 9376 \ No newline at end of file + targetPort: 9376 diff --git a/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml b/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml index 0949a754281..5a4a99a45ca 100644 --- a/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml @@ -3,14 +3,14 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 type: LoadBalancer selector: 
- app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml b/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml index c31acfec581..79a4f34a7f7 100644 --- a/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml @@ -3,14 +3,14 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack ipFamilies: - IPv6 - IPv4 selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/examples/service/networking/dual-stack-preferred-svc.yaml b/content/en/examples/service/networking/dual-stack-preferred-svc.yaml index 8fb5bfa3d34..66d42b96129 100644 --- a/content/en/examples/service/networking/dual-stack-preferred-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-preferred-svc.yaml @@ -3,11 +3,11 @@ kind: Service metadata: name: my-service labels: - app: MyApp + app.kubernetes.io/name: MyApp spec: ipFamilyPolicy: PreferDualStack selector: - app: MyApp + app.kubernetes.io/name: MyApp ports: - protocol: TCP port: 80 diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index 91bdcc58bb9..f263e431c2a 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,7 +78,6 @@ releases may also occur in between these. | Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| July 2022 | 2022-07-08 | 2022-07-13 | | August 2022 | 2022-08-12 | 2022-08-17 | | September 2022 | 2022-09-09 | 2022-09-14 | | October 2022 | 2022-10-07 | 2022-10-12 | @@ -87,24 +86,28 @@ releases may also occur in between these. 
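Each row of the monthly patch schedule above pairs a cherry-pick deadline with a target date a few days later. A quick `datetime` sketch (dates copied from the table) confirms the ordering:

```python
from datetime import date

# (cherry-pick deadline, target date) pairs from the schedule table.
schedule = {
    "August 2022": (date(2022, 8, 12), date(2022, 8, 17)),
    "September 2022": (date(2022, 9, 9), date(2022, 9, 14)),
    "October 2022": (date(2022, 10, 7), date(2022, 10, 12)),
}

for month, (deadline, target) in schedule.items():
    # Cherry picks must land before the release is cut.
    assert deadline < target
    print(month, (target - deadline).days, "days between deadline and release")
```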
### 1.24 -Next patch release is **1.24.1** +Next patch release is **1.24.4** -End of Life for **1.24** is **2023-09-29** +End of Life for **1.24** is **2023-07-28** | PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE | |---------------|----------------------|-------------|------| +| 1.24.4 | 2022-08-12 | 2022-08-17 | | | 1.24.3 | 2022-07-08 | 2022-07-13 | | | 1.24.2 | 2022-06-10 | 2022-06-15 | | | 1.24.1 | 2022-05-20 | 2022-05-24 | | ### 1.23 +Next patch release is **1.23.10** + **1.23** enters maintenance mode on **2022-12-28**. End of Life for **1.23** is **2023-02-28**. | Patch Release | Cherry Pick Deadline | Target Date | Note | |---------------|----------------------|-------------|------| +| 1.23.10 | 2022-08-12 | 2022-08-17 | | | 1.23.9 | 2022-07-08 | 2022-07-13 | | | 1.23.8 | 2022-06-10 | 2022-06-15 | | | 1.23.7 | 2022-05-20 | 2022-05-24 | | @@ -117,12 +120,15 @@ End of Life for **1.23** is **2023-02-28**. ### 1.22 +Next patch release is **1.22.13** + **1.22** enters maintenance mode on **2022-08-28** End of Life for **1.22** is **2022-10-28** | Patch Release | Cherry Pick Deadline | Target Date | Note | |---------------|----------------------|-------------|------| +| 1.22.13 | 2022-08-12 | 2022-08-17 | | | 1.22.12 | 2022-07-08 | 2022-07-13 | | | 1.22.11 | 2022-06-10 | 2022-06-15 | | | 1.22.10 | 2022-05-20 | 2022-05-24 | | diff --git a/content/en/releases/release-managers.md b/content/en/releases/release-managers.md index b3a562c9e3a..61afd672679 100644 --- a/content/en/releases/release-managers.md +++ b/content/en/releases/release-managers.md @@ -4,8 +4,8 @@ type: docs --- "Release Managers" is an umbrella term that encompasses the set of Kubernetes -contributors responsible for maintaining release branches, tagging releases, -and building/packaging Kubernetes. +contributors responsible for maintaining release branches and creating releases +by using the tools SIG Release provides. The responsibilities of each role are described below. 
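The release tables above follow the **x.y.z** versioning scheme, and the project maintains release branches for the three most recent minor releases. A hedged sketch of that support window (simplified: real end-of-life dates are set per release, as the tables show):

```python
def parse_version(v: str) -> tuple[int, int, int]:
    """Split an 'x.y.z' version into (major, minor, patch) integers."""
    major, minor, patch = (int(part) for part in v.split("."))
    return major, minor, patch

def supported_minors(current: str, window: int = 3) -> list[str]:
    """The most recent `window` minor release branches, newest first."""
    major, minor, _ = parse_version(current)
    return [f"{major}.{minor - i}" for i in range(window)]

assert parse_version("1.24.4") == (1, 24, 4)
# Matches the branches tracked above: 1.24, 1.23, 1.22.
assert supported_minors("1.24.4") == ["1.24", "1.23", "1.22"]
```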
@@ -69,7 +69,7 @@ Release Managers are responsible for: - Reviewing cherry picks - Ensuring the release branch stays healthy and that no unintended patch gets merged -- Mentoring the [Release Manager Associates](#associates) group +- Mentoring the [Release Manager Associates](#release-manager-associates) group - Actively developing features and maintaining the code in k/release - Supporting Release Manager Associates and contributors through actively participating in the Buddy program @@ -133,7 +133,9 @@ referred to as Release Manager shadows. They are responsible for: GitHub Mentions: @kubernetes/release-engineering - Arnaud Meukam ([@ameukam](https://github.com/ameukam)) +- Jeremy Rickard ([@jeremyrickard](https://github.com/jeremyrickard)) - Jim Angel ([@jimangel](https://github.com/jimangel)) +- Joseph Sandoval ([@jrsapi](https://github.com/jrsapi)) - Joyce Kung ([@thejoycekung](https://github.com/thejoycekung)) - Max Körbächer ([@mkorbi](https://github.com/mkorbi)) - Seth McCombs ([@sethmccombs](https://github.com/sethmccombs)) @@ -212,7 +214,7 @@ Example: [1.15 Release Team](https://git.k8s.io/sig-release/releases/release-1.1 [handbook-packaging]: https://git.k8s.io/sig-release/release-engineering/packaging.md [handbook-patch-release]: https://git.k8s.io/sig-release/release-engineering/role-handbooks/patch-release-team.md [k-sig-release-releases]: https://git.k8s.io/sig-release/releases -[patches]: /patch-releases.md +[patches]: /releases/patch-releases/ [src]: https://git.k8s.io/community/committee-security-response/README.md [release-team]: https://git.k8s.io/sig-release/release-team/README.md [security-release-process]: https://git.k8s.io/security/security-release-process.md diff --git a/content/en/releases/release.md b/content/en/releases/release.md index f69424b7f9f..e0d21c0df16 100644 --- a/content/en/releases/release.md +++ b/content/en/releases/release.md @@ -124,7 +124,7 @@ The general labeling process should be consistent across artifact types. 
referring to a release MAJOR.MINOR `vX.Y` version. See also - [release versioning](https://git.k8s.io/design-proposals-archive/release/versioning.md). + [release versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md). - *release branch*: Git branch `release-X.Y` created for the `vX.Y` milestone. @@ -136,7 +136,7 @@ The general labeling process should be consistent across artifact types. ## The Release Cycle -![Image of one Kubernetes release cycle](release-cycle.jpg) +![Image of one Kubernetes release cycle](/images/releases/release-cycle.jpg) Kubernetes releases currently happen approximately three times per year. @@ -204,7 +204,7 @@ back to the release branch. The release is built from the release branch. Each release is part of a broader Kubernetes lifecycle: -![Image of Kubernetes release lifecycle spanning three releases](release-lifecycle.jpg) +![Image of Kubernetes release lifecycle spanning three releases](/images/releases/release-lifecycle.jpg) ## Removal Of Items From The Milestone @@ -281,7 +281,7 @@ Issues are marked as targeting a milestone via the Prow "/milestone" command. The Release Team's [Bug Triage Lead](https://git.k8s.io/sig-release/release-team/role-handbooks/bug-triage/README.md) and overall community watch incoming issues and triage them, as described in the contributor guide section on -[issue triage](/contributors/guide/issue-triage.md). +[issue triage](https://k8s.dev/docs/guide/issue-triage/). Marking issues with the milestone provides the community better visibility regarding when an issue was observed and by when the community feels it must be @@ -355,11 +355,11 @@ issue kind labels must be set: - `kind/feature`: New functionality. - `kind/flake`: CI test case is showing intermittent failures. 
-[cherry-picks]: /contributors/devel/sig-release/cherry-picks.md +[cherry-picks]: https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md [code-freeze]: https://git.k8s.io/sig-release/releases/release_phases.md#code-freeze [enhancements-freeze]: https://git.k8s.io/sig-release/releases/release_phases.md#enhancements-freeze [exceptions]: https://git.k8s.io/sig-release/releases/release_phases.md#exceptions [keps]: https://git.k8s.io/enhancements/keps -[release-managers]: https://kubernetes.io/releases/release-managers/ +[release-managers]: /releases/release-managers/ [release-team]: https://git.k8s.io/sig-release/release-team -[sig-list]: /sig-list.md \ No newline at end of file +[sig-list]: https://k8s.dev/sigs diff --git a/content/en/releases/version-skew-policy.md b/content/en/releases/version-skew-policy.md index 730f892c80c..0d1c96f9226 100644 --- a/content/en/releases/version-skew-policy.md +++ b/content/en/releases/version-skew-policy.md @@ -21,12 +21,12 @@ Specific cluster deployment tools may place additional restrictions on version s ## Supported versions Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. -For more information, see [Kubernetes Release Versioning](https://git.k8s.io/design-proposals-archive/release/versioning.md#kubernetes-release-versioning). +For more information, see [Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning). The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew currentVersion >}}, {{< skew currentVersionAddMinor -1 >}}, {{< skew currentVersionAddMinor -2 >}}). Kubernetes 1.19 and newer receive approximately 1 year of patch support. Kubernetes 1.18 and older received approximately 9 months of patch support. 
Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. -Patch releases are cut from those branches at a [regular cadence](https://kubernetes.io/releases/patch-releases/#cadence), plus additional urgent releases, when required. +Patch releases are cut from those branches at a [regular cadence](/releases/patch-releases/#cadence), plus additional urgent releases, when required. The [Release Managers](/releases/release-managers/) group owns this decision. diff --git a/content/es/_index.html b/content/es/_index.html index af07c0764bb..1a5d8d06cdf 100644 --- a/content/es/_index.html +++ b/content/es/_index.html @@ -41,12 +41,12 @@ Kubernetes es código abierto lo que le brinda la libertad de aprovechar su infr

- Asista a la KubeCon en San Diego del 18 al 21 de Nov. 2019
+ Asista a la KubeCon en Norteamérica del 24 al 28 de octubre de 2022



- Asista a la KubeCon en Amsterdam del 30 Marzo al 2 Abril
+ Asista a la KubeCon en Europa del 17 al 21 de abril de 2023

diff --git a/content/es/docs/concepts/architecture/nodes.md b/content/es/docs/concepts/architecture/nodes.md index d2313849eff..ccb3e582905 100644 --- a/content/es/docs/concepts/architecture/nodes.md +++ b/content/es/docs/concepts/architecture/nodes.md @@ -8,7 +8,7 @@ weight: 10 -Un nodo es una máquina de trabajo en Kubernetes, previamente conocida como `minion`. Un nodo puede ser una máquina virtual o física, dependiendo del tipo de clúster. Cada nodo está gestionado por el componente máster y contiene los servicios necesarios para ejecutar [pods](/docs/concepts/workloads/pods/pod). Los servicios en un nodo incluyen el [container runtime](/docs/concepts/overview/components/#node-components), kubelet y el kube-proxy. Accede a la sección [The Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) en el documento de diseño de arquitectura para más detalle. +Un nodo es una máquina de trabajo en Kubernetes, previamente conocida como `minion`. Un nodo puede ser una máquina virtual o física, dependiendo del tipo de clúster. Cada nodo está gestionado por el componente máster y contiene los servicios necesarios para ejecutar [pods](/docs/concepts/workloads/pods/pod). Los servicios en un nodo incluyen el [container runtime](/docs/concepts/overview/components/#node-components), kubelet y el kube-proxy. Accede a la sección [The Kubernetes Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) en el documento de diseño de arquitectura para más detalle. 
diff --git a/content/es/docs/concepts/configuration/manage-resources-containers.md b/content/es/docs/concepts/configuration/manage-resources-containers.md index 9630d271ba7..4cdc6c4a26a 100644 --- a/content/es/docs/concepts/configuration/manage-resources-containers.md +++ b/content/es/docs/concepts/configuration/manage-resources-containers.md @@ -676,7 +676,7 @@ La cantidad de recursos disponibles para los pods es menor que la capacidad del los demonios del sistema utilizan una parte de los recursos disponibles. El campo `allocatable` [NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) indica la cantidad de recursos que están disponibles para los Pods. Para más información, mira -[Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md). +[Node Allocatable Resources](https://git.k8s.io/design-proposals-archive/node/node-allocatable.md). La característica [resource quota](/docs/concepts/policy/resource-quotas/) se puede configurar para limitar la cantidad total de recursos que se pueden consumir. Si se usa en conjunto @@ -757,7 +757,7 @@ Puedes ver que el Contenedor fué terminado a causa de `reason:OOM Killed`, dond * Obtén experiencia práctica [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/). * Para más detalles sobre la diferencia entre solicitudes y límites, mira - [Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md). + [Resource QoS](https://git.k8s.io/design-proposals-archive/node/resource-qos.md). 
* Lee [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) referencia de API diff --git a/content/es/docs/concepts/configuration/pod-overhead.md b/content/es/docs/concepts/configuration/pod-overhead.md index 0d7a89bcd69..baaf8a902fc 100644 --- a/content/es/docs/concepts/configuration/pod-overhead.md +++ b/content/es/docs/concepts/configuration/pod-overhead.md @@ -38,4 +38,4 @@ Para obtener más detalles vea la [documentación sobre autorización](/docs/ref * [RuntimeClass](/docs/concepts/containers/runtime-class/) -* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) +* [Diseño de capacidad de PodOverhead](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) diff --git a/content/es/docs/concepts/configuration/secret.md b/content/es/docs/concepts/configuration/secret.md index 8fd08c02854..ffbc78fa700 100644 --- a/content/es/docs/concepts/configuration/secret.md +++ b/content/es/docs/concepts/configuration/secret.md @@ -14,7 +14,7 @@ weight: 50 Los objetos de tipo {{< glossary_tooltip text="Secret" term_id="secret" >}} en Kubernetes te permiten almacenar y administrar información confidencial, como contraseñas, tokens OAuth y llaves ssh. Poniendo esta información en un Secret -es más seguro y más flexible que ponerlo en la definición de un {{< glossary_tooltip term_id="pod" >}} o en un {{< glossary_tooltip text="container image" term_id="image" >}}. Ver [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) para más información. +es más seguro y más flexible que ponerlo en la definición de un {{< glossary_tooltip term_id="pod" >}} o en un {{< glossary_tooltip text="container image" term_id="image" >}}. Ver [Secrets design document](https://git.k8s.io/design-proposals-archive/auth/secrets.md) para más información. 
@@ -345,7 +345,7 @@ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode ## Usando Secrets Los Secrets se pueden montar como volúmenes de datos o ser expuestos como -{{< glossary_tooltip text="variables de ambiente" term_id="container-env-variables" >}} +{{< glossary_tooltip text="variables de entorno" term_id="container-env-variables" >}} para ser usados por un contenedor en un pod. También pueden ser utilizados por otras partes del sistema, sin estar directamente expuesto en el pod. Por ejemplo, pueden tener credenciales que otras partes del sistema usan para interactuar con sistemas externos en su nombre. diff --git a/content/es/docs/concepts/containers/container-environment-variables.md b/content/es/docs/concepts/containers/container-environment-variables.md index 7f35309329c..f266ff7796b 100644 --- a/content/es/docs/concepts/containers/container-environment-variables.md +++ b/content/es/docs/concepts/containers/container-environment-variables.md @@ -48,7 +48,7 @@ FOO_SERVICE_HOST= FOO_SERVICE_PORT= ``` Los servicios tienen direcciones IP dedicadas y están disponibles para el Container a través de DNS, -si el [complemento para DNS](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) está habilitado. +si el [complemento para DNS](http://releases.k8s.io/master/cluster/addons/dns/) está habilitado. diff --git a/content/es/docs/concepts/containers/runtime-class.md b/content/es/docs/concepts/containers/runtime-class.md index fa27366b35a..ef4c5e0d2c8 100644 --- a/content/es/docs/concepts/containers/runtime-class.md +++ b/content/es/docs/concepts/containers/runtime-class.md @@ -202,4 +202,4 @@ cuentan en Kubernetes. 
- [Diseño de RuntimeClass](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md) - [Diseño de programación de RuntimeClass](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling) - Leer sobre el concepto de [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) -- [Diseño de capacidad de PodOverhead](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) +- [Diseño de capacidad de PodOverhead](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) diff --git a/content/es/docs/concepts/overview/kubernetes-api.md b/content/es/docs/concepts/overview/kubernetes-api.md index 25cfe180493..54155a63026 100644 --- a/content/es/docs/concepts/overview/kubernetes-api.md +++ b/content/es/docs/concepts/overview/kubernetes-api.md @@ -66,7 +66,7 @@ Para facilitar la eliminación de propiedades o reestructurar la representación Se versiona a nivel de la API en vez de a nivel de los recursos o propiedades para asegurarnos de que la API presenta una visión clara y consistente de los recursos y el comportamiento del sistema, y para controlar el acceso a las APIs experimentales o que estén terminando su ciclo de vida. Los esquemas de serialización JSON y Protobuf siguen los mismos lineamientos para los cambios, es decir, estas descripciones cubren ambos formatos. -Se ha de tener en cuenta que hay una relación indirecta entre el versionado de la API y el versionado del resto del software. La propuesta de [versionado de la API y releases](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) describe esta relación. +Se ha de tener en cuenta que hay una relación indirecta entre el versionado de la API y el versionado del resto del software. 
La propuesta de [versionado de la API y releases](https://git.k8s.io/design-proposals-archive/release/versioning.md) describe esta relación. Las distintas versiones de la API implican distintos niveles de estabilidad y soporte. El criterio para cada nivel se describe en detalle en la documentación de [Cambios a la API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). A continuación se ofrece un resumen: @@ -89,7 +89,7 @@ Las distintas versiones de la API implican distintos niveles de estabilidad y so ## Grupos de API -Para que sea más fácil extender la API de Kubernetes, se han creado los [*grupos de API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md). +Para que sea más fácil extender la API de Kubernetes, se han creado los [*grupos de API*](https://git.k8s.io/design-proposals-archive/api-machinery/api-group.md). Estos grupos se especifican en una ruta REST y en la propiedad `apiVersion` de un objeto serializado. Actualmente hay varios grupos de API en uso: diff --git a/content/es/docs/concepts/overview/what-is-kubernetes.md b/content/es/docs/concepts/overview/what-is-kubernetes.md index 510f32202d8..9fe53180b27 100644 --- a/content/es/docs/concepts/overview/what-is-kubernetes.md +++ b/content/es/docs/concepts/overview/what-is-kubernetes.md @@ -60,7 +60,7 @@ APIs](/docs/concepts/api-extension/custom-resources/) desde una [herramienta de línea de comandos](/docs/user-guide/kubectl-overview/). Este -[diseño](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) +[diseño](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) ha permitido que otros sistemas sean construidos sobre Kubernetes. 
## Lo que Kubernetes no es diff --git a/content/es/docs/concepts/overview/working-with-objects/names.md b/content/es/docs/concepts/overview/working-with-objects/names.md index ef241f6aff5..f683b3d9854 100644 --- a/content/es/docs/concepts/overview/working-with-objects/names.md +++ b/content/es/docs/concepts/overview/working-with-objects/names.md @@ -10,7 +10,7 @@ Todos los objetos de la API REST de Kubernetes se identifica de forma inequívoc Para aquellos atributos provistos por el usuario que no son únicos, Kubernetes provee de [etiquetas](/docs/user-guide/labels) y [anotaciones](/docs/concepts/overview/working-with-objects/annotations/). -Echa un vistazo al [documento de diseño de identificadores](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) para información precisa acerca de las reglas sintácticas de los Nombres y UIDs. +Echa un vistazo al [documento de diseño de identificadores](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) para información precisa acerca de las reglas sintácticas de los Nombres y UIDs. diff --git a/content/es/docs/concepts/policy/limit-range.md b/content/es/docs/concepts/policy/limit-range.md index 22d4c74f518..5fccd8b13dc 100644 --- a/content/es/docs/concepts/policy/limit-range.md +++ b/content/es/docs/concepts/policy/limit-range.md @@ -58,7 +58,7 @@ Ni la contención ni los cambios en un LimitRange afectarán a los recursos ya c ## {{% heading "whatsnext" %}} -Consulte el [documento de diseño del LimitRanger](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) para más información. +Consulte el [documento de diseño del LimitRanger](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) para más información. 
Los siguientes ejemplos utilizan límites y están pendientes de su traducción:
diff --git a/content/es/docs/concepts/services-networking/network-policies.md b/content/es/docs/concepts/services-networking/network-policies.md
new file mode 100644
index 00000000000..09f65765a4c
--- /dev/null
+++ b/content/es/docs/concepts/services-networking/network-policies.md
@@ -0,0 +1,287 @@
+---
+reviewers:
+- raelga
+- electrocucaracha
+title: Políticas de red (Network Policies)
+content_type: concept
+weight: 50
+---
+
+
+Si quieres controlar el tráfico de red a nivel de dirección IP o puerto (capa OSI 3 o 4), puedes considerar el uso de Kubernetes NetworkPolicies para las aplicaciones que corren en tu clúster. Las NetworkPolicies son una estructura enfocada en las aplicaciones que permite establecer cómo un {{< glossary_tooltip text="Pod" term_id="pod">}} puede comunicarse con otras "entidades" (utilizamos la palabra "entidad" para evitar sobrecargar términos más comunes como "Endpoint" o "Service", que tienen connotaciones específicas de Kubernetes) a través de la red. Las NetworkPolicies se aplican a uno o ambos extremos de la conexión a un Pod, sin afectar a otras conexiones.
+
+Las entidades con las que un Pod puede comunicarse son una combinación de estos 3 tipos:
+
+1. Otros Pods permitidos (excepción: un Pod no puede bloquear el acceso a sí mismo)
+2. Namespaces permitidos
+3. Bloques de IP (excepción: el tráfico hacia y desde el nodo donde se ejecuta un Pod siempre está permitido, independientemente de la dirección IP del Pod o del nodo)
+
+Cuando se define una NetworkPolicy basada en Pods o Namespaces, se utiliza un {{< glossary_tooltip text="Selector" term_id="selector">}} para especificar qué tráfico se permite desde y hacia los Pod(s) que coinciden con el selector.
+
+Por otro lado, cuando se crean NetworkPolicies basadas en IP, se definen políticas basadas en bloques de IP (rangos CIDR).
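+Como ilustración (un boceto: los labels y el rango CIDR son hipotéticos), una misma regla `ingress` puede referirse a los tres tipos de entidades descritos arriba:
+
+```yaml
+  ingress:
+  - from:
+    - podSelector:            # otros Pods (del mismo Namespace)
+        matchLabels:
+          role: frontend
+    - namespaceSelector:      # Namespaces permitidos
+        matchLabels:
+          project: myproject
+    - ipBlock:                # bloques de IP (rangos CIDR)
+        cidr: 172.17.0.0/16
+```
+
+Cada uno de estos tipos de selectores se describe en detalle más adelante en esta página.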
+ + + +## Prerrequisitos + +Las políticas de red son implementadas por el [plugin de red](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). Para usar políticas de red, debes estar utilizando una solución de red que soporte NetworkPolicy. Crear un recurso NetworkPolicy sin un controlador que lo habilite no tendrá efecto alguno. + + +## Dos Tipos de Aislamiento de Pod + +Hay dos tipos de aislamiento para un Pod: el aislamiento para la salida y el aislamiento para la entrada. Estos se refieren a las conexiones que pueden establecerse. El término "Aislamiento" en el contexto de este documento no es absoluto, sino que significa "se aplican algunas restricciones". La alternativa, "no aislado para $dirección", significa que no se aplican restricciones en la dirección descrita. Los dos tipos de aislamiento (o no) se declaran independientemente, y ambos son relevantes para una conexión de un Pod a otro. + +Por defecto, un Pod no está aislado para la salida; todas las conexiones salientes están permitidas. Un Pod está aislado para la salida si hay alguna NetworkPolicy con "Egress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la salida. Cuando un Pod está aislado para la salida, las únicas conexiones permitidas desde el Pod son las permitidas por la lista `egress` de las NetworkPolicy que se aplique al Pod para la salida. Los valores de esas listas `egress` se combinan de forma aditiva. + +Por defecto, un Pod no está aislado para la entrada; todas las conexiones entrantes están permitidas. Un Pod está aislado para la entrada si hay alguna NetworkPolicy con "Ingress" en su `policyTypes` que seleccione el Pod; decimos que tal política se aplica al Pod para la entrada. Cuando un Pod está aislado para la entrada, las únicas conexiones permitidas en el Pod son las del nodo del Pod y las permitidas por la lista `ingress` de alguna NetworkPolicy que se aplique al Pod para la entrada. 
Los valores de esas listas `ingress` se combinan de forma aditiva.
+
+Las políticas de red no entran en conflicto; son aditivas. Si alguna política(s) se aplica a un Pod para una dirección determinada, las conexiones permitidas en esa dirección desde ese Pod son la unión de lo que permiten las políticas aplicables. Por tanto, el orden de evaluación no afecta al resultado de la política.
+
+Para que se permita una conexión desde un Pod de origen a un Pod de destino, tanto la política de salida del Pod de origen como la de entrada del Pod de destino deben permitir la conexión. Si cualquiera de los dos lados no permite la conexión, ésta no se producirá.
+
+
+## El Recurso NetworkPolicy {#networkpolicy-resource}
+
+Ver la referencia [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) para una definición completa del recurso.
+
+Un ejemplo de NetworkPolicy podría ser este:
+
+{{< codenew file="service/networking/networkpolicy.yaml" >}}
+
+{{< note >}}
+Enviar esto al API Server de su clúster no tendrá ningún efecto a menos que su solución de red soporte políticas de red.
+{{< /note >}}
+
+__Campos Obligatorios__: Como con todas las demás configuraciones de Kubernetes, una NetworkPolicy
+necesita los campos `apiVersion`, `kind` y `metadata`. Para obtener información general
+sobre cómo funcionan esos ficheros de configuración, puedes consultar
+[Configurar un Pod para usar un ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)
+y [Gestión de Objetos](/docs/concepts/overview/working-with-objects/object-management).
+
+__spec__: La [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) de la NetworkPolicy contiene toda la información necesaria para definir una política de red dado un Namespace.
+
+__podSelector__: Cada NetworkPolicy incluye un `podSelector` que selecciona el grupo de Pods a los cuales se aplica la política. La política de ejemplo selecciona Pods con el label "role=db". Un `podSelector` vacío selecciona todos los Pods en un Namespace.
+
+__policyTypes__: Cada NetworkPolicy incluye una lista de `policyTypes` que puede incluir `Ingress`, `Egress`, o ambas. Los campos `policyTypes` indican si la política aplica o no al tráfico de entrada hacia el Pod seleccionado, al tráfico de salida desde el Pod seleccionado, o a ambos. Si no se especifican `policyTypes` en una NetworkPolicy, el valor `Ingress` será siempre aplicado por defecto, y `Egress` será aplicado si la NetworkPolicy contiene alguna regla de salida.
+
+__ingress__: Cada NetworkPolicy puede incluir una lista de reglas `ingress` permitidas. Cada regla permite el tráfico que coincide a la vez con las secciones `from` y `ports`. La política de ejemplo contiene una única regla, que coincide con el tráfico sobre un solo puerto desde uno de tres orígenes: el primero especificado por un `ipBlock`, el segundo por un `namespaceSelector` y el tercero por un `podSelector`.
+
+__egress__: Cada NetworkPolicy puede incluir una lista de reglas `egress` permitidas. Cada regla permite el tráfico que coincide a la vez con las secciones `to` y `ports`. La política de ejemplo contiene una única regla, que coincide con el tráfico en un único puerto hacia cualquier destino en el rango de IPs `10.0.0.0/24`.
+
+Por lo tanto, la NetworkPolicy de ejemplo:
+
+1. Aísla los Pods "role=db" en el Namespace "default" para ambos tipos de tráfico, ingress y egress (si no estaban ya aislados)
+2.
(Reglas Ingress) permite la conexión hacia todos los Pods en el Namespace "default" con el label "role=db" en el puerto TCP 6379 desde los siguientes orígenes:
+
+   * cualquier Pod en el Namespace "default" con el label "role=frontend"
+   * cualquier Pod en un Namespace con el label "project=myproject"
+   * las direcciones IP en los rangos 172.17.0.0–172.17.0.255 y 172.17.2.0–172.17.255.255 (es decir, todo el rango de IPs de 172.17.0.0/16 con excepción del 172.17.1.0/24)
+3. (Reglas Egress) permite conexiones desde cualquier Pod en el Namespace "default" con el label "role=db" hacia el CIDR 10.0.0.0/24 en el puerto TCP 5978
+
+Ver el artículo [Declarar una Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) para más ejemplos.
+
+
+## Comportamiento de los selectores `to` y `from`
+
+Existen cuatro tipos de selectores que pueden especificarse en una sección `from` de `ingress` o en una sección `to` de `egress`:
+
+__podSelector__: Este selector selecciona Pods específicos en el mismo Namespace que la NetworkPolicy para permitir el tráfico como origen de entrada o destino de salida.
+
+__namespaceSelector__: Este selector selecciona Namespaces específicos para permitir el tráfico como origen de entrada o destino de salida.
+
+__namespaceSelector__ *y* __podSelector__: Una única entrada `to`/`from` que especifica tanto `namespaceSelector` como `podSelector` selecciona Pods específicos dentro de Namespaces específicos. Es importante revisar que se utiliza la sintaxis de YAML correcta. A continuación se muestra un ejemplo de esta política:
+
+```yaml
+  ...
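+  # ojo con el guion delante de `podSelector` más abajo: crea DOS entradas separadas en `from`,
+  # a diferencia del ejemplo anterior, donde ambos selectores formaban UNA sola entrada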
+  ingress:
+  - from:
+    - namespaceSelector:
+        matchLabels:
+          user: alice
+    - podSelector:
+        matchLabels:
+          role: client
+  ...
+```
+
+contiene dos elementos en el array `from`, y permite conexiones desde Pods con el label `role=client` en el Namespace local, *o* desde cualquier Pod en cualquier Namespace con el label `user=alice`.
+
+En caso de duda, utilice `kubectl describe` para ver cómo Kubernetes ha interpretado la política.
+
+
+__ipBlock__: Este selector selecciona rangos CIDR de IP específicos para permitirlos como origen de entrada o destino de salida. Estas IPs deben ser externas al clúster, ya que las IPs de Pod son efímeras e impredecibles.
+
+Los mecanismos de entrada y salida del clúster a menudo requieren reescribir la IP de origen o destino
+de los paquetes. En los casos en los que esto ocurre, no está definido si esto ocurre antes o
+después del procesamiento de NetworkPolicy, y el comportamiento puede ser diferente para diferentes
+combinaciones de plugin de red, proveedor de nube, implementación de `Service`, etc.
+
+En el caso de la entrada, esto significa que en algunos casos se pueden filtrar paquetes
+entrantes basándose en la IP de origen real, mientras que en otros casos la "IP de origen" sobre la que
+actúa la NetworkPolicy puede ser la IP de un `LoadBalancer` o la IP del Nodo donde esté el Pod involucrado, etc.
+
+Para la salida, esto significa que las conexiones de los Pods a las IPs de `Service` que se reescriben a
+IPs externas al clúster pueden o no estar sujetas a políticas basadas en `ipBlock`.
+
+
+## Políticas por defecto
+
+Por defecto, si no existen políticas en un Namespace, se permite todo el tráfico de entrada y salida hacia y desde los Pods de ese Namespace. Los siguientes ejemplos muestran cómo cambiar el comportamiento por defecto en ese Namespace.
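+Todas estas políticas «por defecto» comparten un mismo patrón: un `podSelector` vacío que selecciona todos los Pods del Namespace, y una lista `policyTypes` sin reglas de permiso asociadas. A modo de boceto, una denegación de toda la entrada tiene esta forma (manifiesto ilustrativo; los ficheros `codenew` referidos abajo contienen las versiones canónicas):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress   # nombre ilustrativo
+spec:
+  podSelector: {}              # selecciona todos los Pods del Namespace
+  policyTypes:
+  - Ingress                    # aísla para la entrada, sin ninguna regla `ingress` que permita tráfico
+```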
+
+
+### Denegar todo el tráfico de entrada por defecto
+
+Puedes crear una política que "por defecto" aísle a un Namespace del tráfico de entrada con la creación de una política que seleccione todos los Pods del Namespace pero no permita ningún tráfico de entrada en esos Pods.
+
+{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}}
+
+Esto asegura que incluso los Pods que no están seleccionados por ninguna otra NetworkPolicy también serán aislados del tráfico de entrada. Esta política no afecta el aislamiento en el tráfico de salida desde cualquier Pod.
+
+
+### Permitir todo el tráfico de entrada
+
+Si quieres permitir todo el tráfico de entrada a todos los Pods en un Namespace, puedes crear una política que explícitamente permita eso.
+
+{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}}
+
+Con esta política en vigor, ninguna política adicional puede hacer que se deniegue una conexión entrante a esos Pods. Esta política no tiene efecto sobre el aislamiento del tráfico de salida de cualquier Pod.
+
+
+### Denegar por defecto todo el tráfico de salida
+
+Puedes crear una política que "por defecto" aísle el tráfico de salida para un Namespace, creando una NetworkPolicy que seleccione todos los Pods pero que no permita ningún tráfico de salida desde esos Pods.
+
+{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}}
+
+Esto asegura que incluso los Pods que no son seleccionados por ninguna otra NetworkPolicy no tengan permitido el tráfico de salida. Esta política no cambia el comportamiento de aislamiento para el tráfico de entrada de ningún Pod.
+
+
+### Permitir todo el tráfico de salida
+
+Si quieres permitir todas las conexiones desde todos los Pods de un Namespace, puedes crear una política que permita explícitamente todas las conexiones salientes de los Pods de ese Namespace.
+
+{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}}
+
+Con esta política en vigor, ninguna política adicional puede hacer que se deniegue una conexión de salida desde esos Pods. Esta política no tiene efecto sobre el aislamiento para el tráfico de entrada a cualquier Pod.
+
+
+### Denegar por defecto todo el tráfico de entrada y de salida
+
+Puede crear una política "por defecto" para un Namespace que impida todo el tráfico de entrada y de salida, creando la siguiente NetworkPolicy en ese Namespace.
+
+{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}}
+
+Esto asegura que incluso los Pods que no son seleccionados por ninguna otra NetworkPolicy no tendrán permitido el tráfico de entrada o salida.
+
+
+## Soporte a SCTP
+
+{{< feature-state for_k8s_version="v1.20" state="stable" >}}
+
+Como característica estable, está activada por defecto. Para deshabilitar SCTP a nivel de clúster, usted (o el administrador de su clúster) tiene que deshabilitar la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `SCTPSupport` para el API Server con el flag `--feature-gates=SCTPSupport=false,...`.
+Cuando esta feature gate está habilitada, puede establecer el campo `protocol` de una NetworkPolicy como `SCTP`.
+
+{{< note >}}
+Debes utilizar un plugin de {{< glossary_tooltip text="CNI" term_id="cni" >}} que soporte NetworkPolicies con el protocolo SCTP.
+{{< /note >}}
+
+
+## Apuntar a un rango de puertos
+
+{{< feature-state for_k8s_version="v1.22" state="beta" >}}
+
+Cuando se escribe una NetworkPolicy, se puede apuntar a un rango de puertos en lugar de un solo puerto.
+ +Esto se puede lograr con el uso del campo `endPort`, como el siguiente ejemplo: + +```yaml +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: multi-port-egress + namespace: default +spec: + podSelector: + matchLabels: + role: db + policyTypes: + - Egress + egress: + - to: + - ipBlock: + cidr: 10.0.0.0/24 + ports: + - protocol: TCP + port: 32000 + endPort: 32768 +``` + +La regla anterior permite que cualquier Pod con la etiqueta `role=db` en el Namespace `default` se comunique +con cualquier IP dentro del rango `10.0.0.0/24` sobre el protocolo TCP, siempre que el puerto +esté entre el rango 32000 y 32768. + +Se aplican las siguientes restricciones al utilizar este campo: +* Como característica en estado beta, está activada por defecto. Para desactivar el campo `endPort` a nivel de clúster, usted (o su administrador de clúster) debe desactivar la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NetworkPolicyEndPort` +en el API Server con el flag `--feature-gates=NetworkPolicyEndPort=false,...`. +* El campo `endPort` debe ser igual o mayor que el campo `port`. +* Sólo se puede definir `endPort` si también se define `port`. +* Ambos puertos deben ser numéricos. + + +{{< note >}} +Su clúster debe utilizar un plugin de {{< glossary_tooltip text="CNI" term_id="cni" >}} que +soporte el campo `endPort` en las especificaciones de NetworkPolicy. +Si su [plugin de red](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +no soporta el campo `endPort` y usted especifica una NetworkPolicy que use este campo, +la política se aplicará sólo para el campo `port`. 
+
+{{< /note >}}
+
+
+## Cómo apuntar a un Namespace usando su nombre
+
+{{< feature-state for_k8s_version="1.22" state="stable" >}}
+
+El plano de control de Kubernetes establece una etiqueta inmutable `kubernetes.io/metadata.name` en todos los
+Namespaces, siempre que se haya habilitado la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `NamespaceDefaultLabelName`.
+El valor de la etiqueta es el nombre del Namespace.
+
+Aunque NetworkPolicy no puede apuntar a un Namespace por su nombre mediante ningún campo del objeto, puede utilizar la etiqueta estandarizada para apuntar a un Namespace específico.
+
+
+## Qué no puedes hacer con políticas de red (al menos, aún no)
+
+Actualmente, en Kubernetes {{< skew currentVersion >}}, la siguiente funcionalidad no existe en la API de NetworkPolicy, pero es posible que se puedan implementar soluciones mediante componentes del sistema operativo (como SELinux, OpenVSwitch, IPTables, etc.), tecnologías de capa 7 (Ingress controllers, implementaciones de Service Mesh) o controladores de admisión. En caso de que seas nuevo en la seguridad de la red en Kubernetes, vale la pena señalar que las siguientes historias de usuario no pueden (todavía) ser implementadas usando la API NetworkPolicy.
+
+- Forzar que el tráfico interno del clúster pase por una puerta de enlace común (esto se puede implementar con una malla de servicios u otro proxy).
+- Cualquier cosa relacionada con TLS (se puede implementar con una malla de servicios o un Ingress controller para esto).
+- Políticas específicas de los nodos (se puede utilizar la notación CIDR para esto, pero no se puede apuntar a los nodos por sus identidades Kubernetes específicamente).
+- Apuntar Services por nombre (sin embargo, puede orientar los Pods o los Namespaces por sus {{< glossary_tooltip text="labels" term_id="label" >}}, lo que suele ser una solución viable).
- Creación o gestión de "solicitudes de políticas" que son atendidas por un tercero.
+- Políticas que por defecto son aplicadas a todos los Namespaces o Pods (hay algunas distribuciones y proyectos de Kubernetes de terceros que pueden hacer esto).
+- Consulta avanzada de políticas y herramientas de accesibilidad.
+- La capacidad de registrar los eventos de seguridad de la red (por ejemplo, las conexiones bloqueadas o aceptadas).
+- La capacidad de negar explícitamente las políticas (actualmente el modelo para NetworkPolicies es negar por defecto, con sólo la capacidad de añadir reglas de permitir).
+- La capacidad de impedir el tráfico entrante de Loopback o de Host (actualmente los Pods no pueden bloquear el acceso al host local, ni tienen la capacidad de bloquear el acceso desde su nodo residente).
+
+
+## {{% heading "whatsnext" %}}
+
+- Leer el artículo sobre cómo [Declarar Políticas de Red](/docs/tasks/administer-cluster/declare-network-policy/) para ver más ejemplos.
+- Ver más [recetas](https://github.com/ahmetb/kubernetes-network-policy-recipes) de escenarios comunes habilitados por los recursos de las NetworkPolicy.
diff --git a/content/es/docs/concepts/services-networking/service.md b/content/es/docs/concepts/services-networking/service.md
index 5ef231df3a5..3bd85a17012 100644
--- a/content/es/docs/concepts/services-networking/service.md
+++ b/content/es/docs/concepts/services-networking/service.md
@@ -54,7 +54,7 @@ Para aplicaciones no nativas, Kubernetes ofrece una manera de colocar un puerto
 Un Service en Kubernetes es un objeto REST, similar a un Pod. Como todos los objetos REST, puedes hacer un `Post` a una definición de un Service al servidor API para crear una nueva instancia. EL nombre de un objeto Service debe ser un [nombre RFC 1035 válido](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names).
-Por ejemplo, supongamos que tienes un conjunto de Pods en el que cada uno escucha el puerto TCP 9376 y contiene la etiqueta `app=MyApp`:
+Por ejemplo, supongamos que tienes un conjunto de Pods en el que cada uno escucha el puerto TCP 9376 y contiene la etiqueta `app.kubernetes.io/name=MyApp`:

```yaml
apiVersion: v1
@@ -63,14 +63,14 @@ metadata:
  name: mi-servicio
spec:
  selector:
-    app: MyApp
+    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```

-Esta especificación crea un nuevo objeto Service llamado "mi-servicio", que apunta via TCP al puerto 9376 de cualquier Pod con la etiqueta `app=MyApp`.
+Esta especificación crea un nuevo objeto Service llamado "mi-servicio", que apunta vía TCP al puerto 9376 de cualquier Pod con la etiqueta `app.kubernetes.io/name=MyApp`.

Kubernetes asigna una dirección IP a este Service (algunas veces llamada "Cluster IP"), la cual es usada por los proxies de los Services (mira [IPs Virtuales y proxies de servicios](#virtual-ips-and-service-proxies) abajo).

@@ -95,7 +95,7 @@ Los Services comúnmente abstraen el acceso a los Pods de Kubernetes, pero tambi

Por ejemplo:

-- Quieres tener un clúster de base de datos externo en producción, pero en el ambiente de pruebas quieres usar tus propias bases de datos.
+- Quieres tener un clúster de base de datos externo en producción, pero en el entorno de pruebas quieres usar tus propias bases de datos.
- Quieres apuntar tu Service a un Service en un {{< glossary_tooltip term_id="namespace" text="Namespace" >}} o en un clúster diferente.
- Estás migrando tu carga de trabajo a Kubernetes. Mientras evalúas la aproximación, corres solo una porción de tus backends en Kubernetes.

@@ -297,7 +297,7 @@ spec:
{{< note >}}
Como con los {{< glossary_tooltip term_id="name" text="nombres">}} de Kubernetes en general, los nombres para los puertos deben contener alfanuméricos en minúsculas y `-`. Los nombres de puertos deben comenzar y terminar con un carácter alfanumérico.
-Por ejemplo, los nombres `123-abc` and `web` son válidos, pero `123_abc` y `-web` no lo son.
+Por ejemplo, los nombres `123-abc` y `web` son válidos, pero `123_abc` y `-web` no lo son.
{{< /note >}}

## Eligiendo tu propia dirección IP

@@ -418,7 +418,7 @@ Si quieres un número de puerto específico, puedes especificar un valor en el c
Esto significa que necesitas prestar atención a posibles colisiones de puerto por tu cuenta. También tienes que usar un número de puerto válido, uno que esté dentro del rango configurado para uso del NodePort.

-Usar un NodePort te da libertad para configurar tu propia solución de balanceo de cargas, para configurar ambientes que no soportan Kubernetes del todo, o para exponer uno o más IPs del nodo directamente.
+Usar un NodePort te da libertad para configurar tu propia solución de balanceo de cargas, para configurar entornos que no soportan Kubernetes del todo, o para exponer uno o más IPs del nodo directamente.

Ten en cuenta que este Service es visible como `<NodeIP>:spec.ports[*].nodePort` y `.spec.clusterIP:spec.ports[*].port`. Si la bandera `--nodeport-addresses` está configurada para el kube-proxy o para el campo equivalente en el fichero de configuración, `<NodeIP>` sería la IP filtrada del nodo. Si
@@ -514,7 +514,7 @@ El valor de `spec.loadBalancerClass` debe ser un identificador de etiqueta, con

#### Balanceador de carga interno

-En un ambiente mixto algunas veces es necesario enrutar el tráfico desde Services dentro del mismo bloque (virtual) de direcciones de red.
+En un entorno mixto algunas veces es necesario enrutar el tráfico desde Services dentro del mismo bloque (virtual) de direcciones de red.

En un entorno de split-horizon DNS necesitarías dos Services para ser capaz de enrutar tanto el tráfico externo como el interno a tus Endpoints.

@@ -646,7 +646,7 @@ HTTP y HTTPS seleccionan un proxy de capa 7: el ELB termina la conexión con el

TCP y SSL seleccionan un proxy de capa 4: el ELB reenvía el tráfico sin modificar los encabezados.
-En un ambiente mixto donde algunos puertos están asegurados y otros se dejan sin encriptar, puedes usar una de las siguientes anotaciones:
+En un entorno mixto donde algunos puertos están asegurados y otros se dejan sin encriptar, puedes usar una de las siguientes anotaciones:

```yaml
metadata:
@@ -799,11 +799,11 @@ NLB solo funciona con ciertas clases de instancias; mira la [documentación AWS]

A diferencia de los balanceadores de cargas, el balanceador de carga de red (NLB) reenvía la dirección IP del cliente a través del nodo. Si el campo `.spec.externalTrafficPolicy` está fijado a `Cluster`, la dirección IP del cliente no es propagada a los Pods finales.

Al fijar `.spec.externalTrafficPolicy` en `Local`, la dirección IP del cliente se propaga a los Pods finales,
-pero esto puede resultar a una distribución de tráfico desigual. Los nodos sin ningún Pod para un Service particular de tipo LoadBalancer fallarán en la comprobación de estado del grupo objetivo del NLB en el puerto `.spec.healthCheckNodePort` y no recibirán ningún tráfico.
+pero esto puede resultar en una distribución de tráfico desigual. Los nodos sin ningún Pod para un Service particular de tipo LoadBalancer fallarán en la comprobación de estado del grupo objetivo del NLB en el puerto `.spec.healthCheckNodePort` y no recibirán ningún tráfico.

-Para conseguir trafico equilibrado, usa un DaemonSet o especifica [pod anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) para no localizar en el mismo nodo.
+Para conseguir tráfico equilibrado, usa un DaemonSet o especifica [pod anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) para no ubicar los Pods en el mismo nodo.
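A modo de referencia, la sugerencia de anti-afinidad anterior puede esbozarse así. Es un boceto mínimo con nombres hipotéticos (el Deployment `mi-backend` y la imagen no forman parte de la documentación original):

```yaml
# Boceto: repartir las réplicas del backend entre nodos distintos,
# de modo que la comprobación de estado del NLB encuentre un Pod en cada nodo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mi-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: MyApp
  template:
    metadata:
      labels:
        app.kubernetes.io/name: MyApp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: MyApp
              # Un nodo por réplica: dos Pods con esta etiqueta no comparten hostname.
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: registry.example/mi-app:1.0
```

Con `topologyKey: kubernetes.io/hostname`, el planificador evita ubicar dos réplicas en el mismo nodo.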
-También puedes usar Services NLB con la anotación del [balanceador de carga interno](/docs/concepts/services-networking/service/#internal-load-balancer)
+También puedes usar Services NLB con la anotación del [balanceador de carga interno](/docs/concepts/services-networking/service/#internal-load-balancer).

Para permitir que el tráfico del cliente alcance las instancias detrás del NLB, los grupos de seguridad del Nodo se modifican con las siguientes reglas de IP:

@@ -822,7 +822,7 @@ spec:
```

{{< note >}}
-Si no se establece `.spec.loadBalancerSourceRanges`, Kubernetes permite el tráfico
+Si no se establece `.spec.loadBalancerSourceRanges`, Kubernetes permite el tráfico
desde `0.0.0.0/0` a los Grupos de Seguridad del Nodo. Si los nodos tienen direcciones IP públicas, ten en cuenta que el tráfico que no viene del NLB también puede alcanzar todas las instancias en esos grupos de seguridad modificados.
{{< /note >}}

@@ -865,9 +865,9 @@ Hay otras anotaciones para administrar balanceadores de carga en la nube en TKE

### Tipo ExternalName {#externalname}

-Los Services de tipo ExternalName mapean un Service a un nombre DNS, no a un selector típico como `mi-servicio` o `cassandra`. Estos Services se especifican con el parámetro `spec.externalName`.
+Los Services de tipo ExternalName mapean un Service a un nombre DNS, no a un selector típico como `mi-servicio` o `cassandra`. Estos Services se especifican con el parámetro `spec.externalName`.

-Esta definición de Service, por ejemplo, mapea el Service `mi-Servicio` en el namespace `prod` a `my.database.example.com`:
+Esta definición de Service, por ejemplo, mapea el Service `mi-servicio` en el namespace `prod` a `my.database.example.com`:

```yaml
apiVersion: v1
@@ -884,7 +884,7 @@ spec:

ExternalName acepta una cadena de texto IPv4, pero como un nombre DNS compuesto de dígitos, no como una dirección IP.
ExternalNames que se parecen a direcciones IPv4 no se resuelven por CoreDNS ni por ingress-nginx, ya que ExternalName se usa para especificar un nombre DNS canónico. Al fijar una dirección IP, considera usar [headless Services](#headless-services).
{{< /note >}}

-Cuando busca el host `mi-servicio.prod.svc.cluster.local`, el Service DNS del clúster devuelve un registro `CNAME` con el valor `my.database.example.com`. Acceder a `mi-servicio` funciona de la misma manera que otros Services, pero con la diferencia crucial de que la redirección ocurre a nivel del DNS en lugar reenviarlo o redirigirlo. Si posteriormente decides mover tu base de datos al clúster, puedes iniciar sus Pods, agregar selectores apropiados o endpoints, y cambiar el `type` del Service.
+Cuando buscas el host `mi-servicio.prod.svc.cluster.local`, el Service DNS del clúster devuelve un registro `CNAME` con el valor `my.database.example.com`. Acceder a `mi-servicio` funciona de la misma manera que otros Services, pero con la diferencia crucial de que la redirección ocurre a nivel del DNS en lugar de reenviarlo o redirigirlo. Si posteriormente decides mover tu base de datos al clúster, puedes iniciar sus Pods, agregar selectores apropiados o endpoints, y cambiar el `type` del Service.

{{< warning >}}
@@ -894,15 +894,15 @@ Para protocolos que usan el nombre del host esta diferencia puede llevar a error
{{< /warning >}}

{{< note >}}
-Esta sección está en deuda con el artículo de blog [Kubernetes Tips - Part 1](https://akomljen.com/kubernetes-tips-part-1/) de [Alen Komljen](https://akomljen.com/).
+Esta sección está en deuda con el artículo de blog [Kubernetes Tips - Part 1](https://akomljen.com/kubernetes-tips-part-1/) de [Alen Komljen](https://akomljen.com/).
{{< /note >}}

### IPs Externas

-Si existen IPs externas que enrutan hacia uno o más nodos del clúster, los Services de Kubernetes pueden ser expuestos en esas `externalIPs`.
El tráfico que ingresa al clúster con la IP externa (como IP de destino), en el puerto del Service, será enrutado a uno de estos endpoints del Service. Las `externalIPs` no son administradas por Kubernetes y son responsabilidad del administrador del clúster.
+Si existen IPs externas que enrutan hacia uno o más nodos del clúster, los Services de Kubernetes pueden ser expuestos en esas `externalIPs`. El tráfico que ingresa al clúster con la IP externa (como IP de destino), en el puerto del Service, será enrutado a uno de estos endpoints del Service. Las `externalIPs` no son administradas por Kubernetes y son responsabilidad del administrador del clúster.

En la especificación del Service, las `externalIPs` se pueden especificar junto con cualquiera de los `ServiceTypes`.

-En el ejemplo de abajo, "`mi-servicio`" puede ser accedido por clientes en "`80.11.12.10:80`" (`externalIP:port`)
+En el ejemplo de abajo, "`mi-servicio`" puede ser accedido por clientes en "`80.11.12.10:80`" (`externalIP:port`).

```yaml
apiVersion: v1
@@ -911,7 +911,7 @@ metadata:
  name: mi-servicio
spec:
  selector:
-    app: MyApp
+    app.kubernetes.io/name: MyApp
  ports:
    - name: http
      protocol: TCP
@@ -925,21 +925,21 @@ spec:

Usar el proxy del userspace para VIPs funciona en pequeña y mediana escala, pero no escalará a clústeres muy grandes con miles de Services. La [propuesta de diseño original de portales](https://github.com/kubernetes/kubernetes/issues/1107) tiene más detalles sobre esto.

-Usar el proxy del userspace oculta la dirección IP de origen de un paquete que accede al Service. Esto hace que algún tipo de filtrado (firewalling) sea imposible.
+Usar el proxy del userspace oculta la dirección IP de origen de un paquete que accede al Service. Esto hace que algún tipo de filtrado (firewalling) sea imposible.
El modo proxy iptables no oculta IPs de origen en el clúster, pero aún tiene impacto en clientes que vienen desde un balanceador de carga o un node-port.

-El campo `Type` está diseñado como una funcionalidad anidada - cada nivel se agrega al anterior. Esto no es estrictamente requerido en todos los proveedores de la nube (ej. Google Compute Engine no necesita asignar un `NodePort` para que funcione el `LoadBalancer`, pero AWS si) pero la API actual lo requiere.
+El campo `Type` está diseñado como una funcionalidad anidada, donde cada nivel se agrega al anterior. Esto no es estrictamente requerido en todos los proveedores de la nube (ej. Google Compute Engine no necesita asignar un `NodePort` para que funcione el `LoadBalancer`, pero AWS sí), pero la API actual lo requiere.

## Implementación de IP Virtual {#the-gory-details-of-virtual-ips}

La información previa sería suficiente para muchas personas que quieren usar Services. Sin embargo, ocurren muchas cosas detrás de bastidores que valdría la pena entender.

-### Evitar colisiones
+### Evitar colisiones

Una de las principales filosofías de Kubernetes es que no debes estar expuesto a situaciones que podrían hacer que tus acciones fracasen sin que sea tu culpa. Para el diseño del recurso de Service, esto significa no obligarte a elegir tu propio número de puerto si esa elección puede colisionar con la de otra persona. Eso sería un fallo de aislamiento.

Para permitirte elegir un número de puerto en tus Services, debemos asegurarnos de que dos Services no puedan colisionar. Kubernetes lo hace asignando a cada Service su propia dirección IP.

-Para asegurarse que cada Service recibe una IP única, un asignador interno actualiza atómicamente el mapa global de asignaciones en {{< glossary_tooltip term_id="etcd" >}} antes de crear cada Service.
El objeto mapa debe existir en el registro para que los Services obtengan asignaciones de dirección IP, de lo contrario las creaciones fallarán con un mensaje indicando que la dirección IP no pudo ser asignada.
+Para asegurarse de que cada Service recibe una IP única, un asignador interno actualiza atómicamente el mapa global de asignaciones en {{< glossary_tooltip term_id="etcd" >}} antes de crear cada Service. El objeto mapa debe existir en el registro para que los Services obtengan asignaciones de dirección IP; de lo contrario, las creaciones fallarán con un mensaje indicando que la dirección IP no pudo ser asignada.

En el plano de control, un controlador en segundo plano es responsable de crear el mapa (requerido para soportar la migración desde versiones más antiguas de Kubernetes que usaban bloqueo en memoria). Kubernetes también utiliza controladores para revisar asignaciones inválidas (ej. debido a la intervención de un administrador) y para limpiar las direcciones IP que ya no son usadas por ningún Service.

@@ -947,7 +947,7 @@ En el plano de control, un controlador de trasfondo es responsable de crear el m

A diferencia de direcciones IP del Pod, que enrutan a un destino fijo, las IPs del Service no son respondidas por ningún host. En lugar de ello, el kube-proxy usa iptables (lógica de procesamiento de paquetes en Linux) para definir direcciones IP _virtuales_ que se redirigen de forma transparente cuando se necesita. Cuando el cliente se conecta con la VIP, su tráfico es transportado automáticamente al endpoint apropiado. Las variables de entorno y DNS para los Services son pobladas en términos de la dirección IP virtual del Service (y el puerto).

-Kube-proxy soporta tres modos — userspace, iptables e IPVS — los cuales operan ligeramente diferente cada uno.
+Kube-proxy soporta tres modos (userspace, iptables e IPVS), cada uno de los cuales opera de forma ligeramente diferente.
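Como referencia, el modo del kube-proxy puede seleccionarse con la opción `--proxy-mode` o con el campo `mode` de su archivo de configuración. Un esbozo mínimo, suponiendo la API de configuración `kubeproxy.config.k8s.io/v1alpha1`:

```yaml
# Boceto: elegir el modo de kube-proxy mediante su archivo de configuración.
# Los valores válidos incluyen "userspace", "iptables" e "ipvs".
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```

Si el campo `mode` se deja vacío, kube-proxy utiliza el modo disponible por defecto en la plataforma.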
#### Userspace @@ -955,11 +955,11 @@ Por ejemplo, considera la aplicación de procesamiento de imágenes descrita arr Cuando un cliente se conecta a la dirección IP virtual del Service, la regla de iptables entra en acción, y redirige los paquetes al propio puerto del proxy. El "proxy del Service" elige un backend, y comienza a redirigir el tráfico desde el cliente al backend. -Esto quiere decir que los dueños del Service pueden elegir cualquier puerto que quieran sin riesgo de colisión. Los clientes pueden conectarse a una IP y un puerto, sin estar conscientes de a cuáles Pods están accediendo. +Esto quiere decir que los dueños del Service pueden elegir cualquier puerto que quieran sin riesgo de colisión. Los clientes pueden conectarse a una IP y un puerto, sin estar conscientes de a cuáles Pods están accediendo. #### iptables -Nuevamente, considera la aplicación de procesamiento de imágenes descrita arriba. Cuando se crea el Service Backend, el plano de control de Kubernetes asigna una dirección IP virtual, por ejemplo 10.0.0.1. Asumiendo que el puerto del servicio es 1234, el Service es observado por todas las instancias del kube-proxy en el clúster. Cuando un proxy mira un nuevo Service, instala una serie de reglas de iptables que redirigen desde la dirección IP virtual a las reglas del Service. Las reglas del Service enlazan a las reglas del Endpoint que redirigen el tráfico (usando NAT de destino) a los backends. +Nuevamente, considera la aplicación de procesamiento de imágenes descrita arriba. Cuando se crea el Service Backend, el plano de control de Kubernetes asigna una dirección IP virtual, por ejemplo 10.0.0.1. Asumiendo que el puerto del servicio es 1234, el Service es observado por todas las instancias del kube-proxy en el clúster. Cuando un proxy mira un nuevo Service, instala una serie de reglas de iptables que redirigen desde la dirección IP virtual a las reglas del Service. 
Las reglas del Service enlazan a las reglas del Endpoint que redirigen el tráfico (usando NAT de destino) a los backends.

Cuando un cliente se conecta a la dirección IP virtual del Service, las reglas de iptables se aplican. A diferencia del modo proxy userspace, el kube-proxy no tiene que estar corriendo para que funcione la dirección IP virtual, y los nodos observan el tráfico que viene desde la dirección IP del cliente sin alteraciones.

@@ -1014,11 +1014,11 @@ El kube-proxy no soporta la administración de asociaciones SCTP cuando está en

Si tu proveedor de la nube lo soporta, puedes usar un Service en modo LoadBalancer para configurar un proxy invertido HTTP/HTTPS, redirigido a los Endpoints del Service.

{{< note >}}
-También puedes usar {{< glossary_tooltip term_id="ingress" >}} en lugar de un Service para exponer Services HTTP/HTTPS.
+También puedes usar {{< glossary_tooltip term_id="ingress" >}} en lugar de un Service para exponer Services HTTP/HTTPS.
{{< /note >}}

### Protocolo PROXY

-Si tu proveedor de la nube lo soporta, puedes usar un Service en modo LoadBalancer para configurar un balanceador de carga fuera de Kubernetes mismo, que redirigirá las conexiones prefijadas con [protocolo PROXY](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
+Si tu proveedor de la nube lo soporta, puedes usar un Service en modo LoadBalancer para configurar un balanceador de carga fuera del propio Kubernetes, que redirigirá las conexiones prefijadas con el [protocolo PROXY](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
El balanceador de carga enviará una serie inicial de octetos describiendo la conexión entrante, similar a este ejemplo
diff --git a/content/es/docs/concepts/storage/volume-health-monitoring.md b/content/es/docs/concepts/storage/volume-health-monitoring.md
new file mode 100644
index 00000000000..a300368f836
--- /dev/null
+++ b/content/es/docs/concepts/storage/volume-health-monitoring.md
@@ -0,0 +1,36 @@
+---
+reviewers:
+- edithturn
+- raelga
+- electrocucaracha
+title: Supervisión del Estado del Volumen
+content_type: concept
+---
+
+
+
+{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
+
+La supervisión del estado del volumen de {{< glossary_tooltip text="CSI" term_id="csi" >}} permite que los controladores de CSI detecten condiciones de volumen anómalas de los sistemas de almacenamiento subyacentes y las notifiquen como eventos en {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}} o {{< glossary_tooltip text="Pods" term_id="pod" >}}.
+
+
+
+
+
+## Supervisión del Estado del Volumen
+
+El _monitoreo del estado del volumen_ de Kubernetes es parte de cómo Kubernetes implementa la Interfaz de Almacenamiento de Contenedores (CSI). La función de supervisión del estado del volumen se implementa en dos componentes: un controlador de supervisión del estado externo y {{< glossary_tooltip term_id="kubelet" text="Kubelet" >}}.
+
+Si un controlador CSI admite la función de supervisión del estado del volumen desde el lado del controlador, se informará un evento en el {{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} (PVC) relacionado cuando se detecte una condición de volumen anormal en un volumen CSI.
+
+El {{< glossary_tooltip text="controlador" term_id="controller" >}} de estado externo también observa los eventos de falla del nodo. Se puede habilitar la supervisión de fallas de nodos configurando el indicador `enable-node-watcher` en verdadero.
Cuando el monitor de estado externo detecta un evento de falla de nodo, el controlador informa un evento en el PVC para indicar que los Pods que usan este PVC están en un nodo fallido.
+
+Si un controlador CSI es compatible con la función de monitoreo del estado del volumen desde el lado del nodo, se informará un evento en cada Pod que use el PVC cuando se detecte una condición de volumen anormal en un volumen CSI.
+
+{{< note >}}
+Se necesita habilitar la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `CSIVolumeHealth` para usar esta función desde el lado del nodo.
+{{< /note >}}
+
+## {{% heading "whatsnext" %}}
+
+Ver la [documentación del controlador CSI](https://kubernetes-csi.github.io/docs/drivers.html) para averiguar qué controladores CSI han implementado esta característica.
diff --git a/content/es/docs/concepts/storage/volumes.md b/content/es/docs/concepts/storage/volumes.md
index c8e7a9d24b8..84ee2a2be2e 100644
--- a/content/es/docs/concepts/storage/volumes.md
+++ b/content/es/docs/concepts/storage/volumes.md
@@ -638,7 +638,7 @@ Mira el [ ejemplo NFS ](https://github.com/kubernetes/examples/tree/master/stagi

### persistentVolumeClaim {#persistentvolumeclaim}

-Un volumen `persistenceVolumeClain` se utiliza para montar un [PersistentVolume](/docs/concepts/storage/persistent-volumes/) en tu Pod. PersistentVolumeClaims son una forma en que el usuario "reclama" almacenamiento duradero (como un PersistentDisk GCE o un volumen ISCSI) sin conocer los detalles del ambiente de la nube en particular.
+Un volumen `persistentVolumeClaim` se utiliza para montar un [PersistentVolume](/docs/concepts/storage/persistent-volumes/) en tu Pod. Los PersistentVolumeClaims son una forma en que el usuario "reclama" almacenamiento duradero (como un PersistentDisk de GCE o un volumen iSCSI) sin conocer los detalles del entorno de la nube en particular.
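El párrafo anterior puede ilustrarse con un boceto mínimo (nombres hipotéticos: el Pod, la imagen y el PVC `mi-pvc` no forman parte de la documentación original) de un Pod que monta el almacenamiento reclamado por un PersistentVolumeClaim existente:

```yaml
# Boceto: un Pod que monta un PersistentVolumeClaim llamado "mi-pvc".
apiVersion: v1
kind: Pod
metadata:
  name: mi-pod
spec:
  containers:
    - name: app
      image: registry.example/mi-app:1.0
      volumeMounts:
        # El contenedor ve el almacenamiento reclamado en /datos.
        - mountPath: /datos
          name: mi-volumen
  volumes:
    - name: mi-volumen
      persistentVolumeClaim:
        claimName: mi-pvc
```

El Pod solo hace referencia al nombre del claim; el PersistentVolume concreto que lo respalda queda oculto para el usuario.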
Mira la información sobre [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) para más detalles.
diff --git a/content/es/docs/concepts/workloads/controllers/garbage-collection.md b/content/es/docs/concepts/workloads/controllers/garbage-collection.md
index bc18541ce2f..9653e6f582c 100644
--- a/content/es/docs/concepts/workloads/controllers/garbage-collection.md
+++ b/content/es/docs/concepts/workloads/controllers/garbage-collection.md
@@ -170,7 +170,7 @@ Seguimiento en [#26120](https://github.com/kubernetes/kubernetes/issues/26120)

## {{% heading "whatsnext" %}}

-[Documento de Diseño 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)
+[Documento de Diseño 1](https://git.k8s.io/design-proposals-archive/api-machinery/garbage-collection.md)

-[Documento de Diseño 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)
+[Documento de Diseño 2](https://git.k8s.io/design-proposals-archive/api-machinery/synchronous-garbage-collection.md)
diff --git a/content/es/docs/concepts/workloads/pods/init-containers.md b/content/es/docs/concepts/workloads/pods/init-containers.md
index fafb6ae2f60..2ae5323771d 100644
--- a/content/es/docs/concepts/workloads/pods/init-containers.md
+++ b/content/es/docs/concepts/workloads/pods/init-containers.md
@@ -113,7 +113,7 @@ kind: Pod
metadata:
  name: myapp-pod
  labels:
-    app: myapp
+    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
@@ -165,7 +165,7 @@ El resultado es similar a esto:
Name:         myapp-pod
Namespace:    default
[...]
-Labels:       app=myapp
+Labels:       app.kubernetes.io/name=MyApp
Status:       Pending
[...]
Init Containers:
diff --git a/content/es/docs/concepts/workloads/pods/podpreset.md b/content/es/docs/concepts/workloads/pods/podpreset.md
index d4406f56ea8..76110e5ee84 100644
--- a/content/es/docs/concepts/workloads/pods/podpreset.md
+++ b/content/es/docs/concepts/workloads/pods/podpreset.md
@@ -92,4 +92,4 @@ modificación del Pod Preset. En estos casos, se puede añadir una observación

Ver [Inyectando datos en un Pod usando PodPreset](/docs/tasks/inject-data-application/podpreset/)

-Para más información sobre los detalles de los trasfondos, consulte la [propuesta de diseño de PodPreset](https://git.k8s.io/community/contributors/design-proposals/service-catalog/pod-preset.md).
+Para más información sobre los detalles de fondo, consulta la [propuesta de diseño de PodPreset](https://git.k8s.io/design-proposals-archive/service-catalog/pod-preset.md).
diff --git a/content/es/docs/reference/_index.md b/content/es/docs/reference/_index.md
index a3625a903ee..8ea72cfb26a 100644
--- a/content/es/docs/reference/_index.md
+++ b/content/es/docs/reference/_index.md
@@ -59,6 +59,6 @@ En estos momento, las librerías con soporte oficial son:

Un archivo de los documentos de diseño para la funcionalidad de Kubernetes.

-Puedes empezar por [Arquitectura de Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) y [Vista general del diseño de Kubernetes](https://git.k8s.io/community/contributors/design-proposals).
+Puedes empezar por [Arquitectura de Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/architecture.md) y [Vista general del diseño de Kubernetes](https://git.k8s.io/community/contributors/design-proposals).
diff --git a/content/es/docs/setup/release/building-from-source.md b/content/es/docs/setup/release/building-from-source.md index 42db05df4c3..0ff6152b97a 100644 --- a/content/es/docs/setup/release/building-from-source.md +++ b/content/es/docs/setup/release/building-from-source.md @@ -30,7 +30,7 @@ cd kubernetes make release ``` -Para más detalles sobre el proceso de compilación de una release, visita la carpeta kubernetes/kubernetes [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) +Para más detalles sobre el proceso de compilación de una release, visita la carpeta kubernetes/kubernetes [`build`](http://releases.k8s.io/master/build/) diff --git a/content/es/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/es/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md index 9e7097ffbca..080a91df302 100644 --- a/content/es/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md +++ b/content/es/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md @@ -52,6 +52,6 @@ El servidor reune métricas de la Summary API, que es expuesta por el [Kubelet]( El servidor de métricas se añadió a la API de Kubernetes utilizando el [Kubernetes aggregator](/docs/concepts/api-extension/apiserver-aggregation/) introducido en Kubernetes 1.7. -Puedes aprender más acerca del servidor de métricas en el [documento de diseño](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md). +Puedes aprender más acerca del servidor de métricas en el [documento de diseño](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/metrics-server.md). 
diff --git a/content/es/docs/tutorials/hello-minikube.md b/content/es/docs/tutorials/hello-minikube.md index 67d7bf7afae..fb40f2fdb0b 100644 --- a/content/es/docs/tutorials/hello-minikube.md +++ b/content/es/docs/tutorials/hello-minikube.md @@ -17,7 +17,7 @@ card: Este tutorial muestra como ejecutar una aplicación Node.js Hola Mundo en Kubernetes utilizando [Minikube](/docs/setup/learning-environment/minikube) y Katacoda. -Katacoda provee un ambiente de Kubernetes desde el navegador. +Katacoda provee un entorno de Kubernetes desde el navegador. {{< note >}} También se puede seguir este tutorial si se ha instalado [Minikube localmente](/docs/tasks/tools/install-minikube/). @@ -63,9 +63,9 @@ Para más información sobre el comando `docker build`, lea la [documentación d minikube dashboard ``` -3. Solo en el ambiente de Katacoda: En la parte superior de la terminal, haz clic en el símbolo + y luego clic en **Select port to view on Host 1**. +3. Solo en el entorno de Katacoda: En la parte superior de la terminal, haz clic en el símbolo + y luego clic en **Select port to view on Host 1**. -4. Solo en el ambiente de Katacoda: Escribir `30000`, y hacer clic en **Display Port**. +4. Solo en el entorno de Katacoda: Escribir `30000`, y hacer clic en **Display Port**. ## Crear un Deployment @@ -154,15 +154,15 @@ Por defecto, el Pod es accedido por su dirección IP interna dentro del clúster minikube service hello-node ``` -4. Solo en el ambiente de Katacoda: Hacer clic sobre el símbolo +, y luego en **Select port to view on Host 1**. +4. Solo en el entorno de Katacoda: Hacer clic sobre el símbolo +, y luego en **Select port to view on Host 1**. -5. Solo en el ambiente de Katacoda: Anotar el puerto de 5 dígitos ubicado al lado del valor de `8080` en el resultado de servicios. Este número de puerto es generado aleatoriamente y puede ser diferente al indicado en el ejemplo. Escribir el número de puerto en el cuadro de texto y hacer clic en Display Port. 
Usando el ejemplo anterior, usted escribiría `30369`.
+5. Solo en el entorno de Katacoda: Anotar el puerto de 5 dígitos ubicado al lado del valor de `8080` en el resultado de servicios. Este número de puerto es generado aleatoriamente y puede ser diferente al indicado en el ejemplo. Escribir el número de puerto en el cuadro de texto y hacer clic en Display Port. Usando el ejemplo anterior, usted escribiría `30369`.

Esto abre una ventana de navegador que contiene la aplicación y muestra el mensaje "Hello World".

## Habilitar Extensiones

-Minikube tiene un conjunto de {{< glossary_tooltip text="Extensiones" term_id="addons" >}} que pueden ser habilitados y desahabilitados en el ambiente local de Kubernetes.
+Minikube tiene un conjunto de {{< glossary_tooltip text="Extensiones" term_id="addons" >}} que pueden ser habilitadas y deshabilitadas en el entorno local de Kubernetes.

1. Listar las extensiones soportadas actualmente:
diff --git a/content/es/examples/service/networking/network-policy-allow-all-egress.yaml b/content/es/examples/service/networking/network-policy-allow-all-egress.yaml
new file mode 100644
index 00000000000..42b2a2a2966
--- /dev/null
+++ b/content/es/examples/service/networking/network-policy-allow-all-egress.yaml
@@ -0,0 +1,11 @@
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-all-egress
+spec:
+  podSelector: {}
+  egress:
+  - {}
+  policyTypes:
+  - Egress
diff --git a/content/es/examples/service/networking/network-policy-allow-all-ingress.yaml b/content/es/examples/service/networking/network-policy-allow-all-ingress.yaml
new file mode 100644
index 00000000000..462912dae4e
--- /dev/null
+++ b/content/es/examples/service/networking/network-policy-allow-all-ingress.yaml
@@ -0,0 +1,11 @@
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-all-ingress
+spec:
+  podSelector: {}
+  ingress:
+  - {}
+  policyTypes:
+  - Ingress
diff --git a/content/es/examples/service/networking/network-policy-default-deny-all.yaml b/content/es/examples/service/networking/network-policy-default-deny-all.yaml
new file mode 100644
index 00000000000..5c0086bd71e
--- /dev/null
+++ b/content/es/examples/service/networking/network-policy-default-deny-all.yaml
@@ -0,0 +1,10 @@
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all
+spec:
+  podSelector: {}
+  policyTypes:
+  - Ingress
+  - Egress
diff --git a/content/es/examples/service/networking/network-policy-default-deny-egress.yaml b/content/es/examples/service/networking/network-policy-default-deny-egress.yaml
new file mode 100644
index 00000000000..a4659e14174
--- /dev/null
+++ b/content/es/examples/service/networking/network-policy-default-deny-egress.yaml
@@ -0,0 +1,9 @@
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-egress
+spec:
+  podSelector: {}
+  policyTypes:
+  - Egress
diff --git a/content/es/examples/service/networking/network-policy-default-deny-ingress.yaml b/content/es/examples/service/networking/network-policy-default-deny-ingress.yaml
new file mode 100644
index 00000000000..e8238024878
--- /dev/null
+++ b/content/es/examples/service/networking/network-policy-default-deny-ingress.yaml
@@ -0,0 +1,9 @@
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+spec:
+  podSelector: {}
+  policyTypes:
+  - Ingress
diff --git a/content/es/examples/service/networking/networkpolicy.yaml b/content/es/examples/service/networking/networkpolicy.yaml
new file mode 100644
index 00000000000..e91eed2f67e
--- /dev/null
+++ b/content/es/examples/service/networking/networkpolicy.yaml
@@ -0,0 +1,35 @@
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: test-network-policy
+  namespace: default
+spec:
+  podSelector:
+    matchLabels:
+      role: db
+  policyTypes:
+  - Ingress
+  - Egress
+  ingress:
+  - from:
+    - ipBlock:
+        cidr: 172.17.0.0/16
+ except: + - 172.17.1.0/24 + - namespaceSelector: + matchLabels: + project: myproject + - podSelector: + matchLabels: + role: frontend + ports: + - protocol: TCP + port: 6379 + egress: + - to: + - ipBlock: + cidr: 10.0.0.0/24 + ports: + - protocol: TCP + port: 5978 + diff --git a/content/es/partners/_index.html b/content/es/partners/_index.html index 87ae9d5349a..afb2f6128f1 100644 --- a/content/es/partners/_index.html +++ b/content/es/partners/_index.html @@ -7,85 +7,48 @@ cid: partners ---
[HTML markup of this hunk was lost in extraction; the recoverable text follows. The card copy is unchanged between the removed and the replacement markup.]

Kubernetes trabaja con socios para crear una base de código sólida y vibrante que admita un espectro de plataformas complementarias.

- Kubernetes Certified Service Providers: Proveedores de servicios con amplia experiencia ayudando a las empresas a adoptar Kubernetes con éxito. (Conviértete en KCSP)
- Certified Kubernetes Distributions, Hosted Platforms, and Installers: La conformidad del software garantiza que la versión de Kubernetes de cada proveedor sea compatible con las API requeridas. (Conviértete en Certified Kubernetes)
- Kubernetes Training Partners: Partners de formación que ofrecen cursos de alta calidad y con una amplia experiencia en formación de tecnologías nativas de la nube. (Conviértete en KTP)

Added by the new markup:
+        {{< cncf-landscape helpers=true >}}
- diff --git a/content/fr/docs/concepts/containers/container-environment.md b/content/fr/docs/concepts/containers/container-environment.md index adad1ab64a3..d73e26442db 100644 --- a/content/fr/docs/concepts/containers/container-environment.md +++ b/content/fr/docs/concepts/containers/container-environment.md @@ -49,7 +49,7 @@ FOO_SERVICE_PORT= ``` Les services ont des adresses IP dédiées et sont disponibles pour le conteneur avec le DNS, -si le [module DNS](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) est activé.  +si le [module DNS](http://releases.k8s.io/master/cluster/addons/dns/) est activé.  diff --git a/content/fr/docs/concepts/overview/what-is-kubernetes.md b/content/fr/docs/concepts/overview/what-is-kubernetes.md index 07a6c738d80..e962c05dd7e 100644 --- a/content/fr/docs/concepts/overview/what-is-kubernetes.md +++ b/content/fr/docs/concepts/overview/what-is-kubernetes.md @@ -46,7 +46,7 @@ C'est pourquoi Kubernetes a également été conçu pour servir de plate-forme e De plus, le [plan de contrôle Kubernetes (control plane)](/docs/concepts/overview/components/) est construit sur les mêmes [APIs](/docs/reference/using-api/api-overview/) que celles accessibles aux développeurs et utilisateurs. -Les utilisateurs peuvent écrire leurs propres contrôleurs (controllers), tels que les [ordonnanceurs (schedulers)](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md), +Les utilisateurs peuvent écrire leurs propres contrôleurs (controllers), tels que les [ordonnanceurs (schedulers)](https://github.com/kubernetes/community/blob/master/contributors/devel/scheduler.md), avec [leurs propres APIs](/docs/concepts/api-extension/custom-resources/) qui peuvent être utilisés par un [outil en ligne de commande](/docs/user-guide/kubectl-overview/). 
Ce choix de [conception](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) a permis de construire un ensemble d'autres systèmes par dessus Kubernetes. diff --git a/content/fr/docs/concepts/services-networking/service.md b/content/fr/docs/concepts/services-networking/service.md index 9d8484b18e7..d8587068fc7 100644 --- a/content/fr/docs/concepts/services-networking/service.md +++ b/content/fr/docs/concepts/services-networking/service.md @@ -289,7 +289,7 @@ Kubernetes prend en charge 2 modes principaux de recherche d'un service: les var ### Variables d'environnement Lorsqu'un pod est exécuté sur un nœud, le kubelet ajoute un ensemble de variables d'environnement pour chaque service actif. -Il prend en charge à la fois les variables [Docker links](https://docs.docker.com/userguide/dockerlinks/) (voir [makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49)) et plus simplement les variables `{SVCNAME}_SERVICE_HOST` et `{SVCNAME}_SERVICE_PORT`, où le nom du service est en majuscules et les tirets sont convertis en underscore. +Il prend en charge à la fois les variables [Docker links](https://docs.docker.com/userguide/dockerlinks/) (voir [makeLinkVariables](http://releases.k8s.io/master/pkg/kubelet/envvars/envvars.go#L49)) et plus simplement les variables `{SVCNAME}_SERVICE_HOST` et `{SVCNAME}_SERVICE_PORT`, où le nom du service est en majuscules et les tirets sont convertis en underscore. 
Par exemple, le service `redis-master` qui expose le port TCP 6379 et a reçu l'adresse IP de cluster 10.0.0.11, produit les variables d'environnement suivantes: diff --git a/content/fr/docs/concepts/storage/volumes.md b/content/fr/docs/concepts/storage/volumes.md index 9fc229b5f27..596bb846447 100644 --- a/content/fr/docs/concepts/storage/volumes.md +++ b/content/fr/docs/concepts/storage/volumes.md @@ -137,7 +137,7 @@ Afin d'utiliser cette fonctionnalité, le [Pilote AWS EBS CSI](https://github.co Un type de volume `azureDisk` est utilisé pour monter un disque de données ([Data Disk](https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-about-disks-vhds/)) dans un Pod. -Plus de détails sont disponibles [ici](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/azure_disk/README.md). +Plus de détails sont disponibles [ici](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_disk/README.md). #### Migration CSI @@ -150,7 +150,7 @@ Afin d'utiliser cette fonctionnalité, le [Pilote Azure Disk CSI](https://github Un type de volume `azureFile` est utilisé pour monter un volume de fichier Microsoft Azure (SMB 2.1 et 3.0) dans un Pod. -Plus de détails sont disponibles [ici](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/azure_file/README.md). +Plus de détails sont disponibles [ici](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_file/README.md). #### Migration CSI @@ -170,7 +170,7 @@ CephFS peut être monté plusieurs fois en écriture simultanément. Vous devez exécuter votre propre serveur Ceph avec le partage exporté avant de pouvoir l'utiliser. {{< /caution >}} -Voir [l'exemple CephFS](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/cephfs/) pour plus de détails. +Voir [l'exemple CephFS](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/) pour plus de détails. 
### cinder {#cinder} @@ -315,7 +315,7 @@ Si plusieurs WWNs sont spécifiés, targetWWNs s'attend à ce que ces WWNs provi Vous devez configurer un zonage FC SAN pour allouer et masquer au préalable ces LUNs (volumes) aux cibles WWNs afin que les hôtes Kubernetes puissent y accéder. {{< /caution >}} -Voir [l'exemple FC](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/fibre_channel) pour plus de détails. +Voir [l'exemple FC](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) pour plus de détails. ### flocker {#flocker} @@ -330,7 +330,7 @@ Cela signifie que les données peuvent être transmises entre les Pods selon les Vous devez exécuter votre propre installation de Flocker avant de pouvoir l'utiliser. {{< /caution >}} -Voir [l'exemple Flocker](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/flocker) pour plus de détails. +Voir [l'exemple Flocker](https://github.com/kubernetes/examples/tree/master/staging/volumes/flocker) pour plus de détails. ### gcePersistentDisk {#gcepersistentdisk} @@ -465,7 +465,7 @@ GlusterFS peut être monté plusieurs fois en écriture simultanément. Vous devez exécuter votre propre installation de GlusterFS avant de pouvoir l'utiliser. {{< /caution >}} -Voir [l'exemple GlusterFS](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/glusterfs) pour plus de détails. +Voir [l'exemple GlusterFS](https://github.com/kubernetes/examples/tree/master/volumes/glusterfs) pour plus de détails. ### hostPath {#hostpath} @@ -537,7 +537,7 @@ Une fonctionnalité de iSCSI est qu'il peut être monté en lecture seule par pl Cela signifie que vous pouvez préremplir un volume avec votre jeu de données et l'exposer en parallèle à partir d'autant de Pods que nécessaire. Malheureusement, les volumes iSCSI peuvent seulement être montés par un seul consommateur en mode lecture-écriture - les écritures simultanées ne sont pas autorisées. 
-Voir [l'exemple iSCSI](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/iscsi) pour plus de détails. +Voir [l'exemple iSCSI](https://github.com/kubernetes/examples/tree/master/volumes/iscsi) pour plus de détails. ### local {#local} @@ -605,7 +605,7 @@ Cela signifie qu'un volume NFS peut être prérempli avec des données et que le Vous devez exécuter votre propre serveur NFS avec le partage exporté avant de pouvoir l'utiliser. {{< /caution >}} -Voir [l'exemple NFS](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/nfs) pour plus de détails. +Voir [l'exemple NFS](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) pour plus de détails. ### persistentVolumeClaim {#persistentvolumeclaim} @@ -624,7 +624,7 @@ Actuellement, les types de sources de volume suivantes peuvent être projetés : - [`configMap`](#configmap) - `serviceAccountToken` -Toutes les sources doivent se trouver dans le même namespace que celui du Pod. Pour plus de détails, voir le [document de conception tout-en-un ](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/node/all-in-one-volume.md). +Toutes les sources doivent se trouver dans le même namespace que celui du Pod. Pour plus de détails, voir le [document de conception tout-en-un ](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/all-in-one-volume.md). La projection des jetons de compte de service (service account) est une fonctionnalité introduite dans Kubernetes 1.11 et promue en Beta dans la version 1.12. Pour activer cette fonctionnalité dans la version 1.11, il faut configurer explicitement la ["feature gate" `TokenRequestProjection`](/docs/reference/command-line-tools-reference/feature-gates/) à "True". @@ -776,7 +776,7 @@ spec: Il faut s'assurer d'avoir un PortworxVolume existant avec le nom `pxvol` avant de l'utiliser dans le Pod. 
{{< /caution >}} -Plus de détails et d'exemples peuvent être trouvé [ici](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/portworx/README.md). +Plus de détails et d'exemples peuvent être trouvé [ici](https://github.com/kubernetes/examples/tree/master/staging/volumes/portworx/README.md). ### quobyte {#quobyte} @@ -804,7 +804,7 @@ Une fonctionnalité de RBD est qu'il peut être monté en lecture seule par plus Cela signifie que vous pouvez préremplir un volume avec votre jeu de données et l'exposer en parallèle à partir d'autant de Pods que nécessaire. Malheureusement, les volumes RBD peuvent seulement être montés par un seul consommateur en mode lecture-écriture - les écritures simultanées ne sont pas autorisées. -Voir [l'exemple RBD](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/rbd) pour plus de détails. +Voir [l'exemple RBD](https://github.com/kubernetes/examples/tree/master/volumes/rbd) pour plus de détails. ### scaleIO {#scaleio} @@ -842,7 +842,7 @@ spec: fsType: xfs ``` -Pour plus de détails, consulter [les exemples ScaleIO](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/scaleio). +Pour plus de détails, consulter [les exemples ScaleIO](https://github.com/kubernetes/examples/tree/master/staging/volumes/scaleio). 
### secret {#secret} diff --git a/content/fr/docs/concepts/workloads/controllers/statefulset.md b/content/fr/docs/concepts/workloads/controllers/statefulset.md index f223a8432f6..14221ba82af 100644 --- a/content/fr/docs/concepts/workloads/controllers/statefulset.md +++ b/content/fr/docs/concepts/workloads/controllers/statefulset.md @@ -32,7 +32,7 @@ Un [Deployment](/fr/docs/concepts/workloads/controllers/deployment/) ou ## Limitations -* Le stockage pour un Pod donné doit être provisionné soit par un [approvisionneur de PersistentVolume](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/README.md) basé sur un `storage class` donné, soit pré-provisionné par un admin. +* Le stockage pour un Pod donné doit être provisionné soit par un [approvisionneur de PersistentVolume](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/README.md) basé sur un `storage class` donné, soit pré-provisionné par un admin. * Supprimer et/ou réduire l'échelle d'un StatefulSet à zéro ne supprimera *pas* les volumes associés avec le StatefulSet. Ceci est fait pour garantir la sécurité des données, ce qui a généralement plus de valeur qu'une purge automatique de toutes les ressources relatives à un StatefulSet. * Les StatefulSets nécessitent actuellement un [Service Headless](/fr/docs/concepts/services-networking/service/#headless-services) qui est responsable de l'identité réseau des Pods. Vous êtes responsable de la création de ce Service. * Les StatefulSets ne fournissent aucune garantie de la terminaison des pods lorsqu'un StatefulSet est supprimé. Pour avoir une terminaison ordonnée et maîtrisée des pods du StatefulSet, il est possible de réduire l'échelle du StatefulSet à 0 avant de le supprimer. 
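The statefulset.md hunk above notes that StatefulSets require a pre-existing headless Service for Pod network identity, and that scaling to zero before deletion gives ordered termination. A minimal sketch of that pairing, with illustrative names (`nginx`, `web`), looks like this:

```yaml
# Headless Service: clusterIP: None means no virtual IP; DNS resolves
# directly to the Pod IPs, giving each Pod a stable name such as
# web-0.nginx.<namespace>.svc.cluster.local. Names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - name: web
      port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx       # must reference the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry.k8s.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
```

Per the limitation quoted in the hunk, deleting this StatefulSet would not remove its PersistentVolumes, and `kubectl scale statefulset web --replicas=0` before deletion gives an ordered, controlled shutdown of the Pods.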
diff --git a/content/fr/docs/contribute/localization.md b/content/fr/docs/contribute/localization.md index 91667afdd24..332aeb4ea44 100644 --- a/content/fr/docs/contribute/localization.md +++ b/content/fr/docs/contribute/localization.md @@ -147,7 +147,7 @@ La dernière version est **{{< latest-version >}}**, donc la branche de la relea ### Chaînes de sites en i18n/ -Les localisations doivent inclure le contenu des éléments suivants [`i18n/en.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) dans un nouveau fichier spécifique à la langue. +Les localisations doivent inclure le contenu des éléments suivants [`i18n/en.toml`](https://github.com/kubernetes/website/blob/main/i18n/en.toml) dans un nouveau fichier spécifique à la langue. Prenons l'allemand comme exemple : `i18n/de.toml`. Ajouter un nouveau fichier de localisation dans `i18n/`. Par exemple, avec l'allemand (de) : @@ -230,5 +230,3 @@ Une fois qu'une traduction répond aux exigences de logistique et à une couvert - Activer la sélection de la langue sur le site Web - Publier la disponibilité de la traduction via les canaux de la [Cloud Native Computing Foundation](https://www.cncf.io/), y compris sur le blog de [Kubernetes](https://kubernetes.io/blog/). - - diff --git a/content/fr/docs/contribute/participating.md b/content/fr/docs/contribute/participating.md index 0658be231f6..4fe25838593 100644 --- a/content/fr/docs/contribute/participating.md +++ b/content/fr/docs/contribute/participating.md @@ -102,7 +102,7 @@ Pour en savoir plus sur comment devenir un relecteur SIG Docs et sur les respons Lorsque vous remplissez les [conditions requises](https://github.com/kubernetes/community/blob/master/community-membership.md#reviewer), vous pouvez devenir un relecteur SIG Docs. Les relecteurs d'autres SIG doivent demander séparément le statut de relecteur dans le SIG Docs. 
-Pour postuler, ouvrez une pull request et ajoutez vous à la section `reviewers` du fichier [top-level OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) dans le dépôt `kubernetes/website`. +Pour postuler, ouvrez une pull request et ajoutez vous à la section `reviewers` du fichier [top-level OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) dans le dépôt `kubernetes/website`. Affectez la PR à un ou plusieurs approbateurs SIG Docs. Si votre pull request est approuvée, vous êtes maintenant un relecteur SIG Docs. @@ -130,7 +130,7 @@ Pour en savoir plus sur comment devenir un approbateur SIG Docs et sur les respo Lorsque vous remplissez les [conditions requises](https://github.com/kubernetes/community/blob/master/community-membership.md#approver), vous pouvez devenir un approbateur SIG Docs. Les approbateurs appartenant à d'autres SIG doivent demander séparément le statut d'approbateur dans SIG Docs. -Pour postuler, ouvrez une pull request pour vous ajouter à la section `approvers` du fichier [top-level OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) dans le dépot `kubernetes/website`. +Pour postuler, ouvrez une pull request pour vous ajouter à la section `approvers` du fichier [top-level OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) dans le dépot `kubernetes/website`. Affectez la PR à un ou plusieurs approbateurs SIG Docs. Si votre Pull Request est approuvée, vous êtes à présent approbateur SIG Docs. @@ -184,9 +184,9 @@ Le [dépôt du site web Kubernetes](https://github.com/kubernetes/website) utili - blunderbuss - approve -Ces deux plugins utilisent les fichiers [OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) et [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES) à la racine du dépôt Github `kubernetes/website` pour contrôler comment prow fonctionne. 
+Ces deux plugins utilisent les fichiers [OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) et [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) à la racine du dépôt Github `kubernetes/website` pour contrôler comment prow fonctionne. -Un fichier [OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) contient une liste de personnes qui sont des relecteurs et des approbateurs SIG Docs. +Un fichier [OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) contient une liste de personnes qui sont des relecteurs et des approbateurs SIG Docs. Les fichiers OWNERS existent aussi dans les sous-dossiers, et peuvent ignorer qui peut agir en tant que relecteur ou approbateur des fichiers de ce sous-répertoire et de ses descendants. Pour plus d'informations sur les fichiers OWNERS en général, voir [OWNERS](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md). @@ -203,5 +203,3 @@ Pour plus d'informations sur la contribution à la documentation Kubernetes, voi - [Commencez à contribuer](/docs/contribute/start/) - [Documentation style](/docs/contribute/style/) - - diff --git a/content/fr/docs/reference/kubectl/cheatsheet.md b/content/fr/docs/reference/kubectl/cheatsheet.md index a50eb8f320c..e1d86f7dac9 100644 --- a/content/fr/docs/reference/kubectl/cheatsheet.md +++ b/content/fr/docs/reference/kubectl/cheatsheet.md @@ -36,7 +36,7 @@ Vous pouvez de plus déclarer un alias pour `kubectl` qui fonctionne aussi avec ```bash alias k=kubectl -complete -F __start_kubectl k +complete -o default -F __start_kubectl k ``` ### ZSH diff --git a/content/fr/docs/setup/custom-cloud/kubespray.md b/content/fr/docs/setup/custom-cloud/kubespray.md index cde3cbb3f92..4a9709818d5 100644 --- a/content/fr/docs/setup/custom-cloud/kubespray.md +++ b/content/fr/docs/setup/custom-cloud/kubespray.md @@ -15,8 +15,8 @@ Kubespray se base sur des outils de provisioning, des [paramètres](https://gith * Le support des 
principales distributions Linux: * Container Linux de CoreOS * Debian Jessie, Stretch, Wheezy - * Ubuntu 16.04, 18.04 - * CentOS/RHEL 7 + * Ubuntu 16.04, 18.04, 20.04, 22.04 + * CentOS/RHEL 7, 8 * Fedora/CentOS Atomic * openSUSE Leap 42.3/Tumbleweed * des tests d'intégration continue @@ -33,8 +33,8 @@ Afin de choisir l'outil le mieux adapté à votre besoin, veuillez lire [cette c Les serveurs doivent être installés en s'assurant des éléments suivants: -* **Ansible v2.6 (ou version plus récente) et python-netaddr installés sur la machine qui exécutera les commandes Ansible** -* **Jinja 2.9 (ou version plus récente) est nécessaire pour exécuter les playbooks Ansible** +* **Ansible v2.11 (ou version plus récente) et python-netaddr installés sur la machine qui exécutera les commandes Ansible** +* **Jinja 2.11 (ou version plus récente) est nécessaire pour exécuter les playbooks Ansible** * Les serveurs cibles doivent avoir **accès à Internet** afin de télécharger les images Docker. Autrement, une configuration supplémentaire est nécessaire, (se référer à [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment)) * Les serveurs cibles doivent être configurés afin d'autoriser le transfert IPv4 (**IPv4 forwarding**) * **Votre clé ssh doit être copiée** sur tous les serveurs faisant partie de votre inventaire Ansible. diff --git a/content/fr/docs/setup/release/building-from-source.md b/content/fr/docs/setup/release/building-from-source.md index b8a8e46bca0..81b2328a1a5 100644 --- a/content/fr/docs/setup/release/building-from-source.md +++ b/content/fr/docs/setup/release/building-from-source.md @@ -29,6 +29,6 @@ cd kubernetes make release ``` -Pour plus de détails sur le processus de release, voir le repertoire [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) dans kubernetes/kubernetes. 
+Pour plus de détails sur le processus de release, voir le repertoire [`build`](http://releases.k8s.io/master/build/) dans kubernetes/kubernetes. diff --git a/content/fr/docs/tasks/tools/install-kubectl.md b/content/fr/docs/tasks/tools/install-kubectl.md index 2d57145edd2..651b224987a 100644 --- a/content/fr/docs/tasks/tools/install-kubectl.md +++ b/content/fr/docs/tasks/tools/install-kubectl.md @@ -363,7 +363,7 @@ Vous devez maintenant vérifier que le script de completion de kubectl est bien ```shell echo 'alias k=kubectl' >>~/.bashrc - echo 'complete -F __start_kubectl k' >>~/.bashrc + echo 'complete -o default -F __start_kubectl k' >>~/.bashrc ``` {{< note >}} @@ -431,7 +431,7 @@ Si vous n'avez pas installé via Homebrew, vous devez maintenant vous assurer qu ```shell echo 'alias k=kubectl' >>~/.bashrc - echo 'complete -F __start_kubectl k' >>~/.bashrc + echo 'complete -o default -F __start_kubectl k' >>~/.bashrc ``` Si vous avez installé kubectl avec Homebrew (comme expliqué [ici](#installer-avec-homebrew-sur-macos)), alors le script de complétion a été automatiquement installé dans `/usr/local/etc/bash_completion.d/kubectl`. Dans ce cas, vous n'avez rien à faire. diff --git a/content/id/docs/concepts/cluster-administration/addons.md b/content/id/docs/concepts/cluster-administration/addons.md index 5d44364314d..a39668ee247 100644 --- a/content/id/docs/concepts/cluster-administration/addons.md +++ b/content/id/docs/concepts/cluster-administration/addons.md @@ -22,15 +22,15 @@ Laman ini akan menjabarkan beberapa *add-ons* yang tersedia serta tautan instruk * [ACI](https://www.github.com/noironetworks/aci-containers) menyediakan integrasi jaringan kontainer dan keamanan jaringan dengan Cisco ACI. * [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) merupakan penyedia jaringan L3 yang aman dan *policy* jaringan. 
-* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) menggabungkan Flannel dan Calico, menyediakan jaringan serta *policy* jaringan. +* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) menggabungkan Flannel dan Calico, menyediakan jaringan serta *policy* jaringan. * [Cilium](https://github.com/cilium/cilium) merupakan *plugin* jaringan L3 dan *policy* jaringan yang dapat menjalankan *policy* HTTP/API/L7 secara transparan. Mendukung mode *routing* maupun *overlay/encapsulation*. -* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) memungkinkan Kubernetes agar dapat terkoneksi dengan beragam *plugin* CNI, seperti Calico, Canal, Flannel, Romana, atau Weave dengan mulus. +* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) memungkinkan Kubernetes agar dapat terkoneksi dengan beragam *plugin* CNI, seperti Calico, Canal, Flannel, Romana, atau Weave dengan mulus. * [Contiv](http://contiv.github.io) menyediakan jaringan yang dapat dikonfigurasi (*native* L3 menggunakan BGP, *overlay* menggunakan vxlan, klasik L2, dan Cisco-SDN/ACI) untuk berbagai penggunaan serta *policy framework* yang kaya dan beragam. Proyek Contiv merupakan proyek [open source](http://github.com/contiv). Laman [instalasi](http://github.com/contiv/install) ini akan menjabarkan cara instalasi, baik untuk klaster dengan kubeadm maupun non-kubeadm. * [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), yang berbasis dari [Tungsten Fabric](https://tungsten.io), merupakan sebuah proyek *open source* yang menyediakan virtualisasi jaringan *multi-cloud* serta platform manajemen *policy*. Contrail dan Tungsten Fabric terintegrasi dengan sistem orkestrasi lainnya seperti Kubernetes, OpenShift, OpenStack dan Mesos, serta menyediakan mode isolasi untuk mesin virtual (VM), kontainer/pod dan *bare metal*. 
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) merupakan penyedia jaringan *overlay* yang dapat digunakan pada Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) merupakan solusi jaringan yang mendukung multipel jaringan pada Kubernetes. -* Multus merupakan sebuah multi *plugin* agar Kubernetes mendukung multipel jaringan secara bersamaan sehingga dapat menggunakan semua *plugin* CNI (contoh: Calico, Cilium, Contiv, Flannel), ditambah pula dengan SRIOV, DPDK, OVS-DPDK dan VPP pada *workload* Kubernetes. -* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) menyediakan integrasi antara VMware NSX-T dan orkestrator kontainer seperti Kubernetes, termasuk juga integrasi antara NSX-T dan platform CaaS/PaaS berbasis kontainer seperti *Pivotal Container Service* (PKS) dan OpenShift. +* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) merupakan sebuah multi *plugin* agar Kubernetes mendukung multipel jaringan secara bersamaan sehingga dapat menggunakan semua *plugin* CNI (contoh: Calico, Cilium, Contiv, Flannel), ditambah pula dengan SRIOV, DPDK, OVS-DPDK dan VPP pada *workload* Kubernetes. +* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) menyediakan integrasi antara VMware NSX-T dan orkestrator kontainer seperti Kubernetes, termasuk juga integrasi antara NSX-T dan platform CaaS/PaaS berbasis kontainer seperti *Pivotal Container Service* (PKS) dan OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) merupakan platform SDN yang menyediakan *policy-based* jaringan antara Kubernetes Pods dan non-Kubernetes *environment* dengan *monitoring* visibilitas dan keamanan. * [Romana](http://romana.io) merupakan solusi jaringan *Layer* 3 untuk jaringan pod yang juga mendukung [*NetworkPolicy* API](/id/docs/concepts/services-networking/network-policies/). 
Instalasi Kubeadm *add-on* ini tersedia [di sini](https://github.com/romana/romana/tree/master/containerize). * [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) menyediakan jaringan serta *policy* jaringan, yang akan membawa kedua sisi dari partisi jaringan, serta tidak membutuhkan basis data eksternal. diff --git a/content/it/docs/concepts/cluster-administration/addons.md b/content/it/docs/concepts/cluster-administration/addons.md index aed6cdf7561..782ce4ecfa4 100644 --- a/content/it/docs/concepts/cluster-administration/addons.md +++ b/content/it/docs/concepts/cluster-administration/addons.md @@ -22,14 +22,14 @@ I componenti aggiuntivi in ogni sezione sono ordinati alfabeticamente - l'ordine * [ACI](https://www.github.com/noironetworks/aci-containers) fornisce funzionalità integrate di networking e sicurezza di rete con Cisco ACI. * [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) è un provider di sicurezza e rete L3 sicuro. -* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unisce Flannel e Calico, fornendo i criteri di rete e di rete. +* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unisce Flannel e Calico, fornendo i criteri di rete e di rete. * [Cilium](https://github.com/cilium/cilium) è un plug-in di criteri di rete e di rete L3 in grado di applicare in modo trasparente le politiche HTTP / API / L7. Sono supportate entrambe le modalità di routing e overlay / incapsulamento. -* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) consente a Kubernetes di connettersi senza problemi a una scelta di plugin CNI, come Calico, Canal, Flannel, Romana o Weave. +* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) consente a Kubernetes di connettersi senza problemi a una scelta di plugin CNI, come Calico, Canal, Flannel, Romana o Weave. 
* [Contiv](https://contivpp.io/) offre networking configurabile (L3 nativo con BGP, overlay con vxlan, L2 classico e Cisco-SDN / ACI) per vari casi d'uso e un ricco framework di policy. Il progetto Contiv è completamente [open source](http://github.com/contiv). Il [programma di installazione](http://github.com/contiv/install) fornisce sia opzioni di installazione basate su kubeadm che non su Kubeadm. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) è un provider di reti sovrapposte che può essere utilizzato con Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) è una soluzione di rete che supporta più reti in Kubernetes. -* Multus è un multi-plugin per il supporto di più reti in Kubernetes per supportare tutti i plugin CNI (es. Calico, Cilium, Contiv, Flannel), oltre a SRIOV, DPDK, OVS-DPDK e carichi di lavoro basati su VPP in Kubernetes. -* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) fornisce l'integrazione tra VMware NSX-T e orchestratori di contenitori come Kubernetes, oltre all'integrazione tra NSX-T e piattaforme CaaS / PaaS basate su container come Pivotal Container Service (PKS) e OpenShift. +* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) è un multi-plugin per il supporto di più reti in Kubernetes per supportare tutti i plugin CNI (es. Calico, Cilium, Contiv, Flannel), oltre a SRIOV, DPDK, OVS-DPDK e carichi di lavoro basati su VPP in Kubernetes. +* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) fornisce l'integrazione tra VMware NSX-T e orchestratori di contenitori come Kubernetes, oltre all'integrazione tra NSX-T e piattaforme CaaS / PaaS basate su container come Pivotal Container Service (PKS) e OpenShift. 
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1/docs/kubernetes-1-installation.rst) è una piattaforma SDN che fornisce una rete basata su policy tra i pod di Kubernetes e non Kubernetes con visibilità e monitoraggio della sicurezza.
* [Romana](https://github.com/romana/romana) è una soluzione di rete Layer 3 per pod network che supporta anche [API NetworkPolicy](/docs/concepts/services-networking/network-policies/). Dettagli di installazione del componente aggiuntivo di Kubeadm disponibili [qui](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) fornisce i criteri di rete e di rete, continuerà a funzionare su entrambi i lati di una partizione di rete e non richiede un database esterno.
diff --git a/content/ja/docs/concepts/cluster-administration/addons.md b/content/ja/docs/concepts/cluster-administration/addons.md
index 70619ab3c1b..f1740c56c86 100644
--- a/content/ja/docs/concepts/cluster-administration/addons.md
+++ b/content/ja/docs/concepts/cluster-administration/addons.md
@@ -17,19 +17,20 @@ content_type: concept
* [ACI](https://www.github.com/noironetworks/aci-containers)は、統合されたコンテナネットワークとネットワークセキュリティをCisco ACIを使用して提供します。
* [Antrea](https://antrea.io/)は、L3またはL4で動作して、Open vSwitchをネットワークデータプレーンとして活用する、Kubernetes向けのネットワークとセキュリティサービスを提供します。
-* [Calico](https://docs.projectcalico.org/latest/introduction/)はネットワークとネットワークプリシーのプロバイダーです。Calicoは、BGPを使用または未使用の非オーバーレイおよびオーバーレイネットワークを含む、フレキシブルなさまざまなネットワークオプションをサポートします。Calicoはホスト、Pod、そして(IstioとEnvoyを使用している場合には)サービスメッシュ上のアプリケーションに対してネットワークポリシーを強制するために、同一のエンジンを使用します。
-* [Canal](https://github.com/tigera/canal/tree/master/k8s-install)はFlannelとCalicoをあわせたもので、ネットワークとネットワークポリシーを提供します。
+* [Calico](https://docs.projectcalico.org/latest/introduction/)はネットワークとネットワークポリシーのプロバイダーです。Calicoは、BGPを使用または未使用の非オーバーレイおよびオーバーレイネットワークを含む、フレキシブルなさまざまなネットワークオプションをサポートします。Calicoはホスト、Pod、そして(IstioとEnvoyを使用している場合には)サービスメッシュ上のアプリケーションに対してネットワークポリシーを強制するために、同一のエンジンを使用します。
+* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel)はFlannelとCalicoをあわせたもので、ネットワークとネットワークポリシーを提供します。
* [Cilium](https://github.com/cilium/cilium)は、L3のネットワークとネットワークポリシーのプラグインで、HTTP/API/L7のポリシーを透過的に強制できます。ルーティングとoverlay/encapsulationモードの両方をサポートしており、他のCNIプラグイン上で機能できます。
-* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie)は、KubernetesをCalico、Canal、Flannel、Romana、Weaveなど選択したCNIプラグインをシームレスに接続できるようにするプラグインです。
+* [CNI-Genie](https://github.com/cni-genie/CNI-Genie)は、KubernetesをCalico、Canal、Flannel、Weaveなど選択したCNIプラグインをシームレスに接続できるようにするプラグインです。
+* [Contiv](https://contivpp.io/)は、さまざまなユースケースと豊富なポリシーフレームワーク向けに設定可能なネットワーク(BGPを使用したネイティブのL3、vxlanを使用したオーバーレイ、古典的なL2、Cisco-SDN/ACI)を提供します。Contivプロジェクトは完全に[オープンソース](https://github.com/contiv)です。[インストーラー](https://github.com/contiv/install)はkubeadmとkubeadm以外の両方をベースとしたインストールオプションがあります。
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)は、[Tungsten Fabric](https://tungsten.io)をベースにしている、オープンソースでマルチクラウドに対応したネットワーク仮想化およびポリシー管理プラットフォームです。ContrailおよびTungsten Fabricは、Kubernetes、OpenShift、OpenStack、Mesosなどのオーケストレーションシステムと統合されており、仮想マシン、コンテナ/Pod、ベアメタルのワークロードに隔離モードを提供します。
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually)は、Kubernetesで使用できるオーバーレイネットワークプロバイダーです。
* [Knitter](https://github.com/ZTE/Knitter/)は、1つのKubernetes Podで複数のネットワークインターフェイスをサポートするためのプラグインです。
-* Multus は、すべてのCNIプラグイン(たとえば、Calico、Cilium、Contiv、Flannel)に加えて、SRIOV、DPDK、OVS-DPDK、VPPをベースとするKubernetes上のワークロードをサポートする、複数のネットワークサポートのためのマルチプラグインです。
-* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/)は、Open vSwitch(OVS)プロジェクトから生まれた仮想ネットワーク実装である[OVN(Open Virtual
-Network)](https://github.com/ovn-org/ovn/)をベースとする、Kubernetesのためのネットワークプロバイダです。OVN-Kubernetesは、OVSベースのロードバランサーおよびネットワークポリシーの実装を含む、Kubernetes向けのオーバーレイベースのネットワーク実装を提供します。
+* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni)は、すべてのCNIプラグイン(たとえば、Calico、Cilium、Contiv、Flannel)に加えて、SRIOV、DPDK、OVS-DPDK、VPPをベースとするKubernetes上のワークロードをサポートする、複数のネットワークサポートのためのマルチプラグインです。
+* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/)は、Open vSwitch(OVS)プロジェクトから生まれた仮想ネットワーク実装である[OVN(Open Virtual Network)](https://github.com/ovn-org/ovn/)をベースとする、Kubernetesのためのネットワークプロバイダーです。OVN-Kubernetesは、OVSベースのロードバランサーおよびネットワークポリシーの実装を含む、Kubernetes向けのオーバーレイベースのネットワーク実装を提供します。
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin)は、クラウドネイティブベースのService function chaining(SFC)、Multiple OVNオーバーレイネットワーク、動的なサブネットの作成、動的な仮想ネットワークの作成、VLANプロバイダーネットワーク、Directプロバイダーネットワークを提供し、他のMulti-networkプラグインと付け替え可能なOVNベースのCNIコントローラープラグインです。
-* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in(NCP)は、VMware NSX-TとKubernetesなどのコンテナオーケストレーター間のインテグレーションを提供します。また、NSX-Tと、Pivotal Container Service(PKS)とOpenShiftなどのコンテナベースのCaaS/PaaSプラットフォームとのインテグレーションも提供します。
+* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in(NCP)は、VMware NSX-TとKubernetesなどのコンテナオーケストレーター間のインテグレーションを提供します。また、NSX-Tと、Pivotal Container Service(PKS)とOpenShiftなどのコンテナベースのCaaS/PaaSプラットフォームとのインテグレーションも提供します。
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst)は、Kubernetes Podと非Kubernetes環境間で可視化とセキュリティモニタリングを使用してポリシーベースのネットワークを提供するSDNプラットフォームです。
-* [Romana](https://github.com/romana/romana)は、[NetworkPolicy API](/ja/docs/concepts/services-networking/network-policies/)もサポートするPodネットワーク向けのL3のネットワークソリューションです。Kubeadmアドオンのインストールの詳細は[こちら](https://github.com/romana/romana/tree/master/containerize)で確認できます。
+* [Romana](https://github.com/romana)は、[NetworkPolicy](/ja/docs/concepts/services-networking/network-policies/) APIもサポートするPodネットワーク向けのL3のネットワークソリューションです。
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)は、ネットワークパーティションの両面で機能し、外部データベースを必要とせずに、ネットワークとネットワークポリシーを提供します。

## サービスディスカバリ

@@ -44,6 +45,7 @@ content_type: concept

## インフラストラクチャ

* [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation)は仮想マシンをKubernetes上で実行するためのアドオンです。通常、ベアメタルのクラスタで実行します。
+* [node problem detector](https://github.com/kubernetes/node-problem-detector)はLinuxノード上で動作し、システムの問題を[Event](/docs/reference/kubernetes-api/cluster-resources/event-v1/)または[ノードのCondition](/ja/docs/concepts/architecture/nodes/#condition)として報告します。

## レガシーなアドオン

diff --git a/content/ja/docs/concepts/configuration/secret.md b/content/ja/docs/concepts/configuration/secret.md
index ca8b3126593..3a2dd6ce431 100644
--- a/content/ja/docs/concepts/configuration/secret.md
+++ b/content/ja/docs/concepts/configuration/secret.md
@@ -277,7 +277,7 @@ kubectl create secret tls my-tls-secret \
Bootstrap token Secretは、Secretの`type`を`bootstrap.kubernetes.io/token`に明示的に指定することで作成できます。このタイプのSecretは、ノードのブートストラッププロセス中に使用されるトークン用に設計されています。よく知られているConfigMapに署名するために使用されるトークンを格納します。
-Bootstrap toke Secretは通常、`kube-system`namespaceで作成され`bootstrap-token-<token-id>`の形式で名前が付けられます。ここで`<token-id>`はトークンIDの6文字の文字列です。
+Bootstrap token Secretは通常、`kube-system`namespaceで作成され`bootstrap-token-<token-id>`の形式で名前が付けられます。ここで`<token-id>`はトークンIDの6文字の文字列です。
Kubernetesマニフェストとして、Bootstrap token Secretは次のようになります。
diff --git a/content/ja/docs/reference/using-api/_index.md b/content/ja/docs/reference/using-api/_index.md
index 2dff67b71fc..4cf5b25a953 100644
--- a/content/ja/docs/reference/using-api/_index.md
+++ b/content/ja/docs/reference/using-api/_index.md
@@ -30,7 +30,7 @@ JSONとProtobufなどのシリアル化スキーマの変更については同じガイドラインに従います。
以下の説明は、両方のフォーマットをカバーしています。

APIのバージョニングとソフトウェアのバージョニングは間接的に関係しています。
-[API and release versioning
-proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md)は、APIバージョニングとソフトウェアバージョニングの関係を説明しています。
+[API and release versioning proposal](https://git.k8s.io/sig-release/release-engineering/versioning.md)は、APIバージョニングとソフトウェアバージョニングの関係を説明しています。

APIのバージョンが異なると、安定性やサポートのレベルも異なります。
各レベルの基準については、[API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)で詳しく説明しています。
diff --git a/content/ja/docs/setup/production-environment/tools/kubespray.md b/content/ja/docs/setup/production-environment/tools/kubespray.md
index 4fe5c6e978b..9d6497c0547 100644
--- a/content/ja/docs/setup/production-environment/tools/kubespray.md
+++ b/content/ja/docs/setup/production-environment/tools/kubespray.md
@@ -6,19 +6,20 @@
weight: 30

-This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Packet (bare metal), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
+This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).

Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks.
Kubespray provides:

* a highly available cluster
* composable attributes
* support for most popular Linux distributions
-  * Container Linux by CoreOS
+  * Ubuntu 16.04, 18.04, 20.04, 22.04
+  * CentOS/RHEL/Oracle Linux 7, 8
  * Debian Buster, Jessie, Stretch, Wheezy
-  * Ubuntu 16.04, 18.04
-  * CentOS/RHEL/Oracle Linux 7
-  * Fedora 28
+  * Fedora 34, 35
+  * Fedora CoreOS
  * openSUSE Leap 15
+  * Flatcar Container Linux by Kinvolk
* continuous integration tests

To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
@@ -34,13 +35,13 @@ To choose a tool which best fits your use case, read [this comparison](https://g

Provision servers with the following [requirements](https://github.com/kubernetes-sigs/kubespray#requirements):

-* **Ansible v2.7.8 and python-netaddr is installed on the machine that will run Ansible commands**
-* **Jinja 2.9 (or newer) is required to run the Ansible Playbooks**
+* **Ansible v2.11 and python-netaddr are installed on the machine that will run Ansible commands**
+* **Jinja 2.11 (or newer) is required to run the Ansible Playbooks**
* The target servers must have access to the Internet in order to pull docker images. Otherwise, additional configuration is required ([See Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* The target servers are configured to allow **IPv4 forwarding**
-* **Your ssh key must be copied** to all the servers part of your inventory
-* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall
-* If kubespray is ran from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified
+* **Your ssh key must be copied** to all the servers in your inventory
+* **Firewalls are not managed by kubespray**. You'll need to implement appropriate rules as needed. You should disable your firewall in order to avoid any issues during deployment
+* If kubespray is run from a non-root user account, correct privilege escalation method should be configured in the target servers and the `ansible_become` flag or command parameters `--become` or `-b` should be specified

Kubespray provides the following utilities to help provision your environment:

diff --git a/content/ja/docs/setup/release/version-skew-policy.md b/content/ja/docs/setup/release/version-skew-policy.md
index 3e58503462e..bd1875aa419 100644
--- a/content/ja/docs/setup/release/version-skew-policy.md
+++ b/content/ja/docs/setup/release/version-skew-policy.md
@@ -12,7 +12,7 @@ weight: 30

## サポートされるバージョン {#supported-versions}

-Kubernetesのバージョンは**x.y.z**の形式で表現され、**x**はメジャーバージョン、**y**はマイナーバージョン、**z**はパッチバージョンを指します。これは[セマンティック バージョニング](https://semver.org/)に従っています。詳細は、[Kubernetesのリリースバージョニング](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning)を参照してください。
+Kubernetesのバージョンは**x.y.z**の形式で表現され、**x**はメジャーバージョン、**y**はマイナーバージョン、**z**はパッチバージョンを指します。これは[セマンティック バージョニング](https://semver.org/)に従っています。詳細は、[Kubernetesのリリースバージョニング](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning)を参照してください。

Kubernetesプロジェクトでは、最新の3つのマイナーリリースについてリリースブランチを管理しています ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}})。

diff --git a/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md b/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md
index d5405651871..13e137f3b5b 100644
--- a/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md
+++ b/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md
@@ -3,6 +3,13 @@
layout: blog
title: "당황하지 마세요. 쿠버네티스와 도커"
date: 2020-12-02
slug: dont-panic-kubernetes-and-docker
+evergreen: true
+---
+
+**업데이트:** _쿠버네티스의 `dockershim`을 통한 도커 지원이 제거되었습니다.
+더 자세한 정보는 [제거와 관련된 자주 묻는 질문](/dockershim/)을 참고하세요.
+또는 지원 중단에 대한 [GitHub 이슈](https://github.com/kubernetes/kubernetes/issues/106917)에서 논의를 할 수도 있습니다._
+
---

**저자:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
@@ -10,8 +17,7 @@ slug: dont-panic-kubernetes-and-docker

**번역:** 박재화(삼성SDS), 손석호(한국전자통신연구원)

쿠버네티스는 v1.20 이후 컨테이너 런타임으로서
-[도커를
-사용 중단(deprecating)](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)합니다.
+[도커를 사용 중단(deprecating)](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)합니다.

**당황할 필요는 없습니다. 말처럼 극적이진 않습니다.**

@@ -26,7 +32,7 @@
빌드하는 데 유용한 도구이며, `docker build` 실행 결과로 만들어진 이미지도 여전히
쿠버네티스 클러스터에서 동작합니다.

-GKE, EKS 또는 AKS([containerd가 기본](https://github.com/Azure/AKS/releases/tag/2020-11-16)인)와 같은 관리형 쿠버네티스 서비스를
+AKS, EKS 또는 GKE와 같은 관리형 쿠버네티스 서비스를
사용하는 경우 쿠버네티스의 향후 버전에서 도커에 대한 지원이 없어지기 전에,
워커 노드가 지원되는 컨테이너 런타임을 사용하고 있는지 확인해야 합니다.
노드에 사용자 정의가 적용된 경우 사용자 환경 및 런타임 요구 사항에 따라 업데이트가 필요할 수도
@@ -35,8 +41,8 @@
자체 클러스터를 운영하는 경우에도, 클러스터의 고장을 피하기 위해서 변경을
수행해야 합니다. v1.20에서는 도커에 대한 지원 중단 경고(deprecation warning)가 표시됩니다.
-도커 런타임 지원이 쿠버네티스의 향후 릴리스(현재는 2021년 하반기의
-1.22 릴리스로 계획됨)에서 제거되면 더 이상 지원되지
+도커 런타임 지원이 쿠버네티스의 향후 릴리스(현재는 2021년 하반기의
+1.22 릴리스로 계획됨)에서 제거되면 더 이상 지원되지
않으며, containerd 또는 CRI-O와 같은 다른 호환 컨테이너 런타임 중 하나로
전환해야 합니다. 선택한 런타임이 현재 사용 중인 도커 데몬 구성(예: 로깅)을
지원하는지 확인하세요.
@@ -103,4 +109,4 @@ containerd가 정말 필요로 하는 것들을 확보하기 위해서 도커심
모든 사람이 다가오는 변경 사항에 대해 최대한 많은 교육을 받을 수 있도록 하는 것입니다.
이 글이 여러분이 가지는 대부분의 질문에 대한 답이 되었고, 불안을 약간은 진정시켰기를 바랍니다! ❤️

-더 많은 답변을 찾고 계신가요? 함께 제공되는 [도커심 사용 중단 FAQ](/blog/2020/12/02/dockershim-faq/)를 확인하세요.
+더 많은 답변을 찾고 계신가요? 함께 제공되는 [도커심 제거 FAQ](/blog/2022/02/17/dockershim-faq/)(2022년 2월에 갱신됨)를 확인하세요.
diff --git a/content/ko/blog/_posts/2021-08-04-kubernetes-release-1.22.md b/content/ko/blog/_posts/2021-08-04-kubernetes-release-1.22.md
index d6df46ac55d..c67cc47ea11 100644
--- a/content/ko/blog/_posts/2021-08-04-kubernetes-release-1.22.md
+++ b/content/ko/blog/_posts/2021-08-04-kubernetes-release-1.22.md
@@ -3,6 +3,7 @@
layout: blog
title: '쿠버네티스 1.22: 새로운 정점에 도달(Reaching New Peaks)'
date: 2021-08-04
slug: kubernetes-1-22-release-announcement
+evergreen: true
---

**저자:** [쿠버네티스 1.22 릴리스 팀](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.22/release-team.md)
@@ -50,7 +51,7 @@ SIG Windows는 계속해서 성장하는 개발자 커뮤니티를 지원하기

### 기본(default) seccomp 프로파일

-알파 기능인 기본 seccomp 프로파일이 신규 커맨드라인 플래그 및 설정과 함께 kubelet에 추가되었습니다. 이 신규 기능을 사용하면, `Unconfined`대신 `RuntimeDefault` seccomp 프로파일을 기본으로 사용하는 seccomp이 클러스터 전반에서 기본이 됩니다. 이는 쿠버네티스 디플로이먼트(Deployment)의 기본 보안을 강화합니다. 워크로드에 대한 보안이 기본으로 더 강화되었으므로, 이제 보안 관리자도 조금 더 안심하고 쉴 수 있습니다. 이 기능에 대한 자세한 사항은 공식적인 [seccomp 튜토리얼](https://kubernetes.io/docs/tutorials/clusters/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads)을 참고하시기 바랍니다.
+알파 기능인 기본 seccomp 프로파일이 신규 커맨드라인 플래그 및 설정과 함께 kubelet에 추가되었습니다. 이 신규 기능을 사용하면, `Unconfined`대신 `RuntimeDefault` seccomp 프로파일을 기본으로 사용하는 seccomp이 클러스터 전반에서 기본이 됩니다. 이는 쿠버네티스 디플로이먼트(Deployment)의 기본 보안을 강화합니다. 워크로드에 대한 보안이 기본으로 더 강화되었으므로, 이제 보안 관리자도 조금 더 안심하고 쉴 수 있습니다. 이 기능에 대한 자세한 사항은 공식적인 [seccomp 튜토리얼](/docs/tutorials/security/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads)을 참고하시기 바랍니다.
### kubeadm을 통한 보안성이 더 높은 컨트롤 플레인

diff --git a/content/ko/community/_index.html b/content/ko/community/_index.html
index 30ede236bc7..867916e2083 100644
--- a/content/ko/community/_index.html
+++ b/content/ko/community/_index.html
@@ -1,257 +1,183 @@
---
-title: 커뮤니티
+title: Community
layout: basic
cid: community
+community_styles_migrated: true
---
-쿠버네티스 컨퍼런스 갤러리
-쿠버네티스 컨퍼런스 갤러리
+사용자, 기여자, 그리고 우리가 함께 구축한 문화를 통해 구성된 쿠버네티스 커뮤니티는
+본 오픈소스 프로젝트가 급부상하는 가장 큰 이유 중 하나입니다.
+프로젝트 자체가 성장하고 변화함에 따라
+우리의 문화와 가치관 또한 지속적으로 성장하고 변화하고 있습니다.
+우리 모두는 프로젝트와 작업 방식을 지속적으로 개선하기 위해 함께 노력합니다.
+우리는 이슈(issue)와 풀 리퀘스트(pull request)를 제출하고, SIG 미팅과 쿠버네티스 모임 그리고 KubeCon에 참석하고,
+도입(adoption)과 혁신(innovation)을 지지하며,
+kubectl get pods 를 실행하고,
+다른 수천가지 중요한 방법으로 기여하는 사람들 입니다.
+어떻게 하면 이 놀라운 공동체의 일부가 될 수 있는지 계속 읽어보세요.
-사용자, 기여자, 그리고 우리가 함께 구축한 문화를 통해 구성된 쿠버네티스 커뮤니티는 본 오픈소스 프로젝트가 급부상하는 가장 큰 이유 중 하나입니다. 프로젝트 자체가 성장하고 변화함에 따라 우리의 문화와 가치관 또한 지속적으로 성장하고 변화하고 있습니다. 우리 모두는 프로젝트와 작업 방식을 지속적으로 개선하기 위해 함께 노력합니다.
-우리는 이슈(issue)와 풀 리퀘스트(pull request)를 제출하고, SIG 미팅과 쿠버네티스 모임 그리고 KubeCon에 참석하고, 도입(adoption)과 혁신(innovation)을 지지하며, kubectl get pods 를 실행하고, 다른 수천가지 중요한 방법으로 기여하는 사람들 입니다. 어떻게 하면 이 놀라운 공동체의 일부가 될 수 있는지 계속 읽어보세요.
-기여자 커뮤니티
-커뮤니티 가치
-행동 강령
-비디오
-토론
-이벤트와 모임들
-새소식
-릴리즈
-쿠버네티스 컨퍼런스 갤러리