diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 22c93329763..7e7bbdef6b5 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -138,6 +138,7 @@ aliases: - Sea-n - tanjunchen - tengqm + - windsonsea - xichengliudui sig-docs-zh-reviews: # PR reviews for Chinese content - chenrui333 diff --git a/README.md b/README.md index cbc8617dff8..2c70f76b8c5 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,7 @@ This repository contains the assets required to build the [Kubernetes website an ## Using this repository -You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website. +You can run the website locally using [Hugo (Extended version)](https://gohugo.io/), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website. ## Prerequisites @@ -70,7 +70,7 @@ This will start the local Hugo server on port 1313. Open up your browser to . +The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, also known as OpenAPI specification, using . To update the reference pages for a new Kubernetes release follow these steps: diff --git a/content/de/_index.html b/content/de/_index.html index b1bc57459cd..ab7427938fa 100644 --- a/content/de/_index.html +++ b/content/de/_index.html @@ -42,12 +42,12 @@ Kubernetes ist Open Source und bietet Dir die Freiheit, die Infrastruktur vor Or

- Besuchen die KubeCon North America vom 24. bis 28. Oktober 2022 + Besuche die KubeCon Europe vom 18. bis 21. April 2023



- Besuche die KubeCon Europe vom 17. bis 21. April 2023 + Besuche die KubeCon North America vom 6. bis 9. November 2023
diff --git a/content/de/blog/_posts/2023-02-06-k8s-gcr-io-freeze-announcement.md b/content/de/blog/_posts/2023-02-06-k8s-gcr-io-freeze-announcement.md
new file mode 100644
index 00000000000..298809c1276
--- /dev/null
+++ b/content/de/blog/_posts/2023-02-06-k8s-gcr-io-freeze-announcement.md
@@ -0,0 +1,48 @@
+---
+layout: blog
+title: "k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023"
+date: 2023-03-10
+slug: k8s-gcr-io-freeze-announcement
+---
+
+**Authors**: Michael Mueller (Giant Swarm)
+
+Das Kubernetes-Projekt betreibt eine zur Community gehörende Container-Image-Registry namens `registry.k8s.io`, um die zum Projekt gehörenden Container-Images zu hosten. Am 3. April 2023 wird die Container-Image-Registry `k8s.gcr.io` eingefroren und es werden keine weiteren Container-Images für Kubernetes und Teilprojekte in die alte Registry gepusht.
+
+Die Container-Image-Registry `registry.k8s.io` ist bereits seit einigen Monaten verfügbar und wird die alte Registry ersetzen. Wir haben einen [Blogbeitrag](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/) über die Vorteile für die Community und das Kubernetes-Projekt veröffentlicht. In diesem Beitrag wurde auch angekündigt, dass zukünftige Versionen von Kubernetes nicht mehr in der alten Registry veröffentlicht werden.
+
+Was bedeutet dies für Contributors:
+- Wenn Du ein Maintainer eines Teilprojekts bist, musst du die Manifeste und Helm-Charts entsprechend anpassen, um die neue Container-Registry zu verwenden.
+
+Was bedeutet diese Änderung für Endanwender:
+- Das Kubernetes Release 1.27 wird nicht auf der alten Registry veröffentlicht.
+- Patchreleases für 1.24, 1.25 und 1.26 werden ab April nicht mehr in der alten Container-Image-Registry veröffentlicht. Bitte beachte den untenstehenden Zeitplan für die Details zu Patchreleases in der alten Container-Registry.
+- Beginnend mit dem Release 1.25 wurde die Standardeinstellung der Container-Image-Registry auf `registry.k8s.io` geändert. Diese Einstellung kann in `kubeadm` und dem `kubelet` abgeändert werden; sollte der Wert jedoch auf `k8s.gcr.io` gesetzt werden, wird dies für neue Releases ab April fehlschlagen, da diese nicht in die alte Container-Image-Registry gepusht werden.
+- Solltest Du die Zuverlässigkeit der Cluster erhöhen und Abhängigkeiten von der zur Community gehörenden Container-Image-Registry auflösen wollen, oder betreibst Du Cluster in einer Umgebung mit eingeschränktem externem Netzwerkzugriff, solltest Du in Betracht ziehen, eine lokale Container-Image-Registry als Mirror zu betreiben. Einige Cloud-Anbieter haben hierfür entsprechende Angebote.
+
+## Zeitplan der Änderungen
+
+- `k8s.gcr.io` wird zum 3. April 2023 eingefroren
+- Das 1.27 Release wird zum 12. April 2023 erwartet
+- Das letzte 1.23 Release auf `k8s.gcr.io` wird 1.23.18 sein (1.23 wird end-of-life vor dem Einfrieren erreichen)
+- Das letzte 1.24 Release auf `k8s.gcr.io` wird 1.24.12 sein
+- Das letzte 1.25 Release auf `k8s.gcr.io` wird 1.25.8 sein
+- Das letzte 1.26 Release auf `k8s.gcr.io` wird 1.26.3 sein
+
+## Was geschieht nun
+
+Bitte stelle sicher, dass die Cluster keine Abhängigkeiten zu der alten Container-Image-Registry haben. Dies kann zum Beispiel folgendermaßen überprüft werden: Durch Ausführung des folgenden Kommandos erhält man eine Liste der Container-Images, die ein Pod verwendet:
+
+```shell
+kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
+tr -s '[[:space:]]' '\n' |\
+sort |\
+uniq -c
+```
+
+Es können durchaus weitere Abhängigkeiten zu der alten Container-Image-Registry bestehen, stelle also sicher, dass du alle möglichen Abhängigkeiten überprüfst, um die Cluster funktional und auf dem neuesten Stand zu halten.
+## Acknowledgments
+
+__Change is hard__, die Weiterentwicklung unserer Container-Image-Registry ist notwendig, um eine nachhaltige Zukunft für das Projekt zu gewährleisten. Wir bemühen uns, Dinge für alle, die Kubernetes nutzen, zu verbessern. Viele Mitwirkende aus allen Ecken unserer Community haben lange und hart daran gearbeitet, sicherzustellen, dass wir die bestmöglichen Entscheidungen treffen, Pläne umsetzen und unser Bestes tun, um diese Pläne zu kommunizieren.
+
+Dank geht an Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine, Davanum Srinivas, Mahamed Ali, und Tim Hockin von SIG K8s Infra, Brian McQueen, und Sergey Kanzhelev von SIG Node, Lubomir Ivanov von SIG Cluster Lifecycle, Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, und Stephen Augustus von SIG Release, Bob Killen und Kaslin Fields von SIG Contribex, Tim Allclair vom Security Response Committee. Also a big thank you to our friends acting as liaisons with our cloud provider partners: Jay Pipes von Amazon und Jon Johnson Jr. von Google.
\ No newline at end of file
diff --git a/content/de/docs/setup/_index.md b/content/de/docs/setup/_index.md
index 2203fcc19cf..b1c8bbb9b78 100644
--- a/content/de/docs/setup/_index.md
+++ b/content/de/docs/setup/_index.md
@@ -34,7 +34,7 @@ Benutzen Sie eine Docker-basierende Lösung, wenn Sie Kubernetes erlernen wollen
| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) |
| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)|
| | [k3s](https://k3s.io)|
-
+{{< /table >}}
## Produktionsumgebung
@@ -98,5 +98,6 @@ Die folgende Tabelle für Produktionsumgebungs-Lösungen listet Anbieter und der
| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | |
| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) |
| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | |
+{{< /table >}}
diff --git a/content/de/docs/setup/minikube.md b/content/de/docs/setup/minikube.md
index 35de917a657..ec078c60042 100644
--- a/content/de/docs/setup/minikube.md
+++ b/content/de/docs/setup/minikube.md
@@ -52,7 +52,7 @@ Creating machine...
Starting local Kubernetes cluster...
``` ```shell -kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10 +kubectl create deployment hello-minikube --image=registry.k8s.io/echoserver:1.10 ``` ``` deployment.apps/hello-minikube created diff --git a/content/de/docs/tasks/service-catalog/_index.md b/content/de/docs/tasks/service-catalog/_index.md deleted file mode 100644 index 0f63c1df824..00000000000 --- a/content/de/docs/tasks/service-catalog/_index.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "Service Catalog installieren" -weight: 150 ---- - diff --git a/content/de/docs/tutorials/hello-minikube.md b/content/de/docs/tutorials/hello-minikube.md index 2137c5fdd9f..ba54846cb64 100644 --- a/content/de/docs/tutorials/hello-minikube.md +++ b/content/de/docs/tutorials/hello-minikube.md @@ -77,7 +77,7 @@ Deployments sind die empfohlene Methode zum Verwalten der Erstellung und Skalier Der Pod führt einen Container basierend auf dem bereitgestellten Docker-Image aus. ```shell - kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 + kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4 ``` 2. Anzeigen des Deployments: diff --git a/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md b/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md index 7ce41060762..9fc3f4702df 100644 --- a/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md +++ b/content/en/blog/_posts/2018-12-04-kubeadm-ga-release.md @@ -33,7 +33,7 @@ General Availability means different things for different projects. For kubeadm, We now consider kubeadm to have achieved GA-level maturity in each of these important domains: * **Stable command-line UX** --- The kubeadm CLI conforms to [#5a GA rule of the Kubernetes Deprecation Policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli), which states that a command or flag that exists in a GA version must be kept for at least 12 months after deprecation. - * **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/wgs/0014-20180707-componentconfig-api-types-to-staging.md) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/). + * **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/wgs/115-componentconfig) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/). * **Configuration file schema** --- With the new **v1beta1** API version, you can now tune almost every part of the cluster declaratively and thus build a "GitOps" flow around kubeadm-built clusters. In future versions, we plan to graduate the API to version **v1** with minimal changes (and perhaps none). * **The "toolbox" interface of kubeadm** --- Also known as **phases**. 
If you don't want to perform all [`kubeadm init`](/docs/reference/setup-tools/kubeadm/kubeadm-init/) tasks, you can instead apply more fine-grained actions using the `kubeadm init phase` command (for example generating certificates or control plane [Static Pod](/docs/tasks/administer-cluster/static-pod/) manifests). * **Upgrades between minor versions** --- The [`kubeadm upgrade`](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) command is now fully GA. It handles control plane upgrades for you, which includes upgrades to [etcd](https://etcd.io), the [API Server](/docs/reference/using-api/api-overview/), the [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/), and the [Scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). You can seamlessly upgrade your cluster between minor or patch versions (e.g. v1.12.2 -> v1.13.1 or v1.13.1 -> v1.13.3). diff --git a/content/en/blog/_posts/2021-04-20-annotating-k8s-for-humans.md b/content/en/blog/_posts/2021-04-20-annotating-k8s-for-humans.md index 155ff5a3b31..f4820fced31 100644 --- a/content/en/blog/_posts/2021-04-20-annotating-k8s-for-humans.md +++ b/content/en/blog/_posts/2021-04-20-annotating-k8s-for-humans.md @@ -83,6 +83,7 @@ Adopting a common convention for annotations ensures consistency and understanda | `a8r.io/uptime` | Link to external uptime dashboard. | | `a8r.io/performance` | Link to external performance dashboard. | | `a8r.io/dependencies` | Unstructured text describing the service dependencies for humans. | +{{< /table >}} ## Visualizing annotations: Service Catalogs diff --git a/content/en/blog/_posts/2022-11-28-registry-k8s-io-change.md b/content/en/blog/_posts/2022-11-28-registry-k8s-io-change.md index 604c6e738ea..64f2580bd9a 100644 --- a/content/en/blog/_posts/2022-11-28-registry-k8s-io-change.md +++ b/content/en/blog/_posts/2022-11-28-registry-k8s-io-change.md @@ -11,7 +11,7 @@ Starting with Kubernetes 1.25, our container image registry has changed from k8s ## TL;DR: What you need to know about this change -* Container images for Kubernetes releases from 1.25 onward are no longer published to k8s.gcr.io, only to registry.k8s.io. +* Container images for Kubernetes releases from 1.25 1.27 onward are not published to k8s.gcr.io, only to registry.k8s.io. * In the upcoming December patch releases, the new registry domain default will be backported to all branches still in support (1.22, 1.23, 1.24). * If you run in a restricted environment and apply strict domain/IP address access policies limited to k8s.gcr.io, the __image pulls will not function__ after the migration to this new registry. For these users, the recommended method is to mirror the release images to a private registry. @@ -68,8 +68,15 @@ The image used by kubelet for the pod sandbox (`pause`) can be overridden by set kubelet --pod-infra-container-image=k8s.gcr.io/pause:3.5 ``` +## Legacy container registry freeze {#registry-freeze} + +[k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/) announces the freeze of the +legacy k8s.gcr.io image registry. Read that article for more details. + ## Acknowledgments __Change is hard__, and evolving our image-serving platform is needed to ensure a sustainable future for the project. We strive to make things better for everyone using Kubernetes. 
Many contributors from all corners of our community have been working long and hard to ensure we are making the best decisions possible, executing plans, and doing our best to communicate those plans. Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine, Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra, Brian McQueen, and Sergey Kanzhelev from SIG Node, Lubomir Ivanov from SIG Cluster Lifecycle, Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from SIG Release, Bob Killen and Kaslin Fields from SIG Contribex, Tim Allclair from the Security Response Committee. Also a big thank you to our friends acting as liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson Jr. from Google. + +_This article was updated on the 28th of February 2023._ diff --git a/content/en/blog/_posts/2022-12-05-forensic-container-checkpointing/index.md b/content/en/blog/_posts/2022-12-05-forensic-container-checkpointing/index.md index 14293556a43..9cd3832e5f4 100644 --- a/content/en/blog/_posts/2022-12-05-forensic-container-checkpointing/index.md +++ b/content/en/blog/_posts/2022-12-05-forensic-container-checkpointing/index.md @@ -207,3 +207,11 @@ and without losing the state of the containers in that Pod. You can reach SIG Node by several means: - Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node) - [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node) + +## Further reading + +Please see the follow-up article [Forensic container +analysis][forensic-container-analysis] for details on how a container checkpoint +can be analyzed. + +[forensic-container-analysis]: /blog/2023/03/10/forensic-container-analysis/ diff --git a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md index 433fac80e40..8ffcd99cfdd 100644 --- a/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md +++ b/content/en/blog/_posts/2023-01-20-Security-Bahavior-Analysis/index.md @@ -8,39 +8,99 @@ slug: security-behavior-analysis **Author:** David Hadas (IBM Research Labs) -_This post warns Devops from a false sense of security. Following security best practices when developing and configuring microservices do not result in non-vulnerable microservices. The post shows that although all deployed microservices are vulnerable, there is much that can be done to ensure microservices are not exploited. It explains how analyzing the behavior of clients and services from a security standpoint, named here **"Security-Behavior Analytics"**, can protect the deployed vulnerable microservices. It points to [Guard](http://knative.dev/security-guard), an open source project offering security-behavior monitoring and control of Kubernetes microservices presumed vulnerable._ +_This post warns Devops from a false sense of security. Following security +best practices when developing and configuring microservices do not result +in non-vulnerable microservices. The post shows that although all deployed +microservices are vulnerable, there is much that can be done to ensure +microservices are not exploited. It explains how analyzing the behavior of +clients and services from a security standpoint, named here +**"Security-Behavior Analytics"**, can protect the deployed vulnerable microservices. 
+It points to [Guard](http://knative.dev/security-guard), an open source project offering +security-behavior monitoring and control of Kubernetes microservices presumed vulnerable._ -As cyber attacks continue to intensify in sophistication, organizations deploying cloud services continue to grow their cyber investments aiming to produce safe and non-vulnerable services. However, the year-by-year growth in cyber investments does not result in a parallel reduction in cyber incidents. Instead, the number of cyber incidents continues to grow annually. Evidently, organizations are doomed to fail in this struggle - no matter how much effort is made to detect and remove cyber weaknesses from deployed services, it seems offenders always have the upper hand. +As cyber attacks continue to intensify in sophistication, organizations deploying +cloud services continue to grow their cyber investments aiming to produce safe and +non-vulnerable services. However, the year-by-year growth in cyber investments does +not result in a parallel reduction in cyber incidents. Instead, the number of cyber +incidents continues to grow annually. Evidently, organizations are doomed to fail in +this struggle - no matter how much effort is made to detect and remove cyber weaknesses +from deployed services, it seems offenders always have the upper hand. -Considering the current spread of offensive tools, sophistication of offensive players, and ever-growing cyber financial gains to offenders, any cyber strategy that relies on constructing a non-vulnerable, weakness-free service in 2023 is clearly too naïve. It seems the only viable strategy is to: +Considering the current spread of offensive tools, sophistication of offensive players, +and ever-growing cyber financial gains to offenders, any cyber strategy that relies on +constructing a non-vulnerable, weakness-free service in 2023 is clearly too naïve. +It seems the only viable strategy is to: ➥ **Admit that your services are vulnerable!** -In other words, consciously accept that you will never create completely invulnerable services. If your opponents find even a single weakness as an entry-point, you lose! Admitting that in spite of your best efforts, all your services are still vulnerable is an important first step. Next, this post discusses what you can do about it... +In other words, consciously accept that you will never create completely invulnerable +services. If your opponents find even a single weakness as an entry-point, you lose! +Admitting that in spite of your best efforts, all your services are still vulnerable +is an important first step. Next, this post discusses what you can do about it... ## How to protect microservices from being exploited -Being vulnerable does not necessarily mean that your service will be exploited. Though your services are vulnerable in some ways unknown to you, offenders still need to identify these vulnerabilities and then exploit them. If offenders fail to exploit your service vulnerabilities, you win! In other words, having a vulnerability that can’t be exploited, represents a risk that can’t be realized. +Being vulnerable does not necessarily mean that your service will be exploited. +Though your services are vulnerable in some ways unknown to you, offenders still +need to identify these vulnerabilities and then exploit them. If offenders fail +to exploit your service vulnerabilities, you win! In other words, having a +vulnerability that can’t be exploited, represents a risk that can’t be realized. 
{{< figure src="security_behavior_figure_1.svg" alt="Image of an example of offender gaining foothold in a service" class="diagram-large" caption="Figure 1. An Offender gaining foothold in a vulnerable service" >}} -The above diagram shows an example in which the offender does not yet have a foothold in the service; that is, it is assumed that your service does not run code controlled by the offender on day 1. In our example the service has vulnerabilities in the API exposed to clients. To gain an initial foothold the offender uses a malicious client to try and exploit one of the service API vulnerabilities. The malicious client sends an exploit that triggers some unplanned behavior of the service. +The above diagram shows an example in which the offender does not yet have a +foothold in the service; that is, it is assumed that your service does not run +code controlled by the offender on day 1. In our example the service has +vulnerabilities in the API exposed to clients. To gain an initial foothold the +offender uses a malicious client to try and exploit one of the service API +vulnerabilities. The malicious client sends an exploit that triggers some +unplanned behavior of the service. -More specifically, let’s assume the service is vulnerable to an SQL injection. The developer failed to sanitize the user input properly, thereby allowing clients to send values that would change the intended behavior. In our example, if a client sends a query string with key “username” and value of _“tom or 1=1”_, the client will receive the data of all users. Exploiting this vulnerability requires the client to send an irregular string as the value. Note that benign users will not be sending a string with spaces or with the equal sign character as a username, instead they will normally send legal usernames which for example may be defined as a short sequence of characters a-z. No legal username can trigger service unplanned behavior. +More specifically, let’s assume the service is vulnerable to an SQL injection. +The developer failed to sanitize the user input properly, thereby allowing clients +to send values that would change the intended behavior. In our example, if a client +sends a query string with key “username” and value of _“tom or 1=1”_, the client will +receive the data of all users. Exploiting this vulnerability requires the client to +send an irregular string as the value. Note that benign users will not be sending a +string with spaces or with the equal sign character as a username, instead they will +normally send legal usernames which for example may be defined as a short sequence of +characters a-z. No legal username can trigger service unplanned behavior. -In this simple example, one can already identify several opportunities to detect and block an attempt to exploit the vulnerability (un)intentionally left behind by the developer, making the vulnerability unexploitable. First, the malicious client behavior differs from the behavior of benign clients, as it sends irregular requests. If such a change in behavior is detected and blocked, the exploit will never reach the service. Second, the service behavior in response to the exploit differs from the service behavior in response to a regular request. 
Such behavior may include making subsequent irregular calls to other services such as a data store, taking irregular time to respond, and/or responding to the malicious client with an irregular response (for example, containing much more data than normally sent in case of benign clients making regular requests). Service behavioral changes, if detected, will also allow blocking the exploit in different stages of the exploitation attempt. +In this simple example, one can already identify several opportunities to detect and +block an attempt to exploit the vulnerability (un)intentionally left behind by the +developer, making the vulnerability unexploitable. First, the malicious client behavior +differs from the behavior of benign clients, as it sends irregular requests. If such a +change in behavior is detected and blocked, the exploit will never reach the service. +Second, the service behavior in response to the exploit differs from the service behavior +in response to a regular request. Such behavior may include making subsequent irregular +calls to other services such as a data store, taking irregular time to respond, and/or +responding to the malicious client with an irregular response (for example, containing +much more data than normally sent in case of benign clients making regular requests). +Service behavioral changes, if detected, will also allow blocking the exploit in +different stages of the exploitation attempt. More generally: -- Monitoring the behavior of clients can help detect and block exploits against service API vulnerabilities. In fact, deploying efficient client behavior monitoring makes many vulnerabilities unexploitable and others very hard to achieve. To succeed, the offender needs to create an exploit undetectable from regular requests. +- Monitoring the behavior of clients can help detect and block exploits against + service API vulnerabilities. In fact, deploying efficient client behavior + monitoring makes many vulnerabilities unexploitable and others very hard to achieve. + To succeed, the offender needs to create an exploit undetectable from regular requests. -- Monitoring the behavior of services can help detect services as they are being exploited regardless of the attack vector used. Efficient service behavior monitoring limits what an attacker may be able to achieve as the offender needs to ensure the service behavior is undetectable from regular service behavior. +- Monitoring the behavior of services can help detect services as they are being + exploited regardless of the attack vector used. Efficient service behavior + monitoring limits what an attacker may be able to achieve as the offender needs + to ensure the service behavior is undetectable from regular service behavior. -Combining both approaches may add a protection layer to the deployed vulnerable services, drastically decreasing the probability for anyone to successfully exploit any of the deployed vulnerable services. Next, let us identify four use cases where you need to use security-behavior monitoring. +Combining both approaches may add a protection layer to the deployed vulnerable services, +drastically decreasing the probability for anyone to successfully exploit any of the +deployed vulnerable services. Next, let us identify four use cases where you need to +use security-behavior monitoring. ## Use cases -One can identify the following four different stages in the life of any service from a security standpoint. 
In each stage, security-behavior monitoring is required to meet different challenges: +One can identify the following four different stages in the life of any service +from a security standpoint. In each stage, security-behavior monitoring is required +to meet different challenges: Service State | Use case | What do you need in order to cope with this use case? ------------- | ------------- | ----------------------------------------- @@ -53,25 +113,57 @@ Fortunately, microservice architecture is well suited to security-behavior monit ## Security-Behavior of microservices versus monoliths {#microservices-vs-monoliths} -Kubernetes is often used to support workloads designed with microservice architecture. By design, microservices aim to follow the UNIX philosophy of "Do One Thing And Do It Well". Each microservice has a bounded context and a clear interface. In other words, you can expect the microservice clients to send relatively regular requests and the microservice to present a relatively regular behavior as a response to these requests. Consequently, a microservice architecture is an excellent candidate for security-behavior monitoring. +Kubernetes is often used to support workloads designed with microservice architecture. +By design, microservices aim to follow the UNIX philosophy of "Do One Thing And Do It Well". +Each microservice has a bounded context and a clear interface. In other words, you can expect +the microservice clients to send relatively regular requests and the microservice to present +a relatively regular behavior as a response to these requests. Consequently, a microservice +architecture is an excellent candidate for security-behavior monitoring. {{< figure src="security_behavior_figure_2.svg" alt="Image showing why microservices are well suited for security-behavior monitoring" class="diagram-large" caption="Figure 2. Microservices are well suited for security-behavior monitoring" >}} -The diagram above clarifies how dividing a monolithic service to a set of microservices improves our ability to perform security-behavior monitoring and control. In a monolithic service approach, different client requests are intertwined, resulting in a diminished ability to identify irregular client behaviors. Without prior knowledge, an observer of the intertwined client requests will find it hard to distinguish between types of requests and their related characteristics. Further, internal client requests are not exposed to the observer. Lastly, the aggregated behavior of the monolithic service is a compound of the many different internal behaviors of its components, making it hard to identify irregular service behavior. +The diagram above clarifies how dividing a monolithic service to a set of +microservices improves our ability to perform security-behavior monitoring +and control. In a monolithic service approach, different client requests are +intertwined, resulting in a diminished ability to identify irregular client +behaviors. Without prior knowledge, an observer of the intertwined client +requests will find it hard to distinguish between types of requests and their +related characteristics. Further, internal client requests are not exposed to +the observer. Lastly, the aggregated behavior of the monolithic service is a +compound of the many different internal behaviors of its components, making +it hard to identify irregular service behavior. 
-In a microservice environment, each microservice is expected by design to offer a more well-defined service and serve better defined type of requests. This makes it easier for an observer to identify irregular client behavior and irregular service behavior. Further, a microservice design exposes the internal requests and internal services which offer more security-behavior data to identify irregularities by an observer. Overall, this makes the microservice design pattern better suited for security-behavior monitoring and control. +In a microservice environment, each microservice is expected by design to offer +a more well-defined service and serve better defined type of requests. This makes +it easier for an observer to identify irregular client behavior and irregular +service behavior. Further, a microservice design exposes the internal requests +and internal services which offer more security-behavior data to identify +irregularities by an observer. Overall, this makes the microservice design +pattern better suited for security-behavior monitoring and control. ## Security-Behavior monitoring on Kubernetes -Kubernetes deployments seeking to add Security-Behavior may use [Guard](http://knative.dev/security-guard), developed under the CNCF project Knative. Guard is integrated into the full Knative automation suite that runs on top of Kubernetes. Alternatively, **you can deploy Guard as a standalone tool** to protect any HTTP-based workload on Kubernetes. +Kubernetes deployments seeking to add Security-Behavior may use +[Guard](http://knative.dev/security-guard), developed under the CNCF project Knative. +Guard is integrated into the full Knative automation suite that runs on top of Kubernetes. +Alternatively, **you can deploy Guard as a standalone tool** to protect any HTTP-based workload on Kubernetes. See: -- [Guard](https://github.com/knative-sandbox/security-guard) on Github, for using Guard as a standalone tool. -- The Knative automation suite - Read about Knative, in the blog post [Opinionated Kubernetes](https://davidhadas.wordpress.com/2022/08/29/knative-an-opinionated-kubernetes) which describes how Knative simplifies and unifies the way web services are deployed on Kubernetes. -- You may contact Guard maintainers on the [SIG Security](https://kubernetes.slack.com/archives/C019LFTGNQ3) Slack channel or on the Knative community [security](https://knative.slack.com/archives/CBYV1E0TG) Slack channel. The Knative community channel will move soon to the [CNCF Slack](https://communityinviter.com/apps/cloud-native/cncf) under the name `#knative-security`. +- [Guard](https://github.com/knative-sandbox/security-guard) on Github, + for using Guard as a standalone tool. +- The Knative automation suite - Read about Knative, in the blog post + [Opinionated Kubernetes](https://davidhadas.wordpress.com/2022/08/29/knative-an-opinionated-kubernetes) + which describes how Knative simplifies and unifies the way web services are deployed on Kubernetes. +- You may contact Guard maintainers on the + [SIG Security](https://kubernetes.slack.com/archives/C019LFTGNQ3) Slack channel + or on the Knative community [security](https://knative.slack.com/archives/CBYV1E0TG) + Slack channel. The Knative community channel will move soon to the + [CNCF Slack](https://communityinviter.com/apps/cloud-native/cncf) under the name `#knative-security`. -The goal of this post is to invite the Kubernetes community to action and introduce Security-Behavior monitoring and control to help secure Kubernetes based deployments. 
Hopefully, the community as a follow up will:
+The goal of this post is to invite the Kubernetes community to action and introduce
+Security-Behavior monitoring and control to help secure Kubernetes based deployments.
+Hopefully, the community as a follow up will:
1. Analyze the cyber challenges presented for different Kubernetes use cases
1. Add appropriate security documentation for users on how to introduce Security-Behavior monitoring and control.
diff --git a/content/en/blog/_posts/2023-03-10-forensic-container-analysis/index.md b/content/en/blog/_posts/2023-03-10-forensic-container-analysis/index.md
new file mode 100644
index 00000000000..7edff1196a2
--- /dev/null
+++ b/content/en/blog/_posts/2023-03-10-forensic-container-analysis/index.md
@@ -0,0 +1,373 @@
+---
+layout: blog
+title: "Forensic container analysis"
+date: 2023-03-10
+slug: forensic-container-analysis
+---
+
+**Authors:** Adrian Reber (Red Hat)
+
+In my previous article, [Forensic container checkpointing in
+Kubernetes][forensic-blog], I introduced checkpointing in Kubernetes,
+how it has to be set up, and how it can be used. The name of the
+feature is Forensic container checkpointing, but I did not go into
+any details about how to do the actual analysis of the checkpoint created by
+Kubernetes. In this article I want to provide details on how the
+checkpoint can be analyzed.
+
+Checkpointing is still an alpha feature in Kubernetes and this article
+aims to provide a preview of how the feature might work in the future.
+
+## Preparation
+
+Details about how to configure Kubernetes and the underlying CRI implementation
+to enable checkpointing support can be found in my [Forensic container
+checkpointing in Kubernetes][forensic-blog] article.
+
+As an example I prepared a container image (`quay.io/adrianreber/counter:blog`)
+which I want to checkpoint and then analyze in this article. This container allows
+me to create files in the container and also store information in memory which
+I later want to find in the checkpoint.
+
+To run that container I need a pod, and for this example I am using the following Pod manifest:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: counters
+spec:
+  containers:
+  - name: counter
+    image: quay.io/adrianreber/counter:blog
+```
+
+This results in a container called `counter` running in a pod called `counters`.
+
+Once the container is running I am performing the following actions with that
+container:
+
+```console
+$ kubectl get pod counters --template '{{.status.podIP}}'
+10.88.0.25
+$ curl 10.88.0.25:8088/create?test-file
+$ curl 10.88.0.25:8088/secret?RANDOM_1432_KEY
+$ curl 10.88.0.25:8088
+```
+
+The first access creates a file called `test-file` with the content `test-file`
+in the container and the second access stores my secret information
+(`RANDOM_1432_KEY`) somewhere in the container's memory. The last access just
+adds an additional line to the internal log file.
+
+The last step before I can analyze the checkpoint is to tell Kubernetes to create
+the checkpoint. As described in the previous article, this requires access to the
+*kubelet*-only `checkpoint` API endpoint.
+
+For a container named *counter* in a pod named *counters* in a namespace named
+*default*, the *kubelet* API endpoint is reachable at:
+
+```shell
+# run this on the node where that Pod is executing
+curl -X POST "https://localhost:10250/checkpoint/default/counters/counter"
+```
+
+For completeness, the following `curl` command-line options are necessary to
+have `curl` accept the *kubelet*'s self-signed certificate and authorize the
+use of the *kubelet* `checkpoint` API:
+
+```shell
+--insecure --cert /var/run/kubernetes/client-admin.crt --key /var/run/kubernetes/client-admin.key
+```
+
+Once the checkpointing has finished, the checkpoint should be available at
+`/var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar`
+
+In the following steps of this article I will use the name `checkpoint.tar`
+when analyzing the checkpoint archive.
+
+## Checkpoint archive analysis using `checkpointctl`
+
+To get some initial information about the checkpointed container I am using the
+tool [checkpointctl][checkpointctl] like this:
+
+```console
+$ checkpointctl show checkpoint.tar --print-stats
++-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
+| CONTAINER | IMAGE                            | ID           | RUNTIME | CREATED             | ENGINE | IP         | CHKPT SIZE | ROOT FS DIFF SIZE |
++-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
+| counter   | quay.io/adrianreber/counter:blog | 059a219a22e5 | runc    | 2023-03-02T06:06:49 | CRI-O  | 10.88.0.23 | 8.6 MiB    | 3.0 KiB           |
++-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
+CRIU dump statistics
++---------------+-------------+--------------+---------------+---------------+---------------+
+| FREEZING TIME | FROZEN TIME | MEMDUMP TIME | MEMWRITE TIME | PAGES SCANNED | PAGES WRITTEN |
++---------------+-------------+--------------+---------------+---------------+---------------+
+| 100809 us     | 119627 us   | 11602 us     | 7379 us       | 7800          | 2198          |
++---------------+-------------+--------------+---------------+---------------+---------------+
+```
+
+This already gives me some information about the checkpoint in that checkpoint
+archive. I can see the name of the container, information about the container
+runtime and container engine. It also lists the size of the checkpoint (`CHKPT
+SIZE`). This is mainly the size of the memory pages included in the checkpoint,
+but there is also information about the size of all changed files in the
+container (`ROOT FS DIFF SIZE`).
+
+The additional parameter `--print-stats` decodes information in the checkpoint
+archive and displays it in the second table (*CRIU dump statistics*). This
+information is collected during checkpoint creation and gives an overview of how much
+time CRIU needed to checkpoint the processes in the container and how many
+memory pages were analyzed and written during checkpoint creation.
+
+## Digging deeper
+
+With the help of `checkpointctl` I am able to get some high-level information
+about the checkpoint archive. To be able to analyze the checkpoint archive
+further I have to extract it. The checkpoint archive is a *tar* archive and can
+be extracted with the help of `tar xf checkpoint.tar`.
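Before unpacking everything, it can also help to take a quick, non-destructive look at what the archive contains. A minimal sketch, assuming the checkpoint archive has already been copied to the current working directory as `checkpoint.tar`:

```shell
# list the archive contents without extracting anything
tar tf checkpoint.tar
```

The listing should show the same files and directories that are described next.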
+
+Extracting the checkpoint archive will result in the following files and directories:
+
+* `bind.mounts` - this file contains information about bind mounts and is needed
+  during restore to mount all external files and directories at the right location
+* `checkpoint/` - this directory contains the actual checkpoint as created by
+  CRIU
+* `config.dump` and `spec.dump` - these files contain metadata about the container
+  which is needed during restore
+* `dump.log` - this file contains the debug output of CRIU created during
+  checkpointing
+* `stats-dump` - this file contains the data which is used by `checkpointctl`
+  to display dump statistics (`--print-stats`)
+* `rootfs-diff.tar` - this file contains all changed files on the container's
+  file-system
+
+### File-system changes - `rootfs-diff.tar`
+
+The first step to analyze the container's checkpoint further is to look at
+the files that have changed in my container. This can be done by looking at the
+file `rootfs-diff.tar`:
+
+```console
+$ tar xvf rootfs-diff.tar
+home/counter/logfile
+home/counter/test-file
+```
+
+Now the files that changed in the container can be studied:
+
+```console
+$ cat home/counter/logfile
+10.88.0.1 - - [02/Mar/2023 06:07:29] "GET /create?test-file HTTP/1.1" 200 -
+10.88.0.1 - - [02/Mar/2023 06:07:40] "GET /secret?RANDOM_1432_KEY HTTP/1.1" 200 -
+10.88.0.1 - - [02/Mar/2023 06:07:43] "GET / HTTP/1.1" 200 -
+$ cat home/counter/test-file
+test-file
+```
+
+Compared to the container image (`quay.io/adrianreber/counter:blog`) this
+container is based on, I can see that the file `logfile` contains information
+about all access to the service the container provides and the file `test-file`
+was created just as expected.
+
+With the help of `rootfs-diff.tar` it is possible to inspect all files that
+were created or changed compared to the base image of the container.
+
+### Analyzing the checkpointed processes - `checkpoint/`
+
+The directory `checkpoint/` contains data created by CRIU while checkpointing
+the processes in the container. The content in the directory `checkpoint/`
+consists of different [image files][image-files] which can be analyzed with the
+help of the tool [CRIT][crit] which is distributed as part of CRIU.
+
+First, let's get an overview of the processes inside of the container:
+
+```console
+$ crit show checkpoint/pstree.img | jq .entries[].pid
+1
+7
+8
+```
+
+This output means that I have three processes inside of the container's PID
+namespace with the PIDs 1, 7, and 8.
+
+This is only the view from the inside of the container's PID namespace. During
+restore exactly these PIDs will be recreated. From the outside of the
+container's PID namespace the PIDs will change after restore.
+
+The next step is to get some additional information about these three processes:
+
+```console
+$ crit show checkpoint/core-1.img | jq .entries[0].tc.comm
+"bash"
+$ crit show checkpoint/core-7.img | jq .entries[0].tc.comm
+"counter.py"
+$ crit show checkpoint/core-8.img | jq .entries[0].tc.comm
+"tee"
+```
+
+This means the three processes in my container are `bash`, `counter.py` (a Python
+interpreter) and `tee`. For details about the parent-child relations of these processes, there
+is more data to be analyzed in `checkpoint/pstree.img`.
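A quick way to view those parent-child relations directly is to read the `ppid` field from the same image. A small sketch, assuming the entries in `pstree.img` follow the usual CRIU pstree image format with `pid` and `ppid` fields:

```shell
# print each checkpointed process together with its parent PID
crit show checkpoint/pstree.img | jq -r '.entries[] | "pid \(.pid) ppid \(.ppid)"'
```

For this container, that would be expected to show PID 1 as the parent of PIDs 7 and 8, matching the process tree shown in the next step.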
+
+Let's compare the information collected so far to the still-running container:
+
+```console
+$ crictl inspect --output go-template --template "{{(index .info.pid)}}" 059a219a22e56
+722520
+$ ps auxf | grep -A 2 722520
+fedora    722520  \_ bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile
+fedora    722541      \_ /usr/bin/python3 /home/counter/counter.py
+fedora    722542      \_ /usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /home/counter/logfile
+$ cat /proc/722520/comm
+bash
+$ cat /proc/722541/comm
+counter.py
+$ cat /proc/722542/comm
+tee
+```
+
+In this output I am first retrieving the PID of the first process in the
+container and then I am looking for that PID and child processes on the system
+where the container is running. I am seeing three processes and the first one is
+"bash", which is PID 1 inside of the container's PID namespace. Then I am looking
+at `/proc/<PID>/comm` and I can find the exact same value
+as in the checkpoint image.
+
+It is important to remember that the checkpoint will contain the view from within the
+container's PID namespace, because that information is important to restore the
+processes.
+
+One last example of what `crit` can tell us about the container is the information
+about the UTS namespace:
+
+```console
+$ crit show checkpoint/utsns-12.img
+{
+  "magic": "UTSNS",
+  "entries": [
+    {
+      "nodename": "counters",
+      "domainname": "(none)"
+    }
+  ]
+}
+```
+
+This tells me that the hostname inside of the UTS namespace is `counters`.
+
+For every resource CRIU collected during checkpointing, the `checkpoint/`
+directory contains corresponding image files which can be analyzed with the help
+of `crit`.
+
+#### Looking at the memory pages
+
+In addition to the information from CRIU that can be decoded with the help
+of CRIT, there are also files containing the raw memory pages written by
+CRIU to disk:
+
+```console
+$ ls checkpoint/pages-*
+checkpoint/pages-1.img  checkpoint/pages-2.img  checkpoint/pages-3.img
+```
+
+When I initially used the container I stored a random key (`RANDOM_1432_KEY`)
+somewhere in memory. Let's see if I can find it:
+
+```console
+$ grep -ao RANDOM_1432_KEY checkpoint/pages-*
+checkpoint/pages-2.img:RANDOM_1432_KEY
+```
+
+And indeed, there is my data. This way I can easily look at the content
+of all memory pages of the processes in the container, but it is also
+important to remember that anyone who can access the checkpoint
+archive has access to all information that was stored in the memory of the
+container's processes.
+
+#### Using gdb for further analysis
+
+Another possibility to look at the checkpoint images is `gdb`. The CRIU repository
+contains the script [coredump][criu-coredump] which can convert a checkpoint
+into a coredump file:
+
+```console
+$ /home/criu/coredump/coredump-python3
+$ ls -al core*
+core.1  core.7  core.8
+```
+
+Running the `coredump-python3` script will convert the checkpoint images into
+one coredump file for each process in the container. Using `gdb` I can also look
+at the details of the processes:
+
+```console
+$ echo info registers | gdb --core checkpoint/core.1 -q
+
+[New LWP 1]
+
+Core was generated by `bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile'.
+
+#0  0x00007fefba110198 in ??
() +(gdb) +rax 0x3d 61 +rbx 0x8 8 +rcx 0x7fefba11019a 140667595587994 +rdx 0x0 0 +rsi 0x7fffed9c1110 140737179816208 +rdi 0xffffffff 4294967295 +rbp 0x1 0x1 +rsp 0x7fffed9c10e8 0x7fffed9c10e8 +r8 0x1 1 +r9 0x0 0 +r10 0x0 0 +r11 0x246 582 +r12 0x0 0 +r13 0x7fffed9c1170 140737179816304 +r14 0x0 0 +r15 0x0 0 +rip 0x7fefba110198 0x7fefba110198 +eflags 0x246 [ PF ZF IF ] +cs 0x33 51 +ss 0x2b 43 +ds 0x0 0 +es 0x0 0 +fs 0x0 0 +gs 0x0 0 +``` + +In this example I can see the value of all registers as they were during +checkpointing and I can also see the complete command-line of my container's PID +1 process: `bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile` + +## Summary + +With the help of container checkpointing, it is possible to create a +checkpoint of a running container without stopping the container and without the +container knowing that it was checkpointed. The result of checkpointing a +container in Kubernetes is a checkpoint archive; using different tools like +`checkpointctl`, `tar`, `crit` and `gdb` the checkpoint can be analyzed. Even +with simple tools like `grep` it is possible to find information in the +checkpoint archive. + +The different examples I have shown in this article how to analyze a checkpoint +are just the starting point. Depending on your requirements it is possible to +look at certain things in much more detail, but this article should give you an +introduction how to start the analysis of your checkpoint. + +## How do I get involved? + +You can reach SIG Node by several means: + +* Slack: [#sig-node][slack-sig-node] +* Slack: [#sig-security][slack-sig-security] +* [Mailing list][sig-node-ml] + +[forensic-blog]: https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/ +[checkpointctl]: https://github.com/checkpoint-restore/checkpointctl +[image-files]: https://criu.org/Images +[crit]: https://criu.org/CRIT +[slack-sig-node]: https://kubernetes.slack.com/messages/sig-node +[slack-sig-security]: https://kubernetes.slack.com/messages/sig-security +[sig-node-ml]: https://groups.google.com/forum/#!forum/kubernetes-sig-node +[criu-coredump]: https://github.com/checkpoint-restore/criu/tree/criu-dev/coredump diff --git a/content/en/blog/_posts/2023-03-10-image-registry-change.md b/content/en/blog/_posts/2023-03-10-image-registry-change.md new file mode 100644 index 00000000000..39a03283ce5 --- /dev/null +++ b/content/en/blog/_posts/2023-03-10-image-registry-change.md @@ -0,0 +1,187 @@ +--- +layout: blog +title: "k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know" +date: 2023-03-10T17:00:00.000Z +slug: image-registry-redirect +--- + +**Authors**: Bob Killen (Google), Davanum Srinivas (AWS), Chris Short (AWS), Frederico Muñoz (SAS +Institute), Tim Bannister (The Scale Factory), Ricky Sadowski (AWS), Grace Nguyen (Expo), Mahamed +Ali (Rackspace Technology), Mars Toktonaliev (independent), Laura Santamaria (Dell), Kat Cosgrove +(Dell) + + +On Monday, March 20th, the k8s.gcr.io registry [will be redirected to the community owned +registry](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/), +**registry.k8s.io** . + + +## TL;DR: What you need to know about this change + +- On Monday, March 20th, traffic from the older k8s.gcr.io registry will be redirected to + registry.k8s.io with the eventual goal of sunsetting k8s.gcr.io. 
+- If you run in a restricted environment, and apply strict domain name or IP address access policies + limited to k8s.gcr.io, **the image pulls will not function** after k8s.gcr.io starts redirecting + to the new registry.  +- A small subset of non-standard clients do not handle HTTP redirects by image registries, and will + need to be pointed directly at registry.k8s.io. +- The redirect is a stopgap to assist users in making the switch. The deprecated k8s.gcr.io registry + will be phased out at some point. **Please update your manifests as soon as possible to point to + registry.k8s.io**. +- If you host your own image registry, you can copy images you need there as well to reduce traffic + to community owned registries. + +If you think you may be impacted, or would like to know more about this change, please keep reading. + +## How can I check if I am impacted? + +To test connectivity to registry.k8s.io and being able to pull images from there, here is a sample +command that can be executed in the namespace of your choosing: + +``` +kubectl run hello-world -ti --rm --image=registry.k8s.io/busybox:latest --restart=Never -- date +``` + +When you run the command above, here’s what to expect when things work correctly: + +``` +$ kubectl run hello-world -ti --rm --image=registry.k8s.io/busybox:latest --restart=Never -- date +Fri Feb 31 07:07:07 UTC 2023 +pod "hello-world" deleted +``` + +## What kind of errors will I see if I’m impacted? + +Errors may depend on what kind of container runtime you are using, and what endpoint you are routed +to, but it should present such as `ErrImagePull`, `ImagePullBackOff`, or a container failing to be +created with the warning `FailedCreatePodSandBox`. + +Below is an example error message showing a proxied deployment failing to pull due to an unknown +certificate: + +``` +FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Head “https://us-west1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8”: x509: certificate signed by unknown authority +``` + +## What images will be impacted? + +**ALL** images on k8s.gcr.io will be impacted by this change. k8s.gcr.io hosts many images beyond +Kubernetes releases. A large number of Kubernetes subprojects host their images there as well. Some +examples include the `dns/k8s-dns-node-cache`, `ingress-nginx/controller`, and +`node-problem-detector/node-problem-detector` images. + +## I am impacted. What should I do? + +For impacted users that run in a restricted environment, the best option is to copy over the +required images to a private registry or configure a pull-through cache in their registry. + +There are several tools to copy images between registries; +[crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_copy.md) is one +of those tools, and images can be copied to a private registry by using `crane copy SRC DST`. There +are also vendor-specific tools, like e.g. Google’s +[gcrane](https://cloud.google.com/container-registry/docs/migrate-external-containers#copy), that +perform a similar function but are streamlined for their platform. + +## How can I find which images are using the legacy registry, and fix them? 
+ +**Option 1**: See the one line kubectl command in our [earlier blog +post](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/#what-s-next): + +``` +kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\ +tr -s '[[:space:]]' '\n' |\ +sort |\ +uniq -c +``` + +**Option 2**: A `kubectl` [krew](https://krew.sigs.k8s.io/) plugin has been developed called +[`community-images`](https://github.com/kubernetes-sigs/community-images#kubectl-community-images), +that will scan and report any images using the k8s.gcr.io endpoint. + +If you have krew installed, you can install it with: + +``` +kubectl krew install community-images +``` + +and generate a report with: + +``` +kubectl community-images +``` + +For alternate methods of install and example output, check out the repo: +[kubernetes-sigs/community-images](https://github.com/kubernetes-sigs/community-images). + +**Option 3**: If you do not have access to a cluster directly, or manage many clusters - the best +way is to run a search over your manifests and charts for _"k8s.gcr.io"_. + +**Option 4**: If you wish to prevent k8s.gcr.io based images from running in your cluster, example +policies for [Gatekeeper](https://open-policy-agent.github.io/gatekeeper-library/website/) and +[Kyverno](https://kyverno.io/) are available in the [AWS EKS Best Practices +repository](https://github.com/aws/aws-eks-best-practices/tree/master/policies/k8s-registry-deprecation) +that will block them from being pulled. You can use these third-party policies with any Kubernetes +cluster. + +**Option 5**: As a **LAST** possible option, you can use a [Mutating +Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks) +to change the image address dynamically. This should only be +considered a stopgap till your manifests have been updated. You can +find a (third party) Mutating Webhook and Kyverno policy in +[k8s-gcr-quickfix](https://github.com/abstractinfrastructure/k8s-gcr-quickfix). + +## Why did Kubernetes change to a different image registry? + +k8s.gcr.io is hosted on a custom [Google Container Registry +(GCR)](https://cloud.google.com/container-registry) domain that was set up solely for the Kubernetes +project. This has worked well since the inception of the project, and we thank Google for providing +these resources, but today, there are other cloud providers and vendors that would like to host +images to provide a better experience for the people on their platforms. In addition to Google’s +[renewed commitment to donate $3 +million](https://www.cncf.io/google-cloud-recommits-3m-to-kubernetes/) to support the project's +infrastructure last year, Amazon Web Services announced a matching donation [during their Kubecon NA +2022 keynote in Detroit](https://youtu.be/PPdimejomWo?t=236). This will provide a better experience +for users (closer servers = faster downloads) and will reduce the egress bandwidth and costs from +GCR at the same time. + +For more details on this change, check out [registry.k8s.io: faster, cheaper and Generally Available +(GA)](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/). + +## Why is a redirect being put in place? + +The project switched to [registry.k8s.io last year with the 1.25 +release](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/); however, most of +the image pull traffic is still directed at the old endpoint k8s.gcr.io. 
This has not been
+sustainable for us as a project, as it is not utilizing the resources that have been donated to the
+project from other providers, and we are in danger of running out of funds due to the cost of
+serving this traffic.
+
+A redirect will enable the project to take advantage of these new resources, significantly reducing
+our egress bandwidth costs. We only expect this change to impact a small subset of users running in
+restricted environments or using very old clients that do not respect redirects properly.
+
+## What will happen to k8s.gcr.io?
+
+Separate from the redirect, k8s.gcr.io will be frozen [and will not be updated with new images
+after April 3rd, 2023](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/). `k8s.gcr.io`
+will not get any new releases, patches, or security updates. It will remain available to
+help people migrate, but it **WILL** be phased out entirely in the future.
+
+## I still have questions, where should I go?
+
+For more information on registry.k8s.io and why it was developed, see [registry.k8s.io: faster,
+cheaper and Generally Available](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/).
+
+If you would like to know more about the image freeze and the last images that will be available
+there, see the blog post: [k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April
+2023](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/).
+
+Information on the architecture of registry.k8s.io and its [request handling decision
+tree](https://github.com/kubernetes/registry.k8s.io/blob/8408d0501a88b3d2531ff54b14eeb0e3c900a4f3/cmd/archeio/docs/request-handling.md)
+can be found in the [kubernetes/registry.k8s.io
+repo](https://github.com/kubernetes/registry.k8s.io).
+
+If you believe you have encountered a bug with the new registry or the redirect, please open an
+issue in the [kubernetes/registry.k8s.io
+repo](https://github.com/kubernetes/registry.k8s.io/issues/new/choose). **Please check if there is an issue already
+open similar to what you are seeing before you create a new issue**.
diff --git a/content/en/blog/_posts/2023-03-17-kubernetes-1.27-deprecations-and-removals.md b/content/en/blog/_posts/2023-03-17-kubernetes-1.27-deprecations-and-removals.md
new file mode 100644
index 00000000000..befcfd4edf8
--- /dev/null
+++ b/content/en/blog/_posts/2023-03-17-kubernetes-1.27-deprecations-and-removals.md
@@ -0,0 +1,235 @@
+---
+layout: blog
+title: "Kubernetes Removals and Major Changes In v1.27"
+date: 2023-03-17T14:00:00+0000
+slug: upcoming-changes-in-kubernetes-v1-27
+---
+
+**Author**: Harshita Sao
+
+As Kubernetes develops and matures, features may be deprecated, removed, or replaced
+with better ones for the project's overall health. Based on the information available
+at this point in the v1.27 release process, which is still ongoing and can introduce
+additional changes, this article identifies and describes some of the planned changes
+for the Kubernetes v1.27 release.
+
+## A note about the k8s.gcr.io redirect to registry.k8s.io
+
+To host its container images, the Kubernetes project uses a community-owned image
+registry called registry.k8s.io. **On March 20th, all traffic from the out-of-date
+[k8s.gcr.io](https://cloud.google.com/container-registry/) registry will be redirected
+to [registry.k8s.io](https://github.com/kubernetes/registry.k8s.io)**. The deprecated
+k8s.gcr.io registry will eventually be phased out.
+
+### What does this change mean?
+
+- If you are a subproject maintainer, you must update your manifests and Helm
+ charts to use the new registry.
+
+- The v1.27 Kubernetes release will not be published to the old registry.
+
+- From April, patch releases for v1.24, v1.25, and v1.26 will no longer be
+ published to the old registry.
+
+We have a [blog post](/blog/2023/03/10/image-registry-redirect/) with all
+the information about this change and what to do if it impacts you.
+
+## The Kubernetes API Removal and Deprecation process
+
+The Kubernetes project has a well-documented
+[deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/)
+for features. This policy states that stable APIs may only be deprecated when
+a newer, stable version of that same API is available and that APIs have a
+minimum lifetime for each stability level. A deprecated API is one that has been marked
+for removal in a future Kubernetes release; it will continue to function until
+removal (at least one year from the deprecation), but usage will result in a
+warning being displayed. Removed APIs are no longer available in the current
+version, at which point you must migrate to using the replacement.
+
+- Generally available (GA) or stable API versions may be marked as deprecated
+ but must not be removed within a major version of Kubernetes.
+
+- Beta or pre-release API versions must be supported for 3 releases after the deprecation.
+
+- Alpha or experimental API versions may be removed in any release without prior deprecation notice.
+
+Whether an API is removed as a result of a feature graduating from beta to stable
+or because that API simply did not succeed, all removals comply with this
+deprecation policy. Whenever an API is removed, migration options are communicated
+in the documentation.
+
+## API removals, and other changes for Kubernetes v1.27
+
+### Removal of `storage.k8s.io/v1beta1` from `CSIStorageCapacity`
+
+The [CSIStorageCapacity](/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/)
+API supports exposing currently available storage capacity via CSIStorageCapacity
+objects and enhances the scheduling of pods that use CSI volumes with late binding.
+The `storage.k8s.io/v1beta1` API version of CSIStorageCapacity was deprecated in v1.24,
+and it will no longer be served in v1.27.
+
+Migrate manifests and API clients to use the `storage.k8s.io/v1` API version,
+available since v1.24. All existing persisted objects are accessible via the new API.
+
+Refer to the
+[Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking)
+for more information.
+
+Kubernetes v1.27 is not removing any other APIs; however, several other aspects are going
+to be removed. Read on for details.
+
+### Support for deprecated seccomp annotations
+
+In Kubernetes v1.19, the
+[seccomp](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/135-seccomp)
+(secure computing mode) support graduated to General Availability (GA).
+This feature can be used to increase workload security by restricting
+the system calls for a Pod (applies to all containers) or for single containers.
+
+Support for the alpha seccomp annotations `seccomp.security.alpha.kubernetes.io/pod`
+and `container.seccomp.security.alpha.kubernetes.io`, deprecated since v1.19, has now
+been completely removed. The seccomp fields are no longer auto-populated when pods
+with seccomp annotations are created.
Pods should use the corresponding pod or container +`securityContext.seccompProfile` field instead. + +### Removal of several feature gates for volume expansion + +The following feature gates for +[volume expansion](https://github.com/kubernetes/enhancements/issues/284) GA features +will be removed and must no longer be referenced in `--feature-gates` flags: + +`ExpandCSIVolumes` +: Enable expanding of CSI volumes. + +`ExpandInUsePersistentVolumes` +: Enable expanding in-use PVCs. + +`ExpandPersistentVolumes` +: Enable expanding of persistent volumes. + +### Removal of `--master-service-namespace` command line argument + +The kube-apiserver accepts a deprecated command line argument, `--master-service-namespace`, +that specified where to create the Service named `kubernetes` to represent the API server. +Kubernetes v1.27 will remove that argument, which has been deprecated since the v1.26 release. + +### Removal of the `ControllerManagerLeaderMigration` feature gate + +[Leader Migration](https://github.com/kubernetes/enhancements/issues/2436) provides +a mechanism in which HA clusters can safely migrate "cloud-specific" controllers +between the `kube-controller-manager` and the `cloud-controller-manager` via a shared +resource lock between the two components while upgrading the replicated control plane. + +The `ControllerManagerLeaderMigration` feature, GA since v1.24, is unconditionally +enabled and for the v1.27 release the feature gate option will be removed. If you're +setting this feature gate explicitly, you'll need to remove that from command line +arguments or configuration files. + +### Removal of `--enable-taint-manager` command line argument + +The kube-controller-manager command line argument `--enable-taint-manager` is +deprecated, and will be removed in Kubernetes v1.27. The feature that it supports, +[taint based eviction](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions), +is already enabled by default and will continue to be implicitly enabled when the flag is removed. + +### Removal of `--pod-eviction-timeout` command line argument + +The deprecated command line argument `--pod-eviction-timeout` will be removed from the +kube-controller-manager. + +### Removal of the `CSI Migration` feature gate + +The [CSI migration](https://github.com/kubernetes/enhancements/issues/625) +programme allows moving from in-tree volume plugins to out-of-tree CSI drivers. +CSI migration is generally available since Kubernetes v1.16, and the associated +`CSIMigration` feature gate will be removed in v1.27. + +### Removal of `CSIInlineVolume` feature gate + +The [CSI Ephemeral Volume](https://github.com/kubernetes/kubernetes/pull/111258) +feature allows CSI volumes to be specified directly in the pod specification for +ephemeral use cases. They can be used to inject arbitrary states, such as +configuration, secrets, identity, variables or similar information, directly +inside pods using a mounted volume. This feature graduated to GA in v1.25. +Hence, the feature gate `CSIInlineVolume` will be removed in the v1.27 release. + +### Removal of `EphemeralContainers` feature gate + +[Ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/) +graduated to GA in v1.25. These are containers with a temporary duration that +executes within namespaces of an existing pod. Ephemeral containers are +typically initiated by a user in order to observe the state of other pods +and containers for troubleshooting and debugging purposes. 
For Kubernetes v1.27,
+API support for ephemeral containers is unconditionally enabled; the
+`EphemeralContainers` feature gate will be removed.
+
+### Removal of `LocalStorageCapacityIsolation` feature gate
+
+The [Local Ephemeral Storage Capacity Isolation](https://github.com/kubernetes/kubernetes/pull/111513)
+feature moved to GA in v1.25. The feature provides support for capacity isolation
+of local ephemeral storage between pods, such as `emptyDir` volumes, so that a pod
+can be hard limited in its consumption of shared resources. The kubelet will
+evict Pods if consumption of local ephemeral storage exceeds the configured limit.
+The feature gate, `LocalStorageCapacityIsolation`, will be removed in the v1.27 release.
+
+### Removal of `NetworkPolicyEndPort` feature gate
+
+The v1.25 release of Kubernetes promoted `endPort` in NetworkPolicy to GA.
+NetworkPolicy providers that support the `endPort` field can use it to
+specify a range of ports to which a NetworkPolicy applies. Previously, each NetworkPolicy
+could only target a single port. Now that the feature is GA, the feature gate `NetworkPolicyEndPort`
+will be removed in this release.
+
+Please be aware that the `endPort` field must be supported by the Network Policy
+provider. If your provider does not support `endPort`, and this field is
+specified in a Network Policy, the Network Policy will be created covering
+only the `port` field (single port).
+
+### Removal of `StatefulSetMinReadySeconds` feature gate
+
+For a pod that is part of a StatefulSet, Kubernetes can mark the Pod ready only
+if the Pod is available (and passing checks) for at least the period you specify in
+[`minReadySeconds`](/docs/concepts/workloads/controllers/statefulset/#minimum-ready-seconds).
+The feature became generally available in Kubernetes v1.25, and the `StatefulSetMinReadySeconds`
+feature gate will be locked to true and removed in the v1.27 release.
+
+### Removal of `IdentifyPodOS` feature gate
+
+You can specify the operating system for a Pod, and the feature support for that
+has been stable since the v1.25 release. The `IdentifyPodOS` feature gate will be
+removed for Kubernetes v1.27.
+
+### Removal of `DaemonSetUpdateSurge` feature gate
+
+The v1.25 release of Kubernetes also stabilized surge support for DaemonSet pods,
+implemented in order to minimize DaemonSet downtime during rollouts.
+The `DaemonSetUpdateSurge` feature gate will be removed in Kubernetes v1.27.
+
+## Looking ahead
+
+The official list of
+[API removals](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-29)
+planned for Kubernetes v1.29 includes:
+
+- The `flowcontrol.apiserver.k8s.io/v1beta2` API version of FlowSchema and
+ PriorityLevelConfiguration will no longer be served in v1.29.
+
+## Want to know more?
+
+Deprecations are announced in the Kubernetes release notes.
You can see the +announcements of pending deprecations in the release notes for: + +- [Kubernetes v1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation) + +- [Kubernetes v1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation) + +- [Kubernetes v1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation) + +- [Kubernetes v1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation) + +We will formally announce the deprecations that come with +[Kubernetes v1.27](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation) +as part of the CHANGELOG for that release. + +For information on the process of deprecation and removal, check out the official Kubernetes +[deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document. diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index 0e691a24865..e149e4b0d0e 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -18,7 +18,7 @@ This page lists some of the available add-ons and links to their respective inst * [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI. * [Antrea](https://antrea.io/) operates at Layer 3/4 to provide networking and security services for Kubernetes, leveraging Open vSwitch as the networking data plane. Antrea is a [CNCF project at the Sandbox level](https://www.cncf.io/projects/antrea/). -* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer. +* [Calico](https://www.tigera.io/project-calico/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer. * [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico, providing networking and network policy. * [Cilium](https://github.com/cilium/cilium) is a networking, observability, and security solution with an eBPF-based data plane. Cilium provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay/encapsulation mode, and can enforce network policies on L3-L7 using an identity-based security model that is decoupled from network addressing. Cilium can act as a replacement for kube-proxy; it also offers additional, opt-in observability and security features. Cilium is a [CNCF project at the Incubation level](https://www.cncf.io/projects/cilium/). * [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave. 
CNI-Genie is a [CNCF project at the Sandbox level](https://www.cncf.io/projects/cni-genie/). diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 3adab67be31..c67880458a3 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -122,7 +122,7 @@ In addition, you can limit consumption of storage resources based on associated | `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. | | `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | | `.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the ``, the sum of storage requests cannot exceed this value. | -| `.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the storage-class-name, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | +| `.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the ``, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. | For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can define a quota as follows: diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index 1ce5ea8e953..67bcf7e4dd1 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -221,7 +221,7 @@ unexpected to them. Use node labels that have a clear correlation to the scheduler profile name. {{< note >}} -The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler), +The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#how-daemon-pods-are-scheduled), does not support scheduling profiles. When the DaemonSet controller creates Pods, the default Kubernetes scheduler places those Pods and honors any `nodeAffinity` rules in the DaemonSet controller. diff --git a/content/en/docs/concepts/services-networking/endpoint-slices.md b/content/en/docs/concepts/services-networking/endpoint-slices.md index 5d833000327..985e9e6c81e 100644 --- a/content/en/docs/concepts/services-networking/endpoint-slices.md +++ b/content/en/docs/concepts/services-networking/endpoint-slices.md @@ -96,10 +96,10 @@ Services will always have the `ready` condition set to `true`. #### Serving -{{< feature-state for_k8s_version="v1.22" state="beta" >}} +{{< feature-state for_k8s_version="v1.26" state="stable" >}} -`serving` is identical to the `ready` condition, except it does not account for terminating states. -Consumers of the EndpointSlice API should check this condition if they care about pod readiness while +The `serving` condition is almost identical to the `ready` condition. The difference is that +consumers of the EndpointSlice API should check the `serving` condition if they care about pod readiness while the pod is also terminating. 
{{< note >}} @@ -235,7 +235,7 @@ at different times. {{< note >}} Clients of the EndpointSlice API must iterate through all the existing EndpointSlices associated to a Service and build a complete list of unique network endpoints. It is -important to mention that endpoints may be duplicated in different EndointSlices. +important to mention that endpoints may be duplicated in different EndpointSlices. You can find a reference implementation for how to perform this endpoint aggregation and deduplication as part of the `EndpointSliceCache` code within `kube-proxy`. diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index bb6b1d37500..d096096ec2a 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -656,7 +656,7 @@ by making the changes that are equivalent to you requesting a Service of `type: NodePort`. The cloud-controller-manager component then configures the external load balancer to forward traffic to that assigned node port. -_As an alpha feature_, you can configure a load balanced Service to +You can configure a load balanced Service to [omit](#load-balancer-nodeport-allocation) assigning a node port, provided that the cloud provider implementation supports this. @@ -1165,7 +1165,7 @@ will be routed to one of the Service endpoints. `externalIPs` are not managed by of the cluster administrator. In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`. -In the example below, "`my-service`" can be accessed by clients on "`80.11.12.10:80`" (`externalIP:port`) +In the example below, "`my-service`" can be accessed by clients on "`198.51.100.32:80`" (`externalIP:port`) ```yaml apiVersion: v1 diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index dd327758e33..4428822b710 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -202,7 +202,7 @@ That is, the CronJob does _not_ update existing Jobs, even if those remain runni A CronJob creates a Job object approximately once per execution time of its schedule. The scheduling is approximate because there are certain circumstances where two Jobs might be created, or no Job might be created. -Kubernetes tries to avoid those situations, but do not completely prevent them. Therefore, +Kubernetes tries to avoid those situations, but does not completely prevent them. Therefore, the Jobs that you define should be _idempotent_. If `startingDeadlineSeconds` is set to a large value or left unset (the default) diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 43f9c70ccf3..9e599d23e71 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -296,7 +296,7 @@ Each probe must define exactly one of these four mechanisms: The target should implement [gRPC health checks](https://grpc.io/grpc/core/md_doc_health-checking.html). The diagnostic is considered successful if the `status` - of the response is `SERVING`. + of the response is `SERVING`. gRPC probes are an alpha feature and are only available if you enable the `GRPCContainerProbe` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/). 
@@ -465,14 +465,32 @@ An example flow: The containers in the Pod receive the TERM signal at different times and in an arbitrary order. If the order of shutdowns matters, consider using a `preStop` hook to synchronize. {{< /note >}} -1. At the same time as the kubelet is starting graceful shutdown, the control plane removes that - shutting-down Pod from EndpointSlice (and Endpoints) objects where these represent +1. At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane evaluates whether to remove that shutting-down Pod from EndpointSlice (and Endpoints) objects, where those objects represent a {{< glossary_tooltip term_id="service" text="Service" >}} with a configured {{< glossary_tooltip text="selector" term_id="selector" >}}. {{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and other workload resources no longer treat the shutting-down Pod as a valid, in-service replica. Pods that shut down slowly - cannot continue to serve traffic as load balancers (like the service proxy) remove the Pod from - the list of endpoints as soon as the termination grace period _begins_. + should not continue to serve regular traffic and should start terminating and finish processing open connections. + Some applications need to go beyond finishing open connections and need more graceful termination - + for example: session draining and completion. Any endpoints that represent the terminating pods + are not immediately removed from EndpointSlices, + and a status indicating [terminating state](/docs/concepts/services-networking/endpoint-slices/#conditions) + is exposed from the EndpointSlice API (and the legacy Endpoints API). Terminating + endpoints always have their `ready` status + as `false` (for backward compatibility with versions before 1.26), + so load balancers will not use it for regular traffic. + If traffic draining on terminating pod is needed, the actual readiness can be checked as a condition `serving`. + You can find more details on how to implement connections draining + in the tutorial [Pods And Endpoints Termination Flow](/docs/tutorials/services/pods-and-endpoint-termination-flow/) + +{{}} +If you don't have the `EndpointSliceTerminatingCondition` feature gate enabled +in your cluster (the gate is on by default from Kubernetes 1.22, and locked to default in 1.26), then the Kubernetes control +plane removes a Pod from any relevant EndpointSlices as soon as the Pod's +termination grace period _begins_. The behavior above is described when the +feature gate `EndpointSliceTerminatingCondition` is enabled. +{{}} + 1. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends `SIGKILL` to any processes still running in any container in the Pod. The kubelet also cleans up a hidden `pause` container if that container runtime uses one. diff --git a/content/en/docs/concepts/workloads/pods/pod-qos.md b/content/en/docs/concepts/workloads/pods/pod-qos.md index b2035c520f4..9b0b10dedad 100644 --- a/content/en/docs/concepts/workloads/pods/pod-qos.md +++ b/content/en/docs/concepts/workloads/pods/pod-qos.md @@ -71,7 +71,7 @@ A Pod is given a QoS class of `Burstable` if: Pods in the `BestEffort` QoS class can use node resources that aren't specifically assigned to Pods in other QoS classes. 
For example, if you have a node with 16 CPU cores available to the -kubelet, and you assign assign 4 CPU cores to a `Guaranteed` Pod, then a Pod in the `BestEffort` +kubelet, and you assign 4 CPU cores to a `Guaranteed` Pod, then a Pod in the `BestEffort` QoS class can try to use any amount of the remaining 12 CPU cores. The kubelet prefers to evict `BestEffort` Pods if the node comes under resource pressure. diff --git a/content/en/docs/concepts/workloads/pods/user-namespaces.md b/content/en/docs/concepts/workloads/pods/user-namespaces.md index 4241104ad4b..0217490aa87 100644 --- a/content/en/docs/concepts/workloads/pods/user-namespaces.md +++ b/content/en/docs/concepts/workloads/pods/user-namespaces.md @@ -10,7 +10,7 @@ min-kubernetes-server-version: v1.25 {{< feature-state for_k8s_version="v1.25" state="alpha" >}} This page explains how user namespaces are used in Kubernetes pods. A user -namespace allows to isolate the user running inside the container from the one +namespace isolates the user running inside the container from the one in the host. A process running as root in a container can run as a different (non-root) user diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 2832e72900c..39f435e1351 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -162,10 +162,10 @@ For an example of adding a label, see the PR for adding the The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in the -[`config.toml`](https://github.com/kubernetes/website/tree/main/config.toml) -file. You'll need to modify `config.toml` to support a new localization. +[`hugo.toml`](https://github.com/kubernetes/website/tree/main/hugo.toml) +file. You'll need to modify `hugo.toml` to support a new localization. -Add a configuration block for the new language to `config.toml` under the +Add a configuration block for the new language to `hugo.toml` under the existing `[languages]` block. The German block, for example, looks like: ```toml diff --git a/content/en/docs/contribute/style/hugo-shortcodes/index.md b/content/en/docs/contribute/style/hugo-shortcodes/index.md index 2d751f73f30..4551e728bb7 100644 --- a/content/en/docs/contribute/style/hugo-shortcodes/index.md +++ b/content/en/docs/contribute/style/hugo-shortcodes/index.md @@ -271,6 +271,33 @@ Renders to: {{< tab name="JSON File" include="podtemplate.json" />}} {{< /tabs >}} +### Source code files + +You can use the `{{}}` shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents of the sample file is generic and reusable, and you want the users to try it out themselves. + +This shortcode takes in two named parameters: `language` and `file`. The mandatory parameter `file` is used to specify the path to the file being displayed. The optional parameter `language` is used to specify the programming language of the file. If the `language` parameter is not provided, the shortcode will attempt to guess the language based on the file extension. + +For example: + +```none +{{}} +``` + +The output is: + +{{< codenew language="yaml" file="application/deployment-scale.yaml" >}} + +When adding a new sample file, such as a YAML file, create the file in one of the `/examples/` subdirectories where `` is the language for the page. 
In the markdown of your page, use the `codenew` shortcode: + +```none +{{/example-yaml>" */>}} +``` +where `` is the path to the sample file to include, relative to the `examples` directory. The following shortcode references a YAML file located at `/content/en/examples/configmap/configmaps.yaml`. + +```none +{{}} +``` + ## Third party content marker Running Kubernetes requires third-party software. For example: you @@ -311,7 +338,7 @@ before the item, or just below the heading for the specific item. To generate a version string for inclusion in the documentation, you can choose from several version shortcodes. Each version shortcode displays a version string derived from -the value of a version parameter found in the site configuration file, `config.toml`. +the value of a version parameter found in the site configuration file, `hugo.toml`. The two most commonly used version parameters are `latest` and `version`. ### `{{}}` diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index dce6eb291ff..c7f020a5fc1 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -631,4 +631,5 @@ These steps ... | These simple steps ... * Learn about [writing a new topic](/docs/contribute/style/write-new-topic/). * Learn about [using page templates](/docs/contribute/style/page-content-types/). +* Learn about [custom hugo shortcodes](/docs/contribute/style/hugo-shortcodes/). * Learn about [creating a pull request](/docs/contribute/new-content/open-a-pr/). diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md index 05db47b7b46..a24535ba0ce 100644 --- a/content/en/docs/reference/_index.md +++ b/content/en/docs/reference/_index.md @@ -89,6 +89,7 @@ operator to use or manage a cluster. * [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/), [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) and [kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/) +* [kube-controller-manager configuration (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/) * [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/) * [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/) * [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) and diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index f58fe099d95..9d1b17796da 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -107,7 +107,7 @@ CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultI {{< note >}} The [`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) admission plugin is enabled -by default, but is only active if you enable the the `ValidatingAdmissionPolicy` +by default, but is only active if you enable the `ValidatingAdmissionPolicy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) **and** the `admissionregistration.k8s.io/v1alpha1` API. 
{{< /note >}} diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md b/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md index a3b704d891e..0244c7703e1 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates-removed.md @@ -333,6 +333,7 @@ In the following table: | `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 | | `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 | | `WindowsRunAsUserName` | `true` | GA | 1.18 | 1.20 | +{{< /table >}} ## Descriptions for removed feature gates diff --git a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md new file mode 100644 index 00000000000..4ec29226a5d --- /dev/null +++ b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md @@ -0,0 +1,1811 @@ +--- +title: kube-controller-manager Configuration (v1alpha1) +content_type: tool-reference +package: controllermanager.config.k8s.io/v1alpha1 +auto_generated: true +--- + + +## Resource Types + + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + + + +## `ControllerLeaderConfiguration` {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration} + + +**Appears in:** + +- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration) + + +

ControllerLeaderConfiguration provides the configuration for a migrating leader lock.

+ + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name is the name of the controller being migrated +E.g. service-controller, route-controller, cloud-node-controller, etc

+
component [Required]
+string +
+

Component is the name of the component in which the controller should be running. +E.g. kube-controller-manager, cloud-controller-manager, etc +Or '*' meaning the controller can be run under any component that participates in the migration

+
+ +## `GenericControllerManagerConfiguration` {#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

GenericControllerManagerConfiguration holds configuration for a generic controller-manager.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
Port [Required]
+int32 +
+

port is the port that the controller-manager's http service runs on.

+
Address [Required]
+string +
+

address is the IP address to serve on (set to 0.0.0.0 for all interfaces).

+
MinResyncPeriod [Required]
+meta/v1.Duration +
+

minResyncPeriod is the resync period in reflectors; will be random between +minResyncPeriod and 2*minResyncPeriod.

+
ClientConnection [Required]
+ClientConnectionConfiguration +
+

ClientConnection specifies the kubeconfig file and client connection +settings for the proxy server to use when communicating with the apiserver.

+
ControllerStartInterval [Required]
+meta/v1.Duration +
+

How long to wait between starting controller managers

+
LeaderElection [Required]
+LeaderElectionConfiguration +
+

leaderElection defines the configuration of leader election client.

+
Controllers [Required]
+[]string +
+

Controllers is the list of controllers to enable or disable +'*' means "all enabled by default controllers" +'foo' means "enable 'foo'" +'-foo' means "disable 'foo'" +first item for a particular name wins

+
Debugging [Required]
+DebuggingConfiguration +
+

DebuggingConfiguration holds configuration for Debugging related features.

+
LeaderMigrationEnabled [Required]
+bool +
+

LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.

+
LeaderMigration [Required]
+LeaderMigrationConfiguration +
+

LeaderMigration holds the configuration for Leader Migration.

+
+ +## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration} + + +**Appears in:** + +- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) + + +

LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.

+ + + + + + + + + + + + + + + + + +
FieldDescription
leaderName [Required]
+string +
+

LeaderName is the name of the leader election resource that protects the migration +E.g. 1-20-KCM-to-1-21-CCM

+
resourceLock [Required]
+string +
+

ResourceLock indicates the resource object type that will be used to lock. +Should be "leases" or "endpoints".

+
controllerLeaders [Required]
+[]ControllerLeaderConfiguration +
+

ControllerLeaders contains a list of migrating leader lock configurations

+
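To show how the fields above fit together, here is a minimal sketch of a leader migration configuration file. The leader name and the controller-to-component assignments are hypothetical examples rather than recommended values.

```yaml
# Hypothetical LeaderMigrationConfiguration: leaderName and the
# controller-to-component assignments below are examples only.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1alpha1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
  - name: route
    component: cloud-controller-manager
  - name: service
    component: cloud-controller-manager
  - name: cloud-node-lifecycle
    component: cloud-controller-manager
```

A file like this is typically passed to both the kube-controller-manager and the cloud-controller-manager so that the two components agree on which of them owns each migrating controller.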
+ + + + +## `KubeControllerManagerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration} + + + +

KubeControllerManagerConfiguration contains elements describing kube-controller manager.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
kubecontrollermanager.config.k8s.io/v1alpha1
kind
string
KubeControllerManagerConfiguration
Generic [Required]
+GenericControllerManagerConfiguration +
+

Generic holds configuration for a generic controller-manager

+
KubeCloudShared [Required]
+KubeCloudSharedConfiguration +
+

KubeCloudSharedConfiguration holds configuration for shared related features +both in cloud controller manager and kube-controller manager.

+
AttachDetachController [Required]
+AttachDetachControllerConfiguration +
+

AttachDetachControllerConfiguration holds configuration for +AttachDetachController related features.

+
CSRSigningController [Required]
+CSRSigningControllerConfiguration +
+

CSRSigningControllerConfiguration holds configuration for +CSRSigningController related features.

+
DaemonSetController [Required]
+DaemonSetControllerConfiguration +
+

DaemonSetControllerConfiguration holds configuration for DaemonSetController +related features.

+
DeploymentController [Required]
+DeploymentControllerConfiguration +
+

DeploymentControllerConfiguration holds configuration for +DeploymentController related features.

+
StatefulSetController [Required]
+StatefulSetControllerConfiguration +
+

StatefulSetControllerConfiguration holds configuration for +StatefulSetController related features.

+
DeprecatedController [Required]
+DeprecatedControllerConfiguration +
+

DeprecatedControllerConfiguration holds configuration for some deprecated +features.

+
EndpointController [Required]
+EndpointControllerConfiguration +
+

EndpointControllerConfiguration holds configuration for EndpointController +related features.

+
EndpointSliceController [Required]
+EndpointSliceControllerConfiguration +
+

EndpointSliceControllerConfiguration holds configuration for +EndpointSliceController related features.

+
EndpointSliceMirroringController [Required]
+EndpointSliceMirroringControllerConfiguration +
+

EndpointSliceMirroringControllerConfiguration holds configuration for +EndpointSliceMirroringController related features.

+
EphemeralVolumeController [Required]
+EphemeralVolumeControllerConfiguration +
+

EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController +related features.

+
GarbageCollectorController [Required]
+GarbageCollectorControllerConfiguration +
+

GarbageCollectorControllerConfiguration holds configuration for +GarbageCollectorController related features.

+
HPAController [Required]
+HPAControllerConfiguration +
+

HPAControllerConfiguration holds configuration for HPAController related features.

+
JobController [Required]
+JobControllerConfiguration +
+

JobControllerConfiguration holds configuration for JobController related features.

+
CronJobController [Required]
+CronJobControllerConfiguration +
+

CronJobControllerConfiguration holds configuration for CronJobController related features.

+
NamespaceController [Required]
+NamespaceControllerConfiguration +
+

NamespaceControllerConfiguration holds configuration for NamespaceController +related features.

+
NodeIPAMController [Required]
+NodeIPAMControllerConfiguration +
+

NodeIPAMControllerConfiguration holds configuration for NodeIPAMController +related features.

+
NodeLifecycleController [Required]
+NodeLifecycleControllerConfiguration +
+

NodeLifecycleControllerConfiguration holds configuration for +NodeLifecycleController related features.

+
PersistentVolumeBinderController [Required]
+PersistentVolumeBinderControllerConfiguration +
+

PersistentVolumeBinderControllerConfiguration holds configuration for +PersistentVolumeBinderController related features.

+
PodGCController [Required]
+PodGCControllerConfiguration +
+

PodGCControllerConfiguration holds configuration for PodGCController +related features.

+
ReplicaSetController [Required]
+ReplicaSetControllerConfiguration +
+

ReplicaSetControllerConfiguration holds configuration for ReplicaSet related features.

+
ReplicationController [Required]
+ReplicationControllerConfiguration +
+

ReplicationControllerConfiguration holds configuration for +ReplicationController related features.

+
ResourceQuotaController [Required]
+ResourceQuotaControllerConfiguration +
+

ResourceQuotaControllerConfiguration holds configuration for +ResourceQuotaController related features.

+
SAController [Required]
+SAControllerConfiguration +
+

SAControllerConfiguration holds configuration for ServiceAccountController +related features.

+
ServiceController [Required]
+ServiceControllerConfiguration +
+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

+
TTLAfterFinishedController [Required]
+TTLAfterFinishedControllerConfiguration +
+

TTLAfterFinishedControllerConfiguration holds configuration for +TTLAfterFinishedController related features.

+
+ +## `AttachDetachControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

AttachDetachControllerConfiguration contains elements describing AttachDetachController.

+ + + + + + + + + + + + + + +
FieldDescription
DisableAttachDetachReconcilerSync [Required]
+bool +
+

Reconciler runs a periodic loop to reconcile the desired state of the world with +the actual state of the world by triggering attach detach operations. +This flag enables or disables the reconciler. It is false by default, and thus the reconciler is enabled.

+
ReconcilerSyncLoopPeriod [Required]
+meta/v1.Duration +
+

ReconcilerSyncLoopPeriod is the amount of time the reconciler sync states loop +waits between successive executions. It is set to 5 sec by default.

+
+ +## `CSRSigningConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration} + + +**Appears in:** + +- [CSRSigningControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration) + + +

CSRSigningConfiguration holds information about a particular CSR signer

+ + + + + + + + + + + + + + +
FieldDescription
CertFile [Required]
+string +
+

certFile is the filename containing a PEM-encoded +X509 CA certificate used to issue certificates

+
KeyFile [Required]
+string +
+

keyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue certificates

+
+ +## `CSRSigningControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

CSRSigningControllerConfiguration contains elements describing CSRSigningController.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
ClusterSigningCertFile [Required]
+string +
+

clusterSigningCertFile is the filename containing a PEM-encoded +X509 CA certificate used to issue cluster-scoped certificates

+
ClusterSigningKeyFile [Required]
+string +
+

clusterSigningKeyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue cluster-scoped certificates

+
KubeletServingSignerConfiguration [Required]
+CSRSigningConfiguration +
+

kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kubelet-serving signer

+
KubeletClientSignerConfiguration [Required]
+CSRSigningConfiguration +
+

kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet

+
KubeAPIServerClientSignerConfiguration [Required]
+CSRSigningConfiguration +
+

kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client

+
LegacyUnknownSignerConfiguration [Required]
+CSRSigningConfiguration +
+

legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/legacy-unknown

+
ClusterSigningDuration [Required]
+meta/v1.Duration +
+

clusterSigningDuration is the max length of duration signed certificates will be given. +Individual CSRs may request shorter certs by setting spec.expirationSeconds.

+
+ +## `CronJobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

CronJobControllerConfiguration contains elements describing the CronJobController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentCronJobSyncs [Required]
+int32 +
+

concurrentCronJobSyncs is the number of job objects that are +allowed to sync concurrently. Larger number = more responsive jobs, +but more CPU (and network) load.

+
+ +## `DaemonSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

DaemonSetControllerConfiguration contains elements describing DaemonSetController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentDaemonSetSyncs [Required]
+int32 +
+

concurrentDaemonSetSyncs is the number of daemonset objects that are +allowed to sync concurrently. Larger number = more responsive daemonset, +but more CPU (and network) load.

+
+ +## `DeploymentControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

DeploymentControllerConfiguration contains elements describing DeploymentController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentDeploymentSyncs [Required]
+int32 +
+

concurrentDeploymentSyncs is the number of deployment objects that are +allowed to sync concurrently. Larger number = more responsive deployments, +but more CPU (and network) load.

+
+ +## `DeprecatedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

DeprecatedControllerConfiguration contains elements that are deprecated.

+ + + + +## `EndpointControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EndpointControllerConfiguration contains elements describing EndpointController.

+ + + + + + + + + + + + + + +
FieldDescription
ConcurrentEndpointSyncs [Required]
+int32 +
+

concurrentEndpointSyncs is the number of endpoint syncing operations +that will be done concurrently. Larger number = faster endpoint updating, +but more CPU (and network) load.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

+
+ +## `EndpointSliceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EndpointSliceControllerConfiguration contains elements describing +EndpointSliceController.

+ + + + + + + + + + + + + + + + + +
FieldDescription
ConcurrentServiceEndpointSyncs [Required]
+int32 +
+

concurrentServiceEndpointSyncs is the number of service endpoint syncing +operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

+
MaxEndpointsPerSlice [Required]
+int32 +
+

maxEndpointsPerSlice is the maximum number of endpoints that will be +added to an EndpointSlice. More endpoints per slice will result in fewer +and larger endpoint slices, but larger resources.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

+
+ +## `EndpointSliceMirroringControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EndpointSliceMirroringControllerConfiguration contains elements describing +EndpointSliceMirroringController.

+ + + + + + + + + + + + + + + + + +
FieldDescription
MirroringConcurrentServiceEndpointSyncs [Required]
+int32 +
+

mirroringConcurrentServiceEndpointSyncs is the number of service endpoint +syncing operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

+
MirroringMaxEndpointsPerSubset [Required]
+int32 +
+

mirroringMaxEndpointsPerSubset is the maximum number of endpoints that +will be mirrored to an EndpointSlice for an EndpointSubset.

+
MirroringEndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice +updates. All updates triggered by EndpointSlice changes will be delayed +by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the +same Endpoints resource change in that period, they will be batched to a +single EndpointSlice update. Default 0 value means that each Endpoints +update triggers an EndpointSlice update.

+
+ +## `EphemeralVolumeControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentEphemeralVolumeSyncs [Required]
+int32 +
+

ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations +that will be done concurrently. Larger number = faster ephemeral volume updating, +but more CPU (and network) load.

+
+ +## `GarbageCollectorControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.

+ + + + + + + + + + + + + + + + + +
FieldDescription
EnableGarbageCollector [Required]
+bool +
+

enables the generic garbage collector. MUST be synced with the +corresponding flag of the kube-apiserver. WARNING: the generic garbage +collector is an alpha feature.

+
ConcurrentGCSyncs [Required]
+int32 +
+

concurrentGCSyncs is the number of garbage collector workers that are +allowed to sync concurrently.

+
GCIgnoredResources [Required]
+[]GroupResource +
+

gcIgnoredResources is the list of GroupResources that garbage collection should ignore.

+
+ +## `GroupResource` {#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource} + + +**Appears in:** + +- [GarbageCollectorControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration) + + +

GroupResource describes a group resource.

+ + + + + + + + + + + + + + +
FieldDescription
Group [Required]
+string +
+

group is the group portion of the GroupResource.

+
Resource [Required]
+string +
+

resource is the resource portion of the GroupResource.

+
+ +## `HPAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

HPAControllerConfiguration contains elements describing HPAController.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
ConcurrentHorizontalPodAutoscalerSyncs [Required]
+int32 +
+

ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently. +Larger number = more responsive HPA processing, but more CPU (and network) load.

+
HorizontalPodAutoscalerSyncPeriod [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of +pods in horizontal pod autoscaler.

+
HorizontalPodAutoscalerUpscaleForbiddenWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerUpscaleForbiddenWindow is the period after which the next upscale is allowed.

+
HorizontalPodAutoscalerDownscaleStabilizationWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerDownscaleStabilizationWindow is the period for which the autoscaler will look +backwards and not scale down below any recommendation it made during that period.

+
HorizontalPodAutoscalerDownscaleForbiddenWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerDownscaleForbiddenWindow is the period after which the next downscale is allowed.

+
HorizontalPodAutoscalerTolerance [Required]
+float64 +
+

HorizontalPodAutoscalerTolerance is the tolerance for when +resource usage suggests upscaling or downscaling.

+
HorizontalPodAutoscalerCPUInitializationPeriod [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples +might be skipped.

+
HorizontalPodAutoscalerInitialReadinessDelay [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerInitialReadinessDelay is the period after pod start during which readiness +changes are treated as readiness being set for the first time. The only effect of this is that +the HPA will disregard CPU samples from unready pods that had their last readiness change during that +period.

+
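+
+As a sketch of how these settings fit together, here is an illustrative YAML fragment using
+the field names from the table above. Durations are written as `meta/v1.Duration` strings;
+the values are examples, not recommendations or documented defaults.
+
+```yaml
+# Illustrative HPAControllerConfiguration fragment (example values only).
+ConcurrentHorizontalPodAutoscalerSyncs: 5            # more workers = more responsive HPAs, more load
+HorizontalPodAutoscalerSyncPeriod: 15s               # how often HPA objects are reconciled
+HorizontalPodAutoscalerDownscaleStabilizationWindow: 5m0s
+HorizontalPodAutoscalerTolerance: 0.1                # ignore usage changes within this tolerance
+HorizontalPodAutoscalerCPUInitializationPeriod: 5m0s # CPU samples may be skipped after pod start
+HorizontalPodAutoscalerInitialReadinessDelay: 30s
+```
+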
+ +## `JobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

JobControllerConfiguration contains elements describing JobController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentJobSyncs [Required]
+int32 +
+

concurrentJobSyncs is the number of job objects that are +allowed to sync concurrently. Larger number = more responsive jobs, +but more CPU (and network) load.

+
+ +## `NamespaceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NamespaceControllerConfiguration contains elements describing NamespaceController.

+ + + + + + + + + + + + + + +
FieldDescription
NamespaceSyncPeriod [Required]
+meta/v1.Duration +
+

namespaceSyncPeriod is the period for syncing namespace life-cycle +updates.

+
ConcurrentNamespaceSyncs [Required]
+int32 +
+

concurrentNamespaceSyncs is the number of namespace objects that are +allowed to sync concurrently.

+
+ +## `NodeIPAMControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NodeIPAMControllerConfiguration contains elements describing NodeIpamController.

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
ServiceCIDR [Required]
+string +
+

serviceCIDR is the CIDR range for Services in the cluster.

+
SecondaryServiceCIDR [Required]
+string +
+

secondaryServiceCIDR is the CIDR range for Services in the cluster. This is used in dual-stack clusters. SecondaryServiceCIDR must be of a different IP family than ServiceCIDR.

+
NodeCIDRMaskSize [Required]
+int32 +
+

NodeCIDRMaskSize is the mask size for the node CIDR in the cluster.

+
NodeCIDRMaskSizeIPv4 [Required]
+int32 +
+

NodeCIDRMaskSizeIPv4 is the mask size for the IPv4 node CIDR in a dual-stack cluster.

+
NodeCIDRMaskSizeIPv6 [Required]
+int32 +
+

NodeCIDRMaskSizeIPv6 is the mask size for the IPv6 node CIDR in a dual-stack cluster.

+
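+
+The fragment below sketches how these fields might look for a dual-stack cluster, using the
+field names from the table above; the CIDR ranges and mask sizes are illustrative values only.
+
+```yaml
+# Illustrative NodeIPAMControllerConfiguration fragment (dual-stack example values).
+ServiceCIDR: 10.96.0.0/16               # primary Service CIDR (IPv4 in this sketch)
+SecondaryServiceCIDR: fd00:10:96::/112  # must be of a different IP family than ServiceCIDR
+NodeCIDRMaskSizeIPv4: 24                # per-node IPv4 CIDR mask size
+NodeCIDRMaskSizeIPv6: 64                # per-node IPv6 CIDR mask size
+```
+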
+ +## `NodeLifecycleControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
EnableTaintManager [Required]
+bool +
+

If set to true, enables NoExecute taints and will evict all not-tolerating +Pods running on Nodes tainted with this kind of taint.

+
NodeEvictionRate [Required]
+float32 +
+

nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy.

+
SecondaryNodeEvictionRate [Required]
+float32 +
+

secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy.

+
NodeStartupGracePeriod [Required]
+meta/v1.Duration +
+

nodeStartupGracePeriod is the amount of time that we allow a starting node to +be unresponsive before marking it unhealthy.

+
NodeMonitorGracePeriod [Required]
+meta/v1.Duration +
+

nodeMonitorGracePeriod is the amount of time that we allow a running node to be +unresponsive before marking it unhealthy. Must be N times more than the kubelet's +nodeStatusUpdateFrequency, where N is the number of retries allowed for the kubelet +to post node status.

+
PodEvictionTimeout [Required]
+meta/v1.Duration +
+

podEvictionTimeout is the grace period for deleting pods on failed nodes.

+
LargeClusterSizeThreshold [Required]
+int32 +
+

largeClusterSizeThreshold is the cluster size at or below which secondaryNodeEvictionRate is implicitly overridden to 0.

+
UnhealthyZoneThreshold [Required]
+float32 +
+

A zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least +unhealthyZoneThreshold (no less than 3) of the Nodes in the zone are NotReady.

+
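+
+To show how these settings relate to one another, here is an illustrative YAML fragment using
+the field names from the table above; the values are examples only, not recommendations.
+
+```yaml
+# Illustrative NodeLifecycleControllerConfiguration fragment (example values only).
+EnableTaintManager: true        # evict Pods that do not tolerate NoExecute taints
+NodeStartupGracePeriod: 1m0s    # allowance for nodes that are still starting up
+NodeMonitorGracePeriod: 40s     # must be several times the kubelet's status update frequency
+PodEvictionTimeout: 5m0s        # grace period for deleting pods on failed nodes
+NodeEvictionRate: 0.1           # nodes per second while the zone is healthy
+SecondaryNodeEvictionRate: 0.01 # nodes per second while the zone is unhealthy
+LargeClusterSizeThreshold: 50   # at or below this size, SecondaryNodeEvictionRate is forced to 0
+UnhealthyZoneThreshold: 0.55    # fraction of NotReady nodes that marks a zone unhealthy
+```
+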
+ +## `PersistentVolumeBinderControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

PersistentVolumeBinderControllerConfiguration contains elements describing +PersistentVolumeBinderController.

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
PVClaimBinderSyncPeriod [Required]
+meta/v1.Duration +
+

pvClaimBinderSyncPeriod is the period for syncing persistent volumes +and persistent volume claims.

+
VolumeConfiguration [Required]
+VolumeConfiguration +
+

volumeConfiguration holds configuration for volume related features.

+
VolumeHostCIDRDenylist [Required]
+[]string +
+

VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the +controller from plugins.

+
VolumeHostAllowLocalLoopback [Required]
+bool +
+

VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc) +should be allowed from plugins.

+
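+
+A minimal sketch of these fields, using the names from the table above; the CIDR shown in the
+denylist is an arbitrary example, and the nested VolumeConfiguration is left empty here because
+it is described in its own section later on this page.
+
+```yaml
+# Illustrative PersistentVolumeBinderControllerConfiguration fragment (example values only).
+PVClaimBinderSyncPeriod: 15s     # period for syncing persistent volumes and claims
+VolumeHostCIDRDenylist:
+  - 169.254.0.0/16               # example: keep volume plugins away from link-local addresses
+VolumeHostAllowLocalLoopback: false
+VolumeConfiguration: {}          # see the VolumeConfiguration section later on this page
+```
+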
+ +## `PersistentVolumeRecyclerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration} + + +**Appears in:** + +- [VolumeConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration) + + +

PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
MaximumRetry [Required]
+int32 +
+

maximumRetry is the number of retries the PV recycler will execute on failure to recycle +a PV.

+
MinimumTimeoutNFS [Required]
+int32 +
+

minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler +pod.

+
PodTemplateFilePathNFS [Required]
+string +
+

podTemplateFilePathNFS is the file path to a pod definition used as a template for +NFS persistent volume recycling.

+
IncrementTimeoutNFS [Required]
+int32 +
+

incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds +for an NFS scrubber pod.

+
PodTemplateFilePathHostPath [Required]
+string +
+

podTemplateFilePathHostPath is the file path to a pod definition used as a template for +HostPath persistent volume recycling. This is for development and testing only and +will not work in a multi-node cluster.

+
MinimumTimeoutHostPath [Required]
+int32 +
+

minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath +Recycler pod. This is for development and testing only and will not work in a multi-node +cluster.

+
IncrementTimeoutHostPath [Required]
+int32 +
+

incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds +for a HostPath scrubber pod. This is for development and testing only and will not work +in a multi-node cluster.

+
+ +## `PodGCControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

PodGCControllerConfiguration contains elements describing PodGCController.

+ + + + + + + + + + + +
FieldDescription
TerminatedPodGCThreshold [Required]
+int32 +
+

terminatedPodGCThreshold is the number of terminated pods that can exist +before the terminated pod garbage collector starts deleting terminated pods. +If <= 0, the terminated pod garbage collector is disabled.

+
+ +## `ReplicaSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ReplicaSetControllerConfiguration contains elements describing ReplicaSetController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentRSSyncs [Required]
+int32 +
+

concurrentRSSyncs is the number of replica sets that are allowed to sync +concurrently. Larger number = more responsive replica management, but more +CPU (and network) load.

+
+ +## `ReplicationControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ReplicationControllerConfiguration contains elements describing ReplicationController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentRCSyncs [Required]
+int32 +
+

concurrentRCSyncs is the number of replication controllers that are +allowed to sync concurrently. Larger number = more responsive replica +management, but more CPU (and network) load.

+
+ +## `ResourceQuotaControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.

+ + + + + + + + + + + + + + +
FieldDescription
ResourceQuotaSyncPeriod [Required]
+meta/v1.Duration +
+

resourceQuotaSyncPeriod is the period for syncing quota usage status +in the system.

+
ConcurrentResourceQuotaSyncs [Required]
+int32 +
+

concurrentResourceQuotaSyncs is the number of resource quotas that are +allowed to sync concurrently. Larger number = more responsive quota +management, but more CPU (and network) load.

+
+ +## `SAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

SAControllerConfiguration contains elements describing ServiceAccountController.

+ + + + + + + + + + + + + + + + + +
FieldDescription
ServiceAccountKeyFile [Required]
+string +
+

serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key +used to sign service account tokens.

+
ConcurrentSATokenSyncs [Required]
+int32 +
+

concurrentSATokenSyncs is the number of service account token syncing operations +that will be done concurrently.

+
RootCAFile [Required]
+string +
+

rootCAFile is the root certificate authority that will be included in the service +account's token secret. This must be a valid PEM-encoded CA bundle.

+
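+
+A minimal sketch of these fields follows; the file paths are hypothetical and must be replaced
+with the locations of your cluster's actual service account signing key and CA bundle.
+
+```yaml
+# Illustrative SAControllerConfiguration fragment (hypothetical paths, example values).
+ServiceAccountKeyFile: /etc/kubernetes/pki/sa.key  # PEM-encoded private RSA key (hypothetical path)
+ConcurrentSATokenSyncs: 5                          # concurrent token syncing operations
+RootCAFile: /etc/kubernetes/pki/ca.crt             # PEM-encoded CA bundle (hypothetical path)
+```
+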
+ +## `StatefulSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

StatefulSetControllerConfiguration contains elements describing StatefulSetController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentStatefulSetSyncs [Required]
+int32 +
+

concurrentStatefulSetSyncs is the number of statefulset objects that are +allowed to sync concurrently. Larger number = more responsive statefulsets, +but more CPU (and network) load.

+
+ +## `TTLAfterFinishedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentTTLSyncs [Required]
+int32 +
+

concurrentTTLSyncs is the number of TTL-after-finished collector workers that are +allowed to sync concurrently.

+
+ +## `VolumeConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration} + + +**Appears in:** + +- [PersistentVolumeBinderControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration) + + +

VolumeConfiguration contains all enumerated flags meant to configure all volume +plugins. From this config, the controller-manager binary will create many instances of +volume.VolumeConfig, each containing only the configuration needed for that plugin which +are then passed to the appropriate plugin. The ControllerManager binary is the only part +of the code which knows what plugins are supported and which flags correspond to each plugin.

+ + + + + + + + + + + + + + + + + + + + +
FieldDescription
EnableHostPathProvisioning [Required]
+bool +
+

enableHostPathProvisioning enables HostPath PV provisioning when running without a +cloud provider. This allows testing and development of provisioning features. HostPath +provisioning is not supported in any way, won't work in a multi-node cluster, and +should not be used for anything other than testing or development.

+
EnableDynamicProvisioning [Required]
+bool +
+

enableDynamicProvisioning enables the provisioning of volumes when running within an environment +that supports dynamic provisioning. Defaults to true.

+
PersistentVolumeRecyclerConfiguration [Required]
+PersistentVolumeRecyclerConfiguration +
+

persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.

+
FlexVolumePluginDir [Required]
+string +
+

flexVolumePluginDir is the full path of the directory in which the flex +volume plugin should search for additional third-party volume plugins.

+
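+
+The following illustrative fragment shows how a VolumeConfiguration could nest the
+PersistentVolumeRecyclerConfiguration described earlier. Field names come from the tables on
+this page; the values and the plugin directory path are examples only.
+
+```yaml
+# Illustrative VolumeConfiguration fragment (example values only).
+EnableHostPathProvisioning: false   # HostPath provisioning is for testing/development only
+EnableDynamicProvisioning: true
+FlexVolumePluginDir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  # example path
+PersistentVolumeRecyclerConfiguration:
+  MaximumRetry: 3                   # retries on failure to recycle a PV
+  MinimumTimeoutNFS: 300            # minimum ActiveDeadlineSeconds for an NFS recycler pod
+  IncrementTimeoutNFS: 30           # seconds added per Gi to ActiveDeadlineSeconds (NFS)
+  MinimumTimeoutHostPath: 60        # HostPath recycling: testing/development only
+  IncrementTimeoutHostPath: 30
+```
+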
+ + + + +## `ServiceControllerConfiguration` {#ServiceControllerConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ServiceControllerConfiguration contains elements describing ServiceController.

+ + + + + + + + + + + +
FieldDescription
ConcurrentServiceSyncs [Required]
+int32 +
+

concurrentServiceSyncs is the number of services that are +allowed to sync concurrently. Larger number = more responsive service +management, but more CPU (and network) load.

+
+ + + +## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration} + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
cloudcontrollermanager.config.k8s.io/v1alpha1
kind
string
CloudControllerManagerConfiguration
Generic [Required]
+GenericControllerManagerConfiguration +
+

Generic holds configuration for a generic controller-manager.

+
KubeCloudShared [Required]
+KubeCloudSharedConfiguration +
+

KubeCloudSharedConfiguration holds configuration for features shared +by both the cloud-controller-manager and the kube-controller-manager.

+
ServiceController [Required]
+ServiceControllerConfiguration +
+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

+
NodeStatusUpdateFrequency [Required]
+meta/v1.Duration +
+

NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status.

+
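+
+Putting the pieces together, here is a hedged sketch of a CloudControllerManagerConfiguration
+object expressed as YAML. The `apiVersion` and `kind` values come from the table above, and the
+nested field names come from the ServiceControllerConfiguration table above and the
+CloudProviderConfiguration and KubeCloudSharedConfiguration tables below. The values are
+illustrative, and the `Generic` block is omitted because GenericControllerManagerConfiguration
+is documented elsewhere.
+
+```yaml
+# Illustrative sketch of a CloudControllerManagerConfiguration object (example values only).
+apiVersion: cloudcontrollermanager.config.k8s.io/v1alpha1
+kind: CloudControllerManagerConfiguration
+NodeStatusUpdateFrequency: 5m0s     # how often the controller updates node status
+ServiceController:
+  ConcurrentServiceSyncs: 1         # number of Services allowed to sync concurrently
+KubeCloudShared:
+  CloudProvider:
+    Name: example-cloud             # illustrative provider name
+    CloudConfigFile: /etc/kubernetes/cloud.conf   # hypothetical path
+  RouteReconciliationPeriod: 10s
+  NodeMonitorPeriod: 5s
+  ClusterName: kubernetes           # example instance prefix for the cluster
+  ConfigureCloudRoutes: false
+```
+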
+ +## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration} + + +**Appears in:** + +- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration) + + +

CloudProviderConfiguration contains basic elements describing the cloud provider.

+ + + + + + + + + + + + + + +
FieldDescription
Name [Required]
+string +
+

Name is the provider for cloud services.

+
CloudConfigFile [Required]
+string +
+

cloudConfigFile is the path to the cloud provider configuration file.

+
+ +## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

KubeCloudSharedConfiguration contains elements shared by both the kube-controller-manager +and the cloud-controller-manager, but not genericconfig.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
CloudProvider [Required]
+CloudProviderConfiguration +
+

CloudProviderConfiguration holds configuration for CloudProvider related features.

+
ExternalCloudVolumePlugin [Required]
+string +
+

externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external". +It is currently used by the in repo cloud providers to handle node and volume control in the KCM.

+
UseServiceAccountCredentials [Required]
+bool +
+

useServiceAccountCredentials indicates whether controllers should be run with +individual service account credentials.

+
AllowUntaggedCloud [Required]
+bool +
+

allowUntaggedCloud allows the controller manager to run with untagged cloud instances.

+
RouteReconciliationPeriod [Required]
+meta/v1.Duration +
+

routeReconciliationPeriod is the period for reconciling routes created for Nodes by the cloud provider.

+
NodeMonitorPeriod [Required]
+meta/v1.Duration +
+

nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.

+
ClusterName [Required]
+string +
+

clusterName is the instance prefix for the cluster.

+
ClusterCIDR [Required]
+string +
+

clusterCIDR is CIDR Range for Pods in cluster.

+
AllocateNodeCIDRs [Required]
+bool +
+

AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if +ConfigureCloudRoutes is true, to be set on the cloud provider.

+
CIDRAllocatorType [Required]
+string +
+

CIDRAllocatorType determines what kind of pod CIDR allocator will be used.

+
ConfigureCloudRoutes [Required]
+bool +
+

configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs +to be configured on the cloud provider.

+
NodeSyncPeriod [Required]
+meta/v1.Duration +
+

nodeSyncPeriod is the period for syncing nodes from the cloud provider. Longer +periods will result in fewer calls to the cloud provider, but may delay the addition +of new nodes to the cluster.

+
+ \ No newline at end of file diff --git a/content/en/docs/reference/issues-security/official-cve-feed.md b/content/en/docs/reference/issues-security/official-cve-feed.md index ea0a9bffc74..11eb4edee1d 100644 --- a/content/en/docs/reference/issues-security/official-cve-feed.md +++ b/content/en/docs/reference/issues-security/official-cve-feed.md @@ -1,9 +1,11 @@ --- title: Official CVE Feed +linkTitle: CVE feed weight: 25 outputs: - json - - html + - html + - rss layout: cve-feed --- @@ -14,19 +16,25 @@ the Kubernetes Security Response Committee. See [Kubernetes Security and Disclosure Information](/docs/reference/issues-security/security/) for more details. -The Kubernetes project publishes a programmatically accessible -[JSON Feed](/docs/reference/issues-security/official-cve-feed/index.json) of -published security issues. You can access it by executing the following command: - -{{< comment >}} -`replace` is used to bypass known issue with rendering ">" -: https://github.com/gohugoio/hugo/issues/7229 in JSON layouts template -`layouts/_default/cve-feed.json` -{{< /comment >}} +The Kubernetes project publishes a programmatically accessible feed of published +security issues in [JSON feed](/docs/reference/issues-security/official-cve-feed/index.json) +and [RSS feed](/docs/reference/issues-security/official-cve-feed/feed.xml) +formats. You can access it by executing the following commands: +{{< tabs name="CVE feeds" >}} +{{% tab name="JSON feed" %}} +[Link to JSON format](/docs/reference/issues-security/official-cve-feed/index.json) ```shell curl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/index.json ``` +{{% /tab %}} +{{% tab name="RSS feed" %}} +[Link to RSS format](/docs/reference/issues-security/official-cve-feed/feed.xml) +```shell +curl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml +``` +{{% /tab %}} +{{< /tabs >}} {{< cve-feed >}} diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index c78884a314b..03397f462f5 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -168,6 +168,7 @@ Automanaged APIService objects are deleted by kube-apiserver when it has no buil {{< /note >}} There are two possible values: + - `onstart`: The APIService should be reconciled when an API server starts up, but not otherwise. - `true`: The API server should reconcile this APIService continuously. @@ -191,7 +192,6 @@ The Kubelet populates this label with the hostname. Note that the hostname can b This label is also used as part of the topology hierarchy. See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information. - ### kubernetes.io/change-cause {#change-cause} Example: `kubernetes.io/change-cause: "kubectl edit --record deployment foo"` @@ -409,6 +409,7 @@ A zone represents a logical failure domain. It is common for Kubernetes cluster A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions, While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. 
a UPS or generator), but nodes in different regions typically would not. Kubernetes makes a few assumptions about the structure of zones and regions: + 1) regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions 2) zone names are unique across regions; for example region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b" @@ -431,6 +432,17 @@ Used on: PersistentVolumeClaim This annotation has been deprecated. +### volume.beta.kubernetes.io/storage-class (deprecated) + +Example: `volume.beta.kubernetes.io/storage-class: "example-class"` + +Used on: PersistentVolume, PersistentVolumeClaim + +This annotation can be used for PersistentVolume(PV) or PersistentVolumeClaim(PVC) to specify the name of [StorageClass](/docs/concepts/storage/storage-classes/). When both `storageClassName` attribute and `volume.beta.kubernetes.io/storage-class` annotation are specified, the annotation `volume.beta.kubernetes.io/storage-class` takes precedence over the `storageClassName` attribute. + +This annotation has been deprecated. Instead, set the [`storageClassName` field](/docs/concepts/storage/persistent-volumes/#class) +for the PersistentVolumeClaim or PersistentVolume. + ### volume.beta.kubernetes.io/mount-options (deprecated) {#mount-options} Example : `volume.beta.kubernetes.io/mount-options: "ro,soft"` @@ -528,7 +540,6 @@ a request where the client authenticated using the service account token. If a legacy token was last used before the cluster gained the feature (added in Kubernetes v1.26), then the label isn't set. - ### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by} Example: `endpointslice.kubernetes.io/managed-by: "controller"` @@ -614,6 +625,17 @@ Example: `kubectl.kubernetes.io/default-container: "front-end-app"` The value of the annotation is the container name that is default for this Pod. For example, `kubectl logs` or `kubectl exec` without `-c` or `--container` flag will use this default container. +### kubectl.kubernetes.io/default-logs-container (deprecated) + +Example: `kubectl.kubernetes.io/default-logs-container: "front-end-app"` + +The value of the annotation is the container name that is the default logging container for this Pod. For example, `kubectl logs` without `-c` or `--container` flag will use this default container. + +{{< note >}} +This annotation is deprecated. You should use the [`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container) annotation instead. +Kubernetes versions 1.25 and newer ignore this annotation. +{{< /note >}} + ### endpoints.kubernetes.io/over-capacity Example: `endpoints.kubernetes.io/over-capacity:truncated` @@ -634,7 +656,7 @@ The presence of this annotation on a Job indicates that the control plane is [tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers). The control plane uses this annotation to safely transition to tracking Jobs using finalizers, while the feature is in development. -You should **not** manually add or remove this annotation. +You should **not** manually add or remove this annotation. {{< note >}} Starting from Kubernetes 1.26, this annotation is deprecated. @@ -716,7 +738,6 @@ Refer to for further details about when and how to use this taint. 
{{< /caution >}} - ### node.cloudprovider.kubernetes.io/uninitialized Example: `node.cloudprovider.kubernetes.io/uninitialized: "NoSchedule"` diff --git a/content/en/docs/reference/using-api/cel.md b/content/en/docs/reference/using-api/cel.md new file mode 100644 index 00000000000..eb5d696de9a --- /dev/null +++ b/content/en/docs/reference/using-api/cel.md @@ -0,0 +1,300 @@ +--- +title: Common Expression Language in Kubernetes +reviewers: +- jpbetz +- cici37 +content_type: concept +weight: 35 +min-kubernetes-server-version: 1.25 +--- + + + +The [Common Expression Language (CEL)](https://github.com/google/cel-go) is used +in the Kubernetes API to declare validation rules, policy rules, and other +constraints or conditions. + +CEL expressions are evaluated directly in the +{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}, making CEL a +convenient alternative to out-of-process mechanisms, such as webhooks, for many +extensibility use cases. Your CEL expressions continue to execute so long as the +control plane's API server component remains available. + + + +## Language overview + +The [CEL +language](https://github.com/google/cel-spec/blob/master/doc/langdef.md) has a +straightforward syntax that is similar to the expressions in C, C++, Java, +JavaScript and Go. + +CEL was designed to be embedded into applications. Each CEL "program" is a +single expression that evaluates to a single value. CEL expressions are +typically short "one-liners" that inline well into the string fields of Kubernetes +API resources. + +Inputs to a CEL program are "variables". Each Kubernetes API field that contains +CEL declares in the API documentation which variables are available to use for +that field. For example, in the `x-kubernetes-validations[i].rules` field of +CustomResourceDefinitions, the `self` and `oldSelf` variables are available and +refer to the previous and current state of the custom resource data to be +validated by the CEL expression. Other Kubernetes API fields may declare +different variables. See the API documentation of the API fields to learn which +variables are available for that field. + +Example CEL expressions: + +{{< table caption="Examples of CEL expressions and the purpose of each" >}} +| Rule | Purpose | +|------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------| +| `self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas` | Validate that the three fields defining replicas are ordered appropriately | +| `'Available' in self.stateCounts` | Validate that an entry with the 'Available' key exists in a map | +| `(self.list1.size() == 0) != (self.list2.size() == 0)` | Validate that one of two lists is non-empty, but not both | +| `self.envars.filter(e, e.name = 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$')` | Validate the 'value' field of a listMap entry where key field 'name' is 'MY_ENV' | +| `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration | +| `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' | +| `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 | +| `type(self) == string ? 
self == '99%' : self == 42` | Validate an int-or-string field for both the int and string cases |
+| `self.metadata.name == 'singleton'` | Validate that an object's name matches a specific value (making it a singleton) |
+| `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint |
+| `self.names.size() == self.details.size() && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
+{{< /table >}}
+
+## CEL community libraries
+
+Kubernetes CEL expressions have access to the following CEL community libraries:
+
+- CEL standard functions, defined in the [list of standard definitions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions)
+- CEL standard [macros](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros)
+- CEL [extended string function library](https://pkg.go.dev/github.com/google/cel-go/ext#Strings)
+
+## Kubernetes CEL libraries
+
+In addition to the CEL community libraries, Kubernetes includes CEL libraries
+that are available everywhere CEL is used in Kubernetes.
+
+### Kubernetes list library
+
+The list library includes `indexOf` and `lastIndexOf`, which work similarly to the
+strings functions of the same names. These functions return the first or last
+positional index of the provided element in the list.
+
+The list library also includes `min`, `max` and `sum`. Sum is supported on all
+number types as well as the duration type. Min and max are supported on all
+comparable types.
+
+`isSorted` is also provided as a convenience function and is supported on all
+comparable types.
+
+Examples:
+
+{{< table caption="Examples of CEL expressions using list library functions" >}}
+| CEL Expression | Purpose |
+|------------------------------------------------------------------------------------|-----------------------------------------------------------|
+| `names.isSorted()` | Verify that a list of names is kept in alphabetical order |
+| `items.map(x, x.weight).sum() == 1.0` | Verify that the "weights" of a list of objects sum to 1.0 |
+| `lowPriorities.map(x, x.priority).max() < highPriorities.map(x, x.priority).min()` | Verify that two sets of priorities do not overlap |
+| `names.indexOf('should-be-first') == 0` | Require that the first name in a list is a specific value |
+{{< /table >}}
+
+See the [Kubernetes List Library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Lists)
+godoc for more information.
+
+### Kubernetes regex library
+
+In addition to the `matches` function provided by the CEL standard library, the
+regex library provides `find` and `findAll`, enabling a much wider range of
+regex operations.
+
+Examples:
+
+{{< table caption="Examples of CEL expressions using regex library functions" >}}
+| CEL Expression | Purpose |
+|-------------------------------------------------------------|----------------------------------------------------------|
+| `"abc 123".find('[0-9]*')` | Find the first number in a string |
+| `"1, 2, 3, 4".findAll('[0-9]*').map(x, int(x)).sum() < 100` | Verify that the numbers in a string sum to less than 100 |
+{{< /table >}}
+
+See the [Kubernetes regex library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Regex)
+godoc for more information.
+
+### Kubernetes URL library
+
+To make it easier and safer to process URLs, the following functions have been added:
+
+- `isURL(string)` checks if a string is a valid URL according to Go's
+  [net/url](https://pkg.go.dev/net/url#URL) package. The string must be an
+  absolute URL.
+- `url(string) URL` converts a string to a URL or results in an error if the
+  string is not a valid URL.
+
+Once parsed via the `url` function, the resulting URL object has `getScheme`,
+`getHost`, `getHostname`, `getPort`, `getEscapedPath` and `getQuery` accessors.
+
+Examples:
+
+{{< table caption="Examples of CEL expressions using URL library functions" >}}
+| CEL Expression | Purpose |
+|-----------------------------------------------------------------|------------------------------------------------|
+| `url('https://example.com:80/').getHost()` | Get the 'example.com:80' host part of the URL. |
+| `url('https://example.com/path with spaces/').getEscapedPath()` | Returns '/path%20with%20spaces/' |
+{{< /table >}}
+
+See the [Kubernetes URL library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#URLs)
+godoc for more information.
+
+## Type checking
+
+CEL is a [gradually typed language](https://github.com/google/cel-spec/blob/master/doc/langdef.md#gradual-type-checking).
+
+Some Kubernetes API fields contain fully type checked CEL expressions. For
+example, [CustomResourceDefinitions Validation
+Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
+are fully type checked.
+
+Some Kubernetes API fields contain partially type checked CEL expressions. A
+partially type checked expression is an expression in which some of the variables
+are statically typed but others are dynamically typed. For example, in the CEL
+expressions of
+[ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)
+the `request` variable is typed, but the `object` variable is dynamically typed.
+As a result, an expression containing `request.namex` would fail type checking
+because the `namex` field is not defined. However, `object.namex` would pass
+type checking even when the `namex` field is not defined for the resource kinds
+that `object` refers to, because `object` is dynamically typed.
+
+The `has()` macro in CEL may be used in CEL expressions to check if a field of a
+dynamically typed variable is accessible before attempting to access the field's
+value. For example:
+
+```cel
+has(object.namex) ?
object.namex == 'special' : request.name == 'special' +``` + +## Type system integration + +{{< table caption="Table showing the relationship between OpenAPIv3 types and CEL types" >}} +| OpenAPIv3 type | CEL type | +|----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------| +| 'object' with Properties | object / "message type" (`type()` evaluates to `selfType.path.to.object.from.self` | +| 'object' with AdditionalProperties | map | +| 'object' with x-kubernetes-embedded-type | object / "message type", 'apiVersion', 'kind', 'metadata.name' and 'metadata.generateName' are implicitly included in schema | +| 'object' with x-kubernetes-preserve-unknown-fields | object / "message type", unknown fields are NOT accessible in CEL expression | +| x-kubernetes-int-or-string | union of int or string, `self.intOrString < 100 \|\| self.intOrString == '50%'` evaluates to true for both `50` and `"50%"` | +| 'array | list | +| 'array' with x-kubernetes-list-type=map | list with map based Equality & unique key guarantees | +| 'array' with x-kubernetes-list-type=set | list with set based Equality & unique entry guarantees | +| 'boolean' | boolean | +| 'number' (all formats) | double | +| 'integer' (all formats) | int (64) | +| _no equivalent_ | uint (64) | +| 'null' | null_type | +| 'string' | string | +| 'string' with format=byte (base64 encoded) | bytes | +| 'string' with format=date | timestamp (google.protobuf.Timestamp) | +| 'string' with format=datetime | timestamp (google.protobuf.Timestamp) | +| 'string' with format=duration | duration (google.protobuf.Duration) | +{{< /table >}} + +Also see: [CEL types](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#values), +[OpenAPI types](https://swagger.io/specification/#data-types), +[Kubernetes Structural Schemas](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema). + +Equality comparison for arrays with `x-kubernetes-list-type` of `set` or `map` ignores element +order. For example `[1, 2] == [2, 1]` if the arrays represent Kubernetes `set` values. + +Concatenation on arrays with `x-kubernetes-list-type` use the semantics of the +list type: + +- `set`: `X + Y` performs a union where the array positions of all elements in + `X` are preserved and non-intersecting elements in `Y` are appended, retaining + their partial order. +- `map`: `X + Y` performs a merge where the array positions of all keys in `X` + are preserved but the values are overwritten by values in `Y` when the key + sets of `X` and `Y` intersect. Elements in `Y` with non-intersecting keys are + appended, retaining their partial order. + +## Escaping + +Only Kubernetes resource property names of the form +`[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible from CEL. 
Accessible property
+names are escaped according to the following rules when accessed in the
+expression:
+
+{{< table caption="Table of CEL identifier escaping rules" >}}
+| escape sequence | property name equivalent |
+|-------------------|----------------------------------------------------------------------------------------------|
+| `__underscores__` | `__` |
+| `__dot__` | `.` |
+| `__dash__` | `-` |
+| `__slash__` | `/` |
+| `__{keyword}__` | [CEL **RESERVED** keyword](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#syntax) |
+{{< /table >}}
+
+When you escape one of CEL's **RESERVED** keywords, the property name must match the
+keyword exactly for the underscore escaping to apply (for example, `int` in the word
+`sprint` would not be escaped, nor would it need to be).
+
+Examples of escaping:
+
+{{< table caption="Examples of escaped CEL identifiers" >}}
+| property name | rule with escaped property name |
+|---------------|-----------------------------------|
+| `namespace` | `self.__namespace__ > 0` |
+| `x-prop` | `self.x__dash__prop > 0` |
+| `redact__d` | `self.redact__underscores__d > 0` |
+| `string` | `self.startsWith('kube')` |
+{{< /table >}}
+
+## Resource constraints
+
+CEL is non-Turing complete and offers a variety of production safety controls to
+limit execution time. CEL's _resource constraint_ features provide feedback to
+developers about expression complexity and help protect the API server from
+excessive resource consumption during evaluation. CEL's resource constraint
+features are used to prevent CEL evaluation from consuming excessive API server
+resources.
+
+A key element of the resource constraint features is a _cost unit_ that CEL
+defines as a way of tracking CPU utilization. Cost units are independent of
+system load and hardware. Cost units are also deterministic; for any given CEL
+expression and input data, evaluation of the expression by the CEL interpreter
+will always result in the same cost.
+
+Many of CEL's core operations have fixed costs. The simplest operations, such as
+comparisons (e.g. `<`), have a cost of 1. Some have a higher fixed cost; for
+example, list literal declarations have a fixed base cost of 40 cost units.
+
+Calls to functions implemented in native code approximate cost based on the time
+complexity of the operation. For example: operations that use regular
+expressions, such as `match` and `find`, are estimated using an approximated
+cost of `length(regexString)*length(inputString)`. The approximated cost
+reflects the worst case time complexity of Go's RE2 implementation.
+
+### Runtime cost budget
+
+All CEL expressions evaluated by Kubernetes are constrained by a runtime cost
+budget. The runtime cost budget is an estimate of actual CPU utilization
+computed by incrementing a cost unit counter while interpreting a CEL
+expression. If the CEL interpreter executes too many instructions, the runtime
+cost budget will be exceeded, execution of the expression will be halted, and
+an error will result.
+
+Some Kubernetes resources define an additional runtime cost budget that bounds
+the execution of multiple expressions. If the sum total of the cost of
+expressions exceeds the budget, execution of the expressions will be halted, and
+an error will result. For example, the validation of a custom resource has a
+_per-validation_ runtime cost budget for all [Validation
+Rules](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
+evaluated to validate the custom resource.
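+
+To make the scope of the per-validation budget concrete, here is a hedged sketch of a
+CustomResourceDefinition schema excerpt that uses a CEL validation rule of the kind shown
+earlier on this page; the field names (`minReplicas`, `replicas`, `maxReplicas`) are
+illustrative.
+
+```yaml
+# Sketch of a CRD schema excerpt; each rule in x-kubernetes-validations is
+# evaluated under the per-validation runtime cost budget described above.
+openAPIV3Schema:
+  type: object
+  properties:
+    spec:
+      type: object
+      x-kubernetes-validations:
+        - rule: "self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas"
+          message: "replicas must lie between minReplicas and maxReplicas"
+      properties:
+        minReplicas:
+          type: integer
+        replicas:
+          type: integer
+        maxReplicas:
+          type: integer
+```
+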
+ +### Estimated cost limits + +For some Kubernetes resources, the API server may also check if worst case +estimated running time of CEL expressions would be prohibitively expensive to +execute. If so, the API server prevent the CEL expression from being written to +API resources by rejecting create or update operations containing the CEL +expression to the API resources. This feature offers a stronger assurance that +CEL expressions written to the API resource will be evaluate at runtime without +exceeding the runtime cost budget. \ No newline at end of file diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 03dc6b4dc03..ca7f6897d32 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -41,12 +41,12 @@ cluster's API server. ## Define clusters, users, and contexts -Suppose you have two clusters, one for development work and one for scratch work. +Suppose you have two clusters, one for development work and one for test work. In the `development` cluster, your frontend developers work in a namespace called `frontend`, -and your storage developers work in a namespace called `storage`. In your `scratch` cluster, +and your storage developers work in a namespace called `storage`. In your `test` cluster, developers work in the default namespace, or they create auxiliary namespaces as they see fit. Access to the development cluster requires authentication by certificate. Access -to the scratch cluster requires authentication by username and password. +to the test cluster requires authentication by username and password. Create a directory named `config-exercise`. In your `config-exercise` directory, create a file named `config-demo` with this content: @@ -60,7 +60,7 @@ clusters: - cluster: name: development - cluster: - name: scratch + name: test users: - name: developer @@ -72,7 +72,7 @@ contexts: - context: name: dev-storage - context: - name: exp-scratch + name: exp-test ``` A configuration file describes clusters, users, and contexts. Your `config-demo` file @@ -83,7 +83,7 @@ your configuration file: ```shell kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file -kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify +kubectl config --kubeconfig=config-demo set-cluster test --server=https://5.6.7.8 --insecure-skip-tls-verify ``` Add user details to your configuration file: @@ -108,7 +108,7 @@ Add context details to your configuration file: ```shell kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer -kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter +kubectl config --kubeconfig=config-demo set-context exp-test --cluster=test --namespace=default --user=experimenter ``` Open your `config-demo` file to see the added details. 
As an alternative to opening the @@ -130,7 +130,7 @@ clusters: - cluster: insecure-skip-tls-verify: true server: https://5.6.7.8 - name: scratch + name: test contexts: - context: cluster: development @@ -143,10 +143,10 @@ contexts: user: developer name: dev-storage - context: - cluster: scratch + cluster: test namespace: default user: experimenter - name: exp-scratch + name: exp-test current-context: "" kind: Config preferences: {} @@ -220,19 +220,19 @@ users: client-key: fake-key-file ``` -Now suppose you want to work for a while in the scratch cluster. +Now suppose you want to work for a while in the test cluster. -Change the current context to `exp-scratch`: +Change the current context to `exp-test`: ```shell -kubectl config --kubeconfig=config-demo use-context exp-scratch +kubectl config --kubeconfig=config-demo use-context exp-test ``` Now any `kubectl` command you give will apply to the default namespace of -the `scratch` cluster. And the command will use the credentials of the user -listed in the `exp-scratch` context. +the `test` cluster. And the command will use the credentials of the user +listed in the `exp-test` context. -View configuration associated with the new current context, `exp-scratch`. +View configuration associated with the new current context, `exp-test`. ```shell kubectl config --kubeconfig=config-demo view --minify @@ -338,10 +338,10 @@ contexts: user: developer name: dev-storage - context: - cluster: scratch + cluster: test namespace: default user: experimenter - name: exp-scratch + name: exp-test ``` For more information about how kubeconfig files are merged, see diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md index c683c5aa9b2..656762c30f5 100644 --- a/content/en/docs/tasks/administer-cluster/encrypt-data.md +++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md @@ -103,6 +103,7 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations `aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented. `aescbc` | AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks. `kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding (prior to v1.25), using AES-GCM starting from v1.25, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/). +{{< /table >}} Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider is the first provider, the first key is used for encryption. 
diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md index 267d614ef9d..d31f708043d 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md @@ -8,7 +8,7 @@ weight: 50 -The `dockershim` component of Kubernetes allows to use Docker as a Kubernetes's +The `dockershim` component of Kubernetes allows the use of Docker as a Kubernetes's {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}. Kubernetes' built-in `dockershim` component was removed in release v1.24. @@ -40,11 +40,11 @@ dependency on Docker: 1. Third-party tools that perform above mentioned privileged operations. See [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents) for more information. -1. Make sure there is no indirect dependencies on dockershim behavior. +1. Make sure there are no indirect dependencies on dockershim behavior. This is an edge case and unlikely to affect your application. Some tooling may be configured to react to Docker-specific behaviors, for example, raise alert on specific metrics or search for a specific log message as part of troubleshooting instructions. - If you have such tooling configured, test the behavior on test + If you have such tooling configured, test the behavior on a test cluster before migration. ## Dependency on Docker explained {#role-of-dockershim} @@ -74,7 +74,7 @@ before to check on these containers is no longer available. You cannot get container information using `docker ps` or `docker inspect` commands. As you cannot list containers, you cannot get logs, stop containers, -or execute something inside container using `docker exec`. +or execute something inside a container using `docker exec`. {{< note >}} diff --git a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index a9a281f5b72..f60b36f7128 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -67,6 +67,7 @@ cat /mnt/data/index.html ``` The output should be: + ``` Hello from Kubernetes storage ``` @@ -247,8 +248,8 @@ You can now close the shell to your Node. You can perform 2 volume mounts on your nginx container: -`/usr/share/nginx/html` for the static website -`/etc/nginx/nginx.conf` for the default config +- `/usr/share/nginx/html` for the static website +- `/etc/nginx/nginx.conf` for the default config @@ -261,6 +262,7 @@ with a GID. Then the GID is automatically added to any Pod that uses the PersistentVolume. Use the `pv.beta.kubernetes.io/gid` annotation as follows: + ```yaml apiVersion: v1 kind: PersistentVolume @@ -269,6 +271,7 @@ metadata: annotations: pv.beta.kubernetes.io/gid: "1234" ``` + When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. 
Every GID, whether it originates from a PersistentVolume diff --git a/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md b/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md index 5dddd1119a5..2de5a39c522 100644 --- a/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md +++ b/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md @@ -59,46 +59,32 @@ Figure 1. SOCKS5 tutorial components ## Using ssh to create a SOCKS5 proxy -This command starts a SOCKS5 proxy between your client machine and the remote server. -The SOCKS5 proxy lets you connect to your cluster's API server. +The following command starts a SOCKS5 proxy between your client machine and the remote SOCKS server: ```shell # The SSH tunnel continues running in the foreground after you run this ssh -D 1080 -q -N username@kubernetes-remote-server.example ``` +The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration: * `-D 1080`: opens a SOCKS proxy on local port :1080. * `-q`: quiet mode. Causes most warning and diagnostic messages to be suppressed. * `-N`: Do not execute a remote command. Useful for just forwarding ports. -* `username@kubernetes-remote-server.example`: the remote SSH server where the Kubernetes cluster is running. +* `username@kubernetes-remote-server.example`: the remote SSH server behind which the Kubernetes cluster + is running (eg: a bastion host). ## Client configuration -To explore the Kubernetes API you'll first need to instruct your clients to send their queries through -the SOCKS5 proxy we created earlier. - -For command-line tools, set the `https_proxy` environment variable and pass it to commands that you run. +To access the Kubernetes API server through the proxy you must instruct `kubectl` to send queries through +the `SOCKS` proxy we created earlier. Do this by either setting the appropriate environment variable, +or via the `proxy-url` attribute in the kubeconfig file. Using an environment variable: ```shell -export https_proxy=socks5h://localhost:1080 +export HTTPS_PROXY=socks5://localhost:1080 ``` -When you set the `https_proxy` variable, tools such as `curl` route HTTPS traffic through the proxy -you configured. For this to work, the tool must support SOCKS5 proxying. - -{{< note >}} -In the URL https://localhost:6443/api, `localhost` does not refer to your local client computer. -Instead, it refers to the endpoint on the remote server known as `localhost`. -The `curl` tool sends the hostname from the HTTPS URL over SOCKS, and the remote server -resolves that locally (to an address that belongs to its loopback interface). -{{}} - -```shell -curl -k -v https://localhost:6443/api -``` - -To use the official Kubernetes client `kubectl` with a proxy, set the `proxy-url` element -for the relevant `cluster` entry within your `~/.kube/config` file. For example: +To always use this setting on a specific `kubectl` context, specify the `proxy-url` attribute in the relevant +`cluster` entry within the `~/.kube/config` file. 
For example: ```yaml apiVersion: v1 @@ -106,7 +92,7 @@ clusters: - cluster: certificate-authority-data: LRMEMMW2 # shortened for readability server: https://:6443 # the "Kubernetes API" server, in other words the IP address of kubernetes-remote-server.example - proxy-url: socks5://localhost:1080 # the "SSH SOCKS5 proxy" in the diagram above (DNS resolution over socks is built-in) + proxy-url: socks5://localhost:1080 # the "SSH SOCKS5 proxy" in the diagram above name: default contexts: - context: @@ -123,7 +109,8 @@ users: client-key-data: LS0tLS1CRUdJT= # shortened for readability ``` -If the tunnel is operating and you use `kubectl` with a context that uses this cluster, you can interact with your cluster through that proxy. For example: +Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or +the `proxy-url` attribute, you can interact with your cluster through that proxy. For example: ```shell kubectl get pods @@ -134,6 +121,24 @@ NAMESPACE NAME READY STATUS RESTA kube-system coredns-85cb69466-klwq8 1/1 Running 0 5m46s ``` +{{< note >}} +- Before `kubectl` 1.24, most `kubectl` commands worked when using a socks proxy, except `kubectl exec`. +- `kubectl` supports both `HTTPS_PROXY` and `https_proxy` environment variables. These are used by other + programs that support SOCKS, such as `curl`. Therefore in some cases it + will be better to define the environment variable on the command line: + ```shell + HTTPS_PROXY=socks5://localhost:1080 kubectl get pods + ``` +- When using `proxy-url`, the proxy is used only for the relevant `kubectl` context, + whereas the environment variable will affect all contexts. +- The k8s API server hostname can be further protected from DNS leakage by using the `socks5h` protocol name + instead of the more commonly known `socks5` protocol shown above. In this case, `kubectl` will ask the proxy server + (such as an ssh bastion) to resolve the k8s API server domain name, instead of resolving it on the system running + `kubectl`. Note also that with `socks5h`, a k8s API server URL like `https://localhost:6443/api` does not refer + to your local client computer. Instead, it refers to `localhost` as known on the proxy server (eg the ssh bastion). +{{}} + + ## Clean up Stop the ssh port-forwarding process by pressing `CTRL+C` on the terminal where it is running. diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 40e7bf547b1..d02a20577e7 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -88,65 +88,5 @@ If you're using AMD GPU devices, you can deploy Node Labeller is a {{< glossary_tooltip text="controller" term_id="controller" >}} that automatically labels your nodes with GPU device properties. 
-At the moment, that controller can add labels for: - -* Device ID (-device-id) -* VRAM Size (-vram) -* Number of SIMD (-simd-count) -* Number of Compute Unit (-cu-count) -* Firmware and Feature Versions (-firmware) -* GPU Family, in two letters acronym (-family) - * SI - Southern Islands - * CI - Sea Islands - * KV - Kaveri - * VI - Volcanic Islands - * CZ - Carrizo - * AI - Arctic Islands - * RV - Raven - -```shell -kubectl describe node cluster-node-23 -``` - -``` -Name: cluster-node-23 -Roles: -Labels: beta.amd.com/gpu.cu-count.64=1 - beta.amd.com/gpu.device-id.6860=1 - beta.amd.com/gpu.family.AI=1 - beta.amd.com/gpu.simd-count.256=1 - beta.amd.com/gpu.vram.16G=1 - kubernetes.io/arch=amd64 - kubernetes.io/os=linux - kubernetes.io/hostname=cluster-node-23 -Annotations: node.alpha.kubernetes.io/ttl: 0 -… -``` - -With the Node Labeller in use, you can specify the GPU type in the Pod spec: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: cuda-vector-add -spec: - restartPolicy: OnFailure - containers: - - name: cuda-vector-add - # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile - image: "registry.k8s.io/cuda-vector-add:v0.1" - resources: - limits: - nvidia.com/gpu: 1 - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - – matchExpressions: - – key: beta.amd.com/gpu.family.AI # Arctic Islands GPU family - operator: Exist -``` - -This ensures that the Pod will be scheduled to a node that has the GPU type -you specified. +Similar functionality for NVIDIA is provided by +[GPU feature discovery](https://github.com/NVIDIA/gpu-feature-discovery/blob/main/README.md). diff --git a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md index 525c404ef74..f53eae1d882 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -120,7 +120,7 @@ metadata: ``` {{< note >}} -Each variable in the `.env` file becomes a separate key in the ConfigMap that you generate. This is different from the previous example which embeds a file named `.properties` (and all its entries) as the value for a single key. +Each variable in the `.env` file becomes a separate key in the ConfigMap that you generate. This is different from the previous example which embeds a file named `application.properties` (and all its entries) as the value for a single key. {{< /note >}} ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the `literals` list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a key-value pair: diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md index 99ab27ab5db..ed3eebe6270 100644 --- a/content/en/docs/tasks/run-application/configure-pdb.md +++ b/content/en/docs/tasks/run-application/configure-pdb.md @@ -50,7 +50,9 @@ specified by one of the built-in Kubernetes controllers: In this case, make a note of the controller's `.spec.selector`; the same selector goes into the PDBs `.spec.selector`. -From version 1.15 PDBs support custom controllers where the [scale subresource](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource) is enabled. 
+From version 1.15 PDBs support custom controllers where the +[scale subresource](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource) +is enabled. You can also use PDBs with pods which are not controlled by one of the above controllers, or arbitrary groups of pods, but there are some restrictions, @@ -74,7 +76,8 @@ due to a voluntary disruption. - Multiple-instance Stateful application such as Consul, ZooKeeper, or etcd: - Concern: Do not reduce number of instances below quorum, otherwise writes fail. - Possible Solution 1: set maxUnavailable to 1 (works with varying scale of application). - - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5). (Allows more disruptions at once). + - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5). + (Allows more disruptions at once). - Restartable Batch Job: - Concern: Job needs to complete in case of voluntary disruption. - Possible solution: Do not create a PDB. The Job controller will create a replacement pod. @@ -83,17 +86,20 @@ due to a voluntary disruption. Values for `minAvailable` or `maxUnavailable` can be expressed as integers or as a percentage. -- When you specify an integer, it represents a number of Pods. For instance, if you set `minAvailable` to 10, then 10 - Pods must always be available, even during a disruption. -- When you specify a percentage by setting the value to a string representation of a percentage (eg. `"50%"`), it represents a percentage of - total Pods. For instance, if you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available during a - disruption. +- When you specify an integer, it represents a number of Pods. For instance, if you set + `minAvailable` to 10, then 10 Pods must always be available, even during a disruption. +- When you specify a percentage by setting the value to a string representation of a + percentage (eg. `"50%"`), it represents a percentage of total Pods. For instance, if + you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available + during a disruption. -When you specify the value as a percentage, it may not map to an exact number of Pods. For example, if you have 7 Pods and -you set `minAvailable` to `"50%"`, it's not immediately obvious whether that means 3 Pods or 4 Pods must be available. -Kubernetes rounds up to the nearest integer, so in this case, 4 Pods must be available. When you specify the value -`maxUnavailable` as a percentage, Kubernetes rounds up the number of Pods that may be disrupted. Thereby a disruption -can exceed your defined `maxUnavailable` percentage. You can examine the +When you specify the value as a percentage, it may not map to an exact number of Pods. +For example, if you have 7 Pods and you set `minAvailable` to `"50%"`, it's not +immediately obvious whether that means 3 Pods or 4 Pods must be available. Kubernetes +rounds up to the nearest integer, so in this case, 4 Pods must be available. When you +specify the value `maxUnavailable` as a percentage, Kubernetes rounds up the number of +Pods that may be disrupted. Thereby a disruption can exceed your defined +`maxUnavailable` percentage. You can examine the [code](https://github.com/kubernetes/kubernetes/blob/23be9587a0f8677eb8091464098881df939c44a9/pkg/controller/disruption/disruption.go#L539) that controls this behavior. @@ -151,8 +157,8 @@ voluntary evictions, not all causes of unavailability. 
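As a worked illustration of the rounding rule just described, a PodDisruptionBudget that uses a percentage might be sketched as follows. The object name is arbitrary, and the `app: zookeeper` selector matches the sample budgets discussed below:

```shell
# With 7 matching Pods, minAvailable "50%" means 3.5 Pods; Kubernetes rounds
# up, so 4 Pods must remain available during voluntary disruptions.
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: "50%"
  selector:
    matchLabels:
      app: zookeeper
EOF
```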
If you set `maxUnavailable` to 0% or 0, or you set `minAvailable` to 100% or the number of replicas, you are requiring zero voluntary evictions. When you set zero voluntary evictions for a workload object such as ReplicaSet, then you cannot successfully drain a Node running one of those Pods. -If you try to drain a Node where an unevictable Pod is running, the drain never completes. This is permitted as per the -semantics of `PodDisruptionBudget`. +If you try to drain a Node where an unevictable Pod is running, the drain never completes. +This is permitted as per the semantics of `PodDisruptionBudget`. You can find examples of pod disruption budgets defined below. They match pods with the label `app: zookeeper`. @@ -229,7 +235,8 @@ status: ### Healthiness of a Pod -The current implementation considers healthy pods, as pods that have `.status.conditions` item with `type="Ready"` and `status="True"`. +The current implementation considers healthy pods, as pods that have `.status.conditions` +item with `type="Ready"` and `status="True"`. These pods are tracked via `.status.currentHealthy` field in the PDB status. ## Unhealthy Pod Eviction Policy @@ -251,22 +258,26 @@ to the `IfHealthyBudget` policy. Policies: `IfHealthyBudget` -: Running pods (`.status.phase="Running"`), but not yet healthy can be evicted only if the guarded application is not -disrupted (`.status.currentHealthy` is at least equal to `.status.desiredHealthy`). +: Running pods (`.status.phase="Running"`), but not yet healthy can be evicted only + if the guarded application is not disrupted (`.status.currentHealthy` is at least + equal to `.status.desiredHealthy`). -: This policy ensures that running pods of an already disrupted application have the best chance to become healthy. -This has negative implications for draining nodes, which can be blocked by misbehaving applications that are guarded by a PDB. -More specifically applications with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration), -or pods that are just failing to report the `Ready` condition. +: This policy ensures that running pods of an already disrupted application have + the best chance to become healthy. This has negative implications for draining + nodes, which can be blocked by misbehaving applications that are guarded by a PDB. + More specifically applications with pods in `CrashLoopBackOff` state + (due to a bug or misconfiguration), or pods that are just failing to report the + `Ready` condition. `AlwaysAllow` -: Running pods (`.status.phase="Running"`), but not yet healthy are considered disrupted and can be evicted -regardless of whether the criteria in a PDB is met. +: Running pods (`.status.phase="Running"`), but not yet healthy are considered + disrupted and can be evicted regardless of whether the criteria in a PDB is met. -: This means prospective running pods of a disrupted application might not get a chance to become healthy. -By using this policy, cluster managers can easily evict misbehaving applications that are guarded by a PDB. -More specifically applications with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration), -or pods that are just failing to report the `Ready` condition. +: This means prospective running pods of a disrupted application might not get a + chance to become healthy. By using this policy, cluster managers can easily evict + misbehaving applications that are guarded by a PDB. 
More specifically applications + with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration), or pods + that are just failing to report the `Ready` condition. {{< note >}} Pods in `Pending`, `Succeeded` or `Failed` phase are always considered for eviction. diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md index 41e6ddd9702..e8cd5c3398d 100644 --- a/content/en/docs/tasks/run-application/delete-stateful-set.md +++ b/content/en/docs/tasks/run-application/delete-stateful-set.md @@ -22,7 +22,8 @@ This task shows you how to delete a {{< glossary_tooltip term_id="StatefulSet" > ## Deleting a StatefulSet -You can delete a StatefulSet in the same way you delete other resources in Kubernetes: use the `kubectl delete` command, and specify the StatefulSet either by file or by name. +You can delete a StatefulSet in the same way you delete other resources in Kubernetes: +use the `kubectl delete` command, and specify the StatefulSet either by file or by name. ```shell kubectl delete -f @@ -38,14 +39,17 @@ You may need to delete the associated headless service separately after the Stat kubectl delete service ``` -When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=orphan`. -For example: +When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. +All Pods that are part of this workload are also deleted. If you want to delete +only the StatefulSet and not the Pods, use `--cascade=orphan`. For example: ```shell kubectl delete -f --cascade=orphan ``` -By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows: +By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet +are left behind even after the StatefulSet object itself is deleted. If the pods have +a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows: ```shell kubectl delete pods -l app.kubernetes.io/name=MyApp @@ -53,7 +57,12 @@ kubectl delete pods -l app.kubernetes.io/name=MyApp ### Persistent Volumes -Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods have terminated might trigger deletion of the backing Persistent Volumes depending on the storage class and reclaim policy. You should never assume ability to access a volume after claim deletion. +Deleting the Pods in a StatefulSet will not delete the associated volumes. +This is to ensure that you have the chance to copy data off the volume before +deleting it. Deleting the PVC after the pods have terminated might trigger +deletion of the backing Persistent Volumes depending on the storage class +and reclaim policy. You should never assume ability to access a volume +after claim deletion. {{< note >}} Use caution when deleting a PVC, as it may lead to data loss. @@ -61,7 +70,8 @@ Use caution when deleting a PVC, as it may lead to data loss. 
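Before you delete any claims, it can be worth checking what will happen to the backing volumes. A minimal sketch, reusing the example label from this page (replace the label and the `<volume-name>` placeholder with your own values):

```shell
# Map each claim to its bound volume and storage class.
kubectl get pvc -l app.kubernetes.io/name=MyApp \
  -o custom-columns='CLAIM:.metadata.name,VOLUME:.spec.volumeName,CLASS:.spec.storageClassName'

# Inspect the reclaim policy of one of those volumes; "Delete" means the
# backing storage is removed once the claim is deleted.
kubectl get pv <volume-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}{"\n"}'
```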
### Complete deletion of a StatefulSet -To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following: +To delete everything in a StatefulSet, including the associated pods, +you can run a series of commands similar to the following: ```shell grace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}') @@ -71,11 +81,17 @@ kubectl delete pvc -l app.kubernetes.io/name=MyApp ``` -In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; substitute your own label as appropriate. +In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; +substitute your own label as appropriate. ### Force deletion of StatefulSet pods -If you find that some pods in your StatefulSet are stuck in the 'Terminating' or 'Unknown' states for an extended period of time, you may need to manually intervene to forcefully delete the pods from the apiserver. This is a potentially dangerous task. Refer to [Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/) for details. +If you find that some pods in your StatefulSet are stuck in the 'Terminating' +or 'Unknown' states for an extended period of time, you may need to manually +intervene to forcefully delete the pods from the apiserver. +This is a potentially dangerous task. Refer to +[Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/) +for details. ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/tasks/run-application/scale-stateful-set.md b/content/en/docs/tasks/run-application/scale-stateful-set.md index 51ae43ccfdd..025eb47a440 100644 --- a/content/en/docs/tasks/run-application/scale-stateful-set.md +++ b/content/en/docs/tasks/run-application/scale-stateful-set.md @@ -14,14 +14,17 @@ weight: 50 -This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to increasing or decreasing the number of replicas. +This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to +increasing or decreasing the number of replicas. ## {{% heading "prerequisites" %}} - StatefulSets are only available in Kubernetes version 1.5 or later. To check your version of Kubernetes, run `kubectl version`. -- Not all stateful applications scale nicely. If you are unsure about whether to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/) or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information. +- Not all stateful applications scale nicely. If you are unsure about whether + to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/) + or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information. - You should perform scaling only when you are confident that your stateful application cluster is completely healthy. @@ -46,7 +49,9 @@ kubectl scale statefulsets --replicas= ### Make in-place updates on your StatefulSets -Alternatively, you can do [in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources) on your StatefulSets. +Alternatively, you can do +[in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources) +on your StatefulSets. 
If your StatefulSet was initially created with `kubectl apply`, update `.spec.replicas` of the StatefulSet manifests, and then do a `kubectl apply`: @@ -71,10 +76,12 @@ kubectl patch statefulsets -p '{"spec":{"replicas": 1, Kubernetes cannot determine the reason for an unhealthy Pod. It might be the result of a permanent fault or of a transient fault. A transient fault can be caused by a restart required by upgrading or maintenance. +If spec.replicas > 1, Kubernetes cannot determine the reason for an unhealthy Pod. +It might be the result of a permanent fault or of a transient fault. A transient +fault can be caused by a restart required by upgrading or maintenance. If the Pod is unhealthy due to a permanent fault, scaling without correcting the fault may lead to a state where the StatefulSet membership diff --git a/content/en/docs/tutorials/security/cluster-level-pss.md b/content/en/docs/tutorials/security/cluster-level-pss.md index 07273c3be8e..a892f366d63 100644 --- a/content/en/docs/tutorials/security/cluster-level-pss.md +++ b/content/en/docs/tutorials/security/cluster-level-pss.md @@ -30,6 +30,11 @@ Install the following on your workstation: - [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) - [kubectl](/docs/tasks/tools/) +This tutorial demonstrates what you can configure for a Kubernetes cluster that you fully +control. If you are learning how to configure Pod Security Admission for a managed cluster +where you are not able to configure the control plane, read +[Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss). + ## Choose the right Pod Security Standard to apply [Pod Security Admission](/docs/concepts/security/pod-security-admission/) @@ -42,22 +47,22 @@ that are most appropriate for your configuration, do the following: 1. Create a cluster with no Pod Security Standards applied: ```shell - kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.24.0 + kind create cluster --name psa-wo-cluster-pss ``` - The output is similar to this: + The output is similar to: ``` Creating cluster "psa-wo-cluster-pss" ... - ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 - ✓ Preparing nodes 📦 + ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼 + ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ ✓ Installing CNI 🔌 ✓ Installing StorageClass 💾 Set kubectl context to "kind-psa-wo-cluster-pss" You can now use your cluster with: - + kubectl cluster-info --context kind-psa-wo-cluster-pss - + Thanks for using kind! 😊 ``` @@ -72,7 +77,7 @@ that are most appropriate for your configuration, do the following: Kubernetes control plane is running at https://127.0.0.1:61350 CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy - + To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. 
``` @@ -82,7 +87,7 @@ that are most appropriate for your configuration, do the following: kubectl get ns ``` The output is similar to this: - ``` + ``` NAME STATUS AGE default Active 9m30s kube-node-lease Active 9m32s @@ -99,8 +104,9 @@ that are most appropriate for your configuration, do the following: kubectl label --dry-run=server --overwrite ns --all \ pod-security.kubernetes.io/enforce=privileged ``` - The output is similar to this: - ``` + + The output is similar to: + ``` namespace/default labeled namespace/kube-node-lease labeled namespace/kube-public labeled @@ -108,12 +114,13 @@ that are most appropriate for your configuration, do the following: namespace/local-path-storage labeled ``` 2. Baseline - ```shell + ```shell kubectl label --dry-run=server --overwrite ns --all \ pod-security.kubernetes.io/enforce=baseline ``` - The output is similar to this: - ``` + + The output is similar to: + ``` namespace/default labeled namespace/kube-node-lease labeled namespace/kube-public labeled @@ -123,15 +130,16 @@ that are most appropriate for your configuration, do the following: Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged namespace/kube-system labeled namespace/local-path-storage labeled - ``` + ``` 3. Restricted ```shell kubectl label --dry-run=server --overwrite ns --all \ pod-security.kubernetes.io/enforce=restricted ``` - The output is similar to this: - ``` + + The output is similar to: + ``` namespace/default labeled namespace/kube-node-lease labeled namespace/kube-public labeled @@ -180,7 +188,7 @@ following: ``` mkdir -p /tmp/pss - cat < /tmp/pss/cluster-level-pss.yaml + cat < /tmp/pss/cluster-level-pss.yaml apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration plugins: @@ -212,7 +220,7 @@ following: 1. Configure the API server to consume this file during cluster creation: ``` - cat < /tmp/pss/cluster-config.yaml + cat < /tmp/pss/cluster-config.yaml kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: @@ -255,22 +263,22 @@ following: these Pod Security Standards: ```shell - kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml + kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml ``` The output is similar to this: ``` Creating cluster "psa-with-cluster-pss" ... - ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 - ✓ Preparing nodes 📦 - ✓ Writing configuration 📜 - ✓ Starting control-plane 🕹️ - ✓ Installing CNI 🔌 - ✓ Installing StorageClass 💾 + ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼 + ✓ Preparing nodes 📦 + ✓ Writing configuration 📜 + ✓ Starting control-plane 🕹️ + ✓ Installing CNI 🔌 + ✓ Installing StorageClass 💾 Set kubectl context to "kind-psa-with-cluster-pss" You can now use your cluster with: - + kubectl cluster-info --context kind-psa-with-cluster-pss - + Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂 ``` @@ -281,36 +289,21 @@ following: The output is similar to this: ``` Kubernetes control plane is running at https://127.0.0.1:63855 - CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy - + To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. ``` -1. 
Create the following Pod specification for a minimal configuration in the default namespace: - ``` - cat < /tmp/pss/nginx-pod.yaml - apiVersion: v1 - kind: Pod - metadata: - name: nginx - spec: - containers: - - image: nginx - name: nginx - ports: - - containerPort: 80 - EOF - ``` -1. Create the Pod in the cluster: +1. Create a Pod in the default namespace: ```shell - kubectl apply -f /tmp/pss/nginx-pod.yaml + kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml ``` - The output is similar to this: + + The pod is started normally, but the output includes a warning: ``` - Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") - pod/nginx created + Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") + pod/nginx created ``` ## Clean up diff --git a/content/en/docs/tutorials/security/ns-level-pss.md b/content/en/docs/tutorials/security/ns-level-pss.md index 64aaf64832a..cff35050e1f 100644 --- a/content/en/docs/tutorials/security/ns-level-pss.md +++ b/content/en/docs/tutorials/security/ns-level-pss.md @@ -31,14 +31,14 @@ Install the following on your workstation: 1. Create a `KinD` cluster as follows: ```shell - kind create cluster --name psa-ns-level --image kindest/node:v1.23.0 + kind create cluster --name psa-ns-level ``` The output is similar to this: ``` Creating cluster "psa-ns-level" ... - ✓ Ensuring node image (kindest/node:v1.23.0) 🖼 + ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ @@ -80,11 +80,12 @@ The output is similar to this: namespace/example created ``` -## Apply Pod Security Standards +## Enable Pod Security Standards checking for that namespace 1. Enable Pod Security Standards on this namespace using labels supported by - built-in Pod Security Admission. In this step we will warn on baseline pod - security standard as per the latest version (default value) + built-in Pod Security Admission. In this step you will configure a check to + warn on Pods that don't meet the latest version of the _baseline_ pod + security standard. ```shell kubectl label --overwrite ns example \ @@ -92,8 +93,8 @@ namespace/example created pod-security.kubernetes.io/warn-version=latest ``` -2. Multiple pod security standards can be enabled on any namespace, using labels. - Following command will `enforce` the `baseline` Pod Security Standard, but +2. You can configure multiple pod security standard checks on any namespace, using labels. 
+ The following command will `enforce` the `baseline` Pod Security Standard, but `warn` and `audit` for `restricted` Pod Security Standards as per the latest version (default value) @@ -107,41 +108,24 @@ namespace/example created pod-security.kubernetes.io/audit-version=latest ``` -## Verify the Pod Security Standards +## Verify the Pod Security Standard enforcement -1. Create a minimal pod in `example` namespace: +1. Create a baseline Pod in the `example` namespace: ```shell - cat < /tmp/pss/nginx-pod.yaml - apiVersion: v1 - kind: Pod - metadata: - name: nginx - spec: - containers: - - image: nginx - name: nginx - ports: - - containerPort: 80 - EOF + kubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml ``` - -1. Apply the pod spec to the cluster in `example` namespace: - - ```shell - kubectl apply -n example -f /tmp/pss/nginx-pod.yaml - ``` - The output is similar to this: + The Pod does start OK; the output includes a warning. For example: ``` Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") pod/nginx created ``` -1. Apply the pod spec to the cluster in `default` namespace: +1. Create a baseline Pod in the `default` namespace: ```shell - kubectl apply -n default -f /tmp/pss/nginx-pod.yaml + kubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml ``` Output is similar to this: @@ -149,9 +133,9 @@ namespace/example created pod/nginx created ``` -The Pod Security Standards were applied only to the `example` -namespace. You could create the same Pod in the `default` namespace -with no warnings. +The Pod Security Standards enforcement and warning settings were applied only +to the `example` namespace. You could create the same Pod in the `default` +namespace with no warnings. ## Clean up diff --git a/content/en/docs/tutorials/services/pods-and-endpoint-termination-flow.md b/content/en/docs/tutorials/services/pods-and-endpoint-termination-flow.md new file mode 100644 index 00000000000..3ff7e2d987d --- /dev/null +++ b/content/en/docs/tutorials/services/pods-and-endpoint-termination-flow.md @@ -0,0 +1,221 @@ +--- +title: Explore Termination Behavior for Pods And Their Endpoints +content_type: tutorial +weight: 60 +--- + + + + +Once you connected your Application with Service following steps +like those outlined in [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/), +you have a continuously running, replicated application, that is exposed on a network. +This tutorial helps you look at the termination flow for Pods and to explore ways to implement +graceful connection draining. + + + +## Termination process for Pods and their endpoints + +There are often cases when you need to terminate a Pod - be it for upgrade or scale down. +In order to improve application availability, it may be important to implement +a proper active connections draining. This tutorial explains the flow of +Pod termination in connection with the corresponding endpoint state and removal. 
+ +This tutorial explains the flow of Pod termination in connection with the +corresponding endpoint state and removal by using +a simple nginx web server to demonstrate the concept. + + + +## Example flow with endpoint termination + +The following is the example of the flow described in the +[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) +document. + +Let's say you have a Deployment containing of a single `nginx` replica +(just for demonstration purposes) and a Service: + +{{< codenew file="service/pod-with-graceful-termination.yaml" >}} + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 1 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + terminationGracePeriodSeconds: 120 # extra long grace period + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 + lifecycle: + preStop: + exec: + # Real life termination may take any time up to terminationGracePeriodSeconds. + # In this example - just hang around for at least the duration of terminationGracePeriodSeconds, + # at 120 seconds container will be forcibly terminated. + # Note, all this time nginx will keep processing requests. + command: [ + "/bin/sh", "-c", "sleep 180" + ] + +--- + +apiVersion: v1 +kind: Service +metadata: + name: nginx-service +spec: + selector: + app: nginx + ports: + - protocol: TCP + port: 80 + targetPort: 80 +``` + +Once the Pod and Service are running, you can get the name of any associated EndpointSlices: + +```shell +kubectl get endpointslice +``` + +The output is similar to this: + +```none +NAME ADDRESSTYPE PORTS ENDPOINTS AGE +nginx-service-6tjbr IPv4 80 10.12.1.199,10.12.1.201 22m +``` + +You can see its status, and validate that there is one endpoint registered: + +```shell +kubectl get endpointslices -o json -l kubernetes.io/service-name=nginx-service +``` + +The output is similar to this: + +```none +{ + "addressType": "IPv4", + "apiVersion": "discovery.k8s.io/v1", + "endpoints": [ + { + "addresses": [ + "10.12.1.201" + ], + "conditions": { + "ready": true, + "serving": true, + "terminating": false +``` + +Now let's terminate the Pod and validate that the Pod is being terminated +respecting the graceful termination period configuration: + +```shell +kubectl delete pod nginx-deployment-7768647bf9-b4b9s +``` + +All pods: + +```shell +kubectl get pods +``` + +The output is similar to this: + +```none +NAME READY STATUS RESTARTS AGE +nginx-deployment-7768647bf9-b4b9s 1/1 Terminating 0 4m1s +nginx-deployment-7768647bf9-rkxlw 1/1 Running 0 8s +``` + +You can see that the new pod got scheduled. 
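As a quick way to see that graceful draining is actually happening, you can send a request to the Service from inside the cluster while the old Pod is still `Terminating`. This is only an optional spot check; the helper Pod name and the `busybox` image are arbitrary choices:

```shell
# While the old Pod is Terminating, in-cluster requests to the Service should
# still succeed, because the preStop hook keeps nginx serving for the whole
# grace period.
kubectl run termination-check --image=busybox --restart=Never --rm -it -- \
  wget -qO- http://nginx-service
```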
+ +While the new endpoint is being created for the new Pod, the old endpoint is +still around in the terminating state: + +```shell +kubectl get endpointslice -o json nginx-service-6tjbr +``` + +The output is similar to this: + +```none +{ + "addressType": "IPv4", + "apiVersion": "discovery.k8s.io/v1", + "endpoints": [ + { + "addresses": [ + "10.12.1.201" + ], + "conditions": { + "ready": false, + "serving": true, + "terminating": true + }, + "nodeName": "gke-main-default-pool-dca1511c-d17b", + "targetRef": { + "kind": "Pod", + "name": "nginx-deployment-7768647bf9-b4b9s", + "namespace": "default", + "uid": "66fa831c-7eb2-407f-bd2c-f96dfe841478" + }, + "zone": "us-central1-c" + }, + { + "addresses": [ + "10.12.1.202" + ], + "conditions": { + "ready": true, + "serving": true, + "terminating": false + }, + "nodeName": "gke-main-default-pool-dca1511c-d17b", + "targetRef": { + "kind": "Pod", + "name": "nginx-deployment-7768647bf9-rkxlw", + "namespace": "default", + "uid": "722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35" + }, + "zone": "us-central1-c" +``` + +This allows applications to communicate their state during termination +and clients (such as load balancers) to implement a connections draining functionality. +These clients may detect terminating endpoints and implement a special logic for them. + +In Kubernetes, endpoints that are terminating always have their `ready` status set as as `false`. +This needs to happen for backward +compatibility, so existing load balancers will not use it for regular traffic. +If traffic draining on terminating pod is needed, the actual readiness can be +checked as a condition `serving`. + +When Pod is deleted, the old endpoint will also be deleted. + + +## {{% heading "whatsnext" %}} + + +* Learn how to [Connect Applications with Services](/docs/tutorials/services/connect-applications-service/) +* Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) +* Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/) +* Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/) + diff --git a/content/en/examples/security/example-baseline-pod.yaml b/content/en/examples/security/example-baseline-pod.yaml new file mode 100644 index 00000000000..eca57ea4de8 --- /dev/null +++ b/content/en/examples/security/example-baseline-pod.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + containers: + - image: nginx + name: nginx + ports: + - containerPort: 80 diff --git a/content/en/examples/security/kind-with-cluster-level-baseline-pod-security.sh b/content/en/examples/security/kind-with-cluster-level-baseline-pod-security.sh index 2fbd0dfe81e..76e092807fd 100644 --- a/content/en/examples/security/kind-with-cluster-level-baseline-pod-security.sh +++ b/content/en/examples/security/kind-with-cluster-level-baseline-pod-security.sh @@ -51,11 +51,12 @@ nodes: # default None propagation: None EOF -kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.23.0 --config /tmp/pss/cluster-config.yaml +kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml kubectl cluster-info --context kind-psa-with-cluster-pss + # Wait for 15 seconds (arbitrary) ServiceAccount Admission Controller to be available sleep 15 -cat < /tmp/pss/nginx-pod.yaml +cat </dev/null && bash -c 'read -p "Press 
any key to continue... " -n1 -s' ) || \ + ( printf "Press Enter to continue... " && read ) 1>&2 + +# Clean up +printf "\n\nCleaning up:\n" 1>&2 +set -e +kubectl delete pod --all -n example --now +kubectl delete ns example +kind delete cluster --name psa-with-cluster-pss +rm -f /tmp/pss/cluster-config.yaml diff --git a/content/en/examples/security/kind-with-namespace-level-baseline-pod-security.sh b/content/en/examples/security/kind-with-namespace-level-baseline-pod-security.sh index 2081de7c14c..637e23df514 100644 --- a/content/en/examples/security/kind-with-namespace-level-baseline-pod-security.sh +++ b/content/en/examples/security/kind-with-namespace-level-baseline-pod-security.sh @@ -1,11 +1,11 @@ #!/bin/sh -# Until v1.23 is released, kind node image needs to be built from k/k master branch -# Ref: https://kind.sigs.k8s.io/docs/user/quick-start/#building-images -kind create cluster --name psa-ns-level --image kindest/node:v1.23.0 +kind create cluster --name psa-ns-level kubectl cluster-info --context kind-psa-ns-level -# Wait for 15 seconds (arbitrary) ServiceAccount Admission Controller to be available +# Wait for 15 seconds (arbitrary) for ServiceAccount Admission Controller to be available sleep 15 -kubectl create ns example + +# Create and label the namespace +kubectl create ns example || exit 1 # if namespace exists, don't do the next steps kubectl label --overwrite ns example \ pod-security.kubernetes.io/enforce=baseline \ pod-security.kubernetes.io/enforce-version=latest \ @@ -13,7 +13,9 @@ kubectl label --overwrite ns example \ pod-security.kubernetes.io/warn-version=latest \ pod-security.kubernetes.io/audit=restricted \ pod-security.kubernetes.io/audit-version=latest -cat < /tmp/pss/nginx-pod.yaml + +# Try running a Pod +cat </dev/null && bash -c 'read -p "Press any key to continue... " -n1 -s' ) || \ + ( printf "Press Enter to continue... " && read ) 1>&2 + +# Clean up +printf "\n\nCleaning up:\n" 1>&2 +set -e +kubectl delete pod --all -n example --now +kubectl delete ns example +kind delete cluster --name psa-ns-level diff --git a/content/en/examples/service/pod-with-graceful-termination.yaml b/content/en/examples/service/pod-with-graceful-termination.yaml new file mode 100644 index 00000000000..4a39d2d368a --- /dev/null +++ b/content/en/examples/service/pod-with-graceful-termination.yaml @@ -0,0 +1,32 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 1 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + terminationGracePeriodSeconds: 120 # extra long grace period + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 + lifecycle: + preStop: + exec: + # Real life termination may take any time up to terminationGracePeriodSeconds. + # In this example - just hang around for at least the duration of terminationGracePeriodSeconds, + # at 120 seconds container will be forcibly terminated. + # Note, all this time nginx will keep processing requests. + command: [ + "/bin/sh", "-c", "sleep 180" + ] diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index dc80e19baf2..1347042d686 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,9 +78,9 @@ releases may also occur in between these. 
| Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| February 2023 | 2023-02-10 | 2023-02-15 | -| March 2023 | 2023-03-10 | 2023-03-15 | | April 2023 | 2023-04-07 | 2023-04-12 | +| May 2023 | 2023-05-12 | 2023-05-17 | +| June 2023 | 2023-06-09 | 2023-06-14 | ## Detailed Release History for Active Branches diff --git a/content/es/docs/concepts/configuration/manage-resources-containers.md b/content/es/docs/concepts/configuration/manage-resources-containers.md index 4cdc6c4a26a..5b27b6ca7cd 100644 --- a/content/es/docs/concepts/configuration/manage-resources-containers.md +++ b/content/es/docs/concepts/configuration/manage-resources-containers.md @@ -722,7 +722,7 @@ Conditions: Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f - Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine + Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "registry.k8s.io/pause:0.8.0" already present on machine Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a diff --git a/content/es/docs/concepts/configuration/secret.md b/content/es/docs/concepts/configuration/secret.md index 1025ebd7851..42c63f81b66 100644 --- a/content/es/docs/concepts/configuration/secret.md +++ b/content/es/docs/concepts/configuration/secret.md @@ -840,7 +840,7 @@ spec: secretName: dotfile-secret containers: - name: dotfile-test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: - ls - "-l" diff --git a/content/es/docs/concepts/overview/working-with-objects/finalizers.md b/content/es/docs/concepts/overview/working-with-objects/finalizers.md new file mode 100644 index 00000000000..95a68f7852b --- /dev/null +++ b/content/es/docs/concepts/overview/working-with-objects/finalizers.md @@ -0,0 +1,87 @@ +--- +title: Finalizadores +content_type: concept +weight: 80 +--- + + + +{{}} + +Puedes usar finalizadores para controlar {{}} +de los recursos alertando a los controladores para que ejecuten tareas de limpieza especificas antes de eliminar el recurso. + +Los finalizadores usualmente no especifican codigo a ejecutar, sino que son generalmente listas de parametros referidos a +un recurso especifico, similares a las anotaciones. Kubernetes especifica algunos finalizadores automaticamente, +pero podrías especificar tus propios. + +## Cómo funcionan los finalizadores + +Cuando creas un recurso utilizando un archivo de manifiesto, puedes especificar +finalizadores mediante el campo `metadata.finalizers`. 
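As a small illustration of that field, the following sketch creates a throwaway object carrying a made-up finalizer so you can watch the deletion behaviour described next. The finalizer name `example.com/finalizer-demo` and the ConfigMap name are hypothetical, and you should only clear finalizers by hand on demo objects like this one:

```shell
# Create an object with a (hypothetical) custom finalizer. Asking Kubernetes to
# delete it will only set metadata.deletionTimestamp; the object stays around
# until metadata.finalizers is emptied.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: finalizer-demo
  finalizers:
    - example.com/finalizer-demo
EOF

# Deletion is requested, but it blocks on the finalizer.
kubectl delete configmap finalizer-demo --wait=false

# For this demo object only, clear the finalizer so the deletion can complete.
kubectl patch configmap finalizer-demo --type=merge -p '{"metadata":{"finalizers":[]}}'
```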
Cuando intentas eliminar el +recurso, el servidor API que maneja el pedido de eliminación ve los valores en el +campo `finalizadores` y hace lo siguiente: + + * Modifica el objecto para agregar un campo `metadata.deletionTimestamp` con + el momento en que comenzaste la eliminación. + * Previene que el objeto sea eliminado hasta que su campo `metadata.finalizers` + este vacío. + * Retorna un codigo de estado `202` (HTTP "Aceptado") + +El controlador que meneja ese finalizador recibe la actualización del objecto +configurando el campo `metadata.deletionTimestamp`, indicando que la eliminación +del objeto ha sido solicitada. +El controlador luego intenta satisfacer los requerimientos de los finalizadores +especificados para ese recurso. Cada vez que una condición del finalizador es +satisfecha, el controlador remueve ese parametro del campo `finalizadores`. Cuando +el campo `finalizadores` esta vacío, un objeto con un campo `deletionTimestamp` +configurado es automaticamente borrado. Puedes tambien utilizar finalizadores para +prevenir el borrado de recursos no manejados. + +Un ejemplo usual de un finalizador es `kubernetes.io/pv-protection`, el cual +previene el borrado accidental de objetos `PersistentVolume`. Cuando un objeto +`PersistentVolume` está en uso por un Pod, Kubernetes agrega el finalizador +`pv-protection`. Si intentas elimiar el `PersistentVolume`, este pasa a un estado +`Terminating`, pero el controlador no puede eliminarlo ya que existe el finalizador. +Cuando el Pod deja de utilizar el `PersistentVolume`, Kubernetes borra el finalizador +`pv-protection` y el controlador borra el volumen. + +## Referencias de dueño, etiquetas y finalizadores (#owners-labels-finalizers) + +Al igual que las {{}}, las +[referencias de dueño](/docs/concepts/overview/working-with-objects/owners-dependents/) +describen las relaciones entre objetos en Kubernetes, pero son utilizadas para un +propósito diferente. Cuando un +{{}} maneja objetos como +Pods, utiliza etiquetas para identificar cambios a grupos de objetos relacionados. +Por ejemplo, cuando un {{}} crea uno +o más Pods, el controlador del Job agrega etiquetas a esos pods para identificar cambios +a cualquier Pod en el cluster con la misma etiqueta. + +El controlador del Job tambien agrega *referencias de dueño* a esos Pods, referidas +al Job que creo a los Pods. Si borras el Job mientras estos Pods estan corriendo, +Kubernetes utiliza las referencias de dueño (no las etiquetas) para determinar +cuáles Pods en el cluster deberían ser borrados. + +Kubernetes también procesa finalizadores cuando identifica referencias de dueño en +un recurso que ha sido marcado para eliminación. + +En algunas situaciones, los finalizadores pueden bloquear el borrado de objetos +dependientes, causando que el objeto inicial a borrar permanezca más de lo +esperado sin ser completamente eliminado. En esas situaciones, deberías chequear +finalizadores y referencias de dueños en los objetos y sus dependencias para +intentar solucionarlo. + +{{}} +En casos donde los objetos queden bloqueados en un estado de eliminación, evita +borrarlos manualmente para que el proceso continue. Los finalizadores usualmente +son agregados a los recursos por una razón, por lo cual eliminarlos forzosamente +puede causar problemas en tu cluster. Borrados manuales sólo deberían ejecutados +cuando el propósito del finalizador es entendido y satisfecho de alguna otra manera (por +ejemplo, borrando manualmente un objeto dependiente). 
+{{}} + +## {{% heading "whatsnext" %}} + +* Lea [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/) + en el blog de Kubernetes. \ No newline at end of file diff --git a/content/es/docs/concepts/overview/working-with-objects/labels.md b/content/es/docs/concepts/overview/working-with-objects/labels.md index 18815c01a4e..7420aac5c6d 100644 --- a/content/es/docs/concepts/overview/working-with-objects/labels.md +++ b/content/es/docs/concepts/overview/working-with-objects/labels.md @@ -95,7 +95,7 @@ metadata: spec: containers: - name: cuda-test - image: "k8s.gcr.io/cuda-vector-add:v0.1" + image: "registry.k8s.io/cuda-vector-add:v0.1" resources: limits: nvidia.com/gpu: 1 diff --git a/content/es/docs/concepts/storage/volumes.md b/content/es/docs/concepts/storage/volumes.md index ff079512bd4..ac462bec7b1 100644 --- a/content/es/docs/concepts/storage/volumes.md +++ b/content/es/docs/concepts/storage/volumes.md @@ -72,7 +72,7 @@ metadata: name: test-ebs spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-ebs @@ -160,7 +160,7 @@ metadata: name: test-cinder spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-cinder-container volumeMounts: - mountPath: /test-cinder @@ -271,7 +271,7 @@ metadata: name: test-pd spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /cache @@ -349,7 +349,7 @@ metadata: name: test-pd spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-pd @@ -496,7 +496,7 @@ metadata: name: test-pd spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-pd @@ -526,7 +526,7 @@ metadata: spec: containers: - name: test-webserver - image: k8s.gcr.io/test-webserver:latest + image: registry.k8s.io/test-webserver:latest volumeMounts: - mountPath: /var/local/aaa name: mydir @@ -657,7 +657,7 @@ metadata: name: test-portworx-volume-pod spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /mnt @@ -847,7 +847,7 @@ metadata: name: pod-0 spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: pod-0 volumeMounts: - mountPath: /test-pd @@ -976,7 +976,7 @@ metadata: name: test-vmdk spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-vmdk diff --git a/content/es/docs/concepts/workloads/controllers/statefulset.md b/content/es/docs/concepts/workloads/controllers/statefulset.md index 4ff09f2148c..95e86a7a3f6 100644 --- a/content/es/docs/concepts/workloads/controllers/statefulset.md +++ b/content/es/docs/concepts/workloads/controllers/statefulset.md @@ -85,7 +85,7 @@ spec: terminationGracePeriodSeconds: 10 containers: - name: nginx - image: k8s.gcr.io/nginx-slim:0.8 + image: registry.k8s.io/nginx-slim:0.8 ports: - containerPort: 80 name: web diff --git a/content/es/docs/reference/glossary/finalizer.md b/content/es/docs/reference/glossary/finalizer.md new file mode 100644 index 00000000000..af8367da7b6 --- /dev/null +++ b/content/es/docs/reference/glossary/finalizer.md @@ -0,0 +1,35 @@ +--- +title: 
Finalizador +id: finalizer +date: 2021-07-07 +full_link: /docs/concepts/overview/working-with-objects/finalizers/ +short_description: > + Un atributo de un namespace que dicta a Kubernetes a esperar hasta que condiciones + especificas son satisfechas antes que pueda borrar un objeto marcado para eliminacion. +aka: +tags: +- fundamental +--- +Los finalizadores son atributos de un namespace que instruyen a Kubernetes a +esperar a que ciertas condiciones sean satisfechas antes que pueda borrar +definitivamente un objeto que ha sido marcado para eliminarse. +Los finalizadores alertan a los {{}} +para borrar recursos que poseian esos objetos eliminados. + + + +Cuando instruyes a Kubernetes a borrar un objeto que tiene finalizadores +especificados, la API de Kubernetes marca ese objeto para eliminacion +configurando el campo `metadata.deletionTimestamp`, y retorna un codigo de +estado `202` (HTTP "Aceptado"). +El objeto a borrar permanece en un estado +de terminacion mientras el plano de contol, u otros componentes, ejecutan +las acciones definidas en los finalizadores. +Luego de que esas acciones son completadas, el controlador borra los +finalizadores relevantes del objeto. Cuando el campo `metadata.finalizers` +esta vacio, Kubernetes considera el proceso de eliminacion completo y borra +el objeto. + +Puedes utilizar finalizadores para controlar {{}} +de recursos. Por ejemplo, puedes definir un finalizador para borrar recursos +relacionados o infraestructura antes que el controlador elimine el objeto. \ No newline at end of file diff --git a/content/es/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/es/docs/tasks/configure-pod-container/configure-volume-storage.md index c4f08f29690..f5f8b17d97a 100644 --- a/content/es/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/es/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -41,7 +41,7 @@ En este ejercicio crearás un Pod que ejecuta un único Contenedor. Este Pod tie La salida debería ser similar a: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s ``` @@ -69,7 +69,7 @@ En este ejercicio crearás un Pod que ejecuta un único Contenedor. Este Pod tie La salida debería ser similar a: - ```shell + ```console USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash @@ -86,7 +86,7 @@ En este ejercicio crearás un Pod que ejecuta un único Contenedor. Este Pod tie 1. En el terminal original, observa los cambios en el Pod de Redis. Eventualmente verás algo como lo siguiente: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s redis 0/1 Completed 0 6m diff --git a/content/es/docs/tasks/tools/included/install-kubectl-windows.md b/content/es/docs/tasks/tools/included/install-kubectl-windows.md index 427d0bcc75f..46a43a35ca8 100644 --- a/content/es/docs/tasks/tools/included/install-kubectl-windows.md +++ b/content/es/docs/tasks/tools/included/install-kubectl-windows.md @@ -57,7 +57,7 @@ Existen los siguientes métodos para instalar kubectl en Windows: - Usando PowerShell puede automatizar la verificación usando el operador `-eq` para obtener un resultado de `True` o `False`: ```powershell - $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) + $(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256)) ``` 1. Agregue el binario a su `PATH`. 
diff --git a/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md index 00437e7a673..ce0cce6be63 100644 --- a/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md +++ b/content/es/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md @@ -50,8 +50,7 @@ brew install bash-completion@2 Como se indica en el resultado de este comando, agregue lo siguiente a su archivo `~/.bash_profile`: ```bash -export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d" -[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" +brew_etc="$(brew --prefix)/etc" && [[ -r "${brew_etc}/profile.d/bash_completion.sh" ]] && . "${brew_etc}/profile.d/bash_completion.sh" ``` Vuelva a cargar su shell y verifique que bash-complete v2 esté instalado correctamente con `type _init_completion`. diff --git a/content/es/docs/tutorials/hello-minikube.md b/content/es/docs/tutorials/hello-minikube.md index fb40f2fdb0b..ed78c76a16c 100644 --- a/content/es/docs/tutorials/hello-minikube.md +++ b/content/es/docs/tutorials/hello-minikube.md @@ -76,7 +76,7 @@ Un [*Deployment*](/docs/concepts/workloads/controllers/deployment/) en Kubernete 1. Ejecutar el comando `kubectl create` para crear un Deployment que maneje un Pod. El Pod ejecuta un contenedor basado en la imagen proveida por Docker. ```shell - kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 + kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4 ``` 2. Ver el Deployment: diff --git a/content/fr/docs/concepts/configuration/secret.md b/content/fr/docs/concepts/configuration/secret.md index bad71e79d7c..eecabde7807 100644 --- a/content/fr/docs/concepts/configuration/secret.md +++ b/content/fr/docs/concepts/configuration/secret.md @@ -889,7 +889,7 @@ spec: secretName: dotfile-secret containers: - name: dotfile-test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: - ls - "-l" diff --git a/content/fr/docs/concepts/storage/persistent-volumes.md b/content/fr/docs/concepts/storage/persistent-volumes.md index ad88ec14caa..230ce5f7024 100644 --- a/content/fr/docs/concepts/storage/persistent-volumes.md +++ b/content/fr/docs/concepts/storage/persistent-volumes.md @@ -192,7 +192,7 @@ spec: path: /any/path/it/will/be/replaced containers: - name: pv-recycler - image: "k8s.gcr.io/busybox" + image: "registry.k8s.io/busybox" command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"] volumeMounts: - name: vol diff --git a/content/fr/docs/concepts/storage/volumes.md b/content/fr/docs/concepts/storage/volumes.md index e694a96661e..e78e7ce76b3 100644 --- a/content/fr/docs/concepts/storage/volumes.md +++ b/content/fr/docs/concepts/storage/volumes.md @@ -113,7 +113,7 @@ metadata: name: test-ebs spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-ebs @@ -190,7 +190,7 @@ metadata: name: test-cinder spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-cinder-container volumeMounts: - mountPath: /test-cinder @@ -294,7 +294,7 @@ metadata: name: test-pd spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /cache @@ -369,7 +369,7 @@ metadata: 
name: test-pd spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-pd @@ -509,7 +509,7 @@ metadata: name: test-pd spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-pd @@ -759,7 +759,7 @@ metadata: name: test-portworx-volume-pod spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /mnt @@ -824,7 +824,7 @@ metadata: name: pod-0 spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: pod-0 volumeMounts: - mountPath: /test-pd @@ -953,7 +953,7 @@ metadata: name: test-vmdk spec: containers: - - image: k8s.gcr.io/test-webserver + - image: registry.k8s.io/test-webserver name: test-container volumeMounts: - mountPath: /test-vmdk diff --git a/content/fr/docs/concepts/workloads/controllers/statefulset.md b/content/fr/docs/concepts/workloads/controllers/statefulset.md index 14221ba82af..b89e0416b81 100644 --- a/content/fr/docs/concepts/workloads/controllers/statefulset.md +++ b/content/fr/docs/concepts/workloads/controllers/statefulset.md @@ -78,7 +78,7 @@ spec: terminationGracePeriodSeconds: 10 containers: - name: nginx - image: k8s.gcr.io/nginx-slim:0.8 + image: registry.k8s.io/nginx-slim:0.8 ports: - containerPort: 80 name: web diff --git a/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md b/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md index aece7de62cf..833b60fbf52 100644 --- a/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/fr/docs/concepts/workloads/pods/pod-lifecycle.md @@ -329,7 +329,7 @@ spec: containers: - args: - /server - image: k8s.gcr.io/liveness + image: registry.k8s.io/liveness livenessProbe: httpGet: # lorsque "host" n'est pas défini, "PodIP" sera utilisé diff --git a/content/fr/docs/contribute/generate-ref-docs/federation-api.md b/content/fr/docs/contribute/generate-ref-docs/federation-api.md index aa25853d64a..30bb6c525d8 100644 --- a/content/fr/docs/contribute/generate-ref-docs/federation-api.md +++ b/content/fr/docs/contribute/generate-ref-docs/federation-api.md @@ -48,7 +48,7 @@ cd hack/update-federation-api-reference-docs.sh ``` -Le script exécute le [k8s.gcr.io/gen-swagger-docs](https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/gen-swagger-docs?gcrImageListquery=%255B%255D&gcrImageListpage=%257B%2522t%2522%253A%2522%2522%252C%2522i%2522%253A0%257D&gcrImageListsize=50&gcrImageListsort=%255B%257B%2522p%2522%253A%2522uploaded%2522%252C%2522s%2522%253Afalse%257D%255D) image pour générer cet ensemble de documents de référence: +Le script exécute le [registry.k8s.io/gen-swagger-docs](https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/gen-swagger-docs?gcrImageListquery=%255B%255D&gcrImageListpage=%257B%2522t%2522%253A%2522%2522%252C%2522i%2522%253A0%257D&gcrImageListsize=50&gcrImageListsort=%255B%257B%2522p%2522%253A%2522uploaded%2522%252C%2522s%2522%253Afalse%257D%255D) image pour générer cet ensemble de documents de référence: * /docs/api-reference/extensions/v1beta1/operations.html * /docs/api-reference/extensions/v1beta1/definitions.html diff --git a/content/fr/docs/reference/kubectl/cheatsheet.md b/content/fr/docs/reference/kubectl/cheatsheet.md index e1d86f7dac9..9d2f018bb8c 100644 --- a/content/fr/docs/reference/kubectl/cheatsheet.md +++ 
b/content/fr/docs/reference/kubectl/cheatsheet.md @@ -359,8 +359,8 @@ Exemples utilisant `-o=custom-columns` : # Toutes les images s'exécutant dans un cluster kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image' - # Toutes les images excepté "k8s.gcr.io/coredns:1.6.2" -kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image' + # Toutes les images excepté "registry.k8s.io/coredns:1.6.2" +kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="registry.k8s.io/coredns:1.6.2")].image' # Tous les champs dans metadata quel que soit leur nom kubectl get pods -A -o=custom-columns='DATA:metadata.*' diff --git a/content/fr/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md b/content/fr/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md index 1e8d0971f90..88ccdd8c6f6 100644 --- a/content/fr/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md +++ b/content/fr/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md @@ -62,7 +62,7 @@ kubeadm init [flags] --feature-gates string Un ensemble de paires clef=valeur qui décrivent l'entrée de configuration pour des fonctionnalités diverses. Il n'y en a aucune dans cette version. -h, --help aide pour l'initialisation (init) --ignore-preflight-errors strings Une liste de contrôles dont les erreurs seront catégorisées comme "warnings" (avertissements). Par exemple : 'IsPrivilegedUser,Swap'. La valeur 'all' ignore les erreurs de tous les contrôles. - --image-repository string Choisis un container registry d'où télécharger les images du control plane. (par défaut "k8s.gcr.io") + --image-repository string Choisis un container registry d'où télécharger les images du control plane. (par défaut "registry.k8s.io") --kubernetes-version string Choisis une version Kubernetes spécifique pour le control plane. (par défaut "stable-1") --node-name string Spécifie le nom du noeud. --pod-network-cidr string Spécifie l'intervalle des adresses IP pour le réseau des pods. Si fournie, le control plane allouera automatiquement les CIDRs pour chacun des noeuds. diff --git a/content/fr/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/fr/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 0df7c1a65b5..0f7db594856 100644 --- a/content/fr/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/fr/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -131,12 +131,12 @@ Pour de l'information sur comment passer des options aux composants du control p ### Utiliser des images personnalisées {#custom-images} -Par défaut, kubeadm télécharge les images depuis `k8s.gcr.io`, à moins que la version demandée de Kubernetes soit une version Intégration Continue (CI). Dans ce cas, `gcr.io/k8s-staging-ci-images` est utilisé. +Par défaut, kubeadm télécharge les images depuis `registry.k8s.io`, à moins que la version demandée de Kubernetes soit une version Intégration Continue (CI). Dans ce cas, `gcr.io/k8s-staging-ci-images` est utilisé. Vous pouvez outrepasser ce comportement en utilisant [kubeadm avec un fichier de configuration](#config-file). Les personnalisations permises sont : -* fournir un `imageRepository` à utiliser à la place de `k8s.gcr.io`. +* fournir un `imageRepository` à utiliser à la place de `registry.k8s.io`. * régler `useHyperKubeImage` à `true` pour utiliser l'image HyperKube. * fournir un `imageRepository` et un `imageTag` pour etcd et l'extension (add-on) DNS. 
@@ -264,7 +264,7 @@ kubeadm config images list kubeadm config images pull ``` -A partir de Kubernetes 1.12, les images prefixées par `k8s.gcr.io/kube-*`, `k8s.gcr.io/etcd` et `k8s.gcr.io/pause` +A partir de Kubernetes 1.12, les images prefixées par `registry.k8s.io/kube-*`, `registry.k8s.io/etcd` et `registry.k8s.io/pause` ne nécessitent pas un suffix `-${ARCH}`. ### Automatiser kubeadm diff --git a/content/fr/docs/setup/learning-environment/minikube.md b/content/fr/docs/setup/learning-environment/minikube.md index 94bd0744f56..5c843f4b3fd 100644 --- a/content/fr/docs/setup/learning-environment/minikube.md +++ b/content/fr/docs/setup/learning-environment/minikube.md @@ -56,7 +56,7 @@ Suivez les étapes ci-dessous pour commencer et explorer Minikube. Créons un déploiement Kubernetes en utilisant une image existante nommée `echoserver`, qui est un serveur HTTP, et exposez-la sur le port 8080 à l’aide de `--port`. ```shell - kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10 + kubectl create deployment hello-minikube --image=registry.k8s.io/echoserver:1.10 ``` Le résultat est similaire à ceci: diff --git a/content/fr/docs/setup/production-environment/on-premises-vm/_index.md b/content/fr/docs/setup/production-environment/on-premises-vm/_index.md deleted file mode 100644 index 96054a37cd0..00000000000 --- a/content/fr/docs/setup/production-environment/on-premises-vm/_index.md +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: On-Premises VMs -weight: 60 ---- diff --git a/content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md index ba5d296adb5..539bec39f9e 100644 --- a/content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -29,7 +29,7 @@ L'interface utilisateur du tableau de bord n'est pas déployée par défaut. Pour le déployer, exécutez la commande suivante: ```text -kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/charts/recommended.yaml ``` ## Accès à l'interface utilisateur du tableau de bord diff --git a/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 2aa904144f2..39cf2ecbd42 100644 --- a/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/fr/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -27,7 +27,7 @@ Cela peut être utilisé dans le cas des liveness checks sur les conteneurs à d De nombreuses applications fonctionnant pour des longues périodes finissent par passer à des états de rupture et ne peuvent pas se rétablir, sauf en étant redémarrées. Kubernetes fournit des liveness probes pour détecter et remédier à ces situations. -Dans cet exercice, vous allez créer un Pod qui exécute un conteneur basé sur l'image `k8s.gcr.io/busybox`. Voici le fichier de configuration pour le Pod : +Dans cet exercice, vous allez créer un Pod qui exécute un conteneur basé sur l'image `registry.k8s.io/busybox`. 
Voici le fichier de configuration pour le Pod : {{< codenew file="pods/probe/exec-liveness.yaml" >}} @@ -61,8 +61,8 @@ La sortie indique qu'aucune liveness probe n'a encore échoué : FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0 -23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox" -23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox" +23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "registry.k8s.io/busybox" +23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "registry.k8s.io/busybox" 23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined] 23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e ``` @@ -79,8 +79,8 @@ Au bas de la sortie, il y a des messages indiquant que les liveness probes ont FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0 -36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox" -36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox" +36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "registry.k8s.io/busybox" +36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "registry.k8s.io/busybox" 36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined] 36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e 2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory @@ -102,7 +102,7 @@ liveness-exec 1/1 Running 1 1m ## Définir une requête HTTP de liveness Un autre type de liveness probe utilise une requête GET HTTP. Voici la configuration -d'un Pod qui fait fonctionner un conteneur basé sur l'image `k8s.gcr.io/liveness`. +d'un Pod qui fait fonctionner un conteneur basé sur l'image `registry.k8s.io/liveness`. {{< codenew file="pods/probe/http-liveness.yaml" >}} diff --git a/content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md index eed01fc4ed6..3c17aa3cda1 100644 --- a/content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/fr/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -45,7 +45,7 @@ Voici le fichier de configuration du Pod : La sortie ressemble à ceci : - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s ``` @@ -73,7 +73,7 @@ Voici le fichier de configuration du Pod : La sortie ressemble à ceci : - ```shell + ```console USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 root 12 0.0 0.0 20228 3020 ? 
Ss 00:47 0:00 /bin/bash @@ -91,7 +91,7 @@ Voici le fichier de configuration du Pod : 1. Dans votre terminal initial, surveillez les changements apportés au Pod de Redis. Éventuellement, vous verrez quelque chose comme ça : - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s redis 0/1 Completed 0 6m diff --git a/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index 0ade44801bd..aec74239666 100644 --- a/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -100,7 +100,7 @@ En quelques étapes, nous vous emmenons de Docker Compose à Kubernetes. Tous do services: redis-master: - image: k8s.gcr.io/redis:e2e + image: registry.k8s.io/redis:e2e ports: - "6379" diff --git a/content/fr/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/fr/docs/tasks/inject-data-application/define-environment-variable-container.md new file mode 100644 index 00000000000..c55aa45ac51 --- /dev/null +++ b/content/fr/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -0,0 +1,112 @@ +--- +title: Définir des variables d'environnement pour un Container +content_type: task +weight: 20 +--- + + + +Cette page montre comment définir des variables d'environnement pour un +container au sein d'un Pod Kubernetes. + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + + + +## Définir une variable d'environnement pour un container + +Lorsque vous créez un Pod, vous pouvez définir des variables d'environnement +pour les containers qui seront exécutés au sein du Pod. +Pour les définir, utilisez le champ `env` ou `envFrom` +dans le fichier de configuration. + +Dans cet exercice, vous allez créer un Pod qui exécute un container. Le fichier de configuration pour ce Pod contient une variable d'environnement s'appelant `DEMO_GREETING` et sa valeur est `"Hello from the environment"`. Voici le fichier de configuration du Pod: + +{{< codenew file="pods/inject/envars.yaml" >}} + +1. Créez un Pod à partir de ce fichier: + + ```shell + kubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml + ``` + +1. Listez les Pods: + + ```shell + kubectl get pods -l purpose=demonstrate-envars + ``` + + Le résultat sera similaire à celui-ci: + + ``` + NAME READY STATUS RESTARTS AGE + envar-demo 1/1 Running 0 9s + ``` + +1. Listez les variables d'environnement au sein du container: + + ```shell + kubectl exec envar-demo -- printenv + ``` + + Le résultat sera similaire à celui-ci: + + ``` + NODE_VERSION=4.4.2 + EXAMPLE_SERVICE_PORT_8080_TCP_ADDR=10.3.245.237 + HOSTNAME=envar-demo + ... + DEMO_GREETING=Hello from the environment + DEMO_FAREWELL=Such a sweet sorrow + ``` + +{{< note >}} +Les variables d'environnement définies dans les champs `env` ou `envFrom` +écraseront les variables définies dans l'image utilisée par le container. +{{< /note >}} + +{{< note >}} +Une variable d'environnement peut faire référence à une autre variable, +cependant l'ordre de déclaration est important. Une variable faisant référence +à une autre doit être déclarée après la variable référencée. +De plus, il est recommandé d'éviter les références circulaires. 
+{{< /note >}} + +## Utilisez des variables d'environnement dans la configuration + +Les variables d'environnement que vous définissez dans la configuration d'un Pod peuvent être utilisées à d'autres endroits de la configuration, comme par exemple dans les commandes et arguments pour les containers. +Dans l'exemple ci-dessous, les variables d'environnement `GREETING`, `HONORIFIC`, et +`NAME` ont des valeurs respectives de `Warm greetings to`, `The Most +Honorable`, et `Kubernetes`. Ces variables sont ensuite utilisées comme arguments +pour le container `env-print-demo`. + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: print-greeting +spec: + containers: + - name: env-print-demo + image: bash + env: + - name: GREETING + value: "Warm greetings to" + - name: HONORIFIC + value: "The Most Honorable" + - name: NAME + value: "Kubernetes" + command: ["echo"] + args: ["$(GREETING) $(HONORIFIC) $(NAME)"] +``` + +Une fois le Pod créé, la commande `echo Warm greetings to The Most Honorable Kubernetes` sera exécutée dans le container. + +## {{% heading "whatsnext" %}} + +* En savoir plus sur les [variables d'environnement](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). +* Apprendre à [utiliser des secrets comme variables d'environnement](/docs/concepts/configuration/secret/#using-secrets-as-environment-variables). +* Voir la documentation de référence pour [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core). + diff --git a/content/fr/docs/tasks/inject-data-application/define-interdependent-environment-variables.md b/content/fr/docs/tasks/inject-data-application/define-interdependent-environment-variables.md new file mode 100644 index 00000000000..73bf9afa948 --- /dev/null +++ b/content/fr/docs/tasks/inject-data-application/define-interdependent-environment-variables.md @@ -0,0 +1,83 @@ +--- +title: Définir des variables d'environnement dépendantes +content_type: task +weight: 20 +--- + + + +Cette page montre comment définir des variables d'environnement +interdépendantes pour un container dans un Pod Kubernetes. + + +## {{% heading "prerequisites" %}} + + +{{< include "task-tutorial-prereqs.md" >}} + + + + +## Définir une variable d'environnement dépendante pour un container + +Lorsque vous créez un Pod, vous pouvez configurer des variables d'environnement interdépendantes pour les containers exécutés dans un Pod. +Pour définir une variable d'environnement dépendante, vous pouvez utiliser le format $(VAR_NAME) dans le champ `value` de la spécification `env` dans le fichier de configuration. + +Dans cet exercice, vous allez créer un Pod qui exécute un container. Le fichier de configuration de ce Pod définit des variables d'environnement interdépendantes avec une réutilisation entre les différentes variables. Voici le fichier de configuration de ce Pod: + +{{< codenew file="pods/inject/dependent-envars.yaml" >}} + +1. Créez un Pod en utilisant ce fichier de configuration: + + ```shell + kubectl apply -f https://k8s.io/examples/pods/inject/dependent-envars.yaml + ``` + ``` + pod/dependent-envars-demo created + ``` + +2. Listez le Pod: + + ```shell + kubectl get pods dependent-envars-demo + ``` + ``` + NAME READY STATUS RESTARTS AGE + dependent-envars-demo 1/1 Running 0 9s + ``` + +3.
Affichez les logs pour le container exécuté dans votre Pod: + + ```shell + kubectl logs pod/dependent-envars-demo + ``` + ``` + + UNCHANGED_REFERENCE=$(PROTOCOL)://172.17.0.1:80 + SERVICE_ADDRESS=https://172.17.0.1:80 + ESCAPED_REFERENCE=$(PROTOCOL)://172.17.0.1:80 + ``` + +Comme montré ci-dessus, vous avez défini une dépendance correcte pour `SERVICE_ADDRESS`, une dépendance manquante pour `UNCHANGED_REFERENCE`, et avez ignoré la dépendance pour `ESCAPED_REFERENCE`. + +Lorsqu'une variable d'environnement est déja définie alors +qu'elle est référencée par une autre variable, la référence s'effectue +correctement, comme dans l'exemple de `SERVICE_ADDRESS`. + +Il est important de noter que l'ordre dans la liste `env` est important. +Une variable d'environnement ne sera pas considérée comme "définie" +si elle est spécifiée plus bas dans la liste. C'est pourquoi +`UNCHANGED_REFERENCE` ne résout pas correctement `$(PROTOCOL)` dans l'exemple précédent. + +Lorsque la variable d'environnement n'est pas définie, ou n'inclut qu'une partie des variables, la variable non définie sera traitée comme une chaine de caractères, par exemple `UNCHANGED_REFERENCE`. Notez que les variables d'environnement malformées n'empêcheront généralement pas le démarrage du conteneur. + +La syntaxe `$(VAR_NAME)` peut être échappée avec un double `$`, par exemple `$$(VAR_NAME)`. +Les références échappées ne sont jamais développées, que la variable référencée +soit définie ou non. C'est le cas pour l'exemple `ESCAPED_REFERENCE` ci-dessus. + +## {{% heading "whatsnext" %}} + + +* En savoir plus sur les [variables d'environnement](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/). +* Lire la documentation pour [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core). + diff --git a/content/fr/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/fr/docs/tasks/inject-data-application/distribute-credentials-secure.md new file mode 100644 index 00000000000..f2052c52ff6 --- /dev/null +++ b/content/fr/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -0,0 +1,355 @@ +--- +title: Distribuer des données sensibles de manière sécurisée avec les Secrets +content_type: task +weight: 50 +min-kubernetes-server-version: v1.6 +--- + + + +Cette page montre comment injecter des données sensibles comme des mots de passe ou des clés de chiffrement dans des Pods. + +## {{% heading "prerequisites" %}} + +{{< include "task-tutorial-prereqs.md" >}} + +### Encoder vos données en format base64 + +Supposons que vous avez deux données sensibles: un identifiant `my-app` et un +mot de passe +`39528$vdg7Jb`. Premièrement, utilisez un outil capable d'encoder vos données +dans un format base64. Voici un exemple en utilisant le programme base64: +```shell +echo -n 'my-app' | base64 +echo -n '39528$vdg7Jb' | base64 +``` + +Le résultat montre que la représentation base64 de l'utilisateur est `bXktYXBw`, +et que la représentation base64 du mot de passe est `Mzk1MjgkdmRnN0pi`. + +{{< caution >}} +Utilisez un outil local approuvé par votre système d'exploitation +afin de réduire les risques de sécurité liés à l'utilisation d'un outil externe. +{{< /caution >}} + + + +## Créer un Secret + +Voici un fichier de configuration que vous pouvez utiliser pour créer un Secret +qui contiendra votre identifiant et mot de passe: + +{{< codenew file="pods/inject/secret.yaml" >}} + +1. 
Créez le Secret: + + ```shell + kubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml + ``` + +1. Listez les informations du Secret: + + ```shell + kubectl get secret test-secret + ``` + + Résultat: + + ``` + NAME TYPE DATA AGE + test-secret Opaque 2 1m + ``` + +1. Affichez les informations détaillées du Secret: + + ```shell + kubectl describe secret test-secret + ``` + + Résultat: + + ``` + Name: test-secret + Namespace: default + Labels: + Annotations: + + Type: Opaque + + Data + ==== + password: 13 bytes + username: 7 bytes + ``` + +### Créer un Secret en utilisant kubectl + +Si vous voulez sauter l'étape d'encodage, vous pouvez créer le même Secret +en utilisant la commande `kubectl create secret`. Par exemple: + +```shell +kubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb' +``` + +Cette approche est plus pratique. La façon de faire plus explicite +montrée précédemment permet de démontrer et comprendre le fonctionnement des Secrets. + + +## Créer un Pod qui a accès aux données sensibles à travers un Volume + +Voici un fichier de configuration qui permet de créer un Pod: + +{{< codenew file="pods/inject/secret-pod.yaml" >}} + +1. Créez le Pod: + + ```shell + kubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml + ``` + +1. Vérifiez que le Pod est opérationnel: + + ```shell + kubectl get pod secret-test-pod + ``` + + Résultat: + ``` + NAME READY STATUS RESTARTS AGE + secret-test-pod 1/1 Running 0 42m + ``` + +1. Exécutez une session shell dans le Container qui est dans votre Pod: + ```shell + kubectl exec -i -t secret-test-pod -- /bin/bash + ``` + +1. Les données sont exposées au container à travers un Volume monté sur +`/etc/secret-volume`. + + Dans votre shell, listez les fichiers du dossier `/etc/secret-volume`: + ```shell + # À exécuter à l'intérieur du container + ls /etc/secret-volume + ``` + Le résultat montre deux fichiers, un pour chaque donnée du Secret: + ``` + password username + ``` + +1. Toujours dans le shell, affichez le contenu des fichiers + `username` et `password`: + ```shell + # À exécuter à l'intérieur du container + echo "$( cat /etc/secret-volume/username )" + echo "$( cat /etc/secret-volume/password )" + ``` + Le résultat doit contenir votre identifiant et mot de passe: + ``` + my-app + 39528$vdg7Jb + ``` + +Vous pouvez alors modifier votre image ou votre ligne de commande pour que le programme +recherche les fichiers contenus dans le dossier du champ `mountPath`. +Chaque clé du Secret `data` sera exposée comme un fichier à l'intérieur de ce dossier. + +### Monter les données du Secret sur des chemins spécifiques + +Vous pouvez contrôler les chemins sur lesquels les données des Secrets sont montées. +Utilisez le champ `.spec.volumes[].secret.items` pour changer le +chemin cible de chaque donnée: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret + items: + - key: username + path: my-group/my-username +``` + +Voici ce qu'il se passe lorsque vous déployez ce Pod: + +* La clé `username` du Secret `mysecret` est montée dans le container sur le chemin + `/etc/foo/my-group/my-username` au lieu de `/etc/foo/username`. +* La clé `password` du Secret n'est pas montée dans le container. 
+ +Si vous listez de manière explicite les clés en utilisant le champ `.spec.volumes[].secret.items`, +il est important de prendre en considération les points suivants: + +* Seules les clés listées dans le champ `items` seront montées. +* Pour monter toutes les clés du Secret, toutes doivent être + définies dans le champ `items`. +* Toutes les clés définies doivent exister dans le Secret. + Sinon, le volume ne sera pas créé. + +### Appliquer des permissions POSIX aux données + +Vous pouvez appliquer des permissions POSIX pour une clé d'un Secret. Si vous n'en configurez pas, les permissions seront par défaut `0644`. +Vous pouvez aussi définir des permissions pour tout un Secret, et redéfinir les permissions pour chaque clé si nécessaire. + +Par exemple, il est possible de définir un mode par défaut: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + volumes: + - name: foo + secret: + secretName: mysecret + defaultMode: 0400 +``` + +Le Secret sera monté sur `/etc/foo`; tous les fichiers créés par le secret +auront des permissions de type `0400`. + +{{< note >}} +Si vous définissez un Pod en utilisant le format JSON, il est important de +noter que la spécification JSON ne supporte pas le système octal, et qu'elle +comprendra la valeur `0400` comme la valeur _décimale_ `400`. +En JSON, utilisez plutôt l'écriture décimale pour le champ `defaultMode`. +Si vous utilisez le format YAML, vous pouvez utiliser le système octal +pour définir `defaultMode`. +{{< /note >}} + +## Définir des variables d'environnement avec des Secrets + +Il est possible de monter les données des Secrets comme variables d'environnement dans vos containers. + +Si un container consomme déja un Secret en variables d'environnement, +la mise à jour de ce Secret ne sera pas répercutée dans le container tant +qu'il n'aura pas été redémarré. Il existe cependant des solutions tierces +permettant de redémarrer les containers lors d'une mise à jour du Secret. + +### Définir une variable d'environnement à partir d'un seul Secret + +* Définissez une variable d'environnement et sa valeur à l'intérieur d'un Secret: + + ```shell + kubectl create secret generic backend-user --from-literal=backend-username='backend-admin' + ``` + +* Assignez la valeur de `backend-username` définie dans le Secret + à la variable d'environnement `SECRET_USERNAME` dans la configuration du Pod. + + {{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}} + +* Créez le Pod: + + ```shell + kubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml + ``` + +* À l'intérieur d'une session shell, affichez le contenu de la variable + d'environnement `SECRET_USERNAME`: + + ```shell + kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME' + ``` + + Le résultat est: + ``` + backend-admin + ``` + +### Définir des variables d'environnement à partir de plusieurs Secrets + +* Comme précédemment, créez d'abord les Secrets: + + ```shell + kubectl create secret generic backend-user --from-literal=backend-username='backend-admin' + kubectl create secret generic db-user --from-literal=db-username='db-admin' + ``` + +* Définissez les variables d'environnement dans la configuration du Pod. 
+ + {{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}} + +* Créez le Pod: + + ```shell + kubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml + ``` + +* Dans un shell, listez les variables d'environnement du container: + + ```shell + kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME' + ``` + Le résultat est: + ``` + DB_USERNAME=db-admin + BACKEND_USERNAME=backend-admin + ``` + + +## Configurez toutes les paires de clé-valeur d'un Secret comme variables d'environnement + +{{< note >}} +Cette fonctionnalité n'est disponible que dans les versions de Kubernetes +égales ou supérieures à v1.6. +{{< /note >}} + +* Créez un Secret contenant plusieurs paires de clé-valeur: + + ```shell + kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb' + ``` + +* Utilisez `envFrom` pour définir toutes les données du Secret comme variables + d'environnement. Les clés du Secret deviendront les noms des variables + d'environnement à l'intérieur du Pod. + + {{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}} + +* Créez le Pod: + + ```shell + kubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml + ``` + +* Dans votre shell, affichez les variables d'environnement `username` et `password`: + + ```shell + kubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo "username: $username\npassword: $password\n"' + ``` + + Le résultat est: + ``` + username: my-app + password: 39528$vdg7Jb + ``` + +### Références + +* [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core) +* [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core) +* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) + +## {{% heading "whatsnext" %}} + +* En savoir plus sur les [Secrets](/docs/concepts/configuration/secret/). +* En savoir plus sur les [Volumes](/docs/concepts/storage/volumes/). diff --git a/content/fr/docs/tasks/service-catalog/_index.md b/content/fr/docs/tasks/service-catalog/_index.md deleted file mode 100644 index 4b055cc49ac..00000000000 --- a/content/fr/docs/tasks/service-catalog/_index.md +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: Installation du catalogue de services -weight: 150 ---- diff --git a/content/fr/docs/tutorials/hello-minikube.md b/content/fr/docs/tutorials/hello-minikube.md index a934464b777..f097b375a47 100644 --- a/content/fr/docs/tutorials/hello-minikube.md +++ b/content/fr/docs/tutorials/hello-minikube.md @@ -78,7 +78,7 @@ Les déploiements sont le moyen recommandé pour gérer la création et la mise Pod utilise un conteneur basé sur l'image Docker fournie. ```shell - kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 + kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4 ``` 2. 
Affichez le déploiement : diff --git a/content/fr/examples/admin/cloud/ccm-example.yaml b/content/fr/examples/admin/cloud/ccm-example.yaml index e3bc52e53c0..2f789446a23 100644 --- a/content/fr/examples/admin/cloud/ccm-example.yaml +++ b/content/fr/examples/admin/cloud/ccm-example.yaml @@ -41,9 +41,9 @@ spec: serviceAccountName: cloud-controller-manager containers: - name: cloud-controller-manager - # pour les fournisseurs in-tree, nous utilisons k8s.gcr.io/cloud-controller-manager + # pour les fournisseurs in-tree, nous utilisons registry.k8s.io/cloud-controller-manager # cela peut être remplacé par n'importe quelle autre image pour les fournisseurs out-of-tree - image: k8s.gcr.io/cloud-controller-manager:v1.8.0 + image: registry.k8s.io/cloud-controller-manager:v1.8.0 command: - /usr/local/bin/cloud-controller-manager - --cloud-provider= # Ajoutez votre propre fournisseur de cloud ici! diff --git a/content/fr/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml b/content/fr/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml index b37b616e6f7..1053cac5772 100644 --- a/content/fr/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml +++ b/content/fr/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml @@ -22,7 +22,7 @@ spec: - name: varlog mountPath: /var/log - name: count-agent - image: k8s.gcr.io/fluentd-gcp:1.30 + image: registry.k8s.io/fluentd-gcp:1.30 env: - name: FLUENTD_ARGS value: -c /etc/fluentd-config/fluentd.conf diff --git a/content/fr/examples/application/guestbook/redis-master-deployment.yaml b/content/fr/examples/application/guestbook/redis-master-deployment.yaml index 478216d1acc..d96f9d6d76b 100644 --- a/content/fr/examples/application/guestbook/redis-master-deployment.yaml +++ b/content/fr/examples/application/guestbook/redis-master-deployment.yaml @@ -20,7 +20,7 @@ spec: spec: containers: - name: master - image: k8s.gcr.io/redis:e2e # or just image: redis + image: registry.k8s.io/redis:e2e # or just image: redis resources: requests: cpu: 100m diff --git a/content/fr/examples/pods/inject/dependent-envars.yaml b/content/fr/examples/pods/inject/dependent-envars.yaml new file mode 100644 index 00000000000..67d07098bae --- /dev/null +++ b/content/fr/examples/pods/inject/dependent-envars.yaml @@ -0,0 +1,26 @@ +apiVersion: v1 +kind: Pod +metadata: + name: dependent-envars-demo +spec: + containers: + - name: dependent-envars-demo + args: + - while true; do echo -en '\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\n'; sleep 30; done; + command: + - sh + - -c + image: busybox:1.28 + env: + - name: SERVICE_PORT + value: "80" + - name: SERVICE_IP + value: "172.17.0.1" + - name: UNCHANGED_REFERENCE + value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)" + - name: PROTOCOL + value: "https" + - name: SERVICE_ADDRESS + value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)" + - name: ESCAPED_REFERENCE + value: "$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)" diff --git a/content/fr/examples/pods/inject/envars.yaml b/content/fr/examples/pods/inject/envars.yaml new file mode 100644 index 00000000000..ebf5214376f --- /dev/null +++ b/content/fr/examples/pods/inject/envars.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Pod +metadata: + name: envar-demo + labels: + purpose: demonstrate-envars +spec: + containers: + - name: envar-demo-container + image: gcr.io/google-samples/node-hello:1.0 + env: + - name: DEMO_GREETING + value: "Hello from the environment" + - 
name: DEMO_FAREWELL + value: "Such a sweet sorrow" diff --git a/content/fr/examples/pods/inject/pod-multiple-secret-env-variable.yaml b/content/fr/examples/pods/inject/pod-multiple-secret-env-variable.yaml new file mode 100644 index 00000000000..f285e419326 --- /dev/null +++ b/content/fr/examples/pods/inject/pod-multiple-secret-env-variable.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Pod +metadata: + name: envvars-multiple-secrets +spec: + containers: + - name: envars-test-container + image: nginx + env: + - name: BACKEND_USERNAME + valueFrom: + secretKeyRef: + name: backend-user + key: backend-username + - name: DB_USERNAME + valueFrom: + secretKeyRef: + name: db-user + key: db-username diff --git a/content/fr/examples/pods/inject/pod-secret-envFrom.yaml b/content/fr/examples/pods/inject/pod-secret-envFrom.yaml new file mode 100644 index 00000000000..eb1d3213efe --- /dev/null +++ b/content/fr/examples/pods/inject/pod-secret-envFrom.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Pod +metadata: + name: envfrom-secret +spec: + containers: + - name: envars-test-container + image: nginx + envFrom: + - secretRef: + name: test-secret diff --git a/content/fr/examples/pods/inject/pod-single-secret-env-variable.yaml b/content/fr/examples/pods/inject/pod-single-secret-env-variable.yaml new file mode 100644 index 00000000000..af4cf8732fe --- /dev/null +++ b/content/fr/examples/pods/inject/pod-single-secret-env-variable.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Pod +metadata: + name: env-single-secret +spec: + containers: + - name: envars-test-container + image: nginx + env: + - name: SECRET_USERNAME + valueFrom: + secretKeyRef: + name: backend-user + key: backend-username diff --git a/content/fr/examples/pods/inject/secret-envars-pod.yaml b/content/fr/examples/pods/inject/secret-envars-pod.yaml new file mode 100644 index 00000000000..1637c0eac35 --- /dev/null +++ b/content/fr/examples/pods/inject/secret-envars-pod.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Pod +metadata: + name: secret-envars-test-pod +spec: + containers: + - name: envars-test-container + image: nginx + env: + - name: SECRET_USERNAME + valueFrom: + secretKeyRef: + name: test-secret + key: username + - name: SECRET_PASSWORD + valueFrom: + secretKeyRef: + name: test-secret + key: password diff --git a/content/fr/examples/pods/inject/secret-pod.yaml b/content/fr/examples/pods/inject/secret-pod.yaml new file mode 100644 index 00000000000..8487da8d1c1 --- /dev/null +++ b/content/fr/examples/pods/inject/secret-pod.yaml @@ -0,0 +1,18 @@ +apiVersion: v1 +kind: Pod +metadata: + name: secret-test-pod +spec: + containers: + - name: test-container + image: nginx + volumeMounts: + # name must match the volume name below + - name: secret-volume + mountPath: /etc/secret-volume + readOnly: true + # The secret data is exposed to Containers in the Pod through a Volume. 
+ volumes: + - name: secret-volume + secret: + secretName: test-secret diff --git a/content/fr/examples/pods/inject/secret.yaml b/content/fr/examples/pods/inject/secret.yaml new file mode 100644 index 00000000000..706ca8670fa --- /dev/null +++ b/content/fr/examples/pods/inject/secret.yaml @@ -0,0 +1,7 @@ +apiVersion: v1 +kind: Secret +metadata: + name: test-secret +data: + username: bXktYXBw + password: Mzk1MjgkdmRnN0pi diff --git a/content/fr/examples/pods/pod-configmap-env-var-valueFrom.yaml b/content/fr/examples/pods/pod-configmap-env-var-valueFrom.yaml index 00827ec98aa..fa172abd371 100644 --- a/content/fr/examples/pods/pod-configmap-env-var-valueFrom.yaml +++ b/content/fr/examples/pods/pod-configmap-env-var-valueFrom.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: [ "/bin/echo", "$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] env: - name: SPECIAL_LEVEL_KEY diff --git a/content/fr/examples/pods/pod-configmap-envFrom.yaml b/content/fr/examples/pods/pod-configmap-envFrom.yaml index 70ae7e5bcfa..e7b5b60841e 100644 --- a/content/fr/examples/pods/pod-configmap-envFrom.yaml +++ b/content/fr/examples/pods/pod-configmap-envFrom.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: [ "/bin/sh", "-c", "env" ] envFrom: - configMapRef: diff --git a/content/fr/examples/pods/pod-configmap-volume-specific-key.yaml b/content/fr/examples/pods/pod-configmap-volume-specific-key.yaml index 72e38fd8363..ec7a8fb541c 100644 --- a/content/fr/examples/pods/pod-configmap-volume-specific-key.yaml +++ b/content/fr/examples/pods/pod-configmap-volume-specific-key.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: [ "/bin/sh","-c","cat /etc/config/keys" ] volumeMounts: - name: config-volume diff --git a/content/fr/examples/pods/pod-configmap-volume.yaml b/content/fr/examples/pods/pod-configmap-volume.yaml index 478c2e8d2b7..1724ae5c049 100644 --- a/content/fr/examples/pods/pod-configmap-volume.yaml +++ b/content/fr/examples/pods/pod-configmap-volume.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: [ "/bin/sh", "-c", "ls /etc/config/" ] volumeMounts: - name: config-volume diff --git a/content/fr/examples/pods/pod-multiple-configmap-env-variable.yaml b/content/fr/examples/pods/pod-multiple-configmap-env-variable.yaml index 4790a9c661c..c7b2b7abb82 100644 --- a/content/fr/examples/pods/pod-multiple-configmap-env-variable.yaml +++ b/content/fr/examples/pods/pod-multiple-configmap-env-variable.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: SPECIAL_LEVEL_KEY diff --git a/content/fr/examples/pods/pod-single-configmap-env-variable.yaml b/content/fr/examples/pods/pod-single-configmap-env-variable.yaml index 09d6f4a696f..e8061133504 100644 --- a/content/fr/examples/pods/pod-single-configmap-env-variable.yaml +++ b/content/fr/examples/pods/pod-single-configmap-env-variable.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-container - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox command: [ "/bin/sh", "-c", "env" ] env: # Définie la variable d'environnement diff --git 
a/content/fr/examples/pods/probe/exec-liveness.yaml b/content/fr/examples/pods/probe/exec-liveness.yaml index 6a9c9b32137..7d6ca96b3d6 100644 --- a/content/fr/examples/pods/probe/exec-liveness.yaml +++ b/content/fr/examples/pods/probe/exec-liveness.yaml @@ -7,7 +7,7 @@ metadata: spec: containers: - name: liveness - image: k8s.gcr.io/busybox + image: registry.k8s.io/busybox args: - /bin/sh - -c diff --git a/content/fr/examples/pods/probe/http-liveness.yaml b/content/fr/examples/pods/probe/http-liveness.yaml index 670af18399e..48ca861c142 100644 --- a/content/fr/examples/pods/probe/http-liveness.yaml +++ b/content/fr/examples/pods/probe/http-liveness.yaml @@ -7,7 +7,7 @@ metadata: spec: containers: - name: liveness - image: k8s.gcr.io/liveness + image: registry.k8s.io/liveness args: - /server livenessProbe: diff --git a/content/fr/examples/pods/probe/tcp-liveness-readiness.yaml b/content/fr/examples/pods/probe/tcp-liveness-readiness.yaml index 08fb77ff0f5..ef8a2f9500b 100644 --- a/content/fr/examples/pods/probe/tcp-liveness-readiness.yaml +++ b/content/fr/examples/pods/probe/tcp-liveness-readiness.yaml @@ -7,7 +7,7 @@ metadata: spec: containers: - name: goproxy - image: k8s.gcr.io/goproxy:0.1 + image: registry.k8s.io/goproxy:0.1 ports: - containerPort: 8080 readinessProbe: diff --git a/content/fr/examples/pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml b/content/fr/examples/pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml index 858ecd1b9bb..ed6c7ffbcae 100644 --- a/content/fr/examples/pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml +++ b/content/fr/examples/pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml @@ -23,4 +23,4 @@ spec: - zoneC containers: - name: pause - image: k8s.gcr.io/pause:3.1 + image: registry.k8s.io/pause:3.1 diff --git a/content/fr/examples/pods/topology-spread-constraints/one-constraint.yaml b/content/fr/examples/pods/topology-spread-constraints/one-constraint.yaml index 1062e8a489e..cc1d503c5d9 100644 --- a/content/fr/examples/pods/topology-spread-constraints/one-constraint.yaml +++ b/content/fr/examples/pods/topology-spread-constraints/one-constraint.yaml @@ -14,4 +14,4 @@ spec: foo: bar containers: - name: pause - image: k8s.gcr.io/pause:3.1 + image: registry.k8s.io/pause:3.1 diff --git a/content/fr/examples/pods/topology-spread-constraints/two-constraints.yaml b/content/fr/examples/pods/topology-spread-constraints/two-constraints.yaml index 6c0ab0009be..a75749b2841 100644 --- a/content/fr/examples/pods/topology-spread-constraints/two-constraints.yaml +++ b/content/fr/examples/pods/topology-spread-constraints/two-constraints.yaml @@ -20,4 +20,4 @@ spec: foo: bar containers: - name: pause - image: k8s.gcr.io/pause:3.1 + image: registry.k8s.io/pause:3.1 diff --git a/content/hi/_index.html b/content/hi/_index.html index 640f3f104b7..2009559be3b 100644 --- a/content/hi/_index.html +++ b/content/hi/_index.html @@ -43,12 +43,12 @@ sitemap:

- अक्टूबर 11-15, 2021 को KubeCon North America में भाग लें + अप्रैल 18-21, 2023 को KubeCon + CloudNativeCon Europe में भाग लें



- मई 17-20, 2022 को KubeCon Europe में भाग लें + 6-9 नवंबर, 2023 को KubeCon + CloudNativeCon North America में भाग लें
diff --git a/content/hi/docs/tasks/tools/install-kubectl-windows.md b/content/hi/docs/tasks/tools/install-kubectl-windows.md index fc7d1e1870a..7632d08486f 100644 --- a/content/hi/docs/tasks/tools/install-kubectl-windows.md +++ b/content/hi/docs/tasks/tools/install-kubectl-windows.md @@ -27,7 +27,7 @@ Windows पर kubectl संस्थापित करने के लिए या यदि आपके पास `curl` है, तो इस कमांड का उपयोग करें: ```powershell - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe + curl -LO https://dl.k8s.io/release/{{% param "fullversion" %}}/bin/windows/amd64/kubectl.exe ``` {{< note >}} @@ -39,7 +39,7 @@ Windows पर kubectl संस्थापित करने के लिए kubectl चेकसम फाइल डाउनलोड करें: ```powershell - curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256 + curl -LO https://dl.k8s.io/{{% param "fullversion" %}}/bin/windows/amd64/kubectl.exe.sha256 ``` चेकसम फ़ाइल से kubectl बाइनरी को मान्य करें: @@ -54,7 +54,7 @@ Windows पर kubectl संस्थापित करने के लिए - `True` या `False` परिणाम प्राप्त करने के लिए `-eq` ऑपरेटर का उपयोग करके सत्यापन को ऑटोमेट करने के लिए powershell का उपयोग करें: ```powershell - $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256) + $(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256) ``` 1. अपने `PATH` में बाइनरी जोड़ें। @@ -143,7 +143,7 @@ kubectl Bash और Zsh के लिए ऑटोकम्प्लेशन 1. इस कमांड से नवीनतम रिलीज डाउनलोड करें: ```powershell - curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe + curl -LO https://dl.k8s.io/release/{{% param "fullversion" %}}/bin/windows/amd64/kubectl-convert.exe ``` 1. बाइनरी को मान्य करें (वैकल्पिक) @@ -151,7 +151,7 @@ kubectl Bash और Zsh के लिए ऑटोकम्प्लेशन kubectl-convert चेकसम फ़ाइल डाउनलोड करें: ```powershell - curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe.sha256 + curl -LO https://dl.k8s.io/{{% param "fullversion" %}}/bin/windows/amd64/kubectl-convert.exe.sha256 ``` चेकसम फ़ाइल से kubectl-convert बाइनरी को मान्य करें: diff --git a/content/id/docs/concepts/cluster-administration/flow-control.md b/content/id/docs/concepts/cluster-administration/flow-control.md index 9cfe98e650f..5aa02f4725f 100644 --- a/content/id/docs/concepts/cluster-administration/flow-control.md +++ b/content/id/docs/concepts/cluster-administration/flow-control.md @@ -243,6 +243,7 @@ https://play.golang.org/p/Gi0PLgVHiUg, yang digunakan untuk menghitung nilai-nil | 6| 256| 2.7134626662687968e-12| 2.9516464018476436e-07| 0.0008895654642000348| | 6| 512| 4.116062922897309e-14| 4.982983350480894e-09| 2.26025764343413e-05| | 6| 1024| 6.337324016514285e-16| 8.09060164312957e-11| 4.517408062903668e-07| +{{< /table >}} ### FlowSchema diff --git a/content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index 5f02e629d3d..ab3e5567c07 100644 --- a/content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/id/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -70,7 +70,8 @@ Contoh: }, { "type": "portmap", - "capabilities": {"portMappings": true} + "capabilities": {"portMappings": true}, + "externalSetMarkChain": "KUBE-MARK-MASQ" } ] } diff --git a/content/id/docs/tasks/administer-cluster/sysctl-cluster.md b/content/id/docs/tasks/administer-cluster/sysctl-cluster.md index 
42acb5d0f59..667017180fe 100644 --- a/content/id/docs/tasks/administer-cluster/sysctl-cluster.md +++ b/content/id/docs/tasks/administer-cluster/sysctl-cluster.md @@ -60,6 +60,7 @@ Sysctl berikut ini didukung dalam kelompok _safe_: {{< note >}} Contoh `net.ipv4.tcp_syncookies` bukan merupakan Namespace pada kernel Linux versi 4.4 atau lebih rendah. +{{< /note >}} Daftar ini akan terus dikembangkan dalam versi Kubernetes berikutnya ketika kubelet mendukung mekanisme isolasi yang lebih baik. diff --git a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md index 2da9cebcdab..02d664d5304 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -41,7 +41,7 @@ yang tetap bertahan, meski Container berakhir dan dimulai ulang. Berikut berkas Hasil keluaran seperti ini: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s ``` @@ -69,7 +69,7 @@ yang tetap bertahan, meski Container berakhir dan dimulai ulang. Berikut berkas Keluarannya mirip seperti ini: - ```shell + ```console USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash @@ -86,7 +86,7 @@ yang tetap bertahan, meski Container berakhir dan dimulai ulang. Berikut berkas 2. Di dalam terminal awal, amati perubahan terhadap Pod Redis. Sampai akhirnya kamu akan melihat hal seperti ini: - ```shell + ```console NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s redis 0/1 Completed 0 6m diff --git a/content/id/examples/application/php-apache.yaml b/content/id/examples/application/php-apache.yaml index d29d2b91593..a194dce6f95 100644 --- a/content/id/examples/application/php-apache.yaml +++ b/content/id/examples/application/php-apache.yaml @@ -6,7 +6,6 @@ spec: selector: matchLabels: run: php-apache - replicas: 1 template: metadata: labels: diff --git a/content/it/docs/tutorials/hello-minikube.md b/content/it/docs/tutorials/hello-minikube.md index 3dadc7a0cca..566ed419185 100644 --- a/content/it/docs/tutorials/hello-minikube.md +++ b/content/it/docs/tutorials/hello-minikube.md @@ -76,7 +76,7 @@ modalità raccomandata per gestire la creazione e lo scaling dei Pods. eseguirà un Container basato sulla Docker image specificata. ```shell - kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4 + kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4 ``` 2. Visualizza il Deployment: diff --git a/content/it/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml b/content/it/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml index b37b616e6f7..1053cac5772 100644 --- a/content/it/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml +++ b/content/it/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml @@ -22,7 +22,7 @@ spec: - name: varlog mountPath: /var/log - name: count-agent - image: k8s.gcr.io/fluentd-gcp:1.30 + image: registry.k8s.io/fluentd-gcp:1.30 env: - name: FLUENTD_ARGS value: -c /etc/fluentd-config/fluentd.conf diff --git a/content/ja/blog/_index.md b/content/ja/blog/_index.md new file mode 100644 index 00000000000..f4f2c571cae --- /dev/null +++ b/content/ja/blog/_index.md @@ -0,0 +1,16 @@ +--- +title: Kubernetesブログ +linkTitle: ブログ +menu: + main: + title: "ブログ" + weight: 40 + post: > +

Kubernetesやコンテナ全般に関する最新ニュースを読んで、技術的なハウツーをいち早く入手しましょう。

+--- +{{< comment >}} + +ブログへの寄稿についての情報は、以下を参照してください +https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post + +{{< /comment >}} diff --git a/content/ja/case-studies/sos/index.html b/content/ja/case-studies/sos/index.html index c63fa4ab516..07ab28aad6d 100644 --- a/content/ja/case-studies/sos/index.html +++ b/content/ja/case-studies/sos/index.html @@ -37,7 +37,7 @@ case_study_details: SOS Internationalは60年にわたり、北欧諸国の顧客に信頼性の高い緊急医療および旅行支援を提供してきました。 {{< /case-studies/lead >}} -

SOSのオペレータは年間100万件の案件を扱い、100万件以上の電話を処理しています。しかし、過去4年間で同社のビジネス戦略にデジタル空間でのますます激しい開発が必要になりました。

+

SOSのオペレーターは年間100万件の案件を扱い、100万件以上の電話を処理しています。しかし、過去4年間で同社のビジネス戦略にデジタル空間でのますます激しい開発が必要になりました。

ITシステムに関していえば、会社のデータセンターで稼働する3つの伝統的なモノリスとウォーターフォールアプローチにおいて「SOSは非常に断片化された資産があります。」とエンタープライズアーキテクチャ責任者のMartin Ahrentsen氏は言います。「市場投入までの時間を短縮し、効率を高めるために新しい技術と新しい働き方の両方を導入する必要がありました。それははるかに機敏なアプローチであり、それをビジネスに提供するために役立つプラットフォームが必要でした。」

diff --git a/content/ja/docs/concepts/architecture/cri.md b/content/ja/docs/concepts/architecture/cri.md index 9ca24e41f06..fa1ba577504 100644 --- a/content/ja/docs/concepts/architecture/cri.md +++ b/content/ja/docs/concepts/architecture/cri.md @@ -1,7 +1,7 @@ --- title: コンテナランタイムインターフェイス(CRI) content_type: concept -weight: 50 +weight: 60 --- diff --git a/content/ja/docs/concepts/architecture/garbage-collection.md b/content/ja/docs/concepts/architecture/garbage-collection.md index 2ac48608b91..a92ef75f2e2 100644 --- a/content/ja/docs/concepts/architecture/garbage-collection.md +++ b/content/ja/docs/concepts/architecture/garbage-collection.md @@ -1,7 +1,7 @@ --- title: ガベージコレクション content_type: concept -weight: 50 +weight: 70 --- @@ -70,19 +70,19 @@ Kubernetesは、ReplicaSetを削除したときに残されたPodなど、owner この時点で、オブジェクトはKubernetesAPIに表示されなくなります。 フォアグラウンドカスケード削除中に、オーナーの削除をブロックする依存関係は、`ownerReference.blockOwnerDeletion=true`フィールドを持つ依存関係のみです。 -詳細については、[フォアグラウンドカスケード削除の使用](/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)を参照してください。 +詳細については、[フォアグラウンドカスケード削除の使用](/ja/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)を参照してください。 ### バックグラウンドカスケード削除 {#background-deletion} バックグラウンドカスケード削除では、Kubernetes APIサーバーがオーナーオブジェクトをすぐに削除し、コントローラーがバックグラウンドで依存オブジェクトをクリーンアップします。 デフォルトでは、フォアグラウンド削除を手動で使用するか、依存オブジェクトを孤立させることを選択しない限り、Kubernetesはバックグラウンドカスケード削除を使用します。 -詳細については、[バックグラウンドカスケード削除の使用](/docs/tasks/administer-cluster/use-cascading-deletion/#use-background-cascading-deletion)を参照してください。 +詳細については、[バックグラウンドカスケード削除の使用](/ja/docs/tasks/administer-cluster/use-cascading-deletion/#use-background-cascading-deletion)を参照してください。 ### 孤立した依存関係 Kubernetesがオーナーオブジェクトを削除すると、残された依存関係は*orphan*オブジェクトと呼ばれます。 -デフォルトでは、Kubernetesは依存関係オブジェクトを削除します。この動作をオーバーライドする方法については、[オーナーオブジェクトと孤立した依存関係の削除](/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy)を参照してください。 +デフォルトでは、Kubernetesは依存関係オブジェクトを削除します。この動作をオーバーライドする方法については、[オーナーオブジェクトの削除と従属オブジェクトの孤立](/ja/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy)を参照してください。 ## 未使用のコンテナとイメージのガベージコレクション {#containers-images} @@ -124,7 +124,7 @@ kubeletは、次の変数に基づいて未使用のコンテナをガベージ これらのリソースを管理するコントローラーに固有のオプションを設定することにより、リソースのガベージコレクションを調整できます。次のページは、ガベージコレクションを設定する方法を示しています。 - * [Kubernetesオブジェクトのカスケード削除の設定](/docs/tasks/administer-cluster/use-cascading-deletion/) + * [Kubernetesオブジェクトのカスケード削除の設定](/ja/docs/tasks/administer-cluster/use-cascading-deletion/) * [完了したジョブのクリーンアップの設定](/ja/docs/concepts/workloads/controllers/ttlafterfinished/) diff --git a/content/ja/docs/concepts/cluster-administration/addons.md b/content/ja/docs/concepts/cluster-administration/addons.md index 4e8adfd7bfd..aea14c681a6 100644 --- a/content/ja/docs/concepts/cluster-administration/addons.md +++ b/content/ja/docs/concepts/cluster-administration/addons.md @@ -1,6 +1,7 @@ --- title: アドオンのインストール content_type: concept +weight: 120 --- diff --git a/content/ja/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/ja/docs/concepts/cluster-administration/cluster-administration-overview.md index 9f27d7779df..11c7285609b 100644 --- a/content/ja/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/ja/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -49,7 +49,7 @@ Kubernetesクラスターの計画、セットアップ、設定の例を知る * [Kubernetesクラスターでのsysctlの使用](/docs/concepts/cluster-administration/sysctl-cluster/)では、管理者向けにカーネルパラメーターを設定するため`sysctl`コマンドラインツールの使用方法について解説します。 -* 
[クラスターの監査](/docs/tasks/debug-application-cluster/audit/)では、Kubernetesの監査ログの扱い方について解説します。 +* [クラスターの監査](/ja/docs/tasks/debug/debug-cluster/audit/)では、Kubernetesの監査ログの扱い方について解説します。 ### kubeletをセキュアにする * [マスターとノードのコミュニケーション](/ja/docs/concepts/architecture/master-node-communication/) @@ -61,7 +61,3 @@ Kubernetesクラスターの計画、セットアップ、設定の例を知る * [DNSのインテグレーション](/ja/docs/concepts/services-networking/dns-pod-service/)では、DNS名をKubernetes Serviceに直接名前解決する方法を解説します。 * [クラスターアクティビィのロギングと監視](/docs/concepts/cluster-administration/logging/)では、Kubernetesにおけるロギングがどのように行われ、どう実装されているかについて解説します。 - - - - diff --git a/content/ja/docs/concepts/cluster-administration/manage-deployment.md b/content/ja/docs/concepts/cluster-administration/manage-deployment.md index 084ff304b5c..280dda20466 100644 --- a/content/ja/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/ja/docs/concepts/cluster-administration/manage-deployment.md @@ -1,6 +1,6 @@ --- reviewers: -- +- title: リソースの管理 content_type: concept weight: 40 @@ -157,7 +157,7 @@ deployment.apps/my-deployment created persistentvolumeclaim/my-pvc created ``` -`kubectl`についてさらに知りたい場合は、[kubectlの概要](/ja/docs/reference/kubectl/overview/)を参照してください。 +`kubectl`についてさらに知りたい場合は、[コマンドラインツール(kubectl)](/ja/docs/reference/kubectl/)を参照してください。 ## ラベルを有効に使う @@ -449,6 +449,5 @@ kubectl edit deployment/my-nginx ## {{% heading "whatsnext" %}} -- [アプリケーションの調査とデバッグのための`kubectl`の使用方法](/docs/tasks/debug-application-cluster/debug-application-introspection/)について学んでください。 +- [アプリケーションの調査とデバッグのための`kubectl`の使用方法](/ja/docs/tasks/debug/debug-application/debug-running-pod/)について学んでください。 - [設定のベストプラクティスとTIPS](/ja/docs/concepts/configuration/overview/)を参照してください。 - diff --git a/content/ja/docs/concepts/cluster-administration/proxies.md b/content/ja/docs/concepts/cluster-administration/proxies.md index 4738993ab05..5f815ed401b 100644 --- a/content/ja/docs/concepts/cluster-administration/proxies.md +++ b/content/ja/docs/concepts/cluster-administration/proxies.md @@ -1,7 +1,7 @@ --- title: Kubernetesのプロキシー content_type: concept -weight: 90 +weight: 100 --- diff --git a/content/ja/docs/concepts/cluster-administration/system-logs.md b/content/ja/docs/concepts/cluster-administration/system-logs.md index cae76d7e34b..a83c4c0c296 100644 --- a/content/ja/docs/concepts/cluster-administration/system-logs.md +++ b/content/ja/docs/concepts/cluster-administration/system-logs.md @@ -1,7 +1,7 @@ --- title: システムログ content_type: concept -weight: 60 +weight: 80 --- diff --git a/content/ja/docs/concepts/configuration/manage-resources-containers.md b/content/ja/docs/concepts/configuration/manage-resources-containers.md index 62b01aa0dfe..8c01988ea81 100644 --- a/content/ja/docs/concepts/configuration/manage-resources-containers.md +++ b/content/ja/docs/concepts/configuration/manage-resources-containers.md @@ -170,7 +170,7 @@ Dockerを使用する場合: Podのリソース使用量は、Podのステータスの一部として報告されます。 -オプションの[監視ツール](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)がクラスターにおいて利用可能な場合、Podのリソース使用量は[メトリクスAPI](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)から直接、もしくは監視ツールから取得できます。 +オプションの[監視ツール](/ja/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)がクラスターにおいて利用可能な場合、Podのリソース使用量は[メトリクスAPI](/ja/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#the-metrics-api)から直接、もしくは監視ツールから取得できます。 ## ローカルのエフェメラルストレージ {#local-ephemeral-storage} @@ -378,7 +378,7 @@ Kubernetesが使用しないようにする必要があります。 ## 拡張リソース {#extended-resources} 拡張リソースは`kubernetes.io`ドメインの外で完全に修飾されたリソース名です。 
-これにより、クラスタオペレータはKubernetesに組み込まれていないリソースをアドバタイズし、ユーザはそれを利用することができるようになります。 +これにより、クラスタオペレーターはKubernetesに組み込まれていないリソースをアドバタイズし、ユーザはそれを利用することができるようになります。 拡張リソースを使用するためには、2つのステップが必要です。 第一に、クラスタオペレーターは拡張リソースをアドバタイズする必要があります。 @@ -394,7 +394,7 @@ Nodeレベルの拡張リソースはNodeに関連付けられています。 各Nodeにデバイスプラグインで管理されているリソースをアドバタイズする方法については、[デバイスプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)を参照してください。 ##### その他のリソース {#other-resources} -新しいNodeレベルの拡張リソースをアドバタイズするには、クラスタオペレータはAPIサーバに`PATCH`HTTPリクエストを送信し、クラスタ内のNodeの`status.capacity`に利用可能な量を指定します。 +新しいNodeレベルの拡張リソースをアドバタイズするには、クラスタオペレーターはAPIサーバに`PATCH`HTTPリクエストを送信し、クラスタ内のNodeの`status.capacity`に利用可能な量を指定します。 この操作の後、ノードの`status.capacity`には新しいリソースが含まれます。 `status.allocatable`フィールドは、kubeletによって非同期的に新しいリソースで自動的に更新されます。 スケジューラはPodの適合性を評価する際にNodeの`status.allocatable`値を使用するため、Nodeの容量に新しいリソースを追加してから、そのNodeでリソースのスケジューリングを要求する最初のPodが現れるまでには、短い遅延が生じる可能性があることに注意してください。 diff --git a/content/ja/docs/concepts/containers/container-lifecycle-hooks.md b/content/ja/docs/concepts/containers/container-lifecycle-hooks.md index 908025a1750..b59fe653002 100644 --- a/content/ja/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/ja/docs/concepts/containers/container-lifecycle-hooks.md @@ -1,7 +1,7 @@ --- title: コンテナライフサイクルフック content_type: concept -weight: 30 +weight: 40 --- diff --git a/content/ja/docs/concepts/containers/runtime-class.md b/content/ja/docs/concepts/containers/runtime-class.md index 2c663bf7c13..87110cf13f3 100644 --- a/content/ja/docs/concepts/containers/runtime-class.md +++ b/content/ja/docs/concepts/containers/runtime-class.md @@ -2,7 +2,7 @@ reviewers: title: ランタイムクラス(Runtime Class) content_type: concept -weight: 20 +weight: 30 --- diff --git a/content/ja/docs/concepts/extend-kubernetes/api-extension/_index.md b/content/ja/docs/concepts/extend-kubernetes/api-extension/_index.md index fc1a95ddfff..b00bd839eb2 100644 --- a/content/ja/docs/concepts/extend-kubernetes/api-extension/_index.md +++ b/content/ja/docs/concepts/extend-kubernetes/api-extension/_index.md @@ -1,4 +1,4 @@ --- title: Kubernetes APIの拡張 -weight: 20 +weight: 30 --- diff --git a/content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md index 9fea0905c6e..1679d2929f0 100644 --- a/content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md +++ b/content/ja/docs/concepts/extend-kubernetes/api-extension/custom-resources.md @@ -18,7 +18,7 @@ weight: 10 *カスタムリソース* は、Kubernetes APIの拡張で、デフォルトのKubernetesインストールでは、必ずしも利用できるとは限りません。つまりそれは、特定のKubernetesインストールのカスタマイズを表します。しかし、今現在、多数のKubernetesのコア機能は、カスタムリソースを用いて作られており、Kubernetesをモジュール化しています。 -カスタムリソースは、稼働しているクラスターに動的に登録され、現れたり、消えたりし、クラスター管理者はクラスター自体とは無関係にカスタムリソースを更新できます。一度、カスタムリソースがインストールされると、ユーザーは[kubectl](/ja/docs/reference/kubectl/overview/)を使い、ビルトインのリソースである *Pods* と同じように、オブジェクトを作成、アクセスすることが可能です。 +カスタムリソースは、稼働しているクラスターに動的に登録され、現れたり、消えたりし、クラスター管理者はクラスター自体とは無関係にカスタムリソースを更新できます。一度、カスタムリソースがインストールされると、ユーザーは[kubectl](/ja/docs/reference/kubectl/)を使い、ビルトインのリソースである *Pods* と同じように、オブジェクトを作成、アクセスすることが可能です。 ## カスタムコントローラー diff --git a/content/ja/docs/concepts/overview/kubernetes-api.md b/content/ja/docs/concepts/overview/kubernetes-api.md index 5b6388a306a..e662648ec46 100644 --- a/content/ja/docs/concepts/overview/kubernetes-api.md +++ b/content/ja/docs/concepts/overview/kubernetes-api.md @@ -18,7 +18,7 @@ APIサーバーは、エンドユーザー、クラスターのさまざまな Kubernetes APIを使用すると、Kubernetes 
API内のオブジェクトの状態をクエリで操作できます(例:Pod、Namespace、ConfigMap、Events)。 -ほとんどの操作は、APIを使用している[kubectl](/docs/reference/kubectl/overview/)コマンドラインインターフェースもしくは[kubeadm](/docs/reference/setup-tools/kubeadm/)のような別のコマンドラインツールを通して実行できます。 +ほとんどの操作は、APIを使用している[kubectl](/ja/docs/reference/kubectl/)コマンドラインインターフェースもしくは[kubeadm](/docs/reference/setup-tools/kubeadm/)のような別のコマンドラインツールを通して実行できます。 RESTコールを利用して直接APIにアクセスすることも可能です。 Kubernetes APIを利用してアプリケーションを書いているのであれば、[client libraries](/docs/reference/using-api/client-libraries/)の利用を考えてみてください。 @@ -145,7 +145,9 @@ APIの発展や拡張を簡易に行えるようにするため、Kubernetesは[ APIリソースは、APIグループ、リソースタイプ、ネームスペース(namespacedリソースのための)、名前によって区別されます。APIサーバーは、APIバージョン間の変換を透過的に処理します。すべてのバージョンの違いは、実際のところ同じ永続データとして表現されます。APIサーバーは、同じ基本的なデータを複数のAPIバージョンで提供することができます。 -例えば、同じリソースで`v1`と`v1beta1`の2つのバージョンが有ることを考えてみます。`v1beta1`バージョンのAPIを利用しオブジェクトを最初に作成したとして、`v1beta1`もしくは`v1`どちらのAPIバージョンを利用してもオブジェクトのread、update、deleteができます。 +例えば、同じリソースで`v1`と`v1beta1`の2つのバージョンが有ることを考えてみます。 +`v1beta1`バージョンのAPIを利用しオブジェクトを最初に作成したとして、`v1beta1`バージョンが非推奨となり削除されるまで、`v1beta1`もしくは`v1`どちらのAPIバージョンを利用してもオブジェクトのread、update、deleteができます。 +その時点では、`v1` APIを使用してオブジェクトの修正やアクセスを継続することが可能です。 ## APIの変更 @@ -156,10 +158,18 @@ Kubernetesプロジェクトは、既存のクライアントとの互換性を 基本的に、新しいAPIリソースと新しいリソースフィールドは追加することができます。 リソースまたはフィールドを削除するには、[API非推奨ポリシー](/docs/reference/using-api/deprecation-policy/)に従ってください。 -Kubernetesは、公式のKubernetes APIが一度一般提供(GA)に達した場合、通常は`v1`APIバージョンです、互換性を維持することを強い責任があります。さらに、Kubernetesは _beta_ についても可能な限り互換性を維持し続けます。ベータAPIを採用した場合、その機能が安定版になったあとでも、APIを利用してクラスタを操作し続けることができます。 +Kubernetesは、通常はAPIバージョン`v1`として、公式のKubernetes APIが一度一般提供(GA)に達した場合、互換性を維持することを強く確約します。 +さらに、Kubernetesは、公式Kubernetes APIの _beta_ APIバージョン経由で永続化されたデータとの互換性を維持します。 +そして、機能が安定したときにGA APIバージョン経由でデータを変換してアクセスできることを保証します。 + +beta APIを採用した場合、APIが卒業(Graduate)したら、後続のbetaまたはstable APIに移行する必要があります。 +これを行うのに最適な時期は、オブジェクトが両方のAPIバージョンから同時にアクセスできるbeta APIの非推奨期間中です。 +beta APIが非推奨期間を終えて提供されなくなったら、代替APIバージョンを使用する必要があります。 {{< note >}} -Kubernetesは、 _alpha_ APIバージョンについても互換性の維持に注力しますが、いくつかの事情により不可である場合もあります。アルファAPIバージョンを使っている場合、クラスタのアップグレードやAPIが変更された場合に備えて、Kubernetesのリリースノートを確認してください。 +Kubernetesは、 _alpha_ APIバージョンについても互換性の維持に注力しますが、いくつかの事情により不可である場合もあります。 +alpha APIバージョンを使っている場合、クラスターをアップグレードする時にKubernetesのリリースノートを確認してください。 +APIが互換性のない方法で変更された場合は、アップグレードをする前に既存のalphaオブジェクトをすべて削除する必要があります。 {{< /note >}} APIバージョンレベルの定義に関する詳細は[APIバージョンのリファレンス](/docs/reference/using-api/#api-versioning)を参照してください。 diff --git a/content/ja/docs/concepts/policy/resource-quotas.md b/content/ja/docs/concepts/policy/resource-quotas.md index 86fe7194ae4..89e8b3f7b88 100644 --- a/content/ja/docs/concepts/policy/resource-quotas.md +++ b/content/ja/docs/concepts/policy/resource-quotas.md @@ -2,7 +2,7 @@ reviewers: title: リソースクォータ content_type: concept -weight: 10 +weight: 20 --- diff --git a/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md index 7256b988f1b..e01cef7d0f3 100644 --- a/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/ja/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -7,212 +7,230 @@ weight: 20 -{{< glossary_tooltip text="Pod" term_id="pod" >}}が稼働する{{< glossary_tooltip text="Node" term_id="node" >}}を特定のものに指定したり、優先条件を指定して制限することができます。 -これを実現するためにはいくつかの方法がありますが、推奨されている方法は[ラベルでの選択](/ja/docs/concepts/overview/working-with-objects/labels/)です。 -スケジューラーが最適な配置を選択するため、一般的にはこのような制限は不要です(例えば、複数のPodを別々のNodeへデプロイしたり、Podを配置する際にリソースが不十分なNodeにはデプロイされないことが挙げられます)が、 
-SSDが搭載されているNodeにPodをデプロイしたり、同じアベイラビリティーゾーン内で通信する異なるサービスのPodを同じNodeにデプロイする等、柔軟な制御が必要なこともあります。 - - +{{< glossary_tooltip text="Pod" term_id="pod" >}}を特定の{{< glossary_tooltip text="Node" term_id="node" >}}で実行するように _制限_ したり、特定のNodeで実行することを _優先_ させたりといった制約をかけることができます。 +これを実現するためにはいくつかの方法がありますが、推奨されている方法は、すべて[ラベルセレクター](/ja/docs/concepts/overview/working-with-objects/labels/)を使用して選択を容易にすることです。 +多くの場合、このような制約を設定する必要はなく、{{< glossary_tooltip text="スケジューラー" term_id="kube-scheduler" >}}が自動的に妥当な配置を行います(例えば、Podを複数のNodeに分散させ、空きリソースが十分でないNodeにPodを配置しないようにすることができます)。 +しかし、例えばSSDが接続されているNodeにPodが配置されるようにしたり、多くの通信を行う2つの異なるサービスのPodを同じアベイラビリティーゾーンに配置したりする等、どのNodeに配置するかを制御したい状況もあります。 -## nodeSelector +Kubernetesが特定のPodの配置場所を選択するために、以下の方法があります: -`nodeSelector`は、Nodeを選択するための、最も簡単で推奨されている手法です。 -`nodeSelector`はPodSpecのフィールドです。これはkey-valueペアのマップを特定します。 -あるノードでPodを稼働させるためには、そのノードがラベルとして指定されたkey-valueペアを保持している必要があります(複数のラベルを保持することも可能です)。 -最も一般的な使用方法は、1つのkey-valueペアを付与する方法です。 + * [nodeラベル](#built-in-node-labels)に対してマッチングを行う[nodeSelector](#nodeselector)フィールド + * [アフィニティとアンチアフィニティ](#affinity-and-anti-affinity) + * [nodeName](#nodename)フィールド + * [Podのトポロジー分散制約](#pod-topology-spread-constraints) -以下に、`nodeSelector`の使用例を紹介します。 +## Nodeラベル {#built-in-node-labels} -### ステップ0: 前提条件 +他の多くのKubernetesオブジェクトと同様に、Nodeにも[ラベル](/ja/docs/concepts/overview/working-with-objects/labels/)があります。[手動でラベルを付ける](/ja/docs/tasks/configure-pod-container/assign-pods-nodes/#ラベルをNodeに追加する)ことができます。 +また、Kubernetesはクラスター内のすべてのNodeに対し、いくつかの標準ラベルを付けます。Nodeラベルの一覧については[よく使われるラベル、アノテーションとtaint](/docs/reference/labels-annotations-taints/)を参照してください。 -この例では、KubernetesのPodに関して基本的な知識を有していることと、[Kubernetesクラスターのセットアップ](/ja/docs/setup/)がされていることが前提となっています。 +{{}} +これらのラベルの値はクラウドプロバイダー固有のもので、信頼性を保証できません。 +例えば、`kubernetes.io/hostname`の値はある環境ではNode名と同じになり、他の環境では異なる値になることがあります。 +{{}} -### ステップ1: Nodeへのラベルの付与 +### Nodeの分離/制限 {#node-isolation-restriction} -`kubectl get nodes`で、クラスターのノードの名前を取得してください。 -そして、ラベルを付与するNodeを選び、`kubectl label nodes =`で選択したNodeにラベルを付与します。 -例えば、Nodeの名前が'kubernetes-foo-node-1.c.a-robinson.internal'、付与するラベルが'disktype=ssd'の場合、`kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`によってラベルが付与されます。 +Nodeにラベルを追加することで、Podを特定のNodeまたはNodeグループ上でのスケジューリングの対象にすることができます。この機能を使用すると、特定のPodが一定の独立性、安全性、または規制といった属性を持ったNode上でのみ実行されるようにすることができます。 -`kubectl get nodes --show-labels`によって、ノードにラベルが付与されたかを確認することができます。 -また、`kubectl describe node "nodename"`から、そのNodeの全てのラベルを表示することもできます。 +Node分離するのにラベルを使用する場合、{{}}が修正できないラベルキーを選択してください。 +これにより、侵害されたNodeが自身でそれらのラベルを設定することで、スケジューラーがそのNodeにワークロードをスケジュールしてしまうのを防ぐことができます。 -### ステップ2: PodへのnodeSelectorフィールドの追加 +[`NodeRestriction`アドミッションプラグイン](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)は、kubeletが`node-restriction.kubernetes.io/`というプレフィックスを持つラベルを設定または変更するのを防ぎます。 -該当のPodのconfigファイルに、nodeSelectorのセクションを追加します: -例として以下のconfigファイルを扱います: +ラベルプレフィックスをNode分離に利用するには: -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: nginx - labels: - env: test -spec: - containers: - - name: nginx - image: nginx -``` +1. [Node認可](/docs/reference/access-authn-authz/node/)を使用していることと、`NodeRestriction` アドミッションプラグインが _有効_ になっていることを確認します。 +2. 
`node-restriction.kubernetes.io/`プレフィックスを持つラベルをNodeに追加し、 [nodeSelector](#nodeselector)でそれらのラベルを使用します。 + 例えば、`example.com.node-restriction.kubernetes.io/fips=true`や`example.com.node-restriction.kubernetes.io/pci-dss=true`などです。 -nodeSelectorを以下のように追加します: +## nodeSelector {#nodeselector} -{{< codenew file="pods/pod-nginx.yaml" >}} +`nodeSelector`は、Node選択制約の中で最もシンプルな推奨形式です。 +Podのspec(仕様)に`nodeSelector`フィールドを追加することで、ターゲットNodeが持つべき[Nodeラベル](#built-in-node-labels)を指定できます。 +Kubernetesは指定された各ラベルを持つNodeにのみ、Podをスケジューリングします。 -`kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`により、Podは先ほどラベルを付与したNodeへスケジュールされます。 -`kubectl get pods -o wide`で表示される"NODE"の列から、PodがデプロイされているNodeを確認することができます。 - -## 補足: ビルトインNodeラベル {#built-in-node-labels} - -明示的に[付与](#step-one-attach-label-to-the-node)するラベルの他に、事前にNodeへ付与されているものもあります。 -これらのラベルのリストは、[Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/)を参照してください。 - -{{< note >}} -これらのラベルは、クラウドプロバイダー固有であり、確実なものではありません。 -例えば、`kubernetes.io/hostname`の値はNodeの名前と同じである環境もあれば、異なる環境もあります。 -{{< /note >}} - - -## Nodeの隔離や制限 -Nodeにラベルを付与することで、Podは特定のNodeやNodeグループにスケジュールされます。 -これにより、特定のPodを、確かな隔離性や安全性、特性を持ったNodeで稼働させることができます。 -この目的でラベルを使用する際に、Node上のkubeletプロセスに上書きされないラベルキーを選択することが強く推奨されています。 -これは、安全性が損なわれたNodeがkubeletの認証情報をNodeのオブジェクトに設定したり、スケジューラーがそのようなNodeにデプロイすることを防ぎます。 - -`NodeRestriction`プラグインは、kubeletが`node-restriction.kubernetes.io/`プレフィックスを有するラベルの設定や上書きを防ぎます。 -Nodeの隔離にラベルのプレフィックスを使用するためには、以下のようにします。 - -1. [Node authorizer](/docs/reference/access-authn-authz/node/)を使用していることと、[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)が _有効_ になっていること。 -2. Nodeに`node-restriction.kubernetes.io/` プレフィックスのラベルを付与し、そのラベルがnode selectorに指定されていること。 -例えば、`example.com.node-restriction.kubernetes.io/fips=true` または `example.com.node-restriction.kubernetes.io/pci-dss=true`のようなラベルです。 +詳しい情報については[PodをNodeに割り当てる](/ja/docs/tasks/configure-pod-container/assign-pods-nodes/)を参照してください。 ## アフィニティとアンチアフィニティ {#affinity-and-anti-affinity} -`nodeSelector`はPodの稼働を特定のラベルが付与されたNodeに制限する最も簡単な方法です。 -アフィニティ/アンチアフィニティでは、より柔軟な指定方法が提供されています。 -拡張機能は以下の通りです。 +`nodeSelector`はPodを特定のラベルが付与されたNodeに制限する最も簡単な方法です。 +アフィニティとアンチアフィニティでは、定義できる制約の種類が拡張されています。 +アフィニティとアンチアフィニティのメリットは以下の通りです。 -1. アフィニティ/アンチアフィニティという用語はとても表現豊かです。この用語は論理AND演算で作成された完全一致だけではなく、より多くのマッチングルールを提供します。 -2. 必須条件ではなく優先条件を指定でき、条件を満たさない場合でもPodをスケジュールさせることができます。 -3. 
Node自体のラベルではなく、Node(または他のトポロジカルドメイン)上で稼働している他のPodのラベルに対して条件を指定することができ、そのPodと同じ、または異なるドメインで稼働させることができます。 +* アフィニティとアンチアフィニティで使われる言語は、より表現力が豊かです。`nodeSelector`は指定されたラベルを全て持つNodeを選択するだけです。アフィニティとアンチアフィニティは選択ロジックをより細かく制御することができます。 +* ルールが*柔軟*であったり*優先*での指定ができたりするため、一致するNodeが見つからない場合でも、スケジューラーはPodをスケジュールします。 +* Node自体のラベルではなく、Node(または他のトポロジカルドメイン)上で稼働している他のPodのラベルを使ってPodを制約することができます。これにより、Node上にどのPodを共存させるかのルールを定義することができます。 -アフィニティは"Nodeアフィニティ"と"Pod間アフィニティ/アンチアフィニティ"の2種類から成ります。 -Nodeアフィニティは`nodeSelector`(前述の2つのメリットがあります)に似ていますが、Pod間アフィニティ/アンチアフィニティは、上記の3番目の機能に記載している通り、NodeのラベルではなくPodのラベルに対して制限をかけます。 +アフィニティ機能は、2種類のアフィニティで構成されています: -### Nodeアフィニティ +* *Nodeアフィニティ*は`nodeSelector`フィールドと同様に機能しますが、より表現力が豊かで、より柔軟にルールを指定することができます。 +* *Pod間アフィニティとアンチアフィニティ*は、他のPodのラベルを元に、Podを制約することができます。 + +### Nodeアフィニティ {#node-affinity} Nodeアフィニティは概念的には、NodeのラベルによってPodがどのNodeにスケジュールされるかを制限する`nodeSelector`と同様です。 -現在は2種類のNodeアフィニティがあり、`requiredDuringSchedulingIgnoredDuringExecution`と`preferredDuringSchedulingIgnoredDuringExecution`です。 -前者はNodeにスケジュールされるPodが条件を満たすことが必須(`nodeSelector`に似ていますが、より柔軟に条件を指定できます)であり、後者は条件を指定できますが保証されるわけではなく、優先的に考慮されます。 -"IgnoredDuringExecution"の意味するところは、`nodeSelector`の機能と同様であり、Nodeのラベルが変更され、Podがその条件を満たさなくなった場合でも -PodはそのNodeで稼働し続けるということです。 -将来的には、`requiredDuringSchedulingIgnoredDuringExecution`に、PodのNodeアフィニティに記された必須要件を満たさなくなったNodeからそのPodを退避させることができる機能を備えた`requiredDuringSchedulingRequiredDuringExecution`が提供される予定です。 +Nodeアフィニティには2種類あります: -それぞれの使用例として、 -`requiredDuringSchedulingIgnoredDuringExecution` は、"インテルCPUを供えたNode上でPodを稼働させる"、 -`preferredDuringSchedulingIgnoredDuringExecution`は、"ゾーンXYZでPodの稼働を試みますが、実現不可能な場合には他の場所で稼働させる" -といった方法が挙げられます。 + * `requiredDuringSchedulingIgnoredDuringExecution`: + スケジューラーは、ルールが満たされない限り、Podをスケジュールすることができません。これは`nodeSelector`と同じように機能しますが、より表現力豊かな構文になっています。 + * `preferredDuringSchedulingIgnoredDuringExecution`: + スケジューラーは、対応するルールを満たすNodeを探そうとします。 一致するNodeが見つからなくても、スケジューラーはPodをスケジュールします。 -Nodeアフィニティは、PodSpecの`affinity`フィールドにある`nodeAffinity`フィールドで特定します。 +{{}} +上記の2種類にある`IgnoredDuringExecution`は、KubernetesがPodをスケジュールした後にNodeラベルが変更されても、Podは実行し続けることを意味します。 +{{}} -Nodeアフィニティを使用したPodの例を以下に示します: +Podのspec(仕様)にある`.spec.affinity.nodeAffinity`フィールドを使用して、Nodeアフィニティを指定することができます。 + +例えば、次のようなPodのspec(仕様)を考えてみましょう: {{< codenew file="pods/pod-with-node-affinity.yaml" >}} -このNodeアフィニティでは、Podはキーが`kubernetes.io/e2e-az-name`、値が`e2e-az1`または`e2e-az2`のラベルが付与されたNodeにしか配置されません。 -加えて、キーが`another-node-label-key`、値が`another-node-label-value`のラベルが付与されたNodeが優先されます。 +この例では、以下のルールが適用されます: -この例ではオペレーター`In`が使われています。 -Nodeアフィニティでは、`In`、`NotIn`、`Exists`、`DoesNotExist`、`Gt`、`Lt`のオペレーターが使用できます。 -`NotIn`と`DoesNotExist`はNodeアンチアフィニティ、またはPodを特定のNodeにスケジュールさせない場合に使われる[Taints](/ja/docs/concepts/scheduling-eviction/taint-and-toleration/)に使用します。 + * Nodeには`topology.kubernetes.io/zone`をキーとするラベルが*必要*で、そのラベルの値は`antarctica-east1`または`antarctica-west1`のいずれかでなければなりません。 + * Nodeにはキー名が`another-node-label-key`で、値が`another-node-label-value`のラベルを持つことが*望ましい*です。 -`nodeSelector`と`nodeAffinity`の両方を指定した場合、Podは**両方の**条件を満たすNodeにスケジュールされます。 +`operator`フィールドを使用して、Kubernetesがルールを解釈する際に使用できる論理演算子を指定することができます。`In`、`NotIn`、`Exists`、`DoesNotExist`、`Gt`、`Lt`が使用できます。 -`nodeAffinity`内で複数の`nodeSelectorTerms`を指定した場合、Podは**いずれかの**`nodeSelectorTerms`を満たしたNodeへスケジュールされます。 +`NotIn`と`DoesNotExist`を使って、Nodeのアンチアフィニティ動作を定義することができます。また、[NodeのTaint](/ja/docs/concepts/scheduling-eviction/taint-and-toleration/)を使用して、特定のNodeからPodをはじくこともできます。 -`nodeSelectorTerms`内で複数の`matchExpressions`を指定した場合にはPodは**全ての**`matchExpressions`を満たしたNodeへスケジュールされます。 +{{}} 
+`nodeSelector`と`nodeAffinity`の両方を指定した場合、*両方の*条件を満たさないとPodはNodeにスケジュールされません。 -PodがスケジュールされたNodeのラベルを削除したり変更しても、Podは削除されません。 -言い換えると、アフィニティはPodをスケジュールする際にのみ考慮されます。 +`nodeAffinity`タイプに関連付けられた`nodeSelectorTerms`内に、複数の条件を指定した場合、Podは指定した条件のいずれかを満たしたNodeへスケジュールされます(条件はORされます)。 -`preferredDuringSchedulingIgnoredDuringExecution`内の`weight`フィールドは、1から100の範囲で指定します。 -全ての必要条件(リソースやRequiredDuringSchedulingアフィニティ等)を満たしたNodeに対して、スケジューラーはそのNodeがMatchExpressionsを満たした場合に、このフィルードの"weight"を加算して合計を計算します。 -このスコアがNodeの他の優先機能のスコアと組み合わせれ、最も高いスコアを有したNodeが優先されます。 +`nodeSelectorTerms`内の条件に関連付けられた1つの`matchExpressions`フィールド内に、複数の条件を指定した場合、Podは全ての条件を満たしたNodeへスケジュールされます(条件はANDされます)。 +{{}} -### Pod間アフィニティとアンチアフィニティ +詳細については[Nodeアフィニティを利用してPodをNodeに割り当てる](/ja/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)を参照してください。 -Pod間アフィニティとアンチアフィニティは、Nodeのラベルではなく、すでにNodeで稼働しているPodのラベルに従ってPodがスケジュールされるNodeを制限します。 -このポリシーは、"XにてルールYを満たすPodがすでに稼働している場合、このPodもXで稼働させる(アンチアフィニティの場合は稼働させない)"という形式です。 -Yはnamespaceのリストで指定したLabelSelectorで表されます。 -Nodeと異なり、Podはnamespaceで区切られているため(それゆえPodのラベルも暗黙的にnamespaceで区切られます)、Podのラベルを指定するlabel selectorは、どのnamespaceにselectorを適用するかを指定する必要があります。 -概念的に、XはNodeや、ラック、クラウドプロバイダゾーン、クラウドプロバイダのリージョン等を表すトポロジードメインです。 -これらを表すためにシステムが使用するNodeラベルのキーである`topologyKey`を使うことで、トポロジードメインを指定することができます。 -先述のセクション[補足: ビルトインNodeラベル](#interlude-built-in-node-labels)にてラベルの例が紹介されています。 +#### Nodeアフィニティの重み {#node-affinity-weight} +`preferredDuringSchedulingIgnoredDuringExecution`アフィニティタイプの各インスタンスに、1から100の範囲の`weight`を指定できます。 +Podの他のスケジューリング要件をすべて満たすNodeを見つけると、スケジューラーはそのNodeが満たすすべての優先ルールを繰り返し実行し、対応する式の`weight`値を合計に加算します。 + +最終的な合計は、そのNodeの他の優先度関数のスコアに加算されます。合計スコアが最も高いNodeが、スケジューラーがPodのスケジューリングを決定する際に優先されます。 + +例えば、次のようなPodのspec(仕様)を考えてみましょう: + +{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}} + +`preferredDuringSchedulingIgnoredDuringExecution`ルールにマッチするNodeとして、一つは`label-1:key-1`ラベル、もう一つは`label-2:key-2`ラベルの2つの候補がある場合、スケジューラーは各Nodeの`weight`を考慮し、その重みとNodeの他のスコアを加え、最終スコアが最も高いNodeにPodをスケジューリングします。 + +{{}} +この例でKubernetesにPodを正常にスケジュールさせるには、`kubernetes.io/os=linux`ラベルを持つ既存のNodeが必要です。 +{{}} + +#### スケジューリングプロファイルごとのNodeアフィニティ {#node-affinity-per-scheduling-profile} + +{{< feature-state for_k8s_version="v1.20" state="beta" >}} + +複数の[スケジューリングプロファイル](/ja/docs/reference/scheduling/config/#multiple-profiles)を設定する場合、プロファイルにNodeアフィニティを関連付けることができます。これは、プロファイルが特定のNode群にのみ適用される場合に便利です。[スケジューラーの設定](/ja/docs/reference/scheduling/config/)にある[`NodeAffinity`プラグイン](/ja/docs/reference/scheduling/config/#scheduling-plugins)の`args`フィールドに`addedAffinity`を追加すると実現できます。例えば: + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta3 +kind: KubeSchedulerConfiguration + +profiles: + - schedulerName: default-scheduler + - schedulerName: foo-scheduler + pluginConfig: + - name: NodeAffinity + args: + addedAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: scheduler-profile + operator: In + values: + - foo +``` + +`addedAffinity`は、Podの仕様(spec)で指定されたNodeアフィニティに加え、`.spec.schedulerName`を`foo-scheduler`に設定したすべてのPodに適用されます。つまり、Podにマッチするためには、Nodeは`addedAffinity`とPodの`.spec.NodeAffinity`を満たす必要があるのです。 + +`addedAffinity`はエンドユーザーには見えないので、その動作はエンドユーザーにとって予期しないものになる可能性があります。スケジューラープロファイル名と明確な相関関係のあるNodeラベルを使用すべきです。 {{< note >}} -Pod間アフィニティとアンチアフィニティは、大規模なクラスター上で使用する際にスケジューリングを非常に遅くする恐れのある多くの処理を要します。 -そのため、数百台以上のNodeから成るクラスターでは使用することを推奨されません。 
+[DaemonSetのPodを作成する](/ja/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler)DaemonSetコントローラーは、スケジューリングプロファイルをサポートしていません。DaemonSetコントローラーがPodを作成すると、デフォルトのKubernetesスケジューラーがそれらのPodを配置し、DaemonSetコントローラーの`nodeAffinity`ルールに優先して従います。 +{{< /note >}} + +### Pod間のアフィニティとアンチアフィニティ {#inter-pod-affinity-and-anti-affinity} + +Pod間のアフィニティとアンチアフィニティは、Nodeのラベルではなく、すでにNode上で稼働している**Pod**のラベルに従って、PodがどのNodeにスケジュールされるかを制限できます。 + +XはNodeや、ラック、クラウドプロバイダーのゾーンやリージョン等を表すトポロジードメインで、YはKubernetesが満たそうとするルールである場合、Pod間のアフィニティとアンチアフィニティのルールは、"XにてルールYを満たすPodがすでに稼働している場合、このPodもXで実行すべき(アンチアフィニティの場合はすべきではない)"という形式です。 + +これらのルール(Y)は、オプションの関連する名前空間のリストを持つ[ラベルセレクター](/ja/docs/concepts/overview/working-with-objects/labels/#label-selectors)で表現されます。PodはKubernetesの名前空間オブジェクトであるため、Podラベルも暗黙的に名前空間を持ちます。Kubernetesが指定された名前空間でラベルを探すため、Podラベルのラベルセレクターは、名前空間を指定する必要があります。 + +トポロジードメイン(X)は`topologyKey`で表現され、システムがドメインを示すために使用するNodeラベルのキーになります。具体例は[よく知られたラベル、アノテーションとTaint](/docs/reference/labels-annotations-taints/)を参照してください。 + +{{< note >}} +Pod間アフィニティとアンチアフィニティはかなりの処理量を必要とするため、大規模クラスターでのスケジューリングが大幅に遅くなる可能性があります +そのため、数百台以上のNodeから成るクラスターでの使用は推奨されません。 {{< /note >}} {{< note >}} -Podのアンチアフィニティは、Nodeに必ずラベルが付与されている必要があります。 -言い換えると、クラスターの全てのNodeが、`topologyKey`で指定されたものに合致する適切なラベルが必要になります。 -それらが付与されていないNodeが存在する場合、意図しない挙動を示すことがあります。 +Podのアンチアフィニティは、Nodeに必ず一貫性の持つラベルが付与されている必要があります。 +言い換えると、クラスターの全てのNodeが、`topologyKey`に合致する適切なラベルが必要になります。 +一部、または全部のNodeに`topologyKey`ラベルが指定されていない場合、意図しない挙動に繋がる可能性があります。 {{< /note >}} -Nodeアフィニティと同様に、PodアフィニティとPodアンチアフィニティにも必須条件と優先条件を示す`requiredDuringSchedulingIgnoredDuringExecution`と`preferredDuringSchedulingIgnoredDuringExecution`があります。 -前述のNodeアフィニティのセクションを参照してください。 -`requiredDuringSchedulingIgnoredDuringExecution`を指定するアフィニティの使用例は、"Service AのPodとService BのPodが密に通信する際、それらを同じゾーンで稼働させる場合"です。 -また、`preferredDuringSchedulingIgnoredDuringExecution`を指定するアンチアフィニティの使用例は、"ゾーンをまたいでPodのサービスを稼働させる場合"(Podの数はゾーンの数よりも多いため、必須条件を指定すると合理的ではありません)です。 +#### Pod間のアフィニティとアンチアフィニティの種類 {#types-of-inter-pod-affinity-and-anti-affinity} -Pod間アフィニティは、PodSpecの`affinity`フィールド内に`podAffinity`で指定し、Pod間アンチアフィニティは、`podAntiAffinity`で指定します。 +[Nodeアフィニティ](#node-affinity)と同様に、Podアフィニティとアンチアフィニティにも下記の2種類があります: -#### Podアフィニティを使用したPodの例 + * `requiredDuringSchedulingIgnoredDuringExecution` + * `preferredDuringSchedulingIgnoredDuringExecution` + +例えば、`requiredDuringSchedulingIgnoredDuringExecution`アフィニティを使用して、2つのサービスのPodはお互いのやり取りが多いため、同じクラウドプロバイダーゾーンに併置するようにスケジューラーに指示することができます。 +同様に、`preferredDuringSchedulingIgnoredDuringExecution`アンチアフィニティを使用して、あるサービスのPodを複数のクラウドプロバイダーゾーンに分散させることができます。 + +Pod間アフィニティを使用するには、Pod仕様(spec)の`affinity.podAffinity`フィールドで指定します。Pod間アンチアフィニティを使用するには、Pod仕様(spec)の`affinity.podAntiAffinity`フィールドで指定します。 + +#### Podアフィニティ使用例 {#an-example-of-a-pod-that-uses-pod-affinity} + +次のようなPod仕様(spec)を考えてみましょう: {{< codenew file="pods/pod-with-pod-affinity.yaml" >}} -このPodのアフィニティは、PodアフィニティとPodアンチアフィニティを1つずつ定義しています。 -この例では、`podAffinity`に`requiredDuringSchedulingIgnoredDuringExecution`、`podAntiAffinity`に`preferredDuringSchedulingIgnoredDuringExecution`が設定されています。 -Podアフィニティは、「キーが"security"、値が"S1"のラベルが付与されたPodが少なくとも1つは稼働しているNodeが同じゾーンにあれば、PodはそのNodeにスケジュールされる」という条件を指定しています(より正確には、キーが"security"、値が"S1"のラベルが付与されたPodが稼働しており、キーが`topology.kubernetes.io/zone`、値がVであるNodeが少なくとも1つはある状態で、 -Node Nがキー`topology.kubernetes.io/zone`、値Vのラベルを持つ場合に、PodはNode Nで稼働させることができます)。 -Podアンチアフィニティは、「すでにあるNode上で、キーが"security"、値が"S2"であるPodが稼働している場合に、Podを可能な限りそのNode上で稼働させない」という条件を指定しています 
-(`topologyKey`が`topology.kubernetes.io/zone`であった場合、キーが"security"、値が"S2"であるであるPodが稼働しているゾーンと同じゾーン内のNodeにはスケジュールされなくなります)。 -PodアフィニティとPodアンチアフィニティや、`requiredDuringSchedulingIgnoredDuringExecution`と`preferredDuringSchedulingIgnoredDuringExecution`に関する他の使用例は[デザインドック](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)を参照してください。 +この例では、PodアフィニティルールとPodアンチアフィニティルールを1つずつ定義しています。 +Podアフィニティルールは"ハード"な`requiredDuringSchedulingIgnoredDuringExecution`を使用し、アンチアフィニティルールは"ソフト"な`preferredDuringSchedulingIgnoredDuringExecution`を使用しています。 -PodアフィニティとPodアンチアフィニティで使用できるオペレーターは、`In`、`NotIn`、 `Exists`、 `DoesNotExist`です。 +アフィニティルールは、スケジューラーがNodeにPodをスケジュールできるのは、そのNodeが、`security=S1`ラベルを持つ1つ以上の既存のPodと同じゾーンにある場合のみであることを示しています。より正確には、現在Podラベル`security=S1`を持つPodが1つ以上あるNodeが、そのゾーン内に少なくとも1つ存在する限り、スケジューラーは`topology.kubernetes.io/zone=V`ラベルを持つNodeにPodを配置しなければなりません。 -原則として、`topologyKey`には任意のラベルとキーが使用できます。 -しかし、パフォーマンスやセキュリティの観点から、以下の制約があります: +アンチアフィニティルールは、`security=S2`ラベルを持つ1つ以上のPodと同じゾーンにあるNodeには、スケジューラーがPodをスケジュールしないようにすることを示しています。より正確には、Podラベル`Security=S2`を持つPodが稼働している他のNodeが、同じゾーン内に存在する場合、スケジューラーは`topology.kubernetes.io/zone=R`ラベルを持つNodeにはPodを配置しないようにしなければなりません。 -1. アフィニティと、`requiredDuringSchedulingIgnoredDuringExecution`を指定したPodアンチアフィニティは、`topologyKey`を指定しないことは許可されていません。 -2. `requiredDuringSchedulingIgnoredDuringExecution`を指定したPodアンチアフィニティでは、`kubernetes.io/hostname`の`topologyKey`を制限するため、アドミッションコントローラー`LimitPodHardAntiAffinityTopology`が導入されました。 -トポロジーをカスタマイズする場合には、アドミッションコントローラーを修正または無効化する必要があります。 -3. `preferredDuringSchedulingIgnoredDuringExecution`を指定したPodアンチアフィニティでは、`topologyKey`を省略することはできません。 -4. 上記の場合を除き、`topologyKey` は任意のラベルとキーを指定することができます。 +Podアフィニティとアンチアフィニティの使用例についてもっと知りたい方は[デザイン案](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md)を参照してください。 -`labelSelector`と`topologyKey`に加え、`labelSelector`が合致すべき`namespaces`のリストを特定することも可能です(これは`labelSelector`と`topologyKey`を定義することと同等です)。 -省略した場合や空の場合は、アフィニティとアンチアフィニティが定義されたPodのnamespaceがデフォルトで設定されます。 +Podアフィニティとアンチアフィニティの`operator`フィールドで使用できるのは、`In`、`NotIn`、 `Exists`、 `DoesNotExist`です。 -`requiredDuringSchedulingIgnoredDuringExecution`が指定されたアフィニティとアンチアフィニティでは、`matchExpressions`に記載された全ての条件が満たされるNodeにPodがスケジュールされます。 +原則として、`topologyKey`には任意のラベルキーが指定できますが、パフォーマンスやセキュリティの観点から、以下の例外があります: +* Podアフィニティとアンチアフィニティでは、`requiredDuringSchedulingIgnoredDuringExecution`と`preferredDuringSchedulingIgnoredDuringExecution`内のどちらも、`topologyKey`フィールドが空であることは許可されていません。 +* Podアンチアフィニティルールの`requiredDuringSchedulingIgnoredDuringExecution`では、アドミッションコントローラー`LimitPodHardAntiAffinityTopology`が`topologyKey`を`kubernetes.io/hostname`に制限しています。アドミッションコントローラーを修正または無効化すると、トポロジーのカスタマイズができるようになります。 -#### 実際的なユースケース +`labelSelector`と`topologyKey`に加え、`labelSelector`と`topologyKey`と同じレベルの`namespaces`フィールドを使用して、`labelSelector`が合致すべき名前空間のリストを任意に指定することができます。省略または空の場合、`namespaces`がデフォルトで、アフィニティとアンチアフィニティが定義されたPodの名前空間に設定されます。 -Pod間アフィニティとアンチアフィニティは、ReplicaSet、StatefulSet、Deploymentなどのより高レベルなコレクションと併せて使用するとさらに有用です。 -Workloadが、Node等の定義された同じトポロジーに共存させるよう、簡単に設定できます。 +#### 名前空間セレクター {#namespace-selector} +{{< feature-state for_k8s_version="v1.24" state="stable" >}} +`namespaceSelector`を使用し、ラベルで名前空間の集合に対して検索することによって、名前空間を選択することができます。 +アフィニティ項は`namespaceSelector`と`namespaces`フィールドによって選択された名前空間に適用されます。 +要注意なのは、空の`namespaceSelector`({})はすべての名前空間にマッチし、nullまたは空の`namespaces`リストとnullの`namespaceSelector`は、ルールが定義されているPodの名前空間にマッチします。 -##### 常に同じNodeで稼働させる場合 +#### 実践的なユースケース {#more-practical-use-cases} -3つのノードから成るクラスターでは、ウェブアプリケーションはredisのようにインメモリキャッシュを保持しています。 
-このような場合、ウェブサーバーは可能な限りキャッシュと共存させることが望ましいです。 +Pod間アフィニティとアンチアフィニティは、ReplicaSet、StatefulSet、Deploymentなどのより高レベルなコレクションと併せて使用するとさらに有用です。これらのルールにより、ワークロードのセットが同じ定義されたトポロジーに併置されるように設定できます。たとえば、2つの関連するPodを同じNodeに配置することが好ましい場合です。 -ラベル`app=store`を付与した3つのレプリカから成るredisのdeploymentを記述したyamlファイルを示します。 -Deploymentには、1つのNodeにレプリカを共存させないために`PodAntiAffinity`を付与しています。 +例えば、3つのNodeで構成されるクラスターを想像してください。そのクラスターを使用してウェブアプリケーションを実行し、さらにインメモリーキャッシュ(Redisなど)を使用します。この例では、ウェブアプリケーションとメモリーキャッシュの間のレイテンシーは実用的な範囲の低さも想定しています。Pod間アフィニティやアンチアフィニティを使って、ウェブサーバーとキャッシュをなるべく同じ場所に配置することができます。 +以下のRedisキャッシュのDeploymentの例では、各レプリカはラベル`app=store`が付与されています。`podAntiAffinity`ルールは、`app=store`ラベルを持つ複数のレプリカを単一Nodeに配置しないよう、スケジューラーに指示します。これにより、各キャッシュが別々のNodeに作成されます。 ```yaml apiVersion: apps/v1 @@ -244,10 +262,7 @@ spec: image: redis:3.2-alpine ``` -ウェブサーバーのDeploymentを記載した以下のyamlファイルには、`podAntiAffinity` と`podAffinity`が設定されています。 -全てのレプリカが`app=store`のラベルが付与されたPodと同じゾーンで稼働するよう、スケジューラーに設定されます。 -また、それぞれのウェブサーバーは1つのノードで稼働されないことも保証されます。 - +次のウェブサーバーのDeployment例では、`app=web-store`ラベルが付与されたレプリカを作成します。Podアフィニティルールは、各レプリカを、`app=store`ラベルが付与されたPodを持つNodeに配置するようスケジューラーに指示します。Podアンチアフィニティルールは、1つのNodeに複数の`app=web-store`サーバーを配置しないようにスケジューラーに指示します。 ```yaml apiVersion: apps/v1 @@ -288,49 +303,35 @@ spec: image: nginx:1.16-alpine ``` -上記2つのDeploymentが生成されると、3つのノードは以下のようになります。 +上記2つのDeploymentが生成されると、以下のようなクラスター構成になり、各ウェブサーバーはキャッシュと同位置に、3つの別々のNodeに配置されます。 | node-1 | node-2 | node-3 | |:--------------------:|:-------------------:|:------------------:| | *webserver-1* | *webserver-2* | *webserver-3* | | *cache-1* | *cache-2* | *cache-3* | -このように、3つの`web-server`は期待通り自動的にキャッシュと共存しています。 +全体的な効果として、各キャッシュインスタンスは、同じNode上で実行している単一のクライアントによってアクセスされる可能性が高いです。この方法は、スキュー(負荷の偏り)とレイテンシーの両方を最小化することを目的としています。 -``` -kubectl get pods -o wide -``` -出力は以下のようになります: -``` -NAME READY STATUS RESTARTS AGE IP NODE -redis-cache-1450370735-6dzlj 1/1 Running 0 8m 10.192.4.2 kube-node-3 -redis-cache-1450370735-j2j96 1/1 Running 0 8m 10.192.2.2 kube-node-1 -redis-cache-1450370735-z73mh 1/1 Running 0 8m 10.192.3.1 kube-node-2 -web-server-1287567482-5d4dz 1/1 Running 0 7m 10.192.2.3 kube-node-1 -web-server-1287567482-6f7v5 1/1 Running 0 7m 10.192.4.3 kube-node-3 -web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3.2 kube-node-2 -``` +Podアンチアフィニティを使用する理由は他にもあります。 +この例と同様の方法で、アンチアフィニティを用いて高可用性を実現したStatefulSetの使用例は[ZooKeeperチュートリアル](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)を参照してください。 -##### 同じNodeに共存させない場合 +## nodeName {#nodename} -上記の例では `PodAntiAffinity`を`topologyKey: "kubernetes.io/hostname"`と合わせて指定することで、redisクラスター内の2つのインスタンスが同じホストにデプロイされない場合を扱いました。 -同様の方法で、アンチアフィニティを用いて高可用性を実現したStatefulSetの使用例は[ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)を参照してください。 +`nodeName`はアフィニティや`nodeSelector`よりも直接的なNode選択形式になります。`nodeName`はPod仕様(spec)内のフィールドです。`nodeName`フィールドが空でない場合、スケジューラーはPodを考慮せずに、指定されたNodeにあるkubeletがそのNodeにPodを配置しようとします。`nodeName`を使用すると、`nodeSelector`やアフィニティおよびアンチアフィニティルールを使用するよりも優先されます。 + `nodeName`を使ってNodeを選択する場合の制約は以下の通りです: -## nodeName +- 指定されたNodeが存在しない場合、Podは実行されず、場合によっては自動的に削除されることがあります。 +- 指定されたNodeがPodを収容するためのリソースを持っていない場合、Podの起動は失敗し、OutOfmemoryやOutOfcpuなどの理由が表示されます。 +- クラウド環境におけるNode名は、常に予測可能で安定したものではありません。 -`nodeName`はNodeの選択を制限する最も簡単な方法ですが、制約があることからあまり使用されません。 -`nodeName`はPodSpecのフィールドです。 -ここに値が設定されると、schedulerはそのPodを考慮しなくなり、その名前が付与されているNodeのkubeletはPodを稼働させようとします。 -そのため、PodSpecに`nodeName`が指定されると、上述のNodeの選択方法よりも優先されます。 +{{< note >}} +`nodeName`は、カスタムスケジューラーや、設定済みのスケジューラーをバイパスする必要がある高度なユースケースで使用することを目的としています。 
+スケジューラーをバイパスすると、割り当てられたNodeに過剰なPodの配置をしようとした場合には、Podの起動に失敗することがあります。 +[Nodeアフィニティ](#node-affinity)または[`nodeSelector`フィールド](#nodeselector)を使用すれば、スケジューラーをバイパスせずに、特定のNodeにPodを割り当てることができます。 +{{}} - `nodeName`を使用することによる制約は以下の通りです: - -- その名前のNodeが存在しない場合、Podは起動されす、自動的に削除される場合があります。 -- その名前のNodeにPodを稼働させるためのリソースがない場合、Podの起動は失敗し、理由は例えばOutOfmemoryやOutOfcpuになります。 -- クラウド上のNodeの名前は予期できず、変更される可能性があります。 - -`nodeName`を指定したPodの設定ファイルの例を示します: +以下は、`nodeName`フィールドを使用したPod仕様(spec)の例になります: ```yaml apiVersion: v1 @@ -344,18 +345,18 @@ spec: nodeName: kube-01 ``` -上記のPodはkube-01という名前のNodeで稼働します。 +上記のPodは`kube-01`というNodeでのみ実行されます。 +## Podトポロジー分散制約 {#pod-topology-spread-constraints} +_トポロジー分散制約_ を使って、リージョン、ゾーン、Nodeなどの障害ドメイン間、または定義したその他のトポロジードメイン間で、クラスター全体にどのように{{< glossary_tooltip text="Pod" term_id="Pod" >}}を分散させるかを制御することができます。これにより、パフォーマンス、予想される可用性、または全体的な使用率を向上させることができます。 + +詳しい仕組みについては、[トポロジー分散制約](/docs/concepts/scheduling-eviction/topology-spread-constraints/)を参照してください。 ## {{% heading "whatsnext" %}} - -[Taints](/ja/docs/concepts/scheduling-eviction/taint-and-toleration/)を使うことで、NodeはPodを追い出すことができます。 - -[Nodeアフィニティ](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)と -[Pod間アフィニティ/アンチアフィニティ](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) -のデザインドキュメントには、これらの機能の追加のバックグラウンドの情報が記載されています。 - -一度PodがNodeに割り当たると、kubeletはPodを起動してノード内のリソースを確保します。 -[トポロジーマネージャー](/docs/tasks/administer-cluster/topology-manager/)はNodeレベルのリソース割り当てを決定する際に関与します。 +* [TaintとToleration](/ja/docs/concepts/scheduling-eviction/taint-and-toleration/)についてもっと読む。 +* [Nodeアフィニティ](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)と[Pod間アフィニティ/アンチアフィニティ](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md)のデザインドキュメントを読む。 +* [トポロジーマネージャー](/ja/docs/tasks/administer-cluster/topology-manager/)がNodeレベルのリソース割り当ての決定にどのように関与しているかについて学ぶ。 +* [nodeSelector](/ja/docs/tasks/configure-pod-container/assign-pods-nodes/)の使用方法について学ぶ。 +* [アフィニティとアンチアフィニティ](/ja/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)の使用方法について学ぶ。 diff --git a/content/ja/docs/concepts/security/_index.md b/content/ja/docs/concepts/security/_index.md index 0088a3ea95d..b0912322b02 100644 --- a/content/ja/docs/concepts/security/_index.md +++ b/content/ja/docs/concepts/security/_index.md @@ -1,6 +1,6 @@ --- title: "セキュリティ" -weight: 81 +weight: 85 description: > クラウドネイティブなワークロードをセキュアに維持するための概念 --- diff --git a/content/ja/docs/concepts/security/controlling-access.md b/content/ja/docs/concepts/security/controlling-access.md index b9ec55417f5..576613895ee 100644 --- a/content/ja/docs/concepts/security/controlling-access.md +++ b/content/ja/docs/concepts/security/controlling-access.md @@ -1,6 +1,7 @@ --- title: Kubernetes APIへのアクセスコントロール content_type: concept +weight: 50 --- diff --git a/content/ja/docs/concepts/security/overview.md b/content/ja/docs/concepts/security/overview.md index c9d656a4232..ca4410b7751 100644 --- a/content/ja/docs/concepts/security/overview.md +++ b/content/ja/docs/concepts/security/overview.md @@ -2,7 +2,7 @@ reviewers: title: クラウドネイティブセキュリティの概要 content_type: concept -weight: 10 +weight: 1 --- diff --git a/content/ja/docs/concepts/storage/dynamic-provisioning.md b/content/ja/docs/concepts/storage/dynamic-provisioning.md index 94bee64ed10..07206c09830 100644 --- a/content/ja/docs/concepts/storage/dynamic-provisioning.md +++ b/content/ja/docs/concepts/storage/dynamic-provisioning.md @@ -2,7 +2,7 @@ reviewers: title: ボリュームの動的プロビジョニング(Dynamic Volume 
Provisioning) content_type: concept -weight: 40 +weight: 50 --- diff --git a/content/ja/docs/concepts/storage/storage-capacity.md b/content/ja/docs/concepts/storage/storage-capacity.md index cff887a125a..1151706a4f9 100644 --- a/content/ja/docs/concepts/storage/storage-capacity.md +++ b/content/ja/docs/concepts/storage/storage-capacity.md @@ -1,7 +1,7 @@ --- title: ストレージ容量 content_type: concept -weight: 45 +weight: 80 --- diff --git a/content/ja/docs/concepts/storage/storage-limits.md b/content/ja/docs/concepts/storage/storage-limits.md index 4f38361f084..e3df1f3bc97 100644 --- a/content/ja/docs/concepts/storage/storage-limits.md +++ b/content/ja/docs/concepts/storage/storage-limits.md @@ -1,6 +1,7 @@ --- title: ノード固有のボリューム制限 content_type: concept +weight: 90 --- diff --git a/content/ja/docs/concepts/storage/volume-pvc-datasource.md b/content/ja/docs/concepts/storage/volume-pvc-datasource.md index fc1b7ae4b9d..8e0a9f7c7b8 100644 --- a/content/ja/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/ja/docs/concepts/storage/volume-pvc-datasource.md @@ -1,7 +1,7 @@ --- title: CSI Volume Cloning content_type: concept -weight: 30 +weight: 70 --- diff --git a/content/ja/docs/concepts/storage/volume-snapshot-classes.md b/content/ja/docs/concepts/storage/volume-snapshot-classes.md index ca381652d22..5c4c9996258 100644 --- a/content/ja/docs/concepts/storage/volume-snapshot-classes.md +++ b/content/ja/docs/concepts/storage/volume-snapshot-classes.md @@ -2,7 +2,7 @@ reviewers: title: VolumeSnapshotClass content_type: concept -weight: 30 +weight: 61 # just after volume snapshots --- diff --git a/content/ja/docs/concepts/workloads/_index.md b/content/ja/docs/concepts/workloads/_index.md index ca846cd0e7e..94631dc878d 100644 --- a/content/ja/docs/concepts/workloads/_index.md +++ b/content/ja/docs/concepts/workloads/_index.md @@ -1,6 +1,6 @@ --- title: "ワークロード" -weight: 50 +weight: 55 description: > Kubernetesにおけるデプロイ可能な最小のオブジェクトであるPodと、高レベルな抽象化がPodの実行を助けることを理解します。 no_list: true diff --git a/content/ja/docs/concepts/workloads/controllers/daemonset.md b/content/ja/docs/concepts/workloads/controllers/daemonset.md index 42647228e61..349b779c161 100644 --- a/content/ja/docs/concepts/workloads/controllers/daemonset.md +++ b/content/ja/docs/concepts/workloads/controllers/daemonset.md @@ -22,9 +22,9 @@ DaemonSetのいくつかの典型的な使用例は以下の通りです。 -## DaemonSet Specの記述 +## DaemonSet Specの記述 {#writing-a-daemonset-spec} -### DaemonSetの作成 +### DaemonSetの作成 {#create-a-daemonset} ユーザーはYAMLファイル内でDaemonSetの設定を記述することができます。例えば、下記の`daemonset.yaml`ファイルでは`fluentd-elasticsearch`というDockerイメージを稼働させるDaemonSetの設定を記述します。 @@ -36,30 +36,31 @@ YAMLファイルに基づいてDaemonSetを作成します。 kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml ``` -### 必須のフィールド +### 必須のフィールド {#required-fields} -他の全てのKubernetesの設定と同様に、DaemonSetは`apiVersion`、`kind`と`metadata`フィールドが必須となります。設定ファイルの活用法に関する一般的な情報は、[ステートレスアプリケーションの稼働](/ja/docs/tasks/run-application/run-stateless-application-deployment/)、[コンテナの設定](/ja/docs/tasks/)、[kubectlを用いたオブジェクトの管理](/ja/docs/concepts/overview/working-with-objects/object-management/)といったドキュメントを参照ください。 +他の全てのKubernetesの設定と同様に、DaemonSetは`apiVersion`、`kind`と`metadata`フィールドが必須となります。設定ファイルの活用法に関する一般的な情報は、[ステートレスアプリケーションの稼働](/ja/docs/tasks/run-application/run-stateless-application-deployment/)、[kubectlを用いたオブジェクトの管理](/ja/docs/concepts/overview/working-with-objects/object-management/)といったドキュメントを参照ください。 DaemonSetオブジェクトの名前は、有効な [DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)である必要があります。 
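これらの必須フィールドを備えた最小限のDaemonSetマニフェストの骨格の一例を以下に示します(`name`や`labels`の値は説明用の仮の値です)。

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset   # 有効なDNSサブドメイン名にします(説明用の仮の名前)
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example        # .spec.selectorに一致するラベルが必要です
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.8   # 説明用のプレースホルダーイメージ
```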
また、DaemonSetにおいて[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)セクションも必須となります。 -### Podテンプレート +### Podテンプレート {#pod-template} `.spec.template`は`.spec`内での必須のフィールドの1つです。 -`.spec.template`は[Podテンプレート](/docs/concepts/workloads/pods/#pod-templates)となります。これはフィールドがネストされていて、`apiVersion`や`kind`をもたないことを除いては、{{< glossary_tooltip text="Pod" term_id="pod" >}}のテンプレートと同じスキーマとなります。 +`.spec.template`は[Podテンプレート](/ja/docs/concepts/workloads/pods/#pod-template)となります。これはフィールドがネストされていて、`apiVersion`や`kind`をもたないことを除いては、{{< glossary_tooltip text="Pod" term_id="pod" >}}のテンプレートと同じスキーマとなります。 Podに対する必須のフィールドに加えて、DaemonSet内のPodテンプレートは適切なラベルを指定しなくてはなりません([Podセレクター](#pod-selector)の項目を参照ください)。 DaemonSet内のPodテンプレートでは、[`RestartPolicy`](/ja/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)フィールドを指定せずにデフォルトの`Always`を使用するか、明示的に`Always`を設定するかのどちらかである必要があります。 -### Podセレクター +### Podセレクター {#pod-selector} -`.spec.selector`フィールドはPodセレクターとなります。これは[Job](/docs/concepts/workloads/controllers/job/)の`.spec.selector`と同じものです。 +`.spec.selector`フィールドはPodセレクターとなります。これは[Job](/ja/docs/concepts/workloads/controllers/job/)の`.spec.selector`と同じものです。 -Kubernetes1.8のように、ユーザーは`.spec.template`のラベルにマッチするPodセレクターを指定しなくてはいけません。Podセレクターは、値を空のままにしてもデフォルト設定にならなくなりました。セレクターのデフォルト化は`kubectl apply`と互換性はありません。また、一度DaemonSetが作成されると、その`.spec.selector`は変更不可能になります。Podセレクターの変更は、意図しないPodの孤立を引き起こし、ユーザーにとってやっかいなものとなります。 +ユーザーは`.spec.template`のラベルにマッチするPodセレクターを指定しなくてはいけません。 +また、一度DaemonSetが作成されると、その`.spec.selector`は変更不可能になります。Podセレクターの変更は、意図しないPodの孤立を引き起こし、ユーザーにとってやっかいなものとなります。 `.spec.selector`は2つのフィールドからなるオブジェクトです。 @@ -70,24 +71,18 @@ Kubernetes1.8のように、ユーザーは`.spec.template`のラベルにマッ もし`spec.selector`が指定されたとき、`.spec.template.metadata.labels`とマッチしなければなりません。この2つの値がマッチしない設定をした場合、APIによってリジェクトされます。 -### 選択したNode上でPodを稼働させる +### 選択したNode上でPodを稼働させる {#running-pods-on-select-nodes} もしユーザーが`.spec.template.spec.nodeSelector`を指定したとき、DaemonSetコントローラーは、その[node selector](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)にマッチするNode上にPodを作成します。同様に、もし`.spec.template.spec.affinity`を指定したとき、DaemonSetコントローラーは[node affinity](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)にマッチするNode上にPodを作成します。 もしユーザーがどちらも指定しないとき、DaemonSetコントローラーは全てのNode上にPodを作成します。 -## Daemon Podがどのようにスケジューリングされるか +## Daemon Podがどのようにスケジューリングされるか {#how-daemon-pods-are-scheduled} -### デフォルトスケジューラーによってスケジューリングされる場合 +DaemonSetは、全ての利用可能なNodeがPodのコピーを稼働させることを保証します。DaemonSetコントローラーは対象となる各Nodeに対してPodを作成し、ターゲットホストに一致するようにPodの`spec.affinity.nodeAffinity`フィールドを追加します。Podが作成されると、通常はデフォルトのスケジューラーが引き継ぎ、`.spec.nodeName`を設定することでPodをターゲットホストにバインドします。新しいNodeに適合できない場合、デフォルトスケジューラーは新しいPodの[優先度](/ja/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)に基づいて、既存Podのいくつかを先取り(退避)させることがあります。 -{{< feature-state for_k8s_version="1.17" state="stable" >}} +ユーザーは、DaemonSetの`.spec.template.spec.schedulerName`フィールドを設定することにより、DaemonSetのPodに対して異なるスケジューラーを指定することができます。 -DaemonSetは全ての利用可能なNodeが単一のPodのコピーを稼働させることを保証します。通常、Podが稼働するNodeはKubernetesスケジューラーによって選択されます。しかし、DaemonSetのPodは代わりにDaemonSetコントローラーによって作成され、スケジューリングされます。 -下記の問題について説明します: - - * 矛盾するPodのふるまい: スケジューリングされるのを待っている通常のPodは、作成されているが`Pending`状態となりますが、DaemonSetのPodは`Pending`状態で作成されません。これはユーザーにとって困惑するものです。 - * [Podプリエンプション(Pod preemption)](/docs/concepts/configuration/pod-priority-preemption/)はデフォルトスケジューラーによってハンドルされます。もしプリエンプションが有効な場合、そのDaemonSetコントローラーはPodの優先順位とプリエンプションを考慮することなくスケジューリングの判断を行います。 - 
-`ScheduleDaemonSetPods`は、DaemonSetのPodに対して`NodeAffinity`項目を追加することにより、DaemonSetコントローラーの代わりにデフォルトスケジューラーを使ってDaemonSetのスケジュールを可能にします。その際に、デフォルトスケジューラーはPodをターゲットのホストにバインドします。もしDaemonSetのNodeAffinityが存在するとき、それは新しいものに置き換えられます(ターゲットホストを選択する前に、元のNodeAffinityが考慮されます)。DaemonSetコントローラーはDaemonSetのPodの作成や修正を行うときのみそれらの操作を実施します。そしてDaemonSetの`.spec.template`フィールドに対しては何も変更が加えられません。 +`.spec.template.spec.affinity.nodeAffinity`フィールド(指定された場合)で指定された元のNodeアフィニティは、DaemonSetコントローラーが対象Nodeを評価する際に考慮されますが、作成されたPod上では対象Nodeの名前と一致するNodeアフィニティに置き換わります。 ```yaml nodeAffinity: @@ -100,62 +95,87 @@ nodeAffinity: - target-host-name ``` -さらに、`node.kubernetes.io/unschedulable:NoSchedule`というtolarationがDaemonSetのPodに自動的に追加されます。デフォルトスケジューラーは、DaemonSetのPodのスケジューリングのときに、`unschedulable`なNodeを無視します。 +### TaintとToleration {#taints-and-tolerations} -### TaintsとTolerations +DaemonSetコントローラーはDaemonSet Podに一連の{{< glossary_tooltip +text="Toleration" term_id="toleration" >}}を自動的に追加します: -DaemonSetのPodは[TaintsとTolerations](/ja/docs/concepts/scheduling-eviction/taint-and-toleration/)の設定を尊重します。下記のTolerationsは、関連する機能によって自動的にDaemonSetのPodに追加されます。 +{{< table caption="Tolerations for DaemonSet pods" >}} -| Toleration Key | Effect | Version | Description | -| ---------------------------------------- | ---------- | ------- | ----------- | -| `node.kubernetes.io/not-ready` | NoExecute | 1.13+ | DaemonSetのPodはネットワーク分割のようなNodeの問題が発生したときに除外されません。| -| `node.kubernetes.io/unreachable` | NoExecute | 1.13+ | DaemonSetのPodはネットワーク分割のようなNodeの問題が発生したときに除外されません。| -| `node.kubernetes.io/disk-pressure` | NoSchedule | 1.8+ | | -| `node.kubernetes.io/memory-pressure` | NoSchedule | 1.8+ | | -| `node.kubernetes.io/unschedulable` | NoSchedule | 1.12+ | DaemonSetのPodはデフォルトスケジューラーによってスケジュール不可能な属性を許容(tolerate)します。 | -| `node.kubernetes.io/network-unavailable` | NoSchedule | 1.12+ | ホストネットワークを使うDaemonSetのPodはデフォルトスケジューラーによってネットワーク利用不可能な属性を許容(tolerate)します。 | +| Toleration key | Effect | Details | +| --------------------------------------------------------------------------------------------------------------------- | ------------ | --------------------------------------------------------------------------------------------------------------------------------------------- | +| [`node.kubernetes.io/not-ready`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-not-ready) | `NoExecute` | 健康でないNodeや、Podを受け入れる準備ができていないNodeにDaemonSet Podをスケジュールできるように設定します。そのようなNode上で動作しているDaemonSet Podは退避されることがありません。 | +| [`node.kubernetes.io/unreachable`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-unreachable) | `NoExecute` | Nodeコントローラーから到達できないNodeにDaemonSet Podをスケジュールできるように設定します。このようなNode上で動作しているDaemonSet Podは、退避されません。 | +| [`node.kubernetes.io/disk-pressure`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-disk-pressure) | `NoSchedule` | ディスク不足問題のあるNodeにDaemonSet Podをスケジュールできるように設定します。 | +| [`node.kubernetes.io/memory-pressure`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-memory-pressure) | `NoSchedule` | メモリー不足問題のあるNodeにDaemonSet Podをスケジュールできるように設定します。 | +| [`node.kubernetes.io/pid-pressure`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-pid-pressure) | `NoSchedule` | 処理負荷に問題のあるNodeにDaemonSet Podをスケジュールできるように設定します。 | +| [`node.kubernetes.io/unschedulable`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-unschedulable) | `NoSchedule` | スケジューリング不可能なNodeにDaemonSet Podをスケジュールできるように設定します。 | +| 
[`node.kubernetes.io/network-unavailable`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-network-unavailable) | `NoSchedule` | **ホストネットワークを要求するDaemonSet Podにのみ追加できます**、つまり`spec.hostNetwork: true`と設定されているPodです。このようなDaemonSet Podは、ネットワークが利用できないNodeにスケジュールできるように設定します。| -## Daemon Podとのコミュニケーション +{{< /table >}} -DaemonSet内のPodとのコミュニケーションをする際に考えられるパターンは以下の通りです。: +DaemonSetのPodテンプレートで定義すれば、DaemonSetのPodに独自のTolerationを追加することも可能です。 -- **Push**: DaemonSet内のPodは他のサービスに対して更新情報を送信するように設定されます。 +DaemonSetコントローラーは`node.kubernetes.io/unschedulable:NoSchedule`のTolerationを自動的に設定するため、Kubernetesは _スケジューリング不可能_ としてマークされているNodeでDaemonSet Podを実行することが可能です。 + +[クラスターのネットワーク](/ja/docs/concepts/cluster-administration/networking/)のような重要なNodeレベルの機能をDaemonSetで提供する場合、KubernetesがDaemonSet PodをNodeが準備完了になる前に配置することは有用です。 +例えば、その特別なTolerationがなければ、ネットワークプラグインがそこで実行されていないためにNodeが準備完了としてマークされず、同時にNodeがまだ準備完了でないためにそのNode上でネットワークプラグインが実行されていないというデッドロック状態に陥ってしまう可能性があるのです。 + +## Daemon Podとのコミュニケーション {#communicating-with-daemon-pods} + +DaemonSet内のPodとのコミュニケーションをする際に考えられるパターンは以下の通りです: + +- **Push**: DaemonSet内のPodは統計データベースなどの他のサービスに対して更新情報を送信するように設定されます。クライアントは持っていません。 - **NodeIPとKnown Port**: PodがNodeIPを介して疎通できるようにするため、DaemonSet内のPodは`hostPort`を使用できます。慣例により、クライアントはNodeIPのリストとポートを知っています。 - **DNS**: 同じPodセレクターを持つ[HeadlessService](/ja/docs/concepts/services-networking/service/#headless-service)を作成し、`endpoints`リソースを使ってDaemonSetを探すか、DNSから複数のAレコードを取得します。 -- **Service**: 同じPodセレクターを持つServiceを作成し、複数のうちのいずれかのNode上のDaemonに疎通させるためにそのServiceを使います。 +- **Service**: 同じPodセレクターを持つServiceを作成し、複数のうちのいずれかのNode上のDaemonに疎通させるためにそのServiceを使います。(特定のNodeにアクセスする方法はありません。) -## DaemonSetの更新 +## DaemonSetの更新 {#updating-a-daemonset} もしNodeラベルが変更されたとき、そのDaemonSetは直ちに新しくマッチしたNodeにPodを追加し、マッチしなくなったNodeからPodを削除します。 ユーザーはDaemonSetが作成したPodを修正可能です。しかし、Podは全てのフィールドの更新を許可していません。また、DaemonSetコントローラーは次のNode(同じ名前でも)が作成されたときにオリジナルのテンプレートを使ってPodを作成します。 -ユーザーはDaemonSetを削除可能です。`kubectl`コマンドで`--cascade=false`を指定するとDaemonSetのPodはNode上に残り続けます。その後、同じセレクターで新しいDaemonSetを作成すると、新しいDaemonSetは既存のPodを再利用します。PodでDaemonSetを置き換える必要がある場合は、`updateStrategy`に従ってそれらを置き換えます。 +ユーザーはDaemonSetを削除可能です。`kubectl`コマンドで`--cascade=orphan`を指定するとDaemonSetのPodはNode上に残り続けます。その後、同じセレクターで新しいDaemonSetを作成すると、新しいDaemonSetは既存のPodを再利用します。PodでDaemonSetを置き換える必要がある場合は、`updateStrategy`に従ってそれらを置き換えます。 ユーザーはDaemonSet上で[ローリングアップデートの実施](/docs/tasks/manage-daemon/update-daemon-set/)が可能です。 -## DaemonSetの代替案 +## DaemonSetの代替案 {#alternatives-to-daemonset} -### Initスクリプト +### Initスクリプト {#init-scripts} Node上で直接起動することにより(例: `init`、`upstartd`、`systemd`を使用する)、デーモンプロセスを稼働することが可能です。この方法は非常に良いですが、このようなプロセスをDaemonSetを介して起動することはいくつかの利点があります。 - アプリケーションと同じ方法でデーモンの監視とログの管理ができる。 - デーモンとアプリケーションで同じ設定用の言語とツール(例: Podテンプレート、`kubectl`)を使える。 -- リソースリミットを使ったコンテナ内でデーモンを稼働させることにより、デーモンとアプリケーションコンテナの分離を促進します。しかし、これはPod内でなく、コンテナ内でデーモンを稼働させることにより可能です(Dockerを介して直接起動する)。 +- リソースリミットを使ったコンテナ内でデーモンを稼働させることにより、デーモンとアプリケーションコンテナの分離性が高まります。ただし、これはPod内ではなく、コンテナ内でデーモンを稼働させることでも可能です。 -### ベアPod +### ベアPod {#bare-pods} 特定のNode上で稼働するように指定したPodを直接作成することは可能です。しかし、DaemonSetはNodeの故障やNodeの破壊的なメンテナンスやカーネルのアップグレードなど、どのような理由に限らず、削除されたもしくは停止されたPodを置き換えます。このような理由で、ユーザーはPod単体を作成するよりもむしろDaemonSetを使うべきです。 -### 静的Pod Pods +### 静的Pod {#static-pods} Kubeletによって監視されているディレクトリに対してファイルを書き込むことによって、Podを作成することが可能です。これは[静的Pod](/ja/docs/tasks/configure-pod-container/static-pod/)と呼ばれます。DaemonSetと違い、静的Podはkubectlや他のKubernetes APIクライアントで管理できません。静的PodはApiServerに依存しておらず、クラスターの自立起動時に最適です。また、静的Podは将来的には廃止される予定です。 -### Deployment +### Deployment {#deployments} 
DaemonSetは、Podの作成し、そのPodが停止されることのないプロセスを持つことにおいて[Deployment](/ja/docs/concepts/workloads/controllers/deployment/)と同様です(例: webサーバー、ストレージサーバー)。 フロントエンドのようなServiceのように、どのホスト上にPodが稼働するか制御するよりも、レプリカ数をスケールアップまたはスケールダウンしたりローリングアップデートする方が重要であるような、状態をもたないServiceに対してDeploymentを使ってください。 -Podのコピーが全てまたは特定のホスト上で常に稼働していることが重要な場合や、他のPodの前に起動させる必要があるときにDaemonSetを使ってください。 +DaemonSetがNodeレベルの機能を提供し、他のPodがその特定のNodeで正しく動作するようにする場合、Podのコピーが全てまたは特定のホスト上で常に稼働していることが重要な場合にDaemonSetを使ってください。 +例えば、[ネットワークプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)には、DaemonSetとして動作するコンポーネントが含まれていることがよくあります。DaemonSetコンポーネントは、それが動作しているNodeでクラスターネットワークが動作していることを確認します。 + + +## {{% heading "whatsnext" %}} + +* [Pod](/ja/docs/concepts/workloads/pods/)について学ぶ。 + * Kubernetesの{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}コンポーネントを実行するのに便利な[静的Pod](#static-pods)について学ぶ。 +* DaemonSetの使用方法を確認する + * [DaemonSetでローリングアップデートを実施する](/docs/tasks/manage-daemon/update-daemon-set/) + * [DaemonSetでロールバックを実行する](/docs/tasks/manage-daemon/rollback-daemon-set/) + (例えば、ロールアウトが期待通りに動作しなかった場合)。 +* [Node上へのPodのスケジューリング](/ja/docs/concepts/scheduling-eviction/assign-pod-node/)の仕組みを理解する +* よくDaemonSetとして実行される[デバイスプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)と[アドオン](/ja/docs/concepts/cluster-administration/addons/)について学ぶ。 +* `DaemonSet`は、Kubernetes REST APIのトップレベルのリソースです。デーモンセットのAPIを理解するため{{< api-reference page="workload-resources/daemon-set-v1" >}}オブジェクトの定義を読む。 diff --git a/content/ja/docs/contribute/localization.md b/content/ja/docs/contribute/localization.md index f151b850e94..f75e25bcba9 100644 --- a/content/ja/docs/contribute/localization.md +++ b/content/ja/docs/contribute/localization.md @@ -1,6 +1,7 @@ --- title: Kubernetesのドキュメントを翻訳する content_type: concept +weight: 50 card: name: contribute weight: 30 diff --git a/content/ja/docs/contribute/review/for-approvers.md b/content/ja/docs/contribute/review/for-approvers.md index ea17bba99ec..822396fbf67 100644 --- a/content/ja/docs/contribute/review/for-approvers.md +++ b/content/ja/docs/contribute/review/for-approvers.md @@ -61,13 +61,13 @@ Prowコマンド | Roleの制限 | 説明 :------------|:------------------|:----------- `/lgtm` | Organizationメンバー | PRのレビューが完了し、変更に納得したことを知らせる。 `/approve` | Approver | PRをマージすることを承認する。 -`/assign` | ReviewerまたはApprover | PRのレビューまたは承認するひとを割り当てる。 -`/close` | ReviewerまたはApprover | issueまたはPRをcloseする。 +`/assign` | 誰でも | PRのレビューまたは承認するひとを割り当てる。 +`/close` | Organizationメンバー | issueまたはPRをcloseする。 `/hold` | 誰でも | `do-not-merge/hold`ラベルを追加して、自動的にマージできないPRであることを示す。 `/hold cancel` | 誰でも | `do-not-merge/hold`ラベルを削除する。 {{< /table >}} -PRで利用できるすべてのコマンド一覧を確認するには、[Prowコマンドリファレンス](https://prow.k8s.io/command-help)を参照してください。 +PRで利用できるすべてのコマンドを確認するには、[Prowコマンドリファレンス](https://prow.k8s.io/command-help?repo=kubernetes%2Fwebsite)を参照してください。 ## issueのトリアージとカテゴリー分類 @@ -141,7 +141,7 @@ SIG Docsでは、対処方法をドキュメントに書いても良いくらい ### Blogに関するissue -[Kubernetes Blog](https://kubernetes.io/blog/)のエントリーは時間が経つと情報が古くなるものだと考えています。そのため、ブログのエントリーは1年以内のものだけをメンテナンスします。1年以上前のブログエントリーに関するissueは修正せずにcloseします。 +[Kubernetes Blog](/blog/)のエントリーは時間が経つと情報が古くなるものだと考えています。そのため、ブログのエントリーは1年以内のものだけをメンテナンスします。1年以上前のブログエントリーに関するissueは修正せずにcloseします。 ### サポートリクエストまたはコードのバグレポート {#support-requests-or-code-bug-reports} diff --git a/content/ja/docs/contribute/style/hugo-shortcodes/index.md b/content/ja/docs/contribute/style/hugo-shortcodes/index.md index a5596e027bb..1aedca628be 100644 --- a/content/ja/docs/contribute/style/hugo-shortcodes/index.md +++ 
b/content/ja/docs/contribute/style/hugo-shortcodes/index.md @@ -1,6 +1,7 @@ --- title: カスタムHugoショートコード content_type: concept +weight: 120 --- diff --git a/content/ja/docs/reference/_index.md b/content/ja/docs/reference/_index.md index cafca8fe448..3940d23f429 100644 --- a/content/ja/docs/reference/_index.md +++ b/content/ja/docs/reference/_index.md @@ -30,7 +30,7 @@ content_type: concept ## CLIリファレンス -* [kubectl](/ja/docs/reference/kubectl/overview/) - コマンドの実行やKubernetesクラスターの管理に使う主要なCLIツールです。 +* [kubectl](/ja/docs/reference/kubectl/) - コマンドの実行やKubernetesクラスターの管理に使う主要なCLIツールです。 * [JSONPath](/ja/docs/reference/kubectl/jsonpath/) - kubectlで[JSONPath記法](https://goessner.net/articles/JsonPath/)を使うための構文ガイドです。 * [kubeadm](/ja/docs/reference/setup-tools/kubeadm/) - セキュアなKubernetesクラスターを簡単にプロビジョニングするためのCLIツールです。 diff --git a/content/ja/docs/reference/command-line-tools-reference/_index.md b/content/ja/docs/reference/command-line-tools-reference/_index.md index 89d64ce646d..2e4806da13d 100644 --- a/content/ja/docs/reference/command-line-tools-reference/_index.md +++ b/content/ja/docs/reference/command-line-tools-reference/_index.md @@ -1,5 +1,5 @@ --- title: コマンドラインツールのリファレンス -weight: 60 +weight: 120 toc-hide: true --- diff --git a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md index 76ee414f7bb..5667cbee61b 100644 --- a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md @@ -346,7 +346,7 @@ GAになってからさらなる変更を加えることは現実的ではない 各フィーチャーゲートは特定の機能を有効/無効にするように設計されています。 - `Accelerators`: DockerでのNvidia GPUのサポートを有効にします。 -- `AdvancedAuditing`: [高度な監査機能](/docs/tasks/debug-application-cluster/audit/#advanced-audit)を有効にします。 +- `AdvancedAuditing`: [高度な監査機能](/ja/docs/tasks/debug/debug-cluster/audit/#advanced-audit)を有効にします。 - `AffinityInAnnotations`(*非推奨*): [Podのアフィニティまたはアンチアフィニティ](/ja/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)を有効にします。 - `AnyVolumeDataSource`: {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}の`DataSource`としてカスタムリソースの使用を有効にします。 - `AllowExtTrafficLocalEndpoints`: サービスが外部へのリクエストをノードのローカルエンドポイントにルーティングできるようにします。 @@ -387,7 +387,6 @@ GAになってからさらなる変更を加えることは現実的ではない - `CustomResourceWebhookConversion`: [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)から作成されたリソースのWebhookベースの変換を有効にします。 - `DevicePlugins`: [device-plugins](/docs/concepts/cluster-administration/device-plugins/)によるノードでのリソースプロビジョニングを有効にします。 - `DryRun`: サーバーサイドでの[dry run](/docs/reference/using-api/api-concepts/#dry-run)リクエストを有効にします。 -- `DynamicAuditing`: [動的監査](/docs/tasks/debug-application-cluster/audit/#dynamic-backend)を有効にします。 - `DynamicKubeletConfig`: kubeletの動的構成を有効にします。[kubeletの再設定](/docs/tasks/administer-cluster/reconfigure-kubelet/)を参照してください。 - `DynamicProvisioningScheduling`: デフォルトのスケジューラーを拡張してボリュームトポロジーを認識しPVプロビジョニングを処理します。この機能は、v1.12の`VolumeScheduling`機能に完全に置き換えられました。 - `DynamicVolumeProvisioning`(*非推奨*): Podへの永続ボリュームの[動的プロビジョニング](/ja/docs/concepts/storage/dynamic-provisioning/)を有効にします。 diff --git a/content/ja/docs/reference/glossary/kubectl.md b/content/ja/docs/reference/glossary/kubectl.md new file mode 100644 index 00000000000..14bf028f943 --- /dev/null +++ b/content/ja/docs/reference/glossary/kubectl.md @@ -0,0 +1,19 @@ +--- +title: Kubectl +id: kubectl +date: 2018-04-12 +full_link: /ja/docs/reference/kubectl/ +short_description: > + 
Kubernetesクラスターと通信するためのコマンドラインツールです。 + +aka: +- kubectl +tags: +- tool +- fundamental +--- +Kubernetes APIを使用してKubernetesクラスターの{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}と通信するためのコマンドラインツールです。 + + + +Kubernetesオブジェクトの作成、検査、更新、削除には `kubectl` を使用することができます。 diff --git a/content/ja/docs/reference/kubectl/_index.md b/content/ja/docs/reference/kubectl/_index.md index 7b6c2d720b1..bceb521c2fa 100644 --- a/content/ja/docs/reference/kubectl/_index.md +++ b/content/ja/docs/reference/kubectl/_index.md @@ -1,5 +1,495 @@ --- -title: "kubectl CLI" -weight: 60 +title: コマンドラインツール(kubectl) +content_type: reference +weight: 110 +no_list: true +card: + name: reference + weight: 20 --- + +{{< glossary_definition prepend="Kubernetesが提供する、" term_id="kubectl" length="short" >}} + +このツールの名前は、`kubectl` です。 + +`kubectl`コマンドラインツールを使うと、Kubernetesクラスターを制御できます。環境設定のために、`kubectl`は、`$HOME/.kube`ディレクトリにある`config`という名前のファイルを探します。他の[kubeconfig](/ja/docs/concepts/configuration/organize-cluster-access-kubeconfig/)ファイルは、`KUBECONFIG`環境変数を設定するか、[`--kubeconfig`](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)フラグを設定することで指定できます。 + +この概要では、`kubectl`の構文を扱い、コマンド操作を説明し、一般的な例を示します。サポートされているすべてのフラグやサブコマンドを含め、各コマンドの詳細については、[kubectl](/docs/reference/generated/kubectl/kubectl-commands/)リファレンスドキュメントを参照してください。 + +インストール方法については、[kubectlのインストールおよびセットアップ](/ja/docs/tasks/tools/install-kubectl/)をご覧ください。クイックガイドは、[cheat sheet](/docs/reference/kubectl/cheatsheet/)をご覧ください。`docker`コマンドラインツールに慣れている方のために、[`kubectl` for Docker Users](/docs/reference/kubectl/docker-cli-to-kubectl/)でKubernetesの同等のコマンドを説明しています。 + + + +## 構文 + +ターミナルウィンドウから`kubectl`コマンドを実行するには、以下の構文を使用します。 + +```shell +kubectl [command] [TYPE] [NAME] [flags] +``` + +ここで、`command`、`TYPE`、`NAME`、`flags`は、以下を表します。 + +* `command`: 1つ以上のリソースに対して実行したい操作を指定します。例えば、`create`、`get`、`describe`、`delete`です。 + +* `TYPE`: [リソースタイプ](#resource-types)を指定します。リソースタイプは大文字と小文字を区別せず、単数形や複数形、省略形を指定できます。例えば、以下のコマンドは同じ出力を生成します。 + + ```shell + kubectl get pod pod1 + kubectl get pods pod1 + kubectl get po pod1 + ``` + +* `NAME`: リソースの名前を指定します。名前は大文字と小文字を区別します。`kubectl get pods`のように名前が省略された場合は、すべてのリソースの詳細が表示されます。 + + 複数のリソースに対して操作を行う場合は、各リソースをタイプと名前で指定するか、1つまたは複数のファイルを指定することができます。 + + * リソースをタイプと名前で指定する場合 + + * タイプがすべて同じとき、リソースをグループ化するには`TYPE1 name1 name2 name<#>`とします。
+ 例: `kubectl get pod example-pod1 example-pod2` + + * 複数のリソースタイプを個別に指定するには、`TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`とします。
+ 例: `kubectl get pod/example-pod1 replicationcontroller/example-rc1` + + * リソースを1つ以上のファイルで指定する場合は、`-f file1 -f file2 -f file<#>`とします。 + + * 特に設定ファイルについては、YAMLの方がより使いやすいため、[JSONではなくYAMLを使用してください](/ja/docs/concepts/configuration/overview/#一般的な設定のtips)。
+ 例: `kubectl get pod -f ./pod.yaml` + +* `flags`: オプションのフラグを指定します。例えば、`-s`または`--server`フラグを使って、Kubernetes APIサーバーのアドレスやポートを指定できます。
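+構文の各要素を組み合わせた最小限の実行例を以下に示します(`example-pod1`というPod名と`default`というNamespaceは、説明のための仮の値です)。
+
+```shell
+# command=get、TYPE=pods、NAME=example-pod1、flagsとして-n(Namespaceの指定)と-o wide(出力フォーマット)を指定した例
+kubectl get pods example-pod1 -n default -o wide
+```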
+ +{{< caution >}} +コマンドラインから指定したフラグは、デフォルト値および対応する任意の環境変数を上書きします。 +{{< /caution >}} + +ヘルプが必要な場合は、ターミナルウィンドウから`kubectl help`を実行してください。 + +## 操作 + +以下の表に、`kubectl`のすべての操作の簡単な説明と一般的な構文を示します。 + +操作                 | 構文 | 説明 +-------------------- | -------------------- | -------------------- +`alpha` | `kubectl alpha SUBCOMMAND [flags]` | アルファ機能に該当する利用可能なコマンドを一覧表示します。これらの機能は、デフォルトではKubernetesクラスターで有効になっていません。 +`annotate` | kubectl annotate (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | 1つ以上のリソースのアノテーションを、追加または更新します。 +`api-resources` | `kubectl api-resources [flags]` | 利用可能なAPIリソースを一覧表示します。 +`api-versions` | `kubectl api-versions [flags]` | 利用可能なAPIバージョンを一覧表示します。 +`apply` | `kubectl apply -f FILENAME [flags]` | ファイルまたは標準入力から、リソースの設定変更を適用します。 +`attach` | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | 実行中のコンテナにアタッチして、出力ストリームを表示するか、コンテナ(標準入力)と対話します。 +`auth` | `kubectl auth [flags] [options]` | 認可を検査します。 +`autoscale` | kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags] | ReplicationControllerで管理されているPodのセットを、自動的にスケールします。 +`certificate` | `kubectl certificate SUBCOMMAND [options]` | 証明書のリソースを変更します。 +`cluster-info` | `kubectl cluster-info [flags]` | クラスター内のマスターとサービスに関するエンドポイント情報を表示します。 +`completion` | `kubectl completion SHELL [options]` | 指定されたシェル(bashまたはzsh)のシェル補完コードを出力します。 +`config` | `kubectl config SUBCOMMAND [flags]` | kubeconfigファイルを変更します。詳細は、個々のサブコマンドを参照してください。 +`convert` | `kubectl convert -f FILENAME [options]` | 異なるAPIバージョン間で設定ファイルを変換します。YAMLとJSONに対応しています。 +`cordon` | `kubectl cordon NODE [options]` | Nodeをスケジュール不可に設定します。 +`cp` | `kubectl cp [options]` | コンテナとの間でファイルやディレクトリをコピーします。 +`create` | `kubectl create -f FILENAME [flags]` | ファイルまたは標準入力から、1つ以上のリソースを作成します。 +`delete` | kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags] | ファイル、標準入力、またはラベルセレクター、リソースセレクター、リソースを指定して、リソースを削除します。 +`describe` | kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags] | 1つ以上のリソースの詳細な状態を表示します。 +`diff` | `kubectl diff -f FILENAME [flags]` | ファイルまたは標準入力と、現在の設定との差分を表示します。 +`drain` | `kubectl drain NODE [options]` | メンテナンスの準備のためにNodeをdrainします。 +`edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | デフォルトのエディタを使い、サーバー上の1つ以上のリソースの定義を編集し、更新します。 +`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Pod内のコンテナに対して、コマンドを実行します。 +`explain` | `kubectl explain [--recursive=false] [flags]` | 様々なリソースのドキュメントを取得します。例えば、Pod、Node、Serviceなどです。 +`expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | ReplicationController、Service、Podを、新しいKubernetesサービスとして公開します。 +`get` | kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags] | 1つ以上のリソースを表示します。 +`kustomize` | `kubectl kustomize [flags] [options]` | kustomization.yamlファイル内の指示から生成されたAPIリソースのセットを一覧表示します。引数はファイルを含むディレクトリのパス、またはリポジトリルートに対して同じ場所を示すパスサフィックス付きのgitリポジトリのURLを指定しなければなりません。 +`label` | kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | 1つ以上のリソースのラベルを、追加または更新します。 +`logs` | `kubectl logs POD [-c CONTAINER] [--follow] [flags]` | Pod内のコンテナのログを表示します。 +`options` | `kubectl options` | すべてのコマンドに適用されるグローバルコマンドラインオプションを一覧表示します。 +`patch` | kubectl patch (-f FILENAME | TYPE NAME | TYPE/NAME) --patch PATCH [flags] | Strategic Merge Patchの処理を使用して、リソースの1つ以上のフィールドを更新します。 +`plugin` | `kubectl plugin [flags] [options]` | プラグインと対話するためのユーティリティを提供します。 +`port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | 1つ以上のローカルポートを、Podに転送します。 +`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Kubernetes APIサーバーへのプロキシーを実行します。 +`replace` | `kubectl replace -f FILENAME` | ファイルや標準入力から、リソースを置き換えます。 +`rollout` | `kubectl rollout SUBCOMMAND [options]` | リソースのロールアウトを管理します。有効なリソースには、Deployment、DaemonSetとStatefulSetが含まれます。 +`run` | kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server|client|none] [--overrides=inline-json] [flags] | 指定したイメージを、クラスター上で実行します。 +`scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | 指定したReplicationControllerのサイズを更新します。 +`set` | `kubectl set SUBCOMMAND [options]` | アプリケーションリソースを設定します。 +`taint` | `kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]` | 1つ以上のNodeのtaintを更新します。 +`top` | `kubectl top [flags] [options]` | リソース(CPU/メモリー/ストレージ)の使用量を表示します。 +`uncordon` | `kubectl uncordon NODE [options]` | Nodeをスケジュール可能に設定します。 +`version` | `kubectl version [--client] [flags]` | クライアントとサーバーで実行中のKubernetesのバージョンを表示します。 +`wait` | kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available] [options] | 実験中の機能: 1つ以上のリソースが特定の状態になるまで待ちます。 + +コマンド操作について詳しく知りたい場合は、[kubectl](/docs/reference/kubectl/kubectl/)リファレンスドキュメントを参照してください。 + +## リソースタイプ {#resource-types} + +以下の表に、サポートされているすべてのリソースと、省略されたエイリアスの一覧を示します。 + +(この出力は`kubectl api-resources`から取得でき、Kubernetes 1.13.3時点で正確でした。) + +| リソース名 | 短縮名 | APIグループ | 名前空間に属するか | リソースの種類 | +|---|---|---|---|---| +| `bindings` | | | true | Binding | +| `componentstatuses` | `cs` | | false | ComponentStatus | +| `configmaps` | `cm` | | true | ConfigMap | +| `endpoints` | `ep` | | true | Endpoints | +| `limitranges` | `limits` | | true | LimitRange | +| `namespaces` | `ns` | | false | Namespace | +| `nodes` | `no` | | false | Node | +| `persistentvolumeclaims` | `pvc` | | true | PersistentVolumeClaim | +| `persistentvolumes` | `pv` | | false | PersistentVolume | +| `pods` | `po` | | true | Pod | +| `podtemplates` | | | true | PodTemplate | +| `replicationcontrollers` | `rc` | | true | ReplicationController | +| `resourcequotas` | `quota` | | true | ResourceQuota | +| `secrets` | | | true | Secret | +| `serviceaccounts` | `sa` | | true | ServiceAccount | +| `services` | `svc` | | true | Service | +| `mutatingwebhookconfigurations` | | admissionregistration.k8s.io | false | MutatingWebhookConfiguration | +| `validatingwebhookconfigurations` | | admissionregistration.k8s.io | false | ValidatingWebhookConfiguration | +| `customresourcedefinitions` | `crd`, `crds` | apiextensions.k8s.io | false | CustomResourceDefinition | +| `apiservices` | | apiregistration.k8s.io | false | APIService | +| `controllerrevisions` | | apps | true | ControllerRevision | +| `daemonsets` | `ds` | apps | true | DaemonSet | +| 
`deployments` | `deploy` | apps | true | Deployment | +| `replicasets` | `rs` | apps | true | ReplicaSet | +| `statefulsets` | `sts` | apps | true | StatefulSet | +| `tokenreviews` | | authentication.k8s.io | false | TokenReview | +| `localsubjectaccessreviews` | | authorization.k8s.io | true | LocalSubjectAccessReview | +| `selfsubjectaccessreviews` | | authorization.k8s.io | false | SelfSubjectAccessReview | +| `selfsubjectrulesreviews` | | authorization.k8s.io | false | SelfSubjectRulesReview | +| `subjectaccessreviews` | | authorization.k8s.io | false | SubjectAccessReview | +| `horizontalpodautoscalers` | `hpa` | autoscaling | true | HorizontalPodAutoscaler | +| `cronjobs` | `cj` | batch | true | CronJob | +| `jobs` | | batch | true | Job | +| `certificatesigningrequests` | `csr` | certificates.k8s.io | false | CertificateSigningRequest | +| `leases` | | coordination.k8s.io | true | Lease | +| `events` | `ev` | events.k8s.io | true | Event | +| `ingresses` | `ing` | extensions | true | Ingress | +| `networkpolicies` | `netpol` | networking.k8s.io | true | NetworkPolicy | +| `poddisruptionbudgets` | `pdb` | policy | true | PodDisruptionBudget | +| `podsecuritypolicies` | `psp` | policy | false | PodSecurityPolicy | +| `clusterrolebindings` | | rbac.authorization.k8s.io | false | ClusterRoleBinding | +| `clusterroles` | | rbac.authorization.k8s.io | false | ClusterRole | +| `rolebindings` | | rbac.authorization.k8s.io | true | RoleBinding | +| `roles` | | rbac.authorization.k8s.io | true | Role | +| `priorityclasses` | `pc` | scheduling.k8s.io | false | PriorityClass | +| `csidrivers` | | storage.k8s.io | false | CSIDriver | +| `csinodes` | | storage.k8s.io | false | CSINode | +| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass | +| `volumeattachments` | | storage.k8s.io | false | VolumeAttachment | + +## 出力オプション + +ある特定のコマンドの出力に対してフォーマットやソートを行う方法については、以下の節を参照してください。どのコマンドが様々な出力オプションをサポートしているかについては、[kubectl](/docs/reference/kubectl/kubectl/)リファレンスドキュメントをご覧ください。 + +### 出力のフォーマット + +すべての`kubectl`コマンドのデフォルトの出力フォーマットは、人間が読みやすいプレーンテキスト形式です。特定のフォーマットで、詳細をターミナルウィンドウに出力するには、サポートされている`kubectl`コマンドに`-o`または`--output`フラグのいずれかを追加します。 + +#### 構文 + +```shell +kubectl [command] [TYPE] [NAME] -o <output_format> +``` + +`kubectl`の操作に応じて、以下の出力フォーマットがサポートされています。 + +出力フォーマット | 説明 +--------------| ----------- +`-o custom-columns=<spec>` | [カスタムカラム](#custom-columns)のコンマ区切りのリストを使用して、テーブルを表示します。 +`-o custom-columns-file=<filename>` | `<filename>`ファイル内の[カスタムカラム](#custom-columns)のテンプレートを使用して、テーブルを表示します。 +`-o json` | JSON形式のAPIオブジェクトを出力します。 +`-o jsonpath=