Merge branch 'kubernetes:main' into main
commit cbbc5e1863
@@ -138,6 +138,7 @@ aliases:
    - Sea-n
    - tanjunchen
    - tengqm
    - windsonsea
    - xichengliudui
  sig-docs-zh-reviews: # PR reviews for Chinese content
    - chenrui333
@@ -9,7 +9,7 @@ This repository contains the assets required to build the [Kubernetes website an
 ## Using this repository

-You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
+You can run the website locally using [Hugo (Extended version)](https://gohugo.io/), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.

 ## Prerequisites
@@ -70,7 +70,7 @@ This will start the local Hugo server on port 1313. Open up your browser to <htt
 ## Building the API reference pages

-The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
+The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, also known as OpenAPI specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.

 To update the reference pages for a new Kubernetes release follow these steps:
@@ -42,12 +42,12 @@ Kubernetes is open source and gives you the freedom to run the infrastructure on-premises
 <button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch the video</button>
 <br>
 <br>
-<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Attend KubeCon North America from October 24–28, 2022</a>
+<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon Europe from April 18–21, 2023</a>
 <br>
 <br>
 <br>
 <br>
-<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon Europe from April 17–21, 2023</a>
+<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Attend KubeCon North America from November 6–9, 2023</a>
 </div>
 <div id="videoPlayer">
 <iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@@ -0,0 +1,48 @@
---
layout: blog
title: "k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023"
date: 2023-03-10
slug: k8s-gcr-io-freeze-announcement
---

**Authors**: Michael Mueller (Giant Swarm)

The Kubernetes project runs a community-owned container image registry called `registry.k8s.io` to host its container images. On the 3rd of April 2023, the legacy registry `k8s.gcr.io` will be frozen, and no further images for Kubernetes and its subprojects will be pushed to it.

The `registry.k8s.io` registry has already been available for several months and will replace the legacy one. We published a [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/) about the benefits to the community and the Kubernetes project. That post also announced that future Kubernetes releases will no longer be published to the legacy registry.

What this means for contributors:
- If you are a maintainer of a subproject, you will need to update your manifests and Helm charts to use the new registry.

What this change means for end users:
- The Kubernetes 1.27 release will not be published to the legacy registry.
- From April, patch releases for 1.24, 1.25, and 1.26 will no longer be published to the legacy registry. Please see the timeline below for details on the final patch releases in the legacy registry.
- Starting with the 1.25 release, the default image registry was changed to `registry.k8s.io`. This value can be overridden in `kubeadm` and the `kubelet`, but setting it to `k8s.gcr.io` will fail for new releases after April, as they will not be pushed to the legacy registry.
- If you want to increase the reliability of your clusters and remove the dependency on the community-owned registry, or you run clusters in an environment with restricted external network access, you should consider hosting a local image registry as a mirror. Some cloud providers offer hosted services for this.
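The registry override mentioned above can be expressed declaratively for kubeadm. A minimal sketch, assuming the kubeadm v1beta3 configuration API; pass the file to `kubeadm init --config`:

```yaml
# Minimal sketch: pin the image registry explicitly so image pulls do not
# rely on the kubeadm default. `imageRepository` is the relevant field.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.k8s.io
```

Leaving this field set to `k8s.gcr.io` will fail for releases published after the freeze, since those images are never pushed to the legacy registry.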

## Timeline of the changes

- `k8s.gcr.io` will be frozen on the 3rd of April 2023
- The 1.27 release is expected on the 12th of April 2023
- The last 1.23 release on `k8s.gcr.io` will be 1.23.18 (1.23 goes end-of-life before the freeze)
- The last 1.24 release on `k8s.gcr.io` will be 1.24.12
- The last 1.25 release on `k8s.gcr.io` will be 1.25.8
- The last 1.26 release on `k8s.gcr.io` will be 1.26.3

## What's next

Please make sure your clusters do not depend on the legacy image registry. For example, you can check this by running the following command, which prints the list of container images used by the Pods:

```shell
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
```
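Building on the pipeline above, filtering the image list for the legacy registry makes any remaining dependency obvious. A small sketch, with made-up image names standing in for real cluster output:

```shell
# Hypothetical image references, standing in for the output of the
# kubectl query above; grep keeps only images from the legacy registry.
printf '%s\n' \
  "registry.k8s.io/kube-apiserver:v1.26.3" \
  "k8s.gcr.io/pause:3.5" \
  "registry.k8s.io/coredns/coredns:v1.9.3" \
  | grep '^k8s\.gcr\.io/'
```

Any line printed points at an image that will stop receiving updates once the registry is frozen; `grep` exits non-zero when nothing matches.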

There may well be other dependencies on the legacy registry, so make sure you review all of them to keep your clusters working and up to date.

## Acknowledgments

__Change is hard__, and evolving our container image registry is needed to ensure a sustainable future for the project. We strive to make things better for everyone using Kubernetes. Many contributors from all corners of our community have been working long and hard to ensure we are making the best decisions possible, executing plans, and doing our best to communicate those plans.

Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine, Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra, Brian McQueen, and Sergey Kanzhelev from SIG Node, Lubomir Ivanov from SIG Cluster Lifecycle, Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from SIG Release, Bob Killen and Kaslin Fields from SIG Contribex, and Tim Allclair from the Security Response Committee. Also a big thank you to our friends acting as liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson Jr. from Google.
@@ -34,7 +34,7 @@ Use a Docker-based solution if you want to learn Kubernetes
| | [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) |
| | [IBM Cloud Private-CE (Community Edition) on Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers)|
| | [k3s](https://k3s.io)|

{{< /table >}}

## Production environment
@@ -98,5 +98,6 @@ The following table of production-environment solutions lists providers and their
| [VEXXHOST](https://vexxhost.com/) | ✔ | ✔ | | | |
| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks)
| [Z.A.R.V.I.S.](https://zarvis.ai/) | ✔ | | | | | |
{{< /table >}}
@@ -52,7 +52,7 @@ Creating machine...
 Starting local Kubernetes cluster...
 ```
 ```shell
-kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
+kubectl create deployment hello-minikube --image=registry.k8s.io/echoserver:1.10
 ```
 ```
 deployment.apps/hello-minikube created
@@ -1,5 +0,0 @@
----
-title: "Install Service Catalog"
-weight: 150
----
@@ -77,7 +77,7 @@ Deployments are the recommended way to manage creating and scaling Pods
 The Pod runs a container based on the provided Docker image.

 ```shell
-kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
+kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4
 ```

 2. View the Deployment:
@@ -33,7 +33,7 @@ General Availability means different things for different projects. For kubeadm,
 We now consider kubeadm to have achieved GA-level maturity in each of these important domains:

 * **Stable command-line UX** --- The kubeadm CLI conforms to [#5a GA rule of the Kubernetes Deprecation Policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-flag-or-cli), which states that a command or flag that exists in a GA version must be kept for at least 12 months after deprecation.
-* **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/wgs/0014-20180707-componentconfig-api-types-to-staging.md) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/).
+* **Stable underlying implementation** --- kubeadm now creates a new Kubernetes cluster using methods that shouldn't change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join/) flow, and [ComponentConfig](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/wgs/115-componentconfig) is used for configuring the [kubelet](/docs/reference/command-line-tools-reference/kubelet/).
 * **Configuration file schema** --- With the new **v1beta1** API version, you can now tune almost every part of the cluster declaratively and thus build a "GitOps" flow around kubeadm-built clusters. In future versions, we plan to graduate the API to version **v1** with minimal changes (and perhaps none).
 * **The "toolbox" interface of kubeadm** --- Also known as **phases**. If you don't want to perform all [`kubeadm init`](/docs/reference/setup-tools/kubeadm/kubeadm-init/) tasks, you can instead apply more fine-grained actions using the `kubeadm init phase` command (for example generating certificates or control plane [Static Pod](/docs/tasks/administer-cluster/static-pod/) manifests).
 * **Upgrades between minor versions** --- The [`kubeadm upgrade`](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) command is now fully GA. It handles control plane upgrades for you, which includes upgrades to [etcd](https://etcd.io), the [API Server](/docs/reference/using-api/api-overview/), the [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/), and the [Scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). You can seamlessly upgrade your cluster between minor or patch versions (e.g. v1.12.2 -> v1.13.1 or v1.13.1 -> v1.13.3).
@@ -83,6 +83,7 @@ Adopting a common convention for annotations ensures consistency and understanda
| `a8r.io/uptime` | Link to external uptime dashboard. |
| `a8r.io/performance` | Link to external performance dashboard. |
| `a8r.io/dependencies` | Unstructured text describing the service dependencies for humans. |
{{< /table >}}
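As an illustration of how these annotations attach to a workload, here is a hypothetical Service manifest; the service name and URLs are made-up examples, and only keys from the table above are used:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: checkout   # hypothetical service name
  annotations:
    a8r.io/uptime: "https://uptime.example.com/checkout"
    a8r.io/performance: "https://grafana.example.com/d/checkout"
    a8r.io/dependencies: "Depends on the payments and inventory services."
spec:
  selector:
    app: checkout
  ports:
    - port: 80
```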

## Visualizing annotations: Service Catalogs
@@ -11,7 +11,7 @@ Starting with Kubernetes 1.25, our container image registry has changed from k8s

 ## TL;DR: What you need to know about this change

-* Container images for Kubernetes releases from 1.25 onward are no longer published to k8s.gcr.io, only to registry.k8s.io.
+* Container images for Kubernetes releases from <del>1.25</del> 1.27 onward are not published to k8s.gcr.io, only to registry.k8s.io.
 * In the upcoming December patch releases, the new registry domain default will be backported to all branches still in support (1.22, 1.23, 1.24).
 * If you run in a restricted environment and apply strict domain/IP address access policies limited to k8s.gcr.io, the __image pulls will not function__ after the migration to this new registry. For these users, the recommended method is to mirror the release images to a private registry.
@@ -68,8 +68,15 @@ The image used by kubelet for the pod sandbox (`pause`) can be overridden by set
 kubelet --pod-infra-container-image=k8s.gcr.io/pause:3.5
 ```

+## Legacy container registry freeze {#registry-freeze}
+
+[k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/) announces the freeze of the
+legacy k8s.gcr.io image registry. Read that article for more details.
+
 ## Acknowledgments

 __Change is hard__, and evolving our image-serving platform is needed to ensure a sustainable future for the project. We strive to make things better for everyone using Kubernetes. Many contributors from all corners of our community have been working long and hard to ensure we are making the best decisions possible, executing plans, and doing our best to communicate those plans.

 Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine, Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra, Brian McQueen, and Sergey Kanzhelev from SIG Node, Lubomir Ivanov from SIG Cluster Lifecycle, Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from SIG Release, Bob Killen and Kaslin Fields from SIG Contribex, Tim Allclair from the Security Response Committee. Also a big thank you to our friends acting as liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson Jr. from Google.
+
+_This article was updated on the 28th of February 2023._
@@ -207,3 +207,11 @@ and without losing the state of the containers in that Pod.
 You can reach SIG Node by several means:
 - Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
 - [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
+
+## Further reading
+
+Please see the follow-up article [Forensic container
+analysis][forensic-container-analysis] for details on how a container checkpoint
+can be analyzed.
+
+[forensic-container-analysis]: /blog/2023/03/10/forensic-container-analysis/
@@ -8,39 +8,99 @@ slug: security-behavior-analysis
**Author:**
David Hadas (IBM Research Labs)

_This post warns DevOps engineers against a false sense of security. Following security best practices when developing and configuring microservices does not result in non-vulnerable microservices. The post shows that although all deployed microservices are vulnerable, there is much that can be done to ensure microservices are not exploited. It explains how analyzing the behavior of clients and services from a security standpoint, named here **"Security-Behavior Analytics"**, can protect the deployed vulnerable microservices. It points to [Guard](http://knative.dev/security-guard), an open source project offering security-behavior monitoring and control of Kubernetes microservices presumed vulnerable._

As cyber attacks continue to intensify in sophistication, organizations deploying cloud services continue to grow their cyber investments, aiming to produce safe and non-vulnerable services. However, the year-by-year growth in cyber investments does not result in a parallel reduction in cyber incidents. Instead, the number of cyber incidents continues to grow annually. Evidently, organizations are doomed to fail in this struggle - no matter how much effort is made to detect and remove cyber weaknesses from deployed services, it seems offenders always have the upper hand.

Considering the current spread of offensive tools, sophistication of offensive players, and ever-growing cyber financial gains to offenders, any cyber strategy that relies on constructing a non-vulnerable, weakness-free service in 2023 is clearly too naïve. It seems the only viable strategy is to:

➥ **Admit that your services are vulnerable!**

In other words, consciously accept that you will never create completely invulnerable services. If your opponents find even a single weakness as an entry-point, you lose! Admitting that in spite of your best efforts, all your services are still vulnerable is an important first step. Next, this post discusses what you can do about it...

## How to protect microservices from being exploited

Being vulnerable does not necessarily mean that your service will be exploited. Though your services are vulnerable in some ways unknown to you, offenders still need to identify these vulnerabilities and then exploit them. If offenders fail to exploit your service vulnerabilities, you win! In other words, having a vulnerability that can’t be exploited represents a risk that can’t be realized.

{{< figure src="security_behavior_figure_1.svg" alt="Image of an example of offender gaining foothold in a service" class="diagram-large" caption="Figure 1. An Offender gaining foothold in a vulnerable service" >}}

The above diagram shows an example in which the offender does not yet have a foothold in the service; that is, it is assumed that your service does not run code controlled by the offender on day 1. In our example the service has vulnerabilities in the API exposed to clients. To gain an initial foothold the offender uses a malicious client to try and exploit one of the service API vulnerabilities. The malicious client sends an exploit that triggers some unplanned behavior of the service.

More specifically, let’s assume the service is vulnerable to an SQL injection. The developer failed to sanitize the user input properly, thereby allowing clients to send values that would change the intended behavior. In our example, if a client sends a query string with key “username” and value of _“tom or 1=1”_, the client will receive the data of all users. Exploiting this vulnerability requires the client to send an irregular string as the value. Note that benign users will not be sending a string with spaces or with the equal sign character as a username; instead they will normally send legal usernames, which for example may be defined as a short sequence of characters a-z. No legal username can trigger service unplanned behavior.
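To make the username check concrete, here is a minimal sketch of such input validation in shell; the `a-z` pattern and the 16-character cap are assumptions for illustration, not taken from the post:

```shell
# Accept only a short sequence of characters a-z (length cap of 16 is assumed);
# an exploit string such as "tom or 1=1" contains spaces and '=' and is rejected.
is_legal_username() {
  printf '%s' "$1" | grep -Eq '^[a-z]{1,16}$'
}

is_legal_username "tom" && echo "accepted"          # prints "accepted"
is_legal_username "tom or 1=1" || echo "rejected"   # prints "rejected"
```

A check like this, applied before a request reaches the service, is one form of the client-behavior monitoring discussed next.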

In this simple example, one can already identify several opportunities to detect and block an attempt to exploit the vulnerability (un)intentionally left behind by the developer, making the vulnerability unexploitable. First, the malicious client behavior differs from the behavior of benign clients, as it sends irregular requests. If such a change in behavior is detected and blocked, the exploit will never reach the service. Second, the service behavior in response to the exploit differs from the service behavior in response to a regular request. Such behavior may include making subsequent irregular calls to other services such as a data store, taking irregular time to respond, and/or responding to the malicious client with an irregular response (for example, containing much more data than normally sent in case of benign clients making regular requests). Service behavioral changes, if detected, will also allow blocking the exploit in different stages of the exploitation attempt.

More generally:

- Monitoring the behavior of clients can help detect and block exploits against service API vulnerabilities. In fact, deploying efficient client behavior monitoring makes many vulnerabilities unexploitable and others very hard to achieve. To succeed, the offender needs to create an exploit undetectable from regular requests.

- Monitoring the behavior of services can help detect services as they are being exploited regardless of the attack vector used. Efficient service behavior monitoring limits what an attacker may be able to achieve as the offender needs to ensure the service behavior is undetectable from regular service behavior.

Combining both approaches may add a protection layer to the deployed vulnerable services, drastically decreasing the probability for anyone to successfully exploit any of the deployed vulnerable services. Next, let us identify four use cases where you need to use security-behavior monitoring.

## Use cases

One can identify the following four different stages in the life of any service from a security standpoint. In each stage, security-behavior monitoring is required to meet different challenges:

Service State | Use case | What do you need in order to cope with this use case?
------------- | ------------- | -----------------------------------------
@ -53,25 +113,57 @@ Fortunately, microservice architecture is well suited to security-behavior monit
|
|||
|
||||
## Security-Behavior of microservices versus monoliths {#microservices-vs-monoliths}
|
||||
|
||||
Kubernetes is often used to support workloads designed with microservice architecture. By design, microservices aim to follow the UNIX philosophy of "Do One Thing And Do It Well". Each microservice has a bounded context and a clear interface. In other words, you can expect the microservice clients to send relatively regular requests and the microservice to present a relatively regular behavior as a response to these requests. Consequently, a microservice architecture is an excellent candidate for security-behavior monitoring.
|
||||
Kubernetes is often used to support workloads designed with microservice architecture.
|
||||
By design, microservices aim to follow the UNIX philosophy of "Do One Thing And Do It Well".
|
||||
Each microservice has a bounded context and a clear interface. In other words, you can expect
|
||||
the microservice clients to send relatively regular requests and the microservice to present
|
||||
a relatively regular behavior as a response to these requests. Consequently, a microservice
|
||||
architecture is an excellent candidate for security-behavior monitoring.
|
||||
|
||||
{{< figure src="security_behavior_figure_2.svg" alt="Image showing why microservices are well suited for security-behavior monitoring" class="diagram-large" caption="Figure 2. Microservices are well suited for security-behavior monitoring" >}}

The diagram above clarifies how dividing a monolithic service into a set of
microservices improves our ability to perform security-behavior monitoring
and control. In a monolithic service approach, different client requests are
intertwined, resulting in a diminished ability to identify irregular client
behaviors. Without prior knowledge, an observer of the intertwined client
requests will find it hard to distinguish between types of requests and their
related characteristics. Further, internal client requests are not exposed to
the observer. Lastly, the aggregated behavior of the monolithic service is a
compound of the many different internal behaviors of its components, making
it hard to identify irregular service behavior.

In a microservice environment, each microservice is expected by design to offer
a more well-defined service and to serve a better-defined type of requests. This
makes it easier for an observer to identify irregular client behavior and
irregular service behavior. Further, a microservice design exposes the internal
requests and internal services, which offer more security-behavior data for an
observer to identify irregularities. Overall, this makes the microservice design
pattern better suited for security-behavior monitoring and control.

## Security-Behavior monitoring on Kubernetes

Kubernetes deployments seeking to add security-behavior monitoring may use
[Guard](http://knative.dev/security-guard), developed under the CNCF project Knative.
Guard is integrated into the full Knative automation suite that runs on top of Kubernetes.
Alternatively, **you can deploy Guard as a standalone tool** to protect any HTTP-based workload on Kubernetes.

See:

- [Guard](https://github.com/knative-sandbox/security-guard) on GitHub,
  for using Guard as a standalone tool.
- The Knative automation suite - Read about Knative in the blog post
  [Opinionated Kubernetes](https://davidhadas.wordpress.com/2022/08/29/knative-an-opinionated-kubernetes),
  which describes how Knative simplifies and unifies the way web services are deployed on Kubernetes.
- You may contact Guard maintainers on the
  [SIG Security](https://kubernetes.slack.com/archives/C019LFTGNQ3) Slack channel
  or on the Knative community [security](https://knative.slack.com/archives/CBYV1E0TG)
  Slack channel. The Knative community channel will soon move to the
  [CNCF Slack](https://communityinviter.com/apps/cloud-native/cncf) under the name `#knative-security`.

The goal of this post is to invite the Kubernetes community to action and to introduce
Security-Behavior monitoring and control to help secure Kubernetes-based deployments.
Hopefully, as a follow-up, the community will:

1. Analyze the cyber challenges presented by different Kubernetes use cases.
1. Add appropriate security documentation for users on how to introduce Security-Behavior monitoring and control.

---
layout: blog
title: "Forensic container analysis"
date: 2023-03-10
slug: forensic-container-analysis
---

**Authors:** Adrian Reber (Red Hat)

In my previous article, [Forensic container checkpointing in
Kubernetes][forensic-blog], I introduced checkpointing in Kubernetes,
how it has to be set up, and how it can be used. The name of the
feature is Forensic container checkpointing, but I did not go into
any details on how to do the actual analysis of the checkpoint created by
Kubernetes. In this article I want to provide details on how the
checkpoint can be analyzed.

Checkpointing is still an alpha feature in Kubernetes and this article
provides a preview of how the feature might work in the future.

## Preparation

Details about how to configure Kubernetes and the underlying CRI implementation
to enable checkpointing support can be found in my [Forensic container
checkpointing in Kubernetes][forensic-blog] article.

As an example I prepared a container image (`quay.io/adrianreber/counter:blog`)
which I want to checkpoint and then analyze in this article. This container allows
me to create files in the container and also store information in memory which
I later want to find in the checkpoint.

To run that container I need a pod, and for this example I am using the following Pod manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counters
spec:
  containers:
  - name: counter
    image: quay.io/adrianreber/counter:blog
```

This results in a container called `counter` running in a pod called `counters`.

Once the container is running, I am performing the following actions with that
container:

```console
$ kubectl get pod counters --template '{{.status.podIP}}'
10.88.0.25
$ curl 10.88.0.25:8088/create?test-file
$ curl 10.88.0.25:8088/secret?RANDOM_1432_KEY
$ curl 10.88.0.25:8088
```

The first access creates a file called `test-file` with the content `test-file`
in the container and the second access stores my secret information
(`RANDOM_1432_KEY`) somewhere in the container's memory. The last access just
adds an additional line to the internal log file.

The last step before I can analyze the checkpoint is to tell Kubernetes to create
the checkpoint. As described in the previous article, this requires access to the
*kubelet*-only `checkpoint` API endpoint.

For a container named *counter* in a pod named *counters* in a namespace named
*default* the *kubelet* API endpoint is reachable at:

```shell
# run this on the node where that Pod is executing
curl -X POST "https://localhost:10250/checkpoint/default/counters/counter"
```

For completeness, the following `curl` command-line options are necessary to
have `curl` accept the *kubelet*'s self-signed certificate and authorize the
use of the *kubelet* `checkpoint` API:

```shell
--insecure --cert /var/run/kubernetes/client-admin.crt --key /var/run/kubernetes/client-admin.key
```

Once the checkpointing has finished, the checkpoint should be available at
`/var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar`

In the following steps of this article I will use the name `checkpoint.tar`
when analyzing the checkpoint archive.

## Checkpoint archive analysis using `checkpointctl`

To get some initial information about the checkpointed container I am using the
tool [checkpointctl][checkpointctl] like this:

```console
$ checkpointctl show checkpoint.tar --print-stats
+-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
| CONTAINER | IMAGE                            | ID           | RUNTIME | CREATED             | ENGINE | IP         | CHKPT SIZE | ROOT FS DIFF SIZE |
+-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
| counter   | quay.io/adrianreber/counter:blog | 059a219a22e5 | runc    | 2023-03-02T06:06:49 | CRI-O  | 10.88.0.23 | 8.6 MiB    | 3.0 KiB           |
+-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
CRIU dump statistics
+---------------+-------------+--------------+---------------+---------------+---------------+
| FREEZING TIME | FROZEN TIME | MEMDUMP TIME | MEMWRITE TIME | PAGES SCANNED | PAGES WRITTEN |
+---------------+-------------+--------------+---------------+---------------+---------------+
| 100809 us     | 119627 us   | 11602 us     | 7379 us       | 7800          | 2198          |
+---------------+-------------+--------------+---------------+---------------+---------------+
```

This already gives me some information about the checkpoint in that checkpoint
archive. I can see the name of the container and information about the container
runtime and container engine. It also lists the size of the checkpoint (`CHKPT
SIZE`). This is mainly the size of the memory pages included in the checkpoint,
but there is also information about the size of all changed files in the
container (`ROOT FS DIFF SIZE`).

The additional parameter `--print-stats` decodes information in the checkpoint
archive and displays it in the second table (*CRIU dump statistics*). This
information is collected during checkpoint creation and gives an overview of how much
time CRIU needed to checkpoint the processes in the container and how many
memory pages were analyzed and written during checkpoint creation.

## Digging deeper

With the help of `checkpointctl` I am able to get some high-level information
about the checkpoint archive. To be able to analyze the checkpoint archive
further I have to extract it. The checkpoint archive is a *tar* archive and can
be extracted with the help of `tar xf checkpoint.tar`.

Extracting the checkpoint archive will result in the following files and directories:

* `bind.mounts` - this file contains information about bind mounts and is needed
  during restore to mount all external files and directories at the right location
* `checkpoint/` - this directory contains the actual checkpoint as created by
  CRIU
* `config.dump` and `spec.dump` - these files contain metadata about the container
  which is needed during restore
* `dump.log` - this file contains the debug output of CRIU created during
  checkpointing
* `stats-dump` - this file contains the data which is used by `checkpointctl`
  to display dump statistics (`--print-stats`)
* `rootfs-diff.tar` - this file contains all changed files on the container's
  file-system

### File-system changes - `rootfs-diff.tar`

The first step in analyzing the container's checkpoint further is to look at
the files that have changed in my container. This can be done by looking at the
file `rootfs-diff.tar`:

```console
$ tar xvf rootfs-diff.tar
home/counter/logfile
home/counter/test-file
```

Now the files that changed in the container can be studied:

```console
$ cat home/counter/logfile
10.88.0.1 - - [02/Mar/2023 06:07:29] "GET /create?test-file HTTP/1.1" 200 -
10.88.0.1 - - [02/Mar/2023 06:07:40] "GET /secret?RANDOM_1432_KEY HTTP/1.1" 200 -
10.88.0.1 - - [02/Mar/2023 06:07:43] "GET / HTTP/1.1" 200 -
$ cat home/counter/test-file
test-file
```

Compared to the container image (`quay.io/adrianreber/counter:blog`) this
container is based on, I can see that the file `logfile` contains information
about all accesses to the service the container provides and that the file `test-file`
was created just as expected.

With the help of `rootfs-diff.tar` it is possible to inspect all files that
were created or changed compared to the base image of the container.

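You can also list the contents of the diff before extracting anything. The sketch below builds a synthetic stand-in archive so that the commands are self-contained; against a real checkpoint you would run `tar -tf rootfs-diff.tar` directly:

```shell
# Build a stand-in rootfs diff archive (illustration only, not a real checkpoint).
mkdir -p demo-rootfs/home/counter
echo test-file > demo-rootfs/home/counter/test-file
tar -C demo-rootfs -cf rootfs-diff-demo.tar home

# List the changed files without unpacking them.
tar -tf rootfs-diff-demo.tar
```

Listing first is useful when the diff is large and you only want to extract specific files.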
### Analyzing the checkpointed processes - `checkpoint/`

The directory `checkpoint/` contains data created by CRIU while checkpointing
the processes in the container. The content of the directory `checkpoint/`
consists of different [image files][image-files] which can be analyzed with the
help of the tool [CRIT][crit], which is distributed as part of CRIU.

First, let's get an overview of the processes inside of the container:

```console
$ crit show checkpoint/pstree.img | jq .entries[].pid
1
7
8
```

This output means that I have three processes inside of the container's PID
namespace with the PIDs 1, 7 and 8.

This is only the view from inside of the container's PID namespace. During
restore exactly these PIDs will be recreated. From outside of the
container's PID namespace the PIDs will change after restore.

The next step is to get some additional information about these three processes:

```console
$ crit show checkpoint/core-1.img | jq .entries[0].tc.comm
"bash"
$ crit show checkpoint/core-7.img | jq .entries[0].tc.comm
"counter.py"
$ crit show checkpoint/core-8.img | jq .entries[0].tc.comm
"tee"
```

This means the three processes in my container are `bash`, `counter.py` (a Python
interpreter) and `tee`. For details about the parent-child relations of these processes, there
is more data to be analyzed in `checkpoint/pstree.img`.

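The pstree entries also carry a `ppid` field, so the parent-child edges can be rendered with `jq`. The sketch below runs on synthetic JSON standing in for the `crit show checkpoint/pstree.img` output (field names follow CRIU's pstree image format; the values here are illustrative):

```shell
# Synthetic stand-in for `crit show checkpoint/pstree.img` output.
cat > pstree.json <<'EOF'
{"entries":[{"pid":1,"ppid":0},{"pid":7,"ppid":1},{"pid":8,"ppid":1}]}
EOF

# Print one "parent -> child" edge per process entry.
jq -r '.entries[] | "\(.ppid) -> \(.pid)"' pstree.json
```

On the real image this would show that both PID 7 and PID 8 are children of PID 1.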
Let's compare the information collected so far to the still running container:

```console
$ crictl inspect --output go-template --template "{{(index .info.pid)}}" 059a219a22e56
722520
$ ps auxf | grep -A 2 722520
fedora    722520  \_ bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile
fedora    722541      \_ /usr/bin/python3 /home/counter/counter.py
fedora    722542      \_ /usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /home/counter/logfile
$ cat /proc/722520/comm
bash
$ cat /proc/722541/comm
counter.py
$ cat /proc/722542/comm
tee
```

In this output I am first retrieving the PID of the first process in the
container and then I am looking for that PID and its child processes on the system
where the container is running. I am seeing three processes and the first one is
"bash", which is PID 1 inside of the container's PID namespace. Then I am looking
at `/proc/<PID>/comm` and I can find the exact same value
as in the checkpoint image.

Important to remember is that the checkpoint will contain the view from within the
container's PID namespace because that information is important to restore the
processes.

One last example of what `crit` can tell us about the container is the information
about the UTS namespace:

```console
$ crit show checkpoint/utsns-12.img
{
  "magic": "UTSNS",
  "entries": [
    {
      "nodename": "counters",
      "domainname": "(none)"
    }
  ]
}
```

This tells me that the hostname inside of the UTS namespace is `counters`.

For every resource CRIU collected during checkpointing, the `checkpoint/`
directory contains corresponding image files which can be analyzed with the help
of `crit`.

#### Looking at the memory pages

In addition to the information from CRIU that can be decoded with the help
of CRIT, there are also files containing the raw memory pages written by
CRIU to disk:

```console
$ ls checkpoint/pages-*
checkpoint/pages-1.img checkpoint/pages-2.img checkpoint/pages-3.img
```

When I initially used the container I stored a random key (`RANDOM_1432_KEY`)
somewhere in the memory. Let's see if I can find it:

```console
$ grep -ao RANDOM_1432_KEY checkpoint/pages-*
checkpoint/pages-2.img:RANDOM_1432_KEY
```

And indeed, there is my data. This way I can easily look at the content
of all memory pages of the processes in the container, but it is also
important to remember that anyone who can access the checkpoint
archive has access to all information that was stored in the memory of the
container's processes.

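If the exact secret string is unknown, the raw pages can be searched for a pattern instead. The sketch below runs against a synthetic page image (the file created here is fabricated for illustration, not a real CRIU image):

```shell
# Create a fake page image containing a secret-like string (illustration only).
mkdir -p demo-checkpoint
printf 'junk\0RANDOM_1432_KEY\0more junk' > demo-checkpoint/pages-1.img

# -a treats the binary file as text, -o prints only the matched text,
# -E enables an extended regex matching "RANDOM_<digits>_KEY" shaped strings.
grep -aoE 'RANDOM_[0-9]+_KEY' demo-checkpoint/pages-*.img
```

The same `grep -aoE` invocation works unchanged on the real `checkpoint/pages-*.img` files.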
#### Using gdb for further analysis

Another possibility to look at the checkpoint images is `gdb`. The CRIU repository
contains the script [coredump][criu-coredump] which can convert a checkpoint
into a coredump file:

```console
$ /home/criu/coredump/coredump-python3
$ ls -al core*
core.1 core.7 core.8
```

Running the `coredump-python3` script will convert the checkpoint images into
one coredump file for each process in the container. Using `gdb` I can also look
at the details of the processes:

```console
$ echo info registers | gdb --core checkpoint/core.1 -q

[New LWP 1]

Core was generated by `bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile'.

#0  0x00007fefba110198 in ?? ()
(gdb)
rax            0x3d                61
rbx            0x8                 8
rcx            0x7fefba11019a      140667595587994
rdx            0x0                 0
rsi            0x7fffed9c1110      140737179816208
rdi            0xffffffff          4294967295
rbp            0x1                 0x1
rsp            0x7fffed9c10e8      0x7fffed9c10e8
r8             0x1                 1
r9             0x0                 0
r10            0x0                 0
r11            0x246               582
r12            0x0                 0
r13            0x7fffed9c1170      140737179816304
r14            0x0                 0
r15            0x0                 0
rip            0x7fefba110198      0x7fefba110198
eflags         0x246               [ PF ZF IF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0
```

In this example I can see the value of all registers as they were during
checkpointing and I can also see the complete command line of my container's PID
1 process: `bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile`

## Summary

With the help of container checkpointing, it is possible to create a
checkpoint of a running container without stopping the container and without the
container knowing that it was checkpointed. The result of checkpointing a
container in Kubernetes is a checkpoint archive; using different tools like
`checkpointctl`, `tar`, `crit` and `gdb` the checkpoint can be analyzed. Even
with simple tools like `grep` it is possible to find information in the
checkpoint archive.

The different examples I have shown in this article of how to analyze a checkpoint
are just the starting point. Depending on your requirements it is possible to
look at certain things in much more detail, but this article should give you an
introduction to how to start the analysis of your checkpoint.

## How do I get involved?

You can reach SIG Node by several means:

* Slack: [#sig-node][slack-sig-node]
* Slack: [#sig-security][slack-sig-security]
* [Mailing list][sig-node-ml]

[forensic-blog]: https://kubernetes.io/blog/2022/12/05/forensic-container-checkpointing-alpha/
[checkpointctl]: https://github.com/checkpoint-restore/checkpointctl
[image-files]: https://criu.org/Images
[crit]: https://criu.org/CRIT
[slack-sig-node]: https://kubernetes.slack.com/messages/sig-node
[slack-sig-security]: https://kubernetes.slack.com/messages/sig-security
[sig-node-ml]: https://groups.google.com/forum/#!forum/kubernetes-sig-node
[criu-coredump]: https://github.com/checkpoint-restore/criu/tree/criu-dev/coredump

---
layout: blog
title: "k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know"
date: 2023-03-10T17:00:00.000Z
slug: image-registry-redirect
---

**Authors**: Bob Killen (Google), Davanum Srinivas (AWS), Chris Short (AWS), Frederico Muñoz (SAS
Institute), Tim Bannister (The Scale Factory), Ricky Sadowski (AWS), Grace Nguyen (Expo), Mahamed
Ali (Rackspace Technology), Mars Toktonaliev (independent), Laura Santamaria (Dell), Kat Cosgrove
(Dell)

On Monday, March 20th, the k8s.gcr.io registry [will be redirected to the community-owned
registry](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/),
**registry.k8s.io**.

## TL;DR: What you need to know about this change

- On Monday, March 20th, traffic from the older k8s.gcr.io registry will be redirected to
  registry.k8s.io with the eventual goal of sunsetting k8s.gcr.io.
- If you run in a restricted environment, and apply strict domain name or IP address access policies
  limited to k8s.gcr.io, **the image pulls will not function** after k8s.gcr.io starts redirecting
  to the new registry.
- A small subset of non-standard clients does not handle HTTP redirects by image registries, and will
  need to be pointed directly at registry.k8s.io.
- The redirect is a stopgap to assist users in making the switch. The deprecated k8s.gcr.io registry
  will be phased out at some point. **Please update your manifests as soon as possible to point to
  registry.k8s.io**.
- If you host your own image registry, you can copy the images you need there as well to reduce traffic
  to community-owned registries.

If you think you may be impacted, or would like to know more about this change, please keep reading.

## How can I check if I am impacted?

To test connectivity to registry.k8s.io and your ability to pull images from there, here is a sample
command that can be executed in the namespace of your choosing:

```shell
kubectl run hello-world -ti --rm --image=registry.k8s.io/busybox:latest --restart=Never -- date
```

When you run the command above, here’s what to expect when things work correctly:

```console
$ kubectl run hello-world -ti --rm --image=registry.k8s.io/busybox:latest --restart=Never -- date
Fri Feb 31 07:07:07 UTC 2023
pod "hello-world" deleted
```

## What kind of errors will I see if I’m impacted?

Errors may depend on what kind of container runtime you are using, and what endpoint you are routed
to, but they should present as `ErrImagePull`, `ImagePullBackOff`, or a container failing to be
created with the warning `FailedCreatePodSandBox`.

Below is an example error message showing a proxied deployment failing to pull due to an unknown
certificate:

```console
FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Head "https://us-west1-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8": x509: certificate signed by unknown authority
```

## What images will be impacted?

**ALL** images on k8s.gcr.io will be impacted by this change. k8s.gcr.io hosts many images beyond
Kubernetes releases. A large number of Kubernetes subprojects host their images there as well. Some
examples include the `dns/k8s-dns-node-cache`, `ingress-nginx/controller`, and
`node-problem-detector/node-problem-detector` images.

## I am impacted. What should I do?

For impacted users that run in a restricted environment, the best option is to copy over the
required images to a private registry or configure a pull-through cache in their registry.

There are several tools to copy images between registries;
[crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_copy.md) is one
of those tools, and images can be copied to a private registry by using `crane copy SRC DST`. There
are also vendor-specific tools, like Google's
[gcrane](https://cloud.google.com/container-registry/docs/migrate-external-containers#copy), that
perform a similar function but are streamlined for their platform.

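As an illustration, a small loop can mirror a list of required images with `crane`. The registry name, mirror path, and image tags below are placeholders; drop the `echo` to actually run `crane`, which must be installed and authenticated against the destination:

```shell
# Dry-run sketch: print the crane invocation for each image to mirror.
# registry.example.com and the image list are hypothetical placeholders.
for img in pause:3.9 dns/k8s-dns-node-cache:1.22.20; do
  echo crane copy "registry.k8s.io/${img}" "registry.example.com/mirror/${img}"
done
```

Keeping the image list in one place makes it easy to re-run the mirror step whenever you bump versions.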
## How can I find which images are using the legacy registry, and fix them?

**Option 1**: See the one-line kubectl command in our [earlier blog
post](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/#what-s-next):

```shell
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
  tr -s '[[:space:]]' '\n' |\
  sort |\
  uniq -c
```

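The tail of that pipeline simply tallies duplicate image references; on synthetic jsonpath output (fabricated image names, standing in for the real query result) it behaves like this:

```shell
# Synthetic stand-in for the jsonpath output: a space-separated image list.
# Each distinct image is printed with its usage count.
printf 'k8s.gcr.io/pause:3.8 registry.k8s.io/coredns/coredns:v1.9.3 k8s.gcr.io/pause:3.8' |
  tr -s '[:space:]' '\n' | sort | uniq -c
```

Any line counted here whose image starts with `k8s.gcr.io/` is a workload you still need to migrate.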
**Option 2**: A `kubectl` [krew](https://krew.sigs.k8s.io/) plugin has been developed called
[`community-images`](https://github.com/kubernetes-sigs/community-images#kubectl-community-images)
that will scan and report any images using the k8s.gcr.io endpoint.

If you have krew installed, you can install the plugin with:

```shell
kubectl krew install community-images
```

and generate a report with:

```shell
kubectl community-images
```

For alternate methods of installation and example output, check out the repo:
[kubernetes-sigs/community-images](https://github.com/kubernetes-sigs/community-images).

**Option 3**: If you do not have access to a cluster directly, or manage many clusters, the best
way is to run a search over your manifests and charts for _"k8s.gcr.io"_.

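A minimal sketch of such a search (the directory and manifest below are fabricated for illustration; in practice you would point `grep` at your own manifests and charts):

```shell
# Set up a throwaway manifest to search (illustration only).
mkdir -p /tmp/scan-demo
printf 'image: k8s.gcr.io/pause:3.8\n' > /tmp/scan-demo/deploy.yaml

# Recursively report every file and line still referencing the legacy registry.
grep -Rn 'k8s.gcr.io' /tmp/scan-demo
```

Each reported line is a manifest that needs its image field updated to registry.k8s.io.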
**Option 4**: If you wish to prevent k8s.gcr.io-based images from running in your cluster, example
policies for [Gatekeeper](https://open-policy-agent.github.io/gatekeeper-library/website/) and
[Kyverno](https://kyverno.io/) are available in the [AWS EKS Best Practices
repository](https://github.com/aws/aws-eks-best-practices/tree/master/policies/k8s-registry-deprecation)
that will block them from being pulled. You can use these third-party policies with any Kubernetes
cluster.

**Option 5**: As a **LAST** possible option, you can use a [Mutating
Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
to change the image address dynamically. This should only be
considered a stopgap until your manifests have been updated. You can
find a (third-party) Mutating Webhook and Kyverno policy in
[k8s-gcr-quickfix](https://github.com/abstractinfrastructure/k8s-gcr-quickfix).

## Why did Kubernetes change to a different image registry?

k8s.gcr.io is hosted on a custom [Google Container Registry
(GCR)](https://cloud.google.com/container-registry) domain that was set up solely for the Kubernetes
project. This has worked well since the inception of the project, and we thank Google for providing
these resources, but today, there are other cloud providers and vendors that would like to host
images to provide a better experience for the people on their platforms. In addition to Google's
[renewed commitment to donate $3
million](https://www.cncf.io/google-cloud-recommits-3m-to-kubernetes/) to support the project's
infrastructure last year, Amazon Web Services announced a matching donation [during their KubeCon NA
2022 keynote in Detroit](https://youtu.be/PPdimejomWo?t=236). This will provide a better experience
for users (closer servers = faster downloads) and will reduce the egress bandwidth and costs from
GCR at the same time.

For more details on this change, check out [registry.k8s.io: faster, cheaper and Generally Available
(GA)](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/).

## Why is a redirect being put in place?

The project switched to [registry.k8s.io last year with the 1.25
release](https://kubernetes.io/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/); however, most of
the image pull traffic is still directed at the old endpoint k8s.gcr.io. This has not been
sustainable for us as a project, as it is not utilizing the resources that have been donated to the
project by other providers, and we are in danger of running out of funds due to the cost of
serving this traffic.

A redirect will enable the project to take advantage of these new resources, significantly reducing
our egress bandwidth costs. We only expect this change to impact a small subset of users running in
restricted environments or using very old clients that do not respect redirects properly.

## What will happen to k8s.gcr.io?

Separate from the redirect, k8s.gcr.io will be frozen [and will not be updated with new images
after April 3rd, 2023](https://kubernetes.io/blog/2023/02/06/k8s-gcr-io-freeze-announcement/). `k8s.gcr.io`
will not get any new releases, patches, or security updates. It will continue to remain available to
help people migrate, but it **WILL** be phased out entirely in the future.

## I still have questions, where should I go?

For more information on registry.k8s.io and why it was developed, see [registry.k8s.io: faster,
cheaper and Generally Available](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/).

If you would like to know more about the image freeze and the last images that will be available
there, see the blog post: [k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April
2023](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/).

Information on the architecture of registry.k8s.io and its [request handling decision
tree](https://github.com/kubernetes/registry.k8s.io/blob/8408d0501a88b3d2531ff54b14eeb0e3c900a4f3/cmd/archeio/docs/request-handling.md)
can be found in the [kubernetes/registry.k8s.io
repo](https://github.com/kubernetes/registry.k8s.io).

If you believe you have encountered a bug with the new registry or the redirect, please open an
issue in the [kubernetes/registry.k8s.io
repo](https://github.com/kubernetes/registry.k8s.io/issues/new/choose). **Please check whether a
similar issue is already open before you create a new one**.

---
layout: blog
title: "Kubernetes Removals and Major Changes In v1.27"
date: 2023-03-17T14:00:00+0000
slug: upcoming-changes-in-kubernetes-v1-27
---

**Author**: Harshita Sao

As Kubernetes develops and matures, features may be deprecated, removed, or replaced
|
||||
with better ones for the project's overall health. Based on the information available
|
||||
at this point in the v1.27 release process, which is still ongoing and can introduce
|
||||
additional changes, this article identifies and describes some of the planned changes
|
||||
for the Kubernetes v1.27 release.
|
||||
|
||||
## A note about the k8s.gcr.io redirect to registry.k8s.io
|
||||
|
||||
To host its container images, the Kubernetes project uses a community-owned image
|
||||
registry called registry.k8s.io. **On March 20th, all traffic from the out-of-date
|
||||
[k8s.gcr.io](https://cloud.google.com/container-registry/) registry will be redirected
|
||||
to [registry.k8s.io](https://github.com/kubernetes/registry.k8s.io)**. The deprecated
|
||||
k8s.gcr.io registry will eventually be phased out.

### What does this change mean?

- If you are a subproject maintainer, you must update your manifests and Helm
charts to use the new registry.

- The v1.27 Kubernetes release will not be published to the old registry.

- From April, patch releases for v1.24, v1.25, and v1.26 will no longer be
published to the old registry.

We have a [blog post](/blog/2023/03/10/image-registry-redirect/) with all
the information about this change and what to do if it impacts you.
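If you maintain manifests or charts, a quick way to migrate is to search for the old registry host and rewrite it. A minimal sketch (the directory, file, and image here are illustrative, not real project files):

```shell
# Create an example manifest that still points at the frozen registry.
mkdir -p manifests
printf 'image: k8s.gcr.io/pause:3.9\n' > manifests/pause.yaml

# Find every file that references the old registry and rewrite it
# in place, keeping a .bak backup of the original.
grep -rl 'k8s.gcr.io' manifests | while read -r f; do
  sed -i.bak 's|k8s\.gcr\.io|registry.k8s.io|g' "$f"
done

cat manifests/pause.yaml   # image: registry.k8s.io/pause:3.9
```

After the rewrite, a redeploy pulls from registry.k8s.io instead of the frozen host.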

## The Kubernetes API Removal and Deprecation process

The Kubernetes project has a well-documented
[deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/)
for features. This policy states that stable APIs may only be deprecated when
a newer, stable version of that same API is available and that APIs have a
minimum lifetime for each stability level. A deprecated API has been marked
for removal in a future Kubernetes release; it will continue to function until
removal (at least one year from the deprecation), but usage will result in a
warning being displayed. Removed APIs are no longer available in the current
version, at which point you must migrate to using the replacement.

- Generally available (GA) or stable API versions may be marked as deprecated
but must not be removed within a major version of Kubernetes.

- Beta or pre-release API versions must be supported for 3 releases after the deprecation.

- Alpha or experimental API versions may be removed in any release without prior deprecation notice.

Whether an API is removed as a result of a feature graduating from beta to stable
or because that API simply did not succeed, all removals comply with this
deprecation policy. Whenever an API is removed, migration options are communicated
in the documentation.

## API removals and other changes for Kubernetes v1.27

### Removal of `storage.k8s.io/v1beta1` from `CSIStorageCapacity`

The [CSIStorageCapacity](/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/)
API supports exposing currently available storage capacity via CSIStorageCapacity
objects and enhances the scheduling of pods that use CSI volumes with late binding.
The `storage.k8s.io/v1beta1` API version of CSIStorageCapacity was deprecated in v1.24,
and it will no longer be served in v1.27.

Migrate manifests and API clients to use the `storage.k8s.io/v1` API version,
available since v1.24. All existing persisted objects are accessible via the new API.

Refer to the
[Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking)
for more information.
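For reference, a minimal CSIStorageCapacity manifest using the `storage.k8s.io/v1` API version might look like this sketch (the name, storage class, topology label, and capacity values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity
  namespace: default
storageClassName: fast-ssd
nodeTopology:
  matchLabels:
    topology.kubernetes.io/zone: us-east-1a
capacity: 10Gi
```

Only the `apiVersion` line changes when migrating from `storage.k8s.io/v1beta1`; the object schema is the same.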

Kubernetes v1.27 is not removing any other APIs; however, several other aspects are going
to be removed. Read on for details.

### Support for deprecated seccomp annotations

In Kubernetes v1.19, the
[seccomp](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/135-seccomp)
(secure computing mode) support graduated to General Availability (GA).
This feature can be used to increase the workload security by restricting
the system calls for a Pod (applies to all containers) or single containers.

Support for the alpha seccomp annotations `seccomp.security.alpha.kubernetes.io/pod`
and `container.seccomp.security.alpha.kubernetes.io`, deprecated since v1.19, has
now been completely removed. The seccomp fields are no longer auto-populated when pods
with seccomp annotations are created. Pods should use the corresponding pod or container
`securityContext.seccompProfile` field instead.
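As a sketch of the replacement, seccomp profiles are now set through `securityContext.seccompProfile` at the Pod or container level (the Pod name, image, and profile path below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # default for all containers in the Pod
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    securityContext:
      seccompProfile:
        type: Localhost      # per-container override
        localhostProfile: profiles/audit.json
```

A container-level `seccompProfile` takes precedence over the Pod-level setting, mirroring how the old pod and container annotations interacted.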

### Removal of several feature gates for volume expansion

The following feature gates for
[volume expansion](https://github.com/kubernetes/enhancements/issues/284) GA features
will be removed and must no longer be referenced in `--feature-gates` flags:

`ExpandCSIVolumes`
: Enable expanding of CSI volumes.

`ExpandInUsePersistentVolumes`
: Enable expanding in-use PVCs.

`ExpandPersistentVolumes`
: Enable expanding of persistent volumes.

### Removal of `--master-service-namespace` command line argument

The kube-apiserver accepts a deprecated command line argument, `--master-service-namespace`,
that specified where to create the Service named `kubernetes` to represent the API server.
Kubernetes v1.27 will remove that argument, which has been deprecated since the v1.26 release.

### Removal of the `ControllerManagerLeaderMigration` feature gate

[Leader Migration](https://github.com/kubernetes/enhancements/issues/2436) provides
a mechanism in which HA clusters can safely migrate "cloud-specific" controllers
between the `kube-controller-manager` and the `cloud-controller-manager` via a shared
resource lock between the two components while upgrading the replicated control plane.

The `ControllerManagerLeaderMigration` feature, GA since v1.24, is unconditionally
enabled, and for the v1.27 release the feature gate option will be removed. If you're
setting this feature gate explicitly, you'll need to remove that from command line
arguments or configuration files.

### Removal of `--enable-taint-manager` command line argument

The kube-controller-manager command line argument `--enable-taint-manager` is
deprecated, and will be removed in Kubernetes v1.27. The feature that it supports,
[taint based eviction](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions),
is already enabled by default and will continue to be implicitly enabled when the flag is removed.

### Removal of `--pod-eviction-timeout` command line argument

The deprecated command line argument `--pod-eviction-timeout` will be removed from the
kube-controller-manager.

### Removal of the `CSIMigration` feature gate

The [CSI migration](https://github.com/kubernetes/enhancements/issues/625)
programme allows moving from in-tree volume plugins to out-of-tree CSI drivers.
CSI migration has been generally available since Kubernetes v1.16, and the associated
`CSIMigration` feature gate will be removed in v1.27.

### Removal of `CSIInlineVolume` feature gate

The [CSI Ephemeral Volume](https://github.com/kubernetes/kubernetes/pull/111258)
feature allows CSI volumes to be specified directly in the pod specification for
ephemeral use cases. They can be used to inject arbitrary states, such as
configuration, secrets, identity, variables or similar information, directly
inside pods using a mounted volume. This feature graduated to GA in v1.25.
Hence, the feature gate `CSIInlineVolume` will be removed in the v1.27 release.
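A sketch of an inline CSI volume in a Pod spec (the driver name and volume attributes below are illustrative, not a real driver):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inline-csi-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: config-vol
      mountPath: /data
  volumes:
  - name: config-vol
    csi:                                 # inline CSI volume, no PVC needed
      driver: inline.storage.example.com
      volumeAttributes:
        size: 1Gi
```

The volume lives and dies with the Pod, which is what distinguishes this ephemeral use from a PersistentVolumeClaim-backed CSI volume.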

### Removal of `EphemeralContainers` feature gate

[Ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/)
graduated to GA in v1.25. These are containers with a temporary duration that
execute within the namespaces of an existing pod. Ephemeral containers are
typically initiated by a user in order to observe the state of other pods
and containers for troubleshooting and debugging purposes. For Kubernetes v1.27,
API support for ephemeral containers is unconditionally enabled; the
`EphemeralContainers` feature gate will be removed.

### Removal of `LocalStorageCapacityIsolation` feature gate

The [Local Ephemeral Storage Capacity Isolation](https://github.com/kubernetes/kubernetes/pull/111513)
feature moved to GA in v1.25. The feature provides support for capacity isolation
of local ephemeral storage between pods, such as `emptyDir` volumes, so that a pod
can be hard limited in its consumption of shared resources. The kubelet will
evict Pods if consumption of local ephemeral storage exceeds the configured limit.
The feature gate, `LocalStorageCapacityIsolation`, will be removed in the v1.27 release.

### Removal of `NetworkPolicyEndPort` feature gate

The v1.25 release of Kubernetes promoted `endPort` in NetworkPolicy to GA.
NetworkPolicy providers that support the `endPort` field can use it to
specify a range of ports to which a NetworkPolicy applies. Previously, each NetworkPolicy
could only target a single port. As a result, the feature gate `NetworkPolicyEndPort`
will be removed in this release.

Please be aware that the `endPort` field must be supported by the NetworkPolicy
provider. If your provider does not support `endPort`, and this field is
specified in a NetworkPolicy, the NetworkPolicy will be created covering
only the `port` field (single port).
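A sketch of a NetworkPolicy egress rule using `endPort` to target a whole port range (the policy name, CIDR, and port numbers below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-range
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000      # start of the range
      endPort: 32768   # inclusive end of the range
```

Without `endPort`, expressing the same rule would require one `ports` entry per port.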

### Removal of `StatefulSetMinReadySeconds` feature gate

For a pod that is part of a StatefulSet, Kubernetes can mark the Pod ready only
if the Pod has been available (and passing checks) for at least the period you specify in
[`minReadySeconds`](/docs/concepts/workloads/controllers/statefulset/#minimum-ready-seconds).
The feature became generally available in Kubernetes v1.25, and the `StatefulSetMinReadySeconds`
feature gate will be locked to true and removed in the v1.27 release.

### Removal of `IdentifyPodOS` feature gate

You can specify the operating system for a Pod, and the feature support for that
has been stable since the v1.25 release. The `IdentifyPodOS` feature gate will be
removed for Kubernetes v1.27.
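A sketch of a Pod that declares its operating system via `spec.os.name` (the Pod name and image below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod
spec:
  os:
    name: linux   # the other accepted value is "windows"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

Declaring the OS lets the kubelet reject the Pod early on nodes with a different operating system instead of failing at container start.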

### Removal of `DaemonSetUpdateSurge` feature gate

The v1.25 release of Kubernetes also stabilised surge support for DaemonSet pods,
implemented in order to minimize DaemonSet downtime during rollouts.
The `DaemonSetUpdateSurge` feature gate will be removed in Kubernetes v1.27.

## Looking ahead

The official list of
[API removals](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-29)
planned for Kubernetes v1.29 includes:

- The `flowcontrol.apiserver.k8s.io/v1beta2` API version of FlowSchema and
PriorityLevelConfiguration will no longer be served in v1.29.

## Want to know more?

Deprecations are announced in the Kubernetes release notes. You can see the
announcements of pending deprecations in the release notes for:

- [Kubernetes v1.23](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#deprecation)

- [Kubernetes v1.24](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#deprecation)

- [Kubernetes v1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation)

- [Kubernetes v1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation)

We will formally announce the deprecations that come with
[Kubernetes v1.27](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation)
as part of the CHANGELOG for that release.

For information on the process of deprecation and removal, check out the official Kubernetes
[deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document.
* [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI.
* [Antrea](https://antrea.io/) operates at Layer 3/4 to provide networking and security services for Kubernetes, leveraging Open vSwitch as the networking data plane. Antrea is a [CNCF project at the Sandbox level](https://www.cncf.io/projects/antrea/).
* [Calico](https://www.tigera.io/project-calico/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a networking, observability, and security solution with an eBPF-based data plane. Cilium provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay/encapsulation mode, and can enforce network policies on L3-L7 using an identity-based security model that is decoupled from network addressing. Cilium can act as a replacement for kube-proxy; it also offers additional, opt-in observability and security features. Cilium is a [CNCF project at the Incubation level](https://www.cncf.io/projects/cilium/).
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave. CNI-Genie is a [CNCF project at the Sandbox level](https://www.cncf.io/projects/cni-genie/).
| `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
| `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the `<storage-class-name>`, the sum of storage requests cannot exceed this value. |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the `<storage-class-name>`, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |

For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
define a quota as follows:
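A sketch of such a quota, using the resource names from the table above (the quota name and the specific limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
```

Each storage class gets its own independent caps on total requested storage and on the number of claims.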
Use node labels that have a clear correlation to the
scheduler profile name.

{{< note >}}
The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#how-daemon-pods-are-scheduled),
does not support scheduling profiles. When the DaemonSet controller creates
Pods, the default Kubernetes scheduler places those Pods and honors any
`nodeAffinity` rules in the DaemonSet controller.
{{< /note >}}

Services will always have the `ready` condition set to `true`.

#### Serving

{{< feature-state for_k8s_version="v1.26" state="stable" >}}

The `serving` condition is almost identical to the `ready` condition. The difference is that
consumers of the EndpointSlice API should check the `serving` condition if they care about pod readiness while
the pod is also terminating.
{{< note >}}
Clients of the EndpointSlice API must iterate through all the existing EndpointSlices
associated to a Service and build a complete list of unique network endpoints. It is
important to mention that endpoints may be duplicated in different EndpointSlices.

You can find a reference implementation for how to perform this endpoint aggregation
and deduplication as part of the `EndpointSliceCache` code within `kube-proxy`.
{{< /note >}}

by making the changes that are equivalent to you requesting a Service of
`type: NodePort`. The cloud-controller-manager component then configures the external load balancer to
forward traffic to that assigned node port.

You can configure a load balanced Service to
[omit](#load-balancer-nodeport-allocation) assigning a node port, provided that the
cloud provider implementation supports this.

will be routed to one of the Service endpoints. `externalIPs` are not managed by
Kubernetes and are the responsibility
of the cluster administrator.

In the Service spec, `externalIPs` can be specified along with any of the `ServiceTypes`.
In the example below, "`my-service`" can be accessed by clients on "`198.51.100.32:80`" (`externalIP:port`)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - port: 80
  externalIPs:
    - 198.51.100.32
```

That is, the CronJob does _not_ update existing Jobs, even if those remain running.

A CronJob creates a Job object approximately once per execution time of its schedule.
The scheduling is approximate because there
are certain circumstances where two Jobs might be created, or no Job might be created.
Kubernetes tries to avoid those situations, but does not completely prevent them. Therefore,
the Jobs that you define should be _idempotent_.

If `startingDeadlineSeconds` is set to a large value or left unset (the default)

Each probe must define exactly one of these four mechanisms:

The target should implement
[gRPC health checks](https://grpc.io/grpc/core/md_doc_health-checking.html).
The diagnostic is considered successful if the `status`
of the response is `SERVING`.
gRPC probes are an alpha feature and are only available if you
enable the `GRPCContainerProbe`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
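A sketch of a gRPC liveness probe in a Pod spec (the Pod name, image, port, and timings below are illustrative; the image is a stand-in, not a real gRPC server):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo
spec:
  containers:
  - name: server
    image: registry.k8s.io/pause:3.9
    ports:
    - containerPort: 2379
    livenessProbe:
      grpc:
        port: 2379             # kubelet calls the gRPC Health service on this port
      initialDelaySeconds: 10
```

The kubelet issues the standard `grpc.health.v1.Health/Check` RPC and treats a `SERVING` status as success.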

An example flow:

The containers in the Pod receive the TERM signal at different times and in an arbitrary
order. If the order of shutdowns matters, consider using a `preStop` hook to synchronize.
{{< /note >}}
1. At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane evaluates whether to remove that shutting-down Pod from EndpointSlice (and Endpoints) objects, where those objects represent
a {{< glossary_tooltip term_id="service" text="Service" >}} with a configured
{{< glossary_tooltip text="selector" term_id="selector" >}}.
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and other workload resources
no longer treat the shutting-down Pod as a valid, in-service replica. Pods that shut down slowly
should not continue to serve regular traffic and should start terminating and finish processing open connections.
Some applications need to go beyond finishing open connections and need more graceful termination -
for example: session draining and completion. Any endpoints that represent the terminating pods
are not immediately removed from EndpointSlices,
and a status indicating [terminating state](/docs/concepts/services-networking/endpoint-slices/#conditions)
is exposed from the EndpointSlice API (and the legacy Endpoints API). Terminating
endpoints always have their `ready` status
as `false` (for backward compatibility with versions before 1.26),
so load balancers will not use it for regular traffic.
If traffic draining of terminating pods is needed, the actual readiness can be checked via the `serving` condition.
You can find more details on how to implement connection draining
in the tutorial [Pods And Endpoints Termination Flow](/docs/tutorials/services/pods-and-endpoint-termination-flow/).

{{<note>}}
If you don't have the `EndpointSliceTerminatingCondition` feature gate enabled
in your cluster (the gate is on by default from Kubernetes 1.22, and locked to default in 1.26), then the Kubernetes control
plane removes a Pod from any relevant EndpointSlices as soon as the Pod's
termination grace period _begins_. The behavior described above applies when the
`EndpointSliceTerminatingCondition` feature gate is enabled.
{{</note>}}

1. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends
`SIGKILL` to any processes still running in any container in the Pod.
The kubelet also cleans up a hidden `pause` container if that container runtime uses one.

Pods in the `BestEffort` QoS class can use node resources that aren't specifically assigned
to Pods in other QoS classes. For example, if you have a node with 16 CPU cores available to the
kubelet, and you assign 4 CPU cores to a `Guaranteed` Pod, then a Pod in the `BestEffort`
QoS class can try to use any amount of the remaining 12 CPU cores.

The kubelet prefers to evict `BestEffort` Pods if the node comes under resource pressure.

{{< feature-state for_k8s_version="v1.25" state="alpha" >}}

This page explains how user namespaces are used in Kubernetes pods. A user
namespace isolates the user running inside the container from the one
in the host.

A process running as root in a container can run as a different (non-root) user

The Kubernetes website uses Hugo as its web framework. The website's Hugo
configuration resides in the
[`hugo.toml`](https://github.com/kubernetes/website/tree/main/hugo.toml)
file. You'll need to modify `hugo.toml` to support a new localization.

Add a configuration block for the new language to `hugo.toml` under the
existing `[languages]` block. The German block, for example, looks like:

```toml
# Abridged example; see hugo.toml in the repository for the complete block.
[languages.de]
title = "Kubernetes"
languageName = "Deutsch"
contentDir = "content/de"
```

{{< tab name="JSON File" include="podtemplate.json" />}}
{{< /tabs >}}

### Source code files

You can use the `{{</* codenew */>}}` shortcode to embed the contents of a file in a code block, allowing users to download or copy its content to their clipboard. This shortcode is used when the contents of the sample file are generic and reusable, and you want the users to try it out themselves.

This shortcode takes two named parameters: `language` and `file`. The mandatory parameter `file` specifies the path to the file being displayed. The optional parameter `language` specifies the programming language of the file. If the `language` parameter is not provided, the shortcode attempts to guess the language based on the file extension.

For example:

```none
{{</* codenew language="yaml" file="application/deployment-scale.yaml" */>}}
```

The output is:

{{< codenew language="yaml" file="application/deployment-scale.yaml" >}}

When adding a new sample file, such as a YAML file, create the file in one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for the page. In the markdown of your page, use the `codenew` shortcode:

```none
{{</* codenew file="<RELATIVE-PATH>/example-yaml>" */>}}
```

where `<RELATIVE-PATH>` is the path to the sample file to include, relative to the `examples` directory. The following shortcode references a YAML file located at `/content/en/examples/configmap/configmaps.yaml`.

```none
{{</* codenew file="configmap/configmaps.yaml" */>}}
```

## Third party content marker

Running Kubernetes requires third-party software. For example: you

before the item, or just below the heading for the specific item.

To generate a version string for inclusion in the documentation, you can choose from
several version shortcodes. Each version shortcode displays a version string derived from
the value of a version parameter found in the site configuration file, `hugo.toml`.
The two most commonly used version parameters are `latest` and `version`.

### `{{</* param "version" */>}}`

* Learn about [writing a new topic](/docs/contribute/style/write-new-topic/).
* Learn about [using page templates](/docs/contribute/style/page-content-types/).
* Learn about [custom hugo shortcodes](/docs/contribute/style/hugo-shortcodes/).
* Learn about [creating a pull request](/docs/contribute/new-content/open-a-pr/).

* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/),
[kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) and
[kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
* [kube-controller-manager configuration (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/)
* [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
* [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/)
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) and

{{< note >}}
The [`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) admission plugin is enabled
by default, but is only active if you enable the `ValidatingAdmissionPolicy`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) **and**
the `admissionregistration.k8s.io/v1alpha1` API.
{{< /note >}}

In the following table:

| `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 |
| `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 |
| `WindowsRunAsUserName` | `true` | GA | 1.18 | 1.20 |
{{< /table >}}

## Descriptions for removed feature gates
---
title: Official CVE Feed
linkTitle: CVE feed
weight: 25
outputs:
- json
- html
- rss
layout: cve-feed
---

the Kubernetes Security Response Committee. See
[Kubernetes Security and Disclosure Information](/docs/reference/issues-security/security/)
for more details.

{{< comment >}}
`replace` is used to bypass a known issue with rendering ">"
(https://github.com/gohugoio/hugo/issues/7229) in the JSON layouts template
`layouts/_default/cve-feed.json`.
{{< /comment >}}

The Kubernetes project publishes a programmatically accessible feed of published
security issues in [JSON feed](/docs/reference/issues-security/official-cve-feed/index.json)
and [RSS feed](/docs/reference/issues-security/official-cve-feed/feed.xml)
formats. You can access it by executing the following commands:

{{< tabs name="CVE feeds" >}}
{{% tab name="JSON feed" %}}
[Link to JSON format](/docs/reference/issues-security/official-cve-feed/index.json)

```shell
curl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/index.json
```
{{% /tab %}}
{{% tab name="RSS feed" %}}
[Link to RSS format](/docs/reference/issues-security/official-cve-feed/feed.xml)

```shell
curl -Lv https://k8s.io/docs/reference/issues-security/official-cve-feed/feed.xml
```
{{% /tab %}}
{{< /tabs >}}

{{< cve-feed >}}
|
||||
|
||||
|
|
|
|||
|
|
@@ -168,6 +168,7 @@ Automanaged APIService objects are deleted by kube-apiserver when it has no buil

{{< /note >}}

There are two possible values:

- `onstart`: The APIService should be reconciled when an API server starts up, but not otherwise.
- `true`: The API server should reconcile this APIService continuously.
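As a sketch of where this value appears (assuming the `kube-aggregator.kubernetes.io/automanaged` label described in this section; the object name and spec values are illustrative):

```yaml
# Hypothetical APIService managed by the aggregation layer; the label value
# "onstart" means kube-apiserver reconciles it only at startup.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.apps
  labels:
    kube-aggregator.kubernetes.io/automanaged: "onstart"
spec:
  group: apps            # illustrative spec values
  version: v1
  groupPriorityMinimum: 17800
  versionPriority: 15
```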
@@ -191,7 +192,6 @@ The Kubelet populates this label with the hostname. Note that the hostname can b

This label is also used as part of the topology hierarchy. See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.

### kubernetes.io/change-cause {#change-cause}

Example: `kubernetes.io/change-cause: "kubectl edit --record deployment foo"`
@@ -409,6 +409,7 @@ A zone represents a logical failure domain. It is common for Kubernetes cluster

A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.

Kubernetes makes a few assumptions about the structure of zones and regions:

1) regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions
2) zone names are unique across regions; for example region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b"
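Following the zone and region naming in the example above, a cloud provider would typically label a Node like this (the node name is illustrative):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-0   # illustrative node name
  labels:
    # the zone is a strict subset of the region, per assumption 1 above
    topology.kubernetes.io/region: africa-east-1
    topology.kubernetes.io/zone: africa-east-1a
```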
@@ -431,6 +432,17 @@ Used on: PersistentVolumeClaim

This annotation has been deprecated.

### volume.beta.kubernetes.io/storage-class (deprecated)

Example: `volume.beta.kubernetes.io/storage-class: "example-class"`

Used on: PersistentVolume, PersistentVolumeClaim

This annotation can be used for PersistentVolume(PV) or PersistentVolumeClaim(PVC) to specify the name of [StorageClass](/docs/concepts/storage/storage-classes/). When both `storageClassName` attribute and `volume.beta.kubernetes.io/storage-class` annotation are specified, the annotation `volume.beta.kubernetes.io/storage-class` takes precedence over the `storageClassName` attribute.

This annotation has been deprecated. Instead, set the [`storageClassName` field](/docs/concepts/storage/persistent-volumes/#class)
for the PersistentVolumeClaim or PersistentVolume.
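A minimal sketch of the precedence rule above (claim and class names are illustrative): because both are set, this claim uses the `fast` class from the deprecated annotation, not `slow`.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim   # illustrative name
  annotations:
    volume.beta.kubernetes.io/storage-class: fast   # takes precedence
spec:
  storageClassName: slow   # ignored while the annotation is present
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```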
### volume.beta.kubernetes.io/mount-options (deprecated) {#mount-options}

Example: `volume.beta.kubernetes.io/mount-options: "ro,soft"`
@@ -528,7 +540,6 @@ a request where the client authenticated using the service account token.

If a legacy token was last used before the cluster gained the feature (added in Kubernetes v1.26), then
the label isn't set.

### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by}

Example: `endpointslice.kubernetes.io/managed-by: "controller"`
@@ -614,6 +625,17 @@ Example: `kubectl.kubernetes.io/default-container: "front-end-app"`

The value of the annotation is the container name that is default for this Pod. For example, `kubectl logs` or `kubectl exec` without `-c` or `--container` flag will use this default container.
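For example, a minimal two-container Pod using this annotation (names and images are illustrative); `kubectl logs` and `kubectl exec` against this Pod default to the `front-end-app` container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod   # illustrative name
  annotations:
    kubectl.kubernetes.io/default-container: front-end-app
spec:
  containers:
    - name: front-end-app
      image: nginx
    - name: log-sidecar
      image: busybox
```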
### kubectl.kubernetes.io/default-logs-container (deprecated)

Example: `kubectl.kubernetes.io/default-logs-container: "front-end-app"`

The value of the annotation is the container name that is the default logging container for this Pod. For example, `kubectl logs` without `-c` or `--container` flag will use this default container.

{{< note >}}
This annotation is deprecated. You should use the [`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container) annotation instead.
Kubernetes versions 1.25 and newer ignore this annotation.
{{< /note >}}

### endpoints.kubernetes.io/over-capacity

Example: `endpoints.kubernetes.io/over-capacity:truncated`
@@ -634,7 +656,7 @@ The presence of this annotation on a Job indicates that the control plane is

[tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
The control plane uses this annotation to safely transition to tracking Jobs
using finalizers, while the feature is in development.
You should **not** manually add or remove this annotation.

{{< note >}}
Starting from Kubernetes 1.26, this annotation is deprecated.
@@ -716,7 +738,6 @@ Refer to

for further details about when and how to use this taint.
{{< /caution >}}

### node.cloudprovider.kubernetes.io/uninitialized

Example: `node.cloudprovider.kubernetes.io/uninitialized: "NoSchedule"`
@@ -0,0 +1,300 @@

---
title: Common Expression Language in Kubernetes
reviewers:
- jpbetz
- cici37
content_type: concept
weight: 35
min-kubernetes-server-version: 1.25
---

<!-- overview -->

The [Common Expression Language (CEL)](https://github.com/google/cel-go) is used
in the Kubernetes API to declare validation rules, policy rules, and other
constraints or conditions.

CEL expressions are evaluated directly in the
{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}, making CEL a
convenient alternative to out-of-process mechanisms, such as webhooks, for many
extensibility use cases. Your CEL expressions continue to execute so long as the
control plane's API server component remains available.
<!-- body -->

## Language overview

The [CEL language](https://github.com/google/cel-spec/blob/master/doc/langdef.md) has a
straightforward syntax that is similar to expressions in C, C++, Java,
JavaScript and Go.

CEL was designed to be embedded into applications. Each CEL "program" is a
single expression that evaluates to a single value. CEL expressions are
typically short "one-liners" that inline well into the string fields of Kubernetes
API resources.

Inputs to a CEL program are "variables". Each Kubernetes API field that contains
CEL declares in the API documentation which variables are available to use for
that field. For example, in the `x-kubernetes-validations[i].rules` field of
CustomResourceDefinitions, the `self` and `oldSelf` variables are available and
refer to the current and previous state of the custom resource data to be
validated by the CEL expression. Other Kubernetes API fields may declare
different variables. See the API documentation of the API fields to learn which
variables are available for that field.
Example CEL expressions:

{{< table caption="Examples of CEL expressions and the purpose of each" >}}
| Rule | Purpose |
|------|---------|
| `self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas` | Validate that the three fields defining replicas are ordered appropriately |
| `'Available' in self.stateCounts` | Validate that an entry with the 'Available' key exists in a map |
| `(self.list1.size() == 0) != (self.list2.size() == 0)` | Validate that one of two lists is non-empty, but not both |
| `self.envars.filter(e, e.name == 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$'))` | Validate the 'value' field of a listMap entry where key field 'name' is 'MY_ENV' |
| `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration |
| `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' |
| `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |
| `type(self) == string ? self == '99%' : self == 42` | Validate an int-or-string field for both the int and string cases |
| `self.metadata.name == 'singleton'` | Validate that an object's name matches a specific value (making it a singleton) |
| `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint |
| `self.names.size() == self.details.size() && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
{{< /table >}}
## CEL community libraries

Kubernetes CEL expressions have access to the following CEL community libraries:

- CEL standard functions, defined in the [list of standard definitions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions)
- CEL standard [macros](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros)
- CEL [extended string function library](https://pkg.go.dev/github.com/google/cel-go/ext#Strings)

## Kubernetes CEL libraries

In addition to the CEL community libraries, Kubernetes includes CEL libraries
that are available everywhere CEL is used in Kubernetes.
### Kubernetes list library

The list library includes `indexOf` and `lastIndexOf`, which work similarly to the
strings functions of the same names. These functions return either the first or last
positional index of the provided element in the list.

The list library also includes `min`, `max` and `sum`. Sum is supported on all
number types as well as the duration type. Min and max are supported on all
comparable types.

`isSorted` is also provided as a convenience function and is supported on all
comparable types.

Examples:

{{< table caption="Examples of CEL expressions using list library functions" >}}
| CEL Expression | Purpose |
|----------------|---------|
| `names.isSorted()` | Verify that a list of names is kept in alphabetical order |
| `items.map(x, x.weight).sum() == 1.0` | Verify that the "weights" of a list of objects sum to 1.0 |
| `lowPriorities.map(x, x.priority).max() < highPriorities.map(x, x.priority).min()` | Verify that two sets of priorities do not overlap |
| `names.indexOf('should-be-first') == 0` | Require that the first name in a list is a specific value |
{{< /table >}}

See the [Kubernetes List Library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Lists)
godoc for more information.
### Kubernetes regex library

In addition to the `matches` function provided by the CEL standard library, the
regex library provides `find` and `findAll`, enabling a much wider range of
regex operations.

Examples:

{{< table caption="Examples of CEL expressions using regex library functions" >}}
| CEL Expression | Purpose |
|----------------|---------|
| `"abc 123".find('[0-9]*')` | Find the first number in a string |
| `"1, 2, 3, 4".findAll('[0-9]*').map(x, int(x)).sum() < 100` | Verify that the numbers in a string sum to less than 100 |
{{< /table >}}

See the [Kubernetes regex library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Regex)
godoc for more information.
### Kubernetes URL library

To make it easier and safer to process URLs, the following functions have been added:

- `isURL(string)` checks if a string is a valid URL according to [Go's
  net/url](https://pkg.go.dev/net/url#URL) package. The string must be an
  absolute URL.
- `url(string) URL` converts a string to a URL or results in an error if the
  string is not a valid URL.

Once parsed via the `url` function, the resulting URL object has `getScheme`,
`getHost`, `getHostname`, `getPort`, `getEscapedPath` and `getQuery` accessors.

Examples:

{{< table caption="Examples of CEL expressions using URL library functions" >}}
| CEL Expression | Purpose |
|----------------|---------|
| `url('https://example.com:80/').getHost()` | Get the 'example.com:80' host part of the URL. |
| `url('https://example.com/path with spaces/').getEscapedPath()` | Returns '/path%20with%20spaces/' |
{{< /table >}}

See the [Kubernetes URL library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#URLs)
godoc for more information.
## Type checking

CEL is a [gradually typed language](https://github.com/google/cel-spec/blob/master/doc/langdef.md#gradual-type-checking).

Some Kubernetes API fields contain fully type checked CEL expressions. For
example, [CustomResourceDefinitions Validation
Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
are fully type checked.

Some Kubernetes API fields contain partially type checked CEL expressions. A
partially type checked expression is an expression where some of the variables
are statically typed but others are dynamically typed. For example, in the CEL
expressions of
[ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)
the `request` variable is typed, but the `object` variable is dynamically typed.
As a result, an expression containing `request.namex` would fail type checking
because the `namex` field is not defined. However, `object.namex` would pass
type checking even when the `namex` field is not defined for the resource kinds
that `object` refers to, because `object` is dynamically typed.

The `has()` macro in CEL may be used in CEL expressions to check if a field of a
dynamically typed variable is accessible before attempting to access the field's
value. For example:

```cel
has(object.namex) ? object.namex == 'special' : request.name == 'special'
```
## Type system integration

{{< table caption="Table showing the relationship between OpenAPIv3 types and CEL types" >}}
| OpenAPIv3 type | CEL type |
|----------------|----------|
| 'object' with Properties | object / "message type" (`type(<object>)` evaluates to `selfType<uniqueNumber>.path.to.object.from.self`) |
| 'object' with AdditionalProperties | map |
| 'object' with x-kubernetes-embedded-type | object / "message type", 'apiVersion', 'kind', 'metadata.name' and 'metadata.generateName' are implicitly included in schema |
| 'object' with x-kubernetes-preserve-unknown-fields | object / "message type", unknown fields are NOT accessible in CEL expression |
| x-kubernetes-int-or-string | union of int or string, `self.intOrString < 100 \|\| self.intOrString == '50%'` evaluates to true for both `50` and `"50%"` |
| 'array' | list |
| 'array' with x-kubernetes-list-type=map | list with map based Equality & unique key guarantees |
| 'array' with x-kubernetes-list-type=set | list with set based Equality & unique entry guarantees |
| 'boolean' | boolean |
| 'number' (all formats) | double |
| 'integer' (all formats) | int (64) |
| _no equivalent_ | uint (64) |
| 'null' | null_type |
| 'string' | string |
| 'string' with format=byte (base64 encoded) | bytes |
| 'string' with format=date | timestamp (google.protobuf.Timestamp) |
| 'string' with format=datetime | timestamp (google.protobuf.Timestamp) |
| 'string' with format=duration | duration (google.protobuf.Duration) |
{{< /table >}}

Also see: [CEL types](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#values),
[OpenAPI types](https://swagger.io/specification/#data-types),
[Kubernetes Structural Schemas](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema).

Equality comparison for arrays with `x-kubernetes-list-type` of `set` or `map` ignores element
order. For example `[1, 2] == [2, 1]` if the arrays represent Kubernetes `set` values.
Concatenation on arrays with `x-kubernetes-list-type` uses the semantics of the
list type:

- `set`: `X + Y` performs a union where the array positions of all elements in
  `X` are preserved and non-intersecting elements in `Y` are appended, retaining
  their partial order.
- `map`: `X + Y` performs a merge where the array positions of all keys in `X`
  are preserved but the values are overwritten by values in `Y` when the key
  sets of `X` and `Y` intersect. Elements in `Y` with non-intersecting keys are
  appended, retaining their partial order.
## Escaping

Only Kubernetes resource property names of the form
`[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible from CEL. Accessible property
names are escaped according to the following rules when accessed in the
expression:

{{< table caption="Table of CEL identifier escaping rules" >}}
| escape sequence | property name equivalent |
|-------------------|--------------------------|
| `__underscores__` | `__` |
| `__dot__` | `.` |
| `__dash__` | `-` |
| `__slash__` | `/` |
| `__{keyword}__` | [CEL **RESERVED** keyword](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#syntax) |
{{< /table >}}

When you escape any of CEL's **RESERVED** keywords, the escape must match the exact
property name: underscore escaping applies only to complete property names
(for example, `int` in the word `sprint` would not be escaped, nor would it need to be).

Examples of escaping:

{{< table caption="Examples of escaped CEL identifiers" >}}
| property name | rule with escaped property name |
|---------------|-----------------------------------|
| `namespace` | `self.__namespace__ > 0` |
| `x-prop` | `self.x__dash__prop > 0` |
| `redact__d` | `self.redact__underscores__d > 0` |
| `string` | `self.startsWith('kube')` |
{{< /table >}}
## Resource constraints

CEL is non-Turing complete and offers a variety of production safety controls to
limit execution time. CEL's _resource constraint_ features provide feedback to
developers about expression complexity and help protect the API server from
excessive resource consumption during evaluation.

A key element of the resource constraint features is a _cost unit_ that CEL
defines as a way of tracking CPU utilization. Cost units are independent of
system load and hardware. Cost units are also deterministic; for any given CEL
expression and input data, evaluation of the expression by the CEL interpreter
will always result in the same cost.

Many of CEL's core operations have fixed costs. The simplest operations, such as
comparisons (e.g. `<`), have a cost of 1. Some have a higher fixed cost; for
example, list literal declarations have a fixed base cost of 40 cost units.

Calls to functions implemented in native code approximate cost based on the time
complexity of the operation. For example: operations that use regular
expressions, such as `match` and `find`, are estimated using an approximated
cost of `length(regexString)*length(inputString)`. The approximated cost
reflects the worst case time complexity of Go's RE2 implementation.
### Runtime cost budget

All CEL expressions evaluated by Kubernetes are constrained by a runtime cost
budget. The runtime cost budget is an estimate of actual CPU utilization
computed by incrementing a cost unit counter while interpreting a CEL
expression. If the CEL interpreter executes too many instructions, the runtime
cost budget will be exceeded, execution of the expression will be halted, and
an error will result.

Some Kubernetes resources define an additional runtime cost budget that bounds
the execution of multiple expressions. If the sum total of the cost of the
expressions exceeds the budget, execution of the expressions will be halted, and
an error will result. For example, the validation of a custom resource has a
_per-validation_ runtime cost budget for all [Validation
Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
evaluated to validate the custom resource.
### Estimated cost limits

For some Kubernetes resources, the API server may also check whether the worst case
estimated running time of a CEL expression would be prohibitively expensive to
execute. If so, the API server prevents the CEL expression from being written to
API resources by rejecting create or update operations containing the CEL
expression. This feature offers a stronger assurance that
CEL expressions written to the API resource will be evaluated at runtime without
exceeding the runtime cost budget.
@@ -41,12 +41,12 @@ cluster's API server.

## Define clusters, users, and contexts

Suppose you have two clusters, one for development work and one for test work.
In the `development` cluster, your frontend developers work in a namespace called `frontend`,
and your storage developers work in a namespace called `storage`. In your `test` cluster,
developers work in the default namespace, or they create auxiliary namespaces as they
see fit. Access to the development cluster requires authentication by certificate. Access
to the test cluster requires authentication by username and password.

Create a directory named `config-exercise`. In your
`config-exercise` directory, create a file named `config-demo` with this content:
@@ -60,7 +60,7 @@ clusters:

- cluster:
  name: development
- cluster:
  name: test

users:
- name: developer
@@ -72,7 +72,7 @@ contexts:

- context:
  name: dev-storage
- context:
  name: exp-test
```

A configuration file describes clusters, users, and contexts. Your `config-demo` file
@@ -83,7 +83,7 @@ your configuration file:

```shell
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-cluster test --server=https://5.6.7.8 --insecure-skip-tls-verify
```

Add user details to your configuration file:
@@ -108,7 +108,7 @@ Add context details to your configuration file:

```shell
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-test --cluster=test --namespace=default --user=experimenter
```

Open your `config-demo` file to see the added details. As an alternative to opening the
@@ -130,7 +130,7 @@ clusters:

- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: test
contexts:
- context:
    cluster: development
@@ -143,10 +143,10 @@ contexts:

    user: developer
  name: dev-storage
- context:
    cluster: test
    namespace: default
    user: experimenter
  name: exp-test
current-context: ""
kind: Config
preferences: {}
|
@@ -220,19 +220,19 @@ users:

    client-key: fake-key-file
```

Now suppose you want to work for a while in the test cluster.

Change the current context to `exp-test`:

```shell
kubectl config --kubeconfig=config-demo use-context exp-test
```

Now any `kubectl` command you give will apply to the default namespace of
the `test` cluster. And the command will use the credentials of the user
listed in the `exp-test` context.

View configuration associated with the new current context, `exp-test`.

```shell
kubectl config --kubeconfig=config-demo view --minify
```
@@ -338,10 +338,10 @@ contexts:

    user: developer
  name: dev-storage
- context:
    cluster: test
    namespace: default
    user: experimenter
  name: exp-test
```

For more information about how kubeconfig files are merged, see
@@ -103,6 +103,7 @@ Name | Encryption | Strength | Speed | Key Length | Other Considerations

`aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Not recommended for use except when an automated key rotation scheme is implemented.
`aescbc` | AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks.
`kms` | Uses envelope encryption scheme: data is encrypted by data encryption keys (DEKs) using AES-CBC with [PKCS#7](https://datatracker.ietf.org/doc/html/rfc2315) padding (prior to v1.25) or AES-GCM (starting from v1.25); DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/).
{{< /table >}}

Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider
is the first provider, the first key is used for encryption.
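As a sketch of how a provider and its keys are configured (the key name and secret below are placeholders; a real `aescbc` secret must be a base64-encoded 32-byte key), the kube-apiserver is pointed at a file like this via `--encryption-provider-config`:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # the first provider is used to encrypt; all are tried, in order, to decrypt
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>   # placeholder
      # fallback so existing unencrypted data can still be read
      - identity: {}
```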
@@ -8,7 +8,7 @@ weight: 50

<!-- overview -->

The `dockershim` component of Kubernetes allows the use of Docker as Kubernetes's
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
Kubernetes' built-in `dockershim` component was removed in release v1.24.
@@ -40,11 +40,11 @@ dependency on Docker:

1. Third-party tools that perform above mentioned privileged operations. See
   [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
   for more information.
1. Make sure there are no indirect dependencies on dockershim behavior.
   This is an edge case and unlikely to affect your application. Some tooling may be configured
   to react to Docker-specific behaviors, for example, raise an alert on specific metrics or search for
   a specific log message as part of troubleshooting instructions.
   If you have such tooling configured, test the behavior on a test
   cluster before migration.

## Dependency on Docker explained {#role-of-dockershim}
@@ -74,7 +74,7 @@ before to check on these containers is no longer available.

You cannot get container information using `docker ps` or `docker inspect`
commands. As you cannot list containers, you cannot get logs, stop containers,
or execute something inside a container using `docker exec`.

{{< note >}}
@@ -67,6 +67,7 @@ cat /mnt/data/index.html
```

The output should be:

```
Hello from Kubernetes storage
```
@@ -247,8 +248,8 @@ You can now close the shell to your Node.

You can perform 2 volume mounts on your nginx container:

- `/usr/share/nginx/html` for the static website
- `/etc/nginx/nginx.conf` for the default config

<!-- discussion -->
@@ -261,6 +262,7 @@ with a GID. Then the GID is automatically added to any Pod that uses the
PersistentVolume.

Use the `pv.beta.kubernetes.io/gid` annotation as follows:

```yaml
apiVersion: v1
kind: PersistentVolume
@@ -269,6 +271,7 @@ metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "1234"
```

When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID
is applied to all containers in the Pod in the same way that GIDs specified in the
Pod's security context are. Every GID, whether it originates from a PersistentVolume
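The effect is the same as if you had listed the GID in the Pod's security context yourself. A hypothetical Pod snippet that would produce the equivalent supplemental group (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gid-demo   # illustrative name
spec:
  securityContext:
    supplementalGroups: [1234]  # same GID as the annotation above
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```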
@@ -59,46 +59,32 @@ Figure 1. SOCKS5 tutorial components

## Using ssh to create a SOCKS5 proxy

The following command starts a SOCKS5 proxy between your client machine and the remote SOCKS server:

```shell
# The SSH tunnel continues running in the foreground after you run this
ssh -D 1080 -q -N username@kubernetes-remote-server.example
```

The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:

* `-D 1080`: opens a SOCKS proxy on local port :1080.
* `-q`: quiet mode. Causes most warning and diagnostic messages to be suppressed.
* `-N`: do not execute a remote command. Useful for just forwarding ports.
* `username@kubernetes-remote-server.example`: the remote SSH server behind which the Kubernetes cluster
  is running (for example, a bastion host).

## Client configuration

To access the Kubernetes API server through the proxy you must instruct `kubectl` to send queries through
the SOCKS proxy we created earlier. Do this by either setting the appropriate environment variable,
or via the `proxy-url` attribute in the kubeconfig file. Using an environment variable:

```shell
export HTTPS_PROXY=socks5://localhost:1080
```

When you set the `HTTPS_PROXY` variable, tools such as `curl` route HTTPS traffic through the proxy
you configured. For this to work, the tool must support SOCKS5 proxying.

{{< note >}}
In the URL https://localhost:6443/api, `localhost` does not refer to your local client computer.
Instead, it refers to the endpoint on the remote server known as `localhost`.
The `curl` tool sends the hostname from the HTTPS URL over SOCKS, and the remote server
resolves that locally (to an address that belongs to its loopback interface).
{{</ note >}}

```shell
curl -k -v https://localhost:6443/api
```

To always use this setting on a specific `kubectl` context, specify the `proxy-url` attribute in the relevant
`cluster` entry within the `~/.kube/config` file. For example:

```yaml
apiVersion: v1
@@ -106,7 +92,7 @@ clusters:
- cluster:
    certificate-authority-data: LRMEMMW2 # shortened for readability
    server: https://<API_SERVER_IP_ADDRESS>:6443 # the "Kubernetes API" server, in other words the IP address of kubernetes-remote-server.example
    proxy-url: socks5://localhost:1080 # the "SSH SOCKS5 proxy" in the diagram above
  name: default
contexts:
- context:
@@ -123,7 +109,8 @@ users:
    client-key-data: LS0tLS1CRUdJT= # shortened for readability
```

Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or
the `proxy-url` attribute, you can interact with your cluster through that proxy. For example:

```shell
kubectl get pods
@@ -134,6 +121,24 @@ NAMESPACE NAME READY STATUS RESTA
kube-system coredns-85cb69466-klwq8 1/1 Running 0 5m46s
```

{{< note >}}
- Before `kubectl` 1.24, most `kubectl` commands worked when using a socks proxy, except `kubectl exec`.
- `kubectl` supports both `HTTPS_PROXY` and `https_proxy` environment variables. These are used by other
  programs that support SOCKS, such as `curl`. Therefore in some cases it
  will be better to define the environment variable on the command line:
  ```shell
  HTTPS_PROXY=socks5://localhost:1080 kubectl get pods
  ```
- When using `proxy-url`, the proxy is used only for the relevant `kubectl` context,
  whereas the environment variable will affect all contexts.
- The Kubernetes API server hostname can be further protected from DNS leakage by using the `socks5h` protocol name
  instead of the more commonly known `socks5` protocol shown above. In this case, `kubectl` will ask the proxy server
  (such as an ssh bastion) to resolve the Kubernetes API server domain name, instead of resolving it on the system running
  `kubectl`. Note also that with `socks5h`, a Kubernetes API server URL like `https://localhost:6443/api` does not refer
  to your local client computer. Instead, it refers to `localhost` as known on the proxy server (for example, the ssh bastion).
{{</ note >}}

## Clean up

Stop the ssh port-forwarding process by pressing `CTRL+C` on the terminal where it is running.
@@ -88,65 +88,5 @@ If you're using AMD GPU devices, you can deploy

Node Labeller is a {{< glossary_tooltip text="controller" term_id="controller" >}} that automatically
labels your nodes with GPU device properties.

At the moment, that controller can add labels for:

* Device ID (-device-id)
* VRAM Size (-vram)
* Number of SIMD (-simd-count)
* Number of Compute Units (-cu-count)
* Firmware and Feature Versions (-firmware)
* GPU Family, as a two-letter acronym (-family)
  * SI - Southern Islands
  * CI - Sea Islands
  * KV - Kaveri
  * VI - Volcanic Islands
  * CZ - Carrizo
  * AI - Arctic Islands
  * RV - Raven

```shell
kubectl describe node cluster-node-23
```

```
Name:               cluster-node-23
Roles:              <none>
Labels:             beta.amd.com/gpu.cu-count.64=1
                    beta.amd.com/gpu.device-id.6860=1
                    beta.amd.com/gpu.family.AI=1
                    beta.amd.com/gpu.simd-count.256=1
                    beta.amd.com/gpu.vram.16G=1
                    kubernetes.io/arch=amd64
                    kubernetes.io/os=linux
                    kubernetes.io/hostname=cluster-node-23
Annotations:        node.alpha.kubernetes.io/ttl: 0
…
```

With the Node Labeller in use, you can specify the GPU type in the Pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
    image: "registry.k8s.io/cuda-vector-add:v0.1"
    resources:
      limits:
        nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: beta.amd.com/gpu.family.AI # Arctic Islands GPU family
            operator: Exists
```

This ensures that the Pod will be scheduled to a node that has the GPU type
you specified.
Similar functionality for NVIDIA is provided by
[GPU feature discovery](https://github.com/NVIDIA/gpu-feature-discovery/blob/main/README.md).
@@ -120,7 +120,7 @@ metadata:
```

{{< note >}}
Each variable in the `.env` file becomes a separate key in the ConfigMap that you generate. This is different from the previous example which embeds a file named `application.properties` (and all its entries) as the value for a single key.
{{< /note >}}

ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the `literals` list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a key-value pair:
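A minimal sketch of such a `kustomization.yaml` (the ConfigMap name and the key-value pair are illustrative):

```yaml
configMapGenerator:
- name: special-config-2      # illustrative generator name
  literals:
  - special.how=very          # illustrative key-value pair
```

Running `kubectl kustomize` against a directory containing this file produces a ConfigMap with `special.how: very` in its `data` section.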
@@ -50,7 +50,9 @@ specified by one of the built-in Kubernetes controllers:
In this case, make a note of the controller's `.spec.selector`; the same
selector goes into the PDB's `.spec.selector`.

From version 1.15 PDBs support custom controllers where the
[scale subresource](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource)
is enabled.

You can also use PDBs with pods which are not controlled by one of the above
controllers, or arbitrary groups of pods, but there are some restrictions,
@@ -74,7 +76,8 @@ due to a voluntary disruption.
- Multiple-instance Stateful application such as Consul, ZooKeeper, or etcd:
  - Concern: Do not reduce number of instances below quorum, otherwise writes fail.
  - Possible Solution 1: set maxUnavailable to 1 (works with varying scale of application).
  - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5).
    (Allows more disruptions at once).
- Restartable Batch Job:
  - Concern: Job needs to complete in case of voluntary disruption.
  - Possible solution: Do not create a PDB. The Job controller will create a replacement pod.
@@ -83,17 +86,20 @@ due to a voluntary disruption.

Values for `minAvailable` or `maxUnavailable` can be expressed as integers or as a percentage.

- When you specify an integer, it represents a number of Pods. For instance, if you set
  `minAvailable` to 10, then 10 Pods must always be available, even during a disruption.
- When you specify a percentage by setting the value to a string representation of a
  percentage (eg. `"50%"`), it represents a percentage of total Pods. For instance, if
  you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available
  during a disruption.

When you specify the value as a percentage, it may not map to an exact number of Pods.
For example, if you have 7 Pods and you set `minAvailable` to `"50%"`, it's not
immediately obvious whether that means 3 Pods or 4 Pods must be available. Kubernetes
rounds up to the nearest integer, so in this case, 4 Pods must be available. When you
specify the value `maxUnavailable` as a percentage, Kubernetes rounds up the number of
Pods that may be disrupted. Thereby a disruption can exceed your defined
`maxUnavailable` percentage. You can examine the
[code](https://github.com/kubernetes/kubernetes/blob/23be9587a0f8677eb8091464098881df939c44a9/pkg/controller/disruption/disruption.go#L539)
that controls this behavior.
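The rounding rule above can be sketched in a few lines of Python. This is an illustration of the arithmetic only, not the disruption controller's actual code:

```python
import math

def pods_required(total_pods: int, min_available_percent: int) -> int:
    """Round up, as Kubernetes does for a percentage-based minAvailable."""
    return math.ceil(total_pods * min_available_percent / 100)

# 7 Pods with minAvailable "50%": 3.5 rounds up, so 4 Pods must stay available
print(pods_required(7, 50))  # 4
```

The same ceiling applies to a percentage-based `maxUnavailable`, which is why a disruption can slightly exceed the configured percentage.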
@@ -151,8 +157,8 @@ voluntary evictions, not all causes of unavailability.
If you set `maxUnavailable` to 0% or 0, or you set `minAvailable` to 100% or the number of replicas,
you are requiring zero voluntary evictions. When you set zero voluntary evictions for a workload
object such as ReplicaSet, then you cannot successfully drain a Node running one of those Pods.
If you try to drain a Node where an unevictable Pod is running, the drain never completes.
This is permitted as per the semantics of `PodDisruptionBudget`.

You can find examples of pod disruption budgets defined below. They match pods with the label
`app: zookeeper`.
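For instance, a PodDisruptionBudget that keeps at least two of the matching ZooKeeper Pods available might look like this sketch (the object name is illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb        # illustrative name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper
```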
@@ -229,7 +235,8 @@ status:

### Healthiness of a Pod

The current implementation considers healthy pods as pods that have a `.status.conditions`
item with `type="Ready"` and `status="True"`.
These pods are tracked via the `.status.currentHealthy` field in the PDB status.

## Unhealthy Pod Eviction Policy
@@ -251,22 +258,26 @@ to the `IfHealthyBudget` policy.
Policies:

`IfHealthyBudget`
: Running pods (`.status.phase="Running"`), but not yet healthy can be evicted only
  if the guarded application is not disrupted (`.status.currentHealthy` is at least
  equal to `.status.desiredHealthy`).

: This policy ensures that running pods of an already disrupted application have
  the best chance to become healthy. This has negative implications for draining
  nodes, which can be blocked by misbehaving applications that are guarded by a PDB.
  More specifically applications with pods in `CrashLoopBackOff` state
  (due to a bug or misconfiguration), or pods that are just failing to report the
  `Ready` condition.

`AlwaysAllow`
: Running pods (`.status.phase="Running"`), but not yet healthy are considered
  disrupted and can be evicted regardless of whether the criteria in a PDB are met.

: This means prospective running pods of a disrupted application might not get a
  chance to become healthy. By using this policy, cluster managers can easily evict
  misbehaving applications that are guarded by a PDB. More specifically applications
  with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration), or pods
  that are just failing to report the `Ready` condition.

{{< note >}}
Pods in `Pending`, `Succeeded` or `Failed` phase are always considered for eviction.
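A policy is selected via the PDB's `spec.unhealthyPodEvictionPolicy` field. A sketch, reusing the ZooKeeper selector from earlier (the object name is illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb        # illustrative name
spec:
  minAvailable: 2
  unhealthyPodEvictionPolicy: AlwaysAllow   # or IfHealthyBudget (the default)
  selector:
    matchLabels:
      app: zookeeper
```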
@@ -22,7 +22,8 @@ This task shows you how to delete a {{< glossary_tooltip term_id="StatefulSet" >

## Deleting a StatefulSet

You can delete a StatefulSet in the same way you delete other resources in Kubernetes:
use the `kubectl delete` command, and specify the StatefulSet either by file or by name.

```shell
kubectl delete -f <file.yaml>
@@ -38,14 +39,17 @@ You may need to delete the associated headless service separately after the Stat
kubectl delete service <service-name>
```

When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0.
All Pods that are part of this workload are also deleted. If you want to delete
only the StatefulSet and not the Pods, use `--cascade=orphan`. For example:

```shell
kubectl delete -f <file.yaml> --cascade=orphan
```

By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet
are left behind even after the StatefulSet object itself is deleted. If the pods have
a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows:

```shell
kubectl delete pods -l app.kubernetes.io/name=MyApp
@@ -53,7 +57,12 @@ kubectl delete pods -l app.kubernetes.io/name=MyApp

### Persistent Volumes

Deleting the Pods in a StatefulSet will not delete the associated volumes.
This is to ensure that you have the chance to copy data off the volume before
deleting it. Deleting the PVC after the pods have terminated might trigger
deletion of the backing Persistent Volumes depending on the storage class
and reclaim policy. You should never assume ability to access a volume
after claim deletion.

{{< note >}}
Use caution when deleting a PVC, as it may lead to data loss.
@@ -61,7 +70,8 @@ Use caution when deleting a PVC, as it may lead to data loss.

### Complete deletion of a StatefulSet

To delete everything in a StatefulSet, including the associated pods,
you can run a series of commands similar to the following:

```shell
grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')
@@ -71,11 +81,17 @@ kubectl delete pvc -l app.kubernetes.io/name=MyApp

```

In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`;
substitute your own label as appropriate.

### Force deletion of StatefulSet pods

If you find that some pods in your StatefulSet are stuck in the 'Terminating'
or 'Unknown' states for an extended period of time, you may need to manually
intervene to forcefully delete the pods from the apiserver.
This is a potentially dangerous task. Refer to
[Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/)
for details.

## {{% heading "whatsnext" %}}
@@ -14,14 +14,17 @@ weight: 50

<!-- overview -->

This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to
increasing or decreasing the number of replicas.

## {{% heading "prerequisites" %}}

- StatefulSets are only available in Kubernetes version 1.5 or later.
  To check your version of Kubernetes, run `kubectl version`.

- Not all stateful applications scale nicely. If you are unsure about whether
  to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/)
  or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information.

- You should perform scaling only when you are confident that your stateful application
  cluster is completely healthy.
@@ -46,7 +49,9 @@ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>

### Make in-place updates on your StatefulSets

Alternatively, you can do
[in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources)
on your StatefulSets.

If your StatefulSet was initially created with `kubectl apply`,
update `.spec.replicas` of the StatefulSet manifests, and then do a `kubectl apply`:
@@ -71,10 +76,12 @@ kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-repl

### Scaling down does not work right

You cannot scale down a StatefulSet when any of the stateful Pods it manages is
unhealthy. Scaling down only takes place after those stateful Pods become running and ready.

If `spec.replicas` > 1, Kubernetes cannot determine the reason for an unhealthy Pod.
It might be the result of a permanent fault or of a transient fault. A transient
fault can be caused by a restart required by upgrading or maintenance.

If the Pod is unhealthy due to a permanent fault, scaling
without correcting the fault may lead to a state where the StatefulSet membership
@@ -30,6 +30,11 @@ Install the following on your workstation:

- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/docs/tasks/tools/)

This tutorial demonstrates what you can configure for a Kubernetes cluster that you fully
control. If you are learning how to configure Pod Security Admission for a managed cluster
where you are not able to configure the control plane, read
[Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss).

## Choose the right Pod Security Standard to apply

[Pod Security Admission](/docs/concepts/security/pod-security-admission/)
@@ -42,22 +47,22 @@ that are most appropriate for your configuration, do the following:
1. Create a cluster with no Pod Security Standards applied:

   ```shell
   kind create cluster --name psa-wo-cluster-pss
   ```
   The output is similar to:
   ```
   Creating cluster "psa-wo-cluster-pss" ...
    ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
    ✓ Preparing nodes 📦
    ✓ Writing configuration 📜
    ✓ Starting control-plane 🕹️
    ✓ Installing CNI 🔌
    ✓ Installing StorageClass 💾
   Set kubectl context to "kind-psa-wo-cluster-pss"
   You can now use your cluster with:

   kubectl cluster-info --context kind-psa-wo-cluster-pss

   Thanks for using kind! 😊
   ```
@@ -72,7 +77,7 @@ that are most appropriate for your configuration, do the following:
   Kubernetes control plane is running at https://127.0.0.1:61350

   CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

   To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
   ```
@@ -82,7 +87,7 @@ that are most appropriate for your configuration, do the following:
   kubectl get ns
   ```
   The output is similar to this:
   ```
   NAME                 STATUS   AGE
   default              Active   9m30s
   kube-node-lease      Active   9m32s
@@ -99,8 +104,9 @@ that are most appropriate for your configuration, do the following:
   kubectl label --dry-run=server --overwrite ns --all \
   pod-security.kubernetes.io/enforce=privileged
   ```

   The output is similar to:
   ```
   namespace/default labeled
   namespace/kube-node-lease labeled
   namespace/kube-public labeled
@@ -108,12 +114,13 @@ that are most appropriate for your configuration, do the following:
   namespace/local-path-storage labeled
   ```
2. Baseline
   ```shell
   kubectl label --dry-run=server --overwrite ns --all \
   pod-security.kubernetes.io/enforce=baseline
   ```

   The output is similar to:
   ```
   namespace/default labeled
   namespace/kube-node-lease labeled
   namespace/kube-public labeled
@@ -123,15 +130,16 @@ that are most appropriate for your configuration, do the following:
   Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged
   namespace/kube-system labeled
   namespace/local-path-storage labeled
   ```

3. Restricted
   ```shell
   kubectl label --dry-run=server --overwrite ns --all \
   pod-security.kubernetes.io/enforce=restricted
   ```

   The output is similar to:
   ```
   namespace/default labeled
   namespace/kube-node-lease labeled
   namespace/kube-public labeled
@@ -180,7 +188,7 @@ following:

   ```
   mkdir -p /tmp/pss
   cat <<EOF > /tmp/pss/cluster-level-pss.yaml
   apiVersion: apiserver.config.k8s.io/v1
   kind: AdmissionConfiguration
   plugins:
@@ -212,7 +220,7 @@ following:
1. Configure the API server to consume this file during cluster creation:

   ```
   cat <<EOF > /tmp/pss/cluster-config.yaml
   kind: Cluster
   apiVersion: kind.x-k8s.io/v1alpha4
   nodes:
@@ -255,22 +263,22 @@ following:
   these Pod Security Standards:

   ```shell
   kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml
   ```
   The output is similar to this:
   ```
   Creating cluster "psa-with-cluster-pss" ...
    ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
    ✓ Preparing nodes 📦
    ✓ Writing configuration 📜
    ✓ Starting control-plane 🕹️
    ✓ Installing CNI 🔌
    ✓ Installing StorageClass 💾
   Set kubectl context to "kind-psa-with-cluster-pss"
   You can now use your cluster with:

   kubectl cluster-info --context kind-psa-with-cluster-pss

   Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
   ```
@ -281,36 +289,21 @@ following:
|
|||
The output is similar to this:
|
||||
```
|
||||
Kubernetes control plane is running at https://127.0.0.1:63855
|
||||
|
||||
CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
|
||||
|
||||
|
||||
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
|
||||
```
|
||||
1. Create the following Pod specification for a minimal configuration in the default namespace:
|
||||
|
||||
```
|
||||
cat <<EOF > /tmp/pss/nginx-pod.yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
EOF
|
||||
```
|
||||
1. Create the Pod in the cluster:
|
||||
1. Create a Pod in the default namespace:
|
||||
|
||||
```shell
|
||||
kubectl apply -f /tmp/pss/nginx-pod.yaml
|
||||
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
|
||||
```
|
||||
The output is similar to this:
|
||||
|
||||
The pod is started normally, but the output includes a warning:
|
||||
```
|
||||
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
|
||||
pod/nginx created
|
||||
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
|
||||
pod/nginx created
|
||||
```
|
||||
|
||||
## Clean up
|
||||
|
|
|
|||
|
|
@ -31,14 +31,14 @@ Install the following on your workstation:
|
|||
1. Create a `KinD` cluster as follows:
|
||||
|
||||
```shell
|
||||
kind create cluster --name psa-ns-level --image kindest/node:v1.23.0
|
||||
kind create cluster --name psa-ns-level
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```
|
||||
Creating cluster "psa-ns-level" ...
|
||||
✓ Ensuring node image (kindest/node:v1.23.0) 🖼
|
||||
✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
|
||||
✓ Preparing nodes 📦
|
||||
✓ Writing configuration 📜
|
||||
✓ Starting control-plane 🕹️
|
||||
|
|
@ -80,11 +80,12 @@ The output is similar to this:
|
|||
namespace/example created
|
||||
```
|
||||
|
||||
## Apply Pod Security Standards
|
||||
## Enable Pod Security Standards checking for that namespace
|
||||
|
||||
1. Enable Pod Security Standards on this namespace using labels supported by
|
||||
built-in Pod Security Admission. In this step we will warn on baseline pod
|
||||
security standard as per the latest version (default value)
|
||||
built-in Pod Security Admission. In this step you will configure a check to
|
||||
warn on Pods that don't meet the latest version of the _baseline_ pod
|
||||
security standard.
|
||||
|
||||
```shell
|
||||
kubectl label --overwrite ns example \
|
||||
|
|
@ -92,8 +93,8 @@ namespace/example created
|
|||
pod-security.kubernetes.io/warn-version=latest
|
||||
```
|
||||
|
||||
2. Multiple pod security standards can be enabled on any namespace, using labels.
|
||||
Following command will `enforce` the `baseline` Pod Security Standard, but
|
||||
2. You can configure multiple pod security standard checks on any namespace, using labels.
|
||||
The following command will `enforce` the `baseline` Pod Security Standard, but
|
||||
`warn` and `audit` for `restricted` Pod Security Standards as per the latest
|
||||
version (default value)
|
||||
|
||||
|
|
@ -107,41 +108,24 @@ namespace/example created
|
|||
pod-security.kubernetes.io/audit-version=latest
|
||||
```
|
||||
|
||||
## Verify the Pod Security Standards
|
||||
## Verify the Pod Security Standard enforcement
|
||||
|
||||
1. Create a minimal pod in `example` namespace:
|
||||
1. Create a baseline Pod in the `example` namespace:
|
||||
|
||||
```shell
|
||||
cat <<EOF > /tmp/pss/nginx-pod.yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
EOF
|
||||
kubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml
|
||||
```
|
||||
|
||||
1. Apply the pod spec to the cluster in `example` namespace:
|
||||
|
||||
```shell
|
||||
kubectl apply -n example -f /tmp/pss/nginx-pod.yaml
|
||||
```
|
||||
The output is similar to this:
|
||||
The Pod does start OK; the output includes a warning. For example:
|
||||
|
||||
```
|
||||
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
|
||||
pod/nginx created
|
||||
```
|
||||
|
||||
1. Apply the pod spec to the cluster in `default` namespace:
|
||||
1. Create a baseline Pod in the `default` namespace:
|
||||
|
||||
```shell
|
||||
kubectl apply -n default -f /tmp/pss/nginx-pod.yaml
|
||||
kubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml
|
||||
```
|
||||
Output is similar to this:
|
||||
|
||||
|
|
@ -149,9 +133,9 @@ namespace/example created
|
|||
pod/nginx created
|
||||
```
|
||||
|
||||
The Pod Security Standards were applied only to the `example`
|
||||
namespace. You could create the same Pod in the `default` namespace
|
||||
with no warnings.
|
||||
The Pod Security Standards enforcement and warning settings were applied only
|
||||
to the `example` namespace. You could create the same Pod in the `default`
|
||||
namespace with no warnings.
|
||||
|
||||
## Clean up
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,221 @@
|
|||
---
|
||||
title: Explore Termination Behavior for Pods And Their Endpoints
|
||||
content_type: tutorial
|
||||
weight: 60
|
||||
---
|
||||
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Once you have connected your application to a Service by following steps like those outlined in
[Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/),
you have a continuously running, replicated application that is exposed on a network.
This tutorial helps you look at the termination flow for Pods and explore ways to implement
graceful connection draining.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Termination process for Pods and their endpoints
|
||||
|
||||
There are often cases when you need to terminate a Pod, whether to upgrade it or to scale down.
In order to improve application availability, it may be important to implement
proper draining of active connections.

This tutorial explains the flow of Pod termination in connection with the
corresponding endpoint state and removal, using
a simple nginx web server to demonstrate the concept.
|
||||
|
||||
|
||||
|
||||
## Example flow with endpoint termination
|
||||
|
||||
The following is an example of the flow described in the
|
||||
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
|
||||
document.
|
||||
|
||||
Let's say you have a Deployment containing a single `nginx` replica
|
||||
(just for demonstration purposes) and a Service:
|
||||
|
||||
{{< codenew file="service/pod-with-graceful-termination.yaml" >}}
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 120 # extra long grace period
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:latest
|
||||
ports:
|
||||
- containerPort: 80
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
# Real life termination may take any time up to terminationGracePeriodSeconds.
|
||||
# In this example - just hang around for at least the duration of terminationGracePeriodSeconds,
|
||||
# after 120 seconds the container will be forcibly terminated.
|
||||
# Note, all this time nginx will keep processing requests.
|
||||
command: [
|
||||
"/bin/sh", "-c", "sleep 180"
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx-service
|
||||
spec:
|
||||
selector:
|
||||
app: nginx
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 80
|
||||
targetPort: 80
|
||||
```
|
||||
|
||||
Once the Pod and Service are running, you can get the name of any associated EndpointSlices:
|
||||
|
||||
```shell
|
||||
kubectl get endpointslice
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```none
|
||||
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
|
||||
nginx-service-6tjbr IPv4 80 10.12.1.199,10.12.1.201 22m
|
||||
```
|
||||
|
||||
You can see its status, and validate that there is one endpoint registered:
|
||||
|
||||
```shell
|
||||
kubectl get endpointslices -o json -l kubernetes.io/service-name=nginx-service
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```none
|
||||
{
|
||||
"addressType": "IPv4",
|
||||
"apiVersion": "discovery.k8s.io/v1",
|
||||
"endpoints": [
|
||||
{
|
||||
"addresses": [
|
||||
"10.12.1.201"
|
||||
],
|
||||
"conditions": {
|
||||
"ready": true,
|
||||
"serving": true,
|
||||
"terminating": false
|
||||
```
|
||||
|
||||
Now, terminate the Pod and validate that it is terminated
respecting the graceful termination period configuration:
|
||||
|
||||
```shell
|
||||
kubectl delete pod nginx-deployment-7768647bf9-b4b9s
|
||||
```
|
||||
|
||||
List all the Pods:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```none
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-deployment-7768647bf9-b4b9s 1/1 Terminating 0 4m1s
|
||||
nginx-deployment-7768647bf9-rkxlw 1/1 Running 0 8s
|
||||
```
|
||||
|
||||
You can see that the new pod got scheduled.
|
||||
|
||||
While the new endpoint is being created for the new Pod, the old endpoint is
|
||||
still around in the terminating state:
|
||||
|
||||
```shell
|
||||
kubectl get endpointslice -o json nginx-service-6tjbr
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```none
|
||||
{
|
||||
"addressType": "IPv4",
|
||||
"apiVersion": "discovery.k8s.io/v1",
|
||||
"endpoints": [
|
||||
{
|
||||
"addresses": [
|
||||
"10.12.1.201"
|
||||
],
|
||||
"conditions": {
|
||||
"ready": false,
|
||||
"serving": true,
|
||||
"terminating": true
|
||||
},
|
||||
"nodeName": "gke-main-default-pool-dca1511c-d17b",
|
||||
"targetRef": {
|
||||
"kind": "Pod",
|
||||
"name": "nginx-deployment-7768647bf9-b4b9s",
|
||||
"namespace": "default",
|
||||
"uid": "66fa831c-7eb2-407f-bd2c-f96dfe841478"
|
||||
},
|
||||
"zone": "us-central1-c"
|
||||
},
|
||||
{
|
||||
"addresses": [
|
||||
"10.12.1.202"
|
||||
],
|
||||
"conditions": {
|
||||
"ready": true,
|
||||
"serving": true,
|
||||
"terminating": false
|
||||
},
|
||||
"nodeName": "gke-main-default-pool-dca1511c-d17b",
|
||||
"targetRef": {
|
||||
"kind": "Pod",
|
||||
"name": "nginx-deployment-7768647bf9-rkxlw",
|
||||
"namespace": "default",
|
||||
"uid": "722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35"
|
||||
},
|
||||
"zone": "us-central1-c"
|
||||
```
|
||||
|
||||
This allows applications to communicate their state during termination,
and clients (such as load balancers) to implement connection-draining functionality.
These clients may detect terminating endpoints and implement special logic for them.

In Kubernetes, endpoints that are terminating always have their `ready` status set to `false`.
This needs to happen for backward
compatibility, so existing load balancers will not use them for regular traffic.
If traffic needs to be drained from a terminating Pod, the actual readiness can be
checked via the `serving` condition.
|
||||
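As an illustration of how a client could use these conditions, here is a minimal Python sketch (not real load-balancer code) that selects the addresses of endpoints whose `serving` condition is true, using data shaped like the EndpointSlice output above:

```python
# Minimal sketch (not real load-balancer code): pick addresses that are still
# serving, including terminating endpoints, from EndpointSlice-shaped data.
endpoint_slice = {
    "addressType": "IPv4",
    "endpoints": [
        {"addresses": ["10.12.1.201"],
         "conditions": {"ready": False, "serving": True, "terminating": True}},
        {"addresses": ["10.12.1.202"],
         "conditions": {"ready": True, "serving": True, "terminating": False}},
    ],
}

def serving_addresses(slice_obj):
    """Return all addresses whose endpoint reports serving == True."""
    return [addr
            for ep in slice_obj["endpoints"]
            if ep["conditions"].get("serving")
            for addr in ep["addresses"]]

print(serving_addresses(endpoint_slice))  # both endpoints are still serving
```

A client that drains on `serving` keeps sending traffic to the terminating endpoint until it actually stops serving, whereas filtering on `ready` alone would already exclude it.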
|
||||
When the Pod is deleted, the old endpoint is also deleted.
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Learn how to [Connect Applications with Services](/docs/tutorials/services/connect-applications-service/)
|
||||
* Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
|
||||
* Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/)
|
||||
* Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
|
||||
|
||||
|
|
@ -0,0 +1,10 @@
|
|||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
|
|
@ -51,11 +51,12 @@ nodes:
|
|||
# default None
|
||||
propagation: None
|
||||
EOF
|
||||
kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.23.0 --config /tmp/pss/cluster-config.yaml
|
||||
kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml
|
||||
kubectl cluster-info --context kind-psa-with-cluster-pss
|
||||
|
||||
# Wait for 15 seconds (arbitrary) for the ServiceAccount Admission Controller to be available
|
||||
sleep 15
|
||||
cat <<EOF > /tmp/pss/nginx-pod.yaml
|
||||
cat <<EOF |
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
|
|
@ -67,4 +68,17 @@ spec:
|
|||
ports:
|
||||
- containerPort: 80
|
||||
EOF
|
||||
kubectl apply -f /tmp/pss/nginx-pod.yaml
|
||||
kubectl apply -f -
|
||||
|
||||
# Await input
|
||||
sleep 1
|
||||
( bash -c 'true' 2>/dev/null && bash -c 'read -p "Press any key to continue... " -n1 -s' ) || \
|
||||
( printf "Press Enter to continue... " && read ) 1>&2
|
||||
|
||||
# Clean up
|
||||
printf "\n\nCleaning up:\n" 1>&2
|
||||
set -e
|
||||
kubectl delete pod --all -n example --now
|
||||
kubectl delete ns example
|
||||
kind delete cluster --name psa-with-cluster-pss
|
||||
rm -f /tmp/pss/cluster-config.yaml
|
||||
|
|
|
|||
|
|
@ -1,11 +1,11 @@
|
|||
#!/bin/sh
|
||||
# Until v1.23 is released, kind node image needs to be built from k/k master branch
|
||||
# Ref: https://kind.sigs.k8s.io/docs/user/quick-start/#building-images
|
||||
kind create cluster --name psa-ns-level --image kindest/node:v1.23.0
|
||||
kind create cluster --name psa-ns-level
|
||||
kubectl cluster-info --context kind-psa-ns-level
|
||||
# Wait for 15 seconds (arbitrary) ServiceAccount Admission Controller to be available
|
||||
# Wait for 15 seconds (arbitrary) for ServiceAccount Admission Controller to be available
|
||||
sleep 15
|
||||
kubectl create ns example
|
||||
|
||||
# Create and label the namespace
|
||||
kubectl create ns example || exit 1 # if namespace exists, don't do the next steps
|
||||
kubectl label --overwrite ns example \
|
||||
pod-security.kubernetes.io/enforce=baseline \
|
||||
pod-security.kubernetes.io/enforce-version=latest \
|
||||
|
|
@ -13,7 +13,9 @@ kubectl label --overwrite ns example \
|
|||
pod-security.kubernetes.io/warn-version=latest \
|
||||
pod-security.kubernetes.io/audit=restricted \
|
||||
pod-security.kubernetes.io/audit-version=latest
|
||||
cat <<EOF > /tmp/pss/nginx-pod.yaml
|
||||
|
||||
# Try running a Pod
|
||||
cat <<EOF |
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
|
|
@ -25,4 +27,16 @@ spec:
|
|||
ports:
|
||||
- containerPort: 80
|
||||
EOF
|
||||
kubectl apply -n example -f /tmp/pss/nginx-pod.yaml
|
||||
kubectl apply -n example -f -
|
||||
|
||||
# Await input
|
||||
sleep 1
|
||||
( bash -c 'true' 2>/dev/null && bash -c 'read -p "Press any key to continue... " -n1 -s' ) || \
|
||||
( printf "Press Enter to continue... " && read ) 1>&2
|
||||
|
||||
# Clean up
|
||||
printf "\n\nCleaning up:\n" 1>&2
|
||||
set -e
|
||||
kubectl delete pod --all -n example --now
|
||||
kubectl delete ns example
|
||||
kind delete cluster --name psa-ns-level
|
||||
|
|
|
|||
|
|
@ -0,0 +1,32 @@
|
|||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 120 # extra long grace period
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:latest
|
||||
ports:
|
||||
- containerPort: 80
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
# Real life termination may take any time up to terminationGracePeriodSeconds.
|
||||
# In this example - just hang around for at least the duration of terminationGracePeriodSeconds,
|
||||
# after 120 seconds the container will be forcibly terminated.
|
||||
# Note, all this time nginx will keep processing requests.
|
||||
command: [
|
||||
"/bin/sh", "-c", "sleep 180"
|
||||
]
|
||||
|
|
@ -78,9 +78,9 @@ releases may also occur in between these.
|
|||
|
||||
| Monthly Patch Release | Cherry Pick Deadline | Target date |
|
||||
| --------------------- | -------------------- | ----------- |
|
||||
| February 2023 | 2023-02-10 | 2023-02-15 |
|
||||
| March 2023 | 2023-03-10 | 2023-03-15 |
|
||||
| April 2023 | 2023-04-07 | 2023-04-12 |
|
||||
| May 2023 | 2023-05-12 | 2023-05-17 |
|
||||
| June 2023 | 2023-06-09 | 2023-06-14 |
|
||||
|
||||
## Detailed Release History for Active Branches
|
||||
|
||||
|
|
|
|||
|
|
@ -722,7 +722,7 @@ Conditions:
|
|||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {scheduler } scheduled Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD pulled Pod container image "registry.k8s.io/pause:0.8.0" already present on machine
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD created Created with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} implicitly required container POD started Started with docker id 6a41280f516d
|
||||
Tue, 07 Jul 2015 12:53:51 -0700 Tue, 07 Jul 2015 12:53:51 -0700 1 {kubelet kubernetes-node-tf0f} spec.containers{simmemleak} created Created with docker id 87348f12526a
|
||||
|
|
|
|||
|
|
@ -840,7 +840,7 @@ spec:
|
|||
secretName: dotfile-secret
|
||||
containers:
|
||||
- name: dotfile-test-container
|
||||
image: k8s.gcr.io/busybox
|
||||
image: registry.k8s.io/busybox
|
||||
command:
|
||||
- ls
|
||||
- "-l"
|
||||
|
|
|
|||
|
|
@ -0,0 +1,87 @@
|
|||
---
|
||||
title: Finalizadores
|
||||
content_type: concept
|
||||
weight: 80
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{<glossary_definition term_id="finalizer" length="long">}}
|
||||
|
||||
You can use finalizers to control {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}
of resources by alerting controllers to perform specific cleanup tasks before deleting the resource.

Finalizers don't usually specify code to execute. Instead, they are typically
lists of keys on a specific resource, similar to annotations. Kubernetes specifies some finalizers automatically,
but you can also specify your own.

## How finalizers work

When you create a resource using a manifest file, you can specify
finalizers in the `metadata.finalizers` field. When you attempt to delete the
resource, the API server handling the delete request sees the values in the
`finalizers` field and does the following:

* Modifies the object to add a `metadata.deletionTimestamp` field with
  the time you started the deletion.
* Prevents the object from being removed until its `metadata.finalizers`
  field is empty.
* Returns a `202` status code (HTTP "Accepted").

The controller managing that finalizer notices the update to the object
setting the `metadata.deletionTimestamp`, indicating deletion
of the object has been requested.
The controller then attempts to satisfy the requirements of the finalizers
specified for that resource. Each time a finalizer condition is
satisfied, the controller removes that key from the resource's `finalizers`
field. When the `finalizers` field is empty, an object with a `deletionTimestamp`
field set is automatically deleted. You can also use finalizers to
prevent deletion of unmanaged resources.

A common example of a finalizer is `kubernetes.io/pv-protection`, which
prevents accidental deletion of `PersistentVolume` objects. When a
`PersistentVolume` object is in use by a Pod, Kubernetes adds the
`pv-protection` finalizer. If you try to delete the `PersistentVolume`, it enters a
`Terminating` status, but the controller cannot delete it while the finalizer exists.
When the Pod stops using the `PersistentVolume`, Kubernetes clears the
`pv-protection` finalizer, and the controller deletes the volume.
|
||||
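The deletion flow described above can be condensed into a minimal Python sketch (hypothetical helper names, not real Kubernetes code): the API server defers deletion while finalizers remain, and a controller clears its finalizer key once cleanup is done:

```python
# Hypothetical sketch of the finalizer flow (not real Kubernetes code).

def mark_for_deletion(obj, now="2021-07-07T00:00:00Z"):
    """API-server side: defer deletion while finalizers remain."""
    if obj["metadata"].get("finalizers"):
        obj["metadata"]["deletionTimestamp"] = now
        return 202  # HTTP "Accepted": object enters a terminating state
    return 200      # no finalizers: delete immediately

def reconcile(obj, finalizer, cleanup):
    """Controller side: run cleanup, then remove the finalizer key."""
    meta = obj["metadata"]
    if "deletionTimestamp" in meta and finalizer in meta["finalizers"]:
        cleanup(obj)
        meta["finalizers"].remove(finalizer)
    # once the list is empty, the object can be garbage-collected
    return not meta["finalizers"]

pv = {"metadata": {"name": "my-pv",
                   "finalizers": ["kubernetes.io/pv-protection"]}}
print(mark_for_deletion(pv))                      # deletion is deferred
print(reconcile(pv, "kubernetes.io/pv-protection",
                cleanup=lambda o: None))          # finalizer cleared
```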
|
||||
## Owner references, labels, and finalizers {#owners-labels-finalizers}

Like {{<glossary_tooltip text="labels" term_id="label">}},
[owner references](/docs/concepts/overview/working-with-objects/owners-dependents/)
describe the relationships between objects in Kubernetes, but are used for a
different purpose. When a
{{<glossary_tooltip text="controller" term_id="controller">}} manages objects like
Pods, it uses labels to track changes to groups of related objects.
For example, when a {{<glossary_tooltip text="Job" term_id="job">}} creates one
or more Pods, the Job controller applies labels to those Pods and tracks changes
to any Pods in the cluster with the same label.

The Job controller also adds *owner references* to those Pods, pointing at
the Job that created them. If you delete the Job while those Pods are running,
Kubernetes uses the owner references (not labels) to determine
which Pods in the cluster should be deleted.

Kubernetes also processes finalizers when it identifies owner references on
a resource marked for deletion.

In some situations, finalizers can block the deletion of dependent objects,
which can cause the object you originally wanted to delete to remain for
longer than expected without being fully removed. In those situations, you
should check finalizers and owner references on the objects and their
dependents to troubleshoot.

{{<note>}}
In cases where objects are stuck in a deleting state, avoid manually
removing finalizers to allow deletion to continue. Finalizers are usually
added to resources for a reason, so forcefully removing them can lead to
problems in your cluster. Manual removal should only be done
when the purpose of the finalizer is understood and is accomplished in some
other way (for example, by manually deleting a dependent object).
{{</note>}}

## {{% heading "whatsnext" %}}

* Read [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/)
  on the Kubernetes blog.
|
||||
|
|
@ -95,7 +95,7 @@ metadata:
|
|||
spec:
|
||||
containers:
|
||||
- name: cuda-test
|
||||
image: "k8s.gcr.io/cuda-vector-add:v0.1"
|
||||
image: "registry.k8s.io/cuda-vector-add:v0.1"
|
||||
resources:
|
||||
limits:
|
||||
nvidia.com/gpu: 1
|
||||
|
|
|
|||
|
|
@ -72,7 +72,7 @@ metadata:
|
|||
name: test-ebs
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-ebs
|
||||
|
|
@ -160,7 +160,7 @@ metadata:
|
|||
name: test-cinder
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-cinder-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-cinder
|
||||
|
|
@ -271,7 +271,7 @@ metadata:
|
|||
name: test-pd
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /cache
|
||||
|
|
@ -349,7 +349,7 @@ metadata:
|
|||
name: test-pd
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-pd
|
||||
|
|
@ -496,7 +496,7 @@ metadata:
|
|||
name: test-pd
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-pd
|
||||
|
|
@ -526,7 +526,7 @@ metadata:
|
|||
spec:
|
||||
containers:
|
||||
- name: test-webserver
|
||||
image: k8s.gcr.io/test-webserver:latest
|
||||
image: registry.k8s.io/test-webserver:latest
|
||||
volumeMounts:
|
||||
- mountPath: /var/local/aaa
|
||||
name: mydir
|
||||
|
|
@ -657,7 +657,7 @@ metadata:
|
|||
name: test-portworx-volume-pod
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /mnt
|
||||
|
|
@ -847,7 +847,7 @@ metadata:
|
|||
name: pod-0
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: pod-0
|
||||
volumeMounts:
|
||||
- mountPath: /test-pd
|
||||
|
|
@ -976,7 +976,7 @@ metadata:
|
|||
name: test-vmdk
|
||||
spec:
|
||||
containers:
|
||||
- image: k8s.gcr.io/test-webserver
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-vmdk
|
||||
|
|
|
|||
|
|
@ -85,7 +85,7 @@ spec:
|
|||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: nginx
|
||||
image: k8s.gcr.io/nginx-slim:0.8
|
||||
image: registry.k8s.io/nginx-slim:0.8
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: web
|
||||
|
|
|
|||
|
|
@ -0,0 +1,35 @@
|
|||
---
|
||||
title: Finalizer
id: finalizer
date: 2021-07-07
full_link: /docs/concepts/overview/working-with-objects/finalizers/
short_description: >
  A namespaced key that tells Kubernetes to wait until specific conditions
  are met before it fully deletes an object marked for deletion.
aka:
tags:
- fundamental
---
Finalizers are namespaced keys that tell Kubernetes to
wait until specific conditions are met before it fully
deletes an object marked for deletion.
Finalizers alert {{<glossary_tooltip text="controllers" term_id="controller">}}
to clean up resources the deleted object owned.

<!--more-->

When you tell Kubernetes to delete an object that has finalizers
specified for it, the Kubernetes API marks the object for deletion
by populating `metadata.deletionTimestamp`, and returns a `202`
status code (HTTP "Accepted").
The target object remains in a terminating state
while the control plane, or other components, take
the actions defined by the finalizers.
After these actions are complete, the controller removes the
relevant finalizers from the target object. When the `metadata.finalizers`
field is empty, Kubernetes considers the deletion complete and deletes
the object.

You can use finalizers to control {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}
of resources. For example, you can define a finalizer to clean up related
resources or infrastructure before the controller deletes the target object.
|
||||
|
|
@ -41,7 +41,7 @@ En este ejercicio crearás un Pod que ejecuta un único Contenedor. Este Pod tie
|
|||
|
||||
The output should be similar to:
|
||||
|
||||
```shell
|
||||
```console
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis 1/1 Running 0 13s
|
||||
```
|
||||
|
|
@ -69,7 +69,7 @@ En este ejercicio crearás un Pod que ejecuta un único Contenedor. Este Pod tie
|
|||
|
||||
The output should be similar to:
|
||||
|
||||
```shell
|
||||
```console
|
||||
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
|
||||
redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379
|
||||
root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash
|
||||
|
|
@ -86,7 +86,7 @@ En este ejercicio crearás un Pod que ejecuta un único Contenedor. Este Pod tie
|
|||
|
||||
1. In the original terminal, watch the changes to the Redis Pod. Eventually you will see something like the following:
|
||||
|
||||
```shell
|
||||
```console
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
redis 1/1 Running 0 13s
|
||||
redis 0/1 Completed 0 6m
|
||||
|
|
|
|||
|
|
@ -57,7 +57,7 @@ Existen los siguientes métodos para instalar kubectl en Windows:
|
|||
- Using PowerShell, you can automate the verification with the `-eq` operator to obtain a `True` or `False` result:
|
||||
|
||||
```powershell
|
||||
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
|
||||
$(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256))
|
||||
```
|
||||
|
||||
1. Agregue el binario a su `PATH`.
|
||||
|
|
|
|||
|
|
@@ -50,8 +50,7 @@ brew install bash-completion@2

Como se indica en el resultado de este comando, agregue lo siguiente a su archivo `~/.bash_profile`:

```bash
-export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
-[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
+brew_etc="$(brew --prefix)/etc" && [[ -r "${brew_etc}/profile.d/bash_completion.sh" ]] && . "${brew_etc}/profile.d/bash_completion.sh"
```

Vuelva a cargar su shell y verifique que bash-completion v2 esté instalado correctamente con `type _init_completion`.
@@ -76,7 +76,7 @@ Un [*Deployment*](/docs/concepts/workloads/controllers/deployment/) en Kubernete

1. Ejecuta el comando `kubectl create` para crear un Deployment que maneje un Pod. El Pod ejecuta un contenedor basado en la imagen proveída.

   ```shell
-  kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
+  kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4
   ```

2. Ver el Deployment:
@@ -889,7 +889,7 @@ spec:
      secretName: dotfile-secret
  containers:
    - name: dotfile-test-container
-     image: k8s.gcr.io/busybox
+     image: registry.k8s.io/busybox
      command:
        - ls
        - "-l"
@@ -192,7 +192,7 @@ spec:
    path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
-   image: "k8s.gcr.io/busybox"
+   image: "registry.k8s.io/busybox"
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
@@ -113,7 +113,7 @@ metadata:
  name: test-ebs
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs

@@ -190,7 +190,7 @@ metadata:
  name: test-cinder
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: test-cinder-container
    volumeMounts:
    - mountPath: /test-cinder

@@ -294,7 +294,7 @@ metadata:
  name: test-pd
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache

@@ -369,7 +369,7 @@ metadata:
  name: test-pd
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd

@@ -509,7 +509,7 @@ metadata:
  name: test-pd
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd

@@ -759,7 +759,7 @@ metadata:
  name: test-portworx-volume-pod
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /mnt

@@ -824,7 +824,7 @@ metadata:
  name: pod-0
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: pod-0
    volumeMounts:
    - mountPath: /test-pd

@@ -953,7 +953,7 @@ metadata:
  name: test-vmdk
spec:
  containers:
- - image: k8s.gcr.io/test-webserver
+ - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-vmdk
@@ -78,7 +78,7 @@ spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
-       image: k8s.gcr.io/nginx-slim:0.8
+       image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
@@ -329,7 +329,7 @@ spec:
  containers:
  - args:
    - /server
-   image: k8s.gcr.io/liveness
+   image: registry.k8s.io/liveness
    livenessProbe:
      httpGet:
        # lorsque "host" n'est pas défini, "PodIP" sera utilisé
@@ -48,7 +48,7 @@ cd <fed-base>
hack/update-federation-api-reference-docs.sh
```

-Le script exécute l'image [k8s.gcr.io/gen-swagger-docs](https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/gen-swagger-docs?gcrImageListquery=%255B%255D&gcrImageListpage=%257B%2522t%2522%253A%2522%2522%252C%2522i%2522%253A0%257D&gcrImageListsize=50&gcrImageListsort=%255B%257B%2522p%2522%253A%2522uploaded%2522%252C%2522s%2522%253Afalse%257D%255D) pour générer cet ensemble de documents de référence :
+Le script exécute l'image [registry.k8s.io/gen-swagger-docs](https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/gen-swagger-docs?gcrImageListquery=%255B%255D&gcrImageListpage=%257B%2522t%2522%253A%2522%2522%252C%2522i%2522%253A0%257D&gcrImageListsize=50&gcrImageListsort=%255B%257B%2522p%2522%253A%2522uploaded%2522%252C%2522s%2522%253Afalse%257D%255D) pour générer cet ensemble de documents de référence :

* /docs/api-reference/extensions/v1beta1/operations.html
* /docs/api-reference/extensions/v1beta1/definitions.html
@@ -359,8 +359,8 @@ Exemples utilisant `-o=custom-columns` :

# Toutes les images s'exécutant dans un cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'

-# Toutes les images excepté "k8s.gcr.io/coredns:1.6.2"
-kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'
+# Toutes les images excepté "registry.k8s.io/coredns:1.6.2"
+kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="registry.k8s.io/coredns:1.6.2")].image'

# Tous les champs dans metadata quel que soit leur nom
kubectl get pods -A -o=custom-columns='DATA:metadata.*'
@@ -62,7 +62,7 @@ kubeadm init [flags]
  --feature-gates string               Un ensemble de paires clef=valeur qui décrivent l'entrée de configuration pour des fonctionnalités diverses. Il n'y en a aucune dans cette version.
  -h, --help                           aide pour l'initialisation (init)
  --ignore-preflight-errors strings    Une liste de contrôles dont les erreurs seront catégorisées comme "warnings" (avertissements). Par exemple : 'IsPrivilegedUser,Swap'. La valeur 'all' ignore les erreurs de tous les contrôles.
- --image-repository string            Choisit un container registry d'où télécharger les images du control plane. (par défaut "k8s.gcr.io")
+ --image-repository string            Choisit un container registry d'où télécharger les images du control plane. (par défaut "registry.k8s.io")
  --kubernetes-version string          Choisit une version Kubernetes spécifique pour le control plane. (par défaut "stable-1")
  --node-name string                   Spécifie le nom du noeud.
  --pod-network-cidr string            Spécifie l'intervalle des adresses IP pour le réseau des pods. Si fournie, le control plane allouera automatiquement les CIDRs pour chacun des noeuds.
@@ -131,12 +131,12 @@ Pour de l'information sur comment passer des options aux composants du control p

### Utiliser des images personnalisées {#custom-images}

-Par défaut, kubeadm télécharge les images depuis `k8s.gcr.io`, à moins que la version demandée de Kubernetes soit une version Intégration Continue (CI). Dans ce cas, `gcr.io/k8s-staging-ci-images` est utilisé.
+Par défaut, kubeadm télécharge les images depuis `registry.k8s.io`, à moins que la version demandée de Kubernetes soit une version Intégration Continue (CI). Dans ce cas, `gcr.io/k8s-staging-ci-images` est utilisé.

Vous pouvez outrepasser ce comportement en utilisant [kubeadm avec un fichier de configuration](#config-file).
Les personnalisations permises sont :

-* fournir un `imageRepository` à utiliser à la place de `k8s.gcr.io`.
+* fournir un `imageRepository` à utiliser à la place de `registry.k8s.io`.
* régler `useHyperKubeImage` à `true` pour utiliser l'image HyperKube.
* fournir un `imageRepository` et un `imageTag` pour etcd et l'extension (add-on) DNS.
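À titre d'illustration (toutes les valeurs ci-dessous sont hypothétiques), un fichier de configuration kubeadm redéfinissant le dépôt d'images pourrait ressembler à ceci :

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0                      # version hypothétique
imageRepository: registry.example.com/k8s       # dépôt privé hypothétique
etcd:
  local:
    imageRepository: registry.example.com/etcd  # dépôt hypothétique pour etcd
    imageTag: "3.4.13-0"                        # tag hypothétique
```

Un tel fichier se passe ensuite à `kubeadm init --config <fichier>`.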
@@ -264,7 +264,7 @@ kubeadm config images list
kubeadm config images pull
```

-A partir de Kubernetes 1.12, les images préfixées par `k8s.gcr.io/kube-*`, `k8s.gcr.io/etcd` et `k8s.gcr.io/pause`
+A partir de Kubernetes 1.12, les images préfixées par `registry.k8s.io/kube-*`, `registry.k8s.io/etcd` et `registry.k8s.io/pause`
ne nécessitent pas de suffixe `-${ARCH}`.

### Automatiser kubeadm
@@ -56,7 +56,7 @@ Suivez les étapes ci-dessous pour commencer et explorer Minikube.

Créons un déploiement Kubernetes en utilisant une image existante nommée `echoserver`, qui est un serveur HTTP, et exposez-la sur le port 8080 à l'aide de `--port`.

```shell
-kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
+kubectl create deployment hello-minikube --image=registry.k8s.io/echoserver:1.10
```

Le résultat est similaire à ceci :
@@ -1,4 +0,0 @@
----
-title: On-Premises VMs
-weight: 60
----
@@ -29,7 +29,7 @@ L'interface utilisateur du tableau de bord n'est pas déployée par défaut.
Pour le déployer, exécutez la commande suivante :

```text
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/charts/recommended.yaml
```

## Accès à l'interface utilisateur du tableau de bord
@@ -27,7 +27,7 @@ Cela peut être utilisé dans le cas des liveness checks sur les conteneurs à d

De nombreuses applications fonctionnant pendant de longues périodes finissent par passer à des états défaillants et ne peuvent se rétablir qu'en étant redémarrées. Kubernetes fournit des liveness probes pour détecter et remédier à ces situations.

-Dans cet exercice, vous allez créer un Pod qui exécute un conteneur basé sur l'image `k8s.gcr.io/busybox`. Voici le fichier de configuration pour le Pod :
+Dans cet exercice, vous allez créer un Pod qui exécute un conteneur basé sur l'image `registry.k8s.io/busybox`. Voici le fichier de configuration pour le Pod :

{{< codenew file="pods/probe/exec-liveness.yaml" >}}
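À titre d'esquisse indicative (le fichier d'exemple officiel référencé ci-dessus fait foi), une configuration de liveness probe de type `exec` ressemble typiquement à ceci :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    # Le fichier /tmp/healthy existe pendant 30 s, puis est supprimé
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:        # la probe réussit tant que le fichier existe
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```

Après 30 secondes, `cat /tmp/healthy` échoue, la probe échoue à son tour et le kubelet redémarre le conteneur.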
@@ -61,8 +61,8 @@ La sortie indique qu'aucune liveness probe n'a encore échoué :
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
-23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
-23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
+23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "registry.k8s.io/busybox"
+23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "registry.k8s.io/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
```

@@ -79,8 +79,8 @@ Au bas de la sortie, il y a des messages indiquant que les liveness probes ont
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
-36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
-36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
+36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "registry.k8s.io/busybox"
+36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "registry.k8s.io/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
@@ -102,7 +102,7 @@ liveness-exec 1/1 Running 1 1m
## Définir une requête HTTP de liveness

Un autre type de liveness probe utilise une requête GET HTTP. Voici la configuration
-d'un Pod qui fait fonctionner un conteneur basé sur l'image `k8s.gcr.io/liveness`.
+d'un Pod qui fait fonctionner un conteneur basé sur l'image `registry.k8s.io/liveness`.

{{< codenew file="pods/probe/http-liveness.yaml" >}}
@@ -45,7 +45,7 @@ Voici le fichier de configuration du Pod :

La sortie ressemble à ceci :

-```shell
+```console
NAME    READY   STATUS    RESTARTS   AGE
redis   1/1     Running   0          13s
```

@@ -73,7 +73,7 @@ Voici le fichier de configuration du Pod :

La sortie ressemble à ceci :

-```shell
+```console
USER   PID   %CPU   %MEM   VSZ     RSS    TTY   STAT   START   TIME   COMMAND
redis  1     0.1    0.1    33308   3828   ?     Ssl    00:46   0:00   redis-server *:6379
root   12    0.0    0.0    20228   3020   ?     Ss     00:47   0:00   /bin/bash

@@ -91,7 +91,7 @@ Voici le fichier de configuration du Pod :
1. Dans votre terminal initial, surveillez les changements apportés au Pod de Redis. Vous finirez par voir quelque chose comme ceci :

-```shell
+```console
NAME    READY   STATUS      RESTARTS   AGE
redis   1/1     Running     0          13s
redis   0/1     Completed   0          6m
@@ -100,7 +100,7 @@ En quelques étapes, nous vous emmenons de Docker Compose à Kubernetes. Tous do
services:

  redis-master:
-   image: k8s.gcr.io/redis:e2e
+   image: registry.k8s.io/redis:e2e
    ports:
    - "6379"
@@ -0,0 +1,112 @@
---
title: Définir des variables d'environnement pour un Container
content_type: task
weight: 20
---

<!-- overview -->

Cette page montre comment définir des variables d'environnement pour un
container au sein d'un Pod Kubernetes.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->

## Définir une variable d'environnement pour un container

Lorsque vous créez un Pod, vous pouvez définir des variables d'environnement
pour les containers qui seront exécutés au sein du Pod.
Pour les définir, utilisez le champ `env` ou `envFrom`
dans le fichier de configuration.

Dans cet exercice, vous allez créer un Pod qui exécute un container. Le fichier de configuration pour ce Pod contient une variable d'environnement s'appelant `DEMO_GREETING` et sa valeur est `"Hello from the environment"`. Voici le fichier de configuration du Pod :

{{< codenew file="pods/inject/envars.yaml" >}}

1. Créez un Pod à partir de ce fichier :

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml
   ```

1. Listez les Pods :

   ```shell
   kubectl get pods -l purpose=demonstrate-envars
   ```

   Le résultat sera similaire à celui-ci :

   ```
   NAME         READY   STATUS    RESTARTS   AGE
   envar-demo   1/1     Running   0          9s
   ```

1. Listez les variables d'environnement au sein du container :

   ```shell
   kubectl exec envar-demo -- printenv
   ```

   Le résultat sera similaire à celui-ci :

   ```
   NODE_VERSION=4.4.2
   EXAMPLE_SERVICE_PORT_8080_TCP_ADDR=10.3.245.237
   HOSTNAME=envar-demo
   ...
   DEMO_GREETING=Hello from the environment
   DEMO_FAREWELL=Such a sweet sorrow
   ```

{{< note >}}
Les variables d'environnement définies dans les champs `env` ou `envFrom`
écraseront les variables définies dans l'image utilisée par le container.
{{< /note >}}

{{< note >}}
Une variable d'environnement peut faire référence à une autre variable,
cependant l'ordre de déclaration est important. Une variable faisant référence
à une autre doit être déclarée après la variable référencée.
De plus, il est recommandé d'éviter les références circulaires.
{{< /note >}}

## Utilisez des variables d'environnement dans la configuration

Les variables d'environnement que vous définissez dans la configuration d'un Pod peuvent être utilisées à d'autres endroits de la configuration, comme par exemple dans les commandes et arguments pour les containers.
Dans l'exemple ci-dessous, les variables d'environnement `GREETING`, `HONORIFIC`, et
`NAME` ont des valeurs respectives de `Warm greetings to`, `The Most
Honorable`, et `Kubernetes`. Ces variables sont ensuite utilisées comme arguments
pour le container `env-print-demo`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: GREETING
      value: "Warm greetings to"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(GREETING) $(HONORIFIC) $(NAME)"]
```

Une fois le Pod créé, la commande `echo Warm greetings to The Most Honorable Kubernetes` sera exécutée dans le container.

## {{% heading "whatsnext" %}}

* En savoir plus sur les [variables d'environnement](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
* Apprendre à [utiliser des secrets comme variables d'environnement](/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
* Voir la documentation de référence pour [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core).
@@ -0,0 +1,83 @@
---
title: Définir des variables d'environnement dépendantes
content_type: task
weight: 20
---

<!-- overview -->

Cette page montre comment définir des variables d'environnement
interdépendantes pour un container dans un Pod Kubernetes.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->

## Définir une variable d'environnement dépendante pour un container

Lorsque vous créez un Pod, vous pouvez configurer des variables d'environnement interdépendantes pour les containers exécutés dans un Pod.
Pour définir une variable d'environnement dépendante, vous pouvez utiliser le format `$(VAR_NAME)` dans le champ `value` de la spécification `env` dans le fichier de configuration.

Dans cet exercice, vous allez créer un Pod qui exécute un container. Le fichier de configuration de ce Pod définit des variables d'environnement interdépendantes avec une réutilisation entre les différentes variables. Voici le fichier de configuration de ce Pod :

{{< codenew file="pods/inject/dependent-envars.yaml" >}}

1. Créez un Pod en utilisant ce fichier de configuration :

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/dependent-envars.yaml
   ```
   ```
   pod/dependent-envars-demo created
   ```

2. Listez le Pod :

   ```shell
   kubectl get pods dependent-envars-demo
   ```
   ```
   NAME                    READY   STATUS    RESTARTS   AGE
   dependent-envars-demo   1/1     Running   0          9s
   ```

3. Affichez les logs du container exécuté dans votre Pod :

   ```shell
   kubectl logs pod/dependent-envars-demo
   ```
   ```
   UNCHANGED_REFERENCE=$(PROTOCOL)://172.17.0.1:80
   SERVICE_ADDRESS=https://172.17.0.1:80
   ESCAPED_REFERENCE=$(PROTOCOL)://172.17.0.1:80
   ```

Comme montré ci-dessus, vous avez défini une dépendance correcte pour `SERVICE_ADDRESS`, une dépendance manquante pour `UNCHANGED_REFERENCE`, et avez ignoré la dépendance pour `ESCAPED_REFERENCE`.

Lorsqu'une variable d'environnement est déjà définie alors
qu'elle est référencée par une autre variable, la référence s'effectue
correctement, comme dans l'exemple de `SERVICE_ADDRESS`.

Il est important de noter que l'ordre dans la liste `env` est important.
Une variable d'environnement ne sera pas considérée comme « définie »
si elle est spécifiée plus bas dans la liste. C'est pourquoi
`UNCHANGED_REFERENCE` ne résout pas correctement `$(PROTOCOL)` dans l'exemple précédent.

Lorsque la variable d'environnement n'est pas définie, ou n'inclut qu'une partie des variables, la variable non définie sera traitée comme une chaîne de caractères, par exemple `UNCHANGED_REFERENCE`. Notez que les variables d'environnement malformées n'empêcheront généralement pas le démarrage du conteneur.

La syntaxe `$(VAR_NAME)` peut être échappée avec un double `$`, par exemple `$$(VAR_NAME)`.
Les références échappées ne sont jamais développées, que la variable référencée
soit définie ou non. C'est le cas pour l'exemple `ESCAPED_REFERENCE` ci-dessus.
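Pour illustrer l'importance de l'ordre de déclaration (esquisse indicative, valeurs hypothétiques), une liste `env` pourrait ressembler à ceci :

```yaml
env:
- name: PROTOCOL              # défini en premier : les variables suivantes peuvent le référencer
  value: "https"
- name: SERVICE_ADDRESS       # référence résolue : PROTOCOL est déjà défini plus haut
  value: "$(PROTOCOL)://172.17.0.1:80"
- name: ESCAPED_REFERENCE     # $$ échappe la référence : la valeur reste littéralement $(PROTOCOL)://...
  value: "$$(PROTOCOL)://172.17.0.1:80"
```

Si `PROTOCOL` était déclaré après `SERVICE_ADDRESS`, la référence resterait la chaîne littérale `$(PROTOCOL)`.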
## {{% heading "whatsnext" %}}

* En savoir plus sur les [variables d'environnement](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
* Lire la documentation pour [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core).
@@ -0,0 +1,355 @@
---
title: Distribuer des données sensibles de manière sécurisée avec les Secrets
content_type: task
weight: 50
min-kubernetes-server-version: v1.6
---

<!-- overview -->

Cette page montre comment injecter des données sensibles comme des mots de passe ou des clés de chiffrement dans des Pods.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

### Encoder vos données au format base64

Supposons que vous avez deux données sensibles : un identifiant `my-app` et un
mot de passe
`39528$vdg7Jb`. Premièrement, utilisez un outil capable d'encoder vos données
au format base64. Voici un exemple utilisant le programme base64 :

```shell
echo -n 'my-app' | base64
echo -n '39528$vdg7Jb' | base64
```

Le résultat montre que la représentation base64 de l'identifiant est `bXktYXBw`,
et que la représentation base64 du mot de passe est `Mzk1MjgkdmRnN0pi`.

{{< caution >}}
Utilisez un outil local approuvé par votre système d'exploitation
afin de réduire les risques de sécurité liés à l'utilisation d'un outil externe.
{{< /caution >}}
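Pour vérifier un encodage, on peut faire l'aller-retour avec `base64 --decode` (esquisse ; le programme `base64` de GNU coreutils est supposé disponible) :

```shell
# Encode la valeur, puis la décode pour vérifier l'aller-retour
encoded="$(printf '%s' 'my-app' | base64)"
echo "$encoded"                            # bXktYXBw
printf '%s' "$encoded" | base64 --decode   # my-app
```

Notez le `printf '%s'` (ou `echo -n`) : sans lui, le saut de ligne final serait encodé lui aussi et la valeur stockée dans le Secret serait incorrecte.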
<!-- steps -->

## Créer un Secret

Voici un fichier de configuration que vous pouvez utiliser pour créer un Secret
qui contiendra votre identifiant et votre mot de passe :

{{< codenew file="pods/inject/secret.yaml" >}}

1. Créez le Secret :

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml
   ```

1. Listez les informations du Secret :

   ```shell
   kubectl get secret test-secret
   ```

   Résultat :

   ```
   NAME          TYPE     DATA   AGE
   test-secret   Opaque   2      1m
   ```

1. Affichez les informations détaillées du Secret :

   ```shell
   kubectl describe secret test-secret
   ```

   Résultat :

   ```
   Name:         test-secret
   Namespace:    default
   Labels:       <none>
   Annotations:  <none>

   Type: Opaque

   Data
   ====
   password: 13 bytes
   username: 7 bytes
   ```

### Créer un Secret en utilisant kubectl

Si vous voulez sauter l'étape d'encodage, vous pouvez créer le même Secret
en utilisant la commande `kubectl create secret`. Par exemple :

```shell
kubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'
```

Cette approche est plus pratique. La façon de faire plus explicite
montrée précédemment permet de démontrer et de comprendre le fonctionnement des Secrets.

## Créer un Pod qui a accès aux données sensibles à travers un Volume

Voici un fichier de configuration qui permet de créer un Pod :

{{< codenew file="pods/inject/secret-pod.yaml" >}}

1. Créez le Pod :

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml
   ```

1. Vérifiez que le Pod est opérationnel :

   ```shell
   kubectl get pod secret-test-pod
   ```

   Résultat :
   ```
   NAME              READY   STATUS    RESTARTS   AGE
   secret-test-pod   1/1     Running   0          42m
   ```

1. Exécutez une session shell dans le Container qui est dans votre Pod :
   ```shell
   kubectl exec -i -t secret-test-pod -- /bin/bash
   ```

1. Les données sont exposées au container à travers un Volume monté sur
   `/etc/secret-volume`.

   Dans votre shell, listez les fichiers du dossier `/etc/secret-volume` :
   ```shell
   # À exécuter à l'intérieur du container
   ls /etc/secret-volume
   ```
   Le résultat montre deux fichiers, un pour chaque donnée du Secret :
   ```
   password username
   ```

1. Toujours dans le shell, affichez le contenu des fichiers
   `username` et `password` :
   ```shell
   # À exécuter à l'intérieur du container
   echo "$( cat /etc/secret-volume/username )"
   echo "$( cat /etc/secret-volume/password )"
   ```
   Le résultat doit contenir votre identifiant et votre mot de passe :
   ```
   my-app
   39528$vdg7Jb
   ```

Vous pouvez alors modifier votre image ou votre ligne de commande pour que le programme
recherche les fichiers contenus dans le dossier du champ `mountPath`.
Chaque clé du champ `data` du Secret sera exposée comme un fichier à l'intérieur de ce dossier.

### Monter les données du Secret sur des chemins spécifiques

Vous pouvez contrôler les chemins sur lesquels les données des Secrets sont montées.
Utilisez le champ `.spec.volumes[].secret.items` pour changer le
chemin cible de chaque donnée :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      items:
      - key: username
        path: my-group/my-username
```

Voici ce qu'il se passe lorsque vous déployez ce Pod :

* La clé `username` du Secret `mysecret` est montée dans le container sur le chemin
  `/etc/foo/my-group/my-username` au lieu de `/etc/foo/username`.
* La clé `password` du Secret n'est pas montée dans le container.

Si vous listez de manière explicite les clés en utilisant le champ `.spec.volumes[].secret.items`,
il est important de prendre en considération les points suivants :

* Seules les clés listées dans le champ `items` seront montées.
* Pour monter toutes les clés du Secret, toutes doivent être
  définies dans le champ `items`.
* Toutes les clés définies doivent exister dans le Secret.
  Sinon, le volume ne sera pas créé.

### Appliquer des permissions POSIX aux données

Vous pouvez appliquer des permissions POSIX à une clé d'un Secret. Si vous n'en configurez pas, les permissions seront par défaut `0644`.
Vous pouvez aussi définir des permissions pour tout un Secret, et redéfinir les permissions de chaque clé si nécessaire.

Par exemple, il est possible de définir un mode par défaut :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      defaultMode: 0400
```

Le Secret sera monté sur `/etc/foo` ; tous les fichiers créés par le Secret
auront des permissions de type `0400`.

{{< note >}}
Si vous définissez un Pod en utilisant le format JSON, il est important de
noter que la spécification JSON ne supporte pas le système octal, et qu'elle
comprendra la valeur `0400` comme la valeur _décimale_ `400`.
En JSON, utilisez plutôt l'écriture décimale pour le champ `defaultMode`.
Si vous utilisez le format YAML, vous pouvez utiliser le système octal
pour définir `defaultMode`.
{{< /note >}}
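À titre d'illustration, l'équivalent décimal de `0400` (octal) est `256` (4 × 64) ; en JSON, le même volume s'écrirait donc :

```json
{
  "volumes": [
    {
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "defaultMode": 256
      }
    }
  ]
}
```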
|
||||
## Définir des variables d'environnement avec des Secrets
|
||||
|
||||
Il est possible de monter les données des Secrets comme variables d'environnement dans vos containers.
|
||||
|
||||
Si un container consomme déja un Secret en variables d'environnement,
|
||||
la mise à jour de ce Secret ne sera pas répercutée dans le container tant
|
||||
qu'il n'aura pas été redémarré. Il existe cependant des solutions tierces
|
||||
permettant de redémarrer les containers lors d'une mise à jour du Secret.
|
||||
|
||||
### Define a container environment variable with data from a single Secret

* Define an environment variable and its value inside a Secret:

  ```shell
  kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
  ```

* Assign the `backend-username` value defined in the Secret to the
  `SECRET_USERNAME` environment variable in the Pod specification.

  {{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}}

* Create the Pod:

  ```shell
  kubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml
  ```

* In a shell session, display the content of the `SECRET_USERNAME` container
  environment variable:

  ```shell
  kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'
  ```

  The output is:

  ```
  backend-admin
  ```
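Under the hood, `kubectl create secret` base64-encodes each `--from-literal` value before storing it in the Secret's `data` field, and the value is decoded again before being injected into the container. A quick sketch of that round trip in plain Python (for illustration only):

```python
import base64

# kubectl stores --from-literal values base64-encoded under .data;
# the decoded value is what the container sees in SECRET_USERNAME.
stored = base64.b64encode(b"backend-admin").decode()
assert stored == "YmFja2VuZC1hZG1pbg=="
assert base64.b64decode(stored).decode() == "backend-admin"
```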

### Define container environment variables with data from multiple Secrets

* As before, create the Secrets first:

  ```shell
  kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
  kubectl create secret generic db-user --from-literal=db-username='db-admin'
  ```

* Define the environment variables in the Pod specification.

  {{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}}

* Create the Pod:

  ```shell
  kubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml
  ```

* In a shell, list the container environment variables:

  ```shell
  kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'
  ```

  The output is:

  ```
  DB_USERNAME=db-admin
  BACKEND_USERNAME=backend-admin
  ```
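Each `env` entry pairs a variable name with a `secretKeyRef` pointing at one key of one Secret. The lookup this implies can be sketched with in-memory stand-ins (hypothetical data structures, not the real Kubernetes API):

```python
# Hypothetical in-memory stand-ins for the two Secrets created above
# (values shown already decoded, for clarity).
secrets = {
    "backend-user": {"backend-username": "backend-admin"},
    "db-user": {"db-username": "db-admin"},
}

# env entries as written in the Pod spec: a variable name plus a
# secretKeyRef naming the Secret and the key within it.
env_spec = [
    {"name": "BACKEND_USERNAME",
     "secretKeyRef": {"name": "backend-user", "key": "backend-username"}},
    {"name": "DB_USERNAME",
     "secretKeyRef": {"name": "db-user", "key": "db-username"}},
]

# Resolve each reference to build the container's environment.
env = {
    entry["name"]: secrets[entry["secretKeyRef"]["name"]][entry["secretKeyRef"]["key"]]
    for entry in env_spec
}
assert env == {"BACKEND_USERNAME": "backend-admin", "DB_USERNAME": "db-admin"}
```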

## Configure all key-value pairs in a Secret as container environment variables

{{< note >}}
This functionality is available in Kubernetes v1.6 and later.
{{< /note >}}

* Create a Secret containing multiple key-value pairs:

  ```shell
  kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
  ```

* Use `envFrom` to define all of the Secret's data as container environment
  variables. The keys from the Secret become the environment variable names
  in the Pod.

  {{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}

* Create the Pod:

  ```shell
  kubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml
  ```

* In your shell, display the `username` and `password` container environment
  variables:

  ```shell
  kubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
  ```

  The output is:

  ```
  username: my-app
  password: 39528$vdg7Jb
  ```
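`envFrom` differs from per-key `secretKeyRef` entries: every key in the Secret is injected, with the key itself used as the variable name. A minimal sketch of that merge in plain Python (illustrative only; the `data` values are base64-encoded, as stored in the Secret):

```python
import base64

# The Secret's .data field as stored (base64-encoded values).
data = {"username": "bXktYXBw", "password": "Mzk1MjgkdmRnN0pi"}

# envFrom: decode every key-value pair and inject it as-is;
# the Secret keys become the environment variable names.
env = {key: base64.b64decode(value).decode() for key, value in data.items()}
assert env == {"username": "my-app", "password": "39528$vdg7Jb"}
```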

### References

* [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
* [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)
* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)

## {{% heading "whatsnext" %}}

* Learn more about [Secrets](/docs/concepts/configuration/secret/).
* Learn more about [Volumes](/docs/concepts/storage/volumes/).

@@ -1,4 +0,0 @@
----
-title: Service Catalog installation
-weight: 150
----
@@ -78,7 +78,7 @@ Deployments are the recommended way to manage the creation and scaling
    Pod uses a container based on the provided Docker image.
 
    ```shell
-   kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
+   kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.4
    ```
 
 2. View the Deployment:
@@ -41,9 +41,9 @@ spec:
       serviceAccountName: cloud-controller-manager
       containers:
       - name: cloud-controller-manager
-        # for in-tree providers we use k8s.gcr.io/cloud-controller-manager
+        # for in-tree providers we use registry.k8s.io/cloud-controller-manager
         # this can be replaced with any other image for out-of-tree providers
-        image: k8s.gcr.io/cloud-controller-manager:v1.8.0
+        image: registry.k8s.io/cloud-controller-manager:v1.8.0
         command:
         - /usr/local/bin/cloud-controller-manager
         - --cloud-provider=<YOUR_CLOUD_PROVIDER>  # Add your own cloud provider here!
@@ -22,7 +22,7 @@ spec:
     - name: varlog
       mountPath: /var/log
   - name: count-agent
-    image: k8s.gcr.io/fluentd-gcp:1.30
+    image: registry.k8s.io/fluentd-gcp:1.30
     env:
     - name: FLUENTD_ARGS
       value: -c /etc/fluentd-config/fluentd.conf
@@ -20,7 +20,7 @@ spec:
     spec:
       containers:
       - name: master
-        image: k8s.gcr.io/redis:e2e  # or just image: redis
+        image: registry.k8s.io/redis:e2e  # or just image: redis
         resources:
           requests:
             cpu: 100m
@@ -0,0 +1,26 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dependent-envars-demo
+spec:
+  containers:
+    - name: dependent-envars-demo
+      args:
+        - while true; do echo -en '\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\n'; sleep 30; done;
+      command:
+        - sh
+        - -c
+      image: busybox:1.28
+      env:
+        - name: SERVICE_PORT
+          value: "80"
+        - name: SERVICE_IP
+          value: "172.17.0.1"
+        - name: UNCHANGED_REFERENCE
+          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
+        - name: PROTOCOL
+          value: "https"
+        - name: SERVICE_ADDRESS
+          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
+        - name: ESCAPED_REFERENCE
+          value: "$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: envar-demo
+  labels:
+    purpose: demonstrate-envars
+spec:
+  containers:
+  - name: envar-demo-container
+    image: gcr.io/google-samples/node-hello:1.0
+    env:
+    - name: DEMO_GREETING
+      value: "Hello from the environment"
+    - name: DEMO_FAREWELL
+      value: "Such a sweet sorrow"
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: envvars-multiple-secrets
+spec:
+  containers:
+  - name: envars-test-container
+    image: nginx
+    env:
+    - name: BACKEND_USERNAME
+      valueFrom:
+        secretKeyRef:
+          name: backend-user
+          key: backend-username
+    - name: DB_USERNAME
+      valueFrom:
+        secretKeyRef:
+          name: db-user
+          key: db-username
@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: envfrom-secret
+spec:
+  containers:
+  - name: envars-test-container
+    image: nginx
+    envFrom:
+    - secretRef:
+        name: test-secret
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: env-single-secret
+spec:
+  containers:
+  - name: envars-test-container
+    image: nginx
+    env:
+    - name: SECRET_USERNAME
+      valueFrom:
+        secretKeyRef:
+          name: backend-user
+          key: backend-username
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-envars-test-pod
+spec:
+  containers:
+  - name: envars-test-container
+    image: nginx
+    env:
+    - name: SECRET_USERNAME
+      valueFrom:
+        secretKeyRef:
+          name: test-secret
+          key: username
+    - name: SECRET_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: test-secret
+          key: password
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-test-pod
+spec:
+  containers:
+  - name: test-container
+    image: nginx
+    volumeMounts:
+    # name must match the volume name below
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+      readOnly: true
+  # The secret data is exposed to Containers in the Pod through a Volume.
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: test-secret
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: test-secret
+data:
+  username: bXktYXBw
+  password: Mzk1MjgkdmRnN0pi
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
     - name: test-container
-      image: k8s.gcr.io/busybox
+      image: registry.k8s.io/busybox
       command: [ "/bin/echo", "$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
       env:
         - name: SPECIAL_LEVEL_KEY
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
     - name: test-container
-      image: k8s.gcr.io/busybox
+      image: registry.k8s.io/busybox
       command: [ "/bin/sh", "-c", "env" ]
       envFrom:
       - configMapRef:
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
     - name: test-container
-      image: k8s.gcr.io/busybox
+      image: registry.k8s.io/busybox
       command: [ "/bin/sh","-c","cat /etc/config/keys" ]
       volumeMounts:
       - name: config-volume
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
     - name: test-container
-      image: k8s.gcr.io/busybox
+      image: registry.k8s.io/busybox
       command: [ "/bin/sh", "-c", "ls /etc/config/" ]
       volumeMounts:
       - name: config-volume
@@ -5,7 +5,7 @@ metadata:
 spec:
   containers:
     - name: test-container
-      image: k8s.gcr.io/busybox
+      image: registry.k8s.io/busybox
       command: [ "/bin/sh", "-c", "env" ]
       env:
         - name: SPECIAL_LEVEL_KEY