Merge remote-tracking branch 'upstream/master' into dev-1.19

---

{{% capture overview %}}

You build your Docker image and push it to a registry before it can be referenced in a Kubernetes pod.

The `image` property of a container supports the same syntax as the `docker` command does, including private registries and tags.

## Updating images

The default pull policy is `IfNotPresent`, which causes the kubelet to skip pulling an image if it already exists on a node.

If you would like to force an image to always be pulled, you can do one of the following:

- Set the `imagePullPolicy` of the container to `Always`.
- Omit the `imagePullPolicy` and the tag of the image.
- Enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
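
For example, a minimal Pod that forces a pull on every container start could look like this sketch; the image name `registry.example.com/app:1.2.3` is just a placeholder:

```shell
# Create a pod whose container image is pulled on every container start
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: always-pull-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.2.3
    imagePullPolicy: Always
EOF
```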

Note that you should avoid using the `:latest` tag; for more information, see [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images).

## Building multi-architecture images with manifests

The Docker command line interface supports the `docker manifest` command with subcommands like `create`, `annotate` and `push`. These commands can be used to build and push multi-architecture images. For more information, see:

https://docs.docker.com/edge/engine/reference/commandline/manifest/

Here are some examples of how we use this in our build process:

https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=

These commands rely on, and are executed purely with, the Docker CLI. You should either edit `$HOME/.docker/config.json` and set the `experimental` key to `enabled`, or simply set the environment variable `DOCKER_CLI_EXPERIMENTAL` to `enabled` when you invoke the Docker CLI.
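
As an illustration, building and pushing a manifest list for two architectures might look like the following sketch; the registry and image names are placeholders:

```shell
# Enable the experimental CLI features for this shell session
export DOCKER_CLI_EXPERIMENTAL=enabled

# Combine per-architecture images into one manifest list
docker manifest create registry.example.com/pause:3.2 \
  registry.example.com/pause-amd64:3.2 \
  registry.example.com/pause-arm64:3.2

# Record the correct platform for the arm64 entry
docker manifest annotate registry.example.com/pause:3.2 \
  registry.example.com/pause-arm64:3.2 --os linux --arch arm64

# Push the manifest list to the registry
docker manifest push registry.example.com/pause:3.2
```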

{{< note >}}
Please use Docker *18.06 or newer*; older versions have either bugs or do not support the experimental command line option. Example: https://github.com/docker/cli/issues/1135 causes problems under containerd.
{{< /note >}}

If you run into trouble with old manifests, you can remove the old manifests in `$HOME/.docker/manifests` to start from scratch.

For Kubernetes itself, we have typically used images with the suffix `-$(ARCH)`. To preserve backwards compatibility, please generate the older images with these suffixes. The idea is, for instance, to generate the `pause` image with a manifest for all architectures; `pause-amd64` would then remain backwards compatible with older configurations, or with YAML files that have hard-coded an image with a suffix.

## Using a private registry

Private registries may require keys to read images from them. Credentials can be provided in several ways:

- Configuring nodes to authenticate to a private registry
  - Requires node configuration by a cluster administrator
- Pre-pulled images
  - All pods can use any image cached on a node
  - Requires root access to all nodes to set up
- Specifying ImagePullSecrets on a Pod
  - Only pods which provide their own secret have access to the private registry

Each option is described in more detail below.

### Using Google Container Registry

Kubernetes has native support for the [Google Container Registry (GCR)](https://cloud.google.com/tools/container-registry/) when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. `gcr.io/my_project/image:tag`).

All pods in a cluster will then have read access to images in this registry.

The kubelet authenticates to GCR using the instance's Google service account. The instance's Google service account has the `https://www.googleapis.com/auth/devstorage.read_only` scope, so it can pull from the project's GCR, but not push.

### Using Amazon Elastic Container Registry

Kubernetes has native support for the [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) when nodes are AWS EC2 instances.

Simply use the full image name (e.g. `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`) in the Pod definition.

All users of the cluster who can create pods will then be able to run pods that use any of the images in the ECR registry.

The kubelet will fetch and periodically refresh ECR credentials; it needs the following permissions to do so:

- `ecr:GetAuthorizationToken`
- `ecr:BatchCheckLayerAvailability`
- `ecr:GetDownloadUrlForLayer`
- `ecr:GetRepositoryPolicy`
- `ecr:DescribeRepositories`
- `ecr:ListImages`
- `ecr:BatchGetImage`

Troubleshooting:

- `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`

### Using Azure Container Registry (ACR)

When using the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/), you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the [azure-cli](https://github.com/azure/azure-cli) command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in the [Azure container registry documentation](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli).

Once you have created your container registry, use the following credentials to log in:

* `DOCKER_USER`: service principal or admin username
* `DOCKER_PASSWORD`: service principal password or admin password
* `DOCKER_EMAIL`: `${some-email-address}`

Once you have filled in those variables, you can [configure a Kubernetes Secret and use it to deploy a Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).

### Using IBM Cloud Container Registry

IBM Cloud Container Registry provides a multi-tenant private image registry that you can use to safely store and share your Docker images. By default, images in your private registry are scanned by the integrated vulnerability scanner to find security issues and potential vulnerabilities. Users can use their IBM Cloud account to get access to their images, or can generate a token that grants access to the registry namespaces.

To install the IBM Cloud Container Registry command line tool and create a namespace for your images, follow the documentation in [Getting started with IBM Cloud Container Registry](https://cloud.ibm.com/docs/services/Registry?topic=registry-getting-started).

You can use the IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images) and your own images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into another namespace, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://cloud.ibm.com/docs/containers?topic=containers-images).
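
For example, such an `imagePullSecret` could be created with `kubectl`. The `us.icr.io` registry host and the `iamapikey` username follow IBM Cloud's documented conventions; the secret name and the bracketed values here are placeholders, so check the linked documentation for your region:

```shell
# Create a pull secret for the IBM Cloud Container Registry (values are placeholders)
kubectl create secret docker-registry icr-io-secret \
  --docker-server=us.icr.io \
  --docker-username=iamapikey \
  --docker-password=<your-ibm-cloud-api-key> \
  --docker-email=<your-email>
```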

### Configuring nodes to authenticate to a private registry

{{< note >}}
If you run on Google Kubernetes Engine, there will already be a `.dockercfg` on each node with credentials for your Google Container Registry. The following approach cannot be used in that case.
{{< /note >}}

{{< note >}}
If you run on AWS EC2 and use the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. The following approach cannot be used in that case.
{{< /note >}}

{{< note >}}
This approach is suitable if you can control node configuration; it will not work reliably on GCE or any other cloud provider that does automatic node replacement.
{{< /note >}}

{{< note >}}
Kubernetes currently only supports the `auths` and `HttpHeaders` sections of the Docker configuration. This means that credential helpers (`credHelpers` or `credsStore`) are not supported.
{{< /note >}}

Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put the same file in one of the search paths listed below, the kubelet uses it as the credential provider when pulling images.

* `{--root-dir:-/var/lib/kubelet}/config.json`
* `/.dockercfg`

{{< note >}}
You may have to set `HOME=/root` in your environment file.
{{< /note >}}

Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:

1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json`.
2. View `$HOME/.docker/config.json` in an editor to ensure it contains just the credentials you want to use.
3. Get a list of your nodes:
   - If you want the names: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
   - If you want the IP addresses: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
4. Copy your local `.docker/config.json` to one of the search paths listed above.
   - For example: `for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done`

Verify by creating a pod that uses a private image, e.g.:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: user/privaterepo:v1
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
pod/private-image-test-1 created
```

If everything is working, then, after a few moments, you should see:

```shell
kubectl logs private-image-test-1
SUCCESS
```

If you suspect that the command failed, you can run:

```shell
kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```

You must ensure all nodes in the cluster have the same `.docker/config.json`; otherwise, pods will start on some nodes and fail to start on others. For example, if you use node autoscaling, then each instance template needs to include the `.docker/config.json`, or mount a drive that contains it.

All pods will have read access to images in any private registry once private registry keys are added to the `.docker/config.json`.

### Pre-pulled images

{{< note >}}
If you run on Google Kubernetes Engine, there will already be a `.dockercfg` on each node with credentials for your Google Container Registry. The following approach cannot be used in that case.
{{< /note >}}

{{< note >}}
This approach is suitable if you can control node configuration; it will not work reliably on GCE or any other cloud provider that does automatic node replacement.
{{< /note >}}

By default, the kubelet will try to pull each image from the specified registry. However, if the `imagePullPolicy` property of the container is set to `IfNotPresent` or `Never`, then a local image is used (preferentially or exclusively, respectively).

If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.

This can be used to preload certain images for speed, or as an alternative to authenticating to a private registry.

All pods will have read access to any pre-pulled images.

### Specifying imagePullSecrets on a Pod

Kubernetes supports specifying registry keys on a Pod.

#### Creating a Secret with a Docker config

Run the following command, substituting the appropriate uppercase values:

```shell
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
```

If you already have a Docker credentials file, you can import the credentials as a Kubernetes Secret: [Create a Secret based on existing Docker credentials](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) explains how to set this up. This is particularly useful if you are using multiple private container registries, as `kubectl create secret docker-registry` creates a Secret that only works with a single private registry.
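
For example, importing an existing `config.json` as a Secret of type `kubernetes.io/dockerconfigjson` can look like this; the secret name `regcred` is just an example:

```shell
# Import existing Docker credentials as a Kubernetes secret
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
```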

{{< note >}}
Pods can only reference image pull secrets in their own namespace, so this process needs to be done once per namespace.
{{< /note >}}

#### Referencing an imagePullSecrets on a Pod

Now, you can create pods which reference that secret by adding an `imagePullSecrets` section to a Pod definition.

```shell
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
EOF

cat <<EOF >> ./kustomization.yaml
resources:
- pod.yaml
EOF
```

This needs to be done for each pod that is using a private registry.

However, the setting of this field can be automated by adding the imagePullSecrets to a [serviceAccount](/docs/user-guide/service-accounts) resource. Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for detailed instructions.
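
As a short sketch, attaching the secret from the example above to the `default` service account of the current namespace can be done with a patch:

```shell
# Every new pod using this service account will get the imagePullSecrets automatically
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
```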

You can use this in conjunction with a per-node `.docker/config.json`; the credentials will be merged. This approach will work on Google Kubernetes Engine.

---
layout: blog
title: "WSL+Docker: Kubernetes on the Windows Desktop"
date: 2020-05-21
slug: wsl-docker-kubernetes-on-the-windows-desktop
---

**Authors**: [Nuno do Carmo](https://twitter.com/nunixtech), Docker Captain and WSL Corsair; [Ihor Dvoretskyi](https://twitter.com/idvoretskyi), Developer Advocate, Cloud Native Computing Foundation

# Introduction

New to Windows 10 and WSL2, or new to Docker and Kubernetes? Welcome to this blog post, where we will install Kubernetes from scratch with Kubernetes in Docker ([KinD](https://kind.sigs.k8s.io/)) and [Minikube](https://minikube.sigs.k8s.io/docs/).

# Why Kubernetes on Windows?

Over the last few years, Kubernetes has become a de-facto standard platform for running containerized services and applications in distributed environments. While a wide variety of distributions and installers exist to deploy Kubernetes in cloud environments (public, private or hybrid) or on bare metal, there is still a need to deploy and run Kubernetes locally, for example, on the developer's workstation.

Kubernetes was originally designed to be deployed and used in Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - [the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/), the line between Windows and Linux environments became even less visible.

Also, WSL brought the ability to run Kubernetes on Windows almost seamlessly!

Below, we will cover in brief how to install and use various solutions to run Kubernetes locally.

# Prerequisites

Since we will explain how to install KinD, we won't go into too much detail around the installation of KinD's dependencies.

However, here is the list of the prerequisites needed and their version/lane:

- OS: Windows 10 version 2004, Build 19041
- [WSL2 enabled](https://docs.microsoft.com/en-us/windows/wsl/wsl2-install)
  - In order to install the distros as WSL2 by default, once WSL2 is installed, run the command `wsl.exe --set-default-version 2` in Powershell
- WSL2 distro installed from the Windows Store - the distro used is Ubuntu-18.04
- [Docker Desktop for Windows](https://hub.docker.com/editions/community/docker-ce-desktop-windows), stable channel - the version used is 2.2.0.4
- [Optional] Microsoft Terminal installed from the Windows Store
  - Open the Windows store and type "Terminal" in the search; it will normally be the first option



And that's actually it. For Docker Desktop for Windows, no need to configure anything yet, as we will explain it in the next section.

# WSL2: First contact

Once everything is installed, we can launch the WSL2 terminal from the Start menu, and type "Ubuntu" to search the applications and documents:



Once found, click on the name and it will launch the default Windows console with the Ubuntu bash shell running.

As with any normal Linux distro, you need to create a user and set a password:



## [Optional] Update the `sudoers`

As we are working, normally, on our local computer, it might be nice to update the `sudoers` and set the group `%sudo` to be password-less:

```bash
# Edit the sudoers with the visudo command
sudo visudo

# Change the %sudo group to be password-less
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL

# Press CTRL+X to exit
# Press Y to save
# Press Enter to confirm
```



## Update Ubuntu

Before we move to the Docker Desktop settings, let's update our system and ensure we start in the best conditions:

```bash
# Update the repositories and list of the packages available
sudo apt update
# Update the system based on the packages installed > the "-y" will approve the change automatically
sudo apt upgrade -y
```



# Docker Desktop: faster with WSL2

Before we move into the settings, let's run a small test; it will really show how cool the new integration with Docker Desktop is:

```bash
# Try to see if the docker cli and daemon are installed
docker version
# Same for kubectl
kubectl version
```



You got an error? Perfect! It's actually good news, so let's now move on to the settings.

## Docker Desktop settings: enable WSL2 integration

First, let's start Docker Desktop for Windows if it's not already running. Open the Windows start menu, type "docker", and click on the name to start the application:



You should now see the Docker icon with the other taskbar icons near the clock:



Now click on the Docker icon and choose settings. A new window will appear:



By default, the WSL2 integration is not active, so click the "Enable the experimental WSL 2 based engine" and click "Apply & Restart":



What this feature did behind the scenes was to create two new distros in WSL2, containing and running all the needed backend sockets, daemons and also the CLI tools (read: the docker and kubectl commands).

Still, this first setting is not enough to run the commands inside our distro. If we try, we will have the same error as before.

In order to fix it, and finally be able to use the commands, we need to tell Docker Desktop to "attach" itself to our distro as well:



Let's now switch back to our WSL2 terminal and see if we can (finally) launch the commands:

```bash
# Try to see if the docker cli and daemon are installed
docker version
# Same for kubectl
kubectl version
```



> Tip: if nothing happens, restart Docker Desktop and restart the WSL process in Powershell: `Restart-Service LxssManager` and launch a new Ubuntu session

And success! The basic settings are now done and we move to the installation of KinD.

# KinD: Kubernetes made easy in a container

Right now, we have Docker installed and configured, and the last test worked fine.

However, if we look carefully at the `kubectl` command, it found the "Client Version" (1.15.5), but it didn't find any server.

This is normal as we didn't enable the Docker Kubernetes cluster. So let's install KinD and create our first cluster.

And as sources are always important to mention, we will follow (partially) the how-to on the [official KinD website](https://kind.sigs.k8s.io/docs/user/quick-start/):

```bash
# Download the latest version of KinD
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64
# Make the binary executable
chmod +x ./kind
# Move the binary to your executable path
sudo mv ./kind /usr/local/bin/
```



## KinD: the first cluster

We are ready to create our first cluster:

```bash
# Check if the KUBECONFIG is not set
echo $KUBECONFIG
# Check if the .kube directory is created > if not, no need to create it
ls $HOME/.kube
# Create the cluster and give it a name (optional)
kind create cluster --name wslkind
# Check if the .kube has been created and populated with files
ls $HOME/.kube
```



> Tip: as you can see, the Terminal was changed so the nice icons are all displayed

The cluster has been successfully created, and because we are using Docker Desktop, the network is all set for us to use "as is".
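
As a quick check, the API endpoint can also be printed on the command line before opening the browser:

```bash
# Print the control plane URL from the kubeconfig that KinD generated
kubectl cluster-info
```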

So we can open the `Kubernetes master` URL in our Windows browser:



And this is the real strength of Docker Desktop for Windows with the WSL2 backend. Docker really did an amazing integration.

## KinD: counting 1 - 2 - 3

Our first cluster was created and it's the "normal" one-node cluster:

```bash
# Check how many nodes it created
kubectl get nodes
# Check the services for the whole cluster
kubectl get all --all-namespaces
```



While this will be enough for most people, let's leverage one of the coolest features, multi-node clustering:

```bash
# Delete the existing cluster
kind delete cluster --name wslkind
# Create a config file for a 3 nodes cluster
cat << EOF > kind-3nodes.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
# Create a new cluster with the config file
kind create cluster --name wslkindmultinodes --config ./kind-3nodes.yaml
# Check how many nodes it created
kubectl get nodes
```



> Tip: depending on how fast we run the "get nodes" command, it can be that not all the nodes are ready; wait a few seconds and run it again, and everything should be ready
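
If you prefer not to poll manually, one possible approach is to let `kubectl` block until the nodes report Ready; the 120s timeout here is arbitrary:

```bash
# Wait until every node of the cluster is Ready
kubectl wait --for=condition=Ready node --all --timeout=120s
```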

And that's it, we have created a three-node cluster, and if we look at the services one more time, we will see several that now have three replicas:

```bash
# Check the services for the whole cluster
kubectl get all --all-namespaces
```



## KinD: can I see a nice dashboard?

Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.

For that, the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) project has been created. The installation and first connection test is quite fast, so let's do it:

```bash
# Install the Dashboard application into our cluster
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
# Check the resources it created based on the new namespace created
kubectl get all -n kubernetes-dashboard
```



As it created a service with a ClusterIP (read: internal network address), we cannot reach it if we type the URL in our Windows browser:



That's because we need to create a temporary proxy:

```bash
# Start a kubectl proxy
kubectl proxy
# Enter the URL on your browser: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```



Finally, to log in, we can either enter a Token, which we didn't create, or enter the `kubeconfig` file from our Cluster.

If we try to log in with the `kubeconfig`, we will get the error "Internal error (500): Not enough data to create auth info structure". This is due to the lack of credentials in the `kubeconfig` file.

So to avoid ending up with the same error, let's follow the [recommended RBAC approach](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md).

Let's open a new WSL2 session:

```bash
# Create a new ServiceAccount
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Create a ClusterRoleBinding for the ServiceAccount
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```



```bash
# Get the Token for the ServiceAccount
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# Copy the token and paste it into the Dashboard login and press "Sign in"
```



Success! And let's see our nodes listed as well:



Three nice and shiny nodes appear.

# Minikube: Kubernetes from everywhere

Right now, we have Docker installed and configured, and the last test worked fine.

However, if we look carefully at the `kubectl` command, it found the "Client Version" (1.15.5), but it didn't find any server.

This is normal as we didn't enable the Docker Kubernetes cluster. So let's install Minikube and create our first cluster.

And as sources are always important to mention, we will follow (partially) the how-to from the [Kubernetes.io website](https://kubernetes.io/docs/tasks/tools/install-minikube/):

```bash
# Download the latest version of Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
# Make the binary executable
chmod +x ./minikube
# Move the binary to your executable path
sudo mv ./minikube /usr/local/bin/
```



## Minikube: updating the host

If we follow the how-to, it states that we should use the `--driver=none` flag in order to run Minikube directly on the host and Docker.

Unfortunately, we will get an error about "conntrack" being required to run Kubernetes v1.18:

```bash
# Create a minikube one node cluster
minikube start --driver=none
```



> Tip: as you can see, the Terminal was changed so the nice icons are all displayed

So let's fix the issue by installing the missing package:

```bash
# Install the conntrack package
sudo apt install -y conntrack
```



Let's try to launch it again:

```bash
# Create a minikube one node cluster
minikube start --driver=none
# We got a permissions error > try again with sudo
sudo minikube start --driver=none
```



Ok, this error could be problematic ... in the past. Luckily for us, there's a solution.

## Minikube: enabling SystemD

In order to enable SystemD on WSL2, we will apply the [scripts](https://forum.snapcraft.io/t/running-snaps-on-wsl2-insiders-only-for-now/13033) from [Daniel Llewellyn](https://twitter.com/diddledan).

I invite you to read the full blog post and how he came to the solution, and the various iterations he did to fix several issues.

So in a nutshell, here are the commands:

```bash
# Install the needed packages
sudo apt install -yqq daemonize dbus-user-session fontconfig
```



```bash
# Create the start-systemd-namespace script
sudo vi /usr/sbin/start-systemd-namespace
#!/bin/bash

SYSTEMD_PID=$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')
if [ -z "$SYSTEMD_PID" ] || [ "$SYSTEMD_PID" != "1" ]; then
    export PRE_NAMESPACE_PATH="$PATH"
    (set -o posix; set) | \
        grep -v "^BASH" | \
        grep -v "^DIRSTACK=" | \
        grep -v "^EUID=" | \
        grep -v "^GROUPS=" | \
        grep -v "^HOME=" | \
        grep -v "^HOSTNAME=" | \
        grep -v "^HOSTTYPE=" | \
        grep -v "^IFS='.*"$'\n'"'" | \
        grep -v "^LANG=" | \
        grep -v "^LOGNAME=" | \
        grep -v "^MACHTYPE=" | \
        grep -v "^NAME=" | \
        grep -v "^OPTERR=" | \
        grep -v "^OPTIND=" | \
        grep -v "^OSTYPE=" | \
        grep -v "^PIPESTATUS=" | \
        grep -v "^POSIXLY_CORRECT=" | \
        grep -v "^PPID=" | \
        grep -v "^PS1=" | \
        grep -v "^PS4=" | \
        grep -v "^SHELL=" | \
        grep -v "^SHELLOPTS=" | \
        grep -v "^SHLVL=" | \
        grep -v "^SYSTEMD_PID=" | \
        grep -v "^UID=" | \
        grep -v "^USER=" | \
        grep -v "^_=" | \
        cat - > "$HOME/.systemd-env"
    echo "PATH='$PATH'" >> "$HOME/.systemd-env"
    exec sudo /usr/sbin/enter-systemd-namespace "$BASH_EXECUTION_STRING"
fi
if [ -n "$PRE_NAMESPACE_PATH" ]; then
    export PATH="$PRE_NAMESPACE_PATH"
fi
```

```bash
# Create the enter-systemd-namespace
sudo vi /usr/sbin/enter-systemd-namespace
#!/bin/bash

if [ "$UID" != 0 ]; then
    echo "You need to run $0 through sudo"
    exit 1
fi

SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')"
if [ -z "$SYSTEMD_PID" ]; then
    /usr/sbin/daemonize /usr/bin/unshare --fork --pid --mount-proc /lib/systemd/systemd --system-unit=basic.target
    while [ -z "$SYSTEMD_PID" ]; do
        SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')"
    done
fi

if [ -n "$SYSTEMD_PID" ] && [ "$SYSTEMD_PID" != "1" ]; then
    if [ -n "$1" ] && [ "$1" != "bash --login" ] && [ "$1" != "/bin/bash --login" ]; then
        exec /usr/bin/nsenter -t "$SYSTEMD_PID" -a \
            /usr/bin/sudo -H -u "$SUDO_USER" \
            /bin/bash -c 'set -a; source "$HOME/.systemd-env"; set +a; exec bash -c '"$(printf "%q" "$@")"
    else
        exec /usr/bin/nsenter -t "$SYSTEMD_PID" -a \
            /bin/login -p -f "$SUDO_USER" \
            $(/bin/cat "$HOME/.systemd-env" | grep -v "^PATH=")
    fi
    echo "Existential crisis"
fi
```

```bash
# Edit the permissions of the enter-systemd-namespace script
sudo chmod +x /usr/sbin/enter-systemd-namespace
# Edit the bash.bashrc file
sudo sed -i 2a"# Start or enter a PID namespace in WSL2\nsource /usr/sbin/start-systemd-namespace\n" /etc/bash.bashrc
```



Finally, exit and launch a new session. You **do not** need to stop WSL2, a new session is enough:



## Minikube: the first cluster

We are ready to create our first cluster:

```bash
# Check if the KUBECONFIG is not set
echo $KUBECONFIG
# Check if the .kube directory is created > if not, no need to create it
ls $HOME/.kube
# Check if the .minikube directory is created > if yes, delete it
ls $HOME/.minikube
# Create the cluster with sudo
sudo minikube start --driver=none
```

In order to be able to use `kubectl` with our user, and not `sudo`, Minikube recommends running the `chown` command:

```bash
# Change the owner of the .kube and .minikube directories
sudo chown -R $USER $HOME/.kube $HOME/.minikube
# Check the access and if the cluster is running
kubectl cluster-info
# Check the resources created
kubectl get all --all-namespaces
```



The cluster has been successfully created, and Minikube used the WSL2 IP, which is great for several reasons, and one of them is that we can open the `Kubernetes master` URL in our Windows browser:



And here is the real strength of the WSL2 integration: once the port `8443` is open on the WSL2 distro, it is actually forwarded to Windows, so instead of having to remember the IP address, we can also reach the `Kubernetes master` URL via `localhost`:



## Minikube: can I see a nice dashboard?

Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.

For that, Minikube embeds the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard). Thanks to it, running and accessing the Dashboard is very simple:

```bash
# Enable the Dashboard service
sudo minikube dashboard
# Access the Dashboard from a browser on Windows side
```



The command also creates a proxy, which means that once we end the command by pressing `CTRL+C`, the Dashboard will no longer be accessible.

Still, if we look at the namespace `kubernetes-dashboard`, we will see that the service is still created:

```bash
# Get all the services from the dashboard namespace
kubectl get all --namespace kubernetes-dashboard
```



Let's edit the service and change its type to `LoadBalancer`:

```bash
# Edit the Dashboard service
kubectl edit service/kubernetes-dashboard --namespace kubernetes-dashboard
# Go to the very end and remove the last 2 lines
status:
  loadBalancer: {}
# Change the type from ClusterIP to LoadBalancer
  type: LoadBalancer
# Save the file
```



Check again the Dashboard service and let's access the Dashboard via the LoadBalancer:

```bash
# Get all the services from the dashboard namespace
kubectl get all --namespace kubernetes-dashboard
# Access the Dashboard from a browser on Windows side with the URL: localhost:<port exposed>
```



# Conclusion

It's clear that we are far from done, as we could have some load balancing implemented and/or other services (storage, ingress, registry, etc.).

Concerning Minikube on WSL2, as it needed to enable SystemD, we can consider it an intermediate level to be implemented.

So with two solutions, what could be the "best for you"? Both bring their own advantages and inconveniences, so here is an overview solely from our point of view:

| Criteria             | KinD                          | Minikube |
| -------------------- | ----------------------------- | -------- |
| Installation on WSL2 | Very Easy                     | Medium   |
| Multi-node           | Yes                           | No       |
| Plugins              | Manual install                | Yes      |
| Persistence          | Yes, however not designed for | Yes      |
| Alternatives         | K3d                           | Microk8s |

We hope you got a real taste of the integration between the different components: WSL2 - Docker Desktop - KinD/Minikube. And we hope it gave you some ideas or, even better, some answers to your Kubernetes workflows with KinD and/or Minikube on Windows and WSL2.

See you soon for other adventures in the Kubernetes ocean.

[Nuno](https://twitter.com/nunixtech) & [Ihor](https://twitter.com/idvoretskyi)

Node objects track information about the Node's resource capacity (for example: the amount of memory available, and the number of CPUs). Nodes that [self register](#self-registration-of-nodes) report their capacity during registration. If you [manually](#manual-node-administration) add a Node, then you need to set the node's capacity information when you add it.

The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that there are enough resources for all the Pods on a Node. The scheduler checks that the sum of the requests of containers on the node is no greater than the node's capacity.
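
You can inspect the capacity that a Node reported; for example, assuming a node named `my-node`:

```shell
# Show the capacity recorded in the Node object
kubectl get node my-node -o jsonpath='{.status.capacity}'
```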

The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the kubelet attempts to pull the specified image:

- `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally.

- `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.

- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `Always` is applied.
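
A brief sketch showing these rules side by side; the image names are placeholders:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-demo
spec:
  containers:
  - name: latest-always
    # No tag means :latest, so Always is applied even though imagePullPolicy is omitted
    image: example.com/app
  - name: cached-ok
    # Explicit tag plus IfNotPresent: a locally present image is reused
    image: example.com/app:1.0.0
    imagePullPolicy: IfNotPresent
EOF
```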

```yaml
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      defaultMode: 0400
```

Then, the secret will be mounted on `/etc/foo` and all the files created by the secret volume mount will have permission `0400`.

Note that the JSON spec doesn't support octal notation, so use the value 256 for 0400 permissions. If you use YAML instead of JSON for the Pod, you can use octal notation to specify permissions in a more natural way.

Note if you `kubectl exec` into the Pod, you need to follow the symlink to find the expected file mode. For example:

Check the secrets file mode on the pod.

```
kubectl exec mypod -it sh

cd /etc/foo
ls -l
```

The output is similar to this:

```
total 0
lrwxrwxrwx 1 root root 15 May 18 00:18 password -> ..data/password
lrwxrwxrwx 1 root root 15 May 18 00:18 username -> ..data/username
```

Follow the symlink to find the correct file mode.

```
cd /etc/foo/..data
ls -l
```

The output is similar to this:

```
total 8
-r-------- 1 root root 12 May 18 00:18 password
-r-------- 1 root root 5 May 18 00:18 username
```

You can also use mapping, as in the previous example, and specify different permissions for different files like this:

```yaml
      items:
      - key: username
        path: my-group/my-username
        mode: 0777
```

In this case, the file resulting in `/etc/foo/my-group/my-username` will have permission value of `0777`. If you use JSON, owing to JSON limitations, you must specify the mode in decimal notation, `511`.

Note that this permission value might be displayed in decimal notation if you read it later.

#### Support traffic shaping

**Experimental Feature**

The CNI networking plugin also supports pod ingress and egress traffic shaping. You can use the official [bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth) plugin offered by the CNI plugin team or use your own plugin with bandwidth control functionality.

If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file (default `/etc/cni/net.d`) and ensure that the binary is included in your CNI bin dir (default `/opt/cni/bin`).

```json
{
  "type": "bandwidth",
  "capabilities": {"bandwidth": true}
}
```
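
With the plugin in place, traffic shaping is requested per Pod through the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations; a sketch with arbitrary bandwidth values:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: shaped-pod
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
spec:
  containers:
  - name: app
    image: nginx
EOF
```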

You can list the current namespaces in a cluster using:

```shell
kubectl get namespace
```
```
NAME              STATUS   AGE
default           Active   1d
kube-node-lease   Active   1d
kube-public       Active   1d
kube-system       Active   1d
```

Kubernetes starts with four initial namespaces:

This is an example of a restrictive policy that requires users to run as an unprivileged user, blocks possible escalations to root, and requires use of several security mechanisms.

{{< codenew file="policy/restricted-psp.yaml" >}}

See [Pod Security Standards](/docs/concepts/security/pod-security-standards/#policy-instantiation) for more examples.

## Policy Reference

### Privileged

Refer to the [Sysctl documentation](/docs/tasks/administer-cluster/sysctl-cluster/#safe-and-unsafe-sysctls).

{{% capture whatsnext %}}

See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.

Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.

{{% /capture %}}

A toleration "matches" a taint if the keys are the same and the effects are the same.

There are two special cases:

An empty `key` with operator `Exists` matches all keys, values and effects, which means this will tolerate everything.

An empty `effect` matches all effects with key `key`.

{{< /note >}}

---
reviewers:
- tallclair
title: Pod Security Standards
content_template: templates/concept
weight: 10
---

{{% capture overview %}}

Security settings for Pods are typically applied by using [security contexts](/docs/tasks/configure-pod-container/security-context/). Security Contexts allow for the definition of privilege and access controls on a per-Pod basis.

The enforcement and policy-based definition of cluster requirements of security contexts has previously been achieved using [Pod Security Policy](/docs/concepts/policy/pod-security-policy/). A _Pod Security Policy_ is a cluster-level resource that controls security sensitive aspects of the Pod specification.

However, numerous means of policy enforcement have arisen that augment or replace the use of PodSecurityPolicy. The intent of this page is to detail recommended Pod security profiles, decoupled from any specific instantiation.

{{% /capture %}}

{{% capture body %}}

## Policy Types

There is an immediate need for base policy definitions to broadly cover the security spectrum. These should range from highly restricted to highly flexible:

- **_Privileged_** - Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations.
- **_Baseline/Default_** - Minimally restrictive policy while preventing known privilege escalations. Allows the default (minimally specified) Pod configuration.
- **_Restricted_** - Heavily restricted policy, following current Pod hardening best practices.

## Policies

### Privileged

The Privileged policy is purposely-open, and entirely unrestricted. This type of policy is typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.

The privileged policy is defined by an absence of restrictions. For blacklist-oriented enforcement mechanisms (such as gatekeeper), the privileged profile may be an absence of applied constraints rather than an instantiated policy. In contrast, for a whitelist oriented mechanism (such as Pod Security Policy) the privileged policy should enable all controls (disable all restrictions).
|
||||
|
||||
### Baseline/Default
|
||||
|
||||
The Baseline/Default policy is aimed at ease of adoption for common containerized workloads while
|
||||
preventing known privilege escalations. This policy is targeted at application operators and
|
||||
developers of non-critical applications. The following listed controls should be
|
||||
enforced/disallowed:
|
||||
|
||||
<table>
  <caption style="display:none">Baseline policy specification</caption>
  <tbody>
    <tr>
      <td><strong>Control</strong></td>
      <td><strong>Policy</strong></td>
    </tr>
    <tr>
      <td>Host Namespaces</td>
      <td>
        Sharing the host namespaces must be disallowed.<br>
        <br><b>Restricted Fields:</b><br>
        spec.hostNetwork<br>
        spec.hostPID<br>
        spec.hostIPC<br>
        <br><b>Allowed Values:</b> false<br>
      </td>
    </tr>
    <tr>
      <td>Privileged Containers</td>
      <td>
        Privileged Pods disable most security mechanisms and must be disallowed.<br>
        <br><b>Restricted Fields:</b><br>
        spec.containers[*].securityContext.privileged<br>
        spec.initContainers[*].securityContext.privileged<br>
        <br><b>Allowed Values:</b> false, undefined/nil<br>
      </td>
    </tr>
    <tr>
      <td>Capabilities</td>
      <td>
        Adding additional capabilities beyond the <a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities">default set</a> must be disallowed.<br>
        <br><b>Restricted Fields:</b><br>
        spec.containers[*].securityContext.capabilities.add<br>
        spec.initContainers[*].securityContext.capabilities.add<br>
        <br><b>Allowed Values:</b> empty (optionally whitelisted defaults)<br>
      </td>
    </tr>
    <tr>
      <td>HostPath Volumes</td>
      <td>
        HostPath volumes must be forbidden.<br>
        <br><b>Restricted Fields:</b><br>
        spec.volumes[*].hostPath<br>
        <br><b>Allowed Values:</b> undefined/nil<br>
      </td>
    </tr>
    <tr>
      <td>Host Ports</td>
      <td>
        HostPorts should be disallowed, or at minimum restricted to a whitelist.<br>
        <br><b>Restricted Fields:</b><br>
        spec.containers[*].ports[*].hostPort<br>
        spec.initContainers[*].ports[*].hostPort<br>
        <br><b>Allowed Values:</b> 0, undefined, (whitelisted)<br>
      </td>
    </tr>
    <tr>
      <td>AppArmor <em>(optional)</em></td>
      <td>
        On supported hosts, the `runtime/default` AppArmor profile is applied by default. The default policy should prevent overriding or disabling the policy, or restrict overrides to a whitelisted set of profiles.<br>
        <br><b>Restricted Fields:</b><br>
        metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']<br>
        <br><b>Allowed Values:</b> runtime/default, undefined<br>
      </td>
    </tr>
    <tr>
      <td>SELinux <em>(optional)</em></td>
      <td>
        Setting custom SELinux options should be disallowed.<br>
        <br><b>Restricted Fields:</b><br>
        spec.securityContext.seLinuxOptions<br>
        spec.containers[*].securityContext.seLinuxOptions<br>
        spec.initContainers[*].securityContext.seLinuxOptions<br>
        <br><b>Allowed Values:</b> undefined/nil<br>
      </td>
    </tr>
  </tbody>
</table>

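To make the table concrete, here is a minimal sketch of a Pod that stays within the baseline controls above; the name and image are placeholders, and every restricted field is either omitted or set to an allowed value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: baseline-ok   # hypothetical name
  annotations:
    # Allowed AppArmor value on supported hosts (optional control above)
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  # hostNetwork/hostPID/hostIPC are omitted, so they default to false
  containers:
  - name: app
    image: nginx:1.17        # placeholder image
    ports:
    - containerPort: 80      # note: no hostPort
    securityContext:
      privileged: false      # false or unset are the allowed values
      # capabilities.add is left unset: only the runtime default set applies
```
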
### Restricted

The Restricted policy is aimed at enforcing current Pod hardening best practices, at the expense of
some compatibility. It is targeted at operators and developers of security-critical applications, as
well as lower-trust users. The following listed controls should be enforced/disallowed:

<table>
  <caption style="display:none">Restricted policy specification</caption>
  <tbody>
    <tr>
      <td><strong>Control</strong></td>
      <td><strong>Policy</strong></td>
    </tr>
    <tr>
      <td colspan="2"><em>Everything from the default profile.</em></td>
    </tr>
    <tr>
      <td>Volume Types</td>
      <td>
        In addition to restricting HostPath volumes, the restricted profile limits usage of non-core volume types to those defined through PersistentVolumes.<br>
        <br><b>Restricted Fields:</b><br>
        spec.volumes[*].hostPath<br>
        spec.volumes[*].gcePersistentDisk<br>
        spec.volumes[*].awsElasticBlockStore<br>
        spec.volumes[*].gitRepo<br>
        spec.volumes[*].nfs<br>
        spec.volumes[*].iscsi<br>
        spec.volumes[*].glusterfs<br>
        spec.volumes[*].rbd<br>
        spec.volumes[*].flexVolume<br>
        spec.volumes[*].cinder<br>
        spec.volumes[*].cephFS<br>
        spec.volumes[*].flocker<br>
        spec.volumes[*].fc<br>
        spec.volumes[*].azureFile<br>
        spec.volumes[*].vsphereVolume<br>
        spec.volumes[*].quobyte<br>
        spec.volumes[*].azureDisk<br>
        spec.volumes[*].portworxVolume<br>
        spec.volumes[*].scaleIO<br>
        spec.volumes[*].storageos<br>
        spec.volumes[*].csi<br>
        <br><b>Allowed Values:</b> undefined/nil<br>
      </td>
    </tr>
    <tr>
      <td>Privilege Escalation</td>
      <td>
        Privilege escalation to root should not be allowed.<br>
        <br><b>Restricted Fields:</b><br>
        spec.containers[*].securityContext.privileged<br>
        spec.initContainers[*].securityContext.privileged<br>
        <br><b>Allowed Values:</b> false, undefined/nil<br>
      </td>
    </tr>
    <tr>
      <td>Running as Non-root</td>
      <td>
        Containers must be required to run as non-root users.<br>
        <br><b>Restricted Fields:</b><br>
        spec.securityContext.runAsNonRoot<br>
        spec.containers[*].securityContext.runAsNonRoot<br>
        spec.initContainers[*].securityContext.runAsNonRoot<br>
        <br><b>Allowed Values:</b> true<br>
      </td>
    </tr>
    <tr>
      <td>Non-root groups <em>(optional)</em></td>
      <td>
        Containers should be forbidden from running with a root primary or supplementary GID.<br>
        <br><b>Restricted Fields:</b><br>
        spec.securityContext.runAsGroup<br>
        spec.securityContext.supplementalGroups[*]<br>
        spec.securityContext.fsGroup<br>
        spec.containers[*].securityContext.runAsGroup<br>
        spec.containers[*].securityContext.supplementalGroups[*]<br>
        spec.containers[*].securityContext.fsGroup<br>
        spec.initContainers[*].securityContext.runAsGroup<br>
        spec.initContainers[*].securityContext.supplementalGroups[*]<br>
        spec.initContainers[*].securityContext.fsGroup<br>
        <br><b>Allowed Values:</b><br>
        non-zero<br>
        undefined / nil (except for `*.runAsGroup`)<br>
      </td>
    </tr>
    <tr>
      <td>Seccomp</td>
      <td>
        The runtime/default seccomp profile must be required, or allow additional whitelisted values.<br>
        <br><b>Restricted Fields:</b><br>
        metadata.annotations['seccomp.security.alpha.kubernetes.io/pod']<br>
        metadata.annotations['container.seccomp.security.alpha.kubernetes.io/*']<br>
        <br><b>Allowed Values:</b><br>
        runtime/default<br>
        undefined (container annotation)<br>
      </td>
    </tr>
  </tbody>
</table>

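Likewise, a hedged sketch of a Pod that satisfies the restricted controls might look as follows — the UID/GID values and names are illustrative, and the seccomp annotation uses the alpha form listed in the table:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-ok   # hypothetical name
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  securityContext:
    runAsNonRoot: true   # required: containers must not run as root
    runAsGroup: 10001    # non-zero GID (illustrative)
  volumes:
  - name: scratch
    emptyDir: {}         # core volume type; hostPath and driver volumes are disallowed
  containers:
  - name: app
    image: nginx:1.17    # placeholder image
    securityContext:
      privileged: false
    volumeMounts:
    - name: scratch
      mountPath: /scratch
```
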
## Policy Instantiation

Decoupling policy definition from policy instantiation allows for a common understanding and
consistent language of policies across clusters, independent of the underlying enforcement
mechanism.

As mechanisms mature, they will be defined below on a per-policy basis. The methods of enforcement
of individual policies are not defined here.

[**PodSecurityPolicy**](/docs/concepts/policy/pod-security-policy/)

- [Privileged](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/privileged-psp.yaml)
- [Baseline](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/baseline-psp.yaml)
- [Restricted](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml)

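As a quick usage sketch, these example policies can be loaded with `kubectl`; note that a PodSecurityPolicy only takes effect once the admission controller is enabled and use of the policy is authorized:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/baseline-psp.yaml
```
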
## FAQ

### Why isn't there a profile between privileged and default?

The three profiles defined here have a clear linear progression from most secure (restricted) to least
secure (privileged), and cover a broad set of workloads. Privileges required above the baseline
policy are typically very application specific, so we do not offer a standard profile in this
niche. This is not to say that the privileged profile should always be used in this case, but that
policies in this space need to be defined on a case-by-case basis.

SIG Auth may reconsider this position in the future, should a clear need for other profiles arise.

### What's the difference between a security policy and a security context?

[Security Contexts](/docs/tasks/configure-pod-container/security-context/) configure Pods and
Containers at runtime. Security contexts are defined as part of the Pod and container specifications
in the Pod manifest, and represent parameters to the container runtime.

Security policies are control plane mechanisms to enforce specific settings in the Security Context,
as well as other parameters outside the Security Context. As of February 2020, the current native
solution for enforcing these security policies is [Pod Security
Policy](/docs/concepts/policy/pod-security-policy/) - a mechanism for centrally enforcing security
policy on Pods across a cluster. Other alternatives for enforcing security policy are being
developed in the Kubernetes ecosystem, such as [OPA
Gatekeeper](https://github.com/open-policy-agent/gatekeeper).

### What profiles should I apply to my Windows Pods?

Windows in Kubernetes has some limitations and differentiators from standard Linux-based
workloads. Specifically, the Pod SecurityContext fields [have no effect on
Windows](/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-podsecuritycontext). As
such, no standardized Pod Security profiles currently exist.

### What about sandboxed Pods?

There is not currently an API standard that controls whether a Pod is considered sandboxed or
not. Sandboxed Pods may be identified by the use of a sandboxed runtime (such as gVisor or Kata
Containers), but there is no standard definition of what a sandboxed runtime is.

The protections necessary for sandboxed workloads can differ from others. For example, the need to
restrict privileged permissions is lessened when the workload is isolated from the underlying
kernel. This allows for workloads requiring heightened permissions to still be isolated.

Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single policy is recommended across all sandboxed workloads.

{{% /capture %}}

@ -19,7 +19,7 @@ By default, Docker uses host-private networking, so containers can talk to other

Coordinating port allocations across multiple developers or teams that provide containers is very difficult to do at scale, and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.

This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](https://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes).
This guide uses a simple nginx server to demonstrate proof of concept.

{{% /capture %}}

@ -254,7 +254,7 @@ options ndots:5

### Feature availability

The availability of Pod DNS Config and DNS Policy "`None`"" is shown as below.
The availability of Pod DNS Config and DNS Policy "`None`" is shown as below.

| k8s version | Feature support |
| :---------: |:-----------:|

@ -125,7 +125,7 @@ That introduces the following issues:
scheduler instead of the DaemonSet controller, by adding the `NodeAffinity` term
to the DaemonSet pods, instead of the `.spec.nodeName` term. The default
scheduler is then used to bind the pod to the target host. If node affinity of
the DaemonSet pod already exists, it is replaced. The DaemonSet controller only
the DaemonSet pod already exists, it is replaced (the original node affinity was taken into account before selecting the target host). The DaemonSet controller only
performs these operations when creating or modifying DaemonSet pods, and no
changes are made to the `spec.template` of the DaemonSet.

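For illustration, the affinity term the controller inserts looks roughly like the following sketch (the node name is a placeholder):

```yaml
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name   # the node the DaemonSet pod should land on
```
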
@ -472,7 +472,7 @@ starts a Spark master controller (see [spark example](https://github.com/kuberne
driver, and then cleans up.

An advantage of this approach is that the overall process gets the completion guarantee of a Job
object, but complete control over what Pods are created and how work is assigned to them.
object, but maintains complete control over what Pods are created and how work is assigned to them.

## Cron Jobs {#cron-jobs}

@ -1079,37 +1079,37 @@ In order from most secure to least secure, the approaches are:

2. Grant a role to the "default" service account in a namespace

   If an application does not specify a `serviceAccountName`, it uses the "default" service account.

   {{< note >}}
   Permissions given to the "default" service account are available to any pod
   in the namespace that does not specify a `serviceAccountName`.
   {{< /note >}}

   For example, grant read-only permission within "my-namespace" to the "default" service account:

   ```shell
   kubectl create rolebinding default-view \
     --clusterrole=view \
     --serviceaccount=my-namespace:default \
     --namespace=my-namespace
   ```

   Many [add-ons](/docs/concepts/cluster-administration/addons/) run as the
   "default" service account in the `kube-system` namespace.
   To allow those add-ons to run with super-user access, grant cluster-admin
   permissions to the "default" service account in the `kube-system` namespace.

   {{< caution >}}
   Enabling this means the `kube-system` namespace contains Secrets
   that grant super-user access to your cluster's API.
   {{< /caution >}}

   ```shell
   kubectl create clusterrolebinding add-on-cluster-admin \
     --clusterrole=cluster-admin \
     --serviceaccount=kube-system:default
   ```

3. Grant a role to all service accounts in a namespace

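   Following the same pattern as the earlier examples, a sketch of this step could use the built-in `system:serviceaccounts:<namespace>` group (the namespace name is a placeholder):

   ```shell
   # Grant read-only access to every service account in my-namespace
   kubectl create rolebinding serviceaccounts-view \
     --clusterrole=view \
     --group=system:serviceaccounts:my-namespace \
     --namespace=my-namespace
   ```
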
@ -119,7 +119,7 @@ track=stable

- **CPU requirement (cores)** and **Memory requirement (MiB)**: You can specify the minimum [resource limits](/docs/tasks/configure-pod-container/limit-range/) for the container. By default, Pods run with unbounded CPU and memory limits.

- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/user-guide/containers/#containers-and-commands). You can use the command options and arguments to override the default.
- **Run command** and **Run command arguments**: By default, your containers run the specified Docker image's default [entrypoint command](/docs/tasks/inject-data-application/define-command-argument-container/). You can use the command options and arguments to override the default.

- **Run as privileged**: This setting determines whether processes in [privileged containers](/docs/user-guide/pods/#privileged-mode-for-pod-containers) are equivalent to processes running as root on the host. Privileged containers can make use of capabilities like manipulating the network stack and accessing devices.

@ -47,7 +47,9 @@ This tutorial provides a container image that uses NGINX to echo back all the re

   {{< kat-button >}}

   {{< note >}}If you installed Minikube locally, run `minikube start`.{{< /note >}}
   {{< note >}}
   If you installed Minikube locally, run `minikube start`.
   {{< /note >}}

2. Open the Kubernetes dashboard in a browser:

@ -113,7 +115,9 @@ Pod runs a Container based on the provided Docker image.
   kubectl config view
   ```

{{< note >}}For more information about `kubectl`commands, see the [kubectl overview](/docs/user-guide/kubectl-overview/).{{< /note >}}
{{< note >}}
For more information about `kubectl` commands, see the [kubectl overview](/docs/user-guide/kubectl-overview/).
{{< /note >}}

## Create a Service

@ -0,0 +1,74 @@
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: baseline
  annotations:
    # Optional: Allow the default AppArmor profile, requires setting the default.
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
    # Optional: Allow the default seccomp profile, requires setting the default.
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default,unconfined'
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'unconfined'
spec:
  privileged: false
  # The moby default capability set, defined here:
  # https://github.com/moby/moby/blob/0a5cec2833f82a6ad797d70acbf9cbbaf8956017/oci/caps/defaults.go#L6-L19
  allowedCapabilities:
    - 'CHOWN'
    - 'DAC_OVERRIDE'
    - 'FSETID'
    - 'FOWNER'
    - 'MKNOD'
    - 'NET_RAW'
    - 'SETGID'
    - 'SETUID'
    - 'SETFCAP'
    - 'SETPCAP'
    - 'NET_BIND_SERVICE'
    - 'SYS_CHROOT'
    - 'KILL'
    - 'AUDIT_WRITE'
  # Allow all volume types except hostpath
  volumes:
    # 'core' volume types
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that persistentVolumes set up by the cluster admin are safe to use.
    - 'persistentVolumeClaim'
    # Allow all other non-hostpath volume types.
    - 'awsElasticBlockStore'
    - 'azureDisk'
    - 'azureFile'
    - 'cephFS'
    - 'cinder'
    - 'csi'
    - 'fc'
    - 'flexVolume'
    - 'flocker'
    - 'gcePersistentDisk'
    - 'gitRepo'
    - 'glusterfs'
    - 'iscsi'
    - 'nfs'
    - 'photonPersistentDisk'
    - 'portworxVolume'
    - 'quobyte'
    - 'rbd'
    - 'scaleIO'
    - 'storageos'
    - 'vsphereVolume'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'

@ -97,7 +97,7 @@ class: training
</h5>
<p>The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.</p>
<br>
<a href=https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/" target="_blank" class="button">Go to Certification</a>
<a href="https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/" target="_blank" class="button">Go to Certification</a>
</center>
</div>
</div>

@ -3,7 +3,7 @@ title: Documentación de Kubernetes
noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage
class: gridPage gridPageHome
linkTitle: "Home"
main_menu: true
weight: 10

@ -27,7 +27,7 @@ Before walking through each tutorial, we recommend bookmarking

* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)

* [Hello Minikube](/docs/tutorials/hello-minikube/)
* [Hello Minikube](/es/docs/tutorials/hello-minikube/)

## Setup

@ -0,0 +1,275 @@
---
title: Hello Minikube
content_template: templates/tutorial
weight: 5
menu:
  main:
    title: "Get Started"
    weight: 10
    post: >
      <p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs a "Hello World" app for Node.js.</p>
card:
  name: tutorials
  weight: 10
---

{{% capture overview %}}

This tutorial shows you how to run a "Hello World" Node.js application on Kubernetes using
[Minikube](/docs/setup/learning-environment/minikube) and Katacoda.
Katacoda provides a Kubernetes environment in your browser.

{{< note >}}
You can also follow this tutorial if you have installed [Minikube locally](/docs/tasks/tools/install-minikube/).
{{< /note >}}

{{% /capture %}}

{{% capture objectives %}}

* Deploy a "Hello World" application to Minikube.
* Run the application.
* View the application logs.

{{% /capture %}}

{{% capture prerequisites %}}

This tutorial provides a container image built from the following files:

{{< codenew language="js" file="minikube/server.js" >}}

{{< codenew language="conf" file="minikube/Dockerfile" >}}

For more information on the `docker build` command, read the [Docker documentation](https://docs.docker.com/engine/reference/commandline/build/).

{{% /capture %}}

{{% capture lessoncontent %}}

## Create a Minikube cluster

1. Click **Launch Terminal**

    {{< kat-button >}}

    {{< note >}}If you installed Minikube locally, run `minikube start`.{{< /note >}}

2. Open the Kubernetes dashboard in a browser:

    ```shell
    minikube dashboard
    ```

3. Katacoda environment only: At the top of the terminal pane, click the plus sign, and then click **Select port to view on Host 1**.

4. Katacoda environment only: Type `30000`, and then click **Display Port**.

## Create a Deployment

A Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) is a group of one or more containers,
tied together for the purposes of administration and networking. The Pod in this tutorial has only one container.
A Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) checks on the health of your Pod and restarts its container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.

1. Use the `kubectl create` command to create a Deployment that manages a Pod. The Pod runs a container based on the provided Docker image.

    ```shell
    kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
    ```

2. View the Deployment:

    ```shell
    kubectl get deployments
    ```

    The output is similar to:

    ```
    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    hello-node   1/1     1            1           1m
    ```

3. View the Pod:

    ```shell
    kubectl get pods
    ```

    The output is similar to:

    ```
    NAME                          READY     STATUS    RESTARTS   AGE
    hello-node-5f76cf6ccf-br9b5   1/1       Running   0          1m
    ```

4. View cluster events:

    ```shell
    kubectl get events
    ```

5. View the `kubectl` configuration:

    ```shell
    kubectl config view
    ```

    {{< note >}}For more information about the `kubectl` command, see the [kubectl overview](/docs/user-guide/kubectl-overview/).{{< /note >}}

## Create a Service

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To make the `hello-node` container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes
[*Service*](/docs/concepts/services-networking/service/).

1. Expose the Pod to the public internet using the `kubectl expose` command:

    ```shell
    kubectl expose deployment hello-node --type=LoadBalancer --port=8080
    ```

    The `--type=LoadBalancer` flag indicates that you want to expose your Service outside of the cluster.

2. View the Service you created:

    ```shell
    kubectl get services
    ```

    The output is similar to:

    ```
    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
    kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m
    ```

    On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the `LoadBalancer` type makes the Service accessible through the `minikube service` command.

3. Run the following command:

    ```shell
    minikube service hello-node
    ```

4. Katacoda environment only: Click the plus sign, and then click **Select port to view on Host 1**.

5. Katacoda environment only: Note the 5-digit port number displayed opposite to `8080` in the services output. This port number is generated randomly and may be different from the one in the example. Type your port number into the text box, and then click Display Port. Using the earlier example, you would type `30369`.

    This opens up a browser window that serves your app and shows the "Hello World" message.

## Enable addons

Minikube has a set of {{< glossary_tooltip text="addons" term_id="addons" >}} that can be enabled and disabled in the local Kubernetes environment.

1. List the currently supported addons:

    ```shell
    minikube addons list
    ```

    The output is similar to:

    ```
    addon-manager: enabled
    dashboard: enabled
    default-storageclass: enabled
    efk: disabled
    freshpod: disabled
    gvisor: disabled
    helm-tiller: disabled
    ingress: disabled
    ingress-dns: disabled
    logviewer: disabled
    metrics-server: disabled
    nvidia-driver-installer: disabled
    nvidia-gpu-device-plugin: disabled
    registry: disabled
    registry-creds: disabled
    storage-provisioner: enabled
    storage-provisioner-gluster: disabled
    ```

2. Enable an addon, for example, `metrics-server`:

    ```shell
    minikube addons enable metrics-server
    ```

    The output is similar to:

    ```
    metrics-server was successfully enabled
    ```

3. View the Pod and Service you just created:

    ```shell
    kubectl get pod,svc -n kube-system
    ```

    The output is similar to:

    ```
    NAME                                   READY   STATUS    RESTARTS   AGE
    pod/coredns-5644d7b6d9-mh9ll           1/1     Running   0          34m
    pod/coredns-5644d7b6d9-pqd2t           1/1     Running   0          34m
    pod/metrics-server-67fb648c5           1/1     Running   0          26s
    pod/etcd-minikube                      1/1     Running   0          34m
    pod/influxdb-grafana-b29w8             2/2     Running   0          26s
    pod/kube-addon-manager-minikube        1/1     Running   0          34m
    pod/kube-apiserver-minikube            1/1     Running   0          34m
    pod/kube-controller-manager-minikube   1/1     Running   0          34m
    pod/kube-proxy-rnlps                   1/1     Running   0          34m
    pod/kube-scheduler-minikube            1/1     Running   0          34m
    pod/storage-provisioner                1/1     Running   0          34m

    NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/metrics-server        ClusterIP   10.96.241.45    <none>        80/TCP              26s
    service/kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
    service/monitoring-grafana    NodePort    10.99.24.54     <none>        80:30002/TCP        26s
    service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
    ```

4. Disable `metrics-server`:

    ```shell
    minikube addons disable metrics-server
    ```

    The output is similar to:

    ```
    metrics-server was successfully disabled
    ```

## Clean up

Now you can clean up the resources you created in your cluster:

```shell
kubectl delete service hello-node
kubectl delete deployment hello-node
```

Optionally, stop the Minikube virtual machine:

```shell
minikube stop
```

Optionally, delete the Minikube virtual machine:

```shell
minikube delete
```

{{% /capture %}}

{{% capture whatsnext %}}

* Learn more about [Deployments](/docs/concepts/workloads/controllers/deployment/).
* Learn more about [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/).
* Learn more about [Services](/docs/concepts/services-networking/service/).

{{% /capture %}}

@ -0,0 +1,4 @@
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD [ "node", "server.js" ]

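For reference, one plausible way to build and tag this image locally (the tag name is illustrative, not mandated by the file above):

```shell
# Build the image from the directory containing the Dockerfile and server.js
docker build -t hello-node:v1 .
```
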
@ -0,0 +1,9 @@
var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};
var www = http.createServer(handleRequest);
www.listen(8080);

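A quick way to smoke-test the server outside of any container (assuming a local Node.js installation):

```shell
node server.js &             # start the server in the background
curl http://localhost:8080   # should print: Hello World!
```
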
@ -100,7 +100,7 @@ Here is an example of the events output while running this command:

```
Events:
  FirstSeen  LastSeen  Count  From  SubobjectPath  Type  Reason  Message
  FirstSeen  LastSeen  Count  From  SubObjectPath  Type  Reason  Message
  ---------  --------  -----  ----  -------------  --------  ------  -------
  1m  1m  1  {default-scheduler }  Normal  Scheduled  Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
  1m  1m  1  {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}  spec.containers{main}  Normal  Pulling  pulling image "test:1.0"

@ -117,7 +117,7 @@ Events:

{{% capture whatsnext %}}

* Learn more about the [Container environment](/fr/docs/concepts/containers/container-environment-variables/).
* Learn more about the [Container environment](/fr/docs/concepts/containers/container-environment/).
* Get hands-on experience
  [attaching handlers to container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).

@ -60,12 +60,13 @@ These credentials can be provided in several ways:
  - automatically configured on Google Compute Engine or Google Kubernetes Engine
  - all pods can read the project's private registry
- Using Amazon Elastic Container Registry (ECR)
  - uses IAM roles and policies to control access to ECR repositories
  - automatically refreshes ECR login credentials
- Using Oracle Cloud Infrastructure Registry (OCIR)
  - uses IAM roles and policies to control access to OCIR repositories
- Using Azure Container Registry (ACR)
- Using IBM Cloud Container Registry
  - uses IAM roles and policies to control access to the IBM Cloud Container Registry
- Configuring the nodes to authenticate to a private registry
  - all pods can read any configured private registries
  - requires node configuration by a cluster administrator

@ -120,9 +121,9 @@ Troubleshooting:
- Verify all requirements above.
- Copy the $REGION (e.g. `us-west-2`) credentials to your workstation. SSH into the host and run Docker manually with those credentials. Does it work?
- Verify that the kubelet is running with `--cloud-provider=aws`.
- Check the kubelet logs (e.g. `journalctl -u kubelet`) for log lines like:
  - `plugins.go:56] Registering credential provider: aws-ecr-key`
  - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`
- Increase the kubelet log verbosity to at least 3 and check the kubelet logs (e.g. with `journalctl -u kubelet`) for lines similar to:
  - `aws_credentials.go:109] unable to get ECR credentials from cache, checking ECR API`
  - `aws_credentials.go:116] Got ECR credentials from ECR API for <AWS account ID for ECR>.dkr.ecr.<AWS region>.amazonaws.com`

### Using Azure Container Registry (ACR)
Using [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/)

@ -143,11 +144,11 @@ Once you have defined these variables, you can

### Using IBM Cloud Container Registry

IBM Cloud Container Registry provides a multi-tenant private image registry that you can use to securely store and share your Docker images. By default, images in your private registry are scanned by the integrated Vulnerability Advisor to detect security issues and potential vulnerabilities. Users in your IBM Cloud account can access your images, or you can create a token to grant access to registry namespaces.
IBM Cloud Container Registry provides a multi-tenant private image registry that you can use to securely store and share your images. By default, images in your private registry are scanned by the integrated Vulnerability Advisor to detect security issues and potential vulnerabilities. Users in your IBM Cloud account can access your images, or you can use IAM roles and policies to grant access to IBM Cloud Container Registry namespaces.

To install the IBM Cloud Container Registry CLI plugin and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://cloud.ibm.com/docs/services/Registry?topic=registry-getting-started).
To install the IBM Cloud Container Registry CLI plugin and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://cloud.ibm.com/docs/Registry?topic=registry-getting-started).

You can use IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images) and your private images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into other namespaces, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://cloud.ibm.com/docs/containers?topic=containers-images#images).
If you use the same account and region, you can deploy images that are stored in IBM Cloud Container Registry into the `default` namespace of your IBM Cloud Kubernetes Service cluster without any additional configuration; see [Building containers from images](https://cloud.ibm.com/docs/containers?topic=containers-images). For other configuration options, see [Understanding how to authorize your cluster to pull images from a registry](https://cloud.ibm.com/docs/containers?topic=containers-registry#cluster_registry_auth).

### Configuring nodes to authenticate to a private registry

@ -209,19 +210,33 @@ spec:
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
EOF
```

```
pod/test-image-privee-1 created
```
If everything is working, then, after a few moments, you should see:

If everything is working, then, after a few moments, you can run:

```shell
kubectl logs test-image-privee-1
```

and see that the command outputs:

```
SUCCESS
```

In case of problems, you will see:
If you suspect that the command failed, you can run:

```shell
kubectl describe pods/test-image-privee-1 | grep "Failed"
kubectl describe pods/test-image-privee-1 | grep 'Failed'
```

On failure, the output will be similar to:

```
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```

@ -338,7 +353,7 @@ There are several solutions for configuring private registries. Here are some
- Generate registry credentials for each *tenant*, put them in secrets, and populate those secrets in each *tenant* namespace.
- The *tenant* then adds that secret to the `imagePullSecrets` of each pod.

{{% /capture %}}

If you need to access multiple registries, you can create one secret for each registry.
Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`.

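As a hedged illustration of that merge behavior, a Pod can simply list one secret per registry (the secret names and image below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-registry-pod
spec:
  containers:
  - name: app
    image: registry-a.example.com/team/app:v1   # hypothetical private image
  imagePullSecrets:
  - name: registry-a-creds   # one secret per registry; kubelet merges them
  - name: registry-b-creds
```
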
{{% /capture %}}

@ -43,7 +43,7 @@ complete -F __start_kubectl k

```bash
source <(kubectl completion zsh)  # enable autocompletion for zsh in the current shell
echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc  # add autocompletion permanently to your zsh shell
echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc  # add autocompletion permanently to your zsh shell
```

## Kubectl context and configuration

@ -87,7 +87,7 @@ kubectl config unset users.foo                       # Delete user foo

## Creating objects

Kubernetes manifests can be defined in json or yaml. The file extensions `.yaml`,
Kubernetes manifests can be defined in YAML or JSON. The file extensions `.yaml`,
`.yml`, and `.json` can be used.

```bash

@ -145,7 +145,7 @@ EOF
# Get commands with basic output
kubectl get services                          # List all services in the namespace
kubectl get pods --all-namespaces             # List all pods in all namespaces
kubectl get pods -o wide                      # List all pods in the namespace, with more details
kubectl get pods -o wide                      # List all pods in the current namespace, with more details
kubectl get deployment my-dep                 # List a particular deployment
kubectl get pods                              # List all pods in the namespace
kubectl get pod my-pod -o yaml                # Get a pod's YAML

@ -154,20 +154,20 @@ kubectl get pod my-pod -o yaml                # Get a pod's YAML
kubectl describe nodes my-node
kubectl describe pods my-pod

# List of services sorted by name
kubectl get services --sort-by=.metadata.name  # List services sorted by name
# List services sorted by name
kubectl get services --sort-by=.metadata.name

# List pods sorted by restart count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

# List pods in the test namespace sorted by storage capacity
kubectl get pods -n test --sort-by=.spec.capacity.storage
# List PersistentVolumes sorted by storage capacity
kubectl get pv --sort-by=.spec.capacity.storage

# Get the version label of all pods with label app=cassandra
kubectl get pods --selector=app=cassandra -o \
  jsonpath='{.items[*].metadata.labels.version}'

# Get all nodes (using a selector to exclude those with the label
# named 'node-role.kubernetes.io/master')
kubectl get node --selector='!node-role.kubernetes.io/master'

@ -252,7 +252,7 @@ kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1",
```

## Editing resources
This edits any API resource in an editor.
Edit any API resource in an editor.

```bash
kubectl edit svc/docker-registry                      # Edit the service named docker-registry

@ -274,7 +274,7 @@ kubectl scale --replicas=5 rc/foo rc/bar rc/baz              # Scale multiple
kubectl delete -f ./pod.json                                 # Delete a pod using the type and name specified in pod.json
kubectl delete pod,service baz foo                           # Delete pods and services with the same names "baz" and "foo"
kubectl delete pods,services -l name=myLabel                 # Delete pods and services with the label name=myLabel
kubectl -n my-ns delete po,svc --all                         # Delete all pods and services in namespace my-ns
kubectl -n my-ns delete pod,svc --all                        # Delete all pods and services in namespace my-ns
# Delete all pods matching pattern1 or pattern2 with awk
kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs kubectl delete -n mynamespace pod
```

@ -292,9 +292,9 @@ kubectl logs -f my-pod                             # Stream the pod's logs (stdout)
kubectl logs -f my-pod -c my-container             # Stream the logs of a specific container in the pod (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers   # Stream the logs of all pods with label name=myLabel (stdout)
kubectl run -i --tty busybox --image=busybox -- sh # Run a pod as an interactive shell
kubectl run nginx --image=nginx --restart=Never -n mynamespace       # Run the nginx pod in a specific namespace
kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml  # Simulate running the nginx pod and write its spec into a file called pod.yaml

kubectl attach my-pod -i                           # Attach to a running container

@ -340,7 +340,7 @@ kubectl api-resources --api-group=extensions # All resources in the extensions group

### Formatting output

To output details to your terminal window in a specific format, you can use either the `-o` or `--output` flag with a supported `kubectl` command.
To output details to your terminal window in a specific format, use the `-o` (or `--output`) flag with a supported `kubectl` command.

Output format | Description
--------------| -----------

@ -353,6 +353,21 @@ Output format | Description
`-o=wide` | Output in the plain-text format with any additional information, and for pods, the node name is included
`-o=yaml` | Output a YAML formatted API object

Examples using `-o=custom-columns`:

```bash
# All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'

# All images excluding "k8s.gcr.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="k8s.gcr.io/coredns:1.6.2")].image'

# All fields under metadata regardless of name
kubectl get pods -A -o=custom-columns='DATA:metadata.*'
```

More examples in the kubectl [reference documentation](/fr/docs/reference/kubectl/overview/#colonnes-personnalisées).

### Kubectl output verbosity and debugging

Kubectl verbosity is controlled with the `-v` or `--v` flags, followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).

@ -16,7 +16,6 @@ For stable output in a script:

* Request one of the machine-oriented output formats, such as `-o name`, `-o json`, `-o yaml`, `-o go-template`, or `-o jsonpath`.
* Fully qualify the version, for example `jobs.v1.batch/monjob`. This ensures that kubectl does not use its default version, which may change over time (see the sketch after this list).
* Use the `--generator` flag to pin to a specific behavior when you use generator-based commands such as `kubectl run` or `kubectl expose`.
* Don't rely on context, preferences, or other implicit state.

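A brief sketch combining the first two points (`monjob` is the example job name used above):

```bash
# Machine-oriented output plus a fully-qualified resource version
kubectl get -o json jobs.v1.batch/monjob
```
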
## Best practices

@ -26,48 +25,34 @@ For stable output in a script:
For `kubectl run` to satisfy infrastructure as code:

* Tag images with a version-specific tag, and don't move that tag to a new version. For example, use `:v1234`, `v1.2.3`, `r03062016-1-4`, rather than `:latest` (for more information, see [Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images)).
* Capture the parameters in a checked-in script, or at least use `--record` to annotate the created objects with the corresponding command line, for a lightly parameterized image.
* Check in the script for a heavily parameterized image.
* Switch to configuration files checked into source control for features that are needed but not expressible via `kubectl run` flags.
* Pin to a specific [generator](#generators) version, such as `kubectl run --generator=deployment/v1beta1`.

You can use the `--dry-run` flag to preview the object that would be sent to your cluster, without really submitting it.

{{< note >}}
All `kubectl` generators are deprecated. See the Kubernetes v1.17 documentation for a [list](https://v1-17.docs.kubernetes.io/fr/docs/reference/kubectl/conventions/#g%C3%A9n%C3%A9rateurs) of generators and how they were used.
{{< /note >}}

#### Generators

You can create the following resources using `kubectl run` with the `--generator` flag:

| Resource                            | api group          | kubectl command                                    |
|-------------------------------------|--------------------|----------------------------------------------------|
| Pod                                 | v1                 | `kubectl run --generator=run-pod/v1`               |
| Replication controller (deprecated) | v1                 | `kubectl run --generator=run/v1`                   |
| Deployment (deprecated)             | extensions/v1beta1 | `kubectl run --generator=deployment/v1beta1`       |
| Deployment (deprecated)             | apps/v1beta1       | `kubectl run --generator=deployment/apps.v1beta1`  |
| Job (deprecated)                    | batch/v1           | `kubectl run --generator=job/v1`                   |
| CronJob (deprecated)                | batch/v1beta1      | `kubectl run --generator=cronjob/v1beta1`          |
| CronJob (deprecated)                | batch/v2alpha1     | `kubectl run --generator=cronjob/v2alpha1`         |

{{< note >}}
`kubectl run --generator`, except for `run-pod/v1`, is deprecated since v1.12.
{{< /note >}}

If you don't specify a generator flag, other flags prompt you to use a specific generator. The following table lists the flags that force you to use a specific generator, depending on the cluster version:

| Generated resource      | Cluster v1.4 and later | Cluster v1.3          | Cluster v1.2                               | Cluster v1.1 and earlier                   |
|:-----------------------:|------------------------|-----------------------|--------------------------------------------|--------------------------------------------|
| Pod                     | `--restart=Never`      | `--restart=Never`     | `--generator=run-pod/v1`                   | `--restart=OnFailure` OR `--restart=Never` |
| Replication Controller  | `--generator=run/v1`   | `--generator=run/v1`  | `--generator=run/v1`                       | `--restart=Always`                         |
| Deployment              | `--restart=Always`     | `--restart=Always`    | `--restart=Always`                         | N/A                                        |
| Job                     | `--restart=OnFailure`  | `--restart=OnFailure` | `--restart=OnFailure` OR `--restart=Never` | N/A                                        |
| Cron Job                | `--schedule=<cron>`    | N/A                   | N/A                                        | N/A                                        |

{{< note >}}
These flags use a default generator only when you have not specified any flag.
This means that when you combine `--generator` with other flags, the generator you specified later does not change. For example, in a v1.4 cluster, if you first specify `--restart=Always`, a Deployment is created; if you then specify `--restart=Always` and `--generator=run/v1`, a Replication Controller is created instead.
This lets you pin to a specific behavior with the generator, even if the default generator is changed later.
{{< /note >}}

The flags set the generator in the following order: first the `--schedule` flag, then the `--restart` flag, and finally the `--generator` flag.

To check on the resource that was finally created, use the `--dry-run` flag, which provides the object that will be submitted to the cluster.
You can generate the following resources with a kubectl command, `kubectl create --dry-run -o yaml`:
```
clusterrole         Create a ClusterRole.
clusterrolebinding  Create a ClusterRoleBinding for a particular ClusterRole.
configmap           Create a configmap from a local file, directory or literal value.
cronjob             Create a cronjob with the specified name.
deployment          Create a deployment with the specified name.
job                 Create a job with the specified name.
namespace           Create a namespace with the specified name.
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass       Create a priorityclass with the specified name.
quota               Create a quota with the specified name.
role                Create a role with a single rule.
rolebinding         Create a RoleBinding for a particular Role or ClusterRole.
secret              Create a secret using the specified subcommand.
service             Create a service using the specified subcommand.
serviceaccount      Create a service account with the specified name.
```

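For example, one way to use that form — here generating a Deployment manifest without creating anything (the name and image are illustrative):

```bash
kubectl create deployment nginx --image=nginx --dry-run -o yaml > deployment.yaml
```
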
### `kubectl apply`

@ -71,10 +71,10 @@ Function | Description | Example
--------------------|----------------------------|-----------------------------------------------------------------|------------------
`text`              | the plain text             | `kind is {.kind}`                                               | `kind is List`
`@`                 | the current object         | `{@}`                                                           | same as input
`.` or `[]`         | child operator             | `{.kind}` or `{['kind']}`                                       | `List`
`.` or `[]`         | child operator             | `{.kind}`, `{['kind']}` or `{['name\.type']}`                   | `List`
`..`                | recursive descent          | `{..name}`                                                      | `127.0.0.1 127.0.0.2 myself e2e`
`*`                 | wildcard. Get all objects  | `{.items[*].metadata.name}`                                     | `[127.0.0.1 127.0.0.2]`
`[start:end :step]` | subscript operator         | `{.users[0].name}`                                              | `myself`
`[start:end:step]`  | subscript operator         | `{.users[0].name}`                                              | `myself`
`[,]`               | union operator             | `{.items[*]['metadata.name', 'status.capacity']}`               | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]`
`?()`               | filter                     | `{.users[?(@.name=="e2e")].user.password}`                      | `secret`
`range`, `end`      | list iteration             | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]`

@ -87,14 +87,18 @@ kubectl get pods -o json
kubectl get pods -o=jsonpath='{@}'
kubectl get pods -o=jsonpath='{.items[0]}'
kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}"
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
```

{{< note >}}
On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example:

```cmd
C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}"
C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}"
kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}"
kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}"
```

{{< /note >}}

{{% /capture %}}

@ -510,6 +510,7 @@ kubectl [flags]
|
|||
|
||||
{{% capture seealso %}}
|
||||
|
||||
* [kubectl alpha](/docs/reference/generated/kubectl/kubectl-commands#alpha) - Commandes pour fonctionnalités alpha
|
||||
* [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - Met à jour les annotations d'une ressource
|
||||
* [kubectl api-resources](/docs/reference/generated/kubectl/kubectl-commands#api-resources) - Affiche les ressources de l'API prises en charge sur le serveur
|
||||
* [kubectl api-versions](/docs/reference/generated/kubectl/kubectl-commands#api-versions) - Affiche les versions de l'API prises en charge sur le serveur, sous la forme "groupe/version"
|
||||
|
@ -545,7 +546,7 @@ kubectl [flags]
|
|||
* [kubectl replace](/docs/reference/generated/kubectl/kubectl-commands#replace) - Remplace une ressource par fichier ou stdin
|
||||
* [kubectl rollout](/docs/reference/generated/kubectl/kubectl-commands#rollout) - Gère le rollout d'une ressource
|
||||
* [kubectl run](/docs/reference/generated/kubectl/kubectl-commands#run) - Exécute une image donnée dans le cluster
|
||||
* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - Définit une nouvelle taille pour un Deployment, ReplicaSet, Replication Controller, ou Job
|
||||
* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - Définit une nouvelle taille pour un Deployment, ReplicaSet ou Replication Controller
|
||||
* [kubectl set](/docs/reference/generated/kubectl/kubectl-commands#set) - Définit des fonctionnalités spécifiques sur des objets
|
||||
* [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) - Met à jour les marques (taints) sur un ou plusieurs nœuds
|
||||
* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - Affiche l'utilisation de ressources matérielles (CPU/Memory/Storage)
|
||||
|
|
|
@ -9,7 +9,7 @@ card:
|
|||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
Kubectl est une interface en ligne de commande qui permet d'exécuter des commandes sur des clusters Kubernetes. `kubectl` recherche un fichier appelé config dans le répertoire $HOME/.kube. Vous pouvez spécifier d'autres fichiers [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) en définissant la variable d'environnement KUBECONFIG ou en utilisant le paramètre [`--kubeconfig`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
|
||||
Kubectl est un outil en ligne de commande pour contrôler des clusters Kubernetes. `kubectl` recherche un fichier appelé config dans le répertoire $HOME/.kube. Vous pouvez spécifier d'autres fichiers [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) en définissant la variable d'environnement KUBECONFIG ou en utilisant le paramètre [`--kubeconfig`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
|
||||
|
||||
Cet aperçu couvre la syntaxe `kubectl`, décrit les opérations et fournit des exemples classiques. Pour des détails sur chaque commande, incluant toutes les options et sous-commandes autorisées, voir la documentation de référence de [kubectl](/docs/reference/generated/kubectl/kubectl-commands/). Pour des instructions d'installation, voir [installer kubectl](/docs/tasks/kubectl/install/).
|
||||
|
@ -67,34 +67,51 @@ Si vous avez besoin d'aide, exécutez `kubectl help` depuis la fenêtre de termi
|
|||
|
||||
Le tableau suivant inclut une courte description et la syntaxe générale pour chaque opération `kubectl` :
|
||||
|
||||
Opération | Syntaxe | Description
|
||||
-------------------- | -------------------- | --------------------
|
||||
Opération | Syntaxe | Description
|
||||
----------------| ---------------------------------------------------------------------------------------------------------------------------------------------------------| --------------------
|
||||
`alpha` | `kubectl alpha SOUS-COMMANDE [flags]` | Liste les commandes disponibles qui correspondent à des fonctionnalités alpha, qui ne sont pas activées par défaut dans les clusters Kubernetes.
|
||||
`annotate` | <code>kubectl annotate (-f FICHIER | TYPE NOM | TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]</code> | Ajoute ou modifie les annotations d'une ou plusieurs ressources.
|
||||
`api-resources` | `kubectl api-resources [flags]` | Liste les ressources d'API disponibles.
|
||||
`api-versions` | `kubectl api-versions [flags]` | Liste les versions d'API disponibles.
|
||||
`apply` | `kubectl apply -f FICHIER [flags]` | Applique un changement de configuration à une ressource depuis un fichier ou stdin.
|
||||
`attach` | `kubectl attach POD -c CONTENEUR [-i] [-t] [flags]` | S'attache à un conteneur en cours d'exécution, soit pour voir la sortie standard, soit pour interagir avec le conteneur (stdin).
|
||||
`auth` | `kubectl auth [flags] [options]` | Inspecte les autorisations.
|
||||
`autoscale` | <code>kubectl autoscale (-f FICHIER | TYPE NOM | TYPE/NOM) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]</code> | Scale automatiquement l'ensemble des pods gérés par un replication controller.
|
||||
`certificate` | `kubectl certificate SOUS-COMMANDE [options]` | Modifie les ressources de type certificat.
|
||||
`cluster-info` | `kubectl cluster-info [flags]` | Affiche les informations des endpoints du master et des services du cluster.
|
||||
`completion` | `kubectl completion SHELL [options]` | Affiche le code de complétion pour le shell spécifié (bash ou zsh).
|
||||
`config` | `kubectl config SOUS-COMMANDE [flags]` | Modifie les fichiers kubeconfig. Voir les sous-commandes individuelles pour plus de détails.
|
||||
`convert` | `kubectl convert -f FICHIER [options]` | Convertit des fichiers de configuration entre différentes versions d'API. Les formats YAML et JSON sont acceptés.
|
||||
`cordon` | `kubectl cordon NOEUD [options]` | Marque un nœud comme non programmable.
|
||||
`cp` | `kubectl cp <fichier-src> <fichier-dest> [options]` | Copie des fichiers et des répertoires vers et depuis des conteneurs.
|
||||
`create` | `kubectl create -f FICHIER [flags]` | Crée une ou plusieurs ressources depuis un fichier ou stdin.
|
||||
`delete` | <code>kubectl delete (-f FICHIER | TYPE [NOM | /NOM | -l label | --all]) [flags]</code> | Supprime des ressources soit depuis un fichier ou stdin, ou en indiquant des sélecteurs de label, des noms, des sélecteurs de ressources ou des ressources.
|
||||
`describe` | <code>kubectl describe (-f FICHIER | TYPE [PREFIXE_NOM | /NOM | -l label]) [flags]</code> | Affiche l'état détaillé d'une ou plusieurs ressources.
|
||||
`diff` | `kubectl diff -f FICHIER [flags]` | Diff un fichier ou stdin par rapport à la configuration en cours (**BETA**)
|
||||
`diff` | `kubectl diff -f FICHIER [flags]` | Diff un fichier ou stdin par rapport à la configuration en cours
|
||||
`drain` | `kubectl drain NOEUD [options]` | Vide un nœud en préparation de sa mise en maintenance.
|
||||
`edit` | <code>kubectl edit (-f FICHIER | TYPE NOM | TYPE/NOM) [flags]</code> | Édite et met à jour la définition d'une ou plusieurs ressources sur le serveur en utilisant l'éditeur par défaut.
|
||||
`exec` | `kubectl exec POD [-c CONTENEUR] [-i] [-t] [flags] [-- COMMANDE [args...]]` | Exécute une commande à l'intérieur d'un conteneur dans un pod.
|
||||
`explain` | `kubectl explain [--recursive=false] [flags]` | Obtient des informations sur différentes ressources. Par exemple pods, nœuds, services, etc.
|
||||
`expose` | <code>kubectl expose (-f FICHIER | TYPE NOM | TYPE/NOM) [--port=port] [--protocol=TCP|UDP] [--target-port=nombre-ou-nom] [--name=nom] [--external-ip=ip-externe-ou-service] [--type=type] [flags]</code> | Expose un replication controller, service ou pod comme un nouveau service Kubernetes.
|
||||
`get` | <code>kubectl get (-f FICHIER | TYPE [NOM | /NOM | -l label]) [--watch] [--sort-by=CHAMP] [[-o | --output]=FORMAT_AFFICHAGE] [flags]</code> | Liste une ou plusieurs ressources.
|
||||
`kustomize` | `kubectl kustomize <répertoire> [flags] [options]` | Liste un ensemble de ressources d'API généré à partir d'instructions d'un fichier kustomization.yaml. Le paramètre doit être le chemin d'un répertoire contenant ce fichier, ou l'URL d'un dépôt git incluant un suffixe de chemin par rapport à la racine du dépôt.
|
||||
`label` | <code>kubectl label (-f FICHIER | TYPE NOM | TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]</code> | Ajoute ou met à jour les labels d'une ou plusieurs ressources.
|
||||
`logs` | `kubectl logs POD [-c CONTENEUR] [--follow] [flags]` | Affiche les logs d'un conteneur dans un pod.
|
||||
`options` | `kubectl options` | Liste les options globales, qui s'appliquent à toutes les commandes.
|
||||
`patch` | <code>kubectl patch (-f FICHIER | TYPE NOM | TYPE/NOM) --patch PATCH [flags]</code> | Met à jour un ou plusieurs champs d'une ressource en utilisant le processus de merge patch stratégique.
|
||||
`plugin` | `kubectl plugin [flags] [options]` | Fournit des utilitaires pour interagir avec des plugins.
|
||||
`port-forward` | `kubectl port-forward POD [PORT_LOCAL:]PORT_DISTANT [...[PORT_LOCAL_N:]PORT_DISTANT_N] [flags]` | Transfère un ou plusieurs ports locaux vers un pod.
|
||||
`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Exécute un proxy vers un API server Kubernetes.
|
||||
`replace` | `kubectl replace -f FICHIER` | Remplace une ressource depuis un fichier ou stdin.
|
||||
`rolling-update`| <code>kubectl rolling-update ANCIEN_NOM_CONTROLEUR ([NOUVEAU_NOM_CONTROLEUR] --image=NOUVELLE_IMAGE_CONTENEUR | -f NOUVELLE_SPEC_CONTROLEUR) [flags]</code> | Exécute un rolling update en remplaçant graduellement le replication controller indiqué et ses pods.
|
||||
`run` | `kubectl run NOM --image=image [--env="cle=valeur"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | Exécute dans le cluster l'image indiquée.
|
||||
`rollout` | `kubectl rollout SOUS-COMMANDE [options]` | Gère le rollout d'une ressource. Les types de ressources valides sont : deployments, daemonsets et statefulsets.
|
||||
`run` | `kubectl run NOM --image=image [--env="cle=valeur"] [--port=port] [--replicas=replicas] [--dry-run=server|client|none] [--overrides=inline-json] [flags]` | Exécute dans le cluster l'image indiquée.
|
||||
`scale` | <code>kubectl scale (-f FICHIER | TYPE NOM | TYPE/NOM) --replicas=QUANTITE [--resource-version=version] [--current-replicas=quantité] [flags]</code> | Met à jour la taille du replication controller indiqué.
|
||||
`set` | `kubectl set SOUS-COMMANDE [options]` | Configure les ressources de l'application.
|
||||
`taint` | `kubectl taint NOEUD NOM CLE_1=VAL_1:EFFET_TAINT_1 ... CLE_N=VAL_N:EFFET_TAINT_N [options]` | Met à jour les marques (taints) d'un ou plusieurs nœuds.
|
||||
`top` | `kubectl top [flags] [options]` | Affiche l'utilisation des ressources (CPU/Mémoire/Stockage).
|
||||
`uncordon` | `kubectl uncordon NOEUD [options]` | Marque un nœud comme programmable.
|
||||
`version` | `kubectl version [--client] [flags]` | Affiche la version de Kubernetes du serveur et du client.
|
||||
`wait` | <code>kubectl wait ([-f FICHIER] | ressource.groupe/ressource.nom | ressource.groupe [(-l label | --all)]) [--for=delete|--for condition=available] [options]</code> | Expérimental : Attend une condition spécifique sur une ou plusieurs ressources.
|
||||
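Par exemple, en combinant quelques-unes de ces opérations (les noms `mon-pod`, `mon-deploy` et `mon-conteneur` sont purement illustratifs) :
|
||||
```shell
|
||||
kubectl get pod mon-pod -o wide
|
||||
kubectl describe deployment mon-deploy
|
||||
kubectl logs mon-pod -c mon-conteneur --follow
|
||||
```
|
||||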
|
||||
Rappelez-vous : Pour tout savoir sur les opérations, voir la documentation de référence de [kubectl](/docs/user-guide/kubectl/).
|
||||
|
||||
|
@ -105,7 +122,8 @@ Le tableau suivant inclut la liste de tous les types de ressources pris en charg
|
|||
(cette sortie peut être obtenue depuis `kubectl api-resources`, et correspond à Kubernetes 1.13.3.)
|
||||
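Par exemple, pour ne lister que les ressources rattachées à un namespace :
|
||||
```shell
|
||||
kubectl api-resources --namespaced=true
|
||||
```
|
||||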
|
||||
| Nom de la ressource | Noms abrégés | Groupe API | Par namespace | Genre de la ressource |
|
||||
|---|---|---|---|---|
|
||||
|---------------------|--------------|------------|---------------|-----------------------|
|
||||
| `bindings` | | | true | Binding|
|
||||
| `componentstatuses` | `cs` | | false | ComponentStatus |
|
||||
| `configmaps` | `cm` | | true | ConfigMap |
|
||||
| `endpoints` | `ep` | | true | Endpoints |
|
||||
|
@ -150,6 +168,8 @@ Le tableau suivant inclut la liste de tous les types de ressources pris en charg
|
|||
| `rolebindings` | | rbac.authorization.k8s.io | true | RoleBinding |
|
||||
| `roles` | | rbac.authorization.k8s.io | true | Role |
|
||||
| `priorityclasses` | `pc` | scheduling.k8s.io | false | PriorityClass |
|
||||
| `csidrivers` | | storage.k8s.io | false | CSIDriver |
|
||||
| `csinodes` | | storage.k8s.io | false | CSINode |
|
||||
| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass |
|
||||
| `volumeattachments` | | storage.k8s.io | false | VolumeAttachment |
|
||||
|
||||
|
@ -242,8 +262,8 @@ kubectl get pods <nom-pod> --server-print=false
|
|||
La sortie ressemble à :
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nom-pod 1/1 Running 0 1m
|
||||
NAME AGE
|
||||
nom-pod 1m
|
||||
```
|
||||
|
||||
### Ordonner les listes d'objets
|
||||
|
@ -297,8 +317,8 @@ $ kubectl get replicationcontroller <nom-rc>
|
|||
# Liste ensemble tous les replication controller et les services dans le format de sortie texte.
|
||||
$ kubectl get rc,services
|
||||
|
||||
# Liste tous les daemon sets, dont ceux non initialisés, dans le format de sortie texte.
|
||||
$ kubectl get ds --include-uninitialized
|
||||
# Liste tous les daemon sets dans le format de sortie texte.
|
||||
kubectl get ds
|
||||
|
||||
# Liste tous les pods s'exécutant sur le nœud serveur01
|
||||
$ kubectl get pods --field-selector=spec.nodeName=serveur01
|
||||
|
@ -317,8 +337,8 @@ $ kubectl describe pods/<nom-pod>
|
|||
# Rappelez-vous : les noms des pods étant créés par un replication controller sont préfixés par le nom du replication controller.
|
||||
$ kubectl describe pods <nom-rc>
|
||||
|
||||
# Décrit tous les pods, sans inclure les non initialisés
|
||||
$ kubectl describe pods --include-uninitialized=false
|
||||
# Décrit tous les pods
|
||||
$ kubectl describe pods
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
|
@ -332,11 +352,8 @@ Vous pouvez utiliser les options `-w` ou `--watch` pour initier l'écoute des mo
|
|||
# Supprime un pod en utilisant le type et le nom spécifiés dans le fichier pod.yaml.
|
||||
$ kubectl delete -f pod.yaml
|
||||
|
||||
# Supprime tous les pods et services ayant le label name=<label-name>.
|
||||
$ kubectl delete pods,services -l name=<label-name>
|
||||
|
||||
# Supprime tous les pods et services ayant le label name=<label-name>, en incluant les non initialisés.
|
||||
$ kubectl delete pods,services -l name=<label-name> --include-uninitialized
|
||||
# Supprime tous les pods et services ayant le label <clé-label>=<valeur-label>
|
||||
$ kubectl delete pods,services -l <clé-label>=<valeur-label>
|
||||
|
||||
# Supprime tous les pods, en incluant les non initialisés.
|
||||
$ kubectl delete pods --all
|
||||
|
@ -346,13 +363,13 @@ $ kubectl delete pods --all
|
|||
|
||||
```shell
|
||||
# Affiche la sortie de la commande 'date' depuis le pod <nom-pod>. Par défaut, la sortie se fait depuis le premier conteneur.
|
||||
$ kubectl exec <nom-pod> date
|
||||
$ kubectl exec <nom-pod> -- date
|
||||
|
||||
# Affiche la sortie de la commande 'date' depuis le conteneur <nom-conteneur> du pod <nom-pod>.
|
||||
$ kubectl exec <nom-pod> -c <nom-conteneur> date
|
||||
$ kubectl exec <nom-pod> -c <nom-conteneur> -- date
|
||||
|
||||
# Obtient un TTY interactif et exécute /bin/bash depuis le pod <nom-pod>. Par défaut, la sortie se fait depuis le premier conteneur.
|
||||
$ kubectl exec -ti <nom-pod> /bin/bash
|
||||
$ kubectl exec -ti <nom-pod> -- /bin/bash
|
||||
```
|
||||
|
||||
`kubectl logs` - Affiche les logs d'un conteneur dans un pod.
|
||||
|
@ -365,6 +382,16 @@ $ kubectl logs <nom-pod>
|
|||
$ kubectl logs -f <nom-pod>
|
||||
```
|
||||
|
||||
`kubectl diff` - Affiche un diff des mises à jour proposées au cluster.
|
||||
|
||||
```shell
|
||||
# Diff les ressources présentes dans "pod.json".
|
||||
kubectl diff -f pod.json
|
||||
|
||||
# Diff les ressources présentes dans le fichier lu sur l'entrée standard.
|
||||
cat service.yaml | kubectl diff -f -
|
||||
```
|
||||
|
||||
## Exemples : Créer et utiliser des plugins
|
||||
|
||||
Utilisez les exemples suivants pour vous familiariser avec l'écriture et l'utilisation de plugins `kubectl` :
|
||||
|
@ -428,7 +455,7 @@ $ cat ./kubectl-whoami
|
|||
# ce plugin utilise la commande `kubectl config` pour afficher
|
||||
# l'information sur l'utilisateur courant, en se basant sur
|
||||
# le contexte couramment sélectionné
|
||||
kubectl config view --template='{{ range .contexts }}{{ if eq .name "'$(kubectl config current-context)'" }}Current user: {{ .context.user }}{{ end }}{{ end }}'
|
||||
kubectl config view --template='{{ range .contexts }}{{ if eq .name "'$(kubectl config current-context)'" }}Current user: {{ printf "%s\n" .context.user }}{{ end }}{{ end }}'
|
||||
```
|
||||
|
||||
Exécuter le plugin ci-dessus vous donne une sortie contenant l'utilisateur du contexte couramment sélectionné dans votre fichier KUBECONFIG :
|
||||
|
|
|
@ -16,7 +16,7 @@ Une page montre comment effectuer une seule chose, généralement en donnant une
|
|||
|
||||
{{% capture body %}}
|
||||
|
||||
## Interface web (Dashboard) #{dashboard}
|
||||
## Interface web (Dashboard) {#dashboard}
|
||||
|
||||
Déployer et accéder au dashboard web de votre cluster pour vous aider à le gérer et administrer un cluster Kubernetes.
|
||||
|
||||
|
|
|
@ -0,0 +1,126 @@
|
|||
---
|
||||
title: Le API di Kubernetes
|
||||
content_template: templates/concept
|
||||
weight: 30
|
||||
card:
|
||||
name: concepts
|
||||
weight: 20
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Le convenzioni generali seguite dalle API sono descritte in [API conventions doc](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
|
||||
|
||||
Gli *endpoints* delle API, la lista delle risorse esposte ed i relativi esempi sono descritti in [API Reference](/docs/reference).
|
||||
|
||||
L'accesso alle API da remoto è discusso in [Controllare l'accesso alle API](/docs/reference/access-authn-authz/controlling-access/).
|
||||
|
||||
Le API di Kubernetes servono anche come riferimento per lo schema dichiarativo della configurazione del sistema stesso. Il comando [kubectl](/docs/reference/kubectl/overview/) può essere usato per creare, aggiornare, cancellare ed ottenere le istanze delle risorse esposte attraverso le API.
|
||||
|
||||
Kubernetes assicura la persistenza del suo stato (al momento in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) usando la rappresentazione delle risorse implementata dalle API.
|
||||
|
||||
Kubernetes stesso è diviso in differenti componenti, i quali interagiscono tra loro attraverso le stesse API.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Evoluzione delle API
|
||||
|
||||
In base alla nostra esperienza, ogni sistema di successo ha bisogno di evolvere, ovvero deve estendersi aggiungendo funzionalità o modificando quelle esistenti per adattarle a nuovi casi d'uso. Le API di Kubernetes sono quindi destinate a cambiare e ad estendersi. In generale, ci si deve aspettare che nuove risorse vengano aggiunte di frequente così come nuovi campi possano altresì essere aggiunti a risorse esistenti. L'eliminazione di risorse o di campi deve seguire la [politica di deprecazione delle API](/docs/reference/using-api/deprecation-policy/).
|
||||
|
||||
In cosa consiste una modifica compatibile e come modificare le API è descritto dal [API change document](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md).
|
||||
|
||||
## Definizioni OpenAPI e Swagger
|
||||
|
||||
La documentazione completa e dettagliata delle API è fornita attraverso la specifica [OpenAPI](https://www.openapis.org/).
|
||||
|
||||
Dalla versione 1.10 di Kubernetes, l'API server di Kubernetes espone le specifiche OpenAPI attraverso il seguente *endpoint* `/openapi/v2`. Attraverso i seguenti *headers* HTTP è possibile richiedere un formato specifico:
|
||||
|
||||
Header | Possibili Valori
|
||||
------ | ---------------
|
||||
Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (il content-type di default è `application/json` per `*/*` ovvero questo header può anche essere omesso)
|
||||
Accept-Encoding | `gzip` (questo header è facoltativo)
|
||||
|
||||
Prima della versione 1.14, gli *endpoints* che includono il formato del nome all'interno del segmento (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`)
|
||||
espongono le specifiche OpenAPI in formati differenti. Questi *endpoints* sono deprecati e saranno rimossi nella versione 1.14 di Kubernetes.
|
||||
|
||||
**Esempi per ottenere le specifiche OpenAPI**:
|
||||
|
||||
Prima della 1.10 | Dalla versione 1.10 di Kubernetes
|
||||
---------------- | -----------------------------
|
||||
GET /swagger.json | GET /openapi/v2 **Accept**: application/json
|
||||
GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
|
||||
GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf **Accept-Encoding**: gzip
|
||||
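Ad esempio, con `kubectl proxy` in esecuzione è possibile richiedere le specifiche in formato JSON (comandi a puro titolo illustrativo; la porta `8080` è una scelta arbitraria):
|
||||
```shell
|
||||
kubectl proxy --port=8080 &
|
||||
curl -H 'Accept: application/json' http://localhost:8080/openapi/v2
|
||||
```
|
||||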
|
||||
Kubernetes implementa per le sue API anche una serializzazione alternativa basata sul formato Protobuf, pensata principalmente per la comunicazione intra-cluster e documentata nella seguente [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md); i file IDL per ciascuno schema si trovano nei *Go packages* che definiscono i tipi delle API.
|
||||
|
||||
Prima della versione 1.14, l'*apiserver* di Kubernetes esponeva anche un *endpoint*, `/swaggerapi`, che poteva essere usato per ottenere
|
||||
la documentazione per le API di Kubernetes secondo le specifiche [Swagger v1.2](http://swagger.io/).
|
||||
Questo *endpoint* è deprecato, ed è stato rimosso nella versione 1.14 di Kubernetes.
|
||||
|
||||
## Versionamento delle API
|
||||
|
||||
Per facilitare l'eliminazione di campi specifici o la modifica della rappresentazione di una data risorsa, Kubernetes supporta molteplici versioni della stessa API disponibili attraverso differenti indirizzi, come ad esempio `/api/v1` oppure
|
||||
`/apis/extensions/v1beta1`.
|
||||
|
||||
Abbiamo deciso di versionare a livello di API piuttosto che a livello di risorsa o di campo per assicurare che una data API rappresenti una vista chiara e consistente delle risorse di sistema e dei loro comportamenti, e per abilitare un controllo degli accessi sia per le API in via di decommissionamento che per quelle sperimentali.
|
||||
|
||||
Si noti che il versionamento delle API ed il versionamento del Software sono collegati solo indirettamente. La [API and release versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) descrive la relazione tra le versioni delle API e le versioni del Software.
|
||||
|
||||
Differenti versioni delle API implicano differenti livelli di stabilità e supporto. I criteri per ciascun livello sono descritti in dettaglio nella [API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions) e sono qui ricapitolati:
|
||||
|
||||
- Livello alpha:
|
||||
- Il nome di versione contiene `alpha` (e.g. `v1alpha1`).
|
||||
- Potrebbe contenere dei *bug*. Abilitare questa funzionalità potrebbe esporre al rischio di *bug*. Disabilitata di default.
|
||||
- Il supporto di questa funzionalità potrebbe essere rimosso in ogni momento senza previa notifica.
|
||||
- Questa API potrebbe cambiare in modo incompatibile in rilasci futuri del Software e senza previa notifica.
|
||||
- Se ne raccomanda l'utilizzo solo in *cluster* di test creati per un breve periodo di vita, a causa di potenziali *bug* e della mancanza di un supporto di lungo periodo.
|
||||
- Livello beta:
|
||||
- Il nome di versione contiene `beta` (e.g. `v2beta3`).
|
||||
- Il codice è ben testato. Abilitare la funzionalità è considerato sicuro. Abilitata di default.
|
||||
- Il supporto per la funzionalità nel suo complesso non sarà rimosso, tuttavia potrebbe subire delle modifiche.
|
||||
- Lo schema e/o la semantica delle risorse potrebbe cambiare in modo incompatibile in successivi rilasci beta o stabili. Nel caso questo dovesse verificarsi, verranno fornite istruzioni per la migrazione alla versione successiva. Questo potrebbe richiedere la cancellazione, la modifica e la ri-creazione degli oggetti supportati da questa API. Il processo di modifica potrebbe richiedere un'attenta valutazione. La modifica potrebbe richiedere un periodo di non disponibilità dell'applicazione che utilizza questa funzionalità.
|
||||
- Raccomandata solo per applicazioni non critiche per la vostra impresa a causa dei potenziali cambiamenti incompatibili in rilasci successivi. Se avete più *clusters* che possono essere aggiornati separatamente, potreste essere in grado di gestire meglio questa limitazione.
|
||||
- **Per favore utilizzate le nostre versioni beta e forniteci riscontri relativamente ad esse! Una volta promosse a stabili, potrebbe non essere semplice apportare cambiamenti successivi.**
|
||||
- Livello stabile:
|
||||
- Il nome di versione è `vX` dove `X` è un intero.
|
||||
- Le funzionalità relative alle versioni stabili continueranno ad essere presenti per parecchie versioni successive.
|
||||
|
||||
## API groups
|
||||
|
||||
Per facilitare l'estendibilità delle API di Kubernetes, sono stati implementati gli [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
|
||||
L'*API group* è specificato nel percorso REST ed anche nel campo `apiVersion` di un oggetto serializzato.
|
||||
|
||||
Al momento ci sono diversi *API groups* in uso:
|
||||
|
||||
1. Il gruppo *core*, spesso referenziato come il *legacy group*, è disponibile al percorso REST `/api/v1` ed utilizza `apiVersion: v1`.
|
||||
|
||||
1. I gruppi basati su un nome specifico sono disponibili attraverso il percorso REST `/apis/$GROUP_NAME/$VERSION`, ed usano `apiVersion: $GROUP_NAME/$VERSION` (e.g. `apiVersion: batch/v1`). La lista completa degli *API groups* supportati è descritta nel documento [Kubernetes API reference](/docs/reference/). Un esempio di interrogazione di questi percorsi è mostrato qui sotto.
|
||||
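Ad esempio, con il proxy di kubectl attivo (vedi l'esempio precedente), i due tipi di percorsi si possono interrogare così (percorsi a titolo di esempio):
|
||||
```shell
|
||||
# gruppo core (legacy): /api/v1
|
||||
curl http://localhost:8080/api/v1/namespaces/default/pods
|
||||
# gruppo con nome: /apis/$GROUP_NAME/$VERSION
|
||||
curl http://localhost:8080/apis/batch/v1/namespaces/default/jobs
|
||||
```
|
||||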
|
||||
Vi sono due modi supportati per estendere le API attraverso le [*custom resources*](/docs/concepts/api-extension/custom-resources/):
|
||||
|
||||
1. [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)
|
||||
è pensato per utenti con esigenze CRUD basilari.
|
||||
1. Utenti che necessitano di un set completo di nuove API che utilizzi appieno la semantica di Kubernetes possono implementare il loro *apiserver* ed utilizzare l'[*aggregator*](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/)
|
||||
per fornire ai propri utilizzatori la stessa esperienza a cui sono abituati con le API incluse nativamente in Kubernetes.
|
||||
|
||||
|
||||
## Abilitare o disabilitare gli *API groups*
|
||||
|
||||
Alcune risorse ed *API groups* sono abilitati di default. Questi possono essere abilitati o disabilitati attraverso il flag `--runtime-config`
|
||||
applicato sull'*apiserver*. `--runtime-config` accetta valori separati da virgola. Per esempio: per disabilitare `batch/v1`, usa la configurazione `--runtime-config=batch/v1=false`; per abilitare `batch/v2alpha1`, usa `--runtime-config=batch/v2alpha1`.
|
||||
Il *flag* accetta set di coppie *chiave/valore* separati da virgola che descrivono la configurazione a *runtime* dell'*apiserver*.
|
||||
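Uno schizzo puramente indicativo (gli altri flag dell'*apiserver* sono omessi):
|
||||
```shell
|
||||
# disabilita batch/v1 e abilita batch/v2alpha1
|
||||
kube-apiserver --runtime-config=batch/v1=false,batch/v2alpha1=true
|
||||
```
|
||||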
|
||||
{{< note >}}Abilitare o disabilitare risorse o gruppi richiede il riavvio dell'*apiserver* e del *controller-manager* affinché le modifiche specificate attraverso il flag `--runtime-config` abbiano effetto.{{< /note >}}
|
||||
|
||||
## Abilitare specifiche risorse nel gruppo extensions/v1beta1
|
||||
|
||||
DaemonSets, Deployments, StatefulSets, NetworkPolicies, PodSecurityPolicies e ReplicaSets presenti nel gruppo di API `extensions/v1beta1` sono disabilitati di default.
|
||||
Per esempio: per abilitare deployments e daemonsets, utilizza la seguente configurazione
|
||||
`--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.
|
||||
|
||||
{{< note >}}Abilitare/disabilitare una singola risorsa è supportato solo per il gruppo di API `extensions/v1beta1` per ragioni storiche.{{< /note >}}
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,75 @@
|
|||
---
|
||||
title: Tutorials
|
||||
main_menu: true
|
||||
weight: 60
|
||||
content_template: templates/concept
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Questa sezione della documentazione di Kubernetes contiene i tutorials.
|
||||
Un tutorial mostra come raggiungere un obiettivo più complesso di un singolo
|
||||
[task](/docs/tasks/). Solitamente un tutorial ha diverse sezioni, ognuna delle quali
|
||||
consiste in una sequenza di più task.
|
||||
Prima di procedere con vari tutorial, raccomandiamo di aggiungere il
|
||||
[Glossario](/docs/reference/glossary/) ai tuoi bookmark per riferimenti successivi.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Per cominciare
|
||||
|
||||
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) è un approfondito tutorial che aiuta a capire cosa è Kubernetes e che permette di testare in modo interattivo alcune semplici funzionalità di Kubernetes.
|
||||
|
||||
* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
|
||||
|
||||
* [Hello Minikube](/docs/tutorials/hello-minikube/)
|
||||
|
||||
## Configurazione
|
||||
|
||||
* [Configurare Redis utilizzando una ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
|
||||
|
||||
## Stateless Applications
|
||||
|
||||
* [Esporre un External IP Address per permettere l'accesso alle applicazioni nel Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
|
||||
|
||||
* [Esempio: Rilasciare l'applicazione PHP Guestbook con Redis](/docs/tutorials/stateless-application/guestbook/)
|
||||
|
||||
## Stateful Applications
|
||||
|
||||
* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
|
||||
* [Esempio: WordPress e MySQL con i PersistentVolumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
|
||||
|
||||
* [Esempio: Rilasciare Cassandra con i StatefulSets](/docs/tutorials/stateful-application/cassandra/)
|
||||
|
||||
* [Eseguire ZooKeeper, un sistema distribuito CP](/docs/tutorials/stateful-application/zookeeper/)
|
||||
|
||||
## CI/CD Pipelines
|
||||
|
||||
* [Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/5/set-cicd-pipeline-kubernetes-part-1-overview)
|
||||
|
||||
* [Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)](https://www.linux.com/blog/learn/chapter/Intro-to-Kubernetes/2017/6/set-cicd-pipeline-jenkins-pod-kubernetes-part-2)
|
||||
|
||||
* [Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/run-and-scale-distributed-crossword-puzzle-app-cicd-kubernetes-part-3)
|
||||
|
||||
* [Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)](https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4)
|
||||
|
||||
## Clusters
|
||||
|
||||
* [AppArmor](/docs/tutorials/clusters/apparmor/)
|
||||
|
||||
## Servizi
|
||||
|
||||
* [Utilizzare Source IP](/docs/tutorials/services/source-ip/)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
Se sei interessato a scrivere un tutorial, vedi
|
||||
[Utilizzare i Page Templates](/docs/home/contribute/page-templates/)
|
||||
per informazioni su come creare una pagina di tutorial e sul relativo template.
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,280 @@
|
|||
---
|
||||
title: Hello Minikube
|
||||
content_template: templates/tutorial
|
||||
weight: 5
|
||||
menu:
|
||||
main:
|
||||
title: "Cominciamo!"
|
||||
weight: 10
|
||||
post: >
|
||||
<p>Sei pronto a cominciare con Kubernetes? Crea un Kubernetes cluster ed esegui un'applicazione di esempio.</p>
|
||||
card:
|
||||
name: tutorials
|
||||
weight: 10
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Questo tutorial mostra come eseguire una semplice applicazione in Kubernetes
|
||||
utilizzando [Minikube](/docs/setup/learning-environment/minikube) e Katacoda.
|
||||
Katacoda permette di operare su un'installazione di Kubernetes dal tuo browser.
|
||||
|
||||
{{< note >}}
|
||||
In alternativa, è possibile eseguire questo tutorial [installando minikube](/docs/tasks/tools/install-minikube/) localmente.
|
||||
{{< /note >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture objectives %}}
|
||||
|
||||
* Rilasciare una semplice applicazione su Minikube.
|
||||
* Eseguire l'applicazione.
|
||||
* Visualizzare i log dell'applicazione.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
Questo tutorial fornisce una container image che utilizza NGINX per rispondere a tutte le richieste
|
||||
con un echo che visualizza i dati della richiesta stessa.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture lessoncontent %}}
|
||||
|
||||
## Crea un Minikube cluster
|
||||
|
||||
1. Fai click su **Launch Terminal**
|
||||
|
||||
{{< kat-button >}}
|
||||
|
||||
{{< note >}}Se hai installato Minikube localmente, esegui `minikube start`.{{< /note >}}
|
||||
|
||||
2. Apri la console di Kubernetes nel browser:
|
||||
|
||||
```shell
|
||||
minikube dashboard
|
||||
```
|
||||
|
||||
3. Katacoda environment only: In alto nella finestra del terminale, fai click sul segno più, e a seguire click su **Select port to view on Host 1**.
|
||||
|
||||
4. Katacoda environment only: Inserisci `30000`, a seguire click su **Display Port**.
|
||||
|
||||
## Crea un Deployment
|
||||
|
||||
Un Kubernetes [*Pod*](/docs/concepts/workloads/pods/pod/) è un gruppo di uno o più Containers,
|
||||
che sono uniti tra loro dal punto di vista amministrativo e che condividono lo stesso network.
|
||||
Il Pod in questo tutorial ha un solo Container. Un Kubernetes
|
||||
[*Deployment*](/docs/concepts/workloads/controllers/deployment/) monitora lo stato del Pod ed
|
||||
eventualmente provvede a farlo ripartire nel caso questo termini. L'uso dei Deployments è la
|
||||
modalità raccomandata per gestire la creazione e lo scaling dei Pods.
|
||||
|
||||
|
||||
1. Usa il comando `kubectl create` per creare un Deployment che gestisce un singolo Pod. Il Pod
|
||||
eseguirà un Container basato sulla Docker image specificata.
|
||||
|
||||
```shell
|
||||
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
|
||||
```
|
||||
|
||||
2. Visualizza il Deployment:
|
||||
|
||||
```shell
|
||||
kubectl get deployments
|
||||
```
|
||||
|
||||
L'output del comando è simile a:
|
||||
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
hello-node 1/1 1 1 1m
|
||||
```
|
||||
|
||||
3. Visualizza il Pod creato dal Deployment:
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
L'output del comando è simile a:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
4. Visualizza gli eventi del cluster Kubernetes:
|
||||
|
||||
```shell
|
||||
kubectl get events
|
||||
```
|
||||
|
||||
5. Visualizza la configurazione di `kubectl`:
|
||||
|
||||
```shell
|
||||
kubectl config view
|
||||
```
|
||||
|
||||
{{< note >}}Per maggiori informazioni sui comandi di `kubectl`, vedi [kubectl overview](/docs/user-guide/kubectl-overview/).{{< /note >}}
|
||||
|
||||
## Crea un Service
|
||||
|
||||
Con le impostazioni di default, un Pod è accessibile solamente dagli indirizzi IP interni
|
||||
al Kubernetes cluster. Per far sì che il Container `hello-node` sia accessibile dall'esterno
|
||||
del Kubernetes virtual network, è necessario esporre il Pod utilizzando un
|
||||
Kubernetes [*Service*](/docs/concepts/services-networking/service/).
|
||||
|
||||
1. Esponi il Pod su internet utilizzando il comando `kubectl expose`:
|
||||
|
||||
```shell
|
||||
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
|
||||
```
|
||||
|
||||
Il flag `--type=LoadBalancer` indica la volontà di esporre il Service
|
||||
all'esterno del Kubernetes cluster.
|
||||
|
||||
2. Visualizza il Servizio appena creato:
|
||||
|
||||
```shell
|
||||
kubectl get services
|
||||
```
|
||||
|
||||
L'output del comando è simile a:
|
||||
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
hello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s
|
||||
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m
|
||||
```
|
||||
|
||||
Nei cloud providers che supportano i servizi di tipo load balancers,
|
||||
viene fornito un indirizzo IP pubblico per permettere l'accesso al Service. Su Minikube,
|
||||
il service type `LoadBalancer` rende il Service accessibile attraverso il comando `minikube service`.
|
||||
|
||||
3. Esegui il comando:
|
||||
|
||||
```shell
|
||||
minikube service hello-node
|
||||
```
|
||||
|
||||
4. Katacoda environment only: Fai click sul segno più, e a seguire click su **Select port to view on Host 1**.
|
||||
|
||||
5. Katacoda environment only: Fai attenzione al numero di 5 cifre visualizzato a fianco di `8080` nell'output del comando. Questo port number è generato casualmente e può essere diverso nel tuo caso. Inserisci il tuo port number nella textbox, e a seguire fai click su **Display Port**. Nell'esempio precedente, avresti scritto `30369`.
|
||||
|
||||
Questo apre una finestra nel browser dove l'applicazione visualizza l'echo delle richieste ricevute.
|
||||
|
||||
## Attiva gli addons
|
||||
|
||||
Minikube include un set di {{< glossary_tooltip text="addons" term_id="addons" >}} che possono essere attivati, disattivati o eseguiti nell'ambiente Kubernetes locale.
|
||||
|
||||
1. Elenca gli addons disponibili:
|
||||
|
||||
```shell
|
||||
minikube addons list
|
||||
```
|
||||
|
||||
L'output del comando è simile a:
|
||||
|
||||
```
|
||||
addon-manager: enabled
|
||||
dashboard: enabled
|
||||
default-storageclass: enabled
|
||||
efk: disabled
|
||||
freshpod: disabled
|
||||
gvisor: disabled
|
||||
helm-tiller: disabled
|
||||
ingress: disabled
|
||||
ingress-dns: disabled
|
||||
logviewer: disabled
|
||||
metrics-server: disabled
|
||||
nvidia-driver-installer: disabled
|
||||
nvidia-gpu-device-plugin: disabled
|
||||
registry: disabled
|
||||
registry-creds: disabled
|
||||
storage-provisioner: enabled
|
||||
storage-provisioner-gluster: disabled
|
||||
```
|
||||
|
||||
2. Attiva un addon, per esempio, `metrics-server`:
|
||||
|
||||
```shell
|
||||
minikube addons enable metrics-server
|
||||
```
|
||||
|
||||
L'output del comando è simile a:
|
||||
|
||||
```
|
||||
metrics-server was successfully enabled
|
||||
```
|
||||
|
||||
3. Visualizza i Pods ed i Service creati in precedenza:
|
||||
|
||||
```shell
|
||||
kubectl get pod,svc -n kube-system
|
||||
```
|
||||
|
||||
L'output del comando è simile a:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m
|
||||
pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m
|
||||
pod/metrics-server-67fb648c5 1/1 Running 0 26s
|
||||
pod/etcd-minikube 1/1 Running 0 34m
|
||||
pod/influxdb-grafana-b29w8 2/2 Running 0 26s
|
||||
pod/kube-addon-manager-minikube 1/1 Running 0 34m
|
||||
pod/kube-apiserver-minikube 1/1 Running 0 34m
|
||||
pod/kube-controller-manager-minikube 1/1 Running 0 34m
|
||||
pod/kube-proxy-rnlps 1/1 Running 0 34m
|
||||
pod/kube-scheduler-minikube 1/1 Running 0 34m
|
||||
pod/storage-provisioner 1/1 Running 0 34m
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
service/metrics-server ClusterIP 10.96.241.45 <none> 80/TCP 26s
|
||||
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m
|
||||
service/monitoring-grafana NodePort 10.99.24.54 <none> 80:30002/TCP 26s
|
||||
service/monitoring-influxdb ClusterIP 10.111.169.94 <none> 8083/TCP,8086/TCP 26s
|
||||
```
|
||||
|
||||
4. Disabilita `metrics-server`:
|
||||
|
||||
```shell
|
||||
minikube addons disable metrics-server
|
||||
```
|
||||
|
||||
L'output del comando è simile a:
|
||||
|
||||
```
|
||||
metrics-server was successfully disabled
|
||||
```
|
||||
|
||||
## Clean up
|
||||
|
||||
Adesso puoi procedere a fare clean up delle risorse che hai creato nel tuo cluster:
|
||||
|
||||
```shell
|
||||
kubectl delete service hello-node
|
||||
kubectl delete deployment hello-node
|
||||
```
|
||||
|
||||
Se lo desideri, puoi fermare la macchina virtuale (VM) di Minikube:
|
||||
|
||||
```shell
|
||||
minikube stop
|
||||
```
|
||||
|
||||
Se lo desideri, puoi cancellare la VM di Minikube:
|
||||
|
||||
```shell
|
||||
minikube delete
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
* Approfondisci la tua conoscenza dei [Deployments](/docs/concepts/workloads/controllers/deployment/).
|
||||
* Approfondisci la tua conoscenza di [Rilasciare applicazioni](/docs/tasks/run-application/run-stateless-application-deployment/).
|
||||
* Approfondisci la tua conoscenza dei [Services](/docs/concepts/services-networking/service/).
|
||||
|
||||
{{% /capture %}}
|
|
@ -48,59 +48,59 @@ min-kubernetes-server-version: 1.18
|
|||
|
||||
## 업그레이드할 버전 결정
|
||||
|
||||
1. 최신의 안정 버전인 1.18을 찾는다.
|
||||
최신의 안정 버전인 1.18을 찾는다.
|
||||
|
||||
{{< tabs name="k8s_install_versions" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
{{< tabs name="k8s_install_versions" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
apt update
|
||||
apt-cache madison kubeadm
|
||||
# 목록에서 최신 버전 1.18을 찾는다
|
||||
# 1.18.x-00과 같아야 한다. 여기서 x는 최신 패치이다.
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
yum list --showduplicates kubeadm --disableexcludes=kubernetes
|
||||
# 목록에서 최신 버전 1.18을 찾는다
|
||||
# 1.18.x-0과 같아야 한다. 여기서 x는 최신 패치이다.
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
## 컨트롤 플레인 노드 업그레이드
|
||||
|
||||
### 첫 번째 컨트롤 플레인 노드 업그레이드
|
||||
|
||||
1. 첫 번째 컨트롤 플레인 노드에서 kubeadm을 업그레이드한다.
|
||||
- 첫 번째 컨트롤 플레인 노드에서 kubeadm을 업그레이드한다.
|
||||
|
||||
{{< tabs name="k8s_install_kubeadm_first_cp" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
{{< tabs name="k8s_install_kubeadm_first_cp" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
# 1.18.x-00에서 x를 최신 패치 버전으로 바꾼다.
|
||||
apt-mark unhold kubeadm && \
|
||||
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
|
||||
apt-mark hold kubeadm
|
||||
|
||||
-
|
||||
# apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
|
||||
apt-get update && \
|
||||
apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
# 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다.
|
||||
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
1. 다운로드하려는 버전이 잘 받아졌는지 확인한다.
|
||||
- 다운로드하려는 버전이 잘 받아졌는지 확인한다.
|
||||
|
||||
```shell
|
||||
kubeadm version
|
||||
```
|
||||
|
||||
1. 컨트롤 플레인 노드를 드레인(drain)한다.
|
||||
- 컨트롤 플레인 노드를 드레인(drain)한다.
|
||||
|
||||
```shell
|
||||
# <cp-node-name>을 컨트롤 플레인 노드 이름으로 바꾼다.
|
||||
kubectl drain <cp-node-name> --ignore-daemonsets
|
||||
```
|
||||
|
||||
1. 컨트롤 플레인 노드에서 다음을 실행한다.
|
||||
- 컨트롤 플레인 노드에서 다음을 실행한다.
|
||||
|
||||
```shell
|
||||
sudo kubeadm upgrade plan
|
||||
|
@ -143,13 +143,13 @@ min-kubernetes-server-version: 1.18
|
|||
|
||||
이 명령은 클러스터를 업그레이드할 수 있는지를 확인하고, 업그레이드할 수 있는 버전을 가져온다.
|
||||
|
||||
{{< note >}}
|
||||
또한 `kubeadm upgrade` 는 이 노드에서 관리하는 인증서를 자동으로 갱신한다.
|
||||
인증서 갱신을 하지 않으려면 `--certificate-renewal=false` 플래그를 사용할 수 있다.
|
||||
자세한 내용은 [인증서 관리 가이드](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)를 참고한다.
|
||||
{{</ note >}}
|
||||
{{< note >}}
|
||||
또한 `kubeadm upgrade` 는 이 노드에서 관리하는 인증서를 자동으로 갱신한다.
|
||||
인증서 갱신을 하지 않으려면 `--certificate-renewal=false` 플래그를 사용할 수 있다.
|
||||
자세한 내용은 [인증서 관리 가이드](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)를 참고한다.
|
||||
{{</ note >}}
|
||||
|
||||
1. 업그레이드할 버전을 선택하고, 적절한 명령을 실행한다. 예를 들면 다음과 같다.
|
||||
- 업그레이드할 버전을 선택하고, 적절한 명령을 실행한다. 예를 들면 다음과 같다.
|
||||
|
||||
```shell
|
||||
# 이 업그레이드를 위해 선택한 패치 버전으로 x를 바꾼다.
|
||||
|
@ -238,7 +238,7 @@ min-kubernetes-server-version: 1.18
|
|||
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
|
||||
```
|
||||
|
||||
1. CNI 제공자 플러그인을 수동으로 업그레이드한다.
|
||||
- CNI 제공자 플러그인을 수동으로 업그레이드한다.
|
||||
|
||||
CNI(컨테이너 네트워크 인터페이스) 제공자는 자체 업그레이드 지침을 따를 수 있다.
|
||||
[애드온](/docs/concepts/cluster-administration/addons/) 페이지에서
|
||||
|
@ -246,7 +246,7 @@ min-kubernetes-server-version: 1.18
|
|||
|
||||
CNI 제공자가 데몬셋(DaemonSet)으로 실행되는 경우 추가 컨트롤 플레인 노드에는 이 단계가 필요하지 않다.
|
||||
|
||||
1. 컨트롤 플레인 노드에 적용된 cordon을 해제한다.
|
||||
- 컨트롤 플레인 노드에 적용된 cordon을 해제한다.
|
||||
|
||||
```shell
|
||||
# <cp-node-name>을 컨트롤 플레인 노드 이름으로 바꾼다.
|
||||
|
@ -255,46 +255,46 @@ min-kubernetes-server-version: 1.18
|
|||
|
||||
### 추가 컨트롤 플레인 노드 업그레이드
|
||||
|
||||
1. 첫 번째 컨트롤 플레인 노드와 동일하지만 다음을 사용한다.
|
||||
첫 번째 컨트롤 플레인 노드와 동일하지만 다음을 사용한다.
|
||||
|
||||
```
|
||||
sudo kubeadm upgrade node
|
||||
```
|
||||
```
|
||||
sudo kubeadm upgrade node
|
||||
```
|
||||
|
||||
아래 명령 대신 위의 명령을 사용한다.
|
||||
아래 명령 대신 위의 명령을 사용한다.
|
||||
|
||||
```
|
||||
sudo kubeadm upgrade apply
|
||||
```
|
||||
```
|
||||
sudo kubeadm upgrade apply
|
||||
```
|
||||
|
||||
또한 `sudo kubeadm upgrade plan` 은 필요하지 않다.
|
||||
또한 `sudo kubeadm upgrade plan` 은 필요하지 않다.
|
||||
|
||||
### kubelet과 kubectl 업그레이드
|
||||
|
||||
1. 모든 컨트롤 플레인 노드에서 kubelet 및 kubectl을 업그레이드한다.
|
||||
모든 컨트롤 플레인 노드에서 kubelet 및 kubectl을 업그레이드한다.
|
||||
|
||||
{{< tabs name="k8s_install_kubelet" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
{{< tabs name="k8s_install_kubelet" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
# 1.18.x-00의 x를 최신 패치 버전으로 바꾼다
|
||||
apt-mark unhold kubelet kubectl && \
|
||||
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
|
||||
apt-mark hold kubelet kubectl
|
||||
|
||||
-
|
||||
# apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
|
||||
apt-get update && \
|
||||
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
# 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다
|
||||
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
1. kubelet을 다시 시작한다.
|
||||
kubelet을 다시 시작한다.
|
||||
|
||||
```shell
|
||||
sudo systemctl restart kubelet
|
||||
```
|
||||
```shell
|
||||
sudo systemctl restart kubelet
|
||||
```
|
||||
|
||||
## 워커 노드 업그레이드
|
||||
|
||||
|
@ -303,28 +303,28 @@ min-kubernetes-server-version: 1.18
|
|||
|
||||
### kubeadm 업그레이드
|
||||
|
||||
1. 모든 워커 노드에서 kubeadm을 업그레이드한다.
|
||||
- 모든 워커 노드에서 kubeadm을 업그레이드한다.
|
||||
|
||||
{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
# 1.18.x-00의 x를 최신 패치 버전으로 바꾼다
|
||||
apt-mark unhold kubeadm && \
|
||||
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
|
||||
apt-mark hold kubeadm
|
||||
|
||||
-
|
||||
# apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
|
||||
apt-get update && \
|
||||
apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
# 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다
|
||||
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
### 노드 드레인
|
||||
|
||||
1. 스케줄 불가능(unschedulable)으로 표시하고 워크로드를 축출하여 유지 보수할 노드를 준비한다.
|
||||
- 스케줄 불가능(unschedulable)으로 표시하고 워크로드를 축출하여 유지 보수할 노드를 준비한다.
|
||||
|
||||
```shell
|
||||
# <node-to-drain>을 드레이닝하려는 노드 이름으로 바꾼다.
|
||||
|
@ -349,26 +349,26 @@ min-kubernetes-server-version: 1.18
|
|||
|
||||
### kubelet과 kubectl 업그레이드
|
||||
|
||||
1. 모든 워커 노드에서 kubelet 및 kubectl을 업그레이드한다.
|
||||
- 모든 워커 노드에서 kubelet 및 kubectl을 업그레이드한다.
|
||||
|
||||
{{< tabs name="k8s_kubelet_and_kubectl" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
{{< tabs name="k8s_kubelet_and_kubectl" >}}
|
||||
{{% tab name="Ubuntu, Debian 또는 HypriotOS" %}}
|
||||
# 1.18.x-00의 x를 최신 패치 버전으로 바꾼다
|
||||
apt-mark unhold kubelet kubectl && \
|
||||
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
|
||||
apt-mark hold kubelet kubectl
|
||||
|
||||
-
|
||||
# apt-get 버전 1.1부터 다음 방법을 사용할 수도 있다
|
||||
apt-get update && \
|
||||
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL 또는 Fedora" %}}
|
||||
# 1.18.x-0에서 x를 최신 패치 버전으로 바꾼다
|
||||
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
1. kubelet을 다시 시작한다.
|
||||
- kubelet을 다시 시작한다.
|
||||
|
||||
```shell
|
||||
sudo systemctl restart kubelet
|
||||
|
@ -376,7 +376,7 @@ min-kubernetes-server-version: 1.18
|
|||
|
||||
### 노드에 적용된 cordon 해제
|
||||
|
||||
1. 스케줄 가능(schedulable)으로 표시하여 노드를 다시 온라인 상태로 만든다.
|
||||
- 스케줄 가능(schedulable)으로 표시하여 노드를 다시 온라인 상태로 만든다.
|
||||
|
||||
```shell
|
||||
# <node-to-drain>을 노드의 이름으로 바꾼다.
|
||||
|
|
|
@ -0,0 +1,5 @@
|
|||
---
|
||||
title: "Configuração"
|
||||
weight: 80
|
||||
---
|
||||
|
|
@ -0,0 +1,197 @@
|
|||
---
|
||||
reviewers:
|
||||
- dchen1107
|
||||
- egernst
|
||||
- tallclair
|
||||
title: Pod Overhead
|
||||
content_template: templates/concept
|
||||
weight: 50
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
|
||||
Quando executa um Pod num nó, o próprio Pod usa uma quantidade de recursos do sistema. Estes
|
||||
recursos são adicionais aos recursos necessários para executar o(s) _container(s)_ dentro do Pod.
|
||||
Sobrecarga de Pod, do inglês _Pod Overhead_, é uma funcionalidade que serve para contabilizar os recursos consumidos pela
|
||||
infraestrutura do Pod para além das solicitações e limites do _container_.
|
||||
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
No Kubernetes, a sobrecarga de _Pods_ é definida no tempo de
|
||||
[admissão](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
|
||||
de acordo com a sobrecarga associada à
|
||||
[RuntimeClass](/docs/concepts/containers/runtime-class/) do _Pod_.
|
||||
|
||||
Quando é ativada a Sobrecarga de Pod, a sobrecarga é considerada adicionalmente à soma das
|
||||
solicitações de recursos do _container_ ao agendar um Pod. Semelhantemente, o _kubelet_
|
||||
incluirá a sobrecarga do Pod ao dimensionar o cgroup do Pod e ao
|
||||
executar a classificação de despejo do Pod.
|
||||
|
||||
## Possibilitando a Sobrecarga do Pod {#set-up}
|
||||
|
||||
Terá de garantir que o [portão de funcionalidade](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
`PodOverhead` está ativo (está ativo por defeito a partir da versão 1.18)
|
||||
por todo o cluster, e que seja utilizada uma `RuntimeClass` que defina o campo `overhead`.
|
||||
|
||||
## Exemplo de uso
|
||||
|
||||
Para usar a funcionalidade PodOverhead, é necessário uma RuntimeClass que define o campo `overhead`.
|
||||
Por exemplo, poderia usar a definição da RuntimeClass abaixo com um _container runtime_ virtualizado
|
||||
que usa cerca de 120MiB por Pod para a máquina virtual e o sistema operativo convidado:
|
||||
|
||||
```yaml
|
||||
---
|
||||
kind: RuntimeClass
|
||||
apiVersion: node.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: kata-fc
|
||||
handler: kata-fc
|
||||
overhead:
|
||||
podFixed:
|
||||
memory: "120Mi"
|
||||
cpu: "250m"
|
||||
```
|
||||
|
||||
As cargas de trabalho que são criadas e que especificam o manipulador RuntimeClass `kata-fc` irão
|
||||
ter em conta a sobrecarga de memória e cpu para os cálculos da quota de recursos, agendamento de nós,
|
||||
assim como dimensionamento do cgroup do Pod.
|
||||
|
||||
Considere executar a seguinte carga de trabalho de exemplo, test-pod:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pod
|
||||
spec:
|
||||
runtimeClassName: kata-fc
|
||||
containers:
|
||||
- name: busybox-ctr
|
||||
image: busybox
|
||||
stdin: true
|
||||
tty: true
|
||||
resources:
|
||||
limits:
|
||||
cpu: 500m
|
||||
memory: 100Mi
|
||||
- name: nginx-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 1500m
|
||||
memory: 100Mi
|
||||
```
|
||||
|
||||
Na altura de admissão o [controlador de admissão](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) RuntimeClass
|
||||
atualiza o _PodSpec_ da carga de trabalho de forma a incluir o `overhead` como descrito na RuntimeClass. Se o _PodSpec_ já tiver este campo definido
|
||||
o _Pod_ será rejeitado. No exemplo dado, como apenas o nome do RuntimeClass é especificado, o controlador de admissão muda o _Pod_ de forma a
|
||||
incluir um `overhead`.
|
||||
|
||||
Depois do controlador de admissão RuntimeClass, pode verificar o _PodSpec_ atualizado:
|
||||
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
|
||||
```
|
||||
|
||||
O output é:
|
||||
```
|
||||
map[cpu:250m memory:120Mi]
|
||||
```
|
||||
|
||||
Se for definido um _ResourceQuota_, a soma dos pedidos dos _containers_ assim como o campo `overhead` são contados.
|
||||
|
||||
Quando o kube-scheduler está a decidir que nó deve executar um novo _Pod_, o agendador considera o `overhead` do _Pod_,
|
||||
assim como a soma de pedidos aos _containers_ para esse _Pod_. Para este exemplo, o agendador adiciona os
|
||||
pedidos e a sobrecarga, depois procura um nó com 2.25 CPU e 320 MiB de memória disponível.
|
||||
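Um cálculo ilustrativo, com os valores deste exemplo:
|
||||
```
|
||||
CPU:     500m + 1500m (containers) + 250m (overhead)  = 2250m
|
||||
Memória: 100Mi + 100Mi (containers) + 120Mi (overhead) = 320Mi
|
||||
```
|
||||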
|
||||
Assim que um _Pod_ é agendado a um nó, o kubelet nesse nó cria um novo {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
|
||||
para o _Pod_. É dentro deste _pod_ que o _container runtime_ subjacente vai criar _containers_.
|
||||
|
||||
Se o recurso tiver um limite definido para cada _container_ (_QoS_ garantida ou _Burstable QoS_ com limites definidos),
|
||||
o kubelet definirá um limite superior para o cgroup do _pod_ associado a esse recurso (cpu.cfs_quota_us para CPU
|
||||
e memory.limit_in_bytes de memória). Este limite superior é baseado na soma dos limites do _container_ mais o `overhead`
|
||||
definido no _PodSpec_.
|
||||
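A título ilustrativo, assumindo o período CFS padrão de 100ms (`cpu.cfs_period_us = 100000`):
|
||||
```
|
||||
memory.limit_in_bytes = 320Mi = 320 * 1024 * 1024 = 335544320
|
||||
cpu.cfs_quota_us      = 2.25 CPU * 100000us = 225000
|
||||
```
|
||||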
|
||||
Para o CPU, se o _Pod_ for QoS garantida ou _Burstable QoS_, o kubelet vai definir `cpu.shares` baseado na soma dos
|
||||
pedidos ao _container_ mais o `overhead` definido no _PodSpec_.
|
||||
|
||||
Olhando para o nosso exemplo, verifique os pedidos ao _container_ para a carga de trabalho:
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
|
||||
```
|
||||
|
||||
O total de pedidos ao _container_ é de 2000m de CPU e 200MiB de memória:
|
||||
```
|
||||
map[cpu:500m memory:100Mi] map[cpu:1500m memory:100Mi]
|
||||
```
|
||||
|
||||
Verifique isto contra o que é observado pelo nó:
|
||||
```bash
|
||||
kubectl describe node | grep test-pod -B2
|
||||
```
|
||||
|
||||
O output mostra que 2250m CPU e 320MiB de memória são solicitados, que inclui _PodOverhead_:
|
||||
```
|
||||
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
|
||||
--------- ---- ------------ ---------- --------------- ------------- ---
|
||||
default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m
|
||||
```
|
||||
|
||||
## Verificar os limites cgroup do Pod
|
||||
|
||||
Verifique os cgroups de memória do Pod no nó onde a carga de trabalho está em execução. No seguinte exemplo, [`crictl`] (https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
|
||||
é usado no nó, que fornece uma CLI para _container runtimes_ compatíveis com CRI. Isto é um
|
||||
exemplo avançado para mostrar o comportamento do _PodOverhead_, e não é esperado que os utilizadores precisem de verificar
|
||||
cgroups diretamente no nó.
|
||||
|
||||
Primeiro, no nó em particular, determine o identificador do _Pod_:
|
||||
|
||||
```bash
|
||||
# Execute no nó onde o Pod está agendado
|
||||
POD_ID="$(sudo crictl pods --name test-pod -q)"
|
||||
```
|
||||
|
||||
A partir disto, pode determinar o caminho do cgroup para o _Pod_:
|
||||
```bash
|
||||
# Execute no nó onde o Pod está agendado
|
||||
sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
|
||||
```
|
||||
|
||||
O caminho do cgroup resultante inclui o _container_ `pause` do _Pod_. O cgroup no nível do _Pod_ está um diretório acima.
|
||||
```
|
||||
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
|
||||
```
|
||||
|
||||
Neste caso especifico, o caminho do cgroup do pod é `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verifique a configuração cgroup de nível do _Pod_ para a memória:
|
||||
```bash
|
||||
# Execute no nó onde o Pod está agendado
|
||||
# Mude também o nome do cgroup de forma a combinar com o cgroup alocado ao pod.
|
||||
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
|
||||
```
|
||||
|
||||
Isto é 320 MiB, como esperado:
|
||||
```
|
||||
335544320
|
||||
```
|
||||
|
||||
### Observabilidade
|
||||
|
||||
Uma métrica `kube_pod_overhead` está disponível em [kube-state-metrics] (https://github.com/kubernetes/kube-state-metrics)
|
||||
para ajudar a identificar quando o _PodOverhead_ está a ser utilizado e para ajudar a observar a estabilidade das cargas de trabalho
|
||||
em execução com uma sobrecarga (_Overhead_) definida. Esta funcionalidade não está disponível na versão 1.9 do kube-state-metrics,
|
||||
mas é esperado num próximo _release_. Os utilizadores necessitarão entretanto de construir kube-state-metrics a partir da fonte.
|
||||
|
||||
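Once such a build is running in the cluster, a hedged sketch of pulling the metric through the API server proxy (the `kube-system` namespace, service name and `http-metrics` port name are assumptions about your deployment):

```bash
kubectl get --raw \
  /api/v1/namespaces/kube-system/services/kube-state-metrics:http-metrics/proxy/metrics \
  | grep kube_pod_overhead
```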
{{% /capture %}}

{{% capture whatsnext %}}

* [RuntimeClass](/docs/concepts/containers/runtime-class/)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)

{{% /capture %}}

@ -0,0 +1,5 @@
---
title: "Configure Pods and Containers"
weight: 20
---

@ -0,0 +1,341 @@
---
title: Configure Liveness, Readiness and Startup Probes
content_template: templates/task
weight: 110
---

{{% capture overview %}}

This page shows how to configure liveness, readiness and startup probes for containers.

The [kubelet](/docs/admin/kubelet/) uses liveness probes to know when to restart a container. For example, a liveness probe could catch a deadlock, where an application is running but unable to make progress. Restarting the application in such a state can help make it more available despite bugs.

The kubelet uses readiness probes to know when a container is ready to accept traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for a Service. While a Pod is not ready, it is removed from the Service's load balancers.

The kubelet uses startup probes to know when an application in a container has started. If such a probe is configured, it blocks liveness and readiness checks until it succeeds, making sure those probes do not interfere with the application's startup. This can be used to apply health checks to slow starting containers, avoiding them getting killed by the kubelet before they are up and running.

{{% /capture %}}

{{% capture prerequisites %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{% /capture %}}

{{% capture steps %}}

## Define a liveness command

Many applications running for long periods of time eventually break, and can only be recovered by a restart. Kubernetes provides liveness probes to detect and remedy such situations.

In this exercise, you create a Pod that runs a container based on the `k8s.gcr.io/busybox` image. Here is the configuration file for the Pod:

{{< codenew file="pods/probe/exec-liveness.yaml" >}}

In the configuration file, you can see that the Pod has a single `Container`. The `periodSeconds` field specifies that the kubelet should perform a liveness probe every 5 seconds. The `initialDelaySeconds` field tells the kubelet that it should wait 5 seconds before the first probe. To perform a probe, the kubelet executes the command `cat /tmp/healthy` in the target container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
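You can run the same check by hand once the Pod is up (a sketch; `kubectl exec` mirrors the exit code of the command it runs in the container):

```shell
kubectl exec liveness-exec -- cat /tmp/healthy && echo healthy || echo failing
```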
When the container starts, it executes this command:

```shell
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
```

For the first 30 seconds of the container's life, the file `/tmp/healthy` exists. So during the first 30 seconds, the command `cat /tmp/healthy` returns a success code. After 30 seconds, `cat /tmp/healthy` returns a failure code.

Create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/probe/exec-liveness.yaml
```

Within 30 seconds, view the Pod events:

```shell
kubectl describe pod liveness-exec
```

The output shows that no liveness probe has failed yet:

```
FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
--------- --------    -----   ----            -------------           --------    ------      -------
24s       24s         1       {default-scheduler }                    Normal      Scheduled   Successfully assigned liveness-exec to worker0
23s       23s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Pulling     pulling image "k8s.gcr.io/busybox"
23s       23s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Pulled      Successfully pulled image "k8s.gcr.io/busybox"
23s       23s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Created     Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s       23s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Started     Started container with docker id 86849c15382e
```

After 35 seconds, view the Pod events again:

```shell
kubectl describe pod liveness-exec
```

At the bottom of the output there are now messages indicating that the liveness probes have failed, and that the container has been killed and recreated.

```
FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
--------- --------    -----   ----            -------------           --------    ------      -------
37s       37s         1       {default-scheduler }                    Normal      Scheduled   Successfully assigned liveness-exec to worker0
36s       36s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Pulling     pulling image "k8s.gcr.io/busybox"
36s       36s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Pulled      Successfully pulled image "k8s.gcr.io/busybox"
36s       36s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Created     Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s       36s         1       {kubelet worker0}   spec.containers{liveness}   Normal      Started     Started container with docker id 86849c15382e
2s        2s          1       {kubelet worker0}   spec.containers{liveness}   Warning     Unhealthy   Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
```

Wait another 30 seconds, and verify that the container has been restarted:

```shell
kubectl get pod liveness-exec
```

The output shows that `RESTARTS` has been incremented:

```
NAME            READY     STATUS    RESTARTS   AGE
liveness-exec   1/1       Running   1          1m
```

## Define a liveness HTTP request

Another kind of liveness probe uses an HTTP GET request. Here is the configuration file for a Pod that runs a container based on the `k8s.gcr.io/liveness` image.

{{< codenew file="pods/probe/http-liveness.yaml" >}}

In the configuration file, you can see that the Pod has a single container. The `periodSeconds` field specifies that the kubelet should perform a liveness probe every 3 seconds. The `initialDelaySeconds` field tells the kubelet to wait 3 seconds before the first probe. To perform a probe, the kubelet sends an HTTP GET request to the server that is running in the container and listening on port 8080. If the handler for the server's `/healthz` path returns a success code, the kubelet considers the container to be alive and healthy. If the handler returns a failure code, the kubelet kills the container and restarts it.

Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.
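Once the Pod from this section is created (see below), one way to poke the handler by hand is a local port-forward (a sketch; run the `curl` within the container's first 10 seconds to see a 200):

```shell
kubectl port-forward pod/liveness-http 8080:8080 &
curl -i http://localhost:8080/healthz
```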
You can see the source code for the server in
[server.go](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/test/images/agnhost/liveness/server.go).

For the first 10 seconds that the container is alive, the `/healthz` handler returns a status of 200. After that, the handler returns a status of 500.

```go
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
    duration := time.Now().Sub(started)
    if duration.Seconds() > 10 {
        w.WriteHeader(500)
        w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
    } else {
        w.WriteHeader(200)
        w.Write([]byte("ok"))
    }
})
```

The kubelet starts performing health checks 3 seconds after the container starts. So the first couple of checks will succeed. But after 10 seconds the health checks will fail, and the kubelet will kill and restart the container.

To try the HTTP liveness check, create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/probe/http-liveness.yaml
```

After 10 seconds, view the Pod events to verify that liveness probes have failed and the container has been restarted:

```shell
kubectl describe pod liveness-http
```

In releases prior to v1.13 (including v1.13), if the environment variable `http_proxy` (or `HTTP_PROXY`) is set on the node where a Pod is running, the HTTP liveness probe uses that proxy. In releases after v1.13, local HTTP proxy environment variable settings do not affect the HTTP liveness probe.

## Define a TCP liveness probe

A third type of liveness probe uses a TCP socket. With this configuration, the kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.

{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}

As you can see, configuration for a TCP check is quite similar to an HTTP check. This example uses both readiness and liveness probes. The kubelet will send the first readiness probe 5 seconds after the container starts. It will attempt to connect to the `goproxy` container on port 8080. If the probe succeeds, the Pod will be marked as ready. The kubelet will continue to run this check every 10 seconds.

In addition to the readiness probe, this configuration includes a liveness probe. The kubelet will run the first liveness probe 15 seconds after the container starts. Just like the readiness probe, it will attempt to connect to the `goproxy` container on port 8080. If the liveness probe fails, the container will be restarted.

To try the TCP liveness check, create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/probe/tcp-liveness-readiness.yaml
```

After 15 seconds, view the Pod events to verify the liveness probe:

```shell
kubectl describe pod goproxy
```

## Use a named port

You can use a named
[ContainerPort](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#containerport-v1-core)
for HTTP or TCP liveness checks:

```yaml
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
```

## Protect slow starting containers with startup probes {#define-startup-probes}

Sometimes you have to deal with legacy applications that might require additional startup time on their first initialization. In such cases it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe in the first place. The trick is to set up a startup probe with the same command, or the same HTTP or TCP check, with a `failureThreshold * periodSeconds` long enough to cover the worst-case startup time.

So, the previous example would become:

```yaml
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10
```

Thanks to the startup probe, the application is given a maximum of 5 minutes (30 * 10 = 300 s) to finish its startup. As soon as the startup probe has succeeded once, the liveness probe takes over to watch for container deadlocks. If the startup probe never succeeds, the container is killed after 300 seconds and subjected to the Pod's `restartPolicy`.

## Define readiness probes

Sometimes applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup, or it might depend on external services after startup. In such cases you don't want to kill the application, but you don't want to send it client requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A Pod whose containers report that they are not ready does not receive traffic through Kubernetes Services.

{{< note >}}
Readiness probes run on the container during its whole lifecycle.
{{< /note >}}

Readiness probes are configured similarly to liveness probes. The only difference is that you use the `readinessProbe` field instead of the `livenessProbe` field.

```yaml
readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
```

The configuration of HTTP and TCP readiness probes is likewise identical to that of liveness probes.

Readiness and liveness probes can be used on the same container at the same time. Using both can ensure that traffic does not reach a container that is not ready for it, and that the container is restarted when it breaks.

## Configure probes

{{< comment >}}
Eventually, some of this section could be moved to a concept topic.
{{< /comment >}}

[Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) have a number of fields that you can use to control the behavior of liveness and readiness checks more precisely (a sketch combining them follows this list):

* `initialDelaySeconds`: Number of seconds from container start until liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
* `periodSeconds`: How long (in seconds) between two consecutive probes. Defaults to 10 seconds. Minimum value is 1.
* `timeoutSeconds`: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
* `successThreshold`: Minimum number of consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.
* `failureThreshold`: When a Pod starts and the probe fails, Kubernetes tries `failureThreshold` times before giving up. Giving up in the case of a liveness probe means restarting the container. In the case of a readiness probe, the Pod is marked Unready. Defaults to 3. Minimum value is 1.
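For illustration, a sketch of how these knobs fit together in a single probe (the values here are arbitrary, not recommendations):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10  # wait 10 s after the container starts
  periodSeconds: 5         # probe every 5 s
  timeoutSeconds: 2        # each probe must answer within 2 s
  failureThreshold: 3      # restart after 3 consecutive failures
```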
[HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
have additional fields that can be set on `httpGet`:

* `host`: Host name to connect to; defaults to the pod IP. You probably want to set the "Host" header in httpHeaders instead.
* `scheme`: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
* `path`: Path to access on the HTTP server.
* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
* `port`: Name or number of the port to access on the container. The number must be in the range 1 to 65535.

For an HTTP probe, the kubelet sends an HTTP request to the configured path and port to perform the check. The kubelet sends the probe to the pod's IP address, unless the address is overridden by the optional `host` field in `httpGet`. If the `scheme` field is set to `HTTPS`, the kubelet sends an HTTPS request, skipping certificate verification. In most scenarios, you do not need to set the `host` field.
Here is one scenario where you would. Suppose the container listens on 127.0.0.1 and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should be set to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common case, it is better not to use `host` and instead to set the `Host` header in `httpHeaders` (a sketch follows).
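For the virtual-host case, a minimal sketch (`my-service.example.com` is a placeholder):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: my-service.example.com  # placeholder virtual host
```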
For a TCP probe, the kubelet makes the probe connection at the node, not inside the pod, which means that you cannot use a service name in the `host` parameter, since the kubelet is unable to resolve it.

{{% /capture %}}

{{% capture whatsnext %}}

* Learn more about
[Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).

You can also read the API references for:

* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
* [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
* [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)

{{% /capture %}}

@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3
@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
@ -0,0 +1,8 @@
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. If you do not already have a
cluster, you can create one by using
[Minikube](/docs/setup/learning-environment/minikube/),
or you can use one of these Kubernetes playgrounds:

* [Katacoda](https://www.katacoda.com/courses/kubernetes/playground)
* [Play with Kubernetes](http://labs.play-with-k8s.com/)
@ -0,0 +1,190 @@
<!--
---
title: " Weekly Kubernetes Community Hangout Notes - July 31 2015 "
date: 2015-08-04
slug: weekly-kubernetes-community-hangout
url: /blog/2015/08/Weekly-Kubernetes-Community-Hangout
---
-->

---
title: " Weekly Kubernetes Community Hangout Notes - July 31, 2015 "
date: 2015-08-04
slug: weekly-kubernetes-community-hangout
url: /blog/2015/08/Weekly-Kubernetes-Community-Hangout
---

<!--
Every week the Kubernetes contributing community meet virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.

Here are the notes from today's meeting:
-->

Every week the Kubernetes contributing community meets virtually over Google Hangouts. We want anyone who's interested to know what's discussed in this forum.

Here are the notes from today's meeting:

<!--
* Private Registry Demo - Muhammed

  * Run docker-registry as an RC/Pod/Service

  * Run a proxy on every node

  * Access as localhost:5000

  * Discussion:

    * Should we back it by GCS or S3 when possible?

    * Run real registry backed by $object_store on each node

    * DNS instead of localhost?

      * disassemble image strings?

      * more like DNS policy?
-->
* Private registry demo - Muhammed

  * Run docker-registry as an RC/Pod/Service

  * Run a proxy on every node

  * Access as localhost:5000

  * Discussion:

    * Should we back it by GCS or S3 when possible?

    * Run a real registry backed by $object_store on each node

    * DNS instead of localhost?

      * disassemble image strings?

      * more like DNS policy?

<!--
* Running Large Clusters - Joe

  * Samsung keen to see large scale O(1000)

  * Starting on AWS

  * RH also interested - test plan needed

  * Plan for next week: discuss working-groups

  * If you are interested in joining conversation on cluster scalability send mail to [joe@0xBEDA.com][4]
-->

* Running large clusters - Joe

  * Samsung is keen to see large scale O(1000)

  * Starting on AWS

  * RH is also interested - a test plan is needed

  * Plan for next week: discuss working groups

  * If you are interested in joining the conversation on cluster scalability, send mail to [joe@0xBEDA.com][4]

<!--
* Resource API Proposal - Clayton

  * New stuff wants more info on resources

  * Proposal for resources API - ask apiserver for info on pods

  * Send feedback to: #11951

  * Discussion on snapshot vs time-series vs aggregates
-->

* Resource API proposal - Clayton

  * New stuff wants more info on resources

  * Proposal for a resources API - ask the apiserver for info on pods

  * Send feedback to: #11951

  * Discussion on snapshot vs time-series vs aggregates

<!--
* Containerized kubelet - Clayton

  * Open pull

  * Docker mount propagation - RH carries patches

  * Big issues around whole bootstrap of the system

    * dual: boot-docker/system-docker

  * Kube-in-docker is really nice, but maybe not critical

    * Do the small stuff to make progress

    * Keep pressure on docker
-->
* Containerized kubelet - Clayton

  * Open pull

  * Docker mount propagation - RH carries patches

  * Big issues around the whole bootstrap of the system

    * dual: boot-docker/system-docker

  * Kube-in-docker is really nice, but maybe not critical

    * Do the small stuff to make progress

    * Keep pressure on docker

<!--
* Web UI (preilly)

  * Where does web UI stand?

    * OK to split it back out

    * Use it as a container image

    * Build image as part of kube release process

  * Vendor it back in? Maybe, maybe not.

  * Will DNS be split out?

    * Probably more tightly integrated, instead

  * Other potential spin-outs:

    * apiserver

    * clients
-->
* Web UI (preilly)

  * Where does the web UI stand?

    * OK to split it back out

    * Use it as a container image

    * Build the image as part of the kube release process

  * Vendor it back in? Maybe, maybe not.

  * Will DNS be split out?

    * Probably more tightly integrated, instead

  * Other potential spin-outs:

    * apiserver

    * clients
@ -0,0 +1,53 @@
---
title: " Kubernetes Community Steering Committee Election Results "
date: 2017-10-05
slug: kubernetes-community-steering-committee-election-results
url: /blog/2017/10/Kubernetes-Community-Steering-Committee-Election-Results
---
<!--
---
title: " Kubernetes Community Steering Committee Election Results "
date: 2017-10-05
slug: kubernetes-community-steering-committee-election-results
url: /blog/2017/10/Kubernetes-Community-Steering-Committee-Election-Results
---
-->
<!--
Beginning with the announcement of Kubernetes 1.0 at OSCON in 2015, there has been a concerted effort to share the power and burden of leadership across the Kubernetes community.
-->
Beginning with the announcement of Kubernetes 1.0 at OSCON in 2015, there has been a concerted effort to share the power and burden of leadership across the Kubernetes community.

<!--
With the work of the Bootstrap Governance Committee, consisting of Brandon Philips, Brendan Burns, Brian Grant, Clayton Coleman, Joe Beda, Sarah Novotny and Tim Hockin - a cross section of long-time leaders representing 5 different companies with major investments of talent and effort in the Kubernetes Ecosystem - we wrote an initial [Steering Committee Charter](https://github.com/kubernetes/steering/blob/master/charter.md) and launched a community wide election to seat a Kubernetes Steering Committee.
-->
With the work of the Bootstrap Governance Committee - consisting of Brandon Philips, Brendan Burns, Brian Grant, Clayton Coleman, Joe Beda, Sarah Novotny and Tim Hockin, a cross section of long-time leaders representing 5 different companies with major investments of talent and effort in the Kubernetes ecosystem - an initial [Steering Committee Charter](https://github.com/kubernetes/steering/blob/master/charter.md) was written, and a community-wide election was launched to seat a Kubernetes Steering Committee.

<!--
To quote from the Charter -
-->
To quote from the Charter -

<!--
_The initial role of the steering committee is to **instantiate the formal process for Kubernetes governance**. In addition to defining the initial governance process, the bootstrap committee strongly believes that **it is important to provide a means for iterating** the processes defined by the steering committee. We do not believe that we will get it right the first time, or possibly ever, and won’t even complete the governance development in a single shot. The role of the steering committee is to be a live, responsive body that can refactor and reform as necessary to adapt to a changing project and community._
-->
_The initial role of the steering committee is to **instantiate the formal process for Kubernetes governance**. In addition to defining the initial governance process, the bootstrap committee strongly believes that **it is important to provide a means for iterating** the processes defined by the steering committee. We do not believe that we will get it right the first time, or possibly ever, and won't even complete the governance development in a single shot. The role of the steering committee is to be a live, responsive body that can refactor and reform as necessary to adapt to a changing project and community._

<!--
This is our largest step yet toward making an implicit governance structure explicit. Kubernetes vision has been one of an inclusive and broad community seeking to build software which empowers our users with the portability of containers. The Steering Committee will be a strong leadership voice guiding the project toward success.
-->
This is our largest step yet toward making an implicit governance structure explicit. The Kubernetes vision has been one of an inclusive and broad community seeking to build software which empowers our users with the portability of containers. The Steering Committee will be a strong leadership voice guiding the project toward success.

<!--
The Kubernetes Community is pleased to announce the results of the 2017 Steering Committee Elections. **Please congratulate Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole and Timothy St. Clair** , who will be joining the members of the Bootstrap Governance committee on the newly formed Kubernetes Steering Committee. Derek, Michelle, and Phillip will serve for 2 years. Aaron, Quinton, and Timothy will serve for 1 year.
-->
The Kubernetes Community is pleased to announce the results of the 2017 Steering Committee Elections. **Please congratulate Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole and Timothy St. Clair**, who will be joining the members of the Bootstrap Governance committee on the newly formed Kubernetes Steering Committee. Derek, Michelle and Phillip will serve for 2 years. Aaron, Quinton and Timothy will serve for 1 year.

<!--
This group will meet regularly in order to clarify and streamline the structure and operation of the project. Early work will include electing a representative to the CNCF Governing Board, evolving project processes, refining and documenting the vision and scope of the project, and chartering and delegating to more topical community groups.
-->
This group will meet regularly in order to clarify and streamline the structure and operation of the project. Early work will include electing a representative to the CNCF Governing Board, evolving project processes, refining and documenting the vision and scope of the project, and chartering and delegating to more topical community groups.

<!--
Please see [the full Steering Committee backlog](https://github.com/kubernetes/steering/blob/master/backlog.md) for more details.
-->
Please see [the full Steering Committee backlog](https://github.com/kubernetes/steering/blob/master/backlog.md) for more details.
@ -226,14 +226,14 @@ There are two supported paths to extending the API with [custom resources](/docs
provide seamless service to clients.

<!--
## Enabling API groups
## Enabling or disabling API groups

Certain resources and API groups are enabled by default. They can be enabled or disabled by setting `--runtime-config`
on apiserver. `--runtime-config` accepts comma separated values. For ex: to disable batch/v1, set
on apiserver. `--runtime-config` accepts comma separated values. For example: to disable batch/v1, set
`--runtime-config=batch/v1=false`, to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`.
The flag accepts comma separated set of key=value pairs describing runtime configuration of the apiserver.

IMPORTANT: Enabling or disabling groups or resources requires restarting apiserver and controller-manager
Enabling or disabling groups or resources requires restarting apiserver and controller-manager
to pick up the `--runtime-config` changes.
-->

@ -244,22 +244,31 @@ to pick up the `--runtime-config` changes.
For example: to disable batch/v1, set `--runtime-config=batch/v1=false`; to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`.
The flag accepts a comma-separated set of key=value pairs describing the runtime configuration of the apiserver.

IMPORTANT: Enabling or disabling groups or resources requires restarting the apiserver and controller-manager so that the `--runtime-config` changes take effect.
{{< note >}}

Enabling or disabling groups or resources requires restarting the apiserver and controller-manager so that the `--runtime-config` changes take effect.

{{< /note >}}
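For illustration, a sketch of the flag on a kube-apiserver command line (only the relevant flag is shown; a real invocation carries many more):

```shell
# Illustrative only: disable batch/v1 and enable batch/v2alpha1 in one flag.
kube-apiserver --runtime-config=batch/v1=false,batch/v2alpha1
```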
<!--
## Enabling resources in the groups
## Enabling specific resources in the extensions/v1beta1 group

DaemonSets, Deployments, HorizontalPodAutoscalers, Ingresses, Jobs and ReplicaSets are enabled by default.
Other extensions resources can be enabled by setting `--runtime-config` on
apiserver. `--runtime-config` accepts comma separated values. For example: to disable deployments and ingress, set
`--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingresses=false`
DaemonSets, Deployments, StatefulSet, NetworkPolicies, PodSecurityPolicies and ReplicaSets in the `extensions/v1beta1` API group are disabled by default.
For example: to enable deployments and daemonsets, set
`--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.

Individual resource enablement/disablement is only supported in the `extensions/v1beta1` API group for legacy reasons.
-->

## Enabling resources in the groups
## Enabling specific resources in the extensions/v1beta1 group

DaemonSets, Deployments, HorizontalPodAutoscalers, Ingresses, Jobs and ReplicaSets are enabled by default.
Other extensions resources can be enabled by setting `--runtime-config` on the apiserver.
`--runtime-config` accepts comma-separated values. For example: to disable Deployments and Ingresses, set `--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingresses=false`
In the `extensions/v1beta1` API group, DaemonSets, Deployments, StatefulSets, NetworkPolicies, PodSecurityPolicies and ReplicaSets are disabled by default.
For example: to enable deployments and daemonsets, set `--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.

{{< note >}}

For legacy reasons, enabling/disabling individual resources is supported only in the `extensions/v1beta1` API group.

{{< /note >}}

{{% /capture %}}

@ -2,6 +2,8 @@
approvers:
- derekwaynecarr
title: Resource Quotas
content_template: templates/concept
weight: 10
---

<!--

@ -412,7 +412,7 @@ By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
Any request arriving at the "proxy port" is proxied to one of the Service's backend Pods (as reported in Endpoints).
Which backend Pod to use is decided by kube-proxy based on the `SessionAffinity` of the Service.

Finally, it installs iptables rules which capture requests to the Service's `clusterIP` (which is virtual) and `Port`, and redirect them to the proxy port; the proxy port in turn proxies the request to a backend Pod.
Finally, it configures iptables rules which capture requests to the Service's `clusterIP` (which is virtual) and `Port`, and redirect them to the proxy port; the proxy port in turn proxies the request to a backend Pod.

By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.

@ -451,8 +451,8 @@ having traffic sent via kube-proxy to a Pod that's known to have failed.
### iptables proxy mode {#proxy-mode-iptables}

In this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoints objects.
For each Service, it installs iptables rules which capture requests to the Service's `clusterIP` and port, and redirect them to one of the Service's backend sets.
For each Endpoints object, it also installs iptables rules which select a backend.
For each Service, it configures iptables rules which capture requests to the Service's `clusterIP` and port, and redirect them to one of the Service's backend sets.
For each Endpoints object, it also configures iptables rules which select a backend.

By default, kube-proxy in iptables mode chooses a backend at random.
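As a quick, hedged way to see these rules on a node (requires shell access to the node; the `KUBE-SVC` chain prefix is kube-proxy's convention, and the output varies by cluster):

```shell
# List the NAT chains kube-proxy configures for Services in iptables mode.
sudo iptables-save -t nat | grep KUBE-SVC
```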

@ -1679,7 +1679,7 @@ through a load-balancer, though in those cases the client IP does get altered.
Consider the image processing application described earlier.
When the backend Service is created, the Kubernetes control plane assigns it a virtual IP address, for example 10.0.0.1.
Assuming the Service port is 1234, the Service is observed by all of the kube-proxy instances in the cluster.
When a proxy sees a new Service, it installs a series of iptables rules which redirect from the VIP to per-Service rules.
When a proxy sees a new Service, it configures a series of iptables rules which redirect from the VIP to per-Service rules.
The per-Service rules link to per-Endpoint rules, which redirect (destination NAT) to the backends.

When a client connects to a VIP, the iptables rules kick in. A backend is chosen (either based on session affinity or at random), and packets are redirected to that backend.

@ -69,7 +69,7 @@ parameters:
Volume snapshot classes have a driver that determines what CSI volume plugin is
used for provisioning VolumeSnapshots. This field must be specified.
-->
### Driver (#driver)
### Driver {#driver}

Volume snapshot classes have a driver that determines which CSI volume plugin is used for provisioning VolumeSnapshots. This field must be specified.

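For illustration, a sketch of the shape such a class takes (the name and driver are placeholders; the `apiVersion` reflects the beta snapshot API of this release):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass   # placeholder name
driver: hostpath.csi.k8s.io      # placeholder CSI driver
deletionPolicy: Delete
```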
@ -1771,10 +1771,6 @@ Choose one of the following methods to create a VMDK.

{{< tabs name="tabs_volumes" >}}
{{% tab name="Create using vmkfstools" %}}
<!--
{{% tab name="Create using vmkfstools" %}}
First ssh into ESX, then use the following command to create a VMDK:
-->

First ssh into ESX, then use the following command to create a VMDK:

@ -1783,10 +1779,6 @@ vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
```
{{% /tab %}}
{{% tab name="Create using vmware-vdiskmanager" %}}
<!--
{{% tab name="Create using vmware-vdiskmanager" %}}
Use the following command to create a VMDK:
-->

Use the following command to create a VMDK:

@ -2409,7 +2401,7 @@ sudo systemctl daemon-reload
sudo systemctl restart docker
```

{{% /capture %}}

{{% capture whatsnext %}}

@ -2418,4 +2410,5 @@ sudo systemctl restart docker
-->

* See the example [Deploying WordPress and MySQL with Persistent Volumes](/zh/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/).

{{% /capture %}}

|
|||
git commit -m "Your commit message"
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Do not reference a GitHub issue or pull request by ID or URL in the
|
||||
commit message. If you do, it will cause that issue or pull request to get
|
||||
a notification every time the commit shows up in a new Git branch. You can
|
||||
link issues and pull requests together later, in the GitHub UI.
|
||||
-->不要在提交消息中引用 GitHub issue 或 PR(通过 ID 或 URL)。如果您这样做了,那么每当提交出现在新的Git 分支中时,就会导致该 issue 或 PR 获得通知。稍后,您可以在 GitHub UI 中链接 issues 并将请求拉到一起。
|
||||
{{< /note >}}
|
||||
{{< note >}}
|
||||
<!--
|
||||
Do not reference a GitHub issue or pull request by ID or URL in the
|
||||
commit message. If you do, it will cause that issue or pull request to get
|
||||
a notification every time the commit shows up in a new Git branch. You can
|
||||
link issues and pull requests together later, in the GitHub UI.
|
||||
-->
|
||||
不要在提交消息中引用 GitHub issue 或 PR(通过 ID 或 URL)。如果您这样做了,那么每当提交出现在新的 Git 分支中时,就会导致该 issue 或 PR 获得通知。稍后,您可以在 GitHub UI 中链接 issues 并将请求拉到一起。
|
||||
{{< /note >}}
|
||||
|
||||
5. <!--
|
||||
Optionally, you can test your change by staging the site locally using the
|
||||
|
|
|
@ -175,6 +175,9 @@ other = "Cele"
[prerequisites_heading]
other = "Nim zaczniesz"

[subscribe_button]
other = "Subskrybuj"

[ui_search_placeholder]
other = "Szukaj"

@ -172,6 +172,9 @@ other = "教程目标"
[prerequisites_heading]
other = "准备开始"

[subscribe_button]
other = "订阅"

[ui_search_placeholder]
other = "搜索"

BIN static/images/blog/2020-05-21-wsl2-dockerdesktop-k8s/wsl2-apt-update-upgrade.png (new executable file, 41 KiB)
(41 more binary image files added, 5.9 KiB to 196 KiB each)