Merge branch 'kubernetes:main' into patch-1

pull/41174/head
Vitalii Natarov 2023-06-12 14:05:52 +02:00 committed by GitHub
commit 6a20c5df0d
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
364 changed files with 18204 additions and 206667 deletions

View File

@ -49,11 +49,11 @@ check-headers-file:
scripts/check-headers-file.sh
production-build: module-check ## Build the production site and ensure that noindex headers aren't added
hugo --cleanDestinationDir --minify --environment production
GOMAXPROCS=1 hugo --cleanDestinationDir --minify --environment production
HUGO_ENV=production $(MAKE) check-headers-file
non-production-build: module-check ## Build the non-production site, which adds noindex headers to prevent indexing
hugo --cleanDestinationDir --enableGitInfo --environment nonprod
GOMAXPROCS=1 hugo --cleanDestinationDir --enableGitInfo --environment nonprod
serve: module-check ## Boot the development server.
hugo server --buildFuture --environment development

View File

@ -32,11 +32,11 @@ aliases:
- bradtopol
- divya-mohan0209
- kbhawkey
- mickeyboxell
- natalisucks
- nate-double-u
- onlydole
- reylejano
- Rishit-dagli # 1.28 Release Team Docs Lead
- sftim
- tengqm
sig-docs-en-reviews: # PR reviews for English content

View File

@ -13,64 +13,4 @@ Im Abschnitt Konzepte erfahren Sie mehr über die Bestandteile des Kubernetes-Sy
<!-- body -->
## Overview
To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*:
which applications or other workloads you want to run, which container images they use, the number of replicas, which network and disk resources you want to make available, and more. You set the desired state by creating objects with the Kubernetes API, typically via the command-line interface `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and to set or change the desired state.
Once you have set the desired state, the *Kubernetes Control Plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically, such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a set of processes running in your cluster:
* The **Kubernetes Master** consists of three processes that run on a single node in your cluster, which is referred to as the master node. These processes are: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/), and [kube-scheduler](/docs/admin/kube-scheduler/).
* Each individual node in your cluster that is not the master runs two processes:
  * **[kubelet](/docs/admin/kubelet/)**, which communicates with the Kubernetes Master.
  * **[kube-proxy](/docs/admin/kube-proxy/)**, a network proxy that reflects the Kubernetes networking services on each node.
## Kubernetes Objects
Kubernetes contains a number of abstractions that represent the state of your system: containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API. See the [Kubernetes Objects overview](/docs/concepts/abstractions/overview/) for more details.
The basic Kubernetes objects include:
* [Pod](/docs/concepts/workloads/pods/pod-overview/)
* [Service](/docs/concepts/services-networking/service/)
* [Volume](/docs/concepts/storage/volumes/)
* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)
In addition, Kubernetes contains a number of higher-level abstractions called controllers. Controllers build on the basic objects and provide additional functionality and convenience features. They include:
* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
* [Deployment](/docs/concepts/workloads/controllers/deployment/)
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
## Kubernetes Control Plane
The various parts of the Kubernetes Control Plane, such as the Kubernetes Master and the kubelet processes, govern how Kubernetes communicates with your cluster. The Control Plane maintains a record of all of the Kubernetes objects in the system and runs continuous control loops to manage the state of those objects. At any given time, the Control Plane's control loops respond to changes in the cluster and work to make the actual state of all objects in the system match the desired state that you defined.
For example, when you use the Kubernetes API to create a Deployment object, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation and carries out your instructions by starting the required applications and scheduling them onto cluster nodes, thereby making the cluster's actual state match the desired state.
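To make this concrete, here is a minimal Deployment manifest sketching such a desired state; the name, image, and replica count are illustrative placeholders and are not taken from this page:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # placeholder name for this sketch
spec:
  replicas: 3              # desired state: keep three replicas running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # example container image
```
Applying this manifest, for example with `kubectl apply -f`, records the desired state; the Control Plane then creates and schedules the Pods needed to satisfy it.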
### Kubernetes Master
The Kubernetes Master is responsible for maintaining the desired state of your cluster. When you interact with Kubernetes, for example by using the `kubectl` command-line tool, you are communicating with your cluster's Kubernetes Master.
> The term "master" refers to a set of processes that manage the cluster state. Typically, these processes all run on a single node in the cluster, and that node is also referred to as the master. The master can be replicated for availability and redundancy.
### Kubernetes Nodes
The nodes in a cluster are the machines (VMs, physical servers, and so on) that run your applications and cloud workflows. The Kubernetes Master controls each node; you will rarely interact with nodes directly.
#### Object metadata
* [Annotations](/docs/concepts/overview/working-with-objects/annotations/)
## {{% heading "whatsnext" %}}
If you would like to write a concept page, see [Using Page Templates](/docs/home/contribute/page-templates/)
for information about the concept page type and the documentation template.

View File

@ -1,4 +1,6 @@
---
title: "Kubernetes Architecture"
weight: 30
description: >
Describes the architectural concepts of Kubernetes.
---

View File

@ -1,5 +1,7 @@
---
title: "Cluster Administration"
weight: 100
description: >
Lower-level detail relevant to creating and administering a Kubernetes cluster.
---

View File

@ -1,5 +1,7 @@
---
title: "Configuration"
weight: 80
description: >
Resources that are useful when configuring Pods in Kubernetes.
---

View File

@ -1,5 +1,7 @@
---
title: "Container"
weight: 40
description: >
Methods for packaging applications together with their dependencies.
---

View File

@ -2,6 +2,9 @@
title: Concept documentation template
content_type: concept
toc_hide: true
description: >
If you would like to write a concept page, see [Using Page Templates](/docs/home/contribute/page-templates/)
for information about the concept page type and the documentation template.
---
<!-- overview -->

View File

@ -5,4 +5,6 @@ feature:
title: Designed for extensibility
description: >
Kubernetes can be extended without changing the upstream source code.
description: >
Different ways to extend the functionality of Kubernetes.
---

View File

@ -1,5 +1,9 @@
---
title: "Overview"
weight: 20
description: >
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services.
It does so through automation and declarative configuration. Kubernetes has a large, rapidly growing ecosystem.
Services, support, and tools for Kubernetes are widely available.
---

View File

@ -1,5 +1,7 @@
---
title: "Policies"
weight: 90
description: >
You can create policies that can be assigned to groups of resources.
---

View File

@ -1,5 +1,7 @@
---
title: "Services, Load Balancing, and Networking"
weight: 60
description: >
Concepts and resources related to networking in Kubernetes.
---

View File

@ -1,5 +1,7 @@
---
title: "Storage"
weight: 70
description: >
Ways to provide ephemeral or persistent storage to Pods in the cluster.
---

View File

@ -1,5 +1,8 @@
---
title: "Workloads"
weight: 50
description: >
Information about Pods, the smallest deployable units in Kubernetes, and
about the abstractions that help you run them.
---

View File

@ -39,10 +39,10 @@ Um kubectl auf Linux zu installieren, gibt es die folgenden Möglichkeiten:
{{< note >}}
To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on Linux:
For example, to download version {{< skew currentPatchVersion >}} on Linux:
```bash
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/linux/amd64/kubectl
```
{{< /note >}}
@ -139,7 +139,7 @@ Um kubectl auf Linux zu installieren, gibt es die folgenden Möglichkeiten:
2. Download the Google Cloud public signing key:
```shell
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
```
3. Add the Kubernetes `apt` repository:
@ -170,7 +170,7 @@ name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubectl
```

View File

@ -44,16 +44,16 @@ Um kubectl auf macOS zu installieren, gibt es die folgenden Möglichkeiten:
{{< note >}}
To download a specific version, replace the `$(curl -L -s https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on Intel macOS:
For example, to download version {{< skew currentPatchVersion >}} on Intel macOS:
```bash
curl -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/amd64/kubectl"
```
For macOS on Apple Silicon (for example, M1/M2):
```bash
curl -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/arm64/kubectl"
curl -LO "https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/arm64/kubectl"
```
{{< /note >}}

View File

@ -30,8 +30,8 @@ Nachfolgend finden Sie einige Methoden zur Installation von kubectl.
{{< tabs name="kubectl_install" >}}
{{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmour -o /usr/share/keyrings/kubernetes.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/kubernetes.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
{{< /tab >}}
@ -197,10 +197,10 @@ Sie können kubectl als Teil des Google Cloud SDK installieren.
To download a specific version, replace the `$(curl -LS https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on macOS, use the following command:
For example, to download version {{< skew currentPatchVersion >}} on macOS, use the following command:
```
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/amd64/kubectl
```
2. Make the kubectl binary executable.
@ -225,10 +225,10 @@ Sie können kubectl als Teil des Google Cloud SDK installieren.
To download a specific version, replace the `$(curl -LS https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.
For example, to download version {{< param "fullversion" >}} on Linux, use the following command:
For example, to download version {{< skew currentPatchVersion >}} on Linux, use the following command:
```
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/linux/amd64/kubectl
```
2. Make the kubectl binary executable.
@ -244,12 +244,12 @@ Sie können kubectl als Teil des Google Cloud SDK installieren.
```
{{% /tab %}}
{{% tab name="Windows" %}}
1. Download the latest release {{< param "fullversion" >}} from [this link](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
1. Download the latest release {{< skew currentPatchVersion >}} from [this link](https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe).
Or, if you have `curl` installed, use the following command:
```
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe
```
For information about the current stable version (for example, for scripting), see [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).

View File

@ -127,13 +127,13 @@ Note how we set those parameters so they are used only when you deploy to GKE. Y
After training, you [export your model](https://www.tensorflow.org/serving/serving_basic) to a serving location.
Kubeflow also includes a serving package as well. In a separate example, we trained a standard Inception model, and stored the trained model in a bucket we've created called gs://kubeflow-models with the path /inception.
Kubeflow also includes a serving package as well.
To deploy the trained model for serving, execute the following:
```
ks generate tf-serving inception --name=inception
--namespace=default --model_path=gs://kubeflow-models/inception
--namespace=default --model_path=gs://$bucket_name/$model_loc
ks apply gke -c inception
```
@ -170,3 +170,6 @@ Thank you for your support so far, we could not be more excited!
_Jeremy Lewi & David Aronchick_
Google
Note:
* This article was amended in June 2023 to update the trained model bucket location.

View File

@ -82,7 +82,7 @@ For external clients, automatic DNS expansion described is not currently possibl
That way, your clients can always use the short form on the left, and always be automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes cluster federation.
As further reading, a more elaborate example for users is available in the [Multi-Cluster Service DNS with ExternalDNS guide](https://github.com/kubernetes-sigs/federation-v2/blob/master/docs/servicedns-with-externaldns.md).
As further reading, a more elaborate example for users is available in the [Multi-Cluster Service DNS with ExternalDNS guide](https://github.com/kubernetes-retired/kubefed/blob/dbcd4da3823a7ba8ac29e80c9d5b968868638d28/docs/servicedns-with-externaldns.md).
# Try it yourself
To get started with Federation v2, please refer to the [user guide](https://github.com/kubernetes-sigs/federation-v2/blob/master/docs/userguide.md). Deployment can be accomplished with a [Helm chart](https://github.com/kubernetes-sigs/kubefed/blob/master/charts/kubefed/README.md), and once the control plane is available, the [user guide's example](https://github.com/kubernetes-sigs/federation-v2/blob/master/docs/userguide.md#example) can be used to get some hands-on experience with using Federation V2.

View File

@ -119,8 +119,8 @@ Here are some of the images we built
- `gcr.io/kubernetes-e2e-test-images/volume/iscsi:2.0`
- `gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0`
- `gcr.io/kubernetes-e2e-test-images/volume/rbd:1.0.1`
- `k8s.gcr.io/etcd:3.3.15`
- `k8s.gcr.io/pause:3.1`
- `registry.k8s.io/etcd:3.3.15` (image changed since publication - previously used registry "k8s.gcr.io")
- `registry.k8s.io/pause:3.1` (image changed since publication - previously used registry "k8s.gcr.io")
Finally, we ran the tests and got the test result, including `e2e.log`, which showed that all test cases passed. Additionally, we submitted our test result to [k8s-conformance](https://github.com/cncf/k8s-conformance) as a [pull request](https://github.com/cncf/k8s-conformance/pull/779).

View File

@ -32,7 +32,7 @@ files side by side to the artifacts for verifying their integrity.
[tarballs]: https://github.com/kubernetes/kubernetes/blob/release-1.26/CHANGELOG/CHANGELOG-1.26.md#downloads-for-v1260
[binaries]: https://gcsweb.k8s.io/gcs/kubernetes-release/release/v1.26.0/bin
[sboms]: https://dl.k8s.io/release/v1.26.0/kubernetes-release.spdx
[provenance]: https://dl.k8s.io/kubernetes-release/release/v1.26.0/provenance.json
[provenance]: https://dl.k8s.io/release/v1.26.0/provenance.json
[cosign]: https://github.com/sigstore/cosign
To verify an artifact, for example `kubectl`, you can download the

View File

@ -69,7 +69,7 @@ in `CredentialProviderResponse`. When the value is `Image`, the kubelet will onl
match the image of the first request. When the value is `Registry`, the kubelet will use cached credentials for any subsequent image pulls
destined for the same registry host but using different paths (for example, `gcr.io/foo/bar` and `gcr.io/bar/foo` refer to different images
from the same registry). Lastly, when the value is `Global`, the kubelet will use returned credentials for all images that match against
the plugin, including images that can map to different registry hosts (for example, gcr.io vs k8s.gcr.io). The `cacheKeyType` field is required by plugin
the plugin, including images that can map to different registry hosts (for example, gcr.io vs registry.k8s.io (previously k8s.gcr.io)). The `cacheKeyType` field is required by plugin
implementations.
```json

View File

@ -68,7 +68,7 @@ More detials can be found in the KEP <https://kep.k8s.io/1040> and the pull requ
## Event triggered updates to container status
`Evented PLEG` (PLEG is short for "Pod Lifecycle Event Generator") is set to be in beta for v1.27,
Kubernetes offers two ways for the kubelet to detect Pod lifecycle events, such as a the last
Kubernetes offers two ways for the kubelet to detect Pod lifecycle events, such as the last
process in a container shutting down.
In Kubernetes v1.27, the _event based_ mechanism has graduated to beta but remains
disabled by default. If you do explicitly switch to event-based lifecycle change detection,
@ -92,7 +92,7 @@ enabling this feature gate may affect the start-up speed of the pod if the pod s
a large amount of memory.
Kubelet configuration now includes `memoryThrottlingFactor`. This factor is multiplied by
the memory limit or node allocatable memory to set the cgroupv2 memory.high value for enforcing
the memory limit or node allocatable memory to set the cgroupv2 `memory.high` value for enforcing
MemoryQoS. Decreasing this factor sets a lower high limit for container cgroups, increasing reclaim
pressure. Increasing this factor will put less reclaim pressure. The default value is 0.8 initially
and will change to 0.9 in Kubernetes v1.27. This parameter adjustment can reduce the potential
@ -113,7 +113,7 @@ container startup by mounting volumes with the correct SELinux label instead of
on the volumes recursively. Further details can be found in the KEP <https://kep.k8s.io/1710>.
To identify the cause of slow pod startup, analyzing metrics and logs can be helpful. Other
factorsthat may impact pod startup include container runtime, disk speed, CPU and memory
factors that may impact pod startup include container runtime, disk speed, CPU and memory
resources on the node.
SIG Node is responsible for ensuring fast Pod startup times, while addressing issues in large

View File

@ -0,0 +1,282 @@
---
layout: blog
title: "Having fun with seccomp profiles on the edge"
date: 2023-05-18
slug: seccomp-profiles-edge
---
**Author**: Sascha Grunert
The [Security Profiles Operator (SPO)][spo] is a feature-rich
[operator][operator] for Kubernetes to make managing seccomp, SELinux and
AppArmor profiles easier than ever. Recording those profiles from scratch is one
of the key features of this operator, which usually involves the integration
into large CI/CD systems. Being able to test the recording capabilities of the
operator in edge cases is one of the recent development efforts of the SPO and
makes it excitingly easy to play around with seccomp profiles.
[spo]: https://github.com/kubernetes-sigs/security-profiles-operator
[operator]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator
## Recording seccomp profiles with `spoc record`
The [v0.8.0][spo-latest] release of the Security Profiles Operator shipped a new
command line interface called `spoc`, a little helper tool for recording and
replaying seccomp profiles among various other things that are out of scope of
this blog post.
[spo-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/v0.8.0
Recording a seccomp profile requires a binary to be executed, which can be a
simple golang application that just calls [`uname(2)`][uname]:
```go
package main
import (
"syscall"
)
func main() {
utsname := syscall.Utsname{}
if err := syscall.Uname(&utsname); err != nil {
panic(err)
}
}
```
[uname]: https://man7.org/linux/man-pages/man2/uname.2.html
Building a binary from that code can be done by:
```console
> go build -o main main.go
> ldd ./main
not a dynamic executable
```
Now it's possible to download the latest binary of [`spoc` from
GitHub][spoc-latest] and run the application on Linux with it:
[spoc-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/download/v0.8.0/spoc.amd64
```console
> sudo ./spoc record ./main
10:08:25.591945 Loading bpf module
10:08:25.591958 Using system btf file
libbpf: loading object 'recorder.bpf.o' from buffer
libbpf: prog 'sys_enter': relo #3: patched insn #22 (ALU/ALU64) imm 16 -> 16
10:08:25.610767 Getting bpf program sys_enter
10:08:25.610778 Attaching bpf tracepoint
10:08:25.611574 Getting syscalls map
10:08:25.611582 Getting pid_mntns map
10:08:25.613097 Module successfully loaded
10:08:25.613311 Processing events
10:08:25.613693 Running command with PID: 336007
10:08:25.613835 Received event: pid: 336007, mntns: 4026531841
10:08:25.613951 No container ID found for PID (pid=336007, mntns=4026531841, err=unable to find container ID in cgroup path)
10:08:25.614856 Processing recorded data
10:08:25.614975 Found process mntns 4026531841 in bpf map
10:08:25.615110 Got syscalls: read, close, mmap, rt_sigaction, rt_sigprocmask, madvise, nanosleep, clone, uname, sigaltstack, arch_prctl, gettid, futex, sched_getaffinity, exit_group, openat
10:08:25.615195 Adding base syscalls: access, brk, capget, capset, chdir, chmod, chown, close_range, dup2, dup3, epoll_create1, epoll_ctl, epoll_pwait, execve, faccessat2, fchdir, fchmodat, fchown, fchownat, fcntl, fstat, fstatfs, getdents64, getegid, geteuid, getgid, getpid, getppid, getuid, ioctl, keyctl, lseek, mkdirat, mknodat, mount, mprotect, munmap, newfstatat, openat2, pipe2, pivot_root, prctl, pread64, pselect6, readlink, readlinkat, rt_sigreturn, sched_yield, seccomp, set_robust_list, set_tid_address, setgid, setgroups, sethostname, setns, setresgid, setresuid, setsid, setuid, statfs, statx, symlinkat, tgkill, umask, umount2, unlinkat, unshare, write
10:08:25.616293 Wrote seccomp profile to: /tmp/profile.yaml
10:08:25.616298 Unloading bpf module
```
I have to execute `spoc` as root because it will internally run an [ebpf][ebpf]
program by reusing the same code parts from the Security Profiles Operator
itself. I can see that the bpf module got loaded successfully and `spoc`
attached the required tracepoint to it. Then it will track the main application
by using its [mount namespace][mntns] and process the recorded syscall data. The
nature of ebpf programs is that they see the whole context of the Kernel, which
means that `spoc` tracks all syscalls of the system, but does not interfere with
their execution.
[ebpf]: https://ebpf.io
[mntns]: https://man7.org/linux/man-pages/man7/mount_namespaces.7.html
The logs indicate that `spoc` found the syscalls `read`, `close`,
`mmap` and so on, including `uname`. All syscalls other than `uname` come
from the golang runtime and its garbage collection, which already adds overhead
to a basic application like the one in our demo. I can also see from the log line
`Adding base syscalls: …` that `spoc` adds a bunch of base syscalls to the
resulting profile. Those are used by the OCI runtime (like [runc][runc] or
[crun][crun]) in order to be able to run a container. This means that `spoc`
can be used to record seccomp profiles which then can be containerized directly.
This behavior can be disabled in `spoc` by using the `--no-base-syscalls`/`-n`
or customized via the `--base-syscalls`/`-b` command line flags. This can be
helpful in cases where different OCI runtimes other than crun and runc are used,
or if I just want to record the seccomp profile for the application and stack
it with another [base profile][base].
[runc]: https://github.com/opencontainers/runc
[crun]: https://github.com/containers/crun
[base]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/35ebdda/installation-usage.md#base-syscalls-for-a-container-runtime
The resulting profile is now available in `/tmp/profile.yaml`, but the default
location can be changed using the `--output-file value`/`-o` flag:
```console
> cat /tmp/profile.yaml
```
```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
creationTimestamp: null
name: main
spec:
architectures:
- SCMP_ARCH_X86_64
defaultAction: SCMP_ACT_ERRNO
syscalls:
- action: SCMP_ACT_ALLOW
names:
- access
- arch_prctl
- brk
- …
- uname
- …
status: {}
```
The seccomp profile Custom Resource Definition (CRD) can be directly used
together with the Security Profiles Operator for managing it within Kubernetes.
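For context, a workload could consume the recorded profile roughly as follows. This is a sketch under the assumption that the operator installs the profile under its usual `operator/<namespace>/<name>.json` path; the Pod name, namespace, and image are placeholders:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: main-demo                  # placeholder Pod name
  namespace: my-namespace          # placeholder namespace where the profile is installed
spec:
  containers:
  - name: main
    image: example.com/main:latest # placeholder image containing the ./main binary
    securityContext:
      seccompProfile:
        type: Localhost
        localhostProfile: operator/my-namespace/main.json  # assumed SPO install path
```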
`spoc` is also capable of producing raw seccomp profiles (as JSON), by using the
`--type`/`-t` `raw-seccomp` flag:
```console
> sudo ./spoc record --type raw-seccomp ./main
52.628827 Wrote seccomp profile to: /tmp/profile.json
```
```console
> jq . /tmp/profile.json
```
```json
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": ["SCMP_ARCH_X86_64"],
"syscalls": [
{
"names": ["access", "…", "write"],
"action": "SCMP_ACT_ALLOW"
}
]
}
```
The utility `spoc record` allows us to record complex seccomp profiles directly
from binary invocations in any Linux system which is capable of running the ebpf
code within the Kernel. But it can do more: How about modifying the seccomp
profile and then testing it by using `spoc run`.
## Running seccomp profiles with `spoc run`
`spoc` is also able to run binaries with applied seccomp profiles, making it
easy to test any modification to it. To do that, just run:
```console
> sudo ./spoc run ./main
10:29:58.153263 Reading file /tmp/profile.yaml
10:29:58.153311 Assuming YAML profile
10:29:58.154138 Setting up seccomp
10:29:58.154178 Load seccomp profile
10:29:58.154189 Starting audit log enricher
10:29:58.154224 Enricher reading from file /var/log/audit/audit.log
10:29:58.155356 Running command with PID: 437880
>
```
It looks like the application exited successfully, which is anticipated
because I have not modified the previously recorded profile yet. I can also
specify a custom location for the profile by using the `--profile`/`-p` flag,
but this was not necessary because I did not modify the default output location
from the record. `spoc` will automatically determine if it's a raw (JSON) or CRD
(YAML) based seccomp profile and then apply it to the process.
The Security Profiles Operator supports a [log enricher feature][enricher],
which provides additional seccomp related information by parsing the audit logs.
`spoc run` uses the enricher in the same way to provide more data to the end
users when it comes to debugging seccomp profiles.
[enricher]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/35ebdda/installation-usage.md#using-the-log-enricher
Now I have to modify the profile to see anything valuable in the output. For
example, I could remove the allowed `uname` syscall:
```console
> jq 'del(.syscalls[0].names[] | select(. == "uname"))' /tmp/profile.json > /tmp/no-uname-profile.json
```
And then try to run it again with the new profile `/tmp/no-uname-profile.json`:
```console
> sudo ./spoc run -p /tmp/no-uname-profile.json ./main
10:39:12.707798 Reading file /tmp/no-uname-profile.json
10:39:12.707892 Setting up seccomp
10:39:12.707920 Load seccomp profile
10:39:12.707982 Starting audit log enricher
10:39:12.707998 Enricher reading from file /var/log/audit/audit.log
10:39:12.709164 Running command with PID: 480512
panic: operation not permitted
goroutine 1 [running]:
main.main()
/path/to/main.go:10 +0x85
10:39:12.713035 Unable to run: launch runner: wait for command: exit status 2
```
Alright, that was expected! The applied seccomp profile blocks the `uname`
syscall, which results in an "operation not permitted" error. This error is
pretty generic and does not provide any hint on what got blocked by seccomp.
It is generally extremely difficult to predict how applications behave if single
syscalls are forbidden by seccomp. It could be possible that the application
terminates as in our simple demo, but it could also lead to strange
misbehavior where the application does not stop at all.
If I now change the default seccomp action of the profile from `SCMP_ACT_ERRNO`
to `SCMP_ACT_LOG` like this:
```console
> jq '.defaultAction = "SCMP_ACT_LOG"' /tmp/no-uname-profile.json > /tmp/no-uname-profile-log.json
```
Then the log enricher will give us a hint that the `uname` syscall got blocked
when using `spoc run`:
```console
> sudo ./spoc run -p /tmp/no-uname-profile-log.json ./main
10:48:07.470126 Reading file /tmp/no-uname-profile-log.json
10:48:07.470234 Setting up seccomp
10:48:07.470245 Load seccomp profile
10:48:07.470302 Starting audit log enricher
10:48:07.470339 Enricher reading from file /var/log/audit/audit.log
10:48:07.470889 Running command with PID: 522268
10:48:07.472007 Seccomp: uname (63)
```
The application will not terminate any more, but seccomp will log the behavior
to `/var/log/audit/audit.log` and `spoc` will parse the data to correlate it
directly to our program. Generating the log messages to the audit subsystem
comes with a large performance overhead and should be handled with care in
production systems. It also comes with a security risk when running untrusted
apps in audit mode in production environments.
This demo should give you an impression how to debug seccomp profile issues with
applications, probably by using our shiny new helper tool powered by the
features of the Security Profiles Operator. `spoc` is a flexible and portable
binary suitable for edge cases where resources are limited and even Kubernetes
itself may not be available with its full capabilities.
Thank you for reading this blog post! If you're interested in more, providing
feedback or asking for help, then feel free to get in touch with us directly via
[Slack (#security-profiles-operator)][slack] or the [mailing list][mail].
[slack]: https://kubernetes.slack.com/messages/security-profiles-operator
[mail]: https://groups.google.com/forum/#!forum/kubernetes-dev

View File

@ -0,0 +1,206 @@
---
layout: blog
title: "Using OCI artifacts to distribute security profiles for seccomp, SELinux and AppArmor"
date: 2023-05-24
slug: oci-security-profiles
---
**Author**: Sascha Grunert
The [Security Profiles Operator (SPO)][spo] makes managing seccomp, SELinux and
AppArmor profiles within Kubernetes easier than ever. It allows cluster
administrators to define the profiles in a predefined custom resource YAML,
which then gets distributed by the SPO into the whole cluster. Modification and
removal of the security profiles are managed by the operator in the same way,
but that's a small subset of its capabilities.
[spo]: https://github.com/kubernetes-sigs/security-profiles-operator
Another core feature of the SPO is being able to stack seccomp profiles. This
means that users can define a `baseProfileName` in the YAML specification, which
then gets automatically resolved by the operator and combines the syscall rules.
If a base profile has another `baseProfileName`, then the operator will
recursively resolve the profiles up to a certain depth. A common use case is to
define base profiles for low level container runtimes (like [runc][runc] or
[crun][crun]) which then contain syscalls which are required in any case to run
the container. Alternatively, application developers can define seccomp base
profiles for their standard distribution containers and stack dedicated profiles
for the application logic on top. This way developers can focus on maintaining
seccomp profiles which are way simpler and scoped to the application logic,
without having a need to take the whole infrastructure setup into account.
[runc]: https://github.com/opencontainers/runc
[crun]: https://github.com/containers/crun
But how to maintain those base profiles? For example, the amount of required
syscalls for a runtime can change over its release cycle in the same way it can
change for the main application. Base profiles have to be available in the same
cluster, otherwise the main seccomp profile will fail to deploy. This means that
they're tightly coupled to the main application profiles, which acts against the
main idea of base profiles. Distributing and managing them as plain files feels
like an additional burden to solve.
## OCI artifacts to the rescue
The [v0.8.0][spo-latest] release of the Security Profiles Operator supports
managing base profiles as OCI artifacts! Imagine OCI artifacts as lightweight
container images, storing files in layers in the same way images do, but without
a process to be executed. Those artifacts can be used to store security profiles
like regular container images in compatible registries. This means they can be
versioned, namespaced and annotated similar to regular container images.
[spo-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/v0.8.0
To see how that works in action, specify a `baseProfileName` prefixed with
`oci://` within a seccomp profile CRD, for example:
```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
name: test
spec:
defaultAction: SCMP_ACT_ERRNO
baseProfileName: oci://ghcr.io/security-profiles/runc:v1.1.5
syscalls:
- action: SCMP_ACT_ALLOW
names:
- uname
```
The operator will take care of pulling the content by using [oras][oras], as
well as verifying the [sigstore (cosign)][cosign] signatures of the artifact. If
the artifacts are not signed, then the SPO will reject them. The resulting
profile `test` will then contain all base syscalls from the remote `runc`
profile plus the additional allowed `uname` one. It is also possible to
reference the base profile by its digest (SHA256), which pins the exact
artifact to be pulled, for example by referencing
`oci://ghcr.io/security-profiles/runc@sha256:380…`.
[oras]: https://oras.land
[cosign]: https://github.com/sigstore/cosign
The operator internally caches pulled artifacts for up to 24 hours and for up to 1000
profiles, meaning that they will be refreshed after that time period, when the
cache is full, or when the operator daemon gets restarted.
Because the overall resulting syscalls are hidden from the user (I only have the
`baseProfileName` listed in the SeccompProfile, and not the syscalls themselves), I'll additionally
annotate that SeccompProfile with the final `syscalls`.
Here's how the SeccompProfile looks after I annotate it:
```console
> kubectl describe seccompprofile test
Name: test
Namespace: security-profiles-operator
Labels: spo.x-k8s.io/profile-id=SeccompProfile-test
Annotations: syscalls:
[{"names":["arch_prctl","brk","capget","capset","chdir","clone","close",...
API Version: security-profiles-operator.x-k8s.io/v1beta1
```
The SPO maintainers provide all public base profiles as part of the [“Security
Profiles” GitHub organization][org].
[org]: https://github.com/orgs/security-profiles/packages
## Managing OCI security profiles
Alright, now the official SPO provides a bunch of base profiles, but how can I
define my own? Well, first of all we have to choose a working registry. There
are a bunch of registries that already support OCI artifacts:
- [CNCF Distribution](https://github.com/distribution/distribution)
- [Azure Container Registry](https://aka.ms/acr)
- [Amazon Elastic Container Registry](https://aws.amazon.com/ecr)
- [Google Artifact Registry](https://cloud.google.com/artifact-registry)
- [GitHub Packages container registry](https://docs.github.com/en/packages/guides/about-github-container-registry)
- [Bundle Bar](https://bundle.bar/docs/supported-clients/oras)
- [Docker Hub](https://hub.docker.com)
- [Zot Registry](https://zotregistry.io)
The Security Profiles Operator ships a new command line interface called `spoc`,
which is a little helper tool for managing OCI profiles among doing various other
things which are out of scope of this blog post. But, the command `spoc push`
can be used to push a security profile to a registry:
```
> export USERNAME=my-user
> export PASSWORD=my-pass
> spoc push -f ./examples/baseprofile-crun.yaml ghcr.io/security-profiles/crun:v1.8.3
16:35:43.899886 Pushing profile ./examples/baseprofile-crun.yaml to: ghcr.io/security-profiles/crun:v1.8.3
16:35:43.899939 Creating file store in: /tmp/push-3618165827
16:35:43.899947 Adding profile to store: ./examples/baseprofile-crun.yaml
16:35:43.900061 Packing files
16:35:43.900282 Verifying reference: ghcr.io/security-profiles/crun:v1.8.3
16:35:43.900310 Using tag: v1.8.3
16:35:43.900313 Creating repository for ghcr.io/security-profiles/crun
16:35:43.900319 Using username and password
16:35:43.900321 Copying profile to repository
16:35:46.976108 Signing container image
Generating ephemeral keys...
Retrieving signed certificate...
Note that there may be personally identifiable information associated with this signed artifact.
This may include the email address associated with the account with which you authenticate.
This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.
By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
Your browser will now be opened to:
https://oauth2.sigstore.dev/auth/auth?access_type=…
Successfully verified SCT...
tlog entry created with index: 16520520
Pushing signature to: ghcr.io/security-profiles/crun
```
You can see that the tool automatically signs the artifact and pushes the
`./examples/baseprofile-crun.yaml` to the registry, which is then directly ready
for usage within the SPO. If username and password authentication is required,
either use the `--username`, `-u` flag or export the `USERNAME` environment
variable. To set the password, export the `PASSWORD` environment variable.
It is possible to add custom annotations to the security profile by using the
`--annotations` / `-a` flag multiple times in `KEY:VALUE` format. Those have no
effect for now, but at some later point additional features of the operator may
rely on them.
The `spoc` client is also able to pull security profiles from OCI artifact
compatible registries. To do that, just run `spoc pull`:
```console
> spoc pull ghcr.io/security-profiles/runc:v1.1.5
16:32:29.795597 Pulling profile from: ghcr.io/security-profiles/runc:v1.1.5
16:32:29.795610 Verifying signature
Verification for ghcr.io/security-profiles/runc:v1.1.5 --
The following checks were performed on each of these signatures:
- Existence of the claims in the transparency log was verified offline
- The code-signing certificate was verified using trusted certificate authority certificates
[{"critical":{"identity":{"docker-reference":"ghcr.io/security-profiles/runc"},…}}]
16:32:33.208695 Creating file store in: /tmp/pull-3199397214
16:32:33.208713 Verifying reference: ghcr.io/security-profiles/runc:v1.1.5
16:32:33.208718 Creating repository for ghcr.io/security-profiles/runc
16:32:33.208742 Using tag: v1.1.5
16:32:33.208743 Copying profile from repository
16:32:34.119652 Reading profile
16:32:34.119677 Trying to unmarshal seccomp profile
16:32:34.120114 Got SeccompProfile: runc-v1.1.5
16:32:34.120119 Saving profile in: /tmp/profile.yaml
```
The profile can now be found in `/tmp/profile.yaml` or in the output file specified
via `--output-file` / `-o`. We can specify a username and password in the same way
as for `spoc push`.
`spoc` makes it easy to manage security profiles as OCI artifacts, which can be
then consumed directly by the operator itself.
That was our compact journey through the latest possibilities of the Security
Profiles Operator! If you're interested in more, providing feedback or asking
for help, then feel free to get in touch with us directly via [Slack
(#security-profiles-operator)][slack] or [the mailing list][mail].
[slack]: https://kubernetes.slack.com/messages/security-profiles-operator
[mail]: https://groups.google.com/forum/#!forum/kubernetes-dev

View File

@ -0,0 +1,94 @@
---
layout: blog
title: "dl.k8s.io to adopt a Content Delivery Network"
date: 2023-06-09
slug: dl-adopt-cdn
---
**Authors**: Arnaud Meukam (VMware), Hannah Aubry (Fastly), Frederico
Muñoz (SAS Institute)
We're happy to announce that dl.k8s.io, home of the official Kubernetes
binaries, will soon be powered by [Fastly](https://www.fastly.com).
Fastly is known for its high-performance content delivery network (CDN) designed
to deliver content quickly and reliably around the world. With its powerful
network, Fastly will help us deliver official Kubernetes binaries to users
faster and more reliably than ever before.
The decision to use Fastly was made after an extensive evaluation process in
which we carefully compared several potential content delivery network
providers. Ultimately, we chose Fastly because of their commitment to the open
internet and their proven track record of delivering fast and secure digital
experiences to some of the best-known open source projects (through their [Fast
Forward](https://www.fastly.com/fast-forward) program).
## What you need to know about this change
- On Monday, July 24th, the IP addresses and backend storage associated with the
dl.k8s.io domain name will change.
- The change will not impact the vast majority of users since the domain
name will remain the same.
- If you restrict access to specific IP ranges, access to the dl.k8s.io domain
could stop working.
If you think you may be impacted or want to know more about this change,
please keep reading.
## Why are we making this change?
The official Kubernetes binaries site, dl.k8s.io, is used by thousands of users
all over the world, and currently serves _more than 5 petabytes of binaries each
month_. This change will allow us to improve access to those resources by
leveraging a world-wide CDN.
## Does this affect dl.k8s.io only, or are other domains also affected?
Only dl.k8s.io will be affected by this change.
## My company specifies the domain names that we are allowed to access. Will this change affect the domain name?
No, the domain name (`dl.k8s.io`) will remain the same: no change will be
necessary, and access to the Kubernetes release binaries site should not be
affected.
## My company uses some form of IP filtering. Will this change affect access to the site?
If IP-based filtering is in place, it's possible that access to the site will be
affected when the new IP addresses become active.
## If my company doesn't use IP addresses to restrict network traffic, do we need to do anything?
No, the switch to the CDN should be transparent.
## Will there be a dual running period?
**No, it is a cutover.** You can, however, test your networks right now to check
if they can route to the new public IP addresses from Fastly. You should add
the new IPs to your network's `allowlist` before July 24th. Once the transfer is
complete, ensure your networks use the new IP addresses to connect to
the `dl.k8s.io` service.
## What are the new IP addresses?
If you need to manage an allow list for downloads, you can get the ranges to
match from the Fastly API, in JSON: [public IP address
ranges](https://api.fastly.com/public-ip-list). You don't need any credentials
to download that list of ranges.
## What next steps would you recommend?
If you have IP-based filtering in place, we recommend the following course of
action **before July 24th**:
- Add the new IP addresses to your allowlist.
- Conduct tests with your networks/firewall to ensure your networks can route to
the new IP addresses.
After the change is made, we recommend double-checking that HTTP calls are
accessing dl.k8s.io with the new IP addresses.
## What should I do if I detect some abnormality after the cutover date?
If you encounter any weirdness during binaries download, please [open an
issue](https://github.com/kubernetes/k8s.io/issues/new/choose).

View File

@ -26,8 +26,7 @@ each Node in your cluster, so that the
The kubelet acts as a client when connecting to the container runtime via gRPC.
The runtime and image service endpoints have to be available in the container
runtime, which can be configured separately within the kubelet by using the
`--image-service-endpoint` and `--container-runtime-endpoint` [command line
flags](/docs/reference/command-line-tools-reference/kubelet)
`--image-service-endpoint` [command line flags](/docs/reference/command-line-tools-reference/kubelet).
For Kubernetes v{{< skew currentVersion >}}, the kubelet prefers to use CRI `v1`.
If a container runtime does not support `v1` of the CRI, then the kubelet tries to

View File

@ -118,7 +118,7 @@ break the kubelet behavior and remove containers that should exist.
To configure options for unused container and image garbage collection, tune the
kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
and change the parameters related to garbage collection using the
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
resource type.
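As a hedged illustration, the image garbage collection thresholds could be tuned in a kubelet configuration file like this; the percentages are example values, not recommendations:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85   # disk usage above this triggers image garbage collection
imageGCLowThresholdPercent: 80    # garbage collection frees space until usage drops below this
```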
### Container image lifecycle

View File

@ -506,7 +506,7 @@ in a cluster,
|`custom-class-c` | 1000 |
|`regular/unset` | 0 |
Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/)
the settings for `shutdownGracePeriodByPodPriority` could look like:
|Pod priority class value|Shutdown period|
@ -590,7 +590,7 @@ VolumeAttachments will not be deleted from the original shutdown node so the vol
used by these pods cannot be attached to a new running node. As a result, the
application running on the StatefulSet cannot function properly. If the original
shutdown node comes up, the pods will be deleted by kubelet and new pods will be
created on a different running node. If the original shutdown node does not come up,
created on a different running node. If the original shutdown node does not come up,
these pods will be stuck in terminating status on the shutdown node forever.
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
@ -625,7 +625,7 @@ onwards, swap memory support can be enabled on a per-node basis.
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
must be set to false.
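A minimal sketch of the corresponding kubelet configuration file, assuming you configure the kubelet through a config file rather than command line flags:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true     # enable the swap memory support feature gate
failSwapOn: false    # let the kubelet start on a node that has swap enabled
```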
{{< warning >}}

View File

@ -193,7 +193,7 @@ A PriorityLevelConfiguration represents a single priority level. Each
PriorityLevelConfiguration has an independent limit on the number of outstanding
requests, and limitations on the number of queued requests.
The nominal oncurrency limit for a PriorityLevelConfiguration is not
The nominal concurrency limit for a PriorityLevelConfiguration is not
specified in an absolute number of seats, but rather in "nominal
concurrency shares." The total concurrency limit for the API Server is
distributed among the existing PriorityLevelConfigurations in

View File

@ -81,15 +81,16 @@ See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl
![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
A container runtime handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
Different container runtimes implement this in different ways; however, the integration with the kubelet is standardized
as the _CRI logging format_.
A container runtime handles and redirects any output generated to a containerized
application's `stdout` and `stderr` streams.
Different container runtimes implement this in different ways; however, the integration
with the kubelet is standardized as the _CRI logging format_.
By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node,
all corresponding containers are also evicted, along with their logs.
By default, if a container restarts, the kubelet keeps one terminated container with its logs.
If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
The kubelet makes logs available to clients via a special feature of the Kubernetes API. The usual way to access this is
by running `kubectl logs`.
The kubelet makes logs available to clients via a special feature of the Kubernetes API.
The usual way to access this is by running `kubectl logs`.
### Log rotation
@ -101,7 +102,7 @@ If you configure rotation, the kubelet is responsible for rotating container log
The kubelet sends this information to the container runtime (using CRI),
and the runtime writes the container logs to the given location.
You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration),
You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
`containerLogMaxSize` and `containerLogMaxFiles`,
using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
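For example, the relevant snippet of the kubelet configuration file might look like the following; the values shown are illustrative:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi    # rotate a container log file once it reaches this size
containerLogMaxFiles: 5      # keep at most this many log files per container
```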
@ -201,7 +202,8 @@ as your responsibility.
## Cluster-level logging architectures
While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:
While Kubernetes does not provide a native solution for cluster-level logging, there are
several common approaches you can consider. Here are some options:
* Use a node-level logging agent that runs on every node.
* Include a dedicated sidecar container for logging in an application pod.
@ -211,14 +213,18 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
![Using a node level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)
You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
You can implement cluster-level logging by including a _node-level logging agent_ on each node.
The logging agent is a dedicated tool that exposes logs or pushes logs to a backend.
Commonly, the logging agent is a container that has access to a directory with log files from all of the
application containers on that node.
Because the logging agent must run on every node, it is recommended to run the agent
as a `DaemonSet`.
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
Containers write to stdout and stderr, but with no agreed format. A node-level agent collects
these logs and forwards them for aggregation.
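A hedged sketch of what such a node-level agent could look like as a DaemonSet; the agent image and the exact host paths are placeholders rather than a specific recommendation:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                # hypothetical agent name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: example.com/log-agent:latest   # placeholder logging agent image
        volumeMounts:
        - name: varlog
          mountPath: /var/log                 # node directory containing container log files
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```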
### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}

View File

@ -11,17 +11,16 @@ feature:
<!-- overview -->
When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how
much of each resource a {{< glossary_tooltip text="container" term_id="container" >}} needs.
The most common resources to specify are CPU and memory (RAM); there are others.
When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how much of each resource a
{{< glossary_tooltip text="container" term_id="container" >}} needs. The most common resources to specify are CPU and memory
(RAM); there are others.
When you specify the resource _request_ for containers in a Pod, the
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this
information to decide which node to place the Pod on. When you specify a resource _limit_
for a container, the kubelet enforces those limits so that the running container is not
allowed to use more of that resource than the limit you set. The kubelet also reserves
at least the _request_ amount of that system resource specifically for that container
to use.
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this information to decide which node to place the Pod on.
When you specify a resource _limit_ for a container, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} enforces those
limits so that the running container is not allowed to use more of that resource
than the limit you set. The kubelet also reserves at least the _request_ amount of
that system resource specifically for that container to use.
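For instance, requests and limits are declared per container in the Pod spec; the quantities below are examples only:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo                  # example Pod name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image for illustration
    resources:
      requests:
        cpu: 250m        # used by kube-scheduler when choosing a node
        memory: 64Mi
      limits:
        cpu: 500m        # enforced by the kubelet at runtime
        memory: 128Mi
```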
<!-- body -->
@ -257,6 +256,18 @@ Your applications cannot expect any performance SLAs (disk IOPS for example)
from local ephemeral storage.
{{< /caution >}}
{{< note >}}
To make the resource quota work on ephemeral-storage, two things need to be done:
* An admin sets the resource quota for ephemeral-storage in a namespace.
* A user needs to specify limits for the ephemeral-storage resource in the Pod spec.
If the user doesn't specify the ephemeral-storage resource limit in the Pod spec,
the resource quota is not enforced on ephemeral-storage.
{{< /note >}}
Kubernetes lets you track, reserve and limit the amount
of ephemeral local storage a Pod can consume.
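A minimal sketch of the two pieces described in the note above, with example names and quantities:
```yaml
# Admin side: a ResourceQuota covering ephemeral-storage in a namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota
  namespace: example-ns                # hypothetical namespace
spec:
  hard:
    requests.ephemeral-storage: 10Gi
    limits.ephemeral-storage: 20Gi
---
# User side: the Pod must set an ephemeral-storage limit for the quota to be enforced
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
  namespace: example-ns
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        ephemeral-storage: 1Gi
      limits:
        ephemeral-storage: 2Gi
```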

View File

@ -458,9 +458,22 @@ common use cases and suggested solutions.
If you need access to multiple registries, you can create one secret for each registry.
## Legacy built-in kubelet credential provider
In older versions of Kubernetes, the kubelet had a direct integration with cloud provider credentials.
This gave it the ability to dynamically fetch credentials for image registries.
There were three built-in implementations of the kubelet credential provider integration:
ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry).
For more information on the legacy mechanism, read the documentation for the version of Kubernetes that you
are using. Kubernetes v1.26 through to v{{< skew latestVersion >}} do not include the legacy mechanism, so
you would need to either:
- configure a kubelet image credential provider on each node
- specify image pull credentials using `imagePullSecrets` and at least one Secret
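A minimal sketch of the second option: assuming you have already created a Secret of type `kubernetes.io/dockerconfigjson` (the Secret name and image are placeholders), a Pod can reference it through `imagePullSecrets`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  imagePullSecrets:
  - name: my-registry-credentials   # placeholder Secret of type kubernetes.io/dockerconfigjson
  containers:
  - name: app
    image: registry.example.com/team/app:1.0.0   # placeholder private image
```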
## {{% heading "whatsnext" %}}
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md).
* Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection).
* Learn more about [pulling an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry).

View File

@ -283,6 +283,20 @@ and to support other aspects of the Kubernetes network model.
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
allow Kubernetes to work with different networking topologies and technologies.
### Kubelet image credential provider plugins
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
Kubelet image credential providers are plugins for the kubelet to dynamically retrieve image registry
credentials. The credentials are then used when pulling images from container image registries that
match the configuration.
The plugins can communicate with external services or use local files to obtain credentials. This way,
the kubelet does not need to have static credentials for each registry, and can support various
authentication methods and protocols.
For plugin configuration details, see
[Configure a kubelet image credential provider](/docs/tasks/administer-cluster/kubelet-credential-provider/).
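As a rough sketch, the kubelet is pointed at a configuration file (via the `--image-credential-provider-config` and `--image-credential-provider-bin-dir` flags) that looks something like the following; the provider name, image pattern, and argument are placeholders for whatever plugin you install:

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: example-credential-provider     # executable name in the plugin bin dir (placeholder)
    matchImages:
      - "*.registry.example.com"          # images this plugin provides credentials for
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    args:
      - get-credentials                   # hypothetical argument passed to the plugin
```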
## Scheduling extensions
The scheduler is a special type of controller that watches pods, and assigns

View File

@ -122,6 +122,12 @@ about containers in a central database, and provides a UI for browsing that data
A [cluster-level logging](/docs/concepts/cluster-administration/logging/) mechanism is responsible for
saving container logs to a central log store with search/browsing interface.
### Network Plugins
[Network plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins) are software
components that implement the container network interface (CNI) specification. They are responsible for
allocating IP addresses to pods and enabling them to communicate with each other within the cluster.
## {{% heading "whatsnext" %}}

View File

@ -120,6 +120,31 @@ satisfy the StatefulSet specification.
Different kinds of object can also have different `.status`; again, the API reference pages
detail the structure of that `.status` field, and its content for each different type of object.
## Server side field validation
Starting with Kubernetes v1.25, the API server offers server side
[field validation](/docs/reference/using-api/api-concepts/#field-validation)
that detects unrecognized or duplicate fields in an object. It provides all the functionality
of `kubectl --validate` on the server side.
The `kubectl` tool uses the `--validate` flag to set the level of field validation. It accepts the
values `ignore`, `warn`, and `strict` while also accepting the values `true` (equivalent to `strict`)
and `false` (equivalent to `ignore`). The default validation setting for `kubectl` is `--validate=true`.
`Strict`
: Strict field validation, errors on validation failure
`Warn`
: Field validation is performed, but errors are exposed as warnings rather than failing the request
`Ignore`
: No server side field validation is performed
When `kubectl` cannot connect to an API server that supports field validation it will fall back
to using client-side validation. Kubernetes 1.27 and later versions always offer field validation;
older Kubernetes releases might not. If your cluster is older than v1.27, check the documentation
for your version of Kubernetes.
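For illustration, strict validation would reject a manifest like the sketch below, because `replica` (a deliberate typo for `replicas`) is not a recognized field of the Deployment spec; with `warn`, the same typo is reported as a warning instead:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: validation-demo
spec:
  replica: 3        # typo: strict field validation fails on this unknown field
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
```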
## {{% heading "whatsnext" %}}
If you're new to Kubernetes, read more about the following:

View File

@ -247,7 +247,7 @@ The set of pods that a `service` targets is defined with a label selector.
Similarly, the population of pods that a `replicationcontroller` should
manage is also defined with a label selector.
Label selectors for both objects are defined in `json` or `yaml` files using maps,
and only _equality-based_ requirement selectors are supported:
```json

View File

@ -1,11 +1,68 @@
---
title: "Policies"
weight: 90
no_list: true
description: >
  Manage security and best-practices with policies.
---
{{< note >}}
See [Network Policies](/docs/concepts/services-networking/network-policies/)
for documentation about NetworkPolicy in Kubernetes.
{{< /note >}}
<!-- overview -->
Kubernetes policies are configurations that manage other configurations or runtime behaviors. Kubernetes offers various forms of policies, described below:
<!-- body -->
## Apply policies using API objects
Some API objects act as policies. Here are some examples:
* [NetworkPolicies](/docs/concepts/services-networking/network-policies/) can be used to restrict ingress and egress traffic for a workload.
* [LimitRanges](/docs/concepts/policy/limit-range/) manage resource allocation constraints across different object kinds.
* [ResourceQuotas](/docs/concepts/policy/resource-quotas/) limit resource consumption for a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
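As one concrete sketch, a LimitRange like the following (the namespace and values are illustrative) gives every container in the `dev` namespace default requests and limits unless the Pod specifies its own:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources
  namespace: dev          # hypothetical namespace
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 500m
      memory: 256Mi
```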
## Apply policies using admission controllers
An {{< glossary_tooltip text="admission controller" term_id="admission-controller" >}}
runs in the API server
and can validate or mutate API requests. Some admission controllers act to apply policies.
For example, the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller modifies a new Pod to set the image pull policy to `Always`.
Kubernetes has several built-in admission controllers that are configurable via the API server `--enable-admission-plugins` flag.
Details on admission controllers, with the complete list of available admission controllers, are documented in a dedicated section:
* [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
## Apply policies using ValidatingAdmissionPolicy
Validating admission policies allow configurable validation checks to be executed in the API server using the Common Expression Language (CEL). For example, a `ValidatingAdmissionPolicy` can be used to disallow use of the `latest` image tag.
A `ValidatingAdmissionPolicy` operates on an API request and can be used to block, audit, and warn users about non-compliant configurations.
Details on the `ValidatingAdmissionPolicy` API, with examples, are documented in a dedicated section:
* [Validating Admission Policy](/docs/reference/access-authn-authz/validating-admission-policy/)
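A rough sketch of such a policy follows. The API group/version shown is the alpha one, so check which version your cluster serves, and note that a separate `ValidatingAdmissionPolicyBinding` is required before the policy takes effect. The CEL expression simply rejects Deployments whose containers use the `latest` tag.

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: disallow-latest-tag
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.template.spec.containers.all(c, !c.image.endsWith(':latest'))"
    message: "container images must not use the ':latest' tag"
```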
## Apply policies using dynamic admission control
Dynamic admission controllers (or admission webhooks) run outside the API server as separate applications that register to receive webhook requests to perform validation or mutation of API requests.
Dynamic admission controllers can be used to apply policies on API requests and trigger other policy-based workflows. A dynamic admission controller can perform complex checks including those that require retrieval of other cluster resources and external data. For example, an image verification check can look up data from OCI registries to validate the container image signatures and attestations.
Details on dynamic admission control are documented in a dedicated section:
* [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
### Implementations {#implementations-admission-control}
{{% thirdparty-content %}}
Dynamic Admission Controllers that act as flexible policy engines are being developed in the Kubernetes ecosystem, such as:
- [Kubewarden](https://github.com/kubewarden)
- [Kyverno](https://kyverno.io)
- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper)
- [Polaris](https://polaris.docs.fairwinds.com/admission-controller/)
## Apply policies using Kubelet configurations
Kubernetes allows configuring the Kubelet on each worker node. Some Kubelet configurations act as policies:
* [Process ID limits and reservations](/docs/concepts/policy/pid-limiting/) are used to limit and reserve allocatable PIDs.
* [Node Resource Managers](/docs/concepts/policy/node-resource-managers/) can manage compute, memory, and device resources for latency-critical and high-throughput workloads.
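For instance, a PID limit is expressed directly in the kubelet configuration file; a minimal sketch (the value is illustrative) looks like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096        # maximum number of PIDs any single Pod may use
```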

View File

@ -135,6 +135,9 @@ You can use the `operator` field to specify a logical operator for Kubernetes to
interpreting the rules. You can use `In`, `NotIn`, `Exists`, `DoesNotExist`,
`Gt` and `Lt`.
Read [Operators](#operators)
to learn more about how these work.
`NotIn` and `DoesNotExist` allow you to define node anti-affinity behavior.
Alternatively, you can use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/)
to repel Pods from specific nodes.
@ -310,6 +313,9 @@ refer to the [design proposal](https://git.k8s.io/design-proposals-archive/sched
You can use the `In`, `NotIn`, `Exists` and `DoesNotExist` values in the
`operator` field for Pod affinity and anti-affinity.
Read [Operators](#operators)
to learn more about how these work.
In principle, the `topologyKey` can be any allowed label key with the following
exceptions for performance and security reasons:
@ -492,6 +498,31 @@ overall utilization.
Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
to learn more about how these work.
## Operators
The following are all the logical operators that you can use in the `operator` field for `nodeAffinity` and `podAffinity` mentioned above.
| Operator | Behavior |
| :------------: | :-------------: |
| `In` | The label value is present in the supplied set of strings |
| `NotIn` | The label value is not contained in the supplied set of strings |
| `Exists` | A label with this key exists on the object |
| `DoesNotExist` | No label with this key exists on the object |
The following operators can only be used with `nodeAffinity`.
| Operator | Behavior |
| :------------: | :-------------: |
| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
{{< note >}}
`Gt` and `Lt` operators will not work with non-integer values. If the given value
doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
are not available for `podAffinity`.
{{< /note >}}
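As a sketch of `Gt` in practice, the following Pod only schedules onto nodes whose label value parses to an integer greater than 6; the `example.com/cpu-generation` node label is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gt-affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.com/cpu-generation   # hypothetical node label
            operator: Gt
            values:
            - "6"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```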
## {{% heading "whatsnext" %}}
- Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).

View File

@ -10,7 +10,6 @@ weight: 50
<!-- overview -->
This page provides an overview of controlling access to the Kubernetes API.
<!-- body -->
Users access the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) using `kubectl`,
client libraries, or by making REST requests. Both human users and
@ -23,11 +22,15 @@ following diagram:
## Transport security
By default, the Kubernetes API server listens on port 6443 on the first non-localhost
network interface, protected by TLS. In a typical production Kubernetes cluster, the
API serves on port 443. The port can be changed with the `--secure-port`, and the
listening IP address with the `--bind-address` flag.
The API server presents a certificate. This certificate may be signed using
a private certificate authority (CA), or based on a public key infrastructure linked
to a generally recognized CA. The certificate and corresponding private key can be set
by using the `--tls-cert-file` and `--tls-private-key-file` flags.
If your cluster uses a private certificate authority, you need a copy of that CA
certificate configured into your `~/.kube/config` on the client, so that you can
@ -65,9 +68,12 @@ users in its API.
## Authorization
After the request is authenticated as coming from a specific user, the request must
be authorized. This is shown as step **2** in the diagram.
A request must include the username of the requester, the requested action, and
the object affected by the action. The request is authorized if an existing policy
declares that the user has permissions to complete the requested action.
For example, if Bob has the policy below, then he can read pods only in the namespace `projectCaribou`:
@ -83,7 +89,9 @@ For example, if Bob has the policy below, then he can read pods only in the name
}
}
```
If Bob makes the following request, the request is authorized because he is
allowed to read objects in the `projectCaribou` namespace:
```json
{
@ -99,14 +107,25 @@ If Bob makes the following request, the request is authorized because he is allo
}
}
```
If Bob makes a request to write (`create` or `update`) to the objects in the
`projectCaribou` namespace, his authorization is denied. If Bob makes a request
to read (`get`) objects in a different namespace such as `projectFish`, then his authorization is denied.
Kubernetes authorization requires that you use common REST attributes to interact
with existing organization-wide or cloud-provider-wide access control systems.
It is important to use REST formatting because these control systems might
interact with other APIs besides the Kubernetes API.
Kubernetes supports multiple authorization modules, such as ABAC mode, RBAC Mode,
and Webhook mode. When an administrator creates a cluster, they configure the
authorization modules that should be used in the API server. If more than one
authorization module is configured, Kubernetes checks each module, and if
any module authorizes the request, then the request can proceed. If all of
the modules deny the request, then the request is denied (HTTP status code 403).
To learn more about Kubernetes authorization, including details about creating
policies using the supported authorization modules, see [Authorization](/docs/reference/access-authn-authz/authorization/).
## Admission control

View File

@ -41,6 +41,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md) is an [Easegress](https://megaease.com/easegress/) based API gateway that can run as an ingress controller.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [FortiADC Ingress Controller](https://docs.fortinet.com/document/fortiadc/7.0.0/fortiadc-ingress-controller-1-0/742835/fortiadc-ingress-controller-overview) supports the Kubernetes Ingress resources and allows you to manage FortiADC objects from Kubernetes.
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for

View File

@ -15,7 +15,6 @@ weight: 30
{{< feature-state for_k8s_version="v1.19" state="stable" >}}
{{< glossary_definition term_id="ingress" length="all" >}}
<!-- body -->
## Terminology
@ -23,14 +22,21 @@ weight: 30
For clarity, this guide defines the following terms:
* Node: A worker machine in Kubernetes, part of a cluster.
* Cluster: A set of Nodes that run containerized applications managed by Kubernetes.
For this example, and in most common Kubernetes deployments, nodes in the cluster
are not part of the public internet.
* Edge router: A router that enforces the firewall policy for your cluster. This
could be a gateway managed by a cloud provider or a physical piece of hardware.
* Cluster network: A set of links, logical or physical, that facilitate communication
within a cluster according to the Kubernetes [networking model](/docs/concepts/cluster-administration/networking/).
* Service: A Kubernetes {{< glossary_tooltip term_id="service" >}} that identifies
a set of Pods using {{< glossary_tooltip text="label" term_id="label" >}} selectors.
Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
## What is Ingress?
[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1-networking-k8s-io)
exposes HTTP and HTTPS routes from outside the cluster to
{{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
@ -38,7 +44,11 @@ Here is a simple example where an Ingress sends all its traffic to one Service:
{{< figure src="/docs/images/ingress.svg" alt="ingress-diagram" class="diagram-large" caption="Figure. Ingress" link="https://mermaid.live/edit#pako:eNqNkstuwyAQRX8F4U0r2VHqPlSRKqt0UamLqlnaWWAYJygYLB59KMm_Fxcix-qmGwbuXA7DwAEzzQETXKutof0Ovb4vaoUQkwKUu6pi3FwXM_QSHGBt0VFFt8DRU2OWSGrKUUMlVQwMmhVLEV1Vcm9-aUksiuXRaO_CEhkv4WjBfAgG1TrGaLa-iaUw6a0DcwGI-WgOsF7zm-pN881fvRx1UDzeiFq7ghb1kgqFWiElyTjnuXVG74FkbdumefEpuNuRu_4rZ1pqQ7L5fL6YQPaPNiFuywcG9_-ihNyUkm6YSONWkjVNM8WUIyaeOJLO3clTB_KhL8NQDmVe-OJjxgZM5FhFiiFTK5zjDkxHBQ9_4zB4a-x20EGNSZhyaKmXrg7f5hSsvufUwTMXThtMWiot5Jh6p9ffimHijIezaSVoeN0uiqcfMJvf7w" >}}
An Ingress may be configured to give Services externally-reachable URLs,
load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An [Ingress controller](/docs/concepts/services-networking/ingress-controllers)
is responsible for fulfilling the Ingress, usually with a load balancer, though
it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#type-nodeport) or
@ -46,10 +56,11 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin
## Prerequisites
You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers)
to satisfy an Ingress. Only creating an Ingress resource has no effect.
You may need to deploy an Ingress controller such as [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/).
You can choose from a number of [Ingress controllers](/docs/concepts/services-networking/ingress-controllers).
Ideally, all Ingress controllers should fit the reference specification. In reality, the various Ingress
controllers operate slightly differently.
@ -68,10 +79,10 @@ An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
The name of an Ingress object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/examples/rewrite/README.md).
Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations.
Review the documentation for your choice of Ingress controller to learn which annotations are supported.
The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
has all the information needed to configure a load balancer or proxy server. Most importantly, it
@ -100,7 +111,8 @@ Each HTTP rule contains the following information:
incoming request before the load balancer directs traffic to the referenced
Service.
* A backend is a combination of Service and port names as described in the
[Service doc](/docs/concepts/services-networking/service/) or a [custom resource backend](#resource-backend)
by way of a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}. HTTP (and HTTPS) requests to the
Ingress that match the host and path of the rule are sent to the listed backend.
A `defaultBackend` is often configured in an Ingress controller to service any requests that do not
@ -168,9 +180,11 @@ supported path types:
match for path _p_ if every _p_ is an element-wise prefix of _p_ of the
request path.
{{< note >}}
If the last element of the path is a substring of the last
element in request path, it is not a match (for example: `/foo/bar`
matches `/foo/bar/baz`, but does not match `/foo/barbaz`).
{{< /note >}}
### Examples
@ -196,12 +210,14 @@ supported path types:
| Mixed | `/foo` (Prefix), `/foo` (Exact) | `/foo` | Yes, prefers Exact |
#### Multiple matches
In some cases, multiple paths within an Ingress will match a request. In those
cases precedence will be given first to the longest matching path. If two paths
are still equally matched, precedence will be given to paths with an exact path
type over prefix path type.
## Hostname wildcards
Hosts can be precise matches (for example “`foo.bar.com`”) or a wildcard (for
example “`*.foo.com`”). Precise matches require that the HTTP `host` header
matches the `host` field. Wildcard matches require the HTTP `host` header is
@ -248,6 +264,7 @@ the `name` of the parameters identifies a specific cluster scoped
resource for that API.
For example:
```yaml
---
apiVersion: networking.k8s.io/v1
@ -266,6 +283,7 @@ spec:
kind: ClusterIngressParameter
name: external-config-1
```
{{% /tab %}}
{{% tab name="Namespaced" %}}
{{< feature-state for_k8s_version="v1.23" state="stable" >}}
@ -295,6 +313,7 @@ The IngressClass API itself is always cluster-scoped.
Here is an example of an IngressClass that refers to parameters that are
namespaced:
```yaml
---
apiVersion: networking.k8s.io/v1
@ -390,8 +409,7 @@ down to a minimum. For example, a setup like:
{{< figure src="/docs/images/ingressFanOut.svg" alt="ingress-fanout-diagram" class="diagram-large" caption="Figure. Ingress Fan Out" link="https://mermaid.live/edit#pako:eNqNUslOwzAQ_RXLvYCUhMQpUFzUUzkgcUBwbHpw4klr4diR7bCo8O8k2FFbFomLPZq3jP00O1xpDpjijWHtFt09zAuFUCUFKHey8vf6NE7QrdoYsDZumGIb4Oi6NAskNeOoZJKpCgxK4oXwrFVgRyi7nCVXWZKRPMlysv5yD6Q4Xryf1Vq_WzDPooJs9egLNDbolKTpT03JzKgh3zWEztJZ0Niu9L-qZGcdmAMfj4cxvWmreba613z9C0B-AMQD-V_AdA-A4j5QZu0SatRKJhSqhZR0wjmPrDP6CeikrutQxy-Cuy2dtq9RpaU2dJKm6fzI5Glmg0VOLio4_5dLjx27hFSC015KJ2VZHtuQvY2fuHcaE43G0MaCREOow_FV5cMxHZ5-oPX75UM5avuXhXuOI9yAaZjg_aLuBl6B3RYaKDDtSw4166QrcKE-emrXcubghgunDaY1kxYizDqnH99UhakzHYykpWD9hjS--fEJoIELqQ" >}}
It would require an Ingress such as:
{{< codenew file="service/networking/simple-fanout-example.yaml" >}}
@ -435,7 +453,6 @@ Name-based virtual hosts support routing HTTP traffic to multiple host names at
{{< figure src="/docs/images/ingressNameBased.svg" alt="ingress-namebase-diagram" class="diagram-large" caption="Figure. Ingress Name Based Virtual hosting" link="https://mermaid.live/edit#pako:eNqNkl9PwyAUxb8KYS-atM1Kp05m9qSJJj4Y97jugcLtRqTQAPVPdN_dVlq3qUt8gZt7zvkBN7xjbgRgiteW1Rt0_zjLNUJcSdD-ZBn21WmcoDu9tuBcXDHN1iDQVWHnSBkmUMEU0xwsSuK5DK5l745QejFNLtMkJVmSZmT1Re9NcTz_uDXOU1QakxTMJtxUHw7ss-SQLhehQEODTsdH4l20Q-zFyc84-Y67pghv5apxHuweMuj9eS2_NiJdPhix-kMgvwQShOyYMNkJoEUYM3PuGkpUKyY1KqVSdCSEiJy35gnoqCzLvo5fpPAbOqlfI26UsXQ0Ho9nB5CnqesRGTnncPYvSqsdUvqp9KRdlI6KojjEkB0mnLgjDRONhqENBYm6oXbLV5V1y6S7-l42_LowlIN2uFm_twqOcAW2YlK0H_i9c-bYb6CCHNO2FFCyRvkc53rbWptaMA83QnpjMS2ZchBh1nizeNMcU28bGEzXkrV_pArN7Sc0rBTu" >}}
The following Ingress tells the backing load balancer to route requests based on
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
@ -446,7 +463,9 @@ web traffic to the IP address of your Ingress controller can be matched without
virtual host being required.
For example, the following Ingress routes traffic
requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`,
and any traffic whose request host header doesn't match `first.bar.com`
and `second.bar.com` to `service3`.
{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}
@ -615,8 +634,6 @@ You can expose a Service in multiple ways that don't directly involve the Ingres
* Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)
* Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport)
## {{% heading "whatsnext" %}}
* Learn about the [Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) API

View File

@ -7,13 +7,11 @@ content_type: concept
weight: 150
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
{{< note >}}
This feature, specifically the alpha `topologyKeys` API, is deprecated since
Kubernetes v1.21.
[Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/),
@ -25,7 +23,6 @@ topology of the cluster. For example, a service can specify that traffic be
preferentially routed to endpoints that are on the same Node as the client, or
in the same availability zone.
<!-- body -->
## Topology-aware traffic routing
@ -51,7 +48,8 @@ same top-of-rack switch for the lowest latency.
## Using Service Topology
If your cluster has the `ServiceTopology` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled, you can control Service traffic
routing by specifying the `topologyKeys` field on the Service spec. This field
is a preference-order list of Node labels which will be used to sort endpoints
when accessing this Service. Traffic will be directed to a Node whose value for
@ -83,8 +81,6 @@ traffic as follows.
none are available within this zone:
`["topology.kubernetes.io/zone", "*"]`.
## Constraints
* Service topology is not compatible with `externalTrafficPolicy=Local`, and
@ -101,7 +97,6 @@ traffic as follows.
* The catch-all value, `"*"`, must be the last value in the topology keys, if
it is used.
## Examples
The following are common examples of using the Service Topology feature.
@ -147,12 +142,10 @@ spec:
- "*"
```
### Only Zonal or Regional Endpoints
A Service that prefers zonal then regional endpoints. If no endpoints exist in either, traffic is dropped.
```yaml
apiVersion: v1
kind: Service

View File

@ -1239,7 +1239,7 @@ for that Service.
When you define a Service, you can specify `externalIPs` for any
[service type](#publishing-services-service-types).
In the example below, the Service named `"my-service"` can be accessed by clients using TCP,
on `"198.51.100.32:80"` (calculated from `.spec.externalIP` and `.spec.port`).
on `"198.51.100.32:80"` (calculated from `.spec.externalIPs[]` and `.spec.ports[].port`).
```yaml
apiVersion: v1

View File

@ -98,7 +98,8 @@ vendors provide their own external provisioner.
### Reclaim Policy
PersistentVolumes that are dynamically created by a StorageClass will have the
[reclaim policy](/docs/concepts/storage/persistent-volumes/#reclaiming)
specified in the `reclaimPolicy` field of the class, which can be
either `Delete` or `Retain`. If no `reclaimPolicy` is specified when a
StorageClass object is created, it will default to `Delete`.
@ -107,8 +108,6 @@ whatever reclaim policy they were assigned at creation.
### Allow Volume Expansion
{{< feature-state for_k8s_version="v1.11" state="beta" >}}
PersistentVolumes can be configured to be expandable. When this feature is set to `true`,
users can resize the volume by editing the corresponding PVC object.
@ -146,8 +145,9 @@ the class or PV. If a mount option is invalid, the PV mount fails.
### Volume Binding Mode
The `volumeBindingMode` field controls when
[volume binding and dynamic provisioning](/docs/concepts/storage/persistent-volumes/#provisioning)
should occur. When unset, "Immediate" mode is used by default.
The `Immediate` mode indicates that volume binding and dynamic
provisioning occurs once the PersistentVolumeClaim is created. For storage
@ -176,14 +176,14 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent
- All of the above
- [Local](#local)
{{< feature-state state="stable" for_k8s_version="v1.17" >}}
[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning
and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver
to see its supported topology keys and examples.
{{< note >}}
If you choose to use `WaitForFirstConsumer`, do not use `nodeName` in the Pod spec
to specify node affinity.
If `nodeName` is used in this case, the scheduler will be bypassed and PVC will remain in `pending` state.
Instead, you can use node selector for hostname in this case as shown below.
{{< /note >}}
@ -353,7 +353,8 @@ parameters:
- `path`: Path that is exported by the NFS server.
- `readOnly`: A flag indicating whether the storage will be mounted as read only (default false).
Kubernetes doesn't include an internal NFS provisioner.
You need to use an external provisioner to create a StorageClass for NFS.
Here are some examples:
- [NFS Ganesha server and external provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
@ -376,7 +377,8 @@ parameters:
{{< note >}}
{{< feature-state state="deprecated" for_k8s_version="v1.11" >}}
This internal provisioner of OpenStack is deprecated. Please use
[the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
{{< /note >}}
### vSphere
@ -386,11 +388,15 @@ There are two types of provisioners for vSphere storage classes:
- [CSI provisioner](#vsphere-provisioner-csi): `csi.vsphere.vmware.com`
- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume`
In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi).
For more information on the CSI provisioner, see
[Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and
[vSphereVolume CSI migration](/docs/concepts/storage/volumes/#vsphere-csi-migration).
#### CSI Provisioner {#vsphere-provisioner-csi}
The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters.
For an example, refer to the [vSphere CSI repository](https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml).
#### vCP Provisioner
@ -642,8 +648,6 @@ parameters:
### Local
{{< feature-state for_k8s_version="v1.14" state="stable" >}}
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass

View File

@ -13,24 +13,46 @@ weight: 100
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
{{< glossary_tooltip text="CSI" term_id="csi" >}} volume health monitoring allows CSI Drivers to detect abnormal volume conditions from the underlying storage systems and report them as events on {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}} or {{< glossary_tooltip text="Pods" term_id="pod" >}}.
{{< glossary_tooltip text="CSI" term_id="csi" >}} volume health monitoring allows
CSI Drivers to detect abnormal volume conditions from the underlying storage systems
and report them as events on {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}
or {{< glossary_tooltip text="Pods" term_id="pod" >}}.
<!-- body -->
## Volume health monitoring
Kubernetes _volume health monitoring_ is part of how Kubernetes implements the
Container Storage Interface (CSI). Volume health monitoring feature is implemented
in two components: an External Health Monitor controller, and the
{{< glossary_tooltip term_id="kubelet" text="kubelet" >}}.
If a CSI Driver supports Volume Health Monitoring feature from the controller side,
an event will be reported on the related
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} (PVC)
when an abnormal volume condition is detected on a CSI volume.
The External Health Monitor {{< glossary_tooltip text="controller" term_id="controller" >}}
also watches for node failure events. You can enable node failure monitoring by setting
the `enable-node-watcher` flag to true. When the external health monitor detects a node
failure event, the controller reports an Event on the PVC to indicate
that pods using this PVC are on a failed node.
If a CSI Driver supports Volume Health Monitoring feature from the node side,
an Event will be reported on every Pod using the PVC when an abnormal volume
condition is detected on a CSI volume. In addition, Volume Health information
is exposed as kubelet VolumeStats metrics. A new metric `kubelet_volume_stats_health_status_abnormal`
is added. This metric includes two labels: `namespace` and `persistentvolumeclaim`.
The count is either 1 or 0: 1 indicates the volume is unhealthy, 0 indicates the volume
is healthy. For more information, check the
[KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#kubelet-metrics-changes).
{{< note >}}
You need to enable the `CSIVolumeHealth` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
to use this feature from the node side.
{{< /note >}}
## {{% heading "whatsnext" %}}
See the [CSI driver documentation](https://kubernetes-csi.github.io/docs/drivers.html)
to find out which CSI drivers have implemented this feature.

View File

@ -11,36 +11,43 @@ weight: 70
<!-- overview -->
This document describes the concept of cloning existing CSI Volumes in Kubernetes.
Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
<!-- body -->
## Introduction
The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}.
The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds
support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s
in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}.
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be
consumed as any standard Volume would be. The only difference is that upon
provisioning, rather than creating a "new" empty Volume, the back end device
creates an exact duplicate of the specified Volume.
The implementation of cloning, from the perspective of the Kubernetes API, adds
the ability to specify an existing PVC as a dataSource during new PVC creation.
The source PVC must be bound and available (not in use).
Users need to be aware of the following when using this feature:
* Cloning support (`VolumePVCDataSource`) is only available for CSI drivers.
* Cloning support is only available for dynamic provisioners.
* CSI drivers may or may not have implemented the volume cloning functionality.
* You can only clone a PVC when it exists in the same namespace as the destination PVC
(source and destination must be in the same namespace).
* Cloning is supported with a different Storage Class.
- Destination volume can be the same or a different storage class as the source.
- Default storage class can be used and storageClassName omitted in the spec.
* Cloning can only be performed between two volumes that use the same VolumeMode setting
(if you request a block mode volume, the source MUST also be block mode)
## Provisioning
Clones are provisioned like any other PVC with the exception of adding a dataSource
that references an existing PVC in the same namespace.
```yaml
apiVersion: v1
@ -61,13 +68,18 @@ spec:
```
{{< note >}}
You must specify a capacity value for `spec.resources.requests.storage`, and the
value you specify must be the same or larger than the capacity of the source volume.
{{< /note >}}
The result is a new PVC with the name `clone-of-pvc-1` that has the exact same
content as the specified source `pvc-1`.
## Usage
Upon availability of the new PVC, the cloned PVC is consumed the same as other PVC.
It's also expected at this point that the newly created PVC is an independent object.
It can be consumed, cloned, snapshotted, or deleted independently and without
consideration for its original dataSource PVC. This also implies that the source
is not linked in any way to the newly created clone; it may also be modified or
deleted without affecting the newly created clone.

View File

@ -17,9 +17,6 @@ This document describes the concept of VolumeSnapshotClass in Kubernetes. Famili
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
[storage classes](/docs/concepts/storage/storage-classes) is suggested.
<!-- body -->
## Introduction
@ -40,7 +37,8 @@ of a class when first creating VolumeSnapshotClass objects, and the objects cann
be updated once they are created.
{{< note >}}
Installation of the CRDs is the responsibility of the Kubernetes distribution.
Without the required CRDs present, the creation of a VolumeSnapshotClass fails.
{{< /note >}}
```yaml
@ -76,14 +74,17 @@ used for provisioning VolumeSnapshots. This field must be specified.
### DeletionPolicy
Volume snapshot classes have a deletionPolicy. It enables you to configure what
happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to
is to be deleted. The deletionPolicy of a volume snapshot class can either be
`Retain` or `Delete`. This field must be specified.
If the deletionPolicy is `Delete`, then the underlying storage snapshot will be
deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`,
then both the underlying snapshot and VolumeSnapshotContent remain.
## Parameters
Volume snapshot classes have parameters that describe volume snapshots belonging to
the volume snapshot class. Different parameters may be accepted depending on the
`driver`.

View File

@ -27,12 +27,6 @@ Familiarity with [Pods](/docs/concepts/workloads/pods/) is suggested.
## Background
Docker has a concept of
[volumes](https://docs.docker.com/storage/), though it is
somewhat looser and less managed. A Docker volume is a directory on
disk or in another container. Docker provides volume
drivers, but the functionality is somewhat limited.
Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod" text="Pod" >}}
can use any number of volume types simultaneously.
[Ephemeral volume](/docs/concepts/storage/ephemeral-volumes/) types have a lifetime of a pod,
@ -295,13 +289,17 @@ Note that this path is derived from the volume's `mountPath` and the `path`
keyed with `log_level`.
{{< note >}}
* You must create a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)
before you can use it.
* A ConfigMap is always mounted as `readOnly`.
* A container using a ConfigMap as a [`subPath`](#using-subpath) volume mount will not
receive ConfigMap updates.
* Text data is exposed as files using the UTF-8 character encoding. For other character encodings, use `binaryData`.
{{< /note >}}
### downwardAPI {#downwardapi}
@ -930,12 +928,14 @@ backed by tmpfs (a RAM-backed filesystem) so they are never written to
non-volatile storage.
{{< note >}}
* You must create a Secret in the Kubernetes API before you can use it.
* A Secret is always mounted as `readOnly`.
* A container using a Secret as a [`subPath`](#using-subpath) volume mount will not
receive Secret updates.
{{< /note >}}
For more details, see [Configuring Secrets](/docs/concepts/configuration/secret/).
@ -1143,9 +1143,8 @@ persistent volume:
The value is passed as `volume_id` on all calls to the CSI volume driver when
referencing the volume.
* `readOnly`: An optional boolean value indicating whether the volume is to be
"ControllerPublished" (attached) as read only. Default is false. This value is
passed to the CSI driver via the `readonly` field in the
`ControllerPublishVolumeRequest`.
"ControllerPublished" (attached) as read only. Default is false. This value is passed
to the CSI driver via the `readonly` field in the `ControllerPublishVolumeRequest`.
* `fsType`: If the PV's `VolumeMode` is `Filesystem` then this field may be used
to specify the filesystem that should be used to mount the volume. If the
volume has not been formatted and formatting is supported, this value will be

View File

@ -291,7 +291,7 @@ network port spaces). Kubernetes uses pause containers to allow for worker conta
crashing or restarting without losing any of the networking configuration.
Kubernetes maintains a multi-architecture image that includes support for Windows.
For Kubernetes v{{< skew currentPatchVersion >}} the recommended pause image is `registry.k8s.io/pause:3.6`.
The [source code](https://github.com/kubernetes/kubernetes/tree/master/build/pause)
is available on GitHub.

View File

@ -242,76 +242,76 @@ Here are values used for each Windows Server version:
A cluster administrator can create a `RuntimeClass` object which is used to encapsulate these taints and tolerations.
1. Save this file to `runtimeClasses.yml`. It includes the appropriate `nodeSelector`
   for the Windows OS, architecture, and version.
   ```yaml
   ---
   apiVersion: node.k8s.io/v1
   kind: RuntimeClass
   metadata:
     name: windows-2019
   handler: example-container-runtime-handler
   scheduling:
     nodeSelector:
       kubernetes.io/os: 'windows'
       kubernetes.io/arch: 'amd64'
       node.kubernetes.io/windows-build: '10.0.17763'
     tolerations:
     - effect: NoSchedule
       key: os
       operator: Equal
       value: "windows"
   ```
1. Run `kubectl create -f runtimeClasses.yml` as a cluster administrator
1. Add `runtimeClassName: windows-2019` as appropriate to Pod specs
   For example:
   ```yaml
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: iis-2019
     labels:
       app: iis-2019
   spec:
     replicas: 1
     template:
       metadata:
         name: iis-2019
         labels:
           app: iis-2019
       spec:
         runtimeClassName: windows-2019
         containers:
         - name: iis
           image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
           resources:
             limits:
               cpu: 1
               memory: 800Mi
             requests:
               cpu: .1
               memory: 300Mi
           ports:
             - containerPort: 80
     selector:
       matchLabels:
         app: iis-2019
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: iis
   spec:
     type: LoadBalancer
     ports:
     - protocol: TCP
       port: 80
     selector:
       app: iis-2019
   ```
[RuntimeClass]: /docs/concepts/containers/runtime-class/

View File

@ -1234,11 +1234,9 @@ it is created.
## {{% heading "whatsnext" %}}
* Learn more about [Pods](/docs/concepts/workloads/pods).
* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
* Read the {{< api-reference page="workload-resources/deployment-v1" >}} to understand the Deployment API.
* Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how
you can use it to manage application availability during disruptions.
* Use kubectl to [create a Deployment](/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/).

View File

@ -290,8 +290,13 @@ Jobs with _fixed completion count_ - that is, jobs that have non null
The Job is considered complete when there is one successfully completed Pod
for each index. For more information about how to use this mode, see
[Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/).
{{< note >}}
Although rare, more than one Pod could be started for the same index (due to various reasons such as node failures,
kubelet restarts, or Pod evictions). In this case, only the first Pod that completes successfully will
count towards the completion count and update the status of the Job. The other Pods that are running
or completed for the same index will be deleted by the Job controller once they are detected.
{{< /note >}}
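For orientation, here is a minimal sketch of a Job that uses a fixed completion count with indexed completion mode; the name, image, and command are placeholders rather than anything taken from the page above.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo            # placeholder name
spec:
  completions: 5                # one successful Pod is required per index 0..4
  parallelism: 2                # at most two Pods run at once
  completionMode: Indexed       # each Pod receives its index via JOB_COMPLETION_INDEX
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36     # placeholder image
        command: ["sh", "-c", "echo processing item $JOB_COMPLETION_INDEX"]
```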
## Handling Pod and container failures

View File

@ -316,6 +316,10 @@ Each probe must define exactly one of these four mechanisms:
the port is open. If the remote system (the container) closes
the connection immediately after it opens, this counts as healthy.
{{< caution >}} Unlike the other mechanisms, the `exec` probe forks one or more new processes every time it runs.
As a result, on clusters with a high pod density, or with short `initialDelaySeconds` and `periodSeconds` intervals, probes that use the `exec` mechanism can add noticeable CPU overhead on the node.
In such scenarios, consider using one of the alternative probe mechanisms to avoid that overhead.{{< /caution >}}
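As a hedged sketch of that advice, the Pod below uses an `httpGet` liveness probe instead of `exec`; it assumes the application already serves an HTTP health endpoint, and the name, image, path, and port are illustrative only.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                       # placeholder name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0  # placeholder image
    livenessProbe:
      httpGet:                           # no process is forked per probe run, unlike exec
        path: /healthz                   # assumes the app exposes this endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
```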
### Probe outcome
Each probe has one of three results:

View File

@ -55,7 +55,7 @@ to use this feature with Kubernetes stateless pods:
* CRI-O: version 1.25 (and later) supports user namespaces for containers.
Please note that containerd v1.7 supports user namespaces for containers,
compatible with Kubernetes {{< skew currentVersion >}}. It should not be used
compatible with Kubernetes {{< skew currentPatchVersion >}}. It should not be used
with Kubernetes 1.27 (and later).
Support for this in [cri-dockerd is not planned][CRI-dockerd-issue] yet.
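For context, a minimal sketch of a stateless pod that opts into a user namespace via `spec.hostUsers: false`; this assumes a supported runtime from the list above and that the user namespaces feature gate is enabled on the cluster, and the name and image are placeholders.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo          # placeholder name
spec:
  hostUsers: false           # request a user namespace instead of sharing the host's
  containers:
  - name: shell
    image: busybox:1.36      # placeholder image
    command: ["sleep", "infinity"]
```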

View File

@ -17,8 +17,6 @@ You can register for Kubernetes Slack at https://slack.k8s.io/.
For information on creating new content for the Kubernetes
docs, follow the [style guide](/docs/contribute/style/style-guide).
<!-- body -->
## Overview
@ -40,15 +38,19 @@ Kubernetes docs allow content for third-party projects only when:
### Third party content
Kubernetes documentation includes applied examples of projects in the Kubernetes project&mdash;projects that live in the [kubernetes](https://github.com/kubernetes) and
Kubernetes documentation includes applied examples of projects in the Kubernetes
project&mdash;projects that live in the [kubernetes](https://github.com/kubernetes) and
[kubernetes-sigs](https://github.com/kubernetes-sigs) GitHub organizations.
Links to active content in the Kubernetes project are always allowed.
Kubernetes requires some third party content to function. Examples include container runtimes (containerd, CRI-O, Docker),
[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins), [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/), and [logging](/docs/concepts/cluster-administration/logging/).
[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins),
[Ingress controllers](/docs/concepts/services-networking/ingress-controllers/),
and [logging](/docs/concepts/cluster-administration/logging/).
Docs can link to third-party open source software (OSS) outside the Kubernetes project only if it's necessary for Kubernetes to function.
Docs can link to third-party open source software (OSS) outside the Kubernetes
project only if it's necessary for Kubernetes to function.
### Dual sourced content
@ -59,19 +61,14 @@ Dual-sourced content requires double the effort (or more!) to maintain
and grows stale more quickly.
{{< note >}}
If you're a maintainer for a Kubernetes project and need help hosting your own docs,
ask for help in [#sig-docs on Kubernetes Slack](https://kubernetes.slack.com/messages/C1J0BPD2M/).
{{< /note >}}
### More information
If you have questions about allowed content, join the [Kubernetes Slack](https://slack.k8s.io/) #sig-docs channel and ask!
## {{% heading "whatsnext" %}}
* Read the [Style guide](/docs/contribute/style/style-guide).

View File

@ -4,13 +4,10 @@ content_type: concept
weight: 90
---
<!-- overview -->
This site uses Hugo. In Hugo, [content organization](https://gohugo.io/content-management/organization/) is a core concept.
<!-- body -->
{{% note %}}
@ -21,7 +18,9 @@ This site uses Hugo. In Hugo, [content organization](https://gohugo.io/content-m
### Page Order
The documentation side menu, the documentation page browser etc. are listed using Hugo's default sort order, which sorts by weight (from 1), date (newest first), and finally by the link title.
The documentation side menu, the documentation page browser etc. are listed using
Hugo's default sort order, which sorts by weight (from 1), date (newest first),
and finally by the link title.
Given that, if you want to move a page or a section up, set a weight in the page's front matter:
@ -30,24 +29,25 @@ title: My Page
weight: 10
```
{{% note %}}
For page weights, it can be smart not to use 1, 2, 3 ..., but some other interval, say 10, 20, 30... This allows you to insert pages where you want later.
Additionally, each weight within the same directory (section) should not be overlapped with the other weights. This makes sure that content is always organized correctly, especially in localized content.
For page weights, it can be smart not to use 1, 2, 3 ..., but some other interval,
say 10, 20, 30... This allows you to insert pages where you want later.
Additionally, each weight within the same directory (section) should not be
overlapped with the other weights. This makes sure that content is always
organized correctly, especially in localized content.
{{% /note %}}
### Documentation Main Menu
The `Documentation` main menu is built from the sections below `docs/` with the `main_menu` flag set in front matter of the `_index.md` section content file:
The `Documentation` main menu is built from the sections below `docs/` with
the `main_menu` flag set in front matter of the `_index.md` section content file:
```yaml
main_menu: true
```
Note that the link title is fetched from the page's `linkTitle`, so if you want it to be something different than the title, change it in the content file:
Note that the link title is fetched from the page's `linkTitle`, so if you want
it to be something different than the title, change it in the content file:
```yaml
main_menu: true
@ -55,9 +55,10 @@ title: Page Title
linkTitle: Title used in links
```
{{% note %}}
The above needs to be done per language. If you don't see your section in the menu, it is probably because it is not identified as a section by Hugo. Create a `_index.md` content file in the section folder.
The above needs to be done per language. If you don't see your section in the menu,
it is probably because it is not identified as a section by Hugo. Create a
`_index.md` content file in the section folder.
{{% /note %}}
### Documentation Side Menu
@ -72,11 +73,13 @@ If you don't want to list a section or page, set the `toc_hide` flag to `true` i
toc_hide: true
```
When you navigate to a section that has content, the specific section or page (e.g. `_index.md`) is shown. Else, the first page inside that section is shown.
When you navigate to a section that has content, the specific section or page
(e.g. `_index.md`) is shown. Else, the first page inside that section is shown.
### Documentation Browser
The page browser on the documentation home page is built using all the sections and pages that are directly below the `docs section`.
The page browser on the documentation home page is built using all the sections
and pages that are directly below the `docs section`.
If you don't want to list a section or page, set the `toc_hide` flag to `true` in front matter:
@ -86,14 +89,18 @@ toc_hide: true
### The Main Menu
The site links in the top-right menu -- and also in the footer -- are built by page-lookups. This is to make sure that the page actually exists. So, if the `case-studies` section does not exist in a site (language), it will not be linked to.
The site links in the top-right menu -- and also in the footer -- are built by
page-lookups. This is to make sure that the page actually exists. So, if the
`case-studies` section does not exist in a site (language), it will not be linked to.
## Page Bundles
In addition to standalone content pages (Markdown files), Hugo supports [Page Bundles](https://gohugo.io/content-management/page-bundles/).
In addition to standalone content pages (Markdown files), Hugo supports
[Page Bundles](https://gohugo.io/content-management/page-bundles/).
One example is [Custom Hugo Shortcodes](/docs/contribute/style/hugo-shortcodes/). It is considered a `leaf bundle`. Everything below the directory, including the `index.md`, will be part of the bundle. This also includes page-relative links, images that can be processed etc.:
One example is [Custom Hugo Shortcodes](/docs/contribute/style/hugo-shortcodes/).
It is considered a `leaf bundle`. Everything below the directory, including the `index.md`,
will be part of the bundle. This also includes page-relative links, images that can be processed etc.:
```bash
en/docs/home/contribute/includes
@ -103,7 +110,8 @@ en/docs/home/contribute/includes
└── podtemplate.json
```
Another widely used example is the `includes` bundle. It sets `headless: true` in front matter, which means that it does not get its own URL. It is only used in other pages.
Another widely used example is the `includes` bundle. It sets `headless: true` in
front matter, which means that it does not get its own URL. It is only used in other pages.
```bash
en/includes
@ -118,22 +126,22 @@ en/includes
Some important notes to the files in the bundles:
* For translated bundles, any missing non-content files will be inherited from languages above. This avoids duplication.
* All the files in a bundle are what Hugo calls `Resources` and you can provide metadata per language, such as parameters and title, even if it does not supports front matter (YAML files etc.). See [Page Resources Metadata](https://gohugo.io/content-management/page-resources/#page-resources-metadata).
* The value you get from `.RelPermalink` of a `Resource` is page-relative. See [Permalinks](https://gohugo.io/content-management/urls/#permalinks).
* For translated bundles, any missing non-content files will be inherited from
languages above. This avoids duplication.
* All the files in a bundle are what Hugo calls `Resources` and you can provide
metadata per language, such as parameters and title, even if it does not support
front matter (YAML files etc.).
See [Page Resources Metadata](https://gohugo.io/content-management/page-resources/#page-resources-metadata).
* The value you get from `.RelPermalink` of a `Resource` is page-relative.
See [Permalinks](https://gohugo.io/content-management/urls/#permalinks).
## Styles
The [SASS](https://sass-lang.com/) source of the stylesheets for this site is stored in `assets/sass` and is automatically built by Hugo.
The [SASS](https://sass-lang.com/) source of the stylesheets for this site is
stored in `assets/sass` and is automatically built by Hugo.
## {{% heading "whatsnext" %}}
* Learn about [custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/)
* Learn about the [Style guide](/docs/contribute/style/style-guide)
* Learn about the [Content guide](/docs/contribute/style/content-guide)

View File

@ -14,8 +14,8 @@ For additional information on creating new content for the Kubernetes
documentation, read the [Documentation Content Guide](/docs/contribute/style/content-guide/).
Changes to the style guide are made by SIG Docs as a group. To propose a change
or addition, [add it to the agenda](https://bit.ly/sig-docs-agenda) for an upcoming SIG Docs meeting, and attend the meeting to participate in the
discussion.
or addition, [add it to the agenda](https://bit.ly/sig-docs-agenda) for an upcoming
SIG Docs meeting, and attend the meeting to participate in the discussion.
<!-- body -->
@ -42,11 +42,17 @@ The English-language documentation uses U.S. English spelling and grammar.
### Use upper camel case for API objects
When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal case. You may see different capitalization, such as "configMap", in the [API Reference](/docs/reference/kubernetes-api/). When writing general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
When you refer specifically to interacting with an API object, use
[UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as
Pascal case. You may see different capitalization, such as "configMap",
in the [API Reference](/docs/reference/kubernetes-api/). When writing
general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
When you are generally discussing an API object, use
[sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
The following examples focus on capitalization. For more information about formatting API object names, review the related guidance on [Code Style](#code-style-inline-code).
The following examples focus on capitalization. For more information about formatting
API object names, review the related guidance on [Code Style](#code-style-inline-code).
{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
Do | Don't
@ -130,7 +136,9 @@ Remove trailing spaces in the code. | Add trailing spaces in the code, where the
{{< /table >}}
{{< note >}}
The website supports syntax highlighting for code samples, but specifying a language is optional. Syntax highlighting in the code block should conform to the [contrast guidelines.](https://www.w3.org/WAI/WCAG21/quickref/?versions=2.0&showtechniques=141%2C143#contrast-minimum)
The website supports syntax highlighting for code samples, but specifying a language
is optional. Syntax highlighting in the code block should conform to the
[contrast guidelines.](https://www.w3.org/WAI/WCAG21/quickref/?versions=2.0&showtechniques=141%2C143#contrast-minimum)
{{< /note >}}
### Use code style for object field names and namespaces
@ -189,7 +197,10 @@ This section talks about how we reference API resources in the documentation.
### Clarification about "resource"
Kubernetes uses the word "resource" to refer to API resources, such as `pod`, `deployment`, and so on. We also use "resource" to talk about CPU and memory requests and limits. Always refer to API resources as "API resources" to avoid confusion with CPU and memory resources.
Kubernetes uses the word "resource" to refer to API resources, such as `pod`,
`deployment`, and so on. We also use "resource" to talk about CPU and memory
requests and limits. Always refer to API resources as "API resources" to avoid
confusion with CPU and memory resources.
### When to use Kubernetes API terminologies
@ -197,21 +208,27 @@ The different Kubernetes API terminologies are:
- Resource type: the name used in the API URL (such as `pods`, `namespaces`)
- Resource: a single instance of a resource type (such as `pod`, `secret`)
- Object: a resource that serves as a "record of intent". An object is a desired state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
- Object: a resource that serves as a "record of intent". An object is a desired
state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
Always use "resource" or "object" when referring to an API resource in docs. For example, use "a `Secret` object" over just "a `Secret`".
Always use "resource" or "object" when referring to an API resource in docs.
For example, use "a `Secret` object" over just "a `Secret`".
### API resource names
Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as PascalCase, and code formatting.
Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case),
also known as PascalCase, and code formatting.
For inline code in an HTML document, use the `<code>` tag. In a Markdown document, use the backtick (`` ` ``).
Don't split an API object name into separate words. For example, use `PodTemplateList`, not Pod Template List.
For more information about PascalCase and code formatting, please review the related guidance on [Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects) and [Use code style for inline code, commands, and API objects](/docs/contribute/style/style-guide/#code-style-inline-code).
For more information about PascalCase and code formatting, please review the related guidance on
[Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects)
and [Use code style for inline code, commands, and API objects](/docs/contribute/style/style-guide/#code-style-inline-code).
For more information about Kubernetes API terminologies, please review the related guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/#standard-api-terminology).
For more information about Kubernetes API terminologies, please review the related
guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/#standard-api-terminology).
## Code snippet formatting
@ -240,17 +257,23 @@ nginx 1/1 Running 0 13s 10.200.0.4 worker0
### Versioning Kubernetes examples
Code examples and configuration examples that include version information should be consistent with the accompanying text.
Code examples and configuration examples that include version information should
be consistent with the accompanying text.
If the information is version specific, the Kubernetes version needs to be defined in the `prerequisites` section of the [Task template](/docs/contribute/style/page-content-types/#task) or the [Tutorial template](/docs/contribute/style/page-content-types/#tutorial). Once the page is saved, the `prerequisites` section is shown as **Before you begin**.
If the information is version specific, the Kubernetes version needs to be defined
in the `prerequisites` section of the [Task template](/docs/contribute/style/page-content-types/#task)
or the [Tutorial template](/docs/contribute/style/page-content-types/#tutorial).
Once the page is saved, the `prerequisites` section is shown as **Before you begin**.
To specify the Kubernetes version for a task or tutorial page, include `min-kubernetes-server-version` in the front matter of the page.
To specify the Kubernetes version for a task or tutorial page, include
`min-kubernetes-server-version` in the front matter of the page.
If the example YAML is in a standalone file, find and review the topics that include it as a reference.
Verify that any topics using the standalone YAML have the appropriate version information defined.
If a stand-alone YAML file is not referenced from any topics, consider deleting it instead of updating it.
For example, if you are writing a tutorial that is relevant to Kubernetes version 1.8, the front-matter of your markdown file should look something like:
For example, if you are writing a tutorial that is relevant to Kubernetes version 1.8,
the front-matter of your markdown file should look something like:
```yaml
---
@ -283,7 +306,10 @@ On-premises | On-premises or On-prem rather than On-premise or other variations.
## Shortcodes
Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create different rhetorical appeal levels. Our documentation supports three different shortcodes in this category: **Note** `{{</* note */>}}`, **Caution** `{{</* caution */>}}`, and **Warning** `{{</* warning */>}}`.
Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create
different rhetorical appeal levels. Our documentation supports three different
shortcodes in this category: **Note** `{{</* note */>}}`,
**Caution** `{{</* caution */>}}`, and **Warning** `{{</* warning */>}}`.
1. Surround the text with an opening and closing shortcode.
@ -412,7 +438,8 @@ The output is:
### Include Statements
Shortcodes inside include statements will break the build. You must insert them in the parent document, before and after you call the include. For example:
Shortcodes inside include statements will break the build. You must insert them
in the parent document, before and after you call the include. For example:
```
{{</* note */>}}
@ -424,11 +451,19 @@ Shortcodes inside include statements will break the build. You must insert them
### Line breaks
Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headings follow the first-level (or the title) without any preceding paragraphs or texts. A two line spacing helps visualize the overall structure of content in a code editor better.
Use a single newline to separate block-level content like headings, lists, images,
code blocks, and others. The exception is second-level headings, where it should
be two newlines. Second-level headings follow the first-level (or the title) without
any preceding paragraphs or texts. A two line spacing helps visualize the overall
structure of content in a code editor better.
### Headings and titles {#headings}
People accessing this documentation may use a screen reader or other assistive technology (AT). [Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices, they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest.
People accessing this documentation may use a screen reader or other assistive technology (AT).
[Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices,
they output items on a page one at a time. If there is a lot of content on a page, you can
use headings to give the page an internal structure. A good page structure helps all readers
to easily navigate the page or filter topics of interest.
{{< table caption = "Do and Don't - Headings" >}}
Do | Don't
@ -460,12 +495,20 @@ Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](
### Lists
Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is an ordered or unordered list—it will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list.
Website navigation links can also be marked up as list items; after all they are nothing but a group of related links.
Group items in a list that are related to each other and need to appear in a specific
order or to indicate a correlation between multiple items. When a screen reader comes
across a list—whether it is an ordered or unordered list—it will be announced to the
user that there is a group of list items. The user can then use the arrow keys to move
up and down between the various items in the list. Website navigation links can also be
marked up as list items; after all they are nothing but a group of related links.
- End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences.
- End each item in a list with a period if one or more items in the list are complete
sentences. For the sake of consistency, normally either all items or none should be complete sentences.
{{< note >}} Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.{{< /note >}}
{{< note >}}
Ordered lists that are part of an incomplete introductory sentence can be in lowercase
and punctuated as if each item was a part of the introductory sentence.
{{< /note >}}
- Use the number one (`1.`) for ordered lists.
@ -475,11 +518,15 @@ Website navigation links can also be marked up as list items; after all they are
- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).
- List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab.
- List items may consist of multiple paragraphs. Each subsequent paragraph in a list
item must be indented by either four spaces or one tab.
### Tables
The semantic purpose of a data table is to present tabular data. Sighted users can quickly scan the table but a screen reader goes through line by line. A table caption is used to create a descriptive title for a data table. Assistive technologies (AT) use the HTML table caption element to identify the table contents to the user within the page structure.
The semantic purpose of a data table is to present tabular data. Sighted users can
quickly scan the table but a screen reader goes through line by line. A table caption
is used to create a descriptive title for a data table. Assistive technologies (AT)
use the HTML table caption element to identify the table contents to the user within the page structure.
- Add table captions using [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions) for tables.

View File

@ -2,7 +2,6 @@
reviewers:
- erictune
- lavalamp
- ericchiang
- deads2k
- liggitt
title: Authenticating

View File

@ -3,7 +3,6 @@ reviewers:
- timstclair
- deads2k
- liggitt
- ericchiang
title: Using Node Authorization
content_type: concept
weight: 90

View File

@ -791,7 +791,7 @@ In the following table:
`attach` and `port-forward` requests.
- `SupportIPVSProxyMode`: Enable providing in-cluster service load balancing using IPVS.
See [service proxies](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) for more details.
See [service proxies](/docs/reference/networking/virtual-ips/) for more details.
- `SupportNodePidsLimit`: Enable the support to limiting PIDs on the Node. The parameter
`pid=<number>` in the `--system-reserved` and `--kube-reserved` options can be specified to

View File

@ -182,14 +182,14 @@ kubelet [flags]
<td colspan="2">--container-log-max-files int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Beta feature&gt; Set the maximum number of container log files that can be present for a container. The number must be &gt;= 2. This flag can only be used with <code>--container-runtime=remote</code>. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Beta feature&gt; Set the maximum number of container log files that can be present for a container. The number must be &gt;= 2. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>
<td colspan="2">--container-log-max-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>10Mi</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Beta feature&gt; Set the maximum size (e.g. <code>10Mi</code>) of container log file before it is rotated. This flag can only be used with <code>--container-runtime=remote</code>. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Beta feature&gt; Set the maximum size (e.g. <code>10Mi</code>) of container log file before it is rotated. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
</tr>
<tr>

View File

@ -72,14 +72,14 @@ It is suitable for correlating log entries between the webhook and apiserver, fo
</td>
</tr>
<tr><td><code>kind</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionkind-v1-meta"><code>meta/v1.GroupVersionKind</code></a>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionKind"><code>meta/v1.GroupVersionKind</code></a>
</td>
<td>
<p>Kind is the fully-qualified type of object being submitted (for example, v1.Pod or autoscaling.v1.Scale)</p>
</td>
</tr>
<tr><td><code>resource</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionresource-v1-meta"><code>meta/v1.GroupVersionResource</code></a>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionResource"><code>meta/v1.GroupVersionResource</code></a>
</td>
<td>
<p>Resource is the fully-qualified resource being requested (for example, v1.pods)</p>
@ -93,7 +93,7 @@ It is suitable for correlating log entries between the webhook and apiserver, fo
</td>
</tr>
<tr><td><code>requestKind</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionkind-v1-meta"><code>meta/v1.GroupVersionKind</code></a>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionKind"><code>meta/v1.GroupVersionKind</code></a>
</td>
<td>
<p>RequestKind is the fully-qualified type of the original API request (for example, v1.Pod or autoscaling.v1.Scale).
@ -107,7 +107,7 @@ and <code>requestKind: {group:&quot;apps&quot;, version:&quot;v1beta1&quot;, kin
</td>
</tr>
<tr><td><code>requestResource</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionresource-v1-meta"><code>meta/v1.GroupVersionResource</code></a>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionResource"><code>meta/v1.GroupVersionResource</code></a>
</td>
<td>
<p>RequestResource is the fully-qualified resource of the original API request (for example, v1.pods).
@ -153,7 +153,7 @@ requested. e.g. a patch can result in either a CREATE or UPDATE Operation.</p>
</td>
</tr>
<tr><td><code>userInfo</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication-k8s-io"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
<p>UserInfo is information about the requesting user</p>

View File

@ -1,7 +1,7 @@
---
title: kube-controller-manager Configuration (v1alpha1)
content_type: tool-reference
package: cloudcontrollermanager.config.k8s.io/v1alpha1
package: controllermanager.config.k8s.io/v1alpha1
auto_generated: true
---
@ -9,310 +9,9 @@ auto_generated: true
## Resource Types
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
## `NodeControllerConfiguration` {#NodeControllerConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
<p>NodeControllerConfiguration contains elements describing NodeController.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>ConcurrentNodeSyncs</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
<p>ConcurrentNodeSyncs is the number of workers
concurrently synchronizing nodes</p>
</td>
</tr>
</tbody>
</table>
## `ServiceControllerConfiguration` {#ServiceControllerConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
<p>ServiceControllerConfiguration contains elements describing ServiceController.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>ConcurrentServiceSyncs</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
<p>concurrentServiceSyncs is the number of services that are
allowed to sync concurrently. Larger number = more responsive service
management, but more CPU (and network) load.</p>
</td>
</tr>
</tbody>
</table>
## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration}
<p>CloudControllerManagerConfiguration contains elements describing cloud-controller manager.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>cloudcontrollermanager.config.k8s.io/v1alpha1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>CloudControllerManagerConfiguration</code></td></tr>
<tr><td><code>Generic</code> <B>[Required]</B><br/>
<a href="#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration"><code>GenericControllerManagerConfiguration</code></a>
</td>
<td>
<p>Generic holds configuration for a generic controller-manager</p>
</td>
</tr>
<tr><td><code>KubeCloudShared</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration"><code>KubeCloudSharedConfiguration</code></a>
</td>
<td>
<p>KubeCloudSharedConfiguration holds configuration for shared related features
both in cloud controller manager and kube-controller manager.</p>
</td>
</tr>
<tr><td><code>NodeController</code> <B>[Required]</B><br/>
<a href="#NodeControllerConfiguration"><code>NodeControllerConfiguration</code></a>
</td>
<td>
<p>NodeController holds configuration for node controller
related features.</p>
</td>
</tr>
<tr><td><code>ServiceController</code> <B>[Required]</B><br/>
<a href="#ServiceControllerConfiguration"><code>ServiceControllerConfiguration</code></a>
</td>
<td>
<p>ServiceControllerConfiguration holds configuration for ServiceController
related features.</p>
</td>
</tr>
<tr><td><code>NodeStatusUpdateFrequency</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status</p>
</td>
</tr>
<tr><td><code>Webhook</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration"><code>WebhookConfiguration</code></a>
</td>
<td>
<p>Webhook is the configuration for cloud-controller-manager hosted webhooks</p>
</td>
</tr>
</tbody>
</table>
## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration}
**Appears in:**
- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration)
<p>CloudProviderConfiguration contains basically elements about cloud provider.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>Name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>Name is the provider for cloud services.</p>
</td>
</tr>
<tr><td><code>CloudConfigFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>cloudConfigFile is the path to the cloud provider configuration file.</p>
</td>
</tr>
</tbody>
</table>
## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
<p>KubeCloudSharedConfiguration contains elements shared by both kube-controller manager
and cloud-controller manager, but not genericconfig.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>CloudProvider</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration"><code>CloudProviderConfiguration</code></a>
</td>
<td>
<p>CloudProviderConfiguration holds configuration for CloudProvider related features.</p>
</td>
</tr>
<tr><td><code>ExternalCloudVolumePlugin</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>externalCloudVolumePlugin specifies the plugin to use when cloudProvider is &quot;external&quot;.
It is currently used by the in repo cloud providers to handle node and volume control in the KCM.</p>
</td>
</tr>
<tr><td><code>UseServiceAccountCredentials</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>useServiceAccountCredentials indicates whether controllers should be run with
individual service account credentials.</p>
</td>
</tr>
<tr><td><code>AllowUntaggedCloud</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>run with untagged cloud instances</p>
</td>
</tr>
<tr><td><code>RouteReconciliationPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider..</p>
</td>
</tr>
<tr><td><code>NodeMonitorPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.</p>
</td>
</tr>
<tr><td><code>ClusterName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>clusterName is the instance prefix for the cluster.</p>
</td>
</tr>
<tr><td><code>ClusterCIDR</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>clusterCIDR is CIDR Range for Pods in cluster.</p>
</td>
</tr>
<tr><td><code>AllocateNodeCIDRs</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if
ConfigureCloudRoutes is true, to be set on the cloud provider.</p>
</td>
</tr>
<tr><td><code>CIDRAllocatorType</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>CIDRAllocatorType determines what kind of pod CIDR allocator will be used.</p>
</td>
</tr>
<tr><td><code>ConfigureCloudRoutes</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs
to be configured on the cloud provider.</p>
</td>
</tr>
<tr><td><code>NodeSyncPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer
periods will result in fewer calls to cloud provider, but may delay addition
of new nodes to cluster.</p>
</td>
</tr>
</tbody>
</table>
## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
<p>WebhookConfiguration contains configuration related to
cloud-controller-manager hosted webhooks</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>Webhooks</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
<p>Webhooks is the list of webhooks to enable or disable
'*' means &quot;all enabled by default webhooks&quot;
'foo' means &quot;enable 'foo'&quot;
'-foo' means &quot;disable 'foo'&quot;
first item for a particular name wins</p>
</td>
</tr>
</tbody>
</table>
@ -1879,4 +1578,305 @@ volume plugin should search for additional third party volume plugins</p>
</tr>
</tbody>
</table>
## `NodeControllerConfiguration` {#NodeControllerConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
<p>NodeControllerConfiguration contains elements describing NodeController.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>ConcurrentNodeSyncs</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
<p>ConcurrentNodeSyncs is the number of workers
concurrently synchronizing nodes</p>
</td>
</tr>
</tbody>
</table>
## `ServiceControllerConfiguration` {#ServiceControllerConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
<p>ServiceControllerConfiguration contains elements describing ServiceController.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>ConcurrentServiceSyncs</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
<p>concurrentServiceSyncs is the number of services that are
allowed to sync concurrently. Larger number = more responsive service
management, but more CPU (and network) load.</p>
</td>
</tr>
</tbody>
</table>
## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration}
<p>CloudControllerManagerConfiguration contains elements describing cloud-controller manager.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>cloudcontrollermanager.config.k8s.io/v1alpha1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>CloudControllerManagerConfiguration</code></td></tr>
<tr><td><code>Generic</code> <B>[Required]</B><br/>
<a href="#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration"><code>GenericControllerManagerConfiguration</code></a>
</td>
<td>
<p>Generic holds configuration for a generic controller-manager</p>
</td>
</tr>
<tr><td><code>KubeCloudShared</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration"><code>KubeCloudSharedConfiguration</code></a>
</td>
<td>
<p>KubeCloudSharedConfiguration holds configuration for shared related features
both in cloud controller manager and kube-controller manager.</p>
</td>
</tr>
<tr><td><code>NodeController</code> <B>[Required]</B><br/>
<a href="#NodeControllerConfiguration"><code>NodeControllerConfiguration</code></a>
</td>
<td>
<p>NodeController holds configuration for node controller
related features.</p>
</td>
</tr>
<tr><td><code>ServiceController</code> <B>[Required]</B><br/>
<a href="#ServiceControllerConfiguration"><code>ServiceControllerConfiguration</code></a>
</td>
<td>
<p>ServiceControllerConfiguration holds configuration for ServiceController
related features.</p>
</td>
</tr>
<tr><td><code>NodeStatusUpdateFrequency</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status</p>
</td>
</tr>
<tr><td><code>Webhook</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration"><code>WebhookConfiguration</code></a>
</td>
<td>
<p>Webhook is the configuration for cloud-controller-manager hosted webhooks</p>
</td>
</tr>
</tbody>
</table>
## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration}
**Appears in:**
- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration)
<p>CloudProviderConfiguration contains basically elements about cloud provider.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>Name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>Name is the provider for cloud services.</p>
</td>
</tr>
<tr><td><code>CloudConfigFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>cloudConfigFile is the path to the cloud provider configuration file.</p>
</td>
</tr>
</tbody>
</table>
## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
<p>KubeCloudSharedConfiguration contains elements shared by both kube-controller manager
and cloud-controller manager, but not genericconfig.</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>CloudProvider</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration"><code>CloudProviderConfiguration</code></a>
</td>
<td>
<p>CloudProviderConfiguration holds configuration for CloudProvider related features.</p>
</td>
</tr>
<tr><td><code>ExternalCloudVolumePlugin</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>externalCloudVolumePlugin specifies the plugin to use when cloudProvider is &quot;external&quot;.
It is currently used by the in repo cloud providers to handle node and volume control in the KCM.</p>
</td>
</tr>
<tr><td><code>UseServiceAccountCredentials</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>useServiceAccountCredentials indicates whether controllers should be run with
individual service account credentials.</p>
</td>
</tr>
<tr><td><code>AllowUntaggedCloud</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>run with untagged cloud instances</p>
</td>
</tr>
<tr><td><code>RouteReconciliationPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider..</p>
</td>
</tr>
<tr><td><code>NodeMonitorPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.</p>
</td>
</tr>
<tr><td><code>ClusterName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>clusterName is the instance prefix for the cluster.</p>
</td>
</tr>
<tr><td><code>ClusterCIDR</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>clusterCIDR is CIDR Range for Pods in cluster.</p>
</td>
</tr>
<tr><td><code>AllocateNodeCIDRs</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if
ConfigureCloudRoutes is true, to be set on the cloud provider.</p>
</td>
</tr>
<tr><td><code>CIDRAllocatorType</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>CIDRAllocatorType determines what kind of pod CIDR allocator will be used.</p>
</td>
</tr>
<tr><td><code>ConfigureCloudRoutes</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs
to be configured on the cloud provider.</p>
</td>
</tr>
<tr><td><code>NodeSyncPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer
periods will result in fewer calls to cloud provider, but may delay addition
of new nodes to cluster.</p>
</td>
</tr>
</tbody>
</table>
## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration}
**Appears in:**
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
<p>WebhookConfiguration contains configuration related to
cloud-controller-manager hosted webhooks</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>Webhooks</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
<p>Webhooks is the list of webhooks to enable or disable
'*' means &quot;all enabled by default webhooks&quot;
'foo' means &quot;enable 'foo'&quot;
'-foo' means &quot;disable 'foo'&quot;
first item for a particular name wins</p>
</td>
</tr>
</tbody>
</table>

View File

@ -273,6 +273,7 @@ kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl
kubectl label pods my-pod new-label=awesome # Add a Label
kubectl label pods my-pod new-label- # Remove a label
kubectl label pods my-pod new-label=new-value --overwrite # Overwrite an existing value
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annotation
kubectl annotate pods my-pod icon- # Remove annotation
kubectl autoscale deployment foo --min=2 --max=10 # Auto scale a deployment "foo"

File diff suppressed because it is too large Load Diff

View File

@ -52,18 +52,25 @@ nor should they need to keep track of the set of backends themselves.
## Proxy modes
Note that the kube-proxy starts up in different modes, which are determined by its configuration.
The kube-proxy starts up in different modes, which are determined by its configuration.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for
kube-proxy effectively deprecates the behavior for almost all of the flags for
the kube-proxy.
- The ConfigMap for the kube-proxy does not support live reloading of configuration.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
For example, if your operating system doesn't allow you to run iptables commands,
the standard kernel kube-proxy implementation will not work.
On Linux nodes, the available modes for kube-proxy are:
[`iptables`](#proxy-mode-iptables)
: A mode where the kube-proxy configures packet forwarding rules using iptables, on Linux.
[`ipvs`](#proxy-mode-ipvs)
: a mode where the kube-proxy configures packet forwarding rules using ipvs.
There is only one mode available for kube-proxy on Windows:
[`kernelspace`](#proxy-mode-kernelspace)
: a mode where the kube-proxy configures packet forwarding rules in the Windows kernel
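As an illustrative sketch of how a mode is chosen, the ConfigMap described above carries a `KubeProxyConfiguration`; the value shown here is only an example.
```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# On Linux nodes this can be "iptables" or "ipvs"; on Windows nodes, "kernelspace".
# Leaving it empty selects the platform default.
mode: "ipvs"
```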
### `iptables` proxy mode {#proxy-mode-iptables}
_This proxy mode is only available on Linux nodes._
In this mode, kube-proxy watches the Kubernetes
{{< glossary_tooltip term_id="control-plane" text="control plane" >}} for the addition and
removal of Service and EndpointSlice {{< glossary_tooltip term_id="object" text="objects." >}}
@ -199,6 +206,8 @@ and is likely to hurt functionality more than it improves performance.
### IPVS proxy mode {#proxy-mode-ipvs}
_This proxy mode is only available on Linux nodes._
In `ipvs` mode, kube-proxy watches Kubernetes Services and EndpointSlices,
calls `netlink` interface to create IPVS rules accordingly and synchronizes
IPVS rules with Kubernetes Services and EndpointSlices periodically.
@ -235,6 +244,37 @@ falls back to running in iptables proxy mode.
{{< figure src="/images/docs/services-ipvs-overview.svg" title="Virtual IP address mechanism for Services, using IPVS mode" class="diagram-medium" >}}
### `kernelspace` proxy mode {#proxy-mode-kernelspace}
_This proxy mode is only available on Windows nodes._
The kube-proxy configures packet filtering rules in the Windows _Virtual Filtering Platform_ (VFP),
an extension to Windows vSwitch. These rules process encapsulated packets within the node-level
virtual networks, and rewrite packets so that the destination IP address (and layer 2 information)
is correct for getting the packet routed to the correct destination.
The Windows VFP is analogous to tools such as Linux `nftables` or `iptables`. The Windows VFP extends
the _Hyper-V Switch_, which was initially implemented to support virtual machine networking.
When a Pod on a node sends traffic to a virtual IP address, and the kube-proxy selects a Pod on
a different node as the load balancing target, the `kernelspace` proxy mode rewrites that packet
to be destined to the target backend Pod. The Windows _Host Networking Service_ (HNS) ensures that
packet rewriting rules are configured so that the return traffic appears to come from the virtual
IP address and not the specific backend Pod.
#### Direct server return for `kernelspace` mode {#windows-direct-server-return}
{{< feature-state for_k8s_version="v1.14" state="alpha" >}}
As an alternative to the basic operation, a node that hosts the backend Pod for a Service can
apply the packet rewriting directly, rather than placing this burden on the node where the client
Pod is running. This is called _direct server return_.
To use this, you must run kube-proxy with the `--enable-dsr` command line argument **and**
enable the `WinDSR` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
Direct server return also optimizes the case for Pod return traffic even when both Pods
are running on the same node.
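A possible configuration-file equivalent is sketched below; note that the text above only confirms the `--enable-dsr` flag and the `WinDSR` feature gate, so the `featureGates` and `winkernel.enableDSR` fields shown here are assumptions about the config-file form.
```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "kernelspace"
featureGates:
  WinDSR: true               # assumption: config-file form of the WinDSR feature gate
winkernel:
  enableDSR: true            # assumption: config-file counterpart of --enable-dsr
```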
## Session affinity
In these proxy models, the traffic bound for the Service's IP:Port is
@ -332,7 +372,7 @@ NAME PARENTREF
#### IP address ranges for Service virtual IP addresses {#service-ip-static-sub-range}
{{< feature-state for_k8s_version="v1.25" state="beta" >}}
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
Kubernetes divides the `ClusterIP` range into two bands, based on
the size of the configured `service-cluster-ip-range` by using the following formula
@ -356,7 +396,7 @@ to control how Kubernetes routes traffic to healthy (“ready”) backends.
### Internal traffic policy
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
You can set the `.spec.internalTrafficPolicy` field to control how traffic from
internal sources is routed. Valid values are `Cluster` and `Local`. Set the field to

View File

@ -420,7 +420,7 @@ individually with the [`kubeadm init phase mark-control-plane`](/docs/reference/
Please note that:
1. The `node-role.kubernetes.io/master` taint is deprecated and will be removed in kubeadm version 1.25
1. Mark control-plane phase phase can be invoked individually with the command
1. Mark control-plane phase can be invoked individually with the command
[`kubeadm init phase mark-control-plane`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-control-plane)

View File

@ -6,10 +6,6 @@ weight: 10
{{% thirdparty-content %}}
{{<note>}}
This page is deprecated and will be removed in Kubernetes 1.27.
{{</note>}}
`crictl` is a command-line interface for {{<glossary_tooltip term_id="cri" text="CRI">}}-compatible container runtimes.
You can use it to inspect and debug container runtimes and applications on a
Kubernetes node. `crictl` and its source are hosted in the
@ -74,4 +70,4 @@ crictl | Description
`runp` | Run a new pod
`rmp` | Remove one or more pods
`stopp` | Stop one or more running pods
{{< /table >}}
{{< /table >}}

View File

@ -284,6 +284,36 @@ Content-Type: application/json
<followed by regular watch stream starting from resourceVersion="10245">
```
## Response compression
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
`APIResponseCompression` is an option that allows the API server to compress the responses for **get**
and **list** requests, reducing the network bandwidth and improving the performance of large-scale clusters.
It has been enabled by default since Kubernetes 1.16 and can be disabled by including
`APIResponseCompression=false` in the `--feature-gates` flag on the API server.
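
For example, a minimal sketch of disabling the feature on the API server (all other
required `kube-apiserver` flags are omitted here):

```shell
kube-apiserver --feature-gates=APIResponseCompression=false
```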
API response compression can significantly reduce the size of the response, especially for large resources or
[collections](/docs/reference/using-api/api-concepts/#collections).
For example, a **list** request for pods can return hundreds of kilobytes or even megabytes of data,
depending on the number of pods and their attributes. Compressing the response saves
network bandwidth and reduces latency.
To verify if `APIResponseCompression` is working, you can send a **get** or **list** request to the
API server with an `Accept-Encoding` header, and check the response size and headers. For example:
```console
GET /api/v1/pods
Accept-Encoding: gzip
---
200 OK
Content-Type: application/json
content-encoding: gzip
...
```
The `content-encoding` header indicates that the response is compressed with `gzip`.
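
One way to try this out is with `curl`, shown here only as a sketch that assumes the API
server address is in `$APISERVER` and a valid bearer token is in `$TOKEN`; small responses
may be served uncompressed even when the feature is enabled:

```shell
# Print the response headers while discarding the body
curl -sk -D - -o /dev/null \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept-Encoding: gzip" \
  "$APISERVER/api/v1/pods"
# Look for a "content-encoding: gzip" header in the output
```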
## Retrieving large results sets in chunks
{{< feature-state for_k8s_version="v1.9" state="beta" >}}
@ -1036,8 +1066,9 @@ Continue Token, Exact
{{< note >}}
When you **list** resources and receive a collection response, the response includes the
[metadata](/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta) of the collection as
well as [object metadata](/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta)
[list metadata](/docs/reference/generated/kubernetes-api/v{{ skew currentVersion >}}/#listmeta-v1-meta)
of the collection as well as
[object metadata](/docs/reference/generated/kubernetes-api/v{{ skew currentVersion >}}/#objectmeta-v1-meta)
for each item in that collection. For individual objects found within a collection response,
`.metadata.resourceVersion` tracks when that object was last updated, and not how up-to-date
the object is when served.

View File

@ -144,6 +144,44 @@ Examples:
See the [Kubernetes URL library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#URLs)
godoc for more information.
### Kubernetes authorizer library
For CEL expressions in the API where a variable of type `Authorizer` is available,
the authorizer may be used to perform authorization checks for the principal
(authenticated user) of the request.
API resource checks are performed as follows:
1. Specify the group and resource to check: `Authorizer.group(string).resource(string) ResourceCheck`
2. Optionally call any combination of the following builder functions to further narrow the authorization check.
Note that these functions return the receiver type and can be chained:
- `ResourceCheck.subresource(string) ResourceCheck`
- `ResourceCheck.namespace(string) ResourceCheck`
- `ResourceCheck.name(string) ResourceCheck`
3. Call `ResourceCheck.check(verb string) Decision` to perform the authorization check.
4. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.
Non-resource authorization checks are performed as follows:
1. Specify only a path: `Authorizer.path(string) PathCheck`
1. Call `PathCheck.check(httpVerb string) Decision` to perform the authorization check.
1. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.
To perform an authorization check for a service account:
- `Authorizer.serviceAccount(namespace string, name string) Authorizer`
{{< table caption="Examples of CEL expressions using Authz library functions" >}}
| CEL Expression | Purpose |
|--------------------------------------------------------------------------------------------------------------|------------------------------------------------|
| `authorizer.group('').resource('pods').namespace('default').check('create').allowed()`                       | Returns true if the principal (user or service account) is allowed to create pods in the 'default' namespace. |
| `authorizer.path('/healthz').check('get').allowed()` | Checks if the principal (user or service account) is authorized to make HTTP GET requests to the /healthz API path. |
| `authorizer.serviceAccount('default', 'myserviceaccount').resource('deployments').check('delete').allowed()` | Checks if the service account is authorized to delete deployments. |
{{< /table >}}
See the [Kubernetes Authz library](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz)
godoc for more information.
## Type checking
CEL is a [gradually typed language](https://github.com/google/cel-spec/blob/master/doc/langdef.md#gradual-type-checking).
@ -297,4 +335,4 @@ execute. If so, the API server prevents the CEL expression from being written to
API resources by rejecting create or update operations containing the CEL
expression to the API resources. This feature offers a stronger assurance that
CEL expressions written to the API resource will be evaluated at runtime without
exceeding the runtime cost budget.
exceeding the runtime cost budget.

View File

@ -44,15 +44,16 @@ If you are running a version of Kubernetes other than v{{< skew currentVersion >
check the documentation for that version.
{{< /note >}}
<!-- body -->
## Install and configure prerequisites
The following steps apply common settings for Kubernetes nodes on Linux.
The following steps apply common settings for Kubernetes nodes on Linux.
You can skip a particular setting if you're certain you don't need it.
For more information, see [Network Plugin Requirements](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements) or the documentation for your specific container runtime.
For more information, see
[Network Plugin Requirements](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements)
or the documentation for your specific container runtime.
### Forwarding IPv4 and letting iptables see bridged traffic
@ -78,29 +79,31 @@ EOF
sudo sysctl --system
```
Verify that the `br_netfilter`, `overlay` modules are loaded by running below instructions:
Verify that the `br_netfilter`, `overlay` modules are loaded by running the following commands:
```bash
lsmod | grep br_netfilter
lsmod | grep overlay
```
Verify that the `net.bridge.bridge-nf-call-iptables`, `net.bridge.bridge-nf-call-ip6tables`, `net.ipv4.ip_forward` system variables are set to 1 in your `sysctl` config by running below instruction:
Verify that the `net.bridge.bridge-nf-call-iptables`, `net.bridge.bridge-nf-call-ip6tables`, and
`net.ipv4.ip_forward` system variables are set to `1` in your `sysctl` config by running the following command:
```bash
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```
## Cgroup drivers
## cgroup drivers
On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}}
are used to constrain resources that are allocated to processes.
Both {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
Both the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
underlying container runtime need to interface with control groups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/) and set
resources such as cpu/memory requests and limits. To interface with control
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/)
and set resources such as cpu/memory requests and limits. To interface with control
groups, the kubelet and the container runtime need to use a *cgroup driver*.
It's critical that the kubelet and the container runtime uses the same cgroup
It's critical that the kubelet and the container runtime use the same cgroup
driver and are configured the same.
There are two cgroup drivers available:
@ -110,16 +113,15 @@ There are two cgroup drivers available:
### cgroupfs driver {#cgroupfs-cgroup-driver}
The `cgroupfs` driver is the default cgroup driver in the kubelet. When the `cgroupfs`
driver is used, the kubelet and the container runtime directly interface with
the cgroup filesystem to configure cgroups.
The `cgroupfs` driver is the [default cgroup driver in the kubelet](/docs/reference/config-api/kubelet-config.v1beta1).
When the `cgroupfs` driver is used, the kubelet and the container runtime directly interface with
the cgroup filesystem to configure cgroups.
The `cgroupfs` driver is **not** recommended when
[systemd](https://www.freedesktop.org/wiki/Software/systemd/) is the
init system because systemd expects a single cgroup manager on
the system. Additionally, if you use [cgroup v2](/docs/concepts/architecture/cgroups)
, use the `systemd` cgroup driver instead of
`cgroupfs`.
the system. Additionally, if you use [cgroup v2](/docs/concepts/architecture/cgroups), use the `systemd`
cgroup driver instead of `cgroupfs`.
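
If you are unsure which cgroup version your nodes use, one way to check (a sketch that
assumes a typical Linux node with the cgroup filesystem mounted at `/sys/fs/cgroup`) is:

```shell
# Prints "cgroup2fs" on a node using cgroup v2, and "tmpfs" on a node using cgroup v1
stat -fc %T /sys/fs/cgroup/
```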
### systemd cgroup driver {#systemd-cgroup-driver}
@ -150,6 +152,11 @@ kind: KubeletConfiguration
cgroupDriver: systemd
```
{{< note >}}
In v1.22 and later, when creating a cluster with kubeadm, if the user does not set
the `cgroupDriver` field under `KubeletConfiguration`, kubeadm defaults it to `systemd`.
{{< /note >}}
If you configure `systemd` as the cgroup driver for the kubelet, you must also
configure `systemd` as the cgroup driver for the container runtime. Refer to
the documentation for your container runtime for instructions. For example:
@ -190,7 +197,9 @@ using the (deprecated) v1alpha2 API instead.
This section outlines the necessary steps to use containerd as CRI runtime.
To install containerd on your system, follow the instructions on [getting started with containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).Return to this step once you've created a valid `config.toml` configuration file.
To install containerd on your system, follow the instructions on
[getting started with containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).
Return to this step once you've created a valid `config.toml` configuration file.
{{< tabs name="Finding your config.toml file" >}}
{{% tab name="Linux" %}}

View File

@ -157,7 +157,7 @@ For more information on version skews, see:
2. Download the Google Cloud public signing key:
```shell
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
```
3. Add the Kubernetes `apt` repository:
@ -217,7 +217,7 @@ sudo systemctl enable --now kubelet
Install CNI plugins (required for most pod network):
```bash
CNI_PLUGINS_VERSION="v1.2.0"
CNI_PLUGINS_VERSION="v1.3.0"
ARCH="amd64"
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"
@ -239,7 +239,7 @@ sudo mkdir -p "$DOWNLOAD_DIR"
Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI))
```bash
CRICTL_VERSION="v1.26.0"
CRICTL_VERSION="v1.27.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
```
@ -253,7 +253,7 @@ cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.4.0"
RELEASE_VERSION="v0.15.1"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
@ -297,4 +297,4 @@ If you are running into difficulties with kubeadm, please consult our [troublesh
## {{% heading "whatsnext" %}}
* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)

View File

@ -171,7 +171,7 @@ It augments the basic
{{< note >}}
The contents below are just an example. If you don't want to use a package manager
follow the guide outlined in the [Without a package manager](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2))
follow the guide outlined in the [Without a package manager](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2)
section.
{{< /note >}}

View File

@ -68,7 +68,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
ExecStart=
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
# Replace the value of "--container-runtime-endpoint" for a different container runtime if needed.
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
Restart=always
EOF

View File

@ -2,7 +2,7 @@
reviewers:
- jpbetz
- cheftako
title: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
title: Migrate Replicated Control Plane To Use Cloud Controller Manager
linkTitle: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
content_type: task
weight: 250
@ -14,45 +14,92 @@ weight: 250
## Background
As part of the [cloud provider extraction effort](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/), all cloud specific controllers must be moved out of the `kube-controller-manager`. All existing clusters that run cloud controllers in the `kube-controller-manager` must migrate to instead run the controllers in a cloud provider specific `cloud-controller-manager`.
As part of the [cloud provider extraction effort](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/),
all cloud specific controllers must be moved out of the `kube-controller-manager`.
All existing clusters that run cloud controllers in the `kube-controller-manager`
must migrate to instead run the controllers in a cloud provider specific
`cloud-controller-manager`.
Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud specific" controllers between the `kube-controller-manager` and the `cloud-controller-manager` via a shared resource lock between the two components while upgrading the replicated control plane. For a single-node control plane, or if unavailability of controller managers can be tolerated during the upgrade, Leader Migration is not needed and this guide can be ignored.
Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud
specific" controllers between the `kube-controller-manager` and the
`cloud-controller-manager` via a shared resource lock between the two components
while upgrading the replicated control plane. For a single-node control plane, or if
unavailability of controller managers can be tolerated during the upgrade, Leader
Migration is not needed and this guide can be ignored.
Leader Migration can be enabled by setting `--enable-leader-migration` on `kube-controller-manager` or `cloud-controller-manager`. Leader Migration only applies during the upgrade and can be safely disabled or left enabled after the upgrade is complete.
Leader Migration can be enabled by setting `--enable-leader-migration` on
`kube-controller-manager` or `cloud-controller-manager`. Leader Migration only
applies during the upgrade and can be safely disabled or left enabled after the
upgrade is complete.
This guide walks you through the manual process of upgrading the control plane from `kube-controller-manager` with built-in cloud provider to running both `kube-controller-manager` and `cloud-controller-manager`. If you use a tool to deploy and manage the cluster, please refer to the documentation of the tool and the cloud provider for specific instructions of the migration.
This guide walks you through the manual process of upgrading the control plane from
`kube-controller-manager` with built-in cloud provider to running both
`kube-controller-manager` and `cloud-controller-manager`. If you use a tool to deploy
and manage the cluster, please refer to the documentation of the tool and the cloud
provider for specific instructions for the migration.
## {{% heading "prerequisites" %}}
It is assumed that the control plane is running Kubernetes version N and to be upgraded to version N + 1. Although it is possible to migrate within the same version, ideally the migration should be performed as part of an upgrade so that changes of configuration can be aligned to each release. The exact versions of N and N + 1 depend on each cloud provider. For example, if a cloud provider builds a `cloud-controller-manager` to work with Kubernetes 1.24, then N can be 1.23 and N + 1 can be 1.24.
It is assumed that the control plane is running Kubernetes version N and to be
upgraded to version N + 1. Although it is possible to migrate within the same
version, ideally the migration should be performed as part of an upgrade so that
changes of configuration can be aligned to each release. The exact versions of N and
N + 1 depend on each cloud provider. For example, if a cloud provider builds a
`cloud-controller-manager` to work with Kubernetes 1.24, then N can be 1.23 and N + 1
can be 1.24.
The control plane nodes should run `kube-controller-manager` with Leader Election enabled, which is the default. As of version N, an in-tree cloud provider must be set with `--cloud-provider` flag and `cloud-controller-manager` should not yet be deployed.
The control plane nodes should run `kube-controller-manager` with Leader Election
enabled, which is the default. As of version N, an in-tree cloud provider must be set
with the `--cloud-provider` flag, and `cloud-controller-manager` should not yet be
deployed.
The out-of-tree cloud provider must have built a `cloud-controller-manager` with Leader Migration implementation. If the cloud provider imports `k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later, Leader Migration will be available. However, for version before v0.22.0, Leader Migration is alpha and requires feature gate `ControllerManagerLeaderMigration` to be enabled in `cloud-controller-manager`.
The out-of-tree cloud provider must have built a `cloud-controller-manager` with
Leader Migration implementation. If the cloud provider imports
`k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later,
Leader Migration will be available. However, for versions before v0.22.0, Leader
Migration is alpha and requires feature gate `ControllerManagerLeaderMigration` to be
enabled in `cloud-controller-manager`.
This guide assumes that kubelet of each control plane node starts `kube-controller-manager` and `cloud-controller-manager` as static pods defined by their manifests. If the components run in a different setting, please adjust the steps accordingly.
This guide assumes that the kubelet of each control plane node starts
`kube-controller-manager` and `cloud-controller-manager` as static pods defined by
their manifests. If the components run in a different setting, please adjust the
steps accordingly.
For authorization, this guide assumes that the cluster uses RBAC. If another authorization mode grants permissions to `kube-controller-manager` and `cloud-controller-manager` components, please grant the needed access in a way that matches the mode.
For authorization, this guide assumes that the cluster uses RBAC. If another
authorization mode grants permissions to `kube-controller-manager` and
`cloud-controller-manager` components, please grant the needed access in a way that
matches the mode.
<!-- steps -->
### Grant access to Migration Lease
The default permissions of the controller manager allow only accesses to their main Lease. In order for the migration to work, accesses to another Lease are required.
The default permissions of the controller manager allow only access to their main
Lease. In order for the migration to work, access to another Lease is required.
You can grant `kube-controller-manager` full access to the leases API by modifying the `system::leader-locking-kube-controller-manager` role. This task guide assumes that the name of the migration lease is `cloud-provider-extraction-migration`.
You can grant `kube-controller-manager` full access to the leases API by modifying
the `system::leader-locking-kube-controller-manager` role. This task guide assumes
that the name of the migration lease is `cloud-provider-extraction-migration`.
`kubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
```shell
kubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge
```
Do the same to the `system::leader-locking-cloud-controller-manager` role.
`kubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
```shell
kubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge
```
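
To double-check the result, you can inspect the patched roles and confirm that the
`cloud-provider-extraction-migration` lease name now appears under `resourceNames`:

```shell
kubectl -n kube-system get role 'system::leader-locking-kube-controller-manager' -o yaml
kubectl -n kube-system get role 'system::leader-locking-cloud-controller-manager' -o yaml
```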
### Initial Leader Migration configuration
Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. At this moment, with in-tree cloud provider, `kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`. The following example configuration shows the assignment.
Leader Migration optionally takes a configuration file representing the state of
controller-to-manager assignment. At this moment, with an in-tree cloud provider,
`kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`. The
following example configuration shows the assignment.
Leader Migration can be enabled without a configuration. Please see [Default Configuration](#default-configuration) for details.
Leader Migration can be enabled without a configuration. Please see
[Default Configuration](#default-configuration) for details.
```yaml
kind: LeaderMigrationConfiguration
@ -67,8 +114,9 @@ controllerLeaders:
component: kube-controller-manager
```
Alternatively, because the controllers can run under either controller managers, setting `component` to `*`
for both sides makes the configuration file consistent between both parties of the migration.
Alternatively, because the controllers can run under either controller manager,
setting `component` to `*` for both sides makes the configuration file consistent
between both parties of the migration.
```yaml
# wildcard version
@ -84,16 +132,25 @@ controllerLeaders:
component: "*"
```
On each control plane node, save the content to `/etc/leadermigration.conf`, and update the manifest of `kube-controller-manager` so that the file is mounted inside the container at the same location. Also, update the same manifest to add the following arguments:
On each control plane node, save the content to `/etc/leadermigration.conf`, and
update the manifest of `kube-controller-manager` so that the file is mounted inside
the container at the same location. Also, update the same manifest to add the
following arguments:
- `--enable-leader-migration` to enable Leader Migration on the controller manager
- `--leader-migration-config=/etc/leadermigration.conf` to set configuration file
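
As an illustration only (your static Pod manifests and file layout may differ), the relevant
additions to the `kube-controller-manager` manifest could look like this:

```yaml
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # ... existing flags ...
    - --enable-leader-migration
    - --leader-migration-config=/etc/leadermigration.conf
    volumeMounts:
    # ... existing volume mounts ...
    - name: leadermigration
      mountPath: /etc/leadermigration.conf
      readOnly: true
  volumes:
  # ... existing volumes ...
  - name: leadermigration
    hostPath:
      path: /etc/leadermigration.conf
      type: File
```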
Restart `kube-controller-manager` on each node. At this moment, `kube-controller-manager` has leader migration enabled and is ready for the migration.
Restart `kube-controller-manager` on each node. At this moment,
`kube-controller-manager` has leader migration enabled and is ready for the
migration.
### Deploy Cloud Controller Manager
In version N + 1, the desired state of controller-to-manager assignment can be represented by a new configuration file, shown as follows. Please note `component` field of each `controllerLeaders` changing from `kube-controller-manager` to `cloud-controller-manager`. Alternatively, use the wildcard version mentioned above, which has the same effect.
In version N + 1, the desired state of controller-to-manager assignment can be
represented by a new configuration file, shown as follows. Please note the `component`
field of each `controllerLeaders` entry changing from `kube-controller-manager` to
`cloud-controller-manager`. Alternatively, use the wildcard version mentioned above,
which has the same effect.
```yaml
kind: LeaderMigrationConfiguration
@ -108,35 +165,70 @@ controllerLeaders:
component: cloud-controller-manager
```
When creating control plane nodes of version N + 1, the content should be deployed to `/etc/leadermigration.conf`. The manifest of `cloud-controller-manager` should be updated to mount the configuration file in the same manner as `kube-controller-manager` of version N. Similarly, add `--enable-leader-migration` and `--leader-migration-config=/etc/leadermigration.conf` to the arguments of `cloud-controller-manager`.
When creating control plane nodes of version N + 1, the content should be deployed to
`/etc/leadermigration.conf`. The manifest of `cloud-controller-manager` should be
updated to mount the configuration file in the same manner as
`kube-controller-manager` of version N. Similarly, add `--enable-leader-migration`
and `--leader-migration-config=/etc/leadermigration.conf` to the arguments of
`cloud-controller-manager`.
Create a new control plane node of version N + 1 with the updated `cloud-controller-manager` manifest, and with the `--cloud-provider` flag set to `external` for `kube-controller-manager`. `kube-controller-manager` of version N + 1 MUST NOT have Leader Migration enabled because, with an external cloud provider, it does not run the migrated controllers anymore, and thus it is not involved in the migration.
Create a new control plane node of version N + 1 with the updated
`cloud-controller-manager` manifest, and with the `--cloud-provider` flag set to
`external` for `kube-controller-manager`. `kube-controller-manager` of version N + 1
MUST NOT have Leader Migration enabled because, with an external cloud provider, it
does not run the migrated controllers anymore, and thus it is not involved in the
migration.
Please refer to [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) for more detail on how to deploy `cloud-controller-manager`.
Please refer to [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/)
for more detail on how to deploy `cloud-controller-manager`.
### Upgrade Control Plane
The control plane now contains nodes of both version N and N + 1. The nodes of version N run `kube-controller-manager` only, and these of version N + 1 run both `kube-controller-manager` and `cloud-controller-manager`. The migrated controllers, as specified in the configuration, are running under either `kube-controller-manager` of version N or `cloud-controller-manager` of version N + 1 depending on which controller manager holds the migration lease. No controller will ever be running under both controller managers at any time.
The control plane now contains nodes of both version N and N + 1. The nodes of
version N run `kube-controller-manager` only, and those of version N + 1 run both
`kube-controller-manager` and `cloud-controller-manager`. The migrated controllers,
as specified in the configuration, are running under either `kube-controller-manager`
of version N or `cloud-controller-manager` of version N + 1 depending on which
controller manager holds the migration lease. No controller will ever be running
under both controller managers at any time.
In a rolling manner, create a new control plane node of version N + 1 and bring down one of version N + 1 until the control plane contains only nodes of version N + 1.
If a rollback from version N + 1 to N is required, add nodes of version N with Leader Migration enabled for `kube-controller-manager` back to the control plane, replacing one of version N + 1 each time until there are only nodes of version N.
In a rolling manner, create a new control plane node of version N + 1 and bring down
one of version N until the control plane contains only nodes of version N + 1.
If a rollback from version N + 1 to N is required, add nodes of version N with Leader
Migration enabled for `kube-controller-manager` back to the control plane, replacing
one of version N + 1 each time until there are only nodes of version N.
### (Optional) Disable Leader Migration {#disable-leader-migration}
Now that the control plane has been upgraded to run both `kube-controller-manager` and `cloud-controller-manager` of version N + 1, Leader Migration has finished its job and can be safely disabled to save one Lease resource. It is safe to re-enable Leader Migration for the rollback in the future.
Now that the control plane has been upgraded to run both `kube-controller-manager`
and `cloud-controller-manager` of version N + 1, Leader Migration has finished its
job and can be safely disabled to save one Lease resource. It is safe to re-enable
Leader Migration for the rollback in the future.
In a rolling manager, update manifest of `cloud-controller-manager` to unset both `--enable-leader-migration` and `--leader-migration-config=` flag, also remove the mount of `/etc/leadermigration.conf`, and finally remove `/etc/leadermigration.conf`. To re-enable Leader Migration, recreate the configuration file and add its mount and the flags that enable Leader Migration back to `cloud-controller-manager`.
In a rolling manner, update the manifest of `cloud-controller-manager` to unset both the
`--enable-leader-migration` and `--leader-migration-config=` flags, remove the
mount of `/etc/leadermigration.conf`, and finally delete `/etc/leadermigration.conf` itself.
To re-enable Leader Migration, recreate the configuration file and add its mount and
the flags that enable Leader Migration back to `cloud-controller-manager`.
### Default Configuration
Starting Kubernetes 1.22, Leader Migration provides a default configuration suitable for the default controller-to-manager assignment.
The default configuration can be enabled by setting `--enable-leader-migration` but without `--leader-migration-config=`.
Starting with Kubernetes 1.22, Leader Migration provides a default configuration suitable
for the default controller-to-manager assignment.
The default configuration can be enabled by setting `--enable-leader-migration` but
without `--leader-migration-config=`.
For `kube-controller-manager` and `cloud-controller-manager`, if there are no flags that enable any in-tree cloud provider or change ownership of controllers, the default configuration can be used to avoid manual creation of the configuration file.
For `kube-controller-manager` and `cloud-controller-manager`, if there are no flags
that enable any in-tree cloud provider or change ownership of controllers, the
default configuration can be used to avoid manual creation of the configuration file.
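
As a sketch, relying on the default configuration means passing only the enable flag
(all other required flags for the controller manager are omitted here):

```shell
cloud-controller-manager --enable-leader-migration
```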
### Special case: migrating the Node IPAM controller {#node-ipam-controller-migration}
If your cloud provider provides an implementation of Node IPAM controller, you should switch to the implementation in `cloud-controller-manager`. Disable Node IPAM controller in `kube-controller-manager` of version N + 1 by adding `--controllers=*,-nodeipam` to its flags. Then add `nodeipam` to the list of migrated controllers.
If your cloud provider provides an implementation of Node IPAM controller, you should
switch to the implementation in `cloud-controller-manager`. Disable Node IPAM
controller in `kube-controller-manager` of version N + 1 by adding
`--controllers=*,-nodeipam` to its flags. Then add `nodeipam` to the list of migrated
controllers.
```yaml
# wildcard version, with nodeipam
@ -156,5 +248,6 @@ controllerLeaders:
## {{% heading "whatsnext" %}}
- Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration) enhancement proposal.
- Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration)
enhancement proposal.

View File

@ -21,7 +21,9 @@ acceptably. The kubelet provides methods to enable more complex workload
placement policies while keeping the abstraction free from explicit placement
directives.
For detailed information on resource management, please refer to the
[Resource Management for Pods and Containers](/docs/concepts/configuration/manage-resources-containers)
documentation.
## {{% heading "prerequisites" %}}

View File

@ -34,7 +34,7 @@ This page shows how to enable and configure encryption of secret data at rest.
The `kube-apiserver` process accepts an argument `--encryption-provider-config`
that controls how API data is encrypted in etcd.
The configuration is provided as an API named
[`EncryptionConfiguration`](/docs/reference/config-api/apiserver-encryption.v1/). `--encryption-provider-config-automatic-reload` boolean argument determines if the file set by `--encryption-provider-config` should be automatically reloaded if the disk contents change. This enables key rotation without API server restarts. An example configuration is provided below.
[`EncryptionConfiguration`](/docs/reference/config-api/apiserver-encryption.v1/). An example configuration is provided below.
{{< caution >}}
**IMPORTANT:** For high-availability configurations (with two or more control plane nodes), the
@ -321,19 +321,19 @@ To create a new Secret, perform the following steps:
- command:
- kube-apiserver
...
- --encryption-provider-config=/etc/kubernetes/enc/enc.yaml # <-- add this line
- --encryption-provider-config=/etc/kubernetes/enc/enc.yaml # add this line
volumeMounts:
...
- name: enc # <-- add this line
mountPath: /etc/kubernetes/enc # <-- add this line
readonly: true # <-- add this line
- name: enc # add this line
mountPath: /etc/kubernetes/enc # add this line
readonly: true # add this line
...
volumes:
...
- name: enc # <-- add this line
hostPath: # <-- add this line
path: /etc/kubernetes/enc # <-- add this line
type: DirectoryOrCreate # <-- add this line
- name: enc # add this line
hostPath: # add this line
path: /etc/kubernetes/enc # add this line
type: DirectoryOrCreate # add this line
...
```
@ -462,6 +462,19 @@ Then run the following command to force decrypt all Secrets:
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```
## Configure automatic reloading
You can configure automatic reloading of encryption provider configuration.
That setting determines whether the
{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} should
load the file you specify for `--encryption-provider-config` only once at
startup, or automatically whenever you change that file. Enabling this option
allows you to change the keys for encryption at rest without restarting the
API server.
To allow automatic reloading, configure the API server to run with:
`--encryption-provider-config-automatic-reload=true`
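
For example, building on the manifest snippet earlier on this page, a sketch of the extra
argument in the `kube-apiserver` static Pod manifest:

```yaml
- command:
  - kube-apiserver
  ...
  - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml
  - --encryption-provider-config-automatic-reload=true # add this line
  ...
```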
## {{% heading "whatsnext" %}}
* Learn more about the [EncryptionConfiguration configuration API (v1)](/docs/reference/config-api/apiserver-encryption.v1/).

View File

@ -14,39 +14,69 @@ This page shows how to configure and enable the `ip-masq-agent`.
<!-- discussion -->
## IP Masquerade Agent User Guide
The `ip-masq-agent` configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
The `ip-masq-agent` configures iptables rules to hide a pod's IP address behind the cluster
node's IP address. This is typically done when sending traffic to destinations outside the
cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
### **Key Terms**
* **NAT (Network Address Translation)**
Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing.
Is a method of remapping one IP address to another by modifying either the source and/or
destination address information in the IP header. Typically performed by a device doing IP routing.
* **Masquerading**
A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address.
A form of NAT that is typically used to perform a many to one address translation, where
multiple source IP addresses are masked behind a single address, which is typically the
device doing the IP routing. In Kubernetes this is the Node's IP address.
* **CIDR (Classless Inter-Domain Routing)**
Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
Based on variable-length subnet masking, CIDR allows specifying arbitrary-length prefixes.
CIDR introduced a new method of representation for IP addresses, now commonly known as
**CIDR notation**, in which an address or routing prefix is written with a suffix indicating
the number of bits of the prefix, such as 192.168.2.0/24.
* **Link Local**
A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
A link-local address is a network address that is valid only for communications within the
network segment or the broadcast domain that the host is connected to. Link-local addresses
for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when
sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This
essentially hides pod IP addresses behind the cluster node's IP address. In some environments,
traffic to "external" addresses must come from a known machine address. For example, in Google
Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in
Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide
the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the
agent is configured to treat the three private IP ranges specified by
[RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade
[CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
These ranges are `10.0.0.0/8`, `172.16.0.0/12`, and `192.168.0.0/16`.
The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default.
The agent is configured to reload its configuration from the location
*/etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
![masq/non-masq example](/images/docs/ip-masq.png)
The agent configuration file must be written in YAML or JSON syntax, and may contain three optional keys:
The agent configuration file must be written in YAML or JSON syntax, and may contain three
optional keys:
* `nonMasqueradeCIDRs`: A list of strings in
[CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
[CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify
the non-masquerade ranges.
* `masqLinkLocal`: A Boolean (true/false) which indicates whether to masquerade traffic to the
link local prefix `169.254.0.0/16`. False by default.
* `resyncInterval`: A time interval at which the agent attempts to reload config from disk.
For example: '30s', where 's' means seconds, 'ms' means milliseconds.
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masqueraded. Any other traffic (assumed to be internet) will be masqueraded. An example of a local destination from a pod could be its Node's IP address as well as another node's address or one of the IP addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The below entries show the default set of rules that are applied by the ip-masq-agent:
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 ranges will NOT be masqueraded. Any
other traffic (assumed to be internet) will be masqueraded. An example of a local destination
from a pod could be its Node's IP address as well as another node's address or one of the IP
addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The
below entries show the default set of rules that are applied by the ip-masq-agent:
```shell
iptables -t nat -L IP-MASQ-AGENT
```
```none
target prot opt source destination
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
@ -70,7 +100,8 @@ To create an ip-masq-agent, run the following kubectl command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml
```
You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.
You must also apply the appropriate node label to any nodes in your cluster that you want the
agent to run on.
```shell
kubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true
@ -78,10 +109,17 @@ kubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true
More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-sigs/ip-masq-agent)
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called "config".
In most cases, the default set of rules should be sufficient; however, if this is not the case
for your cluster, you can create and apply a
[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP
ranges that are affected. For example, to allow
only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following
[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called
"config".
{{< note >}}
It is important that the file is called config since, by default, that will be used as the key for lookup by the `ip-masq-agent`:
It is important that the file is called config since, by default, that will be used as the key
for lookup by the `ip-masq-agent`:
```yaml
nonMasqueradeCIDRs:
@ -96,7 +134,8 @@ Run the following command to add the config map to your cluster:
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
```
This will update a file located at `/etc/config/ip-masq-agent` which is periodically checked every `resyncInterval` and applied to the cluster node.
This will update a file located at `/etc/config/ip-masq-agent` which is periodically checked
every `resyncInterval` and applied to the cluster node.
After the resync interval has expired, you should see the iptables rules reflect your changes:
```shell
@ -111,7 +150,9 @@ RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent:
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
```
By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set `masqLinkLocal` to true in the ConfigMap.
By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which
sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can
set `masqLinkLocal` to true in the ConfigMap.
```yaml
nonMasqueradeCIDRs:

View File

@ -6,7 +6,7 @@ weight: 20
<!-- overview -->
This page explains how to configure the kubelet cgroup driver to match the container
This page explains how to configure the kubelet's cgroup driver to match the container
runtime cgroup driver for kubeadm clusters.
## {{% heading "prerequisites" %}}
@ -20,7 +20,9 @@ You should be familiar with the Kubernetes
The [Container runtimes](/docs/setup/production-environment/container-runtimes) page
explains that the `systemd` driver is recommended for kubeadm based setups instead
of the `cgroupfs` driver, because kubeadm manages the kubelet as a systemd service.
of the kubelet's [default](/docs/reference/config-api/kubelet-config.v1beta1) `cgroupfs` driver,
because kubeadm manages the kubelet as a
[systemd service](/docs/setup/production-environment/tools/kubeadm/kubelet-integration).
The page also provides details on how to set up a number of different container runtimes with the
`systemd` driver by default.
@ -32,9 +34,8 @@ This `KubeletConfiguration` can include the `cgroupDriver` field which controls
driver of the kubelet.
{{< note >}}
In v1.22, if the user is not setting the `cgroupDriver` field under `KubeletConfiguration`,
`kubeadm` will default it to `systemd`.
In v1.22 and later, if the user does not set the `cgroupDriver` field under `KubeletConfiguration`,
kubeadm defaults it to `systemd`.
{{< /note >}}
A minimal example of configuring the field explicitly:
@ -81,7 +82,7 @@ you must refer to the documentation of the container runtime of your choice.
## Migrating to the `systemd` driver
To change the cgroup driver of an existing kubeadm cluster to `systemd` in-place,
To change the cgroup driver of an existing kubeadm cluster from `cgroupfs` to `systemd` in-place,
a similar procedure to a kubelet upgrade is required. This must include both
steps outlined below.

View File

@ -53,13 +53,13 @@ setting up a cluster to use an external CA.
You can use the `check-expiration` subcommand to check when certificates expire:
```
```shell
kubeadm certs check-expiration
```
The output is similar to this:
```
```console
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Dec 30, 2020 23:36 UTC 364d no
apiserver Dec 30, 2020 23:36 UTC 364d ca no
@ -268,7 +268,7 @@ serverTLSBootstrap: true
If you have already created the cluster you must adapt it by doing the following:
- Find and edit the `kubelet-config-{{< skew currentVersion >}}` ConfigMap in the `kube-system` namespace.
In that ConfigMap, the `kubelet` key has a
[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/)
document as its value. Edit the KubeletConfiguration document to set `serverTLSBootstrap: true`.
- On each node, add the `serverTLSBootstrap: true` field in `/var/lib/kubelet/config.yaml`
and restart the kubelet with `systemctl restart kubelet`
@ -284,6 +284,8 @@ These CSRs can be viewed using:
```shell
kubectl get csr
```
```console
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending
csr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending

View File

@ -38,17 +38,17 @@ The upgrade workflow at high level is the following:
### Additional information
- The instructions below outline when to drain each node during the upgrade process.
If you are performing a **minor** version upgrade for any kubelet, you **must**
first drain the node (or nodes) that you are upgrading. In the case of control plane nodes,
they could be running CoreDNS Pods or other critical workloads. For more information see
[Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/).
If you are performing a **minor** version upgrade for any kubelet, you **must**
first drain the node (or nodes) that you are upgrading. In the case of control plane nodes,
they could be running CoreDNS Pods or other critical workloads. For more information see
[Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/).
- All containers are restarted after upgrade, because the container spec hash value is changed.
- To verify that the kubelet service has successfully restarted after the kubelet has been upgraded,
you can execute `systemctl status kubelet` or view the service logs with `journalctl -xeu kubelet`.
you can execute `systemctl status kubelet` or view the service logs with `journalctl -xeu kubelet`.
- Usage of the `--config` flag of `kubeadm upgrade` with
[kubeadm configuration API types](/docs/reference/config-api/kubeadm-config.v1beta3)
with the purpose of reconfiguring the cluster is not recommended and can have unexpected results. Follow the steps in
[Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure) instead.
[kubeadm configuration API types](/docs/reference/config-api/kubeadm-config.v1beta3)
with the purpose of reconfiguring the cluster is not recommended and can have unexpected results. Follow the steps in
[Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure) instead.
<!-- steps -->
@ -58,15 +58,23 @@ Find the latest patch release for Kubernetes {{< skew currentVersion >}} using t
{{< tabs name="k8s_install_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
apt update
apt-cache madison kubeadm
# find the latest {{< skew currentVersion >}} version in the list
# it should look like {{< skew currentVersion >}}.x-00, where x is the latest patch
```shell
# Find the latest {{< skew currentVersion >}} version in the list.
# It should look like {{< skew currentVersion >}}.x-00, where x is the latest patch.
apt update
apt-cache madison kubeadm
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest {{< skew currentVersion >}} version in the list
# it should look like {{< skew currentVersion >}}.x-0, where x is the latest patch
```shell
# Find the latest {{< skew currentVersion >}} version in the list.
# It should look like {{< skew currentVersion >}}.x-0, where x is the latest patch.
yum list --showduplicates kubeadm --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
@ -79,75 +87,78 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
**For the first control plane node**
1. Upgrade kubeadm:

   {{< tabs name="k8s_install_kubeadm_first_cp" >}}
   {{% tab name="Ubuntu, Debian or HypriotOS" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
   apt-mark unhold kubeadm && \
   apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \
   apt-mark hold kubeadm
   ```
   {{% /tab %}}
   {{% tab name="CentOS, RHEL or Fedora" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
   yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
   ```
   {{% /tab %}}
   {{< /tabs >}}

1. Verify that the download works and has the expected version:

   ```shell
   kubeadm version
   ```

1. Verify the upgrade plan:

   ```shell
   kubeadm upgrade plan
   ```

   This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
   It also shows a table with the component config version states.

   {{< note >}}
   `kubeadm upgrade` also automatically renews the certificates that it manages on this node.
   To opt-out of certificate renewal the flag `--certificate-renewal=false` can be used.
   For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs).
   {{</ note >}}

   {{< note >}}
   If `kubeadm upgrade plan` shows any component configs that require manual upgrade, users must provide
   a config file with replacement configs to `kubeadm upgrade apply` via the `--config` command line flag.
   Failing to do so will cause `kubeadm upgrade apply` to exit with an error and not perform an upgrade.
   {{</ note >}}

1. Choose a version to upgrade to, and run the appropriate command. For example:

   ```shell
   # replace x with the patch version you picked for this upgrade
   sudo kubeadm upgrade apply v{{< skew currentVersion >}}.x
   ```

   Once the command finishes you should see:

   ```
   [upgrade/successful] SUCCESS! Your cluster was upgraded to "v{{< skew currentVersion >}}.x". Enjoy!

   [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
   ```

1. Manually upgrade your CNI provider plugin.

   Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
   Check the [addons](/docs/concepts/cluster-administration/addons/) page to
   find your CNI provider and see whether additional upgrade steps are required.

   This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.
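If you are not sure whether your CNI provider runs as a DaemonSet, one rough, non-authoritative way to check is to list the DaemonSets in the namespace the add-on was installed to (commonly `kube-system`, though this depends on your provider):

```shell
# List DaemonSets where CNI add-ons are usually installed.
# The DaemonSet name depends on your CNI provider (for example calico-node or kube-flannel-ds).
kubectl get daemonsets --namespace kube-system
```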
**For the other control plane nodes**
@ -167,60 +178,63 @@ Also calling `kubeadm upgrade plan` and upgrading the CNI provider plugin is no
### Drain the node
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:

```shell
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
```
### Upgrade kubelet and kubectl
1. Upgrade the kubelet and kubectl:

   {{< tabs name="k8s_install_kubelet" >}}
   {{% tab name="Ubuntu, Debian or HypriotOS" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
   apt-mark unhold kubelet kubectl && \
   apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \
   apt-mark hold kubelet kubectl
   ```
   {{% /tab %}}
   {{% tab name="CentOS, RHEL or Fedora" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
   yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
   ```
   {{% /tab %}}
   {{< /tabs >}}

1. Restart the kubelet:

   ```shell
   sudo systemctl daemon-reload
   sudo systemctl restart kubelet
   ```
### Uncordon the node
Bring the node back online by marking it schedulable:

```shell
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
```
## Upgrade worker nodes
The upgrade procedure on worker nodes should be executed one node at a time or a few nodes at a time,
without compromising the minimum required capacity for running your workloads.
The following pages show how to upgrade Linux and Windows worker nodes:

* [Upgrade Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/)
* [Upgrade Windows nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/)
## Verify the status of the cluster
@ -280,4 +294,3 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
- Fetches the kubeadm `ClusterConfiguration` from the cluster.
- Upgrades the kubelet configuration for this node.
@ -9,7 +9,7 @@ weight: 100
This page explains how to upgrade Linux worker nodes created with kubeadm.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
* Familiarize yourself with [the process for upgrading the rest of your kubeadm
cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to
@ -21,80 +21,79 @@ upgrade the control plane nodes before upgrading your Linux Worker nodes.
### Upgrade kubeadm
Upgrade kubeadm:

{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \
apt-mark hold kubeadm
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
### Call "kubeadm upgrade"
For worker nodes this upgrades the local kubelet configuration:

```shell
sudo kubeadm upgrade node
```
### Drain the node
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:

```shell
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
```
### Upgrade kubelet and kubectl
1. Upgrade the kubelet and kubectl:

   {{< tabs name="k8s_kubelet_and_kubectl" >}}
   {{% tab name="Ubuntu, Debian or HypriotOS" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
   apt-mark unhold kubelet kubectl && \
   apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \
   apt-mark hold kubelet kubectl
   ```
   {{% /tab %}}
   {{% tab name="CentOS, RHEL or Fedora" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
   yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
   ```
   {{% /tab %}}
   {{< /tabs >}}

1. Restart the kubelet:

   ```shell
   sudo systemctl daemon-reload
   sudo systemctl restart kubelet
   ```
### Uncordon the node
Bring the node back online by marking it schedulable:

```shell
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
```
## {{% heading "whatsnext" %}}
* See how to [Upgrade Windows nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/).
@ -54,7 +54,7 @@ In order to use this feature, the kubelet expects two flags to be set:
The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins
should be invoked for which container images. Here's an example configuration file you may end up using if you are using the
[ECR-based plugin](https://github.com/kubernetes/cloud-provider-aws/tree/master/cmd/ecr-credential-provider):
```yaml
apiVersion: kubelet.config.k8s.io/v1
@ -68,7 +68,7 @@ providers:
# name is the required name of the credential provider. It must match the name of the
# provider executable as seen by the kubelet. The executable must be in the kubelet's
# bin directory (set by the --image-credential-provider-bin-dir flag).
- name: ecr-credential-provider
# matchImages is a required list of strings used to match against images in order to
# determine if this provider should be invoked. If one of the strings matches the
# requested image from the kubelet, the plugin will be invoked and given a chance
@ -94,7 +94,7 @@ providers:
# - registry.io:8080/path
matchImages:
- "*.dkr.ecr.*.amazonaws.com"
- "*.dkr.ecr.*.amazonaws.cn"
- "*.dkr.ecr.*.amazonaws.com.cn"
- "*.dkr.ecr-fips.*.amazonaws.com"
- "*.dkr.ecr.us-iso-east-1.c2s.ic.gov"
- "*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov"
@ -107,8 +107,8 @@ providers:
apiVersion: credentialprovider.kubelet.k8s.io/v1
# Arguments to pass to the command when executing it.
# +optional
# args:
#   - --example-argument
# Env defines additional environment variables to expose to the process. These
# are unioned with the host's environment, as well as variables client-go uses
# to pass argument to the plugin.
@ -98,8 +98,7 @@ then run the following commands:
## Configure the kubelet to use containerd as its container runtime
Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtime to the flags;
`--container-runtime-endpoint=unix:///run/containerd/containerd.sock`.
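As an illustration only, a node's `/var/lib/kubelet/kubeadm-flags.env` might look roughly like this after the edit; any other flags shown here are placeholders and will differ on your nodes:

```
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"
```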
Users using kubeadm should be aware that the `kubeadm` tool stores the CRI socket for each host as
@ -41,9 +41,9 @@ node-2 Ready v1.16.15 docker://19.3.1
node-3 Ready v1.16.15 docker://19.3.1
```
If your runtime shows as Docker Engine, you still might not be affected by the
removal of dockershim in Kubernetes v1.24.
[Check the runtime endpoint](#which-endpoint) to see if you use dockershim.
If you don't use dockershim, you aren't affected.
For containerd, the output is similar to this:
@ -88,7 +88,8 @@ nodes.
* If your nodes use Kubernetes v1.23 and earlier and these flags aren't
present or if the `--container-runtime` flag is not `remote`,
you use the dockershim socket with Docker Engine. The `--container-runtime` command line
argument is not available in Kubernetes v1.27 and later.
* If the `--container-runtime-endpoint` flag is present, check the socket
name to find out which runtime you use. For example,
`unix:///run/containerd/containerd.sock` is the containerd endpoint.
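If you want a quick, non-authoritative way to see which of these flags the kubelet on a node was actually started with, you can inspect the running process; the exact invocation depends on how the kubelet is launched on your distribution:

```shell
# Print the kubelet's command line, one argument per line, and filter for runtime flags
ps -o args= -C kubelet | tr ' ' '\n' | grep -- '--container-runtime'
```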
@ -96,4 +97,4 @@ nodes.
If you want to change the Container Runtime on a Node from Docker Engine to containerd,
you can find out more information on [migrating from Docker Engine to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/),
or, if you want to continue using Docker Engine in Kubernetes v1.24 and later, migrate to a
CRI-compatible adapter like [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).
@ -8,25 +8,26 @@ weight: 340
---
<!-- overview -->
This page shows how to view, work in, and delete {{< glossary_tooltip text="namespaces" term_id="namespace" >}}.
The page also shows how to use Kubernetes namespaces to subdivide your cluster.
## {{% heading "prerequisites" %}}
* Have an [existing Kubernetes cluster](/docs/setup/).
* You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}},
  {{< glossary_tooltip term_id="service" text="Services" >}}, and
  {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.
<!-- steps -->
## Viewing namespaces
List the current namespaces in a cluster using:
```shell
kubectl get namespaces
```
```console
NAME          STATUS    AGE
default       Active    11d
kube-system   Active    11d
@ -35,9 +36,12 @@ kube-public Active 11d
Kubernetes starts with three initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
* `kube-public` This namespace is created automatically and is readable by all users
  (including those not authenticated). This namespace is mostly reserved for cluster usage,
  in case that some resources should be visible and readable publicly throughout the whole cluster.
  The public aspect of this namespace is only a convention, not a requirement.
You can also get the summary of a specific namespace using:
@ -50,7 +54,7 @@ Or you can get detailed information with:
```shell
kubectl describe namespaces <name>
```
```console
Name: default
Labels: <none>
Annotations: <none>
@ -66,18 +70,18 @@ Resource Limits
Note that these details show both resource quota (if present) as well as resource limit ranges.
Resource quota tracks aggregate usage of resources in the Namespace and allows cluster operators
to define *Hard* resource usage limits that a Namespace may consume.

A limit range defines min/max constraints on the amount of resources a single entity can consume in
a Namespace.
See [Admission control: Limit Range](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md)
A namespace can be in one of two phases:
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and can not be used for new objects
For more details, see [Namespace](/docs/reference/kubernetes-api/cluster-resources/namespace-v1/)
in the API reference.
@ -85,35 +89,38 @@ in the API reference.
## Creating a new namespace
{{< note >}}
Avoid creating namespaces with the prefix `kube-`, since it is reserved for Kubernetes system namespaces.
{{< /note >}}
Create a new YAML file called `my-namespace.yaml` with the contents:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
```

Then run:

```shell
kubectl create -f ./my-namespace.yaml
```

Alternatively, you can create a namespace using the following command:

```shell
kubectl create namespace <insert-namespace-name-here>
```
The name of your namespace must be a valid
[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
There's an optional field `finalizers`, which allows observables to purge resources whenever the
namespace is deleted. Keep in mind that if you specify a nonexistent finalizer, the namespace will
be created but will get stuck in the `Terminating` state if the user tries to delete it.

More information on `finalizers` can be found in the namespace
[design doc](https://git.k8s.io/design-proposals-archive/architecture/namespaces.md#finalizers).
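For illustration, here is a minimal sketch of a Namespace manifest that sets the `finalizers` field explicitly; the namespace name is hypothetical, and `kubernetes` is the finalizer that Kubernetes itself normally manages:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace    # hypothetical name
spec:
  finalizers:
    - kubernetes          # the default finalizer; a nonexistent finalizer here would leave the namespace stuck in Terminating on deletion
```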
## Deleting a namespace
@ -131,191 +138,192 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te
## Subdividing your cluster using Kubernetes namespaces
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the
cluster to hold the default set of Pods, Services, and Deployments used by the cluster.

Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:

```shell
kubectl get namespaces
```
```console
NAME      STATUS    AGE
default   Active    13m
```
### Create new namespaces

For this exercise, we will create two additional Kubernetes namespaces to hold our content.

In a scenario where an organization is using a shared Kubernetes cluster for development and
production use cases:

- The development team would like to maintain a space in the cluster where they can get a view on
  the list of Pods, Services, and Deployments they use to build and run their application.
  In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify
  resources are relaxed to enable agile development.

- The operations team would like to maintain a space in the cluster where they can enforce strict
  procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run
  the production site.

One pattern this organization could follow is to partition the Kubernetes cluster into two
namespaces: `development` and `production`. Let's create two new namespaces to hold our work.

Create the `development` namespace using kubectl:

```shell
kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```

And then let's create the `production` namespace using kubectl:

```shell
kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
```

To be sure things are right, list all of the namespaces in our cluster.

```shell
kubectl get namespaces --show-labels
```
```console
NAME          STATUS    AGE       LABELS
default       Active    32m       <none>
development   Active    29s       name=development
production    Active    23s       name=production
```
### Create pods in each namespace
A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.
Users interacting with one namespace do not see the content in another namespace.

To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace.

```shell
kubectl create deployment snowflake \
  --image=registry.k8s.io/serve_hostname \
  -n=development --replicas=2
```

We have created a deployment whose replica size is 2 that is running the pod called `snowflake`
with a basic container that serves the hostname.

```shell
kubectl get deployment -n=development
```
```console
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
snowflake    2/2     2            2           2m
```
```shell
kubectl get pods -l app=snowflake -n=development
```
```console
NAME                         READY     STATUS    RESTARTS   AGE
snowflake-3968820950-9dgr8   1/1       Running   0          2m
snowflake-3968820950-vgc4n   1/1       Running   0          2m
```

And this is great, developers are able to do what they want, and they do not have to worry about
affecting content in the `production` namespace.

Let's switch to the `production` namespace and show how resources in one namespace are hidden from
the other. The `production` namespace should be empty, and the following commands should return nothing.

```shell
kubectl get deployment -n=production
kubectl get pods -n=production
```

Production likes to run cattle, so let's create some cattle pods.

```shell
kubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production
kubectl scale deployment cattle --replicas=5 -n=production

kubectl get deployment -n=production
```
```console
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
cattle       5/5     5            5           10s
```
```shell
kubectl get pods -l app=cattle -n=production
```
```console
NAME                      READY     STATUS    RESTARTS   AGE
cattle-2263376956-41xy6   1/1       Running   0          34s
cattle-2263376956-kw466   1/1       Running   0          34s
cattle-2263376956-n4v97   1/1       Running   0          34s
cattle-2263376956-p5p3i   1/1       Running   0          34s
cattle-2263376956-sxpth   1/1       Running   0          34s
```

At this point, it should be clear that the resources users create in one namespace are hidden from
the other namespace.
As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
authorization rules for each namespace.
<!-- discussion -->
## Understanding the motivation for using namespaces
A single cluster should be able to satisfy the needs of multiple users or groups of users
(henceforth in this document a _user community_).
Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.
It does this by providing the following:
1. A scope for [names](/docs/concepts/overview/working-with-objects/names/).
1. A mechanism to attach authorization and policy to a subsection of the cluster.
Use of multiple namespaces is optional.
Each user community wants to be able to work in isolation from other communities.
Each user community has its own:
1. resources (pods, services, replication controllers, etc.)
1. policies (who can or cannot perform actions in their community)
1. constraints (this community is allowed this much quota, etc.)
A cluster operator may create a Namespace for each unique user community.
The Namespace provides a unique scope for:
1. named resources (to avoid basic naming collisions)
1. delegated management authority to trusted users
1. ability to limit community resource consumption
Use cases include:
1. As a cluster operator, I want to support multiple user communities on a single cluster.
1. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted
   users in those communities.
1. As a cluster operator, I want to limit the amount of resources each community can consume in
   order to limit the impact to other communities using the cluster.
1. As a cluster user, I want to interact with resources that are pertinent to my user community in
   isolation of what other user communities are doing on the cluster.
## Understanding namespaces and DNS
When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding
[DNS entry](/docs/concepts/services-networking/dns-pod-service/).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
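As a concrete, purely hypothetical example, suppose a Service named `db` exists in both the `development` and `production` namespaces created above; from a Pod running in `development`, the short name resolves to the local Service, while the FQDN reaches across namespaces:

```shell
# run from inside a Pod in the development namespace (the "db" Service name is hypothetical)
nslookup db                                # resolves to db.development.svc.cluster.local
nslookup db.production.svc.cluster.local   # explicitly targets the Service in production
```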
## {{% heading "whatsnext" %}}
* Learn more about [setting the namespace preference](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference).
* Learn more about [setting the namespace for a request](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-for-a-request)
* See [namespaces design](https://git.k8s.io/design-proposals-archive/architecture/namespaces.md).
@ -231,7 +231,7 @@ Under this scenario, 'Allocatable' will be 14.5 CPUs, 28.5Gi of memory and
Scheduler ensures that the total memory `requests` across all pods on this node does
not exceed 28.5Gi and storage doesn't exceed 88Gi.
Kubelet evicts pods whenever the overall memory usage across pods exceeds 28.5Gi,
or if overall disk usage exceeds 88Gi. If all processes on the node consume as
much CPU as they can, pods together cannot consume more than 14.5 CPUs.
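For reference, the allocatable figures quoted in this scenario follow the general node allocatable formula; the concrete capacity and reservation values come from the scenario described earlier on this page, which is not part of this excerpt:

```
[Allocatable] = [Node Capacity] - [kube-reserved] - [system-reserved] - [Hard-Eviction-Threshold]
```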
If `kube-reserved` and/or `system-reserved` is not enforced and system daemons
@ -2,7 +2,6 @@
reviewers:
- soltysh
- sttts
- ericchiang
content_type: concept
title: Auditing
---
@ -44,7 +44,7 @@ The rest of this section describes these steps in detail.
The flow can be seen in the following diagram.
![aggregation auth flows](/images/docs/aggregation-api-auth-flow.png)
The source for the above swimlanes can be found in the source of this document.
@ -18,6 +18,11 @@ In Kubernetes, there are two ways to expose Pod and container fields to a runnin
Together, these two ways of exposing Pod and container fields are called the
downward API.
As Services are the primary mode of communication between containerized applications managed by Kubernetes,
it is helpful to be able to discover them at runtime.
Read more about accessing Services [here](/docs/tutorials/services/connect-applications-service/#accessing-the-service).
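As a small, hedged illustration of runtime Service discovery from inside a Pod (the exact variable names depend on your Service names; `KUBERNETES_SERVICE_HOST` is always present for the API server Service):

```shell
# Services that existed when the Pod was created are exposed as environment variables
printenv | grep _SERVICE_HOST
# DNS-based discovery also works when a cluster DNS add-on is running, for example:
# nslookup <service-name>.<namespace>.svc.cluster.local
```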
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
@ -157,7 +157,7 @@ The following methods exist for installing kubectl on Linux:
2. Download the Google Cloud public signing key:
```shell
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
```
3. Add the Kubernetes `apt` repository:
@ -122,8 +122,7 @@ gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor e
{{< note >}}
AppArmor is currently in beta, so options are specified as annotations. Once support graduates to
general availability, the annotations will be replaced with first-class fields.
{{< /note >}}
AppArmor profiles are specified *per-container*. To specify the AppArmor profile to run a Pod
@ -148,9 +148,9 @@ Para ver las etiquetas generadas automáticamente en cada pod, ejecuta el comand
```shell
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-75675f5897-7ci7o   1/1     Running   0          18s   app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-kzszj   1/1     Running   0          18s   app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-qqcnn   1/1     Running   0          18s   app=nginx,pod-template-hash=75675f5897
```
The created ReplicaSet guarantees that there are three `nginx` Pods running at all times.