commit 610b895266

Makefile
@@ -49,11 +49,11 @@ check-headers-file:
 	scripts/check-headers-file.sh

 production-build: module-check ## Build the production site and ensure that noindex headers aren't added
-	hugo --cleanDestinationDir --minify --environment production
+	GOMAXPROCS=1 hugo --cleanDestinationDir --minify --environment production
 	HUGO_ENV=production $(MAKE) check-headers-file

 non-production-build: module-check ## Build the non-production site, which adds noindex headers to prevent indexing
-	hugo --cleanDestinationDir --enableGitInfo --environment nonprod
+	GOMAXPROCS=1 hugo --cleanDestinationDir --enableGitInfo --environment nonprod

 serve: module-check ## Boot the development server.
 	hugo server --buildFuture --environment development
@@ -32,11 +32,11 @@ aliases:
   - bradtopol
   - divya-mohan0209
   - kbhawkey
   - mickeyboxell
   - natalisucks
   - nate-double-u
   - onlydole
   - reylejano
   - Rishit-dagli # 1.28 Release Team Docs Lead
   - sftim
   - tengqm
 sig-docs-en-reviews: # PR reviews for English content
@@ -13,64 +13,4 @@ In the Concepts section, you learn more about the parts of the Kubernetes system

<!-- body -->

## Overview

To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*:
which applications or other workloads you want to run, which container images they use, the number of replicas, and which network and disk resources you want to make available, and more. You set the desired state by creating objects using the Kubernetes API, typically via the command-line interface `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or change the desired state.

Once you have set the desired state, the *Kubernetes control plane* works to make the cluster's current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically, such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes control plane consists of a set of processes running on your cluster:

* The **Kubernetes master** consists of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and [kube-scheduler](/docs/admin/kube-scheduler/).
* Each individual node in your cluster that is not the master runs two processes:
  * **[kubelet](/docs/admin/kubelet/)**, which communicates with the Kubernetes master.
  * **[kube-proxy](/docs/admin/kube-proxy/)**, a network proxy that exposes the Kubernetes networking services on each node.

## Kubernetes Objects

Kubernetes contains a number of abstractions that represent the state of your system: applications and workloads deployed in containers, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API. See the [Kubernetes objects overview](/docs/concepts/abstractions/overview/) for more details.

The basic Kubernetes objects include:

* [Pod](/docs/concepts/workloads/pods/pod-overview/)
* [Service](/docs/concepts/services-networking/service/)
* [Volume](/docs/concepts/storage/volumes/)
* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)

In addition, Kubernetes contains a number of higher-level abstractions called controllers. Controllers build on the basic objects and provide additional functionality and convenience features. They include:

* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
* [Deployment](/docs/concepts/workloads/controllers/deployment/)
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)

## Kubernetes Control Plane

The various parts of the Kubernetes control plane, such as the Kubernetes master and the kubelet processes, govern how Kubernetes communicates with your cluster. The control plane maintains an inventory of all Kubernetes objects in the system and runs continuous control loops to manage the state of those objects. At any given time, the control plane's control loops respond to changes in the cluster and work to make the actual state of all objects in the system match the desired state that you defined.

For example, when you use the Kubernetes API to create a Deployment object, you provide a new desired state for the system. The Kubernetes control plane records that object creation and carries out your instructions by starting the required applications and scheduling them onto cluster nodes, thereby making the cluster's actual state match the desired state.

### Kubernetes Master

The Kubernetes master is responsible for maintaining the desired state of your cluster. When you interact with Kubernetes, for example by using the `kubectl` command-line tool, you are communicating with your cluster's Kubernetes master.

> The term "master" refers to a set of processes that manage the cluster state. Typically, these processes all run on a single node in the cluster, and this node is also referred to as the master. The master can be replicated for availability and redundancy.

### Kubernetes Nodes

The nodes in a cluster are the machines (VMs, physical servers, and so on) that run your applications and cloud workflows. The Kubernetes master controls each node; you will rarely interact with nodes directly.

#### Object Metadata

* [Annotations](/docs/concepts/overview/working-with-objects/annotations/)

## {{% heading "whatsnext" %}}

If you would like to write a concept page, see [Using Page Templates](/docs/home/contribute/page-templates/)
for information about the concept page type and the documentation template.
@@ -1,4 +1,6 @@
 ---
 title: "Kubernetes Architecture"
 weight: 30
+description: >
+  Describes the architectural concepts of Kubernetes.
 ---
@@ -1,5 +1,7 @@
 ---
 title: "Cluster Administration"
 weight: 100
+description: >
+  In-depth details relevant to creating and administering a Kubernetes cluster.
 ---
@@ -1,5 +1,7 @@
 ---
 title: "Configuration"
 weight: 80
+description: >
+  Resources that are useful when configuring Pods in Kubernetes.
 ---
@@ -1,5 +1,7 @@
 ---
 title: "Containers"
 weight: 40
+description: >
+  Methods for packaging applications together with their dependencies.
 ---
@@ -2,6 +2,9 @@
 title: Concept documentation template
 content_type: concept
 toc_hide: true
+description: >
+  If you would like to write a concept page, see [Using Page Templates](/docs/home/contribute/page-templates/)
+  for information about the concept page type and the documentation template.
 ---

 <!-- overview -->
@@ -5,4 +5,6 @@ feature:
   title: Designed for extensibility
   description: >
     Kubernetes can be extended without changing the upstream source code.
+description: >
+  Different ways to extend the functionality of Kubernetes.
 ---
@@ -1,5 +1,9 @@
 ---
 title: "Overview"
 weight: 20
+description: >
+  Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services.
+  This is achieved through automation and declarative configuration. Kubernetes has a large, rapidly growing ecosystem.
+  Services, support, and tools for Kubernetes are widely available.
 ---
@@ -1,5 +1,7 @@
 ---
 title: "Policies"
 weight: 90
+description: >
+  You can create policies that can be assigned to groups of resources.
 ---
@@ -1,5 +1,7 @@
 ---
 title: "Services, Load Balancing, and Networking"
 weight: 60
+description: >
+  Concepts and resources related to networking in Kubernetes.
 ---
@@ -1,5 +1,7 @@
 ---
 title: "Storage"
 weight: 70
+description: >
+  Methods for providing ephemeral or persistent storage to Pods in the cluster.
 ---
@@ -1,5 +1,8 @@
 ---
 title: "Workloads"
 weight: 50
+description: >
+  Information about Pods, the smallest deployable units in Kubernetes,
+  and about the abstractions that help you run them.
 ---
@@ -30,8 +30,8 @@ Below you'll find some methods for installing kubectl.
 {{< tabs name="kubectl_install" >}}
 {{< tab name="Ubuntu, Debian or HypriotOS" codelang="bash" >}}
 sudo apt-get update && sudo apt-get install -y apt-transport-https
-curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
+curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmour -o /usr/share/keyrings/kubernetes.gpg
+echo "deb [arch=amd64 signed-by=/usr/share/keyrings/kubernetes.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
 sudo apt-get update
 sudo apt-get install -y kubectl
 {{< /tab >}}
@@ -82,7 +82,7 @@ For external clients, automatic DNS expansion described is not currently possibl

 That way, your clients can always use the short form on the left, and always be automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes cluster federation.

-As further reading, a more elaborate example for users is available in the [Multi-Cluster Service DNS with ExternalDNS guide](https://github.com/kubernetes-sigs/federation-v2/blob/master/docs/servicedns-with-externaldns.md).
+As further reading, a more elaborate example for users is available in the [Multi-Cluster Service DNS with ExternalDNS guide](https://github.com/kubernetes-retired/kubefed/blob/dbcd4da3823a7ba8ac29e80c9d5b968868638d28/docs/servicedns-with-externaldns.md)

 # Try it yourself
 To get started with Federation v2, please refer to the [user guide](https://github.com/kubernetes-sigs/federation-v2/blob/master/docs/userguide.md). Deployment can be accomplished with a [Helm chart](https://github.com/kubernetes-sigs/kubefed/blob/master/charts/kubefed/README.md), and once the control plane is available, the [user guide’s example](https://github.com/kubernetes-sigs/federation-v2/blob/master/docs/userguide.md#example) can be used to get some hands-on experience with using Federation V2.
@@ -119,8 +119,8 @@ Here are some of the images we built:
 - `gcr.io/kubernetes-e2e-test-images/volume/iscsi:2.0`
 - `gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0`
 - `gcr.io/kubernetes-e2e-test-images/volume/rbd:1.0.1`
-- `k8s.gcr.io/etcd:3.3.15`
-- `k8s.gcr.io/pause:3.1`
+- `registry.k8s.io/etcd:3.3.15` (image changed since publication - previously used registry "k8s.gcr.io")
+- `registry.k8s.io/pause:3.1` (image changed since publication - previously used registry "k8s.gcr.io")

 Finally, we ran the tests and got the test results, including `e2e.log`, which showed that all test cases passed. Additionally, we submitted our test results to [k8s-conformance](https://github.com/cncf/k8s-conformance) as a [pull request](https://github.com/cncf/k8s-conformance/pull/779).
@@ -32,7 +32,7 @@ files side by side to the artifacts for verifying their integrity.
 [tarballs]: https://github.com/kubernetes/kubernetes/blob/release-1.26/CHANGELOG/CHANGELOG-1.26.md#downloads-for-v1260
 [binaries]: https://gcsweb.k8s.io/gcs/kubernetes-release/release/v1.26.0/bin
 [sboms]: https://dl.k8s.io/release/v1.26.0/kubernetes-release.spdx
-[provenance]: https://dl.k8s.io/kubernetes-release/release/v1.26.0/provenance.json
+[provenance]: https://dl.k8s.io/release/v1.26.0/provenance.json
 [cosign]: https://github.com/sigstore/cosign

 To verify an artifact, for example `kubectl`, you can download the
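A rough sketch of that verification flow (assuming the detached signature and certificate sit next to the binary under the same release path; exact `cosign` flags vary by version, and newer releases also require identity flags such as `--certificate-identity`):

```console
# fetch the binary plus its detached signature and signing certificate
curl -sSfLO "https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl"
curl -sSfLO "https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl.sig"
curl -sSfLO "https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl.cert"

# verify the download against its signature and certificate
cosign verify-blob kubectl --signature kubectl.sig --certificate kubectl.cert
```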
@@ -69,7 +69,7 @@ in `CredentialProviderResponse`. When the value is `Image`, the kubelet will onl
 match the image of the first request. When the value is `Registry`, the kubelet will use cached credentials for any subsequent image pulls
 destined for the same registry host but using different paths (for example, `gcr.io/foo/bar` and `gcr.io/bar/foo` refer to different images
 from the same registry). Lastly, when the value is `Global`, the kubelet will use returned credentials for all images that match against
-the plugin, including images that can map to different registry hosts (for example, gcr.io vs k8s.gcr.io). The `cacheKeyType` field is required by plugin
+the plugin, including images that can map to different registry hosts (for example, gcr.io vs registry.k8s.io (previously k8s.gcr.io)). The `cacheKeyType` field is required by plugin
 implementations.

 ```json
@@ -68,7 +68,7 @@ More details can be found in the KEP <https://kep.k8s.io/1040> and the pull requ
 ## Event triggered updates to container status

 `Evented PLEG` (PLEG is short for "Pod Lifecycle Event Generator") is set to be in beta for v1.27,
-Kubernetes offers two ways for the kubelet to detect Pod lifecycle events, such as a the last
+Kubernetes offers two ways for the kubelet to detect Pod lifecycle events, such as the last
 process in a container shutting down.
 In Kubernetes v1.27, the _event based_ mechanism has graduated to beta but remains
 disabled by default. If you do explicitly switch to event-based lifecycle change detection,
@@ -92,7 +92,7 @@ enabling this feature gate may affect the start-up speed of the pod if the pod s
 a large amount of memory.

 Kubelet configuration now includes `memoryThrottlingFactor`. This factor is multiplied by
-the memory limit or node allocatable memory to set the cgroupv2 memory.high value for enforcing
+the memory limit or node allocatable memory to set the cgroupv2 `memory.high` value for enforcing
 MemoryQoS. Decreasing this factor sets a lower high limit for container cgroups, increasing reclaim
 pressure. Increasing this factor will put less reclaim pressure. The default value is 0.8 initially
 and will change to 0.9 in Kubernetes v1.27. This parameter adjustment can reduce the potential
@@ -113,7 +113,7 @@ container startup by mounting volumes with the correct SELinux label instead of
 on the volumes recursively. Further details can be found in the KEP <https://kep.k8s.io/1710>.

 To identify the cause of slow pod startup, analyzing metrics and logs can be helpful. Other
-factorsthat may impact pod startup include container runtime, disk speed, CPU and memory
+factors that may impact pod startup include container runtime, disk speed, CPU and memory
 resources on the node.

 SIG Node is responsible for ensuring fast Pod startup times, while addressing issues in large
@@ -0,0 +1,282 @@
---
layout: blog
title: "Having fun with seccomp profiles on the edge"
date: 2023-05-18
slug: seccomp-profiles-edge
---

**Author**: Sascha Grunert

The [Security Profiles Operator (SPO)][spo] is a feature-rich
[operator][operator] for Kubernetes that makes managing seccomp, SELinux and
AppArmor profiles easier than ever. Recording those profiles from scratch is one
of the key features of this operator, which usually involves the integration
into large CI/CD systems. Being able to test the recording capabilities of the
operator in edge cases is one of the recent development efforts of the SPO and
makes it excitingly easy to play around with seccomp profiles.

[spo]: https://github.com/kubernetes-sigs/security-profiles-operator
[operator]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator

## Recording seccomp profiles with `spoc record`

The [v0.8.0][spo-latest] release of the Security Profiles Operator shipped a new
command line interface called `spoc`, a little helper tool for recording and
replaying seccomp profiles, among various other things that are out of scope of
this blog post.

[spo-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/v0.8.0

Recording a seccomp profile requires a binary to be executed, which can be a
simple golang application which just calls [`uname(2)`][uname]:

```go
package main

import (
	"syscall"
)

func main() {
	utsname := syscall.Utsname{}
	if err := syscall.Uname(&utsname); err != nil {
		panic(err)
	}
}
```

[uname]: https://man7.org/linux/man-pages/man2/uname.2.html

Building a binary from that code can be done by:

```console
> go build -o main main.go
> ldd ./main
	not a dynamic executable
```

Now it's possible to download the latest binary of [`spoc` from
GitHub][spoc-latest] and run the application on Linux with it:

[spoc-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/download/v0.8.0/spoc.amd64

```console
> sudo ./spoc record ./main
10:08:25.591945 Loading bpf module
10:08:25.591958 Using system btf file
libbpf: loading object 'recorder.bpf.o' from buffer
…
libbpf: prog 'sys_enter': relo #3: patched insn #22 (ALU/ALU64) imm 16 -> 16
10:08:25.610767 Getting bpf program sys_enter
10:08:25.610778 Attaching bpf tracepoint
10:08:25.611574 Getting syscalls map
10:08:25.611582 Getting pid_mntns map
10:08:25.613097 Module successfully loaded
10:08:25.613311 Processing events
10:08:25.613693 Running command with PID: 336007
10:08:25.613835 Received event: pid: 336007, mntns: 4026531841
10:08:25.613951 No container ID found for PID (pid=336007, mntns=4026531841, err=unable to find container ID in cgroup path)
10:08:25.614856 Processing recorded data
10:08:25.614975 Found process mntns 4026531841 in bpf map
10:08:25.615110 Got syscalls: read, close, mmap, rt_sigaction, rt_sigprocmask, madvise, nanosleep, clone, uname, sigaltstack, arch_prctl, gettid, futex, sched_getaffinity, exit_group, openat
10:08:25.615195 Adding base syscalls: access, brk, capget, capset, chdir, chmod, chown, close_range, dup2, dup3, epoll_create1, epoll_ctl, epoll_pwait, execve, faccessat2, fchdir, fchmodat, fchown, fchownat, fcntl, fstat, fstatfs, getdents64, getegid, geteuid, getgid, getpid, getppid, getuid, ioctl, keyctl, lseek, mkdirat, mknodat, mount, mprotect, munmap, newfstatat, openat2, pipe2, pivot_root, prctl, pread64, pselect6, readlink, readlinkat, rt_sigreturn, sched_yield, seccomp, set_robust_list, set_tid_address, setgid, setgroups, sethostname, setns, setresgid, setresuid, setsid, setuid, statfs, statx, symlinkat, tgkill, umask, umount2, unlinkat, unshare, write
10:08:25.616293 Wrote seccomp profile to: /tmp/profile.yaml
10:08:25.616298 Unloading bpf module
```

I have to execute `spoc` as root because it will internally run an [ebpf][ebpf]
program by reusing the same code parts from the Security Profiles Operator
itself. I can see that the bpf module got loaded successfully and `spoc`
attached the required tracepoint to it. Then it will track the main application
by using its [mount namespace][mntns] and process the recorded syscall data. The
nature of ebpf programs is that they see the whole context of the Kernel, which
means that `spoc` tracks all syscalls of the system, but does not interfere with
their execution.

[ebpf]: https://ebpf.io
[mntns]: https://man7.org/linux/man-pages/man7/mount_namespaces.7.html

The logs indicate that `spoc` found the syscalls `read`, `close`,
`mmap` and so on, including `uname`. All syscalls other than `uname` come
from the golang runtime and its garbage collection, which already adds overhead
to a basic application like the one in our demo. I can also see from the log line
`Adding base syscalls: …` that `spoc` adds a bunch of base syscalls to the
resulting profile. Those are used by the OCI runtime (like [runc][runc] or
[crun][crun]) in order to be able to run a container. This means that `spoc`
can be used to record seccomp profiles which can then be used in containers
directly. This behavior can be disabled in `spoc` by using the
`--no-base-syscalls`/`-n` flag, or customized via the `--base-syscalls`/`-b`
command line flag. This can be helpful in cases where OCI runtimes other than
crun and runc are used, or if I just want to record the seccomp profile for the
application and stack it with another [base profile][base].

[runc]: https://github.com/opencontainers/runc
[crun]: https://github.com/containers/crun
[base]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/35ebdda/installation-usage.md#base-syscalls-for-a-container-runtime
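As a minimal sketch (reusing the `./main` binary from above), recording without the runtime base syscalls only needs that flag; the resulting profile then contains just what the application itself invoked:

```console
> sudo ./spoc record --no-base-syscalls ./main
```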
The resulting profile is now available in `/tmp/profile.yaml`, but the default
location can be changed using the `--output-file value`/`-o` flag:

```console
> cat /tmp/profile.yaml
```

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  creationTimestamp: null
  name: main
spec:
  architectures:
  - SCMP_ARCH_X86_64
  defaultAction: SCMP_ACT_ERRNO
  syscalls:
  - action: SCMP_ACT_ALLOW
    names:
    - access
    - arch_prctl
    - brk
    - …
    - uname
    - …
status: {}
```

The seccomp profile Custom Resource Definition (CRD) can be directly used
together with the Security Profiles Operator for managing it within Kubernetes.
`spoc` is also capable of producing raw seccomp profiles (as JSON), by using the
`--type`/`-t` `raw-seccomp` flag:

```console
> sudo ./spoc record --type raw-seccomp ./main
…
52.628827 Wrote seccomp profile to: /tmp/profile.json
```

```console
> jq . /tmp/profile.json
```

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["access", "…", "write"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

The utility `spoc record` allows us to record complex seccomp profiles directly
from binary invocations on any Linux system which is capable of running the ebpf
code within the Kernel. But it can do more: how about modifying the seccomp
profile and then testing it by using `spoc run`?

## Running seccomp profiles with `spoc run`

`spoc` is also able to run binaries with applied seccomp profiles, making it
easy to test any modification to a profile. To do that, just run:

```console
> sudo ./spoc run ./main
10:29:58.153263 Reading file /tmp/profile.yaml
10:29:58.153311 Assuming YAML profile
10:29:58.154138 Setting up seccomp
10:29:58.154178 Load seccomp profile
10:29:58.154189 Starting audit log enricher
10:29:58.154224 Enricher reading from file /var/log/audit/audit.log
10:29:58.155356 Running command with PID: 437880
>
```

It looks like the application exited successfully, which is anticipated
because I did not modify the previously recorded profile yet. I can also
specify a custom location for the profile by using the `--profile`/`-p` flag,
but this was not necessary because I did not modify the default output location
from the record. `spoc` will automatically determine if it's a raw (JSON) or CRD
(YAML) based seccomp profile and then apply it to the process.
The Security Profiles Operator supports a [log enricher feature][enricher],
which provides additional seccomp related information by parsing the audit logs.
`spoc run` uses the enricher in the same way to provide more data to the end
users when it comes to debugging seccomp profiles.

[enricher]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/35ebdda/installation-usage.md#using-the-log-enricher

Now I have to modify the profile to see anything valuable in the output. For
example, I could remove the allowed `uname` syscall:

```console
> jq 'del(.syscalls[0].names[] | select(. == "uname"))' /tmp/profile.json > /tmp/no-uname-profile.json
```

And then try to run it again with the new profile `/tmp/no-uname-profile.json`:

```console
> sudo ./spoc run -p /tmp/no-uname-profile.json ./main
10:39:12.707798 Reading file /tmp/no-uname-profile.json
10:39:12.707892 Setting up seccomp
10:39:12.707920 Load seccomp profile
10:39:12.707982 Starting audit log enricher
10:39:12.707998 Enricher reading from file /var/log/audit/audit.log
10:39:12.709164 Running command with PID: 480512
panic: operation not permitted

goroutine 1 [running]:
main.main()
	/path/to/main.go:10 +0x85
10:39:12.713035 Unable to run: launch runner: wait for command: exit status 2
```

Alright, that was expected! The applied seccomp profile blocks the `uname`
syscall, which results in an "operation not permitted" error. This error is
pretty generic and does not provide any hint on what got blocked by seccomp.
It is generally extremely difficult to predict how applications behave if single
syscalls are forbidden by seccomp. It could be possible that the application
terminates like in our simple demo, but it could also lead to strange
misbehavior where the application does not stop at all.

If I now change the default seccomp action of the profile from `SCMP_ACT_ERRNO`
to `SCMP_ACT_LOG` like this:

```console
> jq '.defaultAction = "SCMP_ACT_LOG"' /tmp/no-uname-profile.json > /tmp/no-uname-profile-log.json
```

Then the log enricher will give us a hint that the `uname` syscall got blocked
when using `spoc run`:

```console
> sudo ./spoc run -p /tmp/no-uname-profile-log.json ./main
10:48:07.470126 Reading file /tmp/no-uname-profile-log.json
10:48:07.470234 Setting up seccomp
10:48:07.470245 Load seccomp profile
10:48:07.470302 Starting audit log enricher
10:48:07.470339 Enricher reading from file /var/log/audit/audit.log
10:48:07.470889 Running command with PID: 522268
10:48:07.472007 Seccomp: uname (63)
```

The application will not terminate anymore, but seccomp will log the behavior
to `/var/log/audit/audit.log` and `spoc` will parse the data to correlate it
directly to our program. Generating the log messages to the audit subsystem
comes with a large performance overhead and should be handled with care in
production systems. It also comes with a security risk when running untrusted
apps in audit mode in production environments.

This demo should give you an impression of how to debug seccomp profile issues
with applications, probably by using our shiny new helper tool powered by the
features of the Security Profiles Operator. `spoc` is a flexible and portable
binary suitable for edge cases where resources are limited and even Kubernetes
itself may not be available with its full capabilities.

Thank you for reading this blog post! If you're interested in more, providing
feedback or asking for help, then feel free to get in touch with us directly via
[Slack (#security-profiles-operator)][slack] or the [mailing list][mail].

[slack]: https://kubernetes.slack.com/messages/security-profiles-operator
[mail]: https://groups.google.com/forum/#!forum/kubernetes-dev
@@ -0,0 +1,206 @@
---
layout: blog
title: "Using OCI artifacts to distribute security profiles for seccomp, SELinux and AppArmor"
date: 2023-05-24
slug: oci-security-profiles
---

**Author**: Sascha Grunert

The [Security Profiles Operator (SPO)][spo] makes managing seccomp, SELinux and
AppArmor profiles within Kubernetes easier than ever. It allows cluster
administrators to define the profiles in a predefined custom resource YAML,
which then gets distributed by the SPO into the whole cluster. Modification and
removal of the security profiles are managed by the operator in the same way,
but that’s a small subset of its capabilities.

[spo]: https://github.com/kubernetes-sigs/security-profiles-operator

Another core feature of the SPO is being able to stack seccomp profiles. This
means that users can define a `baseProfileName` in the YAML specification, which
then gets automatically resolved by the operator and combines the syscall rules.
If a base profile has another `baseProfileName`, then the operator will
recursively resolve the profiles up to a certain depth. A common use case is to
define base profiles for low level container runtimes (like [runc][runc] or
[crun][crun]) which then contain syscalls which are required in any case to run
the container. Alternatively, application developers can define seccomp base
profiles for their standard distribution containers and stack dedicated profiles
for the application logic on top. This way developers can focus on maintaining
seccomp profiles which are way simpler and scoped to the application logic,
without having a need to take the whole infrastructure setup into account.

[runc]: https://github.com/opencontainers/runc
[crun]: https://github.com/containers/crun

But how to maintain those base profiles? For example, the amount of required
syscalls for a runtime can change over its release cycle in the same way it can
change for the main application. Base profiles have to be available in the same
cluster, otherwise the main seccomp profile will fail to deploy. This means that
they’re tightly coupled to the main application profiles, which acts against the
main idea of base profiles. Distributing and managing them as plain files feels
like an additional burden to solve.

## OCI artifacts to the rescue

The [v0.8.0][spo-latest] release of the Security Profiles Operator supports
managing base profiles as OCI artifacts! Imagine OCI artifacts as lightweight
container images, storing files in layers in the same way images do, but without
a process to be executed. Those artifacts can be used to store security profiles
like regular container images in compatible registries. This means they can be
versioned, namespaced and annotated similar to regular container images.

[spo-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/v0.8.0

To see how that works in action, specify a `baseProfileName` prefixed with
`oci://` within a seccomp profile CRD, for example:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: test
spec:
  defaultAction: SCMP_ACT_ERRNO
  baseProfileName: oci://ghcr.io/security-profiles/runc:v1.1.5
  syscalls:
  - action: SCMP_ACT_ALLOW
    names:
    - uname
```

The operator will take care of pulling the content by using [oras][oras], as
well as verifying the [sigstore (cosign)][cosign] signatures of the artifact. If
the artifacts are not signed, then the SPO will reject them. The resulting
profile `test` will then contain all base syscalls from the remote `runc`
profile plus the additional allowed `uname` one. It is also possible to
reference the base profile by its digest (SHA256), which pins the exact artifact
to be pulled, for example by referencing
`oci://ghcr.io/security-profiles/runc@sha256:380…`.

[oras]: https://oras.land
[cosign]: https://github.com/sigstore/cosign

The operator internally caches pulled artifacts for up to 24 hours for up to
1000 profiles, meaning that they will be refreshed after that time period, when
the cache is full, or when the operator daemon gets restarted.

Because the overall resulting syscalls are hidden from the user (I only have the
`baseProfileName` listed in the SeccompProfile, and not the syscalls themselves), I'll additionally
annotate that SeccompProfile with the final `syscalls`.

Here's how the SeccompProfile looks after I annotate it:

```console
> kubectl describe seccompprofile test
Name:         test
Namespace:    security-profiles-operator
Labels:       spo.x-k8s.io/profile-id=SeccompProfile-test
Annotations:  syscalls:
                [{"names":["arch_prctl","brk","capget","capset","chdir","clone","close",...
API Version:  security-profiles-operator.x-k8s.io/v1beta1
```

The SPO maintainers provide all public base profiles as part of the [“Security
Profiles” GitHub organization][org].

[org]: https://github.com/orgs/security-profiles/packages

## Managing OCI security profiles

Alright, now the official SPO provides a bunch of base profiles, but how can I
define my own? Well, first of all we have to choose a working registry. There
are a bunch of registries that already support OCI artifacts:

- [CNCF Distribution](https://github.com/distribution/distribution)
- [Azure Container Registry](https://aka.ms/acr)
- [Amazon Elastic Container Registry](https://aws.amazon.com/ecr)
- [Google Artifact Registry](https://cloud.google.com/artifact-registry)
- [GitHub Packages container registry](https://docs.github.com/en/packages/guides/about-github-container-registry)
- [Bundle Bar](https://bundle.bar/docs/supported-clients/oras)
- [Docker Hub](https://hub.docker.com)
- [Zot Registry](https://zotregistry.io)

The Security Profiles Operator ships a new command line interface called `spoc`,
which is a little helper tool for managing OCI profiles, among various other
things that are out of scope of this blog post. But the command `spoc push`
can be used to push a security profile to a registry:

```console
> export USERNAME=my-user
> export PASSWORD=my-pass
> spoc push -f ./examples/baseprofile-crun.yaml ghcr.io/security-profiles/crun:v1.8.3
16:35:43.899886 Pushing profile ./examples/baseprofile-crun.yaml to: ghcr.io/security-profiles/crun:v1.8.3
16:35:43.899939 Creating file store in: /tmp/push-3618165827
16:35:43.899947 Adding profile to store: ./examples/baseprofile-crun.yaml
16:35:43.900061 Packing files
16:35:43.900282 Verifying reference: ghcr.io/security-profiles/crun:v1.8.3
16:35:43.900310 Using tag: v1.8.3
16:35:43.900313 Creating repository for ghcr.io/security-profiles/crun
16:35:43.900319 Using username and password
16:35:43.900321 Copying profile to repository
16:35:46.976108 Signing container image
Generating ephemeral keys...
Retrieving signed certificate...

Note that there may be personally identifiable information associated with this signed artifact.
This may include the email address associated with the account with which you authenticate.
This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.

By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
Your browser will now be opened to:
https://oauth2.sigstore.dev/auth/auth?access_type=…
Successfully verified SCT...
tlog entry created with index: 16520520
Pushing signature to: ghcr.io/security-profiles/crun
```

You can see that the tool automatically signs the artifact and pushes the
`./examples/baseprofile-crun.yaml` to the registry, which is then directly ready
for usage within the SPO. If username and password authentication is required,
either use the `--username`/`-u` flag or export the `USERNAME` environment
variable. To set the password, export the `PASSWORD` environment variable.

It is possible to add custom annotations to the security profile by using the
`--annotations`/`-a` flag multiple times in `KEY:VALUE` format. Those have no
effect for now, but at some later point additional features of the operator may
rely on them.
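For instance (the annotation key and value here are invented for illustration), an annotated push could look like:

```console
> spoc push -a purpose:testing -f ./examples/baseprofile-crun.yaml ghcr.io/security-profiles/crun:v1.8.3
```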
The `spoc` client is also able to pull security profiles from OCI artifact
compatible registries. To do that, just run `spoc pull`:

```console
> spoc pull ghcr.io/security-profiles/runc:v1.1.5
16:32:29.795597 Pulling profile from: ghcr.io/security-profiles/runc:v1.1.5
16:32:29.795610 Verifying signature

Verification for ghcr.io/security-profiles/runc:v1.1.5 --
The following checks were performed on each of these signatures:
  - Existence of the claims in the transparency log was verified offline
  - The code-signing certificate was verified using trusted certificate authority certificates

[{"critical":{"identity":{"docker-reference":"ghcr.io/security-profiles/runc"},…}}]
16:32:33.208695 Creating file store in: /tmp/pull-3199397214
16:32:33.208713 Verifying reference: ghcr.io/security-profiles/runc:v1.1.5
16:32:33.208718 Creating repository for ghcr.io/security-profiles/runc
16:32:33.208742 Using tag: v1.1.5
16:32:33.208743 Copying profile from repository
16:32:34.119652 Reading profile
16:32:34.119677 Trying to unmarshal seccomp profile
16:32:34.120114 Got SeccompProfile: runc-v1.1.5
16:32:34.120119 Saving profile in: /tmp/profile.yaml
```

The profile can now be found in `/tmp/profile.yaml`, or in the output file
specified via `--output-file`/`-o`. A username and password can be specified in
the same way as for `spoc push`.
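A hedged example combining both flags (the output path is chosen arbitrarily):

```console
> spoc pull -u my-user -o ./runc-profile.yaml ghcr.io/security-profiles/runc:v1.1.5
```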
`spoc` makes it easy to manage security profiles as OCI artifacts, which can
then be consumed directly by the operator itself.

That was our compact journey through the latest possibilities of the Security
Profiles Operator! If you're interested in more, providing feedback or asking
for help, then feel free to get in touch with us directly via [Slack
(#security-profiles-operator)][slack] or [the mailing list][mail].

[slack]: https://kubernetes.slack.com/messages/security-profiles-operator
[mail]: https://groups.google.com/forum/#!forum/kubernetes-dev
@@ -590,7 +590,7 @@ VolumeAttachments will not be deleted from the original shutdown node so the vol
 used by these pods cannot be attached to a new running node. As a result, the
 application running on the StatefulSet cannot function properly. If the original
 shutdown node comes up, the pods will be deleted by kubelet and new pods will be
-created on a different running node. If the original shutdown node does not come up,
+created on a different running node. If the original shutdown node does not come up,
 these pods will be stuck in terminating status on the shutdown node forever.

 To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
@@ -458,9 +458,22 @@ common use cases and suggested solutions.

 If you need access to multiple registries, you can create one secret for each registry.

+## Legacy built-in kubelet credential provider
+
+In older versions of Kubernetes, the kubelet had a direct integration with cloud provider credentials.
+This gave it the ability to dynamically fetch credentials for image registries.
+
+There were three built-in implementations of the kubelet credential provider integration:
+ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry).
+
+For more information on the legacy mechanism, read the documentation for the version of Kubernetes that you
+are using. Kubernetes v1.26 through to v{{< skew latestVersion >}} do not include the legacy mechanism, so
+you would need to either:
+- configure a kubelet image credential provider on each node
+- specify image pull credentials using `imagePullSecrets` and at least one Secret (see the sketch below)
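As a hedged sketch of that second option (registry host, username, and password are placeholders), such a Secret can be created imperatively:

```console
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=my-user \
  --docker-password='my-password'
```

A Pod then lists `my-registry-secret` under its `spec.imagePullSecrets` to use it.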
 ## {{% heading "whatsnext" %}}

 * Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md).
 * Learn about [container image garbage collection](/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection).
 * Learn more about [pulling an Image from a Private Registry](/docs/tasks/configure-pod-container/pull-image-private-registry).
@@ -120,6 +120,31 @@ satisfy the StatefulSet specification.
 Different kinds of object can also have different `.status`; again, the API reference pages
 detail the structure of that `.status` field, and its content for each different type of object.

+## Server side field validation
+
+Starting with Kubernetes v1.25, the API server offers server side
+[field validation](/docs/reference/using-api/api-concepts/#field-validation)
+that detects unrecognized or duplicate fields in an object. It provides all the functionality
+of `kubectl --validate` on the server side.
+
+The `kubectl` tool uses the `--validate` flag to set the level of field validation. It accepts the
+values `ignore`, `warn`, and `strict` while also accepting the values `true` (equivalent to `strict`)
+and `false` (equivalent to `ignore`). The default validation setting for `kubectl` is `--validate=true`.
+
+`Strict`
+: Strict field validation, errors on validation failure
+
+`Warn`
+: Field validation is performed, but errors are exposed as warnings rather than failing the request
+
+`Ignore`
+: No server side field validation is performed
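For example (the manifest name is hypothetical), strict server side validation can be requested explicitly:

```console
kubectl apply --validate=strict -f my-app.yaml
```

With `--validate=warn`, the same request would instead succeed while reporting unknown or duplicate fields as warnings.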
+
+When `kubectl` cannot connect to an API server that supports field validation it will fall back
+to using client-side validation. Kubernetes 1.27 and later versions always offer field validation;
+older Kubernetes releases might not. If your cluster is older than v1.27, check the documentation
+for your version of Kubernetes.
+
 ## {{% heading "whatsnext" %}}

 If you're new to Kubernetes, read more about the following:
@@ -247,7 +247,7 @@ The set of pods that a `service` targets is defined with a label selector.
 Similarly, the population of pods that a `replicationcontroller` should
 manage is also defined with a label selector.

-Labels selectors for both objects are defined in `json` or `yaml` files using maps,
+Label selectors for both objects are defined in `json` or `yaml` files using maps,
 and only _equality-based_ requirement selectors are supported:

 ```json
@@ -135,6 +135,9 @@ You can use the `operator` field to specify a logical operator for Kubernetes to
 interpreting the rules. You can use `In`, `NotIn`, `Exists`, `DoesNotExist`,
 `Gt` and `Lt`.

+Read [Operators](#operators)
+to learn more about how these work.
+
 `NotIn` and `DoesNotExist` allow you to define node anti-affinity behavior.
 Alternatively, you can use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/)
 to repel Pods from specific nodes.
@@ -310,6 +313,9 @@ refer to the [design proposal](https://git.k8s.io/design-proposals-archive/sched
 You can use the `In`, `NotIn`, `Exists` and `DoesNotExist` values in the
 `operator` field for Pod affinity and anti-affinity.

+Read [Operators](#operators)
+to learn more about how these work.
+
 In principle, the `topologyKey` can be any allowed label key with the following
 exceptions for performance and security reasons:
@@ -492,6 +498,31 @@ overall utilization.
 Read [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
 to learn more about how these work.

+## Operators
+
+The following are all the logical operators that you can use in the `operator` field for `nodeAffinity` and `podAffinity` mentioned above.
+
+| Operator | Behavior |
+| :------------: | :-------------: |
+| `In` | The label value is present in the supplied set of strings |
+| `NotIn` | The label value is not contained in the supplied set of strings |
+| `Exists` | A label with this key exists on the object |
+| `DoesNotExist` | No label with this key exists on the object |
+
+The following operators can only be used with `nodeAffinity`.
+
+| Operator | Behavior |
+| :------------: | :-------------: |
+| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
+| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
+
+{{<note>}}
+`Gt` and `Lt` operators will not work with non-integer values. If the given value
+doesn't parse as an integer, the pod will fail to get scheduled. Also, `Gt` and `Lt`
+are not available for `podAffinity`.
+{{</note>}}
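As a small illustration of that constraint (the node name and label key are hypothetical), a label that `Gt`/`Lt` can evaluate must carry an integer-parsable value, for example one set with:

```console
kubectl label nodes worker-1 example.com/cpu-count=8
```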
 ## {{% heading "whatsnext" %}}

 - Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
@@ -41,6 +41,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
 * [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md) is an [Easegress](https://megaease.com/easegress/) based API gateway that can run as an ingress controller.
 * F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
   lets you use an Ingress to configure F5 BIG-IP virtual servers.
+* [FortiADC Ingress Controller](https://docs.fortinet.com/document/fortiadc/7.0.0/fortiadc-ingress-controller-1-0/742835/fortiadc-ingress-controller-overview) supports the Kubernetes Ingress resources and allows you to manage FortiADC objects from Kubernetes.
 * [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),
   which offers API gateway functionality.
 * [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
@@ -27,12 +27,6 @@ Familiarity with [Pods](/docs/concepts/workloads/pods/) is suggested.

 ## Background

-Docker has a concept of
-[volumes](https://docs.docker.com/storage/), though it is
-somewhat looser and less managed. A Docker volume is a directory on
-disk or in another container. Docker provides volume
-drivers, but the functionality is somewhat limited.
-
 Kubernetes supports many types of volumes. A {{< glossary_tooltip term_id="pod" text="Pod" >}}
 can use any number of volume types simultaneously.
 [Ephemeral volume](/docs/concepts/storage/ephemeral-volumes/) types have a lifetime of a pod,
@@ -295,13 +289,17 @@ Note that this path is derived from the volume's `mountPath` and the `path`
keyed with `log_level`.

{{< note >}}
* You must create a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)
  before you can use it.

* A ConfigMap is always mounted as `readOnly`.

* A container using a ConfigMap as a [`subPath`](#using-subpath) volume mount will not
  receive ConfigMap updates.

* Text data is exposed as files using the UTF-8 character encoding. For other character encodings, use `binaryData`.
{{< /note >}}
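For example, a ConfigMap matching the `log_level` example above could be created imperatively first (the value `INFO` is illustrative):

```console
kubectl create configmap log-config --from-literal=log_level=INFO
```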
### downwardAPI {#downwardapi}
@@ -930,12 +928,14 @@ backed by tmpfs (a RAM-backed filesystem) so they are never written to
 non-volatile storage.

 {{< note >}}
-You must create a Secret in the Kubernetes API before you can use it.
-{{< /note >}}
-
-{{< note >}}
-A container using a Secret as a [`subPath`](#using-subpath) volume mount will not
+* You must create a Secret in the Kubernetes API before you can use it.
+
+* A Secret is always mounted as `readOnly`.
+
+* A container using a Secret as a [`subPath`](#using-subpath) volume mount will not
   receive Secret updates.
-
 {{< /note >}}

 For more details, see [Configuring Secrets](/docs/concepts/configuration/secret/).
@@ -1143,9 +1143,8 @@ persistent volume:
   The value is passed as `volume_id` on all calls to the CSI volume driver when
   referencing the volume.
 * `readOnly`: An optional boolean value indicating whether the volume is to be
-  "ControllerPublished" (attached) as read only. Default is false. This value is
-  passed to the CSI driver via the `readonly` field in the
-  `ControllerPublishVolumeRequest`.
+  "ControllerPublished" (attached) as read only. Default is false. This value is passed
+  to the CSI driver via the `readonly` field in the `ControllerPublishVolumeRequest`.
 * `fsType`: If the PV's `VolumeMode` is `Filesystem` then this field may be used
   to specify the filesystem that should be used to mount the volume. If the
   volume has not been formatted and formatting is supported, this value will be
@@ -242,76 +242,76 @@ Here are values used for each Windows Server version:
A cluster administrator can create a `RuntimeClass` object which is used to encapsulate these taints and tolerations.

1. Save this file to `runtimeClasses.yml`. It includes the appropriate `nodeSelector`
   for the Windows OS, architecture, and version.

   ```yaml
   ---
   apiVersion: node.k8s.io/v1
   kind: RuntimeClass
   metadata:
     name: windows-2019
   handler: example-container-runtime-handler
   scheduling:
     nodeSelector:
       kubernetes.io/os: 'windows'
       kubernetes.io/arch: 'amd64'
       node.kubernetes.io/windows-build: '10.0.17763'
     tolerations:
     - effect: NoSchedule
       key: os
       operator: Equal
       value: "windows"
   ```

1. Run `kubectl create -f runtimeClasses.yml` as a cluster administrator
1. Add `runtimeClassName: windows-2019` as appropriate to Pod specs

   For example:

   ```yaml
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: iis-2019
     labels:
       app: iis-2019
   spec:
     replicas: 1
     template:
       metadata:
         name: iis-2019
         labels:
           app: iis-2019
       spec:
         runtimeClassName: windows-2019
         containers:
         - name: iis
           image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
           resources:
             limits:
               cpu: 1
               memory: 800Mi
             requests:
               cpu: .1
               memory: 300Mi
           ports:
           - containerPort: 80
     selector:
       matchLabels:
         app: iis-2019
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: iis
   spec:
     type: LoadBalancer
     ports:
     - protocol: TCP
       port: 80
     selector:
       app: iis-2019
   ```

[RuntimeClass]: /docs/concepts/containers/runtime-class/
@@ -1234,11 +1234,9 @@ it is created.

 ## {{% heading "whatsnext" %}}

-* Learn about [Pods](/docs/concepts/workloads/pods).
-* [Run a Stateless Application Using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
-* `Deployment` is a top-level resource in the Kubernetes REST API.
-  Read the {{< api-reference page="workload-resources/deployment-v1" >}}
-  object definition to understand the API for deployments.
+* Learn more about [Pods](/docs/concepts/workloads/pods).
+* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
+* Read the {{< api-reference page="workload-resources/deployment-v1" >}} to understand the Deployment API.
 * Read about [PodDisruptionBudget](/docs/concepts/workloads/pods/disruptions/) and how
   you can use it to manage application availability during disruptions.
-
 * Use kubectl to [create a Deployment](/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/).
@@ -2,7 +2,6 @@
reviewers:
- erictune
- lavalamp
- ericchiang
- deads2k
- liggitt
title: Authenticating
@@ -3,7 +3,6 @@ reviewers:
- timstclair
- deads2k
- liggitt
- ericchiang
title: Using Node Authorization
content_type: concept
weight: 90
@@ -804,7 +804,7 @@ In the following table:
  `attach` and `port-forward` requests.

- `SupportIPVSProxyMode`: Enable providing in-cluster service load balancing using IPVS.
  See [service proxies](/docs/reference/networking/virtual-ips/) for more details.

- `SupportNodePidsLimit`: Enable the support for limiting PIDs on the Node. The parameter
  `pid=<number>` in the `--system-reserved` and `--kube-reserved` options can be specified to
@@ -72,14 +72,14 @@ It is suitable for correlating log entries between the webhook and apiserver, fo
</td>
</tr>
<tr><td><code>kind</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionKind"><code>meta/v1.GroupVersionKind</code></a>
</td>
<td>
<p>Kind is the fully-qualified type of object being submitted (for example, v1.Pod or autoscaling.v1.Scale)</p>
</td>
</tr>
<tr><td><code>resource</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionResource"><code>meta/v1.GroupVersionResource</code></a>
</td>
<td>
<p>Resource is the fully-qualified resource being requested (for example, v1.pods)</p>
@@ -93,7 +93,7 @@ It is suitable for correlating log entries between the webhook and apiserver, fo
</td>
</tr>
<tr><td><code>requestKind</code><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionKind"><code>meta/v1.GroupVersionKind</code></a>
</td>
<td>
<p>RequestKind is the fully-qualified type of the original API request (for example, v1.Pod or autoscaling.v1.Scale).
@@ -107,7 +107,7 @@ and <code>requestKind: {group:"apps", version:"v1beta1", kin
</td>
</tr>
<tr><td><code>requestResource</code><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupVersionResource"><code>meta/v1.GroupVersionResource</code></a>
</td>
<td>
<p>RequestResource is the fully-qualified resource of the original API request (for example, v1.pods).
@@ -153,7 +153,7 @@ requested. e.g. a patch can result in either a CREATE or UPDATE Operation.</p>
</td>
</tr>
<tr><td><code>userInfo</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication-k8s-io"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
<p>UserInfo is information about the requesting user</p>
@@ -1,7 +1,7 @@
---
title: kube-controller-manager Configuration (v1alpha1)
content_type: tool-reference
package: controllermanager.config.k8s.io/v1alpha1
auto_generated: true
---
@@ -9,310 +9,9 @@ auto_generated: true
## Resource Types

- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
@@ -1879,4 +1578,305 @@ volume plugin should search for additional third party volume plugins</p>
</tr>
</tbody>
</table>

## `NodeControllerConfiguration` {#NodeControllerConfiguration}

**Appears in:**

- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)

<p>NodeControllerConfiguration contains elements describing NodeController.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>ConcurrentNodeSyncs</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
<p>ConcurrentNodeSyncs is the number of workers
concurrently synchronizing nodes</p>
</td>
</tr>
</tbody>
</table>

## `ServiceControllerConfiguration` {#ServiceControllerConfiguration}

**Appears in:**

- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)

<p>ServiceControllerConfiguration contains elements describing ServiceController.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>ConcurrentServiceSyncs</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
<p>concurrentServiceSyncs is the number of services that are
allowed to sync concurrently. Larger number = more responsive service
management, but more CPU (and network) load.</p>
</td>
</tr>
</tbody>
</table>

## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration}

<p>CloudControllerManagerConfiguration contains elements describing cloud-controller manager.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>apiVersion</code><br/>string</td><td><code>cloudcontrollermanager.config.k8s.io/v1alpha1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>CloudControllerManagerConfiguration</code></td></tr>

<tr><td><code>Generic</code> <B>[Required]</B><br/>
<a href="#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration"><code>GenericControllerManagerConfiguration</code></a>
</td>
<td>
<p>Generic holds configuration for a generic controller-manager</p>
</td>
</tr>
<tr><td><code>KubeCloudShared</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration"><code>KubeCloudSharedConfiguration</code></a>
</td>
<td>
<p>KubeCloudSharedConfiguration holds configuration for shared related features
both in cloud controller manager and kube-controller manager.</p>
</td>
</tr>
<tr><td><code>NodeController</code> <B>[Required]</B><br/>
<a href="#NodeControllerConfiguration"><code>NodeControllerConfiguration</code></a>
</td>
<td>
<p>NodeController holds configuration for node controller
related features.</p>
</td>
</tr>
<tr><td><code>ServiceController</code> <B>[Required]</B><br/>
<a href="#ServiceControllerConfiguration"><code>ServiceControllerConfiguration</code></a>
</td>
<td>
<p>ServiceControllerConfiguration holds configuration for ServiceController
related features.</p>
</td>
</tr>
<tr><td><code>NodeStatusUpdateFrequency</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status</p>
</td>
</tr>
<tr><td><code>Webhook</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration"><code>WebhookConfiguration</code></a>
</td>
<td>
<p>Webhook is the configuration for cloud-controller-manager hosted webhooks</p>
</td>
</tr>
</tbody>
</table>

## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration}

**Appears in:**

- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration)

<p>CloudProviderConfiguration contains basic elements about the cloud provider.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>Name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>Name is the provider for cloud services.</p>
</td>
</tr>
<tr><td><code>CloudConfigFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>cloudConfigFile is the path to the cloud provider configuration file.</p>
</td>
</tr>
</tbody>
</table>

## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration}

**Appears in:**

- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)

<p>KubeCloudSharedConfiguration contains elements shared by both kube-controller manager
and cloud-controller manager, but not genericconfig.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>CloudProvider</code> <B>[Required]</B><br/>
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration"><code>CloudProviderConfiguration</code></a>
</td>
<td>
<p>CloudProviderConfiguration holds configuration for CloudProvider related features.</p>
</td>
</tr>
<tr><td><code>ExternalCloudVolumePlugin</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external".
It is currently used by the in-repo cloud providers to handle node and volume control in the KCM.</p>
</td>
</tr>
<tr><td><code>UseServiceAccountCredentials</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>useServiceAccountCredentials indicates whether controllers should be run with
individual service account credentials.</p>
</td>
</tr>
<tr><td><code>AllowUntaggedCloud</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>run with untagged cloud instances</p>
</td>
</tr>
<tr><td><code>RouteReconciliationPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>routeReconciliationPeriod is the period for reconciling routes created for Nodes by the cloud provider.</p>
</td>
</tr>
<tr><td><code>NodeMonitorPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.</p>
</td>
</tr>
<tr><td><code>ClusterName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>clusterName is the instance prefix for the cluster.</p>
</td>
</tr>
<tr><td><code>ClusterCIDR</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>clusterCIDR is the CIDR range for Pods in the cluster.</p>
</td>
</tr>
<tr><td><code>AllocateNodeCIDRs</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if
ConfigureCloudRoutes is true, to be set on the cloud provider.</p>
</td>
</tr>
<tr><td><code>CIDRAllocatorType</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>CIDRAllocatorType determines what kind of pod CIDR allocator will be used.</p>
</td>
</tr>
<tr><td><code>ConfigureCloudRoutes</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
<p>configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs
to be configured on the cloud provider.</p>
</td>
</tr>
<tr><td><code>NodeSyncPeriod</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>nodeSyncPeriod is the period for syncing nodes from the cloud provider. Longer
periods will result in fewer calls to the cloud provider, but may delay addition
of new nodes to the cluster.</p>
</td>
</tr>
</tbody>
</table>

## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration}

**Appears in:**

- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)

<p>WebhookConfiguration contains configuration related to
cloud-controller-manager hosted webhooks</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>Webhooks</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
<p>Webhooks is the list of webhooks to enable or disable:
'*' means "all enabled-by-default webhooks",
'foo' means "enable 'foo'",
'-foo' means "disable 'foo'";
the first item for a particular name wins.</p>
</td>
</tr>
</tbody>
</table>
@@ -273,6 +273,7 @@ kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl

kubectl label pods my-pod new-label=awesome                  # Add a Label
kubectl label pods my-pod new-label-                         # Remove a label
kubectl label pods my-pod new-label=new-value --overwrite    # Overwrite an existing value
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq   # Add an annotation
kubectl annotate pods my-pod icon-url-                       # Remove annotation
kubectl autoscale deployment foo --min=2 --max=10            # Auto scale a deployment "foo"
@@ -268,6 +268,24 @@ The annotation `kubernetes.io/limit-ranger` records that resource defaults were
and they were applied successfully.
For more details, read about [LimitRanges](/docs/concepts/policy/limit-range).

### addonmanager.kubernetes.io/mode

Example: `addonmanager.kubernetes.io/mode: "Reconcile"`

Used on: All objects

To specify how an add-on should be managed, you can use the `addonmanager.kubernetes.io/mode` label.
This label can have one of three values: `Reconcile`, `EnsureExists`, or `Ignore`.

- `Reconcile`: Addon resources will be periodically reconciled with the expected state. If there are any differences,
  the add-on manager will recreate, reconfigure or delete the resources as needed. This is the default mode if no label is specified.
- `EnsureExists`: Addon resources will be checked for existence only but will not be modified after creation.
  The add-on manager will create or re-create the resources when there is no instance of the resource with that name.
- `Ignore`: Addon resources will be ignored. This mode is useful for add-ons that are not compatible with
  the add-on manager or that are managed by another controller.

For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md).
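As an illustration, here is a sketch of a manifest carrying this label so that the add-on manager keeps reconciling it; the object name and data are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon-config                     # hypothetical name
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile   # periodically reconciled to this manifest
data:
  key: value
```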

### beta.kubernetes.io/arch (deprecated)

This label has been deprecated. Please use `kubernetes.io/arch` instead.
@@ -834,6 +852,18 @@ Used on: Namespace

This annotation requires the [PodTolerationRestriction](/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction) admission controller to be enabled. This annotation key allows assigning tolerations to a namespace; any new Pods created in this namespace get these tolerations added.

### scheduler.alpha.kubernetes.io/tolerationsWhitelist {#schedulerkubernetestolerations-whitelist}

Example: `scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'`

Used on: Namespace

This annotation is only useful when the (alpha)
[PodTolerationRestriction](/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction)
admission controller is enabled. The annotation value is a JSON document that defines a list of allowed tolerations
for the namespace it annotates. When you create a Pod or modify its tolerations, the API server checks the tolerations
to see if they are mentioned in the allow list. The Pod is admitted only if the check succeeds.
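For instance, a sketch of a Namespace whose Pods may only use tolerations from the allow list; the namespace name is hypothetical, and the toleration list mirrors the example above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dedicated-team   # hypothetical namespace name
  annotations:
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'
```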

### scheduler.alpha.kubernetes.io/preferAvoidPods (deprecated) {#scheduleralphakubernetesio-preferavoidpods}

Used on: Nodes
@@ -1114,15 +1144,22 @@ used to determine if the user has applied settings different from the kubeadm de

Used on: Node

A marker label to indicate that the node is used to run {{< glossary_tooltip text="control plane" term_id="control-plane" >}} components. The kubeadm tool applies this label to the control plane nodes that it manages. Other cluster management tools typically also set this label.

You can use this label to make it easier to schedule Pods only onto control plane nodes, or to avoid running Pods on the control plane. If this label is set, the [EndpointSlice controller](/docs/concepts/services-networking/topology-aware-routing/#implementation-control-plane) ignores that node while calculating Topology Aware Hints.
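For example, a minimal sketch of a Pod steered onto control plane nodes using this label; the Pod name and image are placeholders, and the toleration is only needed if the matching control plane taint (described below) is present:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: control-plane-only                        # hypothetical name
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""     # kubeadm sets this label with an empty value
  tolerations:
  - key: node-role.kubernetes.io/control-plane    # tolerate the control plane taint, if set
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9              # placeholder image
```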

### node-role.kubernetes.io/control-plane {#node-role-kubernetes-io-control-plane-taint}

Used on: Node

Example: `node-role.kubernetes.io/control-plane:NoSchedule`

Taint that kubeadm applies on control plane nodes so that they allow only critical workloads to schedule on them. You can manually remove this taint from a specific node with the following command:

```shell
kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-
```

### node-role.kubernetes.io/master (deprecated) {#node-role-kubernetes-io-master-taint}
@@ -52,18 +52,25 @@ nor should they need to keep track of the set of backends themselves.

## Proxy modes

The kube-proxy starts up in different modes, which are determined by its configuration.

On Linux nodes, the available modes for kube-proxy are:

[`iptables`](#proxy-mode-iptables)
: A mode where the kube-proxy configures packet forwarding rules using iptables.

[`ipvs`](#proxy-mode-ipvs)
: A mode where the kube-proxy configures packet forwarding rules using IPVS.

There is only one mode available for kube-proxy on Windows:

[`kernelspace`](#proxy-mode-kernelspace)
: A mode where the kube-proxy configures packet forwarding rules in the Windows kernel.
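Because the mode is read from kube-proxy's configuration, one way to select it is via the `mode` field of a `KubeProxyConfiguration` file; a sketch, with other required fields omitted:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# On Linux: "iptables" (the default) or "ipvs"; on Windows: "kernelspace"
mode: "ipvs"
```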

### `iptables` proxy mode {#proxy-mode-iptables}

_This proxy mode is only available on Linux nodes._

In this mode, kube-proxy watches the Kubernetes
{{< glossary_tooltip term_id="control-plane" text="control plane" >}} for the addition and
removal of Service and EndpointSlice {{< glossary_tooltip term_id="object" text="objects." >}}
@@ -199,6 +206,8 @@ and is likely to hurt functionality more than it improves performance.

### IPVS proxy mode {#proxy-mode-ipvs}

_This proxy mode is only available on Linux nodes._

In `ipvs` mode, kube-proxy watches Kubernetes Services and EndpointSlices,
calls the `netlink` interface to create IPVS rules accordingly, and synchronizes
IPVS rules with Kubernetes Services and EndpointSlices periodically.
@@ -235,6 +244,37 @@ falls back to running in iptables proxy mode.

{{< figure src="/images/docs/services-ipvs-overview.svg" title="Virtual IP address mechanism for Services, using IPVS mode" class="diagram-medium" >}}

### `kernelspace` proxy mode {#proxy-mode-kernelspace}

_This proxy mode is only available on Windows nodes._

The kube-proxy configures packet filtering rules in the Windows _Virtual Filtering Platform_ (VFP),
an extension to Windows vSwitch. These rules process encapsulated packets within the node-level
virtual networks, and rewrite packets so that the destination IP address (and layer 2 information)
is correct for getting the packet routed to the correct destination.
The Windows VFP is analogous to tools such as Linux `nftables` or `iptables`. The Windows VFP extends
the _Hyper-V Switch_, which was initially implemented to support virtual machine networking.

When a Pod on a node sends traffic to a virtual IP address, and the kube-proxy selects a Pod on
a different node as the load balancing target, the `kernelspace` proxy mode rewrites that packet
to be destined to the target backend Pod. The Windows _Host Networking Service_ (HNS) ensures that
packet rewriting rules are configured so that the return traffic appears to come from the virtual
IP address and not the specific backend Pod.

#### Direct server return for `kernelspace` mode {#windows-direct-server-return}

{{< feature-state for_k8s_version="v1.14" state="alpha" >}}

As an alternative to the basic operation, a node that hosts the backend Pod for a Service can
apply the packet rewriting directly, rather than placing this burden on the node where the client
Pod is running. This is called _direct server return_.

To use this, you must run kube-proxy with the `--enable-dsr` command line argument **and**
enable the `WinDSR` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).

Direct server return also optimizes the case for Pod return traffic even when both Pods
are running on the same node.
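As a rough sketch of what that invocation looks like on a Windows node (the two flags are as named above; any other flags and their values are illustrative and omitted here):

```powershell
# Run kube-proxy in kernelspace mode with direct server return.
# --enable-dsr has no effect unless the WinDSR feature gate is also enabled.
kube-proxy.exe --proxy-mode=kernelspace `
  --enable-dsr `
  --feature-gates=WinDSR=true
```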

## Session affinity

In these proxy models, the traffic bound for the Service's IP:Port is
@@ -420,7 +420,7 @@ individually with the [`kubeadm init phase mark-control-plane`](/docs/reference/
Please note that:

1. The `node-role.kubernetes.io/master` taint is deprecated and will be removed in kubeadm version 1.25
1. Mark control-plane phase can be invoked individually with the command
   [`kubeadm init phase mark-control-plane`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-control-plane)
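For instance, a sketch of running that phase on its own; the node name is a placeholder:

```shell
sudo kubeadm init phase mark-control-plane --node-name <node-name>
```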
@@ -284,6 +284,36 @@ Content-Type: application/json
<followed by regular watch stream starting from resourceVersion="10245">
```

## Response compression

{{< feature-state for_k8s_version="v1.16" state="beta" >}}

`APIResponseCompression` is an option that allows the API server to compress the responses for **get**
and **list** requests, reducing the network bandwidth and improving the performance of large-scale clusters.
It has been enabled by default since Kubernetes 1.16, and it can be disabled by including
`APIResponseCompression=false` in the `--feature-gates` flag on the API server.

API response compression can significantly reduce the size of the response, especially for large resources or
[collections](/docs/reference/using-api/api-concepts/#collections).
For example, a **list** request for pods can return hundreds of kilobytes or even megabytes of data,
depending on the number of pods and their attributes. By compressing the response, the network bandwidth
can be saved and the latency can be reduced.

To verify whether `APIResponseCompression` is working, you can send a **get** or **list** request to the
API server with an `Accept-Encoding` header, and check the response size and headers. For example:

```console
GET /api/v1/pods
Accept-Encoding: gzip
---
200 OK
Content-Type: application/json
content-encoding: gzip
...
```

The `content-encoding` header indicates that the response is compressed with `gzip`.
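One hedged way to observe this from a workstation is to compare transfer sizes through a local API proxy; the port is arbitrary, and responses below the server's size threshold may come back uncompressed:

```shell
# Start a local proxy to the API server on a hypothetical port
kubectl proxy --port=8001 &

# Request gzip and report the bytes actually transferred
curl -sS -H 'Accept-Encoding: gzip' -o /dev/null \
  -w 'compressed: %{size_download} bytes\n' http://127.0.0.1:8001/api/v1/pods

# The same request without compression, for comparison
curl -sS -o /dev/null \
  -w 'uncompressed: %{size_download} bytes\n' http://127.0.0.1:8001/api/v1/pods
```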

## Retrieving large results sets in chunks

{{< feature-state for_k8s_version="v1.9" state="beta" >}}

@@ -1036,8 +1066,9 @@ Continue Token, Exact

{{< note >}}
When you **list** resources and receive a collection response, the response includes the
[list metadata](/docs/reference/generated/kubernetes-api/v{{< skew currentVersion >}}/#listmeta-v1-meta)
of the collection as well as
[object metadata](/docs/reference/generated/kubernetes-api/v{{< skew currentVersion >}}/#objectmeta-v1-meta)
for each item in that collection. For individual objects found within a collection response,
`.metadata.resourceVersion` tracks when that object was last updated, and not how up-to-date
the object is when served.
@@ -144,6 +144,44 @@ Examples:
See the [Kubernetes URL library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#URLs)
godoc for more information.

### Kubernetes authorizer library

For CEL expressions in the API where a variable of type `Authorizer` is available,
the authorizer may be used to perform authorization checks for the principal
(authenticated user) of the request.

API resource checks are performed as follows:

1. Specify the group and resource to check: `Authorizer.group(string).resource(string) ResourceCheck`
2. Optionally call any combination of the following builder functions to further narrow the authorization check.
   Note that these functions return the receiver type and can be chained:
   - `ResourceCheck.subresource(string) ResourceCheck`
   - `ResourceCheck.namespace(string) ResourceCheck`
   - `ResourceCheck.name(string) ResourceCheck`
3. Call `ResourceCheck.check(verb string) Decision` to perform the authorization check.
4. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.

Non-resource authorization checks are performed as follows:

1. Specify only a path: `Authorizer.path(string) PathCheck`
2. Call `PathCheck.check(httpVerb string) Decision` to perform the authorization check.
3. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.

To perform an authorization check for a service account:

- `Authorizer.serviceAccount(namespace string, name string) Authorizer`

{{< table caption="Examples of CEL expressions using authorizer functions" >}}
| CEL Expression | Purpose |
|--------------------------------------------------------------------------------------------------------------|------------------------------------------------|
| `authorizer.group('').resource('pods').namespace('default').check('create').allowed()` | Returns true if the principal (user or service account) is allowed to create pods in the 'default' namespace. |
| `authorizer.path('/healthz').check('get').allowed()` | Checks if the principal (user or service account) is authorized to make HTTP GET requests to the /healthz API path. |
| `authorizer.serviceAccount('default', 'myserviceaccount').resource('deployments').check('delete').allowed()` | Checks if the service account is authorized to delete deployments. |
{{< /table >}}

See the [Kubernetes Authz library](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz)
godoc for more information.

## Type checking

CEL is a [gradually typed language](https://github.com/google/cel-spec/blob/master/doc/langdef.md#gradual-type-checking).
@@ -297,4 +335,4 @@ execute. If so, the API server prevent the CEL expression from being written to
API resources by rejecting create or update operations containing the CEL
expression to the API resources. This feature offers a stronger assurance that
CEL expressions written to the API resource will be evaluated at runtime without
exceeding the runtime cost budget.
@@ -44,15 +44,16 @@ If you are running a version of Kubernetes other than v{{< skew currentVersion >
check the documentation for that version.
{{< /note >}}

<!-- body -->
## Install and configure prerequisites

The following steps apply common settings for Kubernetes nodes on Linux.

You can skip a particular setting if you're certain you don't need it.

For more information, see
[Network Plugin Requirements](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements)
or the documentation for your specific container runtime.

### Forwarding IPv4 and letting iptables see bridged traffic
@@ -78,29 +79,31 @@ EOF
sudo sysctl --system
```

Verify that the `br_netfilter` and `overlay` modules are loaded by running the following commands:

```bash
lsmod | grep br_netfilter
lsmod | grep overlay
```

Verify that the `net.bridge.bridge-nf-call-iptables`, `net.bridge.bridge-nf-call-ip6tables`, and
`net.ipv4.ip_forward` system variables are set to `1` in your `sysctl` config by running the following command:

```bash
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```

## cgroup drivers

On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}}
are used to constrain resources that are allocated to processes.

Both the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
underlying container runtime need to interface with control groups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/)
and set resources such as cpu/memory requests and limits. To interface with control
groups, the kubelet and the container runtime need to use a *cgroup driver*.
It's critical that the kubelet and the container runtime use the same cgroup
driver and are configured the same.

There are two cgroup drivers available:
@@ -110,16 +113,15 @@ There are two cgroup drivers available:

### cgroupfs driver {#cgroupfs-cgroup-driver}

The `cgroupfs` driver is the [default cgroup driver in the kubelet](/docs/reference/config-api/kubelet-config.v1beta1).
When the `cgroupfs` driver is used, the kubelet and the container runtime directly interface with
the cgroup filesystem to configure cgroups.

The `cgroupfs` driver is **not** recommended when
[systemd](https://www.freedesktop.org/wiki/Software/systemd/) is the
init system because systemd expects a single cgroup manager on
the system. Additionally, if you use [cgroup v2](/docs/concepts/architecture/cgroups), use the `systemd`
cgroup driver instead of `cgroupfs`.

### systemd cgroup driver {#systemd-cgroup-driver}
@@ -150,6 +152,11 @@ kind: KubeletConfiguration
cgroupDriver: systemd
```

{{< note >}}
In v1.22 and later, when creating a cluster with kubeadm, if the user does not set
the `cgroupDriver` field under `KubeletConfiguration`, kubeadm defaults it to `systemd`.
{{< /note >}}

If you configure `systemd` as the cgroup driver for the kubelet, you must also
configure `systemd` as the cgroup driver for the container runtime. Refer to
the documentation for your container runtime for instructions. For example:
@@ -190,7 +197,9 @@ using the (deprecated) v1alpha2 API instead.

This section outlines the necessary steps to use containerd as a CRI runtime.

To install containerd on your system, follow the instructions on
[getting started with containerd](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).
Return to this step once you've created a valid `config.toml` configuration file.

{{< tabs name="Finding your config.toml file" >}}
{{% tab name="Linux" %}}
@@ -157,7 +157,7 @@ For more information on version skews, see:
2. Download the Google Cloud public signing key:

   ```shell
   curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
   ```

3. Add the Kubernetes `apt` repository:
@@ -217,7 +217,7 @@ sudo systemctl enable --now kubelet
Install CNI plugins (required for most pod networks):

```bash
CNI_PLUGINS_VERSION="v1.3.0"
ARCH="amd64"
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"
@@ -239,7 +239,7 @@ sudo mkdir -p "$DOWNLOAD_DIR"
Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)):

```bash
CRICTL_VERSION="v1.27.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
```
@@ -253,7 +253,7 @@ cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}

RELEASE_VERSION="v0.15.1"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
@@ -297,4 +297,4 @@ If you are running into difficulties with kubeadm, please consult our [troublesh

## {{% heading "whatsnext" %}}

* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
@@ -21,7 +21,9 @@ acceptably. The kubelet provides methods to enable more complex workload
placement policies while keeping the abstraction free from explicit placement
directives.

For detailed information on resource management, please refer to the
[Resource Management for Pods and Containers](/docs/concepts/configuration/manage-resources-containers)
documentation.

## {{% heading "prerequisites" %}}
@@ -34,7 +34,7 @@ This page shows how to enable and configure encryption of secret data at rest.
The `kube-apiserver` process accepts an argument `--encryption-provider-config`
that controls how API data is encrypted in etcd.
The configuration is provided as an API named
[`EncryptionConfiguration`](/docs/reference/config-api/apiserver-encryption.v1/). An example configuration is provided below.

{{< caution >}}
**IMPORTANT:** For high-availability configurations (with two or more control plane nodes), the
@@ -321,19 +321,19 @@ To create a new Secret, perform the following steps:
  - command:
    - kube-apiserver
    ...
    - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml  # add this line
    volumeMounts:
    ...
    - name: enc                          # add this line
      mountPath: /etc/kubernetes/enc     # add this line
      readOnly: true                     # add this line
    ...
  volumes:
  ...
  - name: enc                            # add this line
    hostPath:                            # add this line
      path: /etc/kubernetes/enc          # add this line
      type: DirectoryOrCreate            # add this line
  ...
```
@@ -462,6 +462,19 @@ Then run the following command to force decrypt all Secrets:
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```

## Configure automatic reloading

You can configure automatic reloading of the encryption provider configuration.
That setting determines whether the
{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} should
load the file you specify for `--encryption-provider-config` only once at
startup, or automatically whenever you change that file. Enabling this option
allows you to change the keys for encryption at rest without restarting the
API server.

To allow automatic reloading, configure the API server to run with:
`--encryption-provider-config-automatic-reload=true`
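In a kubeadm-style static Pod manifest, the two flags sit together in the kube-apiserver command; a fragment as a sketch, where the config path must match the file you mounted earlier:

```yaml
- command:
  - kube-apiserver
  # ...
  - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml
  - --encryption-provider-config-automatic-reload=true   # reload the file when it changes
```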

## {{% heading "whatsnext" %}}

* Learn more about the [EncryptionConfiguration configuration API (v1)](/docs/reference/config-api/apiserver-encryption.v1/).
@@ -6,7 +6,7 @@ weight: 20

<!-- overview -->

This page explains how to configure the kubelet's cgroup driver to match the container
runtime cgroup driver for kubeadm clusters.

## {{% heading "prerequisites" %}}
@@ -20,7 +20,9 @@ You should be familiar with the Kubernetes

The [Container runtimes](/docs/setup/production-environment/container-runtimes) page
explains that the `systemd` driver is recommended for kubeadm based setups instead
of the kubelet's [default](/docs/reference/config-api/kubelet-config.v1beta1) `cgroupfs` driver,
because kubeadm manages the kubelet as a
[systemd service](/docs/setup/production-environment/tools/kubeadm/kubelet-integration).

The page also provides details on how to set up a number of different container runtimes with the
`systemd` driver by default.
@@ -32,9 +34,8 @@ This `KubeletConfiguration` can include the `cgroupDriver` field which controls the cgroup
driver of the kubelet.

{{< note >}}
-In v1.22, if the user is not setting the `cgroupDriver` field under `KubeletConfiguration`,
-`kubeadm` will default it to `systemd`.
+In v1.22 and later, if the user does not set the `cgroupDriver` field under `KubeletConfiguration`,
+kubeadm defaults it to `systemd`.
{{< /note >}}

A minimal example of configuring the field explicitly:
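The minimal example referred to above would look roughly like the following sketch of a kubeadm configuration file; the `kubernetesVersion` value is illustrative:

```yaml
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```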
@@ -81,7 +82,7 @@ you must refer to the documentation of the container runtime of your choice.

## Migrating to the `systemd` driver

-To change the cgroup driver of an existing kubeadm cluster to `systemd` in-place,
+To change the cgroup driver of an existing kubeadm cluster from `cgroupfs` to `systemd` in-place,
a similar procedure to a kubelet upgrade is required. This must include both
steps outlined below.
@@ -38,17 +38,17 @@ The upgrade workflow at high level is the following:
### Additional information

- The instructions below outline when to drain each node during the upgrade process.
  If you are performing a **minor** version upgrade for any kubelet, you **must**
  first drain the node (or nodes) that you are upgrading. In the case of control plane nodes,
  they could be running CoreDNS Pods or other critical workloads. For more information see
  [Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/).
- All containers are restarted after upgrade, because the container spec hash value is changed.
- To verify that the kubelet service has successfully restarted after the kubelet has been upgraded,
  you can execute `systemctl status kubelet` or view the service logs with `journalctl -xeu kubelet`
  (see the example after this list).
- Usage of the `--config` flag of `kubeadm upgrade` with
  [kubeadm configuration API types](/docs/reference/config-api/kubeadm-config.v1beta3)
  with the purpose of reconfiguring the cluster is not recommended and can have unexpected results. Follow the steps in
  [Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure) instead.

<!-- steps -->
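For the kubelet verification mentioned in the list above, the two commands are run on the upgraded node:

```shell
# Check that the kubelet service restarted cleanly after the upgrade
systemctl status kubelet
# Inspect recent kubelet service logs if something looks wrong
journalctl -xeu kubelet
```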
@@ -58,15 +58,23 @@ Find the latest patch release for Kubernetes {{< skew currentVersion >}} using the OS package manager:

{{< tabs name="k8s_install_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
-   apt update
-   apt-cache madison kubeadm
-   # find the latest {{< skew currentVersion >}} version in the list
-   # it should look like {{< skew currentVersion >}}.x-00, where x is the latest patch
+
+```shell
+# Find the latest {{< skew currentVersion >}} version in the list.
+# It should look like {{< skew currentVersion >}}.x-00, where x is the latest patch.
+apt update
+apt-cache madison kubeadm
+```
+
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
-   yum list --showduplicates kubeadm --disableexcludes=kubernetes
-   # find the latest {{< skew currentVersion >}} version in the list
-   # it should look like {{< skew currentVersion >}}.x-0, where x is the latest patch
+
+```shell
+# Find the latest {{< skew currentVersion >}} version in the list.
+# It should look like {{< skew currentVersion >}}.x-0, where x is the latest patch.
+yum list --showduplicates kubeadm --disableexcludes=kubernetes
+```
+
{{% /tab %}}
{{< /tabs >}}
@@ -79,75 +87,78 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc/kubernetes/admin.conf` file.

**For the first control plane node**

-- Upgrade kubeadm:
+1. Upgrade kubeadm:

   {{< tabs name="k8s_install_kubeadm_first_cp" >}}
   {{% tab name="Ubuntu, Debian or HypriotOS" %}}

   ```shell
   # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
   apt-mark unhold kubeadm && \
   apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \
   apt-mark hold kubeadm
   ```

   {{% /tab %}}
   {{% tab name="CentOS, RHEL or Fedora" %}}

   ```shell
   # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
   yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
   ```

   {{% /tab %}}
   {{< /tabs >}}
-   <br />

-- Verify that the download works and has the expected version:
+1. Verify that the download works and has the expected version:

   ```shell
   kubeadm version
   ```

-- Verify the upgrade plan:
+1. Verify the upgrade plan:

   ```shell
   kubeadm upgrade plan
   ```

   This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
   It also shows a table with the component config version states.

   {{< note >}}
   `kubeadm upgrade` also automatically renews the certificates that it manages on this node.
   To opt-out of certificate renewal the flag `--certificate-renewal=false` can be used.
   For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs).
   {{</ note >}}

   {{< note >}}
   If `kubeadm upgrade plan` shows any component configs that require manual upgrade, users must provide
   a config file with replacement configs to `kubeadm upgrade apply` via the `--config` command line flag.
   Failing to do so will cause `kubeadm upgrade apply` to exit with an error and not perform an upgrade.
   {{</ note >}}

-- Choose a version to upgrade to, and run the appropriate command. For example:
+1. Choose a version to upgrade to, and run the appropriate command. For example:

   ```shell
   # replace x with the patch version you picked for this upgrade
   sudo kubeadm upgrade apply v{{< skew currentVersion >}}.x
   ```

   Once the command finishes you should see:

   ```
   [upgrade/successful] SUCCESS! Your cluster was upgraded to "v{{< skew currentVersion >}}.x". Enjoy!

   [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
   ```

-- Manually upgrade your CNI provider plugin.
+1. Manually upgrade your CNI provider plugin.

   Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
   Check the [addons](/docs/concepts/cluster-administration/addons/) page to
   find your CNI provider and see whether additional upgrade steps are required.

   This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.

**For the other control plane nodes**
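On the remaining control plane nodes the flow is the same, except that `kubeadm upgrade apply` is replaced by the command below (a sketch of the usual kubeadm flow; draining, kubelet upgrade, and uncordoning still follow as in the next steps):

```shell
sudo kubeadm upgrade node
```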
@@ -167,60 +178,63 @@ Also calling `kubeadm upgrade plan` and upgrading the CNI provider plugin is no longer needed.

### Drain the node

-- Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
+Prepare the node for maintenance by marking it unschedulable and evicting the workloads:

```shell
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
```

### Upgrade kubelet and kubectl

-- Upgrade the kubelet and kubectl:
+1. Upgrade the kubelet and kubectl:

   {{< tabs name="k8s_install_kubelet" >}}
   {{% tab name="Ubuntu, Debian or HypriotOS" %}}

   ```shell
   # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
   apt-mark unhold kubelet kubectl && \
   apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \
   apt-mark hold kubelet kubectl
   ```

   {{% /tab %}}
   {{% tab name="CentOS, RHEL or Fedora" %}}

   ```shell
   # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
   yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
   ```

   {{% /tab %}}
   {{< /tabs >}}
-   <br />

-- Restart the kubelet:
+1. Restart the kubelet:

   ```shell
   sudo systemctl daemon-reload
   sudo systemctl restart kubelet
   ```

### Uncordon the node

-- Bring the node back online by marking it schedulable:
+Bring the node back online by marking it schedulable:

```shell
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
```

## Upgrade worker nodes

The upgrade procedure on worker nodes should be executed one node at a time or few nodes at a time,
without compromising the minimum required capacity for running your workloads.

-The following pages show how to Upgrade Linux and Windows worker nodes:
+The following pages show how to upgrade Linux and Windows worker nodes:

* [Upgrade Linux nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/)
* [Upgrade Windows nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/)

## Verify the status of the cluster
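After all nodes are upgraded and uncordoned, the status check under the heading above is typically a single command; every node's `STATUS` column should show `Ready`:

```shell
kubectl get nodes
```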
@@ -280,4 +294,3 @@ and post-upgrade manifest file for a certain component, a backup file for it will be written.

- Fetches the kubeadm `ClusterConfiguration` from the cluster.
- Upgrades the kubelet configuration for this node.
@@ -9,7 +9,7 @@ weight: 100

This page explains how to upgrade Linux worker nodes created with kubeadm.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

* Familiarize yourself with [the process for upgrading the rest of your kubeadm
cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to
@@ -21,80 +21,79 @@ upgrade the control plane nodes before upgrading your Linux Worker nodes.

### Upgrade kubeadm

Upgrade kubeadm:

{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && \
apt-mark hold kubeadm
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}

### Call "kubeadm upgrade"

-- For worker nodes this upgrades the local kubelet configuration:
+For worker nodes this upgrades the local kubelet configuration:

```shell
sudo kubeadm upgrade node
```

### Drain the node

-- Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
+Prepare the node for maintenance by marking it unschedulable and evicting the workloads:

```shell
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
```

### Upgrade kubelet and kubectl

-- Upgrade the kubelet and kubectl:
+1. Upgrade the kubelet and kubectl:

   {{< tabs name="k8s_kubelet_and_kubectl" >}}
   {{% tab name="Ubuntu, Debian or HypriotOS" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
   apt-mark unhold kubelet kubectl && \
   apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \
   apt-mark hold kubelet kubectl
   ```
   {{% /tab %}}
   {{% tab name="CentOS, RHEL or Fedora" %}}
   ```shell
   # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
   yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
   ```
   {{% /tab %}}
   {{< /tabs >}}
-   <br />

-- Restart the kubelet:
+1. Restart the kubelet:

   ```shell
   sudo systemctl daemon-reload
   sudo systemctl restart kubelet
   ```

### Uncordon the node

-- Bring the node back online by marking it schedulable:
+Bring the node back online by marking it schedulable:

```shell
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
```

## {{% heading "whatsnext" %}}

* See how to [Upgrade Windows nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/).
@@ -54,7 +54,7 @@ In order to use this feature, the kubelet expects two flags to be set:

The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins
should be invoked for which container images. Here's an example configuration file you may end up using if you are using the
-[ECR](https://aws.amazon.com/ecr/)-based plugin:
+[ECR-based plugin](https://github.com/kubernetes/cloud-provider-aws/tree/master/cmd/ecr-credential-provider):

```yaml
apiVersion: kubelet.config.k8s.io/v1
@@ -68,7 +68,7 @@ providers:
  # name is the required name of the credential provider. It must match the name of the
  # provider executable as seen by the kubelet. The executable must be in the kubelet's
  # bin directory (set by the --image-credential-provider-bin-dir flag).
-  - name: ecr
+  - name: ecr-credential-provider
  # matchImages is a required list of strings used to match against images in order to
  # determine if this provider should be invoked. If one of the strings matches the
  # requested image from the kubelet, the plugin will be invoked and given a chance
@@ -94,7 +94,7 @@ providers:
    # - registry.io:8080/path
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
-      - "*.dkr.ecr.*.amazonaws.cn"
+      - "*.dkr.ecr.*.amazonaws.com.cn"
      - "*.dkr.ecr-fips.*.amazonaws.com"
      - "*.dkr.ecr.us-iso-east-1.c2s.ic.gov"
      - "*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov"
@@ -107,8 +107,8 @@ providers:
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    # Arguments to pass to the command when executing it.
    # +optional
-    args:
-      - get-credentials
+    # args:
+    #   - --example-argument
    # Env defines additional environment variables to expose to the process. These
    # are unioned with the host's environment, as well as variables client-go uses
    # to pass argument to the plugin.
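Putting the two kubelet flags from this page together, a kubelet invocation that uses a configuration file like the one above might look like this sketch (both paths are illustrative, not mandated):

```shell
kubelet \
  --image-credential-provider-config=/etc/kubernetes/image-credential-provider-config.yaml \
  --image-credential-provider-bin-dir=/usr/local/bin/image-credential-providers
```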
@@ -8,25 +8,26 @@ weight: 340
---

<!-- overview -->

-This page shows how to view, work in, and delete {{< glossary_tooltip text="namespaces" term_id="namespace" >}}. The page also shows how to use Kubernetes namespaces to subdivide your cluster.
+This page shows how to view, work in, and delete {{< glossary_tooltip text="namespaces" term_id="namespace" >}}.
+The page also shows how to use Kubernetes namespaces to subdivide your cluster.

## {{% heading "prerequisites" %}}

* Have an [existing Kubernetes cluster](/docs/setup/).
-* You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip term_id="service" text="Services" >}}, and {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.
+* You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}},
+  {{< glossary_tooltip term_id="service" text="Services" >}}, and
+  {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.

<!-- steps -->

## Viewing namespaces

-1. List the current namespaces in a cluster using:
+List the current namespaces in a cluster using:

```shell
kubectl get namespaces
```
-```
+```console
NAME          STATUS    AGE
default       Active    11d
kube-system   Active    11d
@@ -35,9 +36,12 @@ kube-public   Active    11d

Kubernetes starts with three initial namespaces:

* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
-* `kube-public` This namespace is created automatically and is readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
+* `kube-public` This namespace is created automatically and is readable by all users
+  (including those not authenticated). This namespace is mostly reserved for cluster usage,
+  in case that some resources should be visible and readable publicly throughout the whole cluster.
+  The public aspect of this namespace is only a convention, not a requirement.

You can also get the summary of a specific namespace using:
@@ -50,7 +54,7 @@ Or you can get detailed information with:
```shell
kubectl describe namespaces <name>
```
-```
+```console
Name:         default
Labels:       <none>
Annotations:  <none>
@@ -66,18 +70,18 @@ Resource Limits

Note that these details show both resource quota (if present) as well as resource limit ranges.

-Resource quota tracks aggregate usage of resources in the *Namespace* and allows cluster operators
-to define *Hard* resource usage limits that a *Namespace* may consume.
+Resource quota tracks aggregate usage of resources in the Namespace and allows cluster operators
+to define *Hard* resource usage limits that a Namespace may consume.

-A limit range defines min/max constraints on the amount of resources a single entity can consume in
-a *Namespace*.
+A limit range defines min/max constraints on the amount of resources a single entity can consume in
+a Namespace.

See [Admission control: Limit Range](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md)

A namespace can be in one of two phases:

* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and can not be used for new objects

For more details, see [Namespace](/docs/reference/kubernetes-api/cluster-resources/namespace-v1/)
in the API reference.
@@ -85,35 +89,38 @@ in the API reference.

## Creating a new namespace

{{< note >}}
Avoid creating namespace with prefix `kube-`, since it is reserved for Kubernetes system namespaces.
{{< /note >}}

-1. Create a new YAML file called `my-namespace.yaml` with the contents:
+Create a new YAML file called `my-namespace.yaml` with the contents:

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: <insert-namespace-name-here>
   ```

   Then run:

-   ```
+   ```shell
   kubectl create -f ./my-namespace.yaml
   ```

-2. Alternatively, you can create namespace using below command:
+Alternatively, you can create namespace using below command:

-   ```
+   ```shell
   kubectl create namespace <insert-namespace-name-here>
   ```

The name of your namespace must be a valid
[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).

-There's an optional field `finalizers`, which allows observables to purge resources whenever the namespace is deleted. Keep in mind that if you specify a nonexistent finalizer, the namespace will be created but will get stuck in the `Terminating` state if the user tries to delete it.
+There's an optional field `finalizers`, which allows observables to purge resources whenever the
+namespace is deleted. Keep in mind that if you specify a nonexistent finalizer, the namespace will
+be created but will get stuck in the `Terminating` state if the user tries to delete it.

-More information on `finalizers` can be found in the namespace [design doc](https://git.k8s.io/design-proposals-archive/architecture/namespaces.md#finalizers).
+More information on `finalizers` can be found in the namespace
+[design doc](https://git.k8s.io/design-proposals-archive/architecture/namespaces.md#finalizers).

## Deleting a namespace
@@ -131,191 +138,192 @@ This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state.

## Subdividing your cluster using Kubernetes namespaces

-1. Understand the default namespace
-
By default, a Kubernetes cluster will instantiate a default namespace when provisioning the
cluster to hold the default set of Pods, Services, and Deployments used by the cluster.

Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:

```shell
kubectl get namespaces
```
-```
+```console
NAME      STATUS    AGE
default   Active    13m
```

-2. Create new namespaces
+### Create new namespaces

For this exercise, we will create two additional Kubernetes namespaces to hold our content.

In a scenario where an organization is using a shared Kubernetes cluster for development and
production use cases:

-The development team would like to maintain a space in the cluster where they can get a view on the list of Pods, Services, and Deployments
-they use to build and run their application. In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify resources
-are relaxed to enable agile development.
+- The development team would like to maintain a space in the cluster where they can get a view on
+  the list of Pods, Services, and Deployments they use to build and run their application.
+  In this space, Kubernetes resources come and go, and the restrictions on who can or cannot modify
+  resources are relaxed to enable agile development.

-The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
-Pods, Services, and Deployments that run the production site.
+- The operations team would like to maintain a space in the cluster where they can enforce strict
+  procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run
+  the production site.

One pattern this organization could follow is to partition the Kubernetes cluster into two
namespaces: `development` and `production`. Let's create two new namespaces to hold our work.

Create the `development` namespace using kubectl:

```shell
kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```

And then let's create the `production` namespace using kubectl:

```shell
kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
```

To be sure things are right, list all of the namespaces in our cluster.

```shell
kubectl get namespaces --show-labels
```
-```
+```console
NAME          STATUS    AGE       LABELS
default       Active    32m       <none>
development   Active    29s       name=development
production    Active    23s       name=production
```

-3. Create pods in each namespace
+### Create pods in each namespace

A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster.
Users interacting with one namespace do not see the content in another namespace.
To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace.

-```shell
-kubectl create deployment snowflake --image=registry.k8s.io/serve_hostname -n=development --replicas=2
-```
+```shell
+kubectl create deployment snowflake \
+  --image=registry.k8s.io/serve_hostname \
+  -n=development --replicas=2
+```

We have created a deployment whose replica size is 2 that is running the pod called `snowflake`
with a basic container that serves the hostname.

```shell
kubectl get deployment -n=development
```
-```
+```console
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
snowflake    2/2     2            2           2m
```
```shell
kubectl get pods -l app=snowflake -n=development
```
-```
+```console
NAME                         READY     STATUS    RESTARTS   AGE
snowflake-3968820950-9dgr8   1/1       Running   0          2m
snowflake-3968820950-vgc4n   1/1       Running   0          2m
```

And this is great, developers are able to do what they want, and they do not have to worry about
affecting content in the `production` namespace.

Let's switch to the `production` namespace and show how resources in one namespace are hidden from
the other. The `production` namespace should be empty, and the following commands should return nothing.

```shell
kubectl get deployment -n=production
kubectl get pods -n=production
```

Production likes to run cattle, so let's create some cattle pods.

```shell
kubectl create deployment cattle --image=registry.k8s.io/serve_hostname -n=production
kubectl scale deployment cattle --replicas=5 -n=production

kubectl get deployment -n=production
```
-```
+```console
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
cattle   5/5     5            5           10s
```

```shell
kubectl get pods -l app=cattle -n=production
```
-```
+```console
NAME                      READY     STATUS    RESTARTS   AGE
cattle-2263376956-41xy6   1/1       Running   0          34s
cattle-2263376956-kw466   1/1       Running   0          34s
cattle-2263376956-n4v97   1/1       Running   0          34s
cattle-2263376956-p5p3i   1/1       Running   0          34s
cattle-2263376956-sxpth   1/1       Running   0          34s
```

-At this point, it should be clear that the resources users create in one namespace are hidden from the other namespace.
+At this point, it should be clear that the resources users create in one namespace are hidden from
+the other namespace.

As the policy support in Kubernetes evolves, we will extend this scenario to show how you can provide different
authorization rules for each namespace.

<!-- discussion -->

## Understanding the motivation for using namespaces

-A single cluster should be able to satisfy the needs of multiple users or groups of users (henceforth a 'user community').
+A single cluster should be able to satisfy the needs of multiple users or groups of users
+(henceforth in this document a _user community_).

Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.

It does this by providing the following:

-1. A scope for [Names](/docs/concepts/overview/working-with-objects/names/).
-2. A mechanism to attach authorization and policy to a subsection of the cluster.
+1. A scope for [names](/docs/concepts/overview/working-with-objects/names/).
+1. A mechanism to attach authorization and policy to a subsection of the cluster.

Use of multiple namespaces is optional.

Each user community wants to be able to work in isolation from other communities.

Each user community has its own:

1. resources (pods, services, replication controllers, etc.)
-2. policies (who can or cannot perform actions in their community)
-3. constraints (this community is allowed this much quota, etc.)
+1. policies (who can or cannot perform actions in their community)
+1. constraints (this community is allowed this much quota, etc.)

A cluster operator may create a Namespace for each unique user community.

The Namespace provides a unique scope for:

1. named resources (to avoid basic naming collisions)
-2. delegated management authority to trusted users
-3. ability to limit community resource consumption
+1. delegated management authority to trusted users
+1. ability to limit community resource consumption

Use cases include:

1. As a cluster operator, I want to support multiple user communities on a single cluster.
-2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users
-   in those communities.
-3. As a cluster operator, I want to limit the amount of resources each community can consume in order
-   to limit the impact to other communities using the cluster.
-4. As a cluster user, I want to interact with resources that are pertinent to my user community in
-   isolation of what other user communities are doing on the cluster.
+1. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted
+   users in those communities.
+1. As a cluster operator, I want to limit the amount of resources each community can consume in
+   order to limit the impact to other communities using the cluster.
+1. As a cluster user, I want to interact with resources that are pertinent to my user community in
+   isolation of what other user communities are doing on the cluster.

## Understanding namespaces and DNS

-When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
+When you create a [Service](/docs/concepts/services-networking/service/), it creates a corresponding
+[DNS entry](/docs/concepts/services-networking/dns-pod-service/).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
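For example, following the DNS form described above, a Pod in any namespace could reach a Service named `backend` in the `production` namespace via its FQDN (hypothetical names, shown only to illustrate the pattern):

```shell
# The short name "backend" resolves only within the same namespace;
# the FQDN works across namespaces.
curl http://backend.production.svc.cluster.local
```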
## {{% heading "whatsnext" %}}

* Learn more about [setting the namespace preference](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference).
* Learn more about [setting the namespace for a request](/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-for-a-request).
* See [namespaces design](https://git.k8s.io/design-proposals-archive/architecture/namespaces.md).
@@ -231,7 +231,7 @@ Under this scenario, 'Allocatable' will be 14.5 CPUs, 28.5Gi of memory and 88Gi of local storage.
Scheduler ensures that the total memory `requests` across all pods on this node does
not exceed 28.5Gi and storage doesn't exceed 88Gi.
Kubelet evicts pods whenever the overall memory usage across pods exceeds 28.5Gi,
-or if overall disk usage exceeds 88Gi If all processes on the node consume as
+or if overall disk usage exceeds 88Gi. If all processes on the node consume as
much CPU as they can, pods together cannot consume more than 14.5 CPUs.

If `kube-reserved` and/or `system-reserved` is not enforced and system daemons
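The worked numbers above are consistent with kubelet flags along these lines (a sketch reconstructed from the stated reservations on a 16 CPU / 32Gi / 100Gi node; exact values on a real node may differ):

```shell
# 16 - 1 - 0.5 = 14.5 CPUs; 32 - 2 - 1 - 0.5 = 28.5Gi; 100 - 1 - 1 - 10% = 88Gi
--kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi
--system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi
--eviction-hard=memory.available<500Mi,nodefs.available<10%
```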
@@ -2,7 +2,6 @@
reviewers:
- soltysh
- sttts
-- ericchiang
content_type: concept
title: Auditing
---
@@ -44,7 +44,7 @@ The rest of this section describes these steps in detail.

The flow can be seen in the following diagram.

-.
+

The source for the above swimlanes can be found in the source of this document.
@@ -157,7 +157,7 @@ The following methods exist for installing kubectl on Linux:
2. Download the Google Cloud public signing key:

   ```shell
-   sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
+   curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
   ```

3. Add the Kubernetes `apt` repository:
@@ -122,8 +122,7 @@ gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled

{{< note >}}
AppArmor is currently in beta, so options are specified as annotations. Once support graduates to
-general availability, the annotations will be replaced with first-class fields (more details in
-[Upgrade path to GA](#upgrade-path-to-general-availability)).
+general availability, the annotations will be replaced with first-class fields.
{{< /note >}}

AppArmor profiles are specified *per-container*. To specify the AppArmor profile to run a Pod
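The per-container annotation form described in the note looks like this sketch; `<container_name>` is a placeholder, and the profile name is an example of a profile loaded on the node:

```yaml
metadata:
  annotations:
    # localhost/<profile> references an AppArmor profile loaded on the node
    container.apparmor.security.beta.kubernetes.io/<container_name>: localhost/k8s-apparmor-example-deny-write
```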
@@ -80,6 +80,8 @@ releases may also occur in between these.
| --------------------- | -------------------- | ----------- |
| May 2023              | 2023-05-12           | 2023-05-17  |
| June 2023             | 2023-06-09           | 2023-06-14  |
+| July 2023             | 2023-07-07           | 2023-07-12  |
+| August 2023           | 2023-08-04           | 2023-08-09  |

## Detailed Release History for Active Branches
@@ -18,7 +18,7 @@ If you are looking for information on how to start contributing to the Kubernetes repositories

## The basics of our documentation

-The Kuberentes documentation is written in Markdown, and is processed and
+The Kubernetes documentation is written in Markdown, and is processed and
deployed using Hugo. The source code is on GitHub at [git.k8s.io/website/](https://github.com/kubernetes/website).
Most of the Spanish documentation lives under `/content/es/docs`. Some of the
reference documentation is generated automatically with the scripts in the
@@ -258,7 +258,7 @@ RELEASE_VERSION="v0.6.0"
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
-sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
+sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
sudo chmod +x {kubeadm,kubelet,kubectl}

curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
@@ -30,15 +30,15 @@ You must use a kubectl version that is within one minor version of your cluster.
1. Download the latest release with the command:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
+   curl -LO https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
   ```

-   To download a specific version, replace the `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` part of the command with the specific version.
+   To download a specific version, replace the `$(curl -s https://dl.k8s.io/release/stable.txt)` part of the command with the specific version.

   For example, to download version {{< param "fullversion" >}} on Linux, type:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
+   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
   ```

2. Make the kubectl binary executable.

@@ -110,15 +110,15 @@ kubectl version --client
1. Download the latest release:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
+   curl -LO https://dl.k8s.io/release/$(curl -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl
   ```

-   To download a specific version, replace the `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` part of the command with the specific version.
+   To download a specific version, replace the `$(curl -s https://dl.k8s.io/release/stable.txt)` part of the command with the specific version.

   For example, to download version {{< param "fullversion" >}} on macOS, type:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
+   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
   ```

2. Make the kubectl binary executable.

@@ -180,15 +180,15 @@ If you are on macOS and using the [Macports](https://macports.org/) package manager

### Install the kubectl binary with curl on Windows

-1. Download the latest release {{< param "fullversion" >}} from [this link](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
+1. Download the latest release {{< param "fullversion" >}} from [this link](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).

   Or if you have `curl` installed, use this command:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
+   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
   ```

-   To find the latest stable version (for example, for scripting), take a look at [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt).
+   To find the latest stable version (for example, for scripting), take a look at [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).

2. Add the binary to your PATH.
3. Test to make sure the version you installed is up-to-date:
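The "make the binary executable" and PATH steps referenced above usually amount to the following on Linux and macOS (a sketch; the install location is a common choice, not mandated by the page):

```shell
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
# confirm the installed version
kubectl version --client
```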
|
@ -247,7 +247,7 @@ RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
|
|||
mkdir -p /opt/bin
|
||||
ARCH="amd64"
|
||||
cd /opt/bin
|
||||
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
|
||||
curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
|
||||
chmod +x {kubeadm,kubelet,kubectl}
|
||||
|
||||
RELEASE_VERSION="v0.2.7"
|
||||
|
|
|
@@ -26,15 +26,15 @@ You must use a kubectl version that is within one minor version of your cluster.
1. Download the latest release with the command:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
+   curl -LO https://dl.k8s.io/release/`curl -LS https://dl.k8s.io/release/stable.txt`/bin/linux/amd64/kubectl
   ```

-   To download a specific version, replace the `curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt` part with the desired version.
+   To download a specific version, replace the `curl -LS https://dl.k8s.io/release/stable.txt` part with the desired version.

   For example, to download version {{< param "fullversion" >}} on Linux, type:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
+   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
   ```

2. Make the `kubectl` binary executable.

@@ -106,15 +106,15 @@ kubectl version --client
1. Download the latest release with the command:

   ```
-   curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
+   curl -LO "https://dl.k8s.io/release/$(curl -LS https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
   ```

-   To download a specific version, replace the `curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt` part with the desired version.
+   To download a specific version, replace the `curl -LS https://dl.k8s.io/release/stable.txt` part with the desired version.

   For example, to download version {{< param "fullversion" >}} on macOS, type:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
+   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
   ```

2. Make the `kubectl` binary executable.

@@ -176,15 +176,15 @@ If you are using macOS and the [Macports](https://macports.org/) package manager

### Installing the kubectl binary with curl on Windows

-1. Download the latest release {{< param "fullversion" >}} from [this link](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
+1. Download the latest release {{< param "fullversion" >}} from [this link](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).

   Or if you already have `curl` on your machine, run this command:

   ```
-   curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
+   curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
   ```

-   To get the latest stable version (for example, for _scripting_), see [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt).
+   To get the latest stable version (for example, for _scripting_), see [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).

2. Add the downloaded binary to your PATH.
3. Make sure the installation succeeded by checking the version:
|
@ -245,8 +245,8 @@ web-0
|
|||
web-1
|
||||
```
|
||||
selanjutnya, jalankan:
|
||||
```
|
||||
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh
|
||||
```shell
|
||||
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
|
||||
```
|
||||
perintah itu akan menjalankan _shell_ baru.
|
||||
Di dalam _shell_ yang baru jalankan:
|
||||
|
|
|
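Inside that new shell, the typical check is a DNS lookup against one of the Pods listed above (a sketch; `nginx` is assumed to be the governing Service name in this tutorial):

```shell
nslookup web-0.nginx
```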
@@ -18,7 +18,7 @@ weight: 20

1. Download, unpack, and initialize the patched version of easyrsa3.

-       curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz
+       curl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz
        tar xzf easy-rsa.tar.gz
        cd easy-rsa-master/easyrsa3
        ./easyrsa init-pki
@@ -830,7 +830,7 @@ When using ExternalName with common protocols such as HTTP and HTTPS
`externalIPs` are not managed by Kubernetes; the cluster administrator is responsible for managing them.

In the Service spec, `externalIPs` can be specified alongside any of the `ServiceTypes`.
-In the example below, "`my-service`" can be accessed by clients at "`80.11.12.10:80`" (`externalIP:port`).
+In the example below, "`my-service`" can be accessed by clients at "`198.51.100.32:80`" (`externalIP:port`).

```yaml
apiVersion: v1
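kind: Service
# The manifest is cut off at this hunk boundary in the source; the fields
# below are a reconstruction sketch (selector and ports are illustrative,
# the externalIP is the address named in the prose above):
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 198.51.100.32
```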
@ -207,7 +207,7 @@ RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}

curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
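A guide like this usually finishes by enabling the unit file it just wrote; a sketch, assuming systemd is the node's init system:

```shell
# Enable and start the kubelet service installed above
systemctl enable --now kubelet
```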
@ -84,7 +84,7 @@ kubectl get pods --namespace kube-system -o jsonpath="{..image}"

## List container images using a go-template instead of jsonpath {#list-container-images-using-a-go-template-instead-of-jsonpath}

As an alternative to jsonpath, kubectl supports formatting output using [go-templates](https://golang.org/pkg/text/template/):
As an alternative to jsonpath, kubectl supports formatting output using [go-templates](https://pkg.go.dev/text/template):


```shell
@ -105,7 +105,7 @@ kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{r
### See also

* [jsonpath](/docs/reference/kubectl/jsonpath/) reference guide
* [Go template](https://golang.org/pkg/text/template/) reference guide
* [Go template](https://pkg.go.dev/text/template) reference guide
@ -48,7 +48,7 @@ Slack requires registration. [Request an invitation](https://slack.k

Once you have registered, browse the growing list of channels for various topics of interest.
For example, people new to Kubernetes may want to join [`#kubernetes-novice`](https://kubernetes.slack.com/messages/kubernetes-novice).
As another example, developers should join the [`#kubernetes-dev`](https://kubernetes.slack.com/messages/kubernetes-dev) channel.
As another example, developers should join the [`#kubernetes-contributors`](https://kubernetes.slack.com/messages/kubernetes-contributors) channel.

There are also many country-specific and language-specific channels. Join them for localized support and information.
@ -93,7 +93,7 @@ spec:

* See the `terminationMessagePath` field reference for [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
* Learn about [retrieving logs](/docs/concepts/cluster-administration/logging/)
* Learn about [Go templates](https://golang.org/pkg/text/template/)
* Learn about [Go templates](https://pkg.go.dev/text/template)
@ -26,15 +26,15 @@ The kubectl version must be within one minor version of the cluster
1. Download the latest release with the following command:

```
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -LS https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
```

To download a specific version, replace the `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` portion of the command with the specific version.
To download a specific version, replace the `$(curl -LS https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.

For example, to download version {{< param "fullversion" >}} on Linux, type:

```
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
```

2. Make the kubectl binary executable.
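A sketch of step 2 on Linux, assuming the download landed in the current directory:

```shell
# Make the binary executable, then move it onto the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```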
@ -108,15 +108,15 @@ kubectl version --client
1. Download the latest release:

```bash
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -LS https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
```

To download a specific version, replace the `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` portion of the command with the specific version.
To download a specific version, replace the `$(curl -LS https://dl.k8s.io/release/stable.txt)` portion of the command with the specific version.

For example, to download version {{< param "fullversion" >}} on macOS, type:

```bash
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
```

2. Make the kubectl binary executable.
@ -178,15 +178,15 @@ Using the [MacPorts](https://macports.org/) package manager on macOS

### Install the kubectl binary with curl on Windows

1. Download the latest release {{< param "fullversion" >}} from [this link](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
1. Download the latest release {{< param "fullversion" >}} from [this link](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).

Or, if you have `curl` installed, you can use this command:

```bash
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
```

To find out the latest stable version (for example, for scripting), see [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt).
To find out the latest stable version (for example, for scripting), see [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).

2. Add the binary to your PATH.
3. Ensure the version of `kubectl` is the same as the one you downloaded:
@ -0,0 +1,5 @@
---
title: "Security"
weight: 40
---
@ -0,0 +1,149 @@
---
title: Apply Pod Security Standards at the namespace level
content_type: tutorial
weight: 20
---

{{% alert title="Note" %}}
This tutorial applies only to new clusters.
{{% /alert %}}

Pod Security Admission (PSA) [graduated to beta](/blog/2021/12/09/pod-security-admission-beta/) and is enabled by default from v1.23 onwards.
Pod Security Admission is an admission controller that applies [Pod Security Standards](/ja/docs/concepts/security/pod-security-standards/) when pods are created.
In this tutorial, you will enforce the `baseline` Pod Security Standard, one namespace at a time.

You can also apply Pod Security Standards to multiple namespaces at once at the cluster level. For instructions, see [Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss/).

## {{% heading "prerequisites" %}}

Install the following on your workstation:

- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/ja/docs/tasks/tools/)

## Create cluster

1. Create a `KinD` cluster as follows:

```shell
kind create cluster --name psa-ns-level
```

The output is similar to this:

```
Creating cluster "psa-ns-level" ...
✓ Ensuring node image (kindest/node:v{{< skew currentPatchVersion >}}) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-ns-level"
You can now use your cluster with:

kubectl cluster-info --context kind-psa-ns-level

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
```

1. Set the kubectl context to the new cluster:

```shell
kubectl cluster-info --context kind-psa-ns-level
```
The output is similar to this:

```
Kubernetes control plane is running at https://127.0.0.1:50996
CoreDNS is running at https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

## Create a namespace

Create a new namespace called `example`:

```shell
kubectl create ns example
```

The output is similar to this:

```
namespace/example created
```

## Enable Pod Security Standards checking for that namespace

1. Enable Pod Security Standards on this namespace using labels supported by the built-in Pod Security Admission.
In this step you configure a check to warn on Pods that don't meet the latest version of the _baseline_ Pod Security Standard.

```shell
kubectl label --overwrite ns example \
  pod-security.kubernetes.io/warn=baseline \
  pod-security.kubernetes.io/warn-version=latest
```

2. Multiple Pod Security Standard checks can be configured on any namespace, using labels.
The following command will `enforce` the `baseline` Pod Security Standard, but `warn` and `audit` for the `restricted` Pod Security Standard as per the latest version (default value).

```shell
kubectl label --overwrite ns example \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=latest \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=latest
```
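Not part of the original tutorial: a quick way to confirm that the labels were applied, using only standard kubectl:

```shell
# List the namespace together with the pod-security labels set above
kubectl get ns example --show-labels
```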
## Verify the Pod Security Standards enforcement

1. Create a `baseline` Pod in the `example` namespace:

```shell
kubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml
```
The Pod starts OK, but the output includes a warning. For example:

```
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
```

1. Create a `baseline` Pod in the `default` namespace:

```shell
kubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml
```
The output is similar to this:

```
pod/nginx created
```

The Pod Security Standards enforcement and warning settings were applied only to the `example` namespace.
You could create the same Pod in the `default` namespace with no warnings.

## Clean up

Now delete the cluster which you created above by running the following command:

```shell
kind delete cluster --name psa-ns-level
```

## {{% heading "whatsnext" %}}

- Run a [shell script](/examples/security/kind-with-namespace-level-baseline-pod-security.sh) to perform all the preceding steps at once:

  1. Create a KinD cluster.
  2. Create a new namespace.
  3. Apply the `baseline` Pod Security Standard in `enforce` mode, and the `restricted` Pod Security Standard in `warn` and `audit` mode.
  4. Create a new pod with these Pod Security Standards applied.

- [Pod Security Admission](/ja/docs/concepts/security/pod-security-admission/)
- [Pod Security Standards](/ja/docs/concepts/security/pod-security-standards/)
- [Apply Pod Security Standards at the cluster level](/ja/docs/tutorials/security/cluster-level-pss/)
@ -2,7 +2,6 @@
#reviewers:
#- soltysh
#- sttts
#- ericchiang
content_type: concept
title: Auditing
---
@ -0,0 +1,83 @@
---
title: Documentation Content Guide
linktitle: Content guide
content_type: concept
weight: 10
---

<!-- overview -->

This page contains guidelines for the Kubernetes documentation.

If you have questions about what's allowed, join the #sig-docs channel in
[Kubernetes Slack](https://slack.k8s.io/) and ask!

You can register for Kubernetes Slack at https://slack.k8s.io/.

For information on creating new content for the Kubernetes docs, follow the
[style guide](/pt-br/docs/contribute/style/style-guide).

<!-- body -->

## Overview

Source for the Kubernetes website, including the docs, resides in the
[kubernetes/website](https://github.com/kubernetes/website) repository.

Located in the `kubernetes/website/content/<language-code>/docs` folder, the
majority of the Kubernetes documentation is specific to the
[Kubernetes project](https://github.com/kubernetes/kubernetes).

## What's allowed

Kubernetes docs allow content for third-party projects only when:

- The content documents software in the Kubernetes project
- The content documents software that's out of project but necessary for
  Kubernetes to function
- The content is canonical on kubernetes.io, or links to canonical content
  elsewhere

### Third-party content

The Kubernetes documentation includes applied examples of projects in the
Kubernetes project — projects that live in the
[kubernetes](https://github.com/kubernetes) and
[kubernetes-sigs](https://github.com/kubernetes-sigs)
GitHub organizations.

Links to active content in the Kubernetes project are always allowed.

Kubernetes requires some third-party content to function. Examples include
container runtimes (containerd, CRI-O, Docker),
[networking policy](/pt-br/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
(CNI plugins), [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/),
and [logging](/pt-br/docs/concepts/cluster-administration/logging/).

Docs can link to third-party open source software outside the Kubernetes
project only if it's necessary for Kubernetes to function.

### Dual-sourced content

Wherever possible, the Kubernetes docs link to canonical sources instead of
hosting duplicate content.

Duplicated content requires twice the effort (or more!) to maintain and grows
stale more quickly.

{{< note >}}
If you're a maintainer and need help hosting your own docs, ask for help in
[#sig-docs on Kubernetes Slack](https://kubernetes.slack.com/messages/C1J0BPD2M/).
{{< /note >}}

### More information

If you have questions about allowed content, join the #sig-docs channel in
[Kubernetes Slack](https://slack.k8s.io/) and ask!

## {{% heading "whatsnext" %}}

* Read the [style guide](/pt-br/docs/contribute/style/style-guide).
@ -0,0 +1,718 @@
---
title: Documentation Style Guide
linktitle: Style guide
content_type: concept
weight: 40
---

<!-- overview -->
This page gives writing style guidelines for the Kubernetes documentation.
These are guidelines, not rules. Use your best judgment, and feel free to
propose changes to this document in a pull request.

For additional information on creating new content for the Kubernetes
documentation, read the
[Documentation Content Guide](/pt-br/docs/contribute/style/content-guide/).

Changes to the style guide are made by SIG Docs as a group. To propose a change
or addition, add it to the [agenda](https://bit.ly/sig-docs-agenda) for an
upcoming SIG Docs meeting, and attend the meeting to participate in the
discussion.

<!-- body -->

{{< note >}}
The Kubernetes documentation uses the
[Goldmark Markdown renderer](https://github.com/yuin/goldmark) with some
adjustments, along with a few [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/)
to support glossary entries, tabs, and representing feature state.
{{< /note >}}

## Language

The Kubernetes documentation has been translated into multiple languages (see
[Localization READMEs](https://github.com/kubernetes/website/blob/main/README.md#localization-readmemds)).

The way to localize the docs for a different language is described in
[Localizing Kubernetes documentation](/docs/contribute/localization/).

## Documentation formatting standards

### Use upper camel case for API objects {#use-upper-camel-case-for-api-objects}

When you refer specifically to interacting with an API object, use
[UpperCamelCase](https://pt.wikipedia.org/wiki/CamelCase), also known as
Pascal case. You may see different capitalization, such as "configMap", in the
[API reference](/docs/reference/kubernetes-api/). When writing general
documentation, prefer upper camel case, calling the object "ConfigMap" instead.

When you are generally discussing an API object, use
[sentence-style capitalization](https://learn.microsoft.com/pt-br/style-guide/text-formatting/using-type/use-sentence-style-capitalization).

The following examples focus on capitalization. For more information about
formatting API object names, review the related guidance on
[code style](#code-style-inline-code).

{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
Do | Don't
:-------------------------------------------------------------------------------- | :-------
The HorizontalPodAutoscaler resource is responsible for ... | The Horizontal pod autoscaler is responsible for ...
A PodList object is a list of Pods. | A Pod List object is a list of Pods.
The Volume object contains a `hostPath` field. | The volume object contains a hostPath field.
Every ConfigMap object is part of a namespace. | Every configMap object is part of a namespace.
For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API.
{{< /table >}}

### Use angle brackets for placeholders

Use angle brackets (< and >) for placeholders. Tell the reader what the
placeholder stands for. For example:

```shell
kubectl describe pod <pod-name> -n <namespace>
```

If the Pod's namespace is `default`, you can omit the '-n' parameter.

### Use bold for user interface elements

{{< table caption = "Do and Don't - Bold user interface elements" >}}
Do | Don't
:------------------- | :-------
Click **Fork**. | Click "Fork".
Select **Other**. | Select "Other".
{{< /table >}}

### Use italics to define or introduce new terms

{{< table caption = "Do and Don't - Use italics for new terms" >}}
Do | Don't
:-- | :-----
A _cluster_ is a set of nodes ... | A "cluster" is a set of nodes ...
These components form the _control plane_. | These components form the **control plane**.
{{< /table >}}

### Use code style for filenames, directories, and paths

{{< table caption = "Do and Don't - Use code style for filenames, directories, and paths" >}}
Do | Don't
:--| :-----
Open the `envars.yaml` file. | Open the envars.yaml file.
Go to the `/docs/tutorials` directory. | Go to the /docs/tutorials directory.
Open the `/_data/concepts.yaml` file. | Open the /\_data/concepts.yaml file.
{{< /table >}}

### Use the international standard for punctuation inside quotes

{{< table caption = "Do and Don't - Use the international standard for punctuation inside quotes" >}}
Do | Don't
:--| :-----
events are recorded with an associated "stage". | events are recorded with an associated "stage."
The copy is called a "fork". | The copy is called a "fork."
{{< /table >}}

## Inline code formatting

### Use code style for inline code, commands, and API objects {#code-style-inline-code}

For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use backticks (`` ` ``).

{{< table caption = "Do and Don't - Use code style for inline code, commands, and API objects" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
The kubelet on each node acquires a `Lease` ... | The kubelet on each node acquires a _lease_ ...
A `PersistentVolume` represents durable storage ... | A _PersistentVolume_ represents durable storage ...
For declarative management, use `kubectl apply`. | For declarative management, use "kubectl apply".
Enclose code samples with triple backticks. (` ``` `) | Enclose code samples with any other syntax.
Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**.
Use triple backticks before and after a multi-line block of code for fenced code blocks. | Use multi-line blocks of code to create diagrams, flowcharts, or other illustrations.
Use meaningful variable names that have a context. | Use variable names such as 'foo', 'bar', and 'baz' that are not meaningful and lack context.
Remove trailing spaces in the code. | Add trailing spaces in the code, where these are important, because screen readers will read out the spaces as well.
{{< /table >}}

{{< note >}}
The website supports syntax highlighting for code samples, but specifying a
language is optional. Syntax highlighting in code blocks should conform to the
[contrast guidelines.](https://www.w3.org/WAI/WCAG21/quickref/?versions=2.0&showtechniques=141%2C143#contrast-minimum)
{{< /note >}}

### Use code style for object field names and namespaces

{{< table caption = "Do and Don't - Use code style for object field names" >}}
Do | Don't
:--| :-----
Set the value of the `replicas` field in the configuration file. | Set the value of the "replicas" field in the configuration file.
The value of the `exec` field is an ExecAction object. | The value of the "exec" field is an ExecAction object.
Run the process as a DaemonSet in the `kube-system` namespace. | Run the process as a DaemonSet in the kube-system namespace.
{{< /table >}}

### Use code style for command-line tools and Kubernetes component names

{{< table caption = "Do and Don't - Use code style for command-line tools and Kubernetes components" >}}
Do | Don't
:--| :-----
The kubelet preserves node stability. | The `kubelet` preserves node stability.
The `kubectl` handles locating and authenticating to the API server. | The kubectl handles locating and authenticating to the API server.
Run the process with the certificate, `kube-apiserver --client-ca-file=FILENAME`. | Run the process with the certificate, kube-apiserver --client-ca-file=FILENAME.
{{< /table >}}

### Starting a sentence with a command-line tool or component name

{{< table caption = "Do and Don't - Starting a sentence with a command-line tool or component name" >}}
Do | Don't
:--| :-----
The `kubeadm` tool bootstraps and provisions machines in a cluster. | `kubeadm` bootstraps and provisions machines in a cluster.
The kube-scheduler is the default scheduler for Kubernetes. | kube-scheduler is the default scheduler for Kubernetes.
{{< /table >}}

### Use a general descriptor over a component name

{{< table caption = "Do and Don't - Use a general descriptor over a component name" >}}
Do | Don't
:--| :-----
The Kubernetes API server offers an OpenAPI spec. | The apiserver offers an OpenAPI spec.
Aggregated APIs are subordinate API servers. | Aggregated APIs are subordinate APIServers.
{{< /table >}}

### Use normal style for string and integer field values

For field values of type string or integer, use normal style without quotation marks.

{{< table caption = "Do and Don't - Use normal style for string and integer field values" >}}
Do | Don't
:--| :-----
Set the value of `imagePullPolicy` to Always. | Set the value of `imagePullPolicy` to "Always".
Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.16`.
Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
{{< /table >}}
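To make the table above concrete, here is a small illustrative manifest (the Deployment name and labels are invented for this sketch) showing those literal field values in place:

```yaml
# Sketch only: arbitrary names, but real fields with the literal values discussed above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-nginx
  template:
    metadata:
      labels:
        app: example-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        imagePullPolicy: Always
```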
## Referring to Kubernetes API resources

This section talks about how to reference API resources in the documentation.

### Clarification about "resource"

Kubernetes uses the word "resource" to refer to API resources, such as `pod`,
`deployment`, and so on. We also use "resource" to talk about CPU and memory
requests and limits. Always refer to API resources as "API resources" to avoid
confusion with CPU and memory resources.

### When to use Kubernetes API terminologies

The different Kubernetes API terminologies are:

- Resource type: the name used in the API URL (such as `pods`, `namespaces`)
- Resource: a single instance of a resource type (such as `pod`, `secret`)
- Object: a resource that serves as a "record of intent". An object is a
  desired state for a specific part of your cluster, which the Kubernetes
  control plane tries to maintain.

Always use "resource" or "object" when referring to an API resource in the
documentation. For example, use "a Secret object" instead of just "a Secret".

### API resource names

Always format API resource names using
[UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as
PascalCase, and code formatting.

For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``).

Don't split an API object name into individual words. For example, write
`PodTemplateList` instead of Pod Template List.

For more information about PascalCase and code formatting, please review the
related guidance on
[Use upper camel case for API objects](#use-upper-camel-case-for-api-objects)
and [Use code style for inline code, commands, and API objects](#code-style-inline-code).

For more information about Kubernetes API terminologies, please review the
related guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/#standard-api-terminology).

## Code snippet formatting

### Don't include the command prompt

{{< table caption = "Do and Don't - Don't include the command prompt" >}}
Do | Don't
:--| :-----
kubectl get pods | $ kubectl get pods
{{< /table >}}

### Separate commands from output

Verify that the Pod is running on your chosen node:

```shell
kubectl get pods --output=wide
```

The output is similar to:

```console
NAME     READY     STATUS    RESTARTS   AGE    IP           NODE
nginx    1/1       Running   0          13s    10.200.0.4   worker0
```

### Versioning Kubernetes examples

Code examples and configuration examples that include version information
should be consistent with the accompanying text.

If the information is version specific, the Kubernetes version needs to be
defined in the `prerequisites` section of the
[task](/docs/contribute/style/page-content-types/#task) or
[tutorial](/docs/contribute/style/page-content-types/#tutorial) page templates.
Once the page is saved, the `prerequisites` section is shown with the title
**Before you begin**.

To specify the Kubernetes version for a task or tutorial page, include the
`min-kubernetes-server-version` key in the front matter of the page.

If the example YAML is in a standalone file, find and review the topics that
include it as a reference. Verify that any topics using the standalone YAML
have the appropriate version information defined. If a standalone YAML file is
not referenced from any topics, consider deleting it instead of updating it.

For example, if you are writing a tutorial that is relevant to Kubernetes
version 1.8, the front matter of your Markdown file should look something like:

```yaml
---
title: <your tutorial title here>
min-kubernetes-server-version: v1.8
---
```

In code and configuration examples, do not include comments about alternative
versions. Be careful to not include incorrect statements in your examples as
comments, such as:

```yaml
apiVersion: v1 # earlier versions use...
kind: Pod
...
```

## Kubernetes.io word list

A list of Kubernetes-specific terms and words to be used consistently across
the site.

{{< table caption = "Kubernetes.io word list" >}}
Term | Usage
:--- | :----
Kubernetes | Kubernetes should always be capitalized.
Docker | Docker should always be capitalized.
SIG Docs | SIG Docs rather than SIG-DOCS or other variations.
On-premises | On-premises or On-prem rather than On-premise or other variations.
{{< /table >}}

## Shortcodes

Hugo [shortcodes](https://gohugo.io/content-management/shortcodes) help create
different rhetorical appeal levels. Our documentation supports three different
shortcodes in this category: **Note** `{{</* note */>}}`,
**Caution** `{{</* caution */>}}`, and **Warning** `{{</* warning */>}}`.

1. Surround the text with an opening and closing shortcode.

2. Use the following syntax to apply a style:

   ```none
   {{</* note */>}}
   No need to include a prefix; the shortcode automatically provides one (Note:, Caution:, etc.).
   {{</* /note */>}}
   ```

The output is similar to:

{{< note >}}
The prefix is generated automatically from the tag type you select.
{{< /note >}}

### Note

Use `{{</* note */>}}` to highlight a tip or a piece of information that may be
helpful to the reader.

For example:

```
{{</* note */>}}
You can _still_ use Markdown inside these callouts.
{{</* /note */>}}
```

The output is:

{{< note >}}
You can _still_ use Markdown inside these callouts.
{{< /note >}}

You can use the `{{</* note */>}}` shortcode in a list:

```
1. Use the `note` shortcode in a list

1. A second item in a list with an embedded note shortcode

   {{</* note */>}}
   Warning, Caution, and Note shortcodes, embedded in lists, need to be indented
   four spaces. See [Common shortcode issues](#common-shortcode-issues).
   {{</* /note */>}}

1. A third item in a list

1. A fourth item in a list
```

The output is:

1. Use the `note` shortcode in a list

1. A second item in a list with an embedded note shortcode

   {{< note >}}
   Warning, Caution, and Note shortcodes, when embedded in lists, need to be
   indented four spaces. See
   [Common shortcode issues](#common-shortcode-issues).
   {{< /note >}}

1. A third item in a list

1. A fourth item in a list

### Caution

Use `{{</* caution */>}}` to call attention to an important piece of
information that can avoid pitfalls.

For example:

```
{{</* caution */>}}
The callout style only applies to the line directly above the tag.
{{</* /caution */>}}
```

The output is:

{{< caution >}}
The callout style only applies to the line directly above the tag.
{{< /caution >}}

### Warning

Use `{{</* warning */>}}` to indicate danger or a piece of guidance that is
crucial to follow.

For example:

```
{{</* warning */>}}
Beware.
{{</* /warning */>}}
```

The output is:

{{< warning >}}
Beware.
{{< /warning >}}

### Katacoda embedded live environment

This button lets users run Minikube in their browser using the Katacoda
terminal. It lowers the barrier of entry by allowing people to use Minikube
with one click instead of going through the complete Minikube and kubectl
installation and configuration process locally.

The embedded live environment is configured to run `minikube start` and lets
users complete tutorials in the same window as the documentation.

{{< caution >}}
The session is limited to 15 minutes.
{{< /caution >}}

For example:

```
{{</* kat-button */>}}
```

The output is:

{{< kat-button >}}

## Common shortcode issues {#common-shortcode-issues}

### Ordered lists

Shortcodes interrupt numbered lists unless you indent four spaces before the
notice and the tag.

For example:

```
1. Preheat oven to 350°F.

1. Prepare the batter, and pour it into the springform pan.
   `{{</* note */>}}Grease the pan for best results.{{</* /note */>}}`

1. Bake for 20-25 minutes or until a toothpick comes out clean.
```

The output is:

1. Preheat oven to 350°F.

1. Prepare the batter, and pour it into the springform pan.
   {{< note >}}Grease the pan for best results.{{< /note >}}

1. Bake for 20-25 minutes or until a toothpick comes out clean.

### `include` statements

Shortcodes inside `include` statements break the build. You must insert them
in the parent document, before and after you call the include. For example:

```
{{</* note */>}}
{{</* include "task-tutorial-prereqs.md" */>}}
{{</* /note */>}}
```

## Markdown elements

### Line breaks

Use a single blank line to separate block-level content such as headings,
lists, images, code blocks, and others. The exception is second-level headings,
where it should be two blank lines. Second-level headings follow the
first-level heading (or the title) without any preceding text or paragraphs.
Two blank lines of spacing help visualize the overall structure of the content
better in a text editor.

### Headings and titles {#headings}

People accessing this documentation may use a screen reader or other assistive
technology. [Screen readers](https://pt.wikipedia.org/wiki/Leitor_de_tela) are
linear output devices that output one item on a page at a time. If there is a
lot of content on a page, you can use headings to give the page an internal
structure. A good page structure helps all readers navigate easily or filter
topics of interest.

{{< table caption = "Do and Don't - Headings" >}}
Do | Don't
:--| :-----
Update the title in the front matter of the page or blog post. | Use a first-level heading, as Hugo automatically converts the title in the front matter into a first-level heading.
Use ordered headings to provide a meaningful high-level outline of your content. | Use headings level 4 through 6, unless it is absolutely necessary. If your content is that detailed, it may need to be broken into separate articles.
Use pound or hash signs (`#`) for non-blog-post content. | Use dashes or equals signs (`---` or `===`) to designate first-level headings.
Use sentence case for headings in the page body. For example, **Extend kubectl with plugins** | Use title case for headings in the page body. For example, **Extend Kubectl With Plugins**
Use title case for the page title in the front matter. For example, `title: Kubernetes API Server Bypass Risks` | Use sentence case for page titles in the front matter. For example, don't use `title: Kubernetes API server bypass risks`
{{< /table >}}

### Paragraphs

{{< table caption = "Do and Don't - Paragraphs" >}}
Do | Don't
:--| :-----
Try to keep paragraphs below 6 sentences. | Indent the first paragraph with space characters. For example, ⋅⋅⋅Three spaces before a paragraph will indent it.
Use three hyphens (`---`) to create a horizontal rule. Use horizontal rules for breaks in paragraph content. For example, a change of scene in a story, or a shift of topic within a section. | Use horizontal rules for decoration.
{{< /table >}}

### Links

{{< table caption = "Do and Don't - Links" >}}
Do | Don't
:--| :-----
Write hyperlinks that give the reader context for the content they link to. For example: Certain ports are open on your machines. See <a href="#check-required-ports">Check required ports</a> for more details. | Use ambiguous terms such as "click here". For example: Certain ports are open on your machines. See <a href="#check-required-ports">here</a> for more details.
Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)`, whose output is [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions). | Write HTML-style links: `<a href="/media/examples/link-element-example.css" target="_blank">Visit our tutorial!</a>`, or create links that open in new tabs or windows. For example: `[example website](https://example.com){target="_blank"}`
{{< /table >}}

### Lists

Group items in a list that are related to each other and need to appear in a
specific order, or to indicate a correlation between multiple items. When a
screen reader comes across a list, whether it is an ordered or unordered list,
it announces to the user that there is a group of list items. The user can
then use the arrow keys to move up and down between the various items in the
list. Website navigation links can also be marked up as list items, since they
are nothing more than a group of related links.

- End each item in a list with a period if one or more items in the list are
  complete sentences. For consistency, normally either all items should be
  complete sentences, or none should be.

  {{< note >}}
  Ordered lists that are part of an incomplete introductory sentence can be in
  lowercase and punctuated as if each item were a part of the introductory
  sentence.
  {{< /note >}}

- Use the number one (`1.`) for ordered lists.

- Use (`+`), (`*`), or (`-`) for unordered lists.

- Leave a blank line after each list.

- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).

- List items may consist of multiple paragraphs. Each subsequent paragraph in
  a list item must be indented by either four spaces or one tab character.

### Tables

The semantic purpose of a data table is to present tabular data. Sighted users
can quickly scan the table visually, but a screen reader goes through its
contents line by line. A table caption is used to create a descriptive title
for a data table. Assistive technologies use the HTML `caption` element to
identify the table contents to the user within the page structure.

- Add captions to your tables using the [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions) for tables, as in the sketch after this list.
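A minimal sketch of that caption shortcode wrapped around an ordinary Markdown table (the column names here are invented for illustration):

```none
{{</* table caption="Example parameters" */>}}
Parameter | Description
:-------- | :----------
`timeout` | How long to wait before giving up.
{{</* /table */>}}
```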
## Content best practices

This section contains suggested best practices for clear, concise, and
consistent content.

### Use present tense

{{< table caption = "Do and Don't - Use present tense" >}}
Do | Don't
:--| :-----
This command starts a proxy. | This command will start a proxy.
{{< /table >}}

Exception: use future or past tense if it is required to convey the correct
meaning.

### Use active voice

{{< table caption = "Do and Don't - Use active voice" >}}
Do | Don't
:--| :-----
You can explore the API using a browser. | The API can be explored using a browser.
The YAML file specifies the replica count. | The replica count is specified in the YAML file.
{{< /table >}}

Exception: use passive voice if active voice leads to an awkward construction.

### Use simple and direct language

Use simple and direct language. Avoid using unnecessary phrases or
expressions, such as "please".

{{< table caption = "Do and Don't - Use simple and direct language" >}}
Do | Don't
:--| :-----
To create a ReplicaSet, ... | In order to create a ReplicaSet, ...
See the configuration file. | Please see the configuration file.
View the Pods. | With this next command, we'll view the Pods.
{{< /table >}}

### Address the reader as "you"

{{< table caption = "Do and Don't - Addressing the reader" >}}
Do | Don't
:--| :-----
You can create a Deployment by ... | We'll create a Deployment by ...
In the preceding output, you can see ... | In the preceding output, we saw that ...
{{< /table >}}

### Avoid Latin phrases

Prefer English terms over Latin abbreviations.

{{< table caption = "Do and Don't - Avoid Latin phrases" >}}
Do | Don't
:--| :-----
For example, ... | e.g., ...
That is, ... | i.e., ...
{{< /table >}}

Exception: use "etc." for et cetera.

## Patterns to avoid

### Avoid using "we"

Using "we" in a sentence can be confusing, because the reader might not know
whether they're part of the "we" you're describing.

{{< table caption = "Do and Don't - Patterns to avoid" >}}
Do | Don't
:--| :-----
Version 1.4 includes ... | In version 1.4, we have added ...
Kubernetes provides a new feature for ... | We provide a new feature for ...
This page teaches you how to use Pods. | In this page, we are going to learn about Pods.
{{< /table >}}

### Avoid jargon and idioms

Some readers speak English as a second language. Avoid jargon and idioms to
help them understand better.

{{< table caption = "Do and Don't - Avoid jargon and idioms" >}}
Do | Don't
:--| :-----
Internally, ... | Under the hood, ...
Create a new cluster. | Turn up a new cluster.
{{< /table >}}

### Avoid statements about the future

Avoid making promises or giving hints about the future. If you need to talk
about a feature in alpha state, put the text under a heading that identifies
the information as alpha.

An exception to this rule is documentation about deprecations that will become
removals in a future version. One example of documentation like this is the
[Deprecated API migration guide](/docs/reference/using-api/deprecation-guide/).

### Avoid statements that will soon be out of date

Avoid words like "currently" and "new". A feature that is new today might not
be considered new in a few months.

{{< table caption = "Do and Don't - Avoid statements that will soon be out of date" >}}
Do | Don't
:--| :-----
In version 1.4, ... | In the current version, ...
The Federation feature provides ... | The new Federation feature provides ...
{{< /table >}}

### Avoid words that assume a specific level of understanding

Avoid words such as "just", "simply", "easy", "easily", or "simple". These
words do not add value.

{{< table caption = "Do and Don't - Avoid insensitive words" >}}
Do | Don't
:--| :-----
Include one command in ... | Include just one command in ...
Run the container ... | Simply run the container ...
You can remove ... | You can easily remove ...
These steps ... | These simple steps ...
{{< /table >}}

## {{% heading "whatsnext" %}}

* Learn about [writing a new topic](/docs/contribute/style/write-new-topic/).
* Learn about [using page content types](/docs/contribute/style/page-content-types/).
* Learn about [opening a pull request](/docs/contribute/new-content/open-a-pr/).
@ -4,7 +4,7 @@ weight: 25
layout: cve-feed
---

{{< feature-state for_k8s_version="v1.25" state="alpha" >}}
{{< feature-state for_k8s_version="v1.27" state="beta" >}}

This is a community-maintained list of official CVEs announced by the Kubernetes Security Response Committee. See [Kubernetes Security and Disclosure Information](/docs/reference/issues-security/security/) for more details.
@ -0,0 +1,205 @@
---
title: Use Port Forwarding to Access Applications in a Cluster
content_type: task
weight: 40
min-kubernetes-server-version: v1.10
---

<!-- overview -->

This page shows how to use `kubectl port-forward` to connect to a MongoDB server running in a Kubernetes cluster. This type of connection can be useful for database debugging.

## {{% heading "prerequisites" %}}

* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
* Install the [MongoDB Shell](https://www.mongodb.com/try/download/shell).

<!-- steps -->

## Creating the MongoDB deployment and service

1. Create a Deployment that runs MongoDB:

```shell
kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml
```

The output of a successful command verifies that the Deployment was created:

```
deployment.apps/mongo created
```

View the pod status to check that it is ready:

```shell
kubectl get pods
```

The output displays the pod created:

```
NAME                     READY   STATUS    RESTARTS   AGE
mongo-75f59d57f4-4nd6q   1/1     Running   0          2m4s
```

View the Deployment's status:

```shell
kubectl get deployment
```

The output displays that the Deployment was created:

```
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
mongo   1/1     1            1           2m21s
```

The Deployment automatically manages a ReplicaSet.
View the ReplicaSet status using:

```shell
kubectl get replicaset
```

The output is similar to this:

```
NAME               DESIRED   CURRENT   READY   AGE
mongo-75f59d57f4   1         1         1       3m12s
```

2. Create a Service to expose MongoDB on the network:

```shell
kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml
```

The output of a successful command verifies that the Service was created:

```
service/mongo created
```

Check the Service created:

```shell
kubectl get service mongo
```

The output displays the Service created:

```
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
mongo   ClusterIP   10.96.41.183   <none>        27017/TCP   11s
```

3. Verify that the MongoDB server is running in the Pod, and listening on port 27017:

```shell
# Change mongo-75f59d57f4-4nd6q to the name of the Pod
kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
```

The output displays the port for MongoDB in that Pod:

```
27017
```

27017 is the TCP port allocated to MongoDB on the internet.

## Forward a local port to a port on the Pod

1. `kubectl port-forward` allows using a resource name, such as a pod name, to select a matching pod to port forward to.

```shell
# Change mongo-75f59d57f4-4nd6q to the name of the Pod
kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017
```

which is the same as

```shell
kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017
```

or

```shell
kubectl port-forward deployment/mongo 28015:27017
```

or

```shell
kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017
```

or

```shell
kubectl port-forward service/mongo 28015:27017
```

Any of the above commands works. The output is similar to this:

```
Forwarding from 127.0.0.1:28015 -> 27017
Forwarding from [::1]:28015 -> 27017
```

{{< note >}}
`kubectl port-forward` does not return. To continue with the exercises, you will need to open another terminal.
{{< /note >}}

2. Start the MongoDB command-line interface:

```shell
mongosh --port 28015
```

3. At the MongoDB command-line prompt, enter the `ping` command:

```
db.runCommand( { ping: 1 } )
```

A successful ping request returns:

```
{ ok: 1 }
```
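Not part of the original task: a quick read/write smoke test you could run from the same `mongosh` session; the `test` database and `debug` collection here are arbitrary examples:

```
// Hypothetical smoke test: write one document and read it back
use test
db.debug.insertOne({ status: "reachable" })
db.debug.findOne()
```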
### Optionally let kubectl choose the local port {#let-kubectl-choose-local-port}

If you don't need a specific local port, you can let `kubectl` choose and reserve the local port, and thereby relieve yourself from having to manage local port conflicts, with the slightly simpler syntax:

```shell
kubectl port-forward deployment/mongo :27017
```

The `kubectl` tool finds a local port number that is not in use (avoiding low port numbers, because these might be used by other applications). The output is similar to:

```
Forwarding from 127.0.0.1:63753 -> 27017
Forwarding from [::1]:63753 -> 27017
```

<!-- discussion -->

## Discussion

Connections made to local port 28015 are forwarded to port 27017 of the Pod that is running the MongoDB server. With this connection in place, you can use your local workstation to debug the database that is running in the Pod.

{{< note >}}
`kubectl port-forward` is implemented for TCP ports only.
Support for the UDP protocol is tracked in
[issue 47862](https://github.com/kubernetes/kubernetes/issues/47862).
{{< /note >}}

## {{% heading "whatsnext" %}}

Learn more about [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands/#port-forward).
@ -0,0 +1,144 @@
---
title: Configure a kubelet image credential provider
description: Configure the kubelet's image credential provider plugin
content_type: task
min-kubernetes-server-version: v1.26
weight: 120
---

{{< feature-state for_k8s_version="v1.26" state="stable" >}}

<!-- overview -->

Starting from Kubernetes v1.20, the kubelet can dynamically retrieve credentials for a container image registry using exec plugins. The kubelet and the exec plugin communicate through stdio (stdin, stdout, and stderr) using versioned Kubernetes APIs. These plugins allow the kubelet to request credentials for a container registry dynamically, as opposed to storing static credentials on disk. For example, the plugin may talk to a local metadata server to retrieve short-lived credentials for an image that is being pulled by the kubelet.

You may be interested in using this capability if any of the below are true:

* API calls to a cloud provider service are required to retrieve authentication information for a registry.
* Credentials have short expiration times and requesting new credentials frequently is required.
* Storing registry credentials on disk or in `imagePullSecrets` is not acceptable.

This guide demonstrates how to configure the kubelet's image credential provider plugin mechanism.

## {{% heading "prerequisites" %}}

* You need a Kubernetes cluster with nodes that support kubelet credential provider plugins. This support is available in Kubernetes {{< skew currentVersion >}}; Kubernetes v1.24 and v1.25 included this as a beta feature, enabled by default.

* A working implementation of a credential provider exec plugin. You can build your own plugin or use one provided by cloud providers.

{{< version-check >}}

<!-- steps -->

## Installing Plugins on Nodes

A credential provider plugin is an executable binary that will be run by the kubelet. Ensure that the plugin binary exists on every node in your cluster and is stored in a known directory. The directory will be required later when configuring the kubelet flags.

## Configuring the Kubelet

In order to use this feature, the kubelet expects two flags to be set:

* `--image-credential-provider-config` - the path to the credential provider plugin config file.
* `--image-credential-provider-bin-dir` - the path to the directory where the credential provider plugin binaries are located.
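As a minimal sketch of how these flags might be wired in (the file paths below are assumptions; on kubeadm-provisioned nodes a systemd drop-in like this is one common approach, but adjust for your setup):

```
# /etc/systemd/system/kubelet.service.d/20-credential-provider.conf (hypothetical path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml --image-credential-provider-bin-dir=/usr/local/bin/credential-providers"
```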
### Configure a kubelet credential provider

The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins should be invoked for which container images. Here's an example configuration file you may end up using if you are using the [ECR](https://aws.amazon.com/ecr/)-based plugin:

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
# providers is a list of credential provider helper plugins that will be enabled by the kubelet.
# Multiple providers may match against a single image, in which case credentials
# from all providers will be returned to the kubelet. If multiple providers are called
# for a single image, the results are combined. If providers return overlapping
# auth keys, the value from the provider earlier in this list is used.
providers:
  # name is the required name of the credential provider. It must match the name of the
  # provider executable as seen by the kubelet. The executable must be in the kubelet's
  # bin directory (set by the --image-credential-provider-bin-dir flag).
  - name: ecr
    # matchImages is a required list of strings used to match against images in order to
    # determine if this provider should be invoked. If one of the strings matches the
    # requested image from the kubelet, the plugin will be invoked and given a chance
    # to provide credentials. Images are expected to contain the registry domain
    # and URL path.
    #
    # Each entry in matchImages is a pattern which can optionally contain a port and a path.
    # Globs can be used in the domain, but not in the port or the path. Globs are supported
    # as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level domains such as 'k8s.*'.
    # Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only
    # match a single subdomain segment, so `*.io` does **not** match `*.k8s.io`.
    #
    # A match exists between an image and a matchImages entry when all of the below are true:
    # - Both contain the same number of domain parts and each part matches.
    # - The URL path of a matchImages entry must be a prefix of the target image URL path.
    # - If the matchImages entry contains a port, then the port must match in the image as well.
    #
    # Example values of matchImages:
    # - 123456789.dkr.ecr.us-east-1.amazonaws.com
    # - *.azurecr.io
    # - gcr.io
    # - *.*.registry.io
    # - registry.io:8080/path
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
      - "*.dkr.ecr.*.amazonaws.cn"
      - "*.dkr.ecr-fips.*.amazonaws.com"
      - "*.dkr.ecr.us-iso-east-1.c2s.ic.gov"
      - "*.dkr.ecr.us-isob-east-1.sc2s.sgov.gov"
    # defaultCacheDuration is the default duration the plugin will cache credentials in-memory
    # if a cache duration is not provided in the plugin response. This field is required.
    defaultCacheDuration: "12h"
    # Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse
    # MUST use the same encoding version as the input. Current supported values are:
    # - credentialprovider.kubelet.k8s.io/v1
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    # Arguments to pass to the command when executing it.
    # +optional
    args:
      - get-credentials
    # Env defines additional environment variables to expose to the process. These
    # are unioned with the host's environment, as well as variables client-go uses
    # to pass arguments to the plugin.
    # +optional
    env:
      - name: AWS_PROFILE
        value: example_profile
```

The `providers` field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:

* `name`: the name of the plugin, which MUST match the name of the executable binary that exists in the directory passed into `--image-credential-provider-bin-dir`.
* `matchImages`: a list of strings used to match against images in order to determine if this provider should be invoked. More on this below.
* `defaultCacheDuration`: the default duration the kubelet will cache credentials in-memory if a cache duration was not specified by the plugin.
* `apiVersion`: the API version that the kubelet and the exec plugin will use when communicating.

Each credential provider can also be given optional arguments and environment variables. Consult the plugin implementors to determine what set of arguments and environment variables are required for a given plugin.

#### Configure image matching

The `matchImages` field of each credential provider is used by the kubelet to determine whether a plugin should be invoked
for a given image that a Pod is using. Each entry in `matchImages` is an image pattern which can optionally contain a port and a path.
Globs can be used in the domain, but not in the port or the path. Globs are supported as subdomains like `*.k8s.io` or `k8s.*.io`,
and top-level domains such as `k8s.*`. Matching partial subdomains like `app*.k8s.io` is also supported. Each glob can only match
a single subdomain segment, so `*.io` does NOT match `*.k8s.io`.

A match exists between an image name and a `matchImages` entry when all of the below are true:

* Both contain the same number of domain parts and each part matches.
* The URL path of the match image must be a prefix of the target image URL path.
* If the matchImages entry contains a port, then the port must match in the image as well.

Some example values of `matchImages` patterns are:

* `123456789.dkr.ecr.us-east-1.amazonaws.com`
* `*.azurecr.io`
* `gcr.io`
* `*.*.registry.io`
* `foo.registry.io:8080/path`

## {{% heading "whatsnext" %}}

* Read the details about `CredentialProviderConfig` in the [kubelet configuration API (v1) reference](/docs/reference/config-api/kubelet-config.v1/).
* Read the [kubelet credential provider API (v1) reference](/docs/reference/config-api/kubelet-credentialprovider.v1/).
@ -0,0 +1,71 @@
---
title: Limit Storage Consumption
content_type: task
weight: 240
---

<!-- overview -->

This example demonstrates how to limit the amount of storage consumed in a namespace.

The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/), [LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/), and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).

## {{% heading "prerequisites" %}}

* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

## Scenario: Limiting Storage Consumption

The cluster admin is operating a cluster on behalf of a user population, and the admin wants to control how much storage a single namespace can consume in order to control cost.

The admin would like to limit:

1. The number of persistent volume claims in a namespace
2. The amount of storage each claim can request
3. The amount of cumulative storage the namespace can have

## LimitRange to limit requests for storage

Adding a LimitRange to a namespace enforces minimum and maximum sizes for storage requests. Storage is requested via PersistentVolumeClaim. The admission controller that enforces limit ranges will reject any PVC that is above or below the values set by the admin.

In this example, a PVC requesting 10Gi of storage would be rejected because it exceeds the 2Gi maximum.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: storagelimits
spec:
  limits:
    - type: PersistentVolumeClaim
      max:
        storage: 2Gi
      min:
        storage: 1Gi
```
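For instance, with the LimitRange above in place, a claim like the following sketch would be rejected at admission time (the claim name is hypothetical, and the exact error text varies by Kubernetes version):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: too-big-claim   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi     # above the 2Gi maximum, so the admission controller rejects it
```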
Minimum storage requests are used when the underlying storage provider requires certain minimums. For example, AWS EBS volumes have a 1Gi minimum requirement.

## StorageQuota to limit PVC count and cumulative storage capacity

Admins can limit the number of PVCs in a namespace as well as the cumulative capacity of those PVCs. New PVCs that exceed either maximum value will be rejected.

In this example, a 6th PVC in the namespace would be rejected because it exceeds the maximum count of 5. Alternatively, a 5Gi maximum quota, when combined with the 2Gi maximum limit above, cannot have 3 PVCs where each has 2Gi. That would be 6Gi requested for a namespace capped at 5Gi.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storagequota
spec:
  hard:
    persistentvolumeclaims: "5"
    requests.storage: "5Gi"
```
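To watch consumption against these limits, `kubectl describe quota` shows used versus hard values (the namespace name here is an assumption):

```shell
kubectl describe resourcequota storagequota --namespace=example
```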
<!-- discussion -->

## Summary

A LimitRange can put a ceiling on how much storage is requested, while a ResourceQuota can effectively cap the storage consumed by a namespace through claim counts and cumulative storage capacity. This allows a cluster admin to plan their cluster's storage budget without risk of any one project going over its allotment.
@ -0,0 +1,287 @@
---
title: Configure a Pod to Use a PersistentVolume for Storage
content_type: task
weight: 60
---

<!-- overview -->

This page shows you how to configure a Pod to use a
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}
for storage.
Here is a summary of the process:

1. You, as cluster administrator, create a PersistentVolume backed by physical storage. You do not associate the volume with any Pod.

1. You, now taking the role of a developer / cluster user, create a PersistentVolumeClaim that is automatically bound to a suitable PersistentVolume.

1. You create a Pod that uses the above PersistentVolumeClaim for storage.

## {{% heading "prerequisites" %}}

* You need to have a Kubernetes cluster that has only one Node, and the
  {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} command-line tool configured to communicate with your cluster. If you
  do not already have a single-node cluster, you can create one by using
  [Minikube](https://minikube.sigs.k8s.io/docs/).

* Familiarize yourself with the material in
  [Persistent Volumes](/docs/concepts/storage/persistent-volumes/).

<!-- steps -->

## Create an index.html file on your Node

Open a shell to the single Node in your cluster. How you open a shell depends on how
you set up your cluster. For example, if you are using Minikube,
you can open a shell to your Node by entering `minikube ssh`.

In your shell on that Node, create a `/mnt/data` directory:

```shell
# This assumes that your Node uses "sudo" to run commands
# as the superuser
sudo mkdir /mnt/data
```

In the `/mnt/data` directory, create an `index.html` file:

```shell
# Again assuming that your Node uses "sudo" to run commands
# as the superuser
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
```

{{< note >}}
If your Node uses a tool for superuser access other than `sudo`, you can
usually make this work if you replace `sudo` with the name of the other tool.
{{< /note >}}

Test that the `index.html` file exists:

```shell
cat /mnt/data/index.html
```

The output should be:
```
Hello from Kubernetes storage
```

You can now close the shell to your Node.

## Create a PersistentVolume

In this exercise, you create a *hostPath* PersistentVolume. Kubernetes supports
`hostPath` for development and testing on a single-node cluster. A `hostPath`
PersistentVolume uses a file or directory on the Node to emulate network-attached storage.

In a production cluster, you would not use `hostPath`. Instead a cluster administrator
would provision a network resource like a Google Compute Engine persistent disk,
an NFS share, or an Amazon Elastic Block Store volume. Cluster administrators can also
use [StorageClasses](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageclass-v1-storage)
to set up [dynamic provisioning](https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes).

Here is the configuration file for the `hostPath` PersistentVolume:

{{< codenew file="pods/storage/pv-volume.yaml" >}}

The configuration file specifies that the volume is at `/mnt/data` on the cluster's Node.
The configuration also specifies a size of 10 gibibytes and an access mode of
`ReadWriteOnce`, which means the volume can be mounted as read-write by a single
Node. It defines the [StorageClass name](/docs/concepts/storage/persistent-volumes/#class)
`manual` for the PersistentVolume, which will be used to bind
PersistentVolumeClaim requests to this PersistentVolume.

Create the PersistentVolume:

```shell
kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml
```

View information about the PersistentVolume:

```shell
kubectl get pv task-pv-volume
```

The output shows that the PersistentVolume has a `STATUS` of `Available`. This
means it has not yet been bound to a PersistentVolumeClaim.

```
NAME             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
task-pv-volume   10Gi       RWO           Retain          Available             manual                   4s
```

## Create a PersistentVolumeClaim

The next step is to create a PersistentVolumeClaim. Pods use PersistentVolumeClaims
to request physical storage. In this exercise, you create a PersistentVolumeClaim
that requests a volume of at least three gibibytes with read-write access for
at least one Node.

Here is the configuration file for the PersistentVolumeClaim:

{{< codenew file="pods/storage/pv-claim.yaml" >}}

Create the PersistentVolumeClaim:

```shell
kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml
```

After you create the PersistentVolumeClaim, the Kubernetes control plane looks
for a PersistentVolume that satisfies the claim's requirements. If the control
plane finds a suitable PersistentVolume with the same StorageClass,
it binds the claim to the volume.

Look again at the PersistentVolume:

```shell
kubectl get pv task-pv-volume
```

Now the output shows a `STATUS` of `Bound`.

```
NAME             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                   STORAGECLASS   REASON    AGE
task-pv-volume   10Gi       RWO           Retain          Bound     default/task-pv-claim   manual                   2m
```

Look at the PersistentVolumeClaim:

```shell
kubectl get pvc task-pv-claim
```

The output shows that the PersistentVolumeClaim is bound to your PersistentVolume,
`task-pv-volume`.

```
NAME            STATUS    VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound     task-pv-volume   10Gi       RWO           manual         30s
```

## Create a Pod

The next step is to create a Pod that uses your PersistentVolumeClaim as a volume.

Here is the configuration file for the Pod:

{{< codenew file="pods/storage/pv-pod.yaml" >}}

Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but
it does not specify a PersistentVolume. From the Pod's point of view, the claim is a volume.

Create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml
```

Verify that the container in the Pod is running:

```shell
kubectl get pod task-pv-pod
```
Open a shell to the container running in your Pod:

```shell
kubectl exec -it task-pv-pod -- /bin/bash
```

In your shell, verify that nginx is serving the `index.html` file from the
`hostPath` volume:

```shell
# Be sure to run these 3 commands inside the root shell that comes from
# running "kubectl exec" in the previous step
apt update
apt install curl
curl http://localhost/
```

The output shows the text that you wrote to the `index.html` file on the
`hostPath` volume:

```
Hello from Kubernetes storage
```

If you see that message, you have successfully configured a Pod to
use storage from a PersistentVolumeClaim.

## Clean up

Delete the Pod, the PersistentVolumeClaim, and the PersistentVolume:

```shell
kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume
```

If you don't already have a shell open to the Node in your cluster,
open a new shell the same way that you did earlier.
In the shell on your Node, remove the file and directory that you created:

```shell
# Assuming your Node uses "sudo" to run commands
# as the superuser
sudo rm /mnt/data/index.html
sudo rmdir /mnt/data
```

You can now close the shell to your Node.

## Mounting the same PersistentVolume in two places

{{< codenew file="pods/storage/pv-duplicate.yaml" >}}

You can perform a two-volume mount on your nginx container:

`/usr/share/nginx/html` for the static website
`/etc/nginx/nginx.conf` for the default config

<!-- discussion -->

## Access control

Storage configured with a group ID (GID) allows writing only by Pods using the
same GID. Mismatched or missing GIDs cause permission denied errors. To reduce
the need for coordination with users, an administrator can annotate a
PersistentVolume with a GID. Then the GID is automatically added to any Pod
that uses the PersistentVolume.

Use the `pv.beta.kubernetes.io/gid` annotation as follows:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  annotations:
    pv.beta.kubernetes.io/gid: "1234"
```

When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID
is applied to all containers in the Pod in the same way that GIDs specified in the
Pod's security context are. Every GID, whether it originates from a PersistentVolume
annotation or the Pod's specification,
is applied to the first process run in each container.
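For comparison, a GID specified directly in the Pod's security context looks like the following sketch (names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gid-demo                  # hypothetical name
spec:
  securityContext:
    supplementalGroups: [1234]    # applied to the first process in each container,
                                  # just like a GID from a PersistentVolume annotation
  containers:
    - name: app
      image: nginx
```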
{{< note >}}
When a Pod consumes a PersistentVolume, the GIDs associated with the
PersistentVolume are not present on the Pod resource itself.
{{< /note >}}

## {{% heading "whatsnext" %}}

* Learn more about [Persistent Volumes](/docs/concepts/storage/persistent-volumes/).
* Read the [Persistent Storage design document](https://git.k8s.io/design-proposals-archive/storage/persistent-storage.md).

### Reference

* [PersistentVolume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolume-v1-core)
* [`PersistentVolumeSpec`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core)
* [`PersistentVolumeClaim`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
* [`PersistentVolumeClaimSpec`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)
@ -0,0 +1,98 @@
---
title: Enforce Pod Security Standards with Namespace Labels
reviewers:
- tallclair
- liggitt
content_type: task
min-kubernetes-server-version: v1.22
---

Namespaces can be labeled to enforce the [Pod Security Standards](/docs/concepts/security/pod-security-standards). The three policies
[privileged](/docs/concepts/security/pod-security-standards/#privileged),
[baseline](/docs/concepts/security/pod-security-standards/#baseline)
and [restricted](/docs/concepts/security/pod-security-standards/#restricted)
broadly cover the security spectrum and are implemented by the
[Pod Security](/docs/concepts/security/pod-security-admission/)
{{< glossary_tooltip text="admission controller" term_id="admission-controller" >}}.

## {{% heading "prerequisites" %}}

{{% version-check %}}

- Ensure that the `PodSecurity` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features)
  is enabled.

## Requiring the `baseline` Pod Security Standard with namespace labels

This manifest defines a Namespace `my-baseline-namespace` that:

- _Blocks_ any Pods that don't satisfy the `baseline` policy requirements.
- Generates a user-facing warning and adds an audit annotation to any
  created Pod that does not meet the `restricted` policy requirements.
- Pins the versions of the `baseline` and `restricted` policies to v{{< skew currentVersion >}}.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-baseline-namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v{{< skew currentVersion >}}

    # We are setting these to our _desired_ `enforce` level.
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v{{< skew currentVersion >}}
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v{{< skew currentVersion >}}
```

## Add Labels to Existing Namespaces with `kubectl label`

{{< note >}}
When an `enforce` policy label (or version) is added or changed,
the admission plugin will test each Pod in the namespace against the new policy.
Violations are returned to the user as warnings.
{{< /note >}}

It is helpful to apply the `--dry-run` flag when initially evaluating security
profile changes for namespaces. The Pod Security Standard checks will still be
run in _dry run_ mode, giving you information about
how the new policy would treat existing Pods, without actually updating the policy.

```shell
kubectl label --dry-run=server --overwrite ns --all \
    pod-security.kubernetes.io/enforce=baseline
```

### Applying to all namespaces

If you're just getting started with the Pod Security Standards, a suitable first
step would be to configure all namespaces with audit annotations for a
stricter level such as `baseline`:

```shell
kubectl label --overwrite ns --all \
  pod-security.kubernetes.io/audit=baseline \
  pod-security.kubernetes.io/warn=baseline
```

Note that this is not setting an enforce level, so that namespaces
that haven't been explicitly evaluated can be distinguished. You can list
namespaces without an explicitly set enforce level using this command:

```shell
kubectl get namespaces --selector='!pod-security.kubernetes.io/enforce'
```

### Applying to a single namespace

You can update a specific namespace as well. This command adds the
`enforce=restricted` policy to `my-existing-namespace`, pinning the restricted
policy version to v{{< skew currentVersion >}}.

```shell
kubectl label --overwrite ns my-existing-namespace \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v{{< skew currentVersion >}}
```
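To confirm that the labels landed, showing the namespace's labels works well as a quick sanity check:

```shell
kubectl get ns my-existing-namespace --show-labels
```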
@ -0,0 +1,50 @@
---
title: "Install Tools"
description: Set up Kubernetes tools on your computer.
weight: 10
no_list: true
---

## kubectl

<!-- overview -->
The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows you to run commands against Kubernetes clusters.
You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
For more information, including a complete list of kubectl operations, see the [`kubectl` reference documentation](/docs/reference/kubectl/).

kubectl is installable on a variety of platforms such as Linux, macOS and Windows.
Find your preferred operating system below.

- [Install kubectl on Linux](/docs/tasks/tools/install-kubectl-linux)
- [Install kubectl on macOS](/docs/tasks/tools/install-kubectl-macos)
- [Install kubectl on Windows](/docs/tasks/tools/install-kubectl-windows)
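Once installed, a quick sanity check confirms that the client is on your PATH (the reported version will vary):

```shell
kubectl version --client
```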
## kind
|
||||
|
||||
O [`kind`](https://kind.sigs.k8s.io/) permite que você execute o Kubernetes no seu computador local.
|
||||
Esta ferramenta requer que você tenha o [Docker](https://docs.docker.com/get-docker/) instalado e configurado.
|
||||
|
||||
A página de [Início Rápido](https://kind.sigs.k8s.io/docs/user/quick-start/) mostra o que você precisa fazer para começar a trabalhar com o `kind`.
|
||||
|
||||
<a class="btn btn-primary" href="https://kind.sigs.k8s.io/docs/user/quick-start/" role="button" aria-label="Acesse o guia de início rápido do kind">Acesse o guia de início rápido do kind</a>
|
||||
|
||||
## minikube
|
||||
|
||||
Assim como o `kind`, o [`minikube`](https://minikube.sigs.k8s.io/) é uma ferramenta que permite executar o Kubernetes localmente.
|
||||
O `minikube` executa um cluster Kubernetes local com tudo-em-um ou com vários nós no seu computador pessoal (incluindo PCs Windows, macOS e Linux) para que você possa experimentar o Kubernetes ou para o trabalho de desenvolvimento diário.
|
||||
|
||||
Você pode seguir o [guia de início oficial](https://minikube.sigs.k8s.io/docs/start/) se o seu foco é instalar a ferramenta.
|
||||
|
||||
<a class="btn btn-primary" href="https://minikube.sigs.k8s.io/docs/start/" role="button" aria-label="Acesse o guia de início">Acesse o guia de início</a>
|
||||
|
||||
Depois de instalar o `minikube`, você pode usá-lo para executar uma [aplicação exemplo](/pt-br/docs/tutorials/hello-minikube/).
|
||||
|
||||
## kubeadm
|
||||
|
||||
Você pode usar a ferramenta {{< glossary_tooltip term_id="kubeadm" text="kubeadm" >}} para criar e gerenciar clusters Kubernetes.
|
||||
Ela executa as ações necessárias para obter um cluster mínimo viável e seguro em funcionamento de maneira amigável ao usuário.
|
||||
|
||||
[Instalando a ferramenta kubeadm](/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) mostra como instalar o kubeadm.
|
||||
Uma vez instalado, você pode usá-lo para [criar um cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).
|
||||
|
||||
<a class="btn btn-primary" href="/pt-br/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" role="button" aria-label="Acesse o guia instalando a ferramenta kubeadm">Acesse o guia instalando a ferramenta kubeadm</a>
|
|
@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        # a mount for site-data
        - name: config
          mountPath: /usr/share/nginx/html
          subPath: html
        # another mount for nginx config
        - name: config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: test-nfs-claim
@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
@ -1,4 +1,4 @@
-apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
+apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
 kind: FlowSchema
 metadata:
   name: health-for-strangers
@ -40,7 +40,7 @@ card:
<div class="row">
    <div class="col-md-9">
        <h2>How can Kubernetes help you?</h2>
        <p>Users expect modern web services to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps achieve this goal, since it lets applications be released and updated with no downtime. Kubernetes makes sure that your containerized applications run where and when you want, together with all the resources and tools they need to work. Kubernetes is a production-ready open source platform, designed with Google's accumulated experience in container orchestration and combined with the best ideas from the community.</p>
    </div>
</div>
@ -1,4 +1,4 @@
-apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
+apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
 kind: FlowSchema
 metadata:
   name: health-for-strangers
@ -198,12 +198,12 @@ validations are done here.
### AppArmor support

This version introduces the initial support for AppArmor, allowing users to load and
-unload AppArmor profiles into cluster nodes by using the new [AppArmorProfile](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/deploy/base/crds/apparmorprofile.yaml) CRD.
+unload AppArmor profiles into cluster nodes by using the new [AppArmorProfile](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/deploy/base-crds/crds/apparmorprofile.yaml) CRD.

<!--
@ -61,7 +61,7 @@ files side by side to the artifacts for verifying their integrity.
[tarballs]: https://github.com/kubernetes/kubernetes/blob/release-1.26/CHANGELOG/CHANGELOG-1.26.md#downloads-for-v1260
[binaries]: https://gcsweb.k8s.io/gcs/kubernetes-release/release/v1.26.0/bin
[sboms]: https://dl.k8s.io/release/v1.26.0/kubernetes-release.spdx
-[provenance]: https://dl.k8s.io/kubernetes-release/release/v1.26.0/provenance.json
+[provenance]: https://dl.k8s.io/release/v1.26.0/provenance.json
[cosign]: https://github.com/sigstore/cosign

<!--
|
## Additional resources {#additional-resources}

- [Signing Release Artifacts Enhancement Proposal](https://github.com/kubernetes/enhancements/issues/3031)
@ -0,0 +1,152 @@
---
layout: blog
title: "Kubernetes 1.27: Query Node Logs Using The Kubelet API"
date: 2023-04-21
slug: node-log-query-alpha
---

**Author:** Aravindh Puthiyaparambil (Red Hat)

**Translator:** Xin Li (DaoCloud)

Kubernetes 1.27 introduced a new feature called _Node log query_ that allows
viewing logs of services running on the node.

## What problem does it solve?

Cluster administrators face issues when debugging malfunctioning services
running on the node. They usually have to SSH or RDP into the node to view the
logs of the service to debug the issue. The _Node log query_ feature helps with
this scenario by allowing the cluster administrator to view the logs using
_kubectl_. This is especially useful with Windows nodes, where you run into the
issue of the node going to the ready state but containers not coming up due to
CNI misconfigurations and other issues that are not easily identifiable by
looking at the Pod status.

## How does it work?

The kubelet already has a _/var/log/_ viewer that is accessible via the node
proxy endpoint. The feature supplements this endpoint with a shim that shells
out to `journalctl`, on Linux nodes, and the `Get-WinEvent` cmdlet on Windows
nodes. It then uses the existing filters provided by the commands to allow
filtering the logs. The kubelet also uses heuristics to retrieve the logs.
If the user is not aware whether a given system service logs to a file or to the
native system logger, the heuristics first check the native operating system
logger and, if that is not available, attempt to retrieve the first logs
from `/var/log/<servicename>` or `/var/log/<servicename>.log` or
`/var/log/<servicename>/<servicename>.log`.

On Linux we assume that service logs are available via journald, and that
`journalctl` is installed. On Windows we assume that service logs are available
in the application log provider. Also note that fetching node logs is only
available if you are authorized to do so (in RBAC, that's **get** and
**create** access to `nodes/proxy`). The privileges that you need to fetch node
logs also allow elevation-of-privilege attacks, so be careful about how you
manage them.

## How do I use it?

To use the feature, ensure that the `NodeLogQuery`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is
enabled for that node, and that the kubelet configuration options
`enableSystemLogHandler` and `enableSystemLogQuery` are both set to true. You can
then query the logs from all your nodes or just a subset. Here is an example to
retrieve the kubelet service logs from a node:

```shell
# Fetch kubelet logs from a node named node-1.example
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
```

You can further filter the query to narrow down the results:

```shell
# Fetch kubelet logs from a node named node-1.example that have the word "error"
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error"
```

You can also fetch files from `/var/log/` on a Linux node:

```shell
kubectl get --raw "/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>"
```
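The endpoint accepts a few more filter parameters as well; as one sketch (see the log query documentation linked below for the full set):

```shell
# Fetch the last 50 lines of kubelet logs from a node named node-1.example
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&tailLines=50"
```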
You can read the
[documentation](/docs/concepts/cluster-administration/system-logs/#log-query)
for all the available options.

## How do I help?

Please use the feature and provide feedback by opening GitHub issues or
reaching out to us on the
[#sig-windows](https://kubernetes.slack.com/archives/C0SJ4AFB7) channel on the
Kubernetes Slack or the SIG Windows
[mailing list](https://groups.google.com/g/kubernetes-sig-windows).
@ -0,0 +1,512 @@
---
layout: blog
title: "Kubernetes 1.27: Introducing An API For Volume Group Snapshots"
date: 2023-05-08
slug: kubernetes-1-27-volume-group-snapshot-alpha
---

**Author:** Xing Yang (VMware)

**Translator:** [Gu Xin](https://github.com/asa3311)

Volume group snapshot is introduced as an Alpha feature in Kubernetes v1.27.
This feature introduces a Kubernetes API that allows users to take crash consistent
snapshots for multiple volumes together. It uses a label selector to group multiple
`PersistentVolumeClaims` for snapshotting.
This new feature is only supported for [CSI](https://kubernetes-csi.github.io/docs/) volume drivers.

## An overview of volume group snapshots

Some storage systems provide the ability to create a crash consistent snapshot of
multiple volumes. A group snapshot represents "copies" from multiple volumes that
are taken at the same point-in-time. A group snapshot can be used either to rehydrate
new volumes (pre-populated with the snapshot data) or to restore existing volumes to
a previous state (represented by the snapshots).

## Why add volume group snapshots to Kubernetes?

The Kubernetes volume plugin system already provides a powerful abstraction that
automates the provisioning, attaching, mounting, resizing, and snapshotting of block
and file storage.

Underpinning all these features is the Kubernetes goal of workload portability:
Kubernetes aims to create an abstraction layer between distributed applications and
underlying clusters so that applications can be agnostic to the specifics of the
cluster they run on and application deployment requires no cluster specific knowledge.

There is already a [VolumeSnapshot](/docs/concepts/storage/volume-snapshots/) API
that provides the ability to take a snapshot of a persistent volume to protect against
data loss or data corruption. However, there are other snapshotting functionalities
not covered by the VolumeSnapshot API.

Some storage systems support consistent group snapshots that allow a snapshot to be
taken from multiple volumes at the same point-in-time to achieve write order consistency.
This can be useful for applications that contain multiple volumes. For example,
an application may have data stored in one volume and logs stored in another volume.
If snapshots for the data volume and the logs volume are taken at different times,
the application will not be consistent and will not function properly if it is restored
from those snapshots when a disaster strikes.

It is true that you can quiesce the application first, take an individual snapshot from
each volume that is part of the application one after the other, and then unquiesce the
application after all the individual snapshots are taken. This way, you would get
application consistent snapshots.

However, sometimes it may not be possible to quiesce an application, or the application
quiesce can be too expensive so you want to do it less frequently. Taking individual
snapshots one after another may also take longer time compared to taking a consistent
group snapshot. Some users may not want to do application quiesce very often for these
reasons. For example, a user may want to run weekly backups with application quiesce
and nightly backups without application quiesce but with consistent group support which
provides crash consistency across all volumes in the group.

## Kubernetes Volume Group Snapshots API

Kubernetes Volume Group Snapshots introduce [three new API
objects](https://github.com/kubernetes-csi/external-snapshotter/blob/master/client/apis/volumegroupsnapshot/v1alpha1/types.go)
for managing snapshots:

`VolumeGroupSnapshot`
: Created by a Kubernetes user (or perhaps by your own automation) to request
creation of a volume group snapshot for multiple persistent volume claims.
It contains information about the volume group snapshot operation such as the
timestamp when the volume group snapshot was taken and whether it is ready to use.
The creation and deletion of this object represents a desire to create or delete a
cluster resource (a group snapshot).

`VolumeGroupSnapshotContent`
: Created by the snapshot controller for a dynamically created VolumeGroupSnapshot.
It contains information about the volume group snapshot including the volume group
snapshot ID.
This object represents a provisioned resource on the cluster (a group snapshot).
The VolumeGroupSnapshotContent object binds to the VolumeGroupSnapshot for which it
was created with a one-to-one mapping.

`VolumeGroupSnapshotClass`
: Created by cluster administrators to describe how volume group snapshots should be
created, including the driver information, the deletion policy, etc.

These three API kinds are defined as CustomResourceDefinitions (CRDs).
These CRDs must be installed in a Kubernetes cluster for a CSI Driver to support
volume group snapshots.
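For illustration, a VolumeGroupSnapshotClass might look like the following sketch (the driver shown is the CSI hostpath example driver, an assumption; the class name matches the one referenced later in this post):

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
kind: VolumeGroupSnapshotClass
metadata:
  name: csi-groupSnapclass
driver: hostpath.csi.k8s.io   # assumption: replace with your CSI driver's name
deletionPolicy: Delete
```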
||||
<!--
|
||||
## How do I use Kubernetes Volume Group Snapshots
|
||||
|
||||
Volume group snapshots are implemented in the
|
||||
[external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter) repository. Implementing volume
|
||||
group snapshots meant adding or changing several components:
|
||||
-->
|
||||
## 如何使用 Kubernetes 卷组快照
|
||||
|
||||
卷组快照是在 [external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter)
|
||||
仓库中实现的。实现卷组快照意味着添加或更改几个组件:
|
||||
|
||||
<!--
|
||||
* Added new CustomResourceDefinitions for VolumeGroupSnapshot and two supporting APIs.
|
||||
* Volume group snapshot controller logic is added to the common snapshot controller.
|
||||
* Volume group snapshot validation webhook logic is added to the common snapshot validation webhook.
|
||||
* Adding logic to make CSI calls into the snapshotter sidecar controller.
|
||||
-->
|
||||
* 添加了新的 CustomResourceDefinition 用于 VolumeGroupSnapshot 和两个辅助性 API。
|
||||
* 向通用快照控制器中添加卷组快照控制器的逻辑。
|
||||
* 向通用快照验证 webhook 中添加卷组快照验证 webhook 的逻辑。
|
||||
* 添加逻辑以便在快照 sidecar 控制器中进行 CSI 调用。
|
||||
|
||||
<!--
|
||||
The volume snapshot controller, CRDs, and validation webhook are deployed once per
|
||||
cluster, while the sidecar is bundled with each CSI driver.
|
||||
|
||||
Therefore, it makes sense to deploy the volume snapshot controller, CRDs, and validation
|
||||
webhook as a cluster addon. I strongly recommend that Kubernetes distributors
|
||||
bundle and deploy the volume snapshot controller, CRDs, and validation webhook as part
|
||||
of their Kubernetes cluster management process (independent of any CSI Driver).
|
||||
-->
|
||||
每个集群只部署一次卷快照控制器、CRD 和验证 webhook,
|
||||
而 sidecar 则与每个 CSI 驱动程序一起打包。
|
||||
|
||||
因此,将卷快照控制器、CRD 和验证 webhook 作为集群插件部署是合理的。
|
||||
我强烈建议 Kubernetes 发行版的厂商将卷快照控制器、
|
||||
CRD 和验证 webhook 打包并作为他们的 Kubernetes 集群管理过程的一部分(独立于所有 CSI 驱动)。
|
||||
|
||||
<!--
|
||||
### Creating a new group snapshot with Kubernetes
|
||||
|
||||
Once a VolumeGroupSnapshotClass object is defined and you have volumes you want to
|
||||
snapshot together, you may request a new group snapshot by creating a VolumeGroupSnapshot
|
||||
object.
|
||||
-->
|
||||
### 使用 Kubernetes 创建新的卷组快照
|
||||
|
||||
一旦定义了一个 VolumeGroupSnapshotClass 对象,并且你有想要一起生成快照的卷,
|
||||
就可以通过创建一个 VolumeGroupSnapshot 对象来请求一个新的卷组快照。
|
||||
|
||||
<!--
|
||||
The source of the group snapshot specifies whether the underlying group snapshot
|
||||
should be dynamically created or if a pre-existing VolumeGroupSnapshotContent
|
||||
should be used.
|
||||
|
||||
A pre-existing VolumeGroupSnapshotContent is created by a cluster administrator.
|
||||
It contains the details of the real volume group snapshot on the storage system which
|
||||
is available for use by cluster users.
|
||||
-->
|
||||
卷组快照的源指定了底层的卷组快照是应该动态创建,
|
||||
还是应该使用预先存在的 VolumeGroupSnapshotContent。
|
||||
|
||||
预先存在的 VolumeGroupSnapshotContent 由集群管理员创建。
|
||||
其中包含了在存储系统上实际卷组快照的细节,这些卷组快照可供集群用户使用。
|
||||
|
||||
<!--
|
||||
One of the following members in the source of the group snapshot must be set.
|
||||
|
||||
* `selector` - a label query over PersistentVolumeClaims that are to be grouped
|
||||
together for snapshotting. This labelSelector will be used to match the label
|
||||
added to a PVC.
|
||||
* `volumeGroupSnapshotContentName` - specifies the name of a pre-existing
|
||||
VolumeGroupSnapshotContent object representing an existing volume group snapshot.
|
||||
|
||||
In the following example, there are two PVCs.
|
||||
-->
|
||||
在卷组快照源中,必须设置以下成员之一。
|
||||
|
||||
* `selector` - 针对要一起生成快照的 PersistentVolumeClaims 的标签查询。
|
||||
该 labelSelector 将用于匹配添加到 PVC 上的标签。
|
||||
* `volumeGroupSnapshotContentName` - 指定一个现有的 VolumeGroupSnapshotContent
|
||||
对象的名称,该对象代表着一个已存在的卷组快照。
|
||||
|
||||
在以下示例中,有两个 PVC。
|
||||
|
||||
```console
|
||||
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
|
||||
pvc-0 Bound pvc-a42d7ea2-e3df-11ed-b5ea-0242ac120002 1Gi RWO 48s
|
||||
pvc-1 Bound pvc-a42d81b8-e3df-11ed-b5ea-0242ac120002 1Gi RWO 48s
|
||||
```
|
||||
|
||||
<!--
|
||||
Label the PVCs.
|
||||
-->
|
||||
标记 PVC。
|
||||
|
||||
```console
|
||||
% kubectl label pvc pvc-0 group=myGroup
|
||||
persistentvolumeclaim/pvc-0 labeled
|
||||
|
||||
% kubectl label pvc pvc-1 group=myGroup
|
||||
persistentvolumeclaim/pvc-1 labeled
|
||||
```
|
||||
|
||||
<!--
|
||||
For dynamic provisioning, a selector must be set so that the snapshot controller can
|
||||
find PVCs with the matching labels to be snapshotted together.
|
||||
-->
|
||||
对于动态制备,必须设置一个选择算符,以便快照控制器可以找到带有匹配标签的 PVC,一起进行快照。
|
||||
|
||||
```yaml
|
||||
apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
|
||||
kind: VolumeGroupSnapshot
|
||||
metadata:
|
||||
name: new-group-snapshot-demo
|
||||
namespace: demo-namespace
|
||||
spec:
|
||||
volumeGroupSnapshotClassName: csi-groupSnapclass
|
||||
source:
|
||||
selector:
|
||||
matchLabels:
|
||||
group: myGroup
|
||||
```
|
||||
|
||||
<!--
|
||||
In the VolumeGroupSnapshot spec, a user can specify the VolumeGroupSnapshotClass which
|
||||
has the information about which CSI driver should be used for creating the group snapshot.
|
||||
|
||||
Two individual volume snapshots will be created as part of the volume group snapshot creation.
|
||||
-->
|
||||
在 VolumeGroupSnapshot 的规约中,用户可以指定 VolumeGroupSnapshotClass,
|
||||
其中包含应使用哪个 CSI 驱动程序来创建卷组快照的信息。
|
||||
|
||||
作为创建卷组快照的一部分,将创建两个单独的卷快照。
|
||||
|
||||
```console
|
||||
snapshot-62abb5db7204ac6e4c1198629fec533f2a5d9d60ea1a25f594de0bf8866c7947-2023-04-26-2.20.4
|
||||
snapshot-2026811eb9f0787466171fe189c805a22cdb61a326235cd067dc3a1ac0104900-2023-04-26-2.20.4
|
||||
```
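
<!--
To confirm they were created, you can list the VolumeSnapshot objects in the
namespace (usage sketch; the actual output depends on your cluster):
-->
要确认这些快照已被创建,可以列出该命名空间中的 VolumeSnapshot 对象
(用法示意;实际输出取决于你的集群):

```console
% kubectl get volumesnapshot -n demo-namespace
```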
|
||||
|
||||
<!--
|
||||
### How to use group snapshot for restore in Kubernetes
|
||||
|
||||
At restore time, the user can request a new PersistentVolumeClaim to be created from
|
||||
a VolumeSnapshot object that is part of a VolumeGroupSnapshot. This will trigger
|
||||
provisioning of a new volume that is pre-populated with data from the specified
|
||||
snapshot. The user should repeat this until all volumes are created from all the
|
||||
snapshots that are part of a group snapshot.
|
||||
-->
|
||||
### 如何在 Kubernetes 中使用卷组快照进行恢复
|
||||
|
||||
在恢复时,用户可以基于属于某 VolumeGroupSnapshot 的某个 VolumeSnapshot 对象,
|
||||
请求创建一个新的 PersistentVolumeClaim。这将触发新卷的制备,
|
||||
并使用指定快照中的数据进行预填充。用户应重复此步骤,直到从卷组快照所包含的全部快照创建出所有卷。
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: pvc0-restore
|
||||
namespace: demo-namespace
|
||||
spec:
|
||||
storageClassName: csi-hostpath-sc
|
||||
dataSource:
|
||||
name: snapshot-62abb5db7204ac6e4c1198629fec533f2a5d9d60ea1a25f594de0bf8866c7947-2023-04-26-2.20.4
|
||||
kind: VolumeSnapshot
|
||||
apiGroup: snapshot.storage.k8s.io
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
```
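
<!--
To restore the second volume, repeat the request with the other snapshot from
the listing above; a sketch (the PVC name is illustrative):
-->
要恢复第二个卷,可以使用前面列表中的另一个快照重复此请求;示意如下
(PVC 名称仅作演示):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1-restore   # illustrative name
  namespace: demo-namespace
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: snapshot-2026811eb9f0787466171fe189c805a22cdb61a326235cd067dc3a1ac0104900-2023-04-26-2.20.4
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```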
|
||||
|
||||
<!--
|
||||
## As a storage vendor, how do I add support for group snapshots to my CSI driver?
|
||||
|
||||
To implement the volume group snapshot feature, a CSI driver **must**:
|
||||
|
||||
* Implement a new group controller service.
|
||||
* Implement group controller RPCs: `CreateVolumeGroupSnapshot`, `DeleteVolumeGroupSnapshot`, and `GetVolumeGroupSnapshot`.
|
||||
* Add group controller capability `CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT`.
|
||||
-->
|
||||
## 作为一个存储供应商,我应该如何为我的 CSI 驱动程序添加对卷组快照的支持?
|
||||
|
||||
要实现卷组快照功能,CSI 驱动**必须**:
|
||||
|
||||
* 实现一个新的组控制器服务。
|
||||
* 实现组控制器的 RPC:`CreateVolumeGroupSnapshot`、`DeleteVolumeGroupSnapshot` 和 `GetVolumeGroupSnapshot`。
|
||||
* 添加组控制器能力 `CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT`。
|
||||
|
||||
<!--
|
||||
See the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md)
|
||||
and the [Kubernetes-CSI Driver Developer Guide](https://kubernetes-csi.github.io/docs/)
|
||||
for more details.
|
||||
|
||||
While Kubernetes tries to be as agnostic as possible about how to deploy a
|
||||
CSI Volume Driver, it provides a suggested mechanism to deploy a containerized CSI driver to simplify the process.
|
||||
-->
|
||||
更多详情请参阅
|
||||
[CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md)
|
||||
和 [Kubernetes-CSI 驱动程序开发指南](https://kubernetes-csi.github.io/docs/)。
|
||||
|
||||
尽管 Kubernetes 尽量不限定 CSI 卷驱动程序的部署方式,但它提供了一种建议采用的机制来部署容器化的 CSI 驱动程序,以简化这一过程。
|
||||
|
||||
<!--
|
||||
As part of this recommended deployment process, the Kubernetes team provides a number of
|
||||
sidecar (helper) containers, including the
|
||||
[external-snapshotter sidecar container](https://kubernetes-csi.github.io/docs/external-snapshotter.html)
|
||||
which has been updated to support volume group snapshot.
|
||||
-->
|
||||
作为所推荐的部署过程的一部分,Kubernetes 团队提供了许多 sidecar(辅助)容器,
|
||||
包括已经更新以支持卷组快照的
|
||||
[external-snapshotter](https://kubernetes-csi.github.io/docs/external-snapshotter.html)
|
||||
sidecar 容器。
|
||||
|
||||
<!--
|
||||
The external-snapshotter watches the Kubernetes API server for the
|
||||
`VolumeGroupSnapshotContent` object and triggers `CreateVolumeGroupSnapshot` and
|
||||
`DeleteVolumeGroupSnapshot` operations against a CSI endpoint.
|
||||
-->
|
||||
external-snapshotter 会监听 Kubernetes API 服务器上的 `VolumeGroupSnapshotContent` 对象,
|
||||
并对 CSI 端点触发 `CreateVolumeGroupSnapshot` 和 `DeleteVolumeGroupSnapshot` 操作。
|
||||
|
||||
<!--
|
||||
## What are the limitations?
|
||||
|
||||
The alpha implementation of volume group snapshots for Kubernetes has the following
|
||||
limitations:
|
||||
|
||||
* Does not support reverting an existing PVC to an earlier state represented by
|
||||
a snapshot (only supports provisioning a new volume from a snapshot).
|
||||
* No application consistency guarantees beyond any guarantees provided by the storage system
|
||||
(e.g. crash consistency). See this [doc](https://github.com/kubernetes/community/blob/master/wg-data-protection/data-protection-workflows-white-paper.md#quiesce-and-unquiesce-hooks)
|
||||
for more discussions on application consistency.
|
||||
-->
|
||||
## 有哪些限制?
|
||||
|
||||
Kubernetes 卷组快照的 Alpha 实现具有以下限制:
|
||||
|
||||
* 不支持将现有的 PVC 还原到由快照表示的较早状态(仅支持从快照创建新的卷)。
|
||||
* 除了存储系统提供的保证(例如崩溃一致性)之外,不提供应用一致性保证。
|
||||
请参阅此[文档](https://github.com/kubernetes/community/blob/master/wg-data-protection/data-protection-workflows-white-paper.md#quiesce-and-unquiesce-hooks),
|
||||
了解有关应用一致性的更多讨论。
|
||||
|
||||
<!--
|
||||
## What’s next?
|
||||
|
||||
Depending on feedback and adoption, the Kubernetes team plans to push the CSI
|
||||
Group Snapshot implementation to Beta in either 1.28 or 1.29.
|
||||
Some of the features we are interested in supporting include volume replication,
|
||||
replication group, volume placement, application quiescing, changed block tracking, and more.
|
||||
-->
|
||||
## 下一步是什么?
|
||||
|
||||
根据反馈和采用情况,Kubernetes 团队计划在 1.28 或 1.29 版本中将 CSI 卷组快照实现推进到 Beta 阶段。
|
||||
我们有兴趣支持的一些功能包括卷复制、复制组、卷位置选择、应用静默、变更块跟踪等等。
|
||||
|
||||
<!--
|
||||
## How can I learn more?
|
||||
|
||||
- The [design spec](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3476-volume-group-snapshot)
|
||||
for the volume group snapshot feature.
|
||||
- The [code repository](https://github.com/kubernetes-csi/external-snapshotter) for volume group
|
||||
snapshot APIs and controller.
|
||||
- CSI [documentation](https://kubernetes-csi.github.io/docs/) on the group snapshot feature.
|
||||
-->
|
||||
## 如何获取更多信息?
|
||||
|
||||
- 有关卷组快照功能的[设计规约](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3476-volume-group-snapshot)。
|
||||
- 卷组快照 API 和控制器的[代码仓库](https://github.com/kubernetes-csi/external-snapshotter)。
|
||||
- CSI 关于卷组快照功能的[文档](https://kubernetes-csi.github.io/docs/)。
|
||||
|
||||
<!--
|
||||
## How do I get involved?
|
||||
|
||||
This project, like all of Kubernetes, is the result of hard work by many contributors
|
||||
from diverse backgrounds working together. On behalf of SIG Storage, I would like to
|
||||
offer a huge thank you to the contributors who stepped up these last few quarters
|
||||
to help the project reach alpha:
|
||||
-->
|
||||
## 如何参与其中?
|
||||
|
||||
这个项目与整个 Kubernetes 一样,是许多不同背景的贡献者共同努力的结果。
|
||||
我代表 SIG Storage,
|
||||
向在过去几个季度中积极参与项目并帮助项目达到 Alpha 版本的贡献者们表示衷心的感谢:
|
||||
|
||||
<!--
|
||||
* Alex Meade ([ameade](https://github.com/ameade))
|
||||
* Ben Swartzlander ([bswartz](https://github.com/bswartz))
|
||||
* Humble Devassy Chirammal ([humblec](https://github.com/humblec))
|
||||
* James Defelice ([jdef](https://github.com/jdef))
|
||||
* Jan Šafránek ([jsafrane](https://github.com/jsafrane))
|
||||
* Jing Xu ([jingxu97](https://github.com/jingxu97))
|
||||
* Michelle Au ([msau42](https://github.com/msau42))
|
||||
* Niels de Vos ([nixpanic](https://github.com/nixpanic))
|
||||
* Rakshith R ([Rakshith-R](https://github.com/Rakshith-R))
|
||||
* Raunak Shah ([RaunakShah](https://github.com/RaunakShah))
|
||||
* Saad Ali ([saad-ali](https://github.com/saad-ali))
|
||||
* Thomas Watson ([rbo54](https://github.com/rbo54))
|
||||
* Xing Yang ([xing-yang](https://github.com/xing-yang))
|
||||
* Yati Padia ([yati1998](https://github.com/yati1998))
|
||||
-->
|
||||
* Alex Meade ([ameade](https://github.com/ameade))
|
||||
* Ben Swartzlander ([bswartz](https://github.com/bswartz))
|
||||
* Humble Devassy Chirammal ([humblec](https://github.com/humblec))
|
||||
* James Defelice ([jdef](https://github.com/jdef))
|
||||
* Jan Šafránek ([jsafrane](https://github.com/jsafrane))
|
||||
* Jing Xu ([jingxu97](https://github.com/jingxu97))
|
||||
* Michelle Au ([msau42](https://github.com/msau42))
|
||||
* Niels de Vos ([nixpanic](https://github.com/nixpanic))
|
||||
* Rakshith R ([Rakshith-R](https://github.com/Rakshith-R))
|
||||
* Raunak Shah ([RaunakShah](https://github.com/RaunakShah))
|
||||
* Saad Ali ([saad-ali](https://github.com/saad-ali))
|
||||
* Thomas Watson ([rbo54](https://github.com/rbo54))
|
||||
* Xing Yang ([xing-yang](https://github.com/xing-yang))
|
||||
* Yati Padia ([yati1998](https://github.com/yati1998))
|
||||
|
||||
<!--
|
||||
We also want to thank everyone else who has contributed to the project, including others
|
||||
who helped review the [KEP](https://github.com/kubernetes/enhancements/pull/1551)
|
||||
and the [CSI spec PR](https://github.com/container-storage-interface/spec/pull/519).
|
||||
|
||||
For those interested in getting involved with the design and development of CSI or
|
||||
any part of the Kubernetes Storage system, join the
|
||||
[Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
|
||||
We always welcome new contributors.
|
||||
|
||||
We also hold regular [Data Protection Working Group meetings](https://docs.google.com/document/d/15tLCV3csvjHbKb16DVk-mfUmFry_Rlwo-2uG6KNGsfw/edit#).
|
||||
New attendees are welcome to join our discussions.
|
||||
-->
|
||||
我们还要感谢其他为该项目做出贡献的人,
|
||||
包括帮助审核 [KEP](https://github.com/kubernetes/enhancements/pull/1551) 和
|
||||
[CSI 规约 PR](https://github.com/container-storage-interface/spec/pull/519) 的其他人员。
|
||||
|
||||
对于有兴趣参与 CSI 或 Kubernetes 存储系统任何部分的设计和开发的人,
|
||||
欢迎加入 [Kubernetes 存储特别兴趣小组](https://github.com/kubernetes/community/tree/master/sig-storage)(SIG)。
|
||||
我们随时欢迎新的贡献者。
|
||||
|
||||
我们还定期举行[数据保护工作组会议](https://docs.google.com/document/d/15tLCV3csvjHbKb16DVk-mfUmFry_Rlwo-2uG6KNGsfw/edit#)。
|
||||
欢迎新的与会者加入我们的讨论。
|
|
@ -0,0 +1,395 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "在边缘上玩转 seccomp 配置文件"
|
||||
date: 2023-05-18
|
||||
slug: seccomp-profiles-edge
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Having fun with seccomp profiles on the edge"
|
||||
date: 2023-05-18
|
||||
slug: seccomp-profiles-edge
|
||||
-->
|
||||
|
||||
<!--
|
||||
**Author**: Sascha Grunert
|
||||
-->
|
||||
**作者**: Sascha Grunert
|
||||
|
||||
**译者**: [Michael Yao](https://github.com/windsonsea) (DaoCloud)
|
||||
|
||||
<!--
|
||||
The [Security Profiles Operator (SPO)][spo] is a feature-rich
|
||||
[operator][operator] for Kubernetes to make managing seccomp, SELinux and
|
||||
AppArmor profiles easier than ever. Recording those profiles from scratch is one
|
||||
of the key features of this operator, which usually involves the integration
|
||||
into large CI/CD systems. Being able to test the recording capabilities of the
|
||||
operator in edge cases is one of the recent development efforts of the SPO and
|
||||
makes it excitingly easy to play around with seccomp profiles.
|
||||
-->
|
||||
[Security Profiles Operator (SPO)][spo] 是一个功能丰富的 Kubernetes [operator][operator],
|
||||
使 seccomp、SELinux 和 AppArmor 配置文件的管理变得前所未有地简单。
|
||||
从头开始记录这些配置文件是该 Operator 的关键特性之一,这通常涉及与大型 CI/CD 系统集成。
|
||||
在边缘场景中测试 Operator 的记录能力是 SPO 的最新开发工作之一,
|
||||
非常有助于轻松玩转 seccomp 配置文件。
|
||||
|
||||
<!--
|
||||
## Recording seccomp profiles with `spoc record`
|
||||
|
||||
The [v0.8.0][spo-latest] release of the Security Profiles Operator shipped a new
|
||||
command line interface called `spoc`, a little helper tool for recording and
|
||||
replaying seccomp profiles among various other things that are out of scope of
|
||||
this blog post.
|
||||
-->
|
||||
## 使用 `spoc record` 记录 seccomp 配置文件
|
||||
|
||||
[v0.8.0][spo-latest] 版本的 Security Profiles Operator 附带一个名为 `spoc` 的全新命令行接口,
|
||||
是一个能够用来记录和回放 seccomp 配置文件的工具,该工具还有一些其他能力不在这篇博文的讨论范围内。
|
||||
|
||||
[spo-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/v0.8.0
|
||||
|
||||
<!--
|
||||
Recording a seccomp profile requires a binary to be executed, which can be a
|
||||
simple golang application which just calls [`uname(2)`][uname]:
|
||||
-->
|
||||
记录 seccomp 配置文件需要执行一个二进制文件,这个二进制文件可以是一个仅调用
|
||||
[`uname(2)`][uname] 的简单 Golang 应用程序:
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"syscall"
|
||||
)
|
||||
|
||||
func main() {
|
||||
utsname := syscall.Utsname{}
|
||||
if err := syscall.Uname(&utsname); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
[uname]: https://man7.org/linux/man-pages/man2/uname.2.html
|
||||
|
||||
<!--
|
||||
Building a binary from that code can be done by:
|
||||
-->
|
||||
可通过以下命令从代码构建一个二进制文件:
|
||||
|
||||
```console
|
||||
> go build -o main main.go
|
||||
> ldd ./main
|
||||
not a dynamic executable
|
||||
```
|
||||
|
||||
<!--
|
||||
Now it's possible to download the latest binary of [`spoc` from
|
||||
GitHub][spoc-latest] and run the application on Linux with it:
|
||||
-->
|
||||
现在可以从 GitHub 下载最新的 [`spoc`][spoc-latest] 二进制文件,
|
||||
并使用它在 Linux 上运行应用程序:
|
||||
|
||||
[spoc-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/download/v0.8.0/spoc.amd64
|
||||
|
||||
```console
|
||||
> sudo ./spoc record ./main
|
||||
10:08:25.591945 Loading bpf module
|
||||
10:08:25.591958 Using system btf file
|
||||
libbpf: loading object 'recorder.bpf.o' from buffer
|
||||
…
|
||||
libbpf: prog 'sys_enter': relo #3: patched insn #22 (ALU/ALU64) imm 16 -> 16
|
||||
10:08:25.610767 Getting bpf program sys_enter
|
||||
10:08:25.610778 Attaching bpf tracepoint
|
||||
10:08:25.611574 Getting syscalls map
|
||||
10:08:25.611582 Getting pid_mntns map
|
||||
10:08:25.613097 Module successfully loaded
|
||||
10:08:25.613311 Processing events
|
||||
10:08:25.613693 Running command with PID: 336007
|
||||
10:08:25.613835 Received event: pid: 336007, mntns: 4026531841
|
||||
10:08:25.613951 No container ID found for PID (pid=336007, mntns=4026531841, err=unable to find container ID in cgroup path)
|
||||
10:08:25.614856 Processing recorded data
|
||||
10:08:25.614975 Found process mntns 4026531841 in bpf map
|
||||
10:08:25.615110 Got syscalls: read, close, mmap, rt_sigaction, rt_sigprocmask, madvise, nanosleep, clone, uname, sigaltstack, arch_prctl, gettid, futex, sched_getaffinity, exit_group, openat
|
||||
10:08:25.615195 Adding base syscalls: access, brk, capget, capset, chdir, chmod, chown, close_range, dup2, dup3, epoll_create1, epoll_ctl, epoll_pwait, execve, faccessat2, fchdir, fchmodat, fchown, fchownat, fcntl, fstat, fstatfs, getdents64, getegid, geteuid, getgid, getpid, getppid, getuid, ioctl, keyctl, lseek, mkdirat, mknodat, mount, mprotect, munmap, newfstatat, openat2, pipe2, pivot_root, prctl, pread64, pselect6, readlink, readlinkat, rt_sigreturn, sched_yield, seccomp, set_robust_list, set_tid_address, setgid, setgroups, sethostname, setns, setresgid, setresuid, setsid, setuid, statfs, statx, symlinkat, tgkill, umask, umount2, unlinkat, unshare, write
|
||||
10:08:25.616293 Wrote seccomp profile to: /tmp/profile.yaml
|
||||
10:08:25.616298 Unloading bpf module
|
||||
```
|
||||
|
||||
<!--
|
||||
I have to execute `spoc` as root because it will internally run an [ebpf][ebpf]
|
||||
program by reusing the same code parts from the Security Profiles Operator
|
||||
itself. I can see that the bpf module got loaded successfully and `spoc`
|
||||
attached the required tracepoint to it. Then it will track the main application
|
||||
by using its [mount namespace][mntns] and process the recorded syscall data. The
|
||||
nature of ebpf programs is that they see the whole context of the Kernel, which
|
||||
means that `spoc` tracks all syscalls of the system, but does not interfere with
|
||||
their execution.
|
||||
-->
|
||||
我必须以 root 用户身份执行 `spoc`,因为它将在内部通过复用 Security Profiles Operator
|
||||
自身的相同代码,运行一个 [ebpf][ebpf] 程序。
|
||||
我可以看到 bpf 模块已成功加载,并且 `spoc` 已将所需的跟踪点附加到该模块。
|
||||
随后 `spoc` 将通过应用的[挂载命名空间][mntns]来跟踪主应用程序,并处理所记录的系统调用数据。
|
||||
ebpf 程序天然能看到整个内核的上下文,这意味着 `spoc` 会跟踪系统中的所有系统调用,但不会干涉其执行。
|
||||
|
||||
[ebpf]: https://ebpf.io
|
||||
[mntns]: https://man7.org/linux/man-pages/man7/mount_namespaces.7.html
|
||||
|
||||
<!--
|
||||
The logs indicate that `spoc` found the syscalls `read`, `close`,
|
||||
`mmap` and so on, including `uname`. All other syscalls than `uname` are coming
|
||||
from the golang runtime and its garbage collection, which already adds overhead
|
||||
to a basic application like in our demo. I can also see from the log line
|
||||
`Adding base syscalls: …` that `spoc` adds a bunch of base syscalls to the
|
||||
resulting profile. Those are used by the OCI runtime (like [runc][runc] or
|
||||
[crun][crun]) in order to be able to run a container. This means that `spoc`
|
||||
can be used to record seccomp profiles which then can be containerized directly.
|
||||
This behavior can be disabled in `spoc` by using the `--no-base-syscalls`/`-n`
|
||||
or customized via the `--base-syscalls`/`-b` command line flags. This can be
|
||||
helpful in cases where different OCI runtimes other than crun and runc are used,
|
||||
or if I just want to record the seccomp profile for the application and stack
|
||||
it with another [base profile][base].
|
||||
-->
|
||||
这些日志表明 `spoc` 发现了包括 `uname` 在内的 `read`、`close`、`mmap` 等系统调用。
|
||||
除 `uname` 之外的所有系统调用都来自 Golang 运行时及其垃圾回收,这已经为我们演示中的简单应用增加了开销。
|
||||
我还可以从日志行 `Adding base syscalls: …` 中看到 `spoc` 将一堆基本系统调用添加到了生成的配置文件中。
|
||||
这些系统调用由 OCI 运行时(如 [runc][runc] 或 [crun][crun])使用以便能够运行容器。
|
||||
这意味着 `spoc` 可用于记录可直接被容器化的 seccomp 配置文件。
|
||||
这种行为可以通过在 `spoc` 中使用 `--no-base-syscalls`/`-n` 禁用,或通过
|
||||
`--base-syscalls`/`-b` 命令行标志进行自定义。当使用 crun 和 runc 之外的其他 OCI
|
||||
运行时,或者只想记录应用自身的 seccomp 配置文件并将其与另一个[基本配置文件][base]组合时,这会非常有帮助。
|
||||
|
||||
[runc]: https://github.com/opencontainers/runc
|
||||
[crun]: https://github.com/containers/crun
|
||||
[base]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/35ebdda/installation-usage.md#base-syscalls-for-a-container-runtime
|
||||
|
||||
<!--
|
||||
The resulting profile is now available in `/tmp/profile.yaml`, but the default
|
||||
location can be changed using the `--output-file value`/`-o` flag:
|
||||
-->
|
||||
生成的配置文件现在位于 `/tmp/profile.yaml`,
|
||||
但可以使用 `--output-file value`/`-o` 标志更改默认位置:
|
||||
|
||||
```console
|
||||
> cat /tmp/profile.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
|
||||
kind: SeccompProfile
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
name: main
|
||||
spec:
|
||||
architectures:
|
||||
- SCMP_ARCH_X86_64
|
||||
defaultAction: SCMP_ACT_ERRNO
|
||||
syscalls:
|
||||
- action: SCMP_ACT_ALLOW
|
||||
names:
|
||||
- access
|
||||
- arch_prctl
|
||||
- brk
|
||||
- …
|
||||
- uname
|
||||
- …
|
||||
status: {}
|
||||
```
|
||||
|
||||
<!--
|
||||
The seccomp profile Custom Resource Definition (CRD) can be directly used
|
||||
together with the Security Profiles Operator for managing it within Kubernetes.
|
||||
`spoc` is also capable of producing raw seccomp profiles (as JSON), by using the
|
||||
`--type`/`-t` `raw-seccomp` flag:
|
||||
-->
|
||||
seccomp 配置文件 CRD 可直接与 Security Profiles Operator 一起使用,统一在 Kubernetes 中进行管理。
|
||||
`spoc` 还可以通过使用 `--type`/`-t` `raw-seccomp` 标志生成原始的 seccomp 配置文件(格式为 JSON):
|
||||
|
||||
```console
|
||||
> sudo ./spoc record --type raw-seccomp ./main
|
||||
…
|
||||
52.628827 Wrote seccomp profile to: /tmp/profile.json
|
||||
```
|
||||
|
||||
```console
|
||||
> jq . /tmp/profile.json
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"defaultAction": "SCMP_ACT_ERRNO",
|
||||
"architectures": ["SCMP_ARCH_X86_64"],
|
||||
"syscalls": [
|
||||
{
|
||||
"names": ["access", "…", "write"],
|
||||
"action": "SCMP_ACT_ALLOW"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
The utility `spoc record` allows us to record complex seccomp profiles directly
|
||||
from binary invocations in any Linux system which is capable of running the ebpf
|
||||
code within the Kernel. But it can do more: How about modifying the seccomp
|
||||
profile and then testing it by using `spoc run`.
|
||||
-->
|
||||
实用程序 `spoc record` 允许我们直接在任何能够在内核中运行 ebpf 代码的 Linux
|
||||
系统上记录复杂的 seccomp 配置文件。但它还可以做更多事情:
|
||||
例如修改 seccomp 配置文件并使用 `spoc run` 进行测试。
|
||||
|
||||
<!--
|
||||
## Running seccomp profiles with `spoc run`
|
||||
|
||||
`spoc` is also able to run binaries with applied seccomp profiles, making it
|
||||
easy to test any modification to it. To do that, just run:
|
||||
-->
|
||||
## 使用 `spoc run` 运行 seccomp 配置文件
|
||||
|
||||
`spoc` 还能够使用 seccomp 配置文件来运行二进制文件,轻松测试对其所做的任何修改。
|
||||
要执行此操作,只需运行:
|
||||
|
||||
```console
|
||||
> sudo ./spoc run ./main
|
||||
10:29:58.153263 Reading file /tmp/profile.yaml
|
||||
10:29:58.153311 Assuming YAML profile
|
||||
10:29:58.154138 Setting up seccomp
|
||||
10:29:58.154178 Load seccomp profile
|
||||
10:29:58.154189 Starting audit log enricher
|
||||
10:29:58.154224 Enricher reading from file /var/log/audit/audit.log
|
||||
10:29:58.155356 Running command with PID: 437880
|
||||
>
|
||||
```
|
||||
|
||||
<!--
|
||||
It looks like that the application exited successfully, which is anticipated
|
||||
because I did not modify the previously recorded profile yet. I can also
|
||||
specify a custom location for the profile by using the `--profile`/`-p` flag,
|
||||
but this was not necessary because I did not modify the default output location
|
||||
from the record. `spoc` will automatically determine if it's a raw (JSON) or CRD
|
||||
(YAML) based seccomp profile and then apply it to the process.
|
||||
-->
|
||||
看起来应用程序已成功退出,这是符合预期的,因为我尚未修改先前记录的配置文件。
|
||||
我还可以使用 `--profile`/`-p` 标志指定配置文件的自定义位置,但这并不是必需的,
|
||||
因为我没有修改默认输出位置。`spoc` 将自动确定它是基于原始的(JSON)还是基于 CRD 的
|
||||
(YAML)seccomp 配置文件,然后将其应用于该进程。
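
<!--
For example, running the raw JSON profile recorded earlier works the same way
(usage sketch, output omitted):
-->
例如,运行之前记录的原始 JSON 配置文件也是同样的方式(用法示意,省略输出):

```console
> sudo ./spoc run -p /tmp/profile.json ./main
```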
|
||||
|
||||
<!--
|
||||
The Security Profiles Operator supports a [log enricher feature][enricher],
|
||||
which provides additional seccomp related information by parsing the audit logs.
|
||||
`spoc run` uses the enricher in the same way to provide more data to the end
|
||||
users when it comes to debugging seccomp profiles.
|
||||
-->
|
||||
Security Profiles Operator 支持 [log enricher 特性][enricher],
|
||||
通过解析审计日志提供与 seccomp 相关的额外信息。
|
||||
`spoc run` 以同样的方式使用 enricher 向最终用户提供更多数据以调试 seccomp 配置文件。
|
||||
|
||||
[enricher]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/35ebdda/installation-usage.md#using-the-log-enricher
|
||||
|
||||
<!--
|
||||
Now I have to modify the profile to see anything valuable in the output. For
|
||||
example, I could remove the allowed `uname` syscall:
|
||||
-->
|
||||
现在我必须修改配置文件,才能在输出中看到有价值的信息。
|
||||
例如,我可以移除允许的 `uname` 系统调用:
|
||||
|
||||
```console
|
||||
> jq 'del(.syscalls[0].names[] | select(. == "uname"))' /tmp/profile.json > /tmp/no-uname-profile.json
|
||||
```
|
||||
|
||||
<!--
|
||||
And then try to run it again with the new profile `/tmp/no-uname-profile.json`:
|
||||
-->
|
||||
然后尝试用新的配置文件 `/tmp/no-uname-profile.json` 来运行:
|
||||
|
||||
```console
|
||||
> sudo ./spoc run -p /tmp/no-uname-profile.json ./main
|
||||
10:39:12.707798 Reading file /tmp/no-uname-profile.json
|
||||
10:39:12.707892 Setting up seccomp
|
||||
10:39:12.707920 Load seccomp profile
|
||||
10:39:12.707982 Starting audit log enricher
|
||||
10:39:12.707998 Enricher reading from file /var/log/audit/audit.log
|
||||
10:39:12.709164 Running command with PID: 480512
|
||||
panic: operation not permitted
|
||||
|
||||
goroutine 1 [running]:
|
||||
main.main()
|
||||
/path/to/main.go:10 +0x85
|
||||
10:39:12.713035 Unable to run: launch runner: wait for command: exit status 2
|
||||
```
|
||||
|
||||
<!--
|
||||
Alright, that was expected! The applied seccomp profile blocks the `uname`
|
||||
syscall, which results in an "operation not permitted" error. This error is
|
||||
pretty generic and does not provide any hint on what got blocked by seccomp.
|
||||
It is generally extremely difficult to predict how applications behave if single
|
||||
syscalls are forbidden by seccomp. It could be possible that the application
|
||||
terminates like in our simple demo, but it could also lead to a strange
|
||||
misbehavior and the application does not stop at all.
|
||||
-->
|
||||
好的,这符合预期!应用的 seccomp 配置文件阻止了 `uname` 系统调用,导致出现
|
||||
"operation not permitted" 错误。此错误提示过于宽泛,没有提供关于 seccomp 阻止了什么的任何提示。
|
||||
通常情况下,如果 seccomp 禁止某个系统调用,很难预测应用程序会做出什么行为。
|
||||
可能应用程序像这个简单演示一样终止,但也可能导致奇怪的异常行为使得应用程序根本无法停止。
|
||||
|
||||
<!--
|
||||
If I now change the default seccomp action of the profile from `SCMP_ACT_ERRNO`
|
||||
to `SCMP_ACT_LOG` like this:
|
||||
-->
|
||||
现在,如果我将配置文件的默认 seccomp 操作从 `SCMP_ACT_ERRNO` 更改为 `SCMP_ACT_LOG`,就像这样:
|
||||
|
||||
```console
|
||||
> jq '.defaultAction = "SCMP_ACT_LOG"' /tmp/no-uname-profile.json > /tmp/no-uname-profile-log.json
|
||||
```
|
||||
|
||||
<!--
|
||||
Then the log enricher will give us a hint that the `uname` syscall got blocked
|
||||
when using `spoc run`:
|
||||
-->
|
||||
那么 log enricher 将提示我们 `uname` 系统调用在使用 `spoc run` 时被阻止:
|
||||
|
||||
```console
|
||||
> sudo ./spoc run -p /tmp/no-uname-profile-log.json ./main
|
||||
10:48:07.470126 Reading file /tmp/no-uname-profile-log.json
|
||||
10:48:07.470234 Setting up seccomp
|
||||
10:48:07.470245 Load seccomp profile
|
||||
10:48:07.470302 Starting audit log enricher
|
||||
10:48:07.470339 Enricher reading from file /var/log/audit/audit.log
|
||||
10:48:07.470889 Running command with PID: 522268
|
||||
10:48:07.472007 Seccomp: uname (63)
|
||||
```
|
||||
|
||||
<!--
|
||||
The application will not terminate any more, but seccomp will log the behavior
|
||||
to `/var/log/audit/audit.log` and `spoc` will parse the data to correlate it
|
||||
directly to our program. Generating the log messages to the audit subsystem
|
||||
comes with a large performance overhead and should be handled with care in
|
||||
production systems. It also comes with a security risk when running untrusted
|
||||
apps in audit mode in production environments.
|
||||
-->
|
||||
应用程序现在不会再终止,但 seccomp 将行为记录到 `/var/log/audit/audit.log` 中,
|
||||
而 `spoc` 会解析数据以将其直接与我们的程序相关联。将日志消息生成到审计子系统中会带来巨大的性能开销,
|
||||
在生产系统中应小心处理。当在生产环境中以审计模式运行不受信任的应用时,也会带来安全风险。
|
||||
|
||||
<!--
|
||||
This demo should give you an impression how to debug seccomp profile issues with
|
||||
applications, probably by using our shiny new helper tool powered by the
|
||||
features of the Security Profiles Operator. `spoc` is a flexible and portable
|
||||
binary suitable for edge cases where resources are limited and even Kubernetes
|
||||
itself may not be available with its full capabilities.
|
||||
-->
|
||||
本文的演示希望让你了解如何使用 Security Profiles Operator
|
||||
各项特性所赋予的全新辅助工具来调试应用程序的 seccomp 配置文件问题。
|
||||
`spoc` 是一个灵活且可移植的二进制文件,适用于资源有限的边缘场景,
|
||||
甚至是 Kubernetes 本身可能无法提供其全部能力的场景。
|
||||
|
||||
<!--
|
||||
Thank you for reading this blog post! If you're interested in more, providing
|
||||
feedback or asking for help, then feel free to get in touch with us directly via
|
||||
[Slack (#security-profiles-operator)][slack] or the [mailing list][mail].
|
||||
-->
|
||||
感谢阅读这篇博文!如果你有兴趣了解更多,想提出反馈或寻求帮助,请通过
|
||||
[Slack (#security-profiles-operator)][slack] 或[邮件列表][mail]直接与我们联系。
|
||||
|
||||
[slack]: https://kubernetes.slack.com/messages/security-profiles-operator
|
||||
[mail]: https://groups.google.com/forum/#!forum/kubernetes-dev
|
|
@ -0,0 +1,308 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "使用 OCI 工件为 seccomp、SELinux 和 AppArmor 分发安全配置文件"
|
||||
date: 2023-05-24
|
||||
slug: oci-security-profiles
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Using OCI artifacts to distribute security profiles for seccomp, SELinux and AppArmor"
|
||||
date: 2023-05-24
|
||||
slug: oci-security-profiles
|
||||
-->
|
||||
|
||||
<!--
|
||||
**Author**: Sascha Grunert
|
||||
-->
|
||||
**作者**: Sascha Grunert
|
||||
|
||||
**译者**: [Michael Yao](https://github.com/windsonsea) (DaoCloud)
|
||||
|
||||
<!--
|
||||
The [Security Profiles Operator (SPO)][spo] makes managing seccomp, SELinux and
|
||||
AppArmor profiles within Kubernetes easier than ever. It allows cluster
|
||||
administrators to define the profiles in a predefined custom resource YAML,
|
||||
which then gets distributed by the SPO into the whole cluster. Modification and
|
||||
removal of the security profiles are managed by the operator in the same way,
|
||||
but that’s a small subset of its capabilities.
|
||||
-->
|
||||
[Security Profiles Operator (SPO)][spo] 使得在 Kubernetes 中管理
|
||||
seccomp、SELinux 和 AppArmor 配置文件变得更加容易。
|
||||
它允许集群管理员在预定义的自定义资源 YAML 中定义配置文件,然后由 SPO 分发到整个集群中。
|
||||
安全配置文件的修改和移除也由 Operator 以同样的方式进行管理,但这只是其能力的一小部分。
|
||||
|
||||
[spo]: https://github.com/kubernetes-sigs/security-profiles-operator
|
||||
|
||||
<!--
|
||||
Another core feature of the SPO is being able to stack seccomp profiles. This
|
||||
means that users can define a `baseProfileName` in the YAML specification, which
|
||||
then gets automatically resolved by the operator and combines the syscall rules.
|
||||
If a base profile has another `baseProfileName`, then the operator will
|
||||
recursively resolve the profiles up to a certain depth. A common use case is to
|
||||
define base profiles for low level container runtimes (like [runc][runc] or
|
||||
[crun][crun]) which then contain syscalls which are required in any case to run
|
||||
the container. Alternatively, application developers can define seccomp base
|
||||
profiles for their standard distribution containers and stack dedicated profiles
|
||||
for the application logic on top. This way developers can focus on maintaining
|
||||
seccomp profiles which are way simpler and scoped to the application logic,
|
||||
without having a need to take the whole infrastructure setup into account.
|
||||
-->
|
||||
SPO 的另一个核心特性是能够组合 seccomp 配置文件。这意味着用户可以在 YAML
|
||||
规约中定义 `baseProfileName`,然后 Operator 会自动解析并组合系统调用规则。
|
||||
如果基本配置文件有另一个 `baseProfileName`,那么 Operator 将以递归方式解析配置文件到一定深度。
|
||||
常见的使用场景是为低级容器运行时(例如 [runc][runc] 或 [crun][crun])定义基本配置文件,
|
||||
在这些配置文件中包含各种情况下运行容器所需的系统调用。另外,应用开发人员可以为其标准分发容器定义
|
||||
seccomp 基本配置文件,并在其上组合针对应用逻辑的专用配置文件。
|
||||
这样开发人员就可以专注于维护更简单且范围限制为应用逻辑的 seccomp 配置文件,
|
||||
而不需要考虑整个基础设施的设置。
|
||||
|
||||
[runc]: https://github.com/opencontainers/runc
|
||||
[crun]: https://github.com/containers/crun
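
<!--
As an illustrative sketch of such stacking (both profile names are
hypothetical), an application profile can reference a runtime base profile
that already exists as a SeccompProfile in the cluster:
-->
作为这种组合方式的示意(两个配置文件名称均为虚构),
应用配置文件可以引用集群中已存在的某个运行时基本 SeccompProfile:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: my-app               # hypothetical application profile
spec:
  defaultAction: SCMP_ACT_ERRNO
  baseProfileName: runc-base # hypothetical base profile for the runtime
  syscalls:
    - action: SCMP_ACT_ALLOW
      names:
        - uname
```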
|
||||
|
||||
<!--
|
||||
But how to maintain those base profiles? For example, the amount of required
|
||||
syscalls for a runtime can change over its release cycle in the same way it can
|
||||
change for the main application. Base profiles have to be available in the same
|
||||
cluster, otherwise the main seccomp profile will fail to deploy. This means that
|
||||
they’re tightly coupled to the main application profiles, which acts against the
|
||||
main idea of base profiles. Distributing and managing them as plain files feels
|
||||
like an additional burden to solve.
|
||||
-->
|
||||
但是如何维护这些基本配置文件呢?
|
||||
例如,运行时所需的系统调用数量可能会像主应用一样在其发布周期内发生变化。
|
||||
基本配置文件必须在同一集群中可用,否则主 seccomp 配置文件将无法部署。
|
||||
这意味着这些基本配置文件与主应用配置文件紧密耦合,因此违背了基本配置文件的核心理念。
|
||||
将基本配置文件作为普通文件分发和管理感觉像是需要解决的额外负担。
|
||||
|
||||
<!--
|
||||
## OCI artifacts to the rescue
|
||||
|
||||
The [v0.8.0][spo-latest] release of the Security Profiles Operator supports
|
||||
managing base profiles as OCI artifacts! Imagine OCI artifacts as lightweight
|
||||
container images, storing files in layers in the same way images do, but without
|
||||
a process to be executed. Those artifacts can be used to store security profiles
|
||||
like regular container images in compatible registries. This means they can be
|
||||
versioned, namespaced and annotated similar to regular container images.
|
||||
-->
|
||||
## OCI 工件成为救命良方 {#oci-artifacts-to-rescue}
|
||||
|
||||
Security Profiles Operator 的 [v0.8.0][spo-latest] 版本支持将基本配置文件作为
|
||||
OCI 工件进行管理!可以将 OCI 工件想象为轻量级容器镜像:它们采用与镜像相同的分层方式存储文件,
|
||||
但没有要执行的进程。这些工件可以像普通容器镜像一样,用于在兼容的镜像仓库中存储安全配置文件。
|
||||
这意味着这些工件可以像常规容器镜像一样进行版本控制、划分命名空间和添加注解。
|
||||
|
||||
[spo-latest]: https://github.com/kubernetes-sigs/security-profiles-operator/releases/v0.8.0
|
||||
|
||||
<!--
|
||||
To see how that works in action, specify a `baseProfileName` prefixed with
|
||||
`oci://` within a seccomp profile CRD, for example:
|
||||
-->
|
||||
若要查看具体的工作方式,可以在 seccomp 配置文件 CRD 内以前缀 `oci://`
|
||||
指定 `baseProfileName`,例如:
|
||||
|
||||
```yaml
|
||||
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
|
||||
kind: SeccompProfile
|
||||
metadata:
|
||||
name: test
|
||||
spec:
|
||||
defaultAction: SCMP_ACT_ERRNO
|
||||
baseProfileName: oci://ghcr.io/security-profiles/runc:v1.1.5
|
||||
syscalls:
|
||||
- action: SCMP_ACT_ALLOW
|
||||
names:
|
||||
- uname
|
||||
```
|
||||
|
||||
<!--
|
||||
The operator will take care of pulling the content by using [oras][oras], as
|
||||
well as verifying the [sigstore (cosign)][cosign] signatures of the artifact. If
|
||||
the artifacts are not signed, then the SPO will reject them. The resulting
|
||||
profile `test` will then contain all base syscalls from the remote `runc`
|
||||
profile plus the additional allowed `uname` one. It is also possible to
|
||||
reference the base profile by its digest (SHA256) making the artifact to be
|
||||
pulled more specific, for example by referencing
|
||||
`oci://ghcr.io/security-profiles/runc@sha256:380…`.
|
||||
-->
|
||||
Operator 将负责使用 [oras][oras] 拉取内容,并验证工件的 [sigstore (cosign)][cosign] 签名。
|
||||
如果某些工件未经签名,则 SPO 将拒绝它们。随后生成的配置文件 `test` 将包含来自远程
|
||||
`runc` 配置文件的所有基本系统调用加上额外允许的 `uname` 系统调用。
|
||||
你还可以通过摘要(SHA256)来引用基本配置文件,使要被拉取的工件更为确定,
|
||||
例如通过引用 `oci://ghcr.io/security-profiles/runc@sha256:380…`。
|
||||
|
||||
[oras]: https://oras.land
|
||||
[cosign]: https://github.com/sigstore/cosign
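
<!--
Pinning by digest then looks like the sketch below; `<digest>` is a
placeholder that must be replaced with the real SHA256 value:
-->
按摘要固定版本时,写法如下面的示意所示;`<digest>`
是占位符,需替换为真实的 SHA256 值:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: test-pinned   # hypothetical
spec:
  defaultAction: SCMP_ACT_ERRNO
  baseProfileName: oci://ghcr.io/security-profiles/runc@sha256:<digest>   # placeholder digest
  syscalls:
    - action: SCMP_ACT_ALLOW
      names:
        - uname
```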
|
||||
|
||||
<!--
|
||||
The operator internally caches pulled artifacts up to 24 hours for 1000
|
||||
profiles, meaning that they will be refreshed after that time period, if the
|
||||
cache is full or the operator daemon gets restarted.
|
||||
-->
|
||||
Operator 在内部对已拉取的工件进行缓存,最多缓存 1000 个配置文件,时长为 24 小时,
|
||||
这意味着在超过该时长、缓存已满或 Operator 守护进程重启之后,这些工件将被刷新。
|
||||
|
||||
<!--
|
||||
Because the overall resulting syscalls are hidden from the user (I only have the
|
||||
`baseProfileName` listed in the SeccompProfile, and not the syscalls themselves), I'll additionally
|
||||
annotate that SeccompProfile with the final `syscalls`.
|
||||
|
||||
Here's how the SeccompProfile looks after I annotate it:
|
||||
-->
|
||||
因为总体生成的系统调用对用户不可见
|
||||
(我只列出了 SeccompProfile 中的 `baseProfileName`,而没有列出系统调用本身),
|
||||
所以我为该 SeccompProfile 的最终 `syscalls` 添加了额外的注解。
|
||||
|
||||
以下是我注解后的 SeccompProfile:
|
||||
|
||||
```console
|
||||
> kubectl describe seccompprofile test
|
||||
Name: test
|
||||
Namespace: security-profiles-operator
|
||||
Labels: spo.x-k8s.io/profile-id=SeccompProfile-test
|
||||
Annotations: syscalls:
|
||||
[{"names":["arch_prctl","brk","capget","capset","chdir","clone","close",...
|
||||
API Version: security-profiles-operator.x-k8s.io/v1beta1
|
||||
```
|
||||
|
||||
<!--
|
||||
The SPO maintainers provide all public base profiles as part of the [“Security
|
||||
Profiles” GitHub organization][org].
|
||||
-->
|
||||
SPO 维护者们在 [“Security Profiles” GitHub 组织][org]中提供所有公开的基本配置文件。
|
||||
|
||||
[org]: https://github.com/orgs/security-profiles/packages
|
||||
|
||||
<!--
|
||||
## Managing OCI security profiles
|
||||
|
||||
Alright, now the official SPO provides a bunch of base profiles, but how can I
|
||||
define my own? Well, first of all we have to choose a working registry. There
|
||||
are a bunch of registries that already supports OCI artifacts:
|
||||
-->
|
||||
## 管理 OCI 安全配置文件 {#managing-oci-security-profiles}
|
||||
|
||||
好的,官方的 SPO 提供了许多基本配置文件,但是我如何定义自己的配置文件呢?
|
||||
首先,我们必须选择一个可用的镜像仓库。有许多镜像仓库都已支持 OCI 工件:
|
||||
|
||||
- [CNCF Distribution](https://github.com/distribution/distribution)
|
||||
- [Azure Container Registry](https://aka.ms/acr)
|
||||
- [Amazon Elastic Container Registry](https://aws.amazon.com/ecr)
|
||||
- [Google Artifact Registry](https://cloud.google.com/artifact-registry)
|
||||
- [GitHub Packages container registry](https://docs.github.com/en/packages/guides/about-github-container-registry)
|
||||
- [Bundle Bar](https://bundle.bar/docs/supported-clients/oras)
|
||||
- [Docker Hub](https://hub.docker.com)
|
||||
- [Zot Registry](https://zotregistry.io)
|
||||
|
||||
<!--
|
||||
The Security Profiles Operator ships a new command line interface called `spoc`,
|
||||
which is a little helper tool for managing OCI profiles, among various other
|
||||
things which are out of scope of this blog post. But, the command `spoc push`
|
||||
can be used to push a security profile to a registry:
|
||||
-->
|
||||
Security Profiles Operator 提供了一个名为 `spoc` 的全新命令行工具,
|
||||
它是用于管理 OCI 配置文件的小型辅助工具;其余各种能力不在这篇博文的讨论范围内。
|
||||
但 `spoc push` 命令可以用于将安全配置文件推送到镜像仓库:
|
||||
|
||||
```console
|
||||
> export USERNAME=my-user
|
||||
> export PASSWORD=my-pass
|
||||
> spoc push -f ./examples/baseprofile-crun.yaml ghcr.io/security-profiles/crun:v1.8.3
|
||||
16:35:43.899886 Pushing profile ./examples/baseprofile-crun.yaml to: ghcr.io/security-profiles/crun:v1.8.3
|
||||
16:35:43.899939 Creating file store in: /tmp/push-3618165827
|
||||
16:35:43.899947 Adding profile to store: ./examples/baseprofile-crun.yaml
|
||||
16:35:43.900061 Packing files
|
||||
16:35:43.900282 Verifying reference: ghcr.io/security-profiles/crun:v1.8.3
|
||||
16:35:43.900310 Using tag: v1.8.3
|
||||
16:35:43.900313 Creating repository for ghcr.io/security-profiles/crun
|
||||
16:35:43.900319 Using username and password
|
||||
16:35:43.900321 Copying profile to repository
|
||||
16:35:46.976108 Signing container image
|
||||
Generating ephemeral keys...
|
||||
Retrieving signed certificate...
|
||||
|
||||
Note that there may be personally identifiable information associated with this signed artifact.
|
||||
This may include the email address associated with the account with which you authenticate.
|
||||
This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.
|
||||
|
||||
By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
|
||||
Your browser will now be opened to:
|
||||
https://oauth2.sigstore.dev/auth/auth?access_type=…
|
||||
Successfully verified SCT...
|
||||
tlog entry created with index: 16520520
|
||||
Pushing signature to: ghcr.io/security-profiles/crun
|
||||
```
|
||||
|
||||
<!--
|
||||
You can see that the tool automatically signs the artifact and pushes the
|
||||
`./examples/baseprofile-crun.yaml` to the registry, which is then directly ready
|
||||
for usage within the SPO. If username and password authentication is required,
|
||||
either use the `--username`, `-u` flag or export the `USERNAME` environment
|
||||
variable. To set the password, export the `PASSWORD` environment variable.
|
||||
-->
|
||||
你可以看到该工具自动签署工件并将 `./examples/baseprofile-crun.yaml` 推送到镜像仓库中,
|
||||
随后即可直接在 SPO 中使用。如果需要用户名和密码认证,可以使用 `--username`/
|
||||
`-u` 标志或导出 `USERNAME` 环境变量。要设置密码,可以导出 `PASSWORD` 环境变量。
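
<!--
For example, flag-based authentication might look like this (usage sketch,
output omitted):
-->
例如,基于标志的认证方式大致如下(用法示意,省略输出):

```console
> export PASSWORD=my-pass
> spoc push -u my-user -f ./examples/baseprofile-crun.yaml ghcr.io/security-profiles/crun:v1.8.3
```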
|
||||
|
||||
<!--
|
||||
It is possible to add custom annotations to the security profile by using the
|
||||
`--annotations` / `-a` flag multiple times in `KEY:VALUE` format. Those have no
|
||||
effect for now, but at some later point additional features of the operator may
|
||||
rely on them.
|
||||
|
||||
The `spoc` client is also able to pull security profiles from OCI artifact
|
||||
compatible registries. To do that, just run `spoc pull`:
|
||||
-->
|
||||
采用 `KEY:VALUE` 的格式多次使用 `--annotations` / `-a` 标志,
|
||||
可以为安全配置文件添加自定义注解。这些注解目前尚无实际作用,
|
||||
但是在后续某个阶段,Operator 的其他特性可能会依赖于它们。
|
||||
|
||||
`spoc` 客户端还可以从兼容 OCI 工件的镜像仓库中拉取安全配置文件。
|
||||
要执行此操作,只需运行 `spoc pull`:
|
||||
|
||||
```console
|
||||
> spoc pull ghcr.io/security-profiles/runc:v1.1.5
|
||||
16:32:29.795597 Pulling profile from: ghcr.io/security-profiles/runc:v1.1.5
|
||||
16:32:29.795610 Verifying signature
|
||||
|
||||
Verification for ghcr.io/security-profiles/runc:v1.1.5 --
|
||||
The following checks were performed on each of these signatures:
|
||||
- Existence of the claims in the transparency log was verified offline
|
||||
- The code-signing certificate was verified using trusted certificate authority certificates
|
||||
|
||||
[{"critical":{"identity":{"docker-reference":"ghcr.io/security-profiles/runc"},…}}]
|
||||
16:32:33.208695 Creating file store in: /tmp/pull-3199397214
|
||||
16:32:33.208713 Verifying reference: ghcr.io/security-profiles/runc:v1.1.5
|
||||
16:32:33.208718 Creating repository for ghcr.io/security-profiles/runc
|
||||
16:32:33.208742 Using tag: v1.1.5
|
||||
16:32:33.208743 Copying profile from repository
|
||||
16:32:34.119652 Reading profile
|
||||
16:32:34.119677 Trying to unmarshal seccomp profile
|
||||
16:32:34.120114 Got SeccompProfile: runc-v1.1.5
|
||||
16:32:34.120119 Saving profile in: /tmp/profile.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
The profile can be now found in `/tmp/profile.yaml` or the specified output file
|
||||
`--output-file` / `-o`. We can specify a username and password in the same way
|
||||
as for `spoc push`.
|
||||
|
||||
`spoc` makes it easy to manage security profiles as OCI artifacts, which can be
|
||||
then consumed directly by the operator itself.
|
||||
-->
|
||||
现在可以在 `/tmp/profile.yaml` 或 `--output-file` / `-o` 所指定的输出文件中找到该配置文件。
|
||||
我们可以像 `spoc push` 一样指定用户名和密码。
|
||||
|
||||
`spoc` 使得以 OCI 工件的形式管理安全配置文件变得非常容易,这些 OCI 工件可以由 Operator 本身直接使用。
|
||||
|
||||
<!--
|
||||
That was our compact journey through the latest possibilities of the Security
|
||||
Profiles Operator! If you're interested in more, providing feedback or asking
|
||||
for help, then feel free to get in touch with us directly via [Slack
|
||||
(#security-profiles-operator)][slack] or [the mailing list][mail].
|
||||
-->
|
||||
本文简要介绍了通过 Security Profiles Operator 能够达成的各种最新可能性!
|
||||
如果你有兴趣了解更多,无论是提出反馈还是寻求帮助,
|
||||
请通过 [Slack (#security-profiles-operator)][slack] 或[邮件列表][mail]直接与我们联系。
|
||||
|
||||
[slack]: https://kubernetes.slack.com/messages/security-profiles-operator
|
||||
[mail]: https://groups.google.com/forum/#!forum/kubernetes-dev
|
|
@ -37,7 +37,7 @@ closer to the desired state, by turning equipment on or off.
|
|||
## Controller pattern
|
||||
|
||||
A controller tracks at least one Kubernetes resource type.
|
||||
These {{< glossary_tooltip text="objects" term_id="object" >}}
|
||||
have a spec field that represents the desired state. The
|
||||
controller(s) for that resource are responsible for making the current
|
||||
state come closer to that desired state.
|
||||
|
@ -56,7 +56,7 @@ detail.
|
|||
## 控制器模式 {#controller-pattern}
|
||||
|
||||
一个控制器至少追踪一种类型的 Kubernetes 资源。这些
|
||||
{{< glossary_tooltip text="对象" term_id="object" >}}
|
||||
有一个代表期望状态的 `spec` 字段。
|
||||
该资源的控制器负责确保其当前状态接近期望状态。
|
||||
|
||||
|
@ -287,14 +287,14 @@ Kubernetes 允许你运行一个稳定的控制平面,这样即使某些内置
|
|||
## {{% heading "whatsnext" %}}
|
||||
<!--
|
||||
* Read about the [Kubernetes control plane](/docs/concepts/overview/components/#control-plane-components)
|
||||
* Discover some of the basic [Kubernetes objects](/docs/concepts/overview/working-with-objects/)
|
||||
* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
|
||||
* If you want to write your own controller, see
|
||||
[Extension Patterns](/docs/concepts/extend-kubernetes/#extension-patterns)
|
||||
in Extending Kubernetes.
|
||||
-->
|
||||
* 阅读 [Kubernetes 控制平面组件](/zh-cn/docs/concepts/overview/components/#control-plane-components)
|
||||
* 了解 [Kubernetes 对象](/zh-cn/docs/concepts/overview/working-with-objects/)
|
||||
的一些基本知识
|
||||
* 进一步学习 [Kubernetes API](/zh-cn/docs/concepts/overview/kubernetes-api/)
|
||||
* 如果你想编写自己的控制器,请参阅“扩展 Kubernetes”中的[扩展模式](/zh-cn/docs/concepts/extend-kubernetes/#extension-patterns)。
|
||||
|
|
|
@ -46,7 +46,7 @@ allows the clean up of resources like the following:
|
|||
<!--
|
||||
## Owners and dependents {#owners-dependents}
|
||||
|
||||
Many objects in Kubernetes link to each other through [*owner references*](/docs/concepts/overview/working-with-objects/owners-dependents/).
|
||||
Owner references tell the control plane which objects are dependent on others.
|
||||
Kubernetes uses owner references to give the control plane, and other API
|
||||
clients, the opportunity to clean up related resources before deleting an
|
||||
|
@ -98,7 +98,7 @@ it is treated as having an unresolvable owner reference, and is not able to be g
|
|||
|
||||
<!--
|
||||
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
|
||||
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
|
||||
with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
|
||||
You can check for that kind of Event by running
|
||||
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
|
||||
|
@ -118,7 +118,7 @@ Kubernetes checks for and deletes objects that no longer have owner
|
|||
references, like the pods left behind when you delete a ReplicaSet. When you
|
||||
delete an object, you can control whether Kubernetes deletes the object's
|
||||
dependents automatically, in a process called *cascading deletion*. There are
|
||||
two types of cascading deletion, as follows:
|
||||
|
||||
* Foreground cascading deletion
|
||||
* Background cascading deletion
|
||||
|
@ -135,7 +135,7 @@ Kubernetes 会检查并删除那些不再拥有属主引用的对象,例如在
|
|||
|
||||
<!--
|
||||
You can also control how and when garbage collection deletes resources that have
|
||||
owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id="finalizer">}}.
|
||||
-->
|
||||
你也可以使用 Kubernetes {{<glossary_tooltip text="Finalizers" term_id="finalizer">}}
|
||||
来控制垃圾收集机制如何以及何时删除包含属主引用的资源。
|
||||
|
@ -145,7 +145,7 @@ owner references using Kubernetes {{<glossary_tooltip text="finalizers" term_id=
|
|||
|
||||
In foreground cascading deletion, the owner object you're deleting first enters
|
||||
a *deletion in progress* state. In this state, the following happens to the
|
||||
owner object:
|
||||
-->
|
||||
### 前台级联删除 {#foreground-deletion}
|
||||
|
||||
|
@ -169,7 +169,7 @@ owner object:
|
|||
After the owner object enters the deletion in progress state, the controller
|
||||
deletes the dependents. After deleting all the dependent objects, the controller
|
||||
deletes the owner object. At this point, the object is no longer visible in the
|
||||
Kubernetes API.
|
||||
|
||||
During foreground cascading deletion, the only dependents that block owner
|
||||
deletion are those that have the `ownerReference.blockOwnerDeletion=true` field.
|
||||
|
@ -223,7 +223,7 @@ to override this behaviour, see [Delete owner objects and orphan dependents](/do
|
|||
The {{<glossary_tooltip text="kubelet" term_id="kubelet">}} performs garbage
|
||||
collection on unused images every five minutes and on unused containers every
|
||||
minute. You should avoid using external garbage collection tools, as these can
|
||||
break the kubelet behavior and remove containers that should exist.
|
||||
-->
|
||||
## 未使用容器和镜像的垃圾收集 {#containers-images}
|
||||
|
||||
|
@ -248,7 +248,7 @@ resource type.
|
|||
### Container image lifecycle
|
||||
|
||||
Kubernetes manages the lifecycle of all images through its *image manager*,
|
||||
which is part of the kubelet, with the cooperation of
|
||||
{{< glossary_tooltip text="cadvisor" term_id="cadvisor" >}}. The kubelet
|
||||
considers the following disk usage limits when making garbage collection
|
||||
decisions:
|
||||
|
@ -277,7 +277,7 @@ kubelet 会持续删除镜像,直到磁盘用量到达 `LowThresholdPercent`
|
|||
### Container garbage collection {#container-image-garbage-collection}
|
||||
|
||||
The kubelet garbage collects unused containers based on the following variables,
|
||||
which you can define:
|
||||
-->
|
||||
### 容器垃圾收集 {#container-image-garbage-collection}
|
||||
|
||||
|
@ -300,7 +300,7 @@ kubelet 会基于如下变量对所有未使用的容器执行垃圾收集操作
|
|||
|
||||
<!--
|
||||
In addition to these variables, the kubelet garbage collects unidentified and
|
||||
deleted containers, typically starting with the oldest first.
|
||||
|
||||
`MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other
|
||||
in situations where retaining the maximum number of containers per Pod
|
||||
|
@ -333,8 +333,8 @@ You can tune garbage collection of resources by configuring options specific to
|
|||
the controllers managing those resources. The following pages show you how to
|
||||
configure garbage collection:
|
||||
|
||||
* [Configuring cascading deletion of Kubernetes objects](/docs/tasks/administer-cluster/use-cascading-deletion/)
|
||||
* [Configuring cleanup of finished Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)
|
||||
-->
|
||||
## 配置垃圾收集 {#configuring-gc}
|
||||
|
||||
|
|
|
@ -138,7 +138,7 @@ first and re-added after the update.
|
|||
### Self-registration of Nodes
|
||||
|
||||
When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to
|
||||
register itself with the API server. This is the preferred pattern, used by most distros.
|
||||
|
||||
For self-registration, the kubelet is started with the following options:
|
||||
-->
|
||||
|
@ -219,7 +219,7 @@ Pods already scheduled on the Node may misbehave or cause issues if the Node
|
|||
configuration will be changed on kubelet restart. For example, already running
|
||||
Pod may be tainted against the new labels assigned to the Node, while other
|
||||
Pods, that are incompatible with that Pod will be scheduled based on this new
|
||||
label. Node re-registration ensures all Pods will be drained and properly
|
||||
re-scheduled.
|
||||
-->
|
||||
如果在 kubelet 重启期间 Node 配置发生了变化,已经被调度到某 Node 上的 Pod
|
||||
|
@ -409,9 +409,9 @@ of the Node resource. For example, the following JSON structure describes a heal
|
|||
<!--
|
||||
When problems occur on nodes, the Kubernetes control plane automatically creates
|
||||
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
|
||||
affecting the node. An example of this is when the `status` of the Ready condition
|
||||
remains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`,
|
||||
which defaults to 40 seconds. This will cause either an `node.kubernetes.io/unreachable` taint, for an `Unknown` status,
|
||||
or a `node.kubernetes.io/not-ready` taint, for a `False` status, to be added to the Node.
|
||||
-->
|
||||
当节点上出现问题时,Kubernetes 控制面会自动创建与影响节点的状况对应的
|
||||
|
@ -643,7 +643,7 @@ then the eviction mechanism does not take per-zone unavailability into account.
|
|||
A key reason for spreading your nodes across availability zones is so that the
|
||||
workload can be shifted to healthy zones when one entire zone goes down.
|
||||
Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
|
||||
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
|
||||
completely unhealthy (none of the nodes in the cluster are healthy). In such a
|
||||
case, the node controller assumes that there is some problem with connectivity
|
||||
between the control plane and the nodes, and doesn't perform any evictions.
|
||||
|
@ -740,12 +740,14 @@ The kubelet attempts to detect node system shutdown and terminates pods running
|
|||
|
||||
Kubelet ensures that pods follow the normal
|
||||
[pod termination process](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
|
||||
during the node shutdown. During node shutdown, the kubelet does not accept new
|
||||
Pods (even if those Pods are already bound to the node).
|
||||
-->
|
||||
kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的所有 Pod。
|
||||
|
||||
在节点终止期间,kubelet 保证 Pod 遵从常规的
|
||||
[Pod 终止流程](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination),
|
||||
且不接受新的 Pod(即使这些 Pod 已经绑定到该节点)。
|
||||
|
||||
<!--
|
||||
The Graceful node shutdown feature depends on systemd since it takes advantage of
|
||||
|
@ -776,6 +778,36 @@ set to non-zero values.
|
|||
`shutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活节点体面关闭功能。
|
||||
要激活此功能特性,这两个 kubelet 配置选项要适当配置,并设置为非零值。
|
||||
|
||||
<!--
|
||||
Once systemd detects or notifies node shutdown, the kubelet sets a `NotReady` condition on
|
||||
the Node, with the `reason` set to `"node is shutting down"`. The kube-scheduler honors this condition
|
||||
and does not schedule any Pods onto the affected node; other third-party schedulers are
|
||||
expected to follow the same logic. This means that new Pods won't be scheduled onto that node
|
||||
and therefore none will start.
|
||||
-->
|
||||
一旦 systemd 检测到或通知节点关闭,kubelet 就会在节点上设置一个
|
||||
`NotReady` 状况,并将 `reason` 设置为 `"node is shutting down"`。
|
||||
kube-scheduler 会重视此状况,不将 Pod 调度到受影响的节点上;
|
||||
其他第三方调度程序也应当遵循相同的逻辑。这意味着新的 Pod 不会被调度到该节点上,
|
||||
因此不会有新 Pod 启动。
|
||||
|
||||
<!--
|
||||
The kubelet **also** rejects Pods during the `PodAdmission` phase if an ongoing
|
||||
node shutdown has been detected, so that even Pods with a
|
||||
{{< glossary_tooltip text="toleration" term_id="toleration" >}} for
|
||||
`node.kubernetes.io/not-ready:NoSchedule` do not start there.
|
||||
-->
|
||||
如果检测到节点关闭过程正在进行中,kubelet **也会**在 `PodAdmission`
|
||||
阶段拒绝 Pod,即使是该 Pod 带有 `node.kubernetes.io/not-ready:NoSchedule`
|
||||
的{{< glossary_tooltip text="容忍度" term_id="toleration" >}}。
|
||||
|
||||
<!--
|
||||
At the same time when kubelet is setting that condition on its Node via the API, the kubelet also begins
|
||||
terminating any Pods that are running locally.
|
||||
-->
|
||||
同时,当 kubelet 通过 API 在其 Node 上设置该状况时,kubelet
|
||||
也开始终止在本地运行的所有 Pod。
|
||||
|
||||
<!--
|
||||
During a graceful shutdown, kubelet terminates pods in two phases:
|
||||
|
||||
|
@ -810,6 +842,19 @@ Graceful node shutdown feature is configured with two
|
|||
* 在节点关闭期间指定用于终止[关键 Pod](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)
|
||||
的持续时间。该值应小于 `shutdownGracePeriod`。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
There are cases when Node termination was cancelled by the system (or perhaps manually
|
||||
by an administrator). In either of those situations the
|
||||
Node will return to the `Ready` state. However Pods which already started the process
|
||||
of termination
|
||||
will not be restored by kubelet and will need to be re-scheduled.
|
||||
-->
|
||||
在某些情况下,节点终止过程会被系统取消(或者可能由管理员手动取消)。
|
||||
无论哪种情况下,节点都将返回到 `Ready` 状态。然而,已经开始终止流程的
|
||||
Pod 将不会被 kubelet 恢复,需要被重新调度。
|
||||
{{< /note >}}
|

<!--
For example, if `shutdownGracePeriod=30s`, and
`shutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by
@ -1007,10 +1052,10 @@ the kubelet subsystem generates `graceful_shutdown_start_time_seconds` and
{{< feature-state state="beta" for_k8s_version="v1.26" >}}

<!--
A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
either because the command does not trigger the inhibitor locks mechanism used by
kubelet or because of a user error, i.e., the ShutdownGracePeriod and
ShutdownGracePeriodCriticalPods are not configured properly. Please refer to the above
section [Graceful Node Shutdown](#graceful-node-shutdown) for more details.
-->
A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
@ -1019,15 +1064,15 @@ section [Graceful Node Shutdown](#graceful-node-shutdown) for more details.
Please refer to the [Graceful Node Shutdown](#graceful-node-shutdown) section above for more details.

<!--
When a node is shutdown but not detected by kubelet's Node Shutdown Manager, the pods
that are part of a {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} will be stuck in terminating status on
the shutdown node and cannot move to a new running node. This is because kubelet on
the shutdown node is not available to delete the pods so the StatefulSet cannot
create a new pod with the same name. If there are volumes used by the pods, the
VolumeAttachments will not be deleted from the original shutdown node so the volumes
used by these pods cannot be attached to a new running node. As a result, the
application running on the StatefulSet cannot function properly. If the original
shutdown node comes up, the pods will be deleted by kubelet and new pods will be
created on a different running node. If the original shutdown node does not come up,
these pods will be stuck in terminating status on the shutdown node forever.
-->
@ -1043,13 +1088,13 @@ these pods will be stuck in terminating status on the shutdown node forever.
If the original shutdown node does not come back up, the Pods on that node
will be stuck in terminating status forever.

<!--
To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute`
or `NoSchedule` effect to a Node marking it out-of-service.
If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled on {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}, and a Node is marked out-of-service with this taint, the
pods on the node will be forcefully deleted if there are no matching tolerations on it and volume
detach operations for the pods terminating on the node will happen immediately. This allows the
Pods on the out-of-service node to recover quickly on a different node.
-->
To mitigate the above situation, a user can manually add the taint
`node.kubernetes.io/out-of-service` with either a `NoExecute` or `NoSchedule` effect
to a Node, marking it out-of-service.
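As a sketch (the node name is a placeholder, and `nodeshutdown` is an arbitrary
taint value chosen for the example), applying the taint could look like:

```shell
# Mark the already shut down node as out-of-service so that its Pods are
# force-deleted and their volumes detached without waiting for the kubelet.
kubectl taint nodes worker-1 node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```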
@ -1064,7 +1109,7 @@ Pods on the out-of-service node to recover quickly on a different node.
During a non-graceful shutdown, Pods are terminated in two phases:

1. Force delete the Pods that do not have matching `out-of-service` tolerations.
2. Immediately perform detach volume operation for such pods.
-->
During a non-graceful shutdown, Pods are terminated in two phases:

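For the second phase, one way to observe which volumes are still pinned to the old
node is to list the cluster's VolumeAttachment objects, for example:

```shell
# VolumeAttachments record the node each volume is attached to; entries for
# the shut-down node should disappear once the forced detach happens.
kubectl get volumeattachments
```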
@ -1180,12 +1225,12 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its

<!--
Learn more about the following:
* [Components](/docs/concepts/overview/components/#node-components) that make up a node.
* [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
* [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document.
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
* [Node Resource Managers](/docs/concepts/policy/node-resource-managers/).
* [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/).
-->
Learn more about the following:

@ -846,6 +846,37 @@ If you need access to multiple registries, you can create one secret for each re
-->
If you need access to multiple registries, you can create one Secret for each registry.

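As a sketch of that pattern (the registry hosts, image names, and Secret names are
made up for the example), a Pod can reference one pull Secret per registry; each
Secret would be created beforehand, e.g. with `kubectl create secret docker-registry`:

```yaml
# Hypothetical Pod pulling from two private registries, each with its own
# docker-registry Secret (regcred-one and regcred-two) created in advance.
apiVersion: v1
kind: Pod
metadata:
  name: multi-registry-demo
spec:
  containers:
    - name: app
      image: registry-one.example.com/team/app:1.0
    - name: sidecar
      image: registry-two.example.com/team/sidecar:1.0
  imagePullSecrets:
    - name: regcred-one
    - name: regcred-two
```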
<!--
## Legacy built-in kubelet credential provider

In older versions of Kubernetes, the kubelet had a direct integration with cloud provider credentials.
This gave it the ability to dynamically fetch credentials for image registries.
-->
## Legacy built-in kubelet credential provider

In older versions of Kubernetes, the kubelet integrated directly with cloud provider
credentials, which enabled it to dynamically fetch credentials for image registries.

<!--
There were three built-in implementations of the kubelet credential provider integration:
ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry).
-->
There were three built-in implementations of the kubelet credential provider integration:
ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry).

<!--
For more information on the legacy mechanism, read the documentation for the version of Kubernetes that you
are using. Kubernetes v1.26 through to v{{< skew latestVersion >}} do not include the legacy mechanism, so
you would need to either:
- configure a kubelet image credential provider on each node
- specify image pull credentials using `imagePullSecrets` and at least one Secret
-->
For more information on the legacy mechanism, read the documentation for the version
of Kubernetes that you are using. Kubernetes v1.26 through to v{{< skew latestVersion >}}
no longer include the legacy mechanism, so you would need to either:

- configure a kubelet image credential provider on each node (a sketch follows below)
- specify image pull credentials using `imagePullSecrets` and at least one Secret
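For the first option, a minimal sketch of a credential provider configuration is shown
below; the provider name, image pattern, and cache duration are illustrative assumptions.
The kubelet would additionally be started with `--image-credential-provider-config`
pointing at this file and `--image-credential-provider-bin-dir` pointing at the
directory holding the plugin binary:

```yaml
# CredentialProviderConfig sketch: tells the kubelet which exec plugin to
# invoke for which images (provider name and matchImages are assumptions).
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: example-registry-credential-provider  # plugin binary name (hypothetical)
    matchImages:
      - "*.registry.example.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```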

## {{% heading "whatsnext" %}}

<!--
@ -268,7 +268,7 @@ Kubernetes provides you with:
-->
* **Secret and configuration management**

  Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys.
  You can deploy and update secrets and application configuration without rebuilding your container images,
  and without exposing secrets in your stack configuration.
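As a small sketch of that idea (the Secret name and value are made up), such
sensitive data can live in a Secret and be updated independently of any image:

```yaml
# Hypothetical Secret: stores a password outside the container image.
apiVersion: v1
kind: Secret
metadata:
  name: demo-credentials
type: Opaque
stringData:
  # stringData accepts plain text; the API server stores it base64-encoded.
  password: "example-not-a-real-password"
```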

<!--