Merge branch 'kubernetes:main' into es_cleanup_getting_started_guides
commit ed7f189951
@@ -167,7 +167,6 @@ aliases:
- edsoncelio
- femrtnz
- jcjesus
- rikatz
- stormqueen1990
- yagonobre
sig-docs-pt-reviews: # PR reviews for Portuguese content
@@ -176,7 +175,6 @@ aliases:
- femrtnz
- jcjesus
- mrerlison
- rikatz
- stormqueen1990
- yagonobre
sig-docs-vi-owners: # Admins for Vietnamese content
@@ -22,11 +22,11 @@ This repository contains the assets required to build the [Kubernetes website an
<!--
## Using this repository

-You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
+You can run the website locally using [Hugo (Extended version)](https://gohugo.io/), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
-->
## 使用这个仓库

-可以使用 Hugo(扩展版)在本地运行网站,也可以在容器中运行它。强烈建议使用容器,因为这样可以和在线网站的部署保持一致。
+可以使用 [Hugo(扩展版)](https://gohugo.io/)在本地运行网站,也可以在容器中运行它。强烈建议使用容器,因为这样可以和在线网站的部署保持一致。

<!--
## Prerequisites
@@ -150,11 +150,11 @@ This will start the local Hugo server on port 1313. Open up your browser to <htt
## 构建 API 参考页面

<!--
-The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
+The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, also known as OpenAPI specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.

To update the reference pages for a new Kubernetes release follow these steps:
-->
-位于 `content/en/docs/reference/kubernetes-api` 的 API 参考页面是根据 Swagger 规范构建的,使用 <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>。
+位于 `content/en/docs/reference/kubernetes-api` 的 API 参考页面是使用 <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs> 根据 Swagger 规范(也称为 OpenAPI 规范)构建的。

要更新 Kubernetes 新版本的参考页面,请执行以下步骤:
File diff suppressed because it is too large
@@ -99,6 +99,7 @@
- initContainerStatuses
- containerStatuses
- ephemeralContainerStatuses
+- resize

- definition: io.k8s.api.core.v1.Container
field_categories:
@@ -127,6 +128,7 @@
- name: Resources
fields:
- resources
+- resizePolicy
- name: Lifecycle
fields:
- lifecycle
@@ -219,6 +221,9 @@
fields:
- volumeMounts
- volumeDevices
+- name: Resources
+fields:
+- resizePolicy
- name: Lifecycle
fields:
- terminationMessagePath
@@ -66,18 +66,18 @@ parts:
- name: PriorityClass
group: scheduling.k8s.io
version: v1
-- name: PodScheduling
+- name: PodSchedulingContext
group: resource.k8s.io
-version: v1alpha1
+version: v1alpha2
- name: ResourceClaim
group: resource.k8s.io
-version: v1alpha1
+version: v1alpha2
- name: ResourceClaimTemplate
group: resource.k8s.io
-version: v1alpha1
+version: v1alpha2
- name: ResourceClass
group: resource.k8s.io
-version: v1alpha1
+version: v1alpha2
- name: Service Resources
chapters:
- name: Service
@@ -148,6 +148,12 @@ parts:
- name: CertificateSigningRequest
group: certificates.k8s.io
version: v1
+- name: ClusterTrustBundle
+group: certificates.k8s.io
+version: v1alpha1
+- name: SelfSubjectReview
+group: authentication.k8s.io
+version: v1beta1
- name: Authorization Resources
chapters:
- name: LocalSubjectAccessReview
@@ -191,6 +197,9 @@ parts:
- name: PodDisruptionBudget
group: policy
version: v1
+- name: IPAddress
+group: networking.k8s.io
+version: v1alpha1
- name: Extend Resources
chapters:
- name: CustomResourceDefinition
@@ -102,6 +102,16 @@ main {
}
}

+::selection {
+background: #326ce5;
+color: white;
+}
+
+::-moz-selection {
+background: #326ce5;
+color: white;
+}
+
// HEADER

#hamburger {
@@ -0,0 +1,9 @@
---
title: Ressourcen-Verwaltung für Pods und Container
content_type: concept
weight: 40
feature:
  title: Automatisches Bin Packing
  description: >
    Container können je nach Systemanforderungen auf spezifischen Nodes ausgeführt werden. Somit kann eine effiziente Nutzung von Ressourcen erreicht werden.
---
@@ -0,0 +1,9 @@
---
title: Secrets
content_type: concept
feature:
  title: Verwaltung von Secrets und Konfigurationen
  description: >
    Deploye und aktualisiere Secrets sowie Anwendungskonfigurationen, ohne ein Image neu zu bauen oder Secrets preiszugeben.
weight: 30
---
@@ -1,4 +1,8 @@
---
title: "Kubernetes erweitern"
weight: 110
+feature:
+  title: Für Erweiterungen entworfen
+  description: >
+    Kubernetes kann ohne Änderungen am Upstream-Quelltext erweitert werden.
---
@@ -0,0 +1,18 @@
---
title: IPv4/IPv6 dual-stack
description: >-
  Kubernetes erlaubt Netzwerkkonfigurationen mit IPv4 oder IPv6 (Single Stack).
  Im Dual-Stack-Betrieb kann IPv4 im Verbund mit IPv6 verwendet werden.

feature:
  title: IPv4/IPv6 Dual-Stack
  description: >
    Pods und Dienste können gleichzeitig IPv4- und IPv6-Adressen verwenden.
content_type: concept
reviewers:
- lachie83
- khenidak
- aramase
- bridgetkromhout
weight: 90
---
@@ -0,0 +1,12 @@
---
title: Services
feature:
  title: Service-Discovery und Load Balancing
  description: >
    Anwendungen müssen keinen komplizierten Mechanismus für Service-Discovery verwenden. Kubernetes verteilt IP-Adressen und DNS-Einträge automatisch an Pods und übernimmt auch das Load Balancing.
description: >-
  Veröffentliche deine Applikation über einen einzelnen, nach außen sichtbaren Endpunkt,
  auch wenn die Workload über mehrere Backends verteilt ist.
content_type: concept
weight: 10
---
@@ -0,0 +1,10 @@
---
title: Persistente Volumes
feature:
  title: Speicher-Orchestrierung
  description: >
    Binde automatisch deinen gewünschten Speicher ein. Egal, ob lokaler Speicher, Speicher eines Cloud Providers (z.B. AWS oder GCP) oder ein Netzwerkspeicher (z.B. NFS, iSCSI, Ceph oder Cinder).

content_type: concept
weight: 10
---
@@ -0,0 +1,10 @@
---
title: Deployments
feature:
  title: Automatisierte Rollouts und Rollbacks
  description: >
    Kubernetes wendet Änderungen an deinen Anwendungen oder seiner eigenen Konfiguration stufenweise an. Währenddessen achtet es darauf, dass nicht alle Instanzen der Anwendung zur gleichen Zeit beeinträchtigt werden. Falls etwas schief geht, macht Kubernetes die Änderungen rückgängig.

content_type: concept
weight: 10
---
@@ -0,0 +1,9 @@
---
title: Jobs
content_type: concept
feature:
  title: Stapelweise Ausführung
  description: >
    Neben Diensten kann Kubernetes auch die stapelweise Ausführung von Workloads verwalten. Im Falle eines Fehlers können Container ausgetauscht werden.
weight: 50
---
@@ -0,0 +1,10 @@
---
title: ReplicaSet
feature:
  title: Selbstheilung
  anchor: Funktionsweise eines ReplicaSets
  description: >
    Container werden mithilfe von Health-Checks überwacht und im Falle eines Fehlers neu gestartet. Sie werden erst wieder verwendet, wenn sie komplett einsatzbereit sind.
content_type: concept
weight: 20
---
@@ -3,3 +3,25 @@ title: "Werkzeuge installieren"
weight: 10
---

## kubectl

Das Kubernetes-Befehlszeilenprogramm [kubectl](/docs/user-guide/kubectl/) ermöglicht es Ihnen, Befehle auf einem Kubernetes-Cluster auszuführen. Sie können mit kubectl Anwendungen bereitstellen, Cluster-Ressourcen überwachen und verwalten sowie Logs einsehen.
Weitere Informationen über alle verfügbaren `kubectl`-Befehle finden Sie in der [Kommandoreferenz von kubectl](/docs/reference/kubectl/).

`kubectl` kann unter Linux, macOS und Windows installiert werden. [Hier](install-kubectl) finden Sie Anleitungen zur Installation von `kubectl`.

## kind

Mit [`kind`](https://kind.sigs.k8s.io/) können Sie Kubernetes lokal auf Ihrem Computer ausführen. Voraussetzung hierfür ist eine konfigurierte und funktionierende [Docker](https://docs.docker.com/get-docker/)-Installation.

Die [Schnellstart](https://kind.sigs.k8s.io/docs/user/quick-start/)-Seite von `kind` gibt Informationen darüber, was für den schnellen Einstieg mit `kind` benötigt wird.

## minikube

Ähnlich wie `kind` ist [`minikube`](https://minikube.sigs.k8s.io/) ein Tool, mit dem man Kubernetes lokal auf dem Computer ausführen kann. Minikube erstellt Cluster mit einem oder mehreren Nodes. Somit ist es ein praktisches Tool für tägliche Entwicklungsaktivitäten mit Kubernetes, oder um Kubernetes einfach einmal lokal auszuprobieren.

[Hier](/install-minikube) erfahren Sie, wie Sie `minikube` auf Ihrem Computer installieren können.
Falls Sie `minikube` bereits installiert haben, können Sie es verwenden, um eine [Beispiel-Anwendung bereitzustellen](/docs/tutorials/hello-minikube/).

## kubeadm

Mit `kubeadm` können Sie Kubernetes-Cluster erstellen und verwalten. `kubeadm` führt alle notwendigen Schritte aus, um ein minimales, aber sicheres Cluster in einer benutzerfreundlichen Art und Weise aufzusetzen.
[Auf dieser Seite](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) finden Sie Anleitungen zur Installation von `kubeadm`.
Sobald Sie `kubeadm` installiert haben, erfahren Sie [hier](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), wie man ein Cluster mit `kubeadm` erstellt.
@@ -7,6 +7,7 @@ slug: grpc-probes-now-in-beta

**Author**: Sergey Kanzhelev (Google)

+_Update: Since this article was posted, the feature graduated to GA in v1.27 and doesn't require any feature gates to be enabled._

With Kubernetes 1.24 the gRPC probes functionality entered beta and is available by default.
Now you can configure startup, liveness, and readiness probes for your gRPC app
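The hunk above is cut off at the diff boundary; as a hedged illustration of the configuration it describes, a liveness probe using the built-in gRPC checker might look like this (the pod name, image, and port are illustrative, not taken from the diff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo          # illustrative name
spec:
  containers:
  - name: server
    # agnhost ships a gRPC health-checking endpoint; any server implementing
    # grpc.health.v1.Health would work the same way
    image: registry.k8s.io/e2e-test-images/agnhost:2.40
    command: ["/agnhost", "grpc-health-checking"]
    ports:
    - containerPort: 5000
    livenessProbe:
      grpc:
        port: 5000               # must point at the gRPC health service
      initialDelaySeconds: 10
```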
@@ -40,7 +40,7 @@ compatible behavior when disabled, and to document how to interact with each oth

This enabled the Kubernetes project to graduate the CPU Manager core component and core CPU allocation algorithms to GA,
while also enabling a new age of experimentation in this area.
-In Kubernetes v1.26, the CPU Manager supports [three different policy options](/docs/tasks/administer-cluster/cpu-management-policies.md#static-policy-options):
+In Kubernetes v1.26, the CPU Manager supports [three different policy options](/docs/tasks/administer-cluster/cpu-management-policies#static-policy-options):

`full-pcpus-only`
: restrict the CPU Manager core allocation algorithm to full physical cores only, reducing noisy neighbor issues from hardware technologies that allow sharing cores.
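For context, policy options like the one above are selected through the kubelet configuration file; a minimal sketch (values illustrative, not part of the diff) could look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# the static policy must be active before any policy options take effect
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  full-pcpus-only: "true"   # restrict allocation to full physical cores
```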
@@ -0,0 +1,223 @@
---
layout: blog
title: "Kubernetes v1.27: Chill Vibes"
date: 2023-04-11
slug: kubernetes-v1-27-release
---

**Authors**: [Kubernetes v1.27 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.27/release-team.md)

Announcing the release of Kubernetes v1.27, the first release of 2023!

This release consists of 60 enhancements. 18 of those enhancements are entering Alpha, 29 are graduating to Beta, and 13 are graduating to Stable.

## Release theme and logo

**Kubernetes v1.27: Chill Vibes**

The theme for Kubernetes v1.27 is *Chill Vibes*.

{{< figure src="/images/blog/2023-04-11-kubernetes-1.27-blog/kubernetes-1.27.png" alt="Kubernetes 1.27 Chill Vibes logo" class="release-logo" >}}

It's a little silly, but there were some important shifts in this release that helped inspire the theme. Throughout a typical Kubernetes release cycle, there are several deadlines that features need to meet to remain included. If a feature misses any of these deadlines, there is an exception process it can go through. Handling these exceptions is a very normal part of the release. But v1.27 is the first release anyone can remember where we didn't receive a single exception request after the enhancements freeze. Even as the release progressed, things remained much calmer than any of us are used to.

There's a specific reason we were able to enjoy a calmer release this time around, and that's all the work that folks put in behind the scenes to improve how we manage the release. That's what this theme celebrates: people putting in the work to make things better for the community.

Special thanks to [Britnee Laverack](https://www.instagram.com/artsyfie/) for creating the logo. Britnee also designed the logo for [Kubernetes 1.24: Stargazer](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#release-theme-and-logo).

# What's New (Major Themes)

## Freeze `k8s.gcr.io` image registry

The old image registry, [k8s.gcr.io](https://cloud.google.com/container-registry/), is being replaced by [registry.k8s.io](https://github.com/kubernetes/registry.k8s.io), which has been generally available for several months. The Kubernetes project created and runs the `registry.k8s.io` image registry, which is fully controlled by the community.
This means that the old registry `k8s.gcr.io` will be frozen and no further images for Kubernetes and related sub-projects will be published to the old registry.

What does this change mean for contributors?

* If you are a maintainer of a sub-project, you will need to update your manifests and Helm charts to use the new registry. For more information, check out this [project](https://github.com/kubernetes-sigs/community-images).

What does this change mean for end users?

* The Kubernetes `v1.27` release will not be published to the `k8s.gcr.io` registry.

* Patch releases for `v1.24`, `v1.25`, and `v1.26` will no longer be published to the old registry after April.

* Starting in v1.25, the default image registry has been set to `registry.k8s.io`. This value is overridable in kubeadm and the kubelet, but setting it to `k8s.gcr.io` will fail for new releases after April, as they won’t be present in the old registry.

* If you want to increase the reliability of your cluster and remove your dependency on the community-owned registry, or you are running Kubernetes in networks where external traffic is restricted, you should consider hosting local image registry mirrors. Some cloud vendors may offer hosted solutions for this.
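One common way end users pin the registry explicitly is through the kubeadm cluster configuration; a minimal sketch (the version value is illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.0
# pull control-plane images from the community-run registry
imageRepository: registry.k8s.io
```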
## `SeccompDefault` graduates to stable

To use seccomp profile defaulting, you must run the kubelet with the `--seccomp-default` [command line flag](/docs/reference/command-line-tools-reference/kubelet) enabled for each node where you want to use it.
If enabled, the kubelet will use the `RuntimeDefault` seccomp profile by default, which is defined by the container runtime, instead of using the `Unconfined` (seccomp disabled) mode. The default profiles aim to provide a strong set of security defaults while preserving the functionality of the workload. It is possible that the default profiles differ between container runtimes and their release versions.

You can find detailed information about a possible upgrade and downgrade strategy in the related Kubernetes Enhancement Proposal (KEP): [Enable seccomp by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2413-seccomp-by-default).
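Independent of the kubelet-wide default, a single workload can opt in to the same profile explicitly; a minimal sketch (pod name and image illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-default-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # same profile the kubelet applies with --seccomp-default
  containers:
  - name: app
    image: nginx:1.25
```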
## Mutable scheduling directives for Jobs graduates to GA

This was introduced in v1.22 at beta level, and is now stable. In most cases a parallel job will want the pods to run with constraints, like all in the same zone, or all either on GPU model x or y but not a mix of both. The `suspend` field is the first step towards achieving those semantics. `suspend` allows a custom queue controller to decide when a job should start. However, once a job is unsuspended, a custom queue controller has no influence on where the pods of a job will actually land.

This feature allows updating a Job's scheduling directives before it starts, which gives custom queue controllers the ability to influence pod placement while at the same time offloading actual pod-to-node assignment to kube-scheduler. This is allowed only for suspended Jobs that have never been unsuspended before. The fields in a Job's pod template that can be updated are node affinity, node selector, tolerations, labels, annotations, and [scheduling gates](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/).
Find more details in the KEP: [Allow updating scheduling directives of jobs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/2926-job-mutable-scheduling-directives).
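A suspended Job that a queue controller could mutate before releasing might be sketched like this (names, zone label value, and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-job
spec:
  suspend: true                # a queue controller flips this to false to start the Job
  parallelism: 3
  template:
    spec:
      nodeSelector:            # may be updated while the Job is still suspended
        topology.kubernetes.io/zone: zone-a
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo working"]
      restartPolicy: Never
```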
## DownwardAPIHugePages graduates to stable

In Kubernetes v1.20, support for `requests.hugepages-<pagesize>` and `limits.hugepages-<pagesize>` was added to the [downward API](/docs/concepts/workloads/pods/downward-api/) to be consistent with other resources like cpu, memory, and ephemeral storage. This feature graduates to stable in this release. You can find more details in the KEP: [Downward API HugePages](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2053-downward-api-hugepages).
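Exposing a hugepages limit through the downward API could be sketched as follows (pod name, env var name, and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-downward-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo limit=$HUGEPAGES_2MI_LIMIT && sleep 3600"]
    resources:
      requests:
        hugepages-2Mi: 100Mi
        memory: 100Mi
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
    env:
    - name: HUGEPAGES_2MI_LIMIT
      valueFrom:
        resourceFieldRef:        # downward API resource field
          containerName: app
          resource: limits.hugepages-2Mi
```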
## Pod Scheduling Readiness goes to beta

Upon creation, Pods are considered ready for scheduling, and the Kubernetes scheduler does its due diligence to find nodes to place all pending Pods. However, in real-world cases, some Pods may stay in a _missing-essential-resources_ state for a long period. These Pods churn the scheduler (and downstream integrators like Cluster Autoscaler) in an unnecessary manner.

By specifying or removing a Pod's `.spec.schedulingGates`, you can control when a Pod is ready to be considered for scheduling.

The `schedulingGates` field contains a list of strings, and each string literal is perceived as a criterion that must be satisfied before a Pod is considered schedulable. This field can be initialized only when a Pod is created (either by the client, or mutated during admission). After creation, each schedulingGate can be removed in arbitrary order, but addition of a new scheduling gate is disallowed.
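A gated Pod could be declared like this (the gate name is an illustrative placeholder; an external controller would remove it when its condition is met):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  - name: example.com/quota-check   # illustrative gate; remove to unblock scheduling
  containers:
  - name: app
    image: nginx:1.25
```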
## Node log access via Kubernetes API

This feature helps cluster administrators debug issues with services running on nodes by allowing them to query service logs. To use this feature, ensure that the `NodeLogQuery` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled on that node, and that the kubelet configuration options `enableSystemLogHandler` and `enableSystemLogQuery` are both set to true.
On Linux, we assume that service logs are available via journald. On Windows, we assume that service logs are available in the application log provider. You can also fetch logs from the `/var/log/` and `C:\var\log` directories on Linux and Windows, respectively.

A cluster administrator can try out this alpha feature across all nodes of their cluster, or on a subset of them.
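Putting the prerequisites above together, a per-node kubelet configuration enabling the feature could be sketched as:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true
enableSystemLogHandler: true   # serves logs over the kubelet's /logs endpoint
enableSystemLogQuery: true     # allows the query API on top of it
```

With this in place, logs are queried through the node proxy, e.g. `kubectl get --raw "/api/v1/nodes/<node-name>/proxy/logs/?query=kubelet"` (node name is a placeholder).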
## ReadWriteOncePod PersistentVolume access mode goes to beta

Kubernetes `v1.22` introduced a new access mode, `ReadWriteOncePod`, for [PersistentVolumes](/docs/concepts/storage/persistent-volumes/#persistent-volumes) (PVs) and [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVCs). This access mode enables you to restrict volume access to a single pod in the cluster, ensuring that only one pod can write to the volume at a time. This can be particularly useful for stateful workloads that require single-writer access to storage.

The ReadWriteOncePod beta adds support for [scheduler preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/) of pods that use ReadWriteOncePod PVCs.
Scheduler preemption allows higher-priority pods to preempt lower-priority pods. For example, when a pod (A) with a `ReadWriteOncePod` PVC is scheduled, if another pod (B) is found using the same PVC and pod (A) has higher priority, the scheduler will return an `Unschedulable` status and attempt to preempt pod (B).
For more context, see the KEP: [ReadWriteOncePod PersistentVolume AccessMode](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode).
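Requesting the access mode is a one-line change on the claim; a minimal sketch (name and size illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
  - ReadWriteOncePod   # only one pod in the cluster may use this volume
  resources:
    requests:
      storage: 1Gi
```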
## Respect PodTopologySpread after rolling upgrades

`matchLabelKeys` is a list of pod label keys used to select the pods over which spreading will be calculated. The keys are used to look up values from the pod labels. Those key-value labels are ANDed with `labelSelector` to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the pod labels are ignored. A null or empty list means only match against the `labelSelector`.

With `matchLabelKeys`, users don't need to update the `pod.spec` between different revisions. The controller/operator just needs to set different values for the same label key for different revisions. The scheduler will pick up the values automatically based on `matchLabelKeys`. For example, if users use a Deployment, they can use the label keyed with `pod-template-hash`, which is added automatically by the Deployment controller, to distinguish between different revisions in a single Deployment.
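The `pod-template-hash` example above could be wired into a spread constraint like this (Deployment name, replica count, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-demo
spec:
  replicas: 6
  selector:
    matchLabels:
      app: spread-demo
  template:
    metadata:
      labels:
        app: spread-demo
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: spread-demo
        matchLabelKeys:
        - pod-template-hash   # spread is computed per Deployment revision
      containers:
      - name: app
        image: nginx:1.25
```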
## Faster SELinux volume relabeling using mounts

In this release, the way SELinux labels are applied to volumes used by Pods graduates to beta. This feature speeds up container startup by mounting volumes with the correct SELinux label instead of changing each file on the volume recursively. A Linux kernel with SELinux support allows the first mount of a volume to set the SELinux label on the whole volume using the `-o context=` mount option. This way, all files are assigned the given label in constant time, without recursively walking the whole volume.

The `context` mount option cannot be applied to bind mounts or re-mounts of already mounted volumes.
For CSI storage, a CSI driver does the first mount of a volume, and so it must be the CSI driver that actually applies this mount option. We added a new field, `seLinuxMount`, to CSIDriver objects, so that drivers can announce whether they support the `-o context` mount option.

If Kubernetes knows the SELinux label of a Pod **and** the CSI driver responsible for a pod's volume announces `seLinuxMount: true` **and** the volume has access mode `ReadWriteOncePod`, then it will ask the CSI driver to mount the volume with the mount option `context=` **and** it will tell the container runtime not to relabel the content of the volume (because all files already have the right label).
Get more information on this from the KEP: [Speed up SELinux volume relabeling using mounts](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling).
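A driver announcing support for the mount option could be sketched as follows (the driver name is an illustrative placeholder; a real CSIDriver object typically carries additional spec fields set by the driver's deployment manifests):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io   # illustrative driver name
spec:
  seLinuxMount: true            # driver supports mounting with -o context=
```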
|
||||
|
||||
## Robust VolumeManager reconstruction goes to beta
|
||||
|
||||
This is a volume manager refactoring that allows the kubelet to populate additional information about how
|
||||
existing volumes are mounted during the kubelet startup. In general, this makes volume cleanup more robust.
|
||||
If you enable the `NewVolumeManagerReconstruction` feature gate on a node, you'll get enhanced discovery of mounted volumes during kubelet startup.
|
||||
|
||||
Before Kubernetes v1.25, the kubelet used different default behavior for discovering mounted volumes during the kubelet startup. If you disable this feature gate (it's enabled by default), you select the legacy discovery behavior.
|
||||
|
||||
In Kubernetes v1.25 and v1.26, this behavior toggle was part of the `SELinuxMountReadWriteOncePod` feature gate.
|
||||
|
||||
## Mutable Pod Scheduling Directives goes to beta
|
||||
|
||||
This allows mutating a pod that is blocked on a scheduling readiness gate with a more constrained node affinity/selector. It gives the ability to mutate a pods scheduling directives before it is allowed to be scheduled and gives an external resource controller the ability to influence pod placement while at the same time offload actual pod-to-node assignment to kube-scheduler.
|
||||
|
||||
This opens the door for a new pattern of adding scheduling features to Kubernetes. Specifically, building lightweight schedulers that implement features not supported by kube-scheduler, while relying on the existing kube-scheduler to support all upstream features and handle the pod-to-node binding. This pattern should be the preferred one if the custom feature doesn't require implementing a schedule plugin, which entails re-building and maintaining a custom kube-scheduler binary.
|
||||
|
||||
## Feature graduations and deprecations in Kubernetes v1.27
|
||||
### Graduations to stable
|
||||
|
||||
This release includes a total of 9 enhancements promoted to Stable:
|
||||
|
||||
* [Default container annotation that to be used by kubectl](https://github.com/kubernetes/enhancements/issues/2227)
|
||||
* [TimeZone support in CronJob](https://github.com/kubernetes/enhancements/issues/3140)
|
||||
* [Expose metrics about resource requests and limits that represent the pod model](https://github.com/kubernetes/enhancements/issues/1748)
|
||||
* [Server Side Unknown Field Validation](https://github.com/kubernetes/enhancements/issues/2885)
|
||||
* [Node Topology Manager](https://github.com/kubernetes/enhancements/issues/693)
|
||||
* [Add gRPC probe to Pod.Spec.Container.{Liveness,Readiness,Startup} Probe](https://github.com/kubernetes/enhancements/issues/2727)
|
||||
* [Add configurable grace period to probes](https://github.com/kubernetes/enhancements/issues/2238)
|
||||
* [OpenAPI v3](https://github.com/kubernetes/enhancements/issues/2896)
|
||||
* [Stay on supported Go versions](https://github.com/kubernetes/enhancements/issues/3744)
|
||||
|
||||
### Deprecations and removals
|
||||
|
||||
This release saw several removals:
|
||||
|
||||
* [Removal of `storage.k8s.io/v1beta1` from CSIStorageCapacity](https://github.com/kubernetes/kubernetes/pull/108445)
|
||||
* [Removal of support for deprecated seccomp annotations](https://github.com/kubernetes/kubernetes/pull/114947)
|
||||
* [Removal of `--master-service-namespace` command line argument](https://github.com/kubernetes/kubernetes/pull/112797)
|
||||
* [Removal of the `ControllerManagerLeaderMigration` feature gate](https://github.com/kubernetes/kubernetes/pull/113534)
|
||||
* [Removal of `--enable-taint-manager` command line argument](https://github.com/kubernetes/kubernetes/pull/111411)
|
||||
* [Removal of `--pod-eviction-timeout` command line argument](https://github.com/kubernetes/kubernetes/pull/113710)
|
||||
* [Removal of the `CSI Migration` feature gate](https://github.com/kubernetes/kubernetes/pull/110410)
|
||||
* [Removal of `CSIInlineVolume` feature gate](https://github.com/kubernetes/kubernetes/pull/111258)
|
||||
* [Removal of `EphemeralContainers` feature gate](https://github.com/kubernetes/kubernetes/pull/111402)
|
||||
* [Removal of `LocalStorageCapacityIsolation` feature gate](https://github.com/kubernetes/kubernetes/pull/111513)
|
||||
* [Removal of `NetworkPolicyEndPort` feature gate](https://github.com/kubernetes/kubernetes/pull/110868)
|
||||
* [Removal of `StatefulSetMinReadySeconds` feature gate](https://github.com/kubernetes/kubernetes/pull/110896)
|
||||
* [Removal of `IdentifyPodOS` feature gate](https://github.com/kubernetes/kubernetes/pull/111229)
|
||||
* [Removal of `DaemonSetUpdateSurge` feature gate](https://github.com/kubernetes/kubernetes/pull/111194)
|
||||
|
||||
## Release notes
|
||||
|
||||
The complete details of the Kubernetes v1.27 release are available in our [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md).
|
||||
|
||||
## Availability
|
||||
|
||||
Kubernetes v1.27 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.27.0). To get started with Kubernetes, you can run local Kubernetes clusters using [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8s.io/), etc. You can also easily install v1.27 using [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/).

## Release team

Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires people with specialised skills from all corners of our community, from the code itself to its documentation and project management.

Special thanks to our Release Lead Xander Grzywinski for guiding us through a smooth and successful release cycle and to all members of the release team for supporting one another and working so hard to produce the v1.27 release for the community.

## Ecosystem updates

* KubeCon + CloudNativeCon Europe 2023 will take place in Amsterdam, The Netherlands, from 17 – 21 April 2023! You can find more information about the conference and registration on the [event site](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/).
* cdCon + GitOpsCon will be held in Vancouver, Canada, on May 8th and 9th, 2023! More information about the conference and registration can be found on the [event site](https://events.linuxfoundation.org/cdcon-gitopscon/).

## Project velocity

The [CNCF K8s DevStats](https://k8s.devstats.cncf.io/d/12/dashboards?orgId=1&refresh=15m) project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing, and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.

In the v1.27 release cycle, which [ran for 14 weeks](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.27) (January 9 to April 11), we saw contributions from [1020 companies](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.26.0%20-%20now&var-metric=contributions) and [1603 individuals](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.26.0%20-%20now&var-metric=contributions&var-repogroup_name=Kubernetes&var-repo_name=kubernetes%2Fkubernetes&var-country_name=All&var-companies=All).

## Upcoming release webinar

Join members of the Kubernetes v1.27 release team on Friday, April 14, 2023, at 10 a.m. PDT to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the [event page](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-v127-release/) on the CNCF Online Programs site.

## Get Involved

The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests.

Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication), and through the channels below:

* Find out more about contributing to Kubernetes at the [Kubernetes Contributors website](https://www.kubernetes.dev/).
* Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates.
* Join the community discussion on [Discuss](https://discuss.kubernetes.io/).
* Join the community on [Slack](https://communityinviter.com/apps/kubernetes/community).
* Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes).
* [Share](https://docs.google.com/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform) your Kubernetes story.
* Read more about what’s happening with Kubernetes on the [blog](https://kubernetes.io/blog/).
* Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team).

---
layout: blog
title: "Kubernetes 1.27: More fine-grained pod topology spread policies reached beta"
date: 2023-04-17
slug: fine-grained-pod-topology-spread-features-beta
---

**Authors:** [Alex Wang](https://github.com/denkensk) (Shopee), [Kante Yin](https://github.com/kerthcet) (DaoCloud), [Kensei Nakada](https://github.com/sanposhiho) (Mercari)

In Kubernetes v1.19, [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
went to general availability (GA).

As time passed, we - SIG Scheduling - received feedback from users,
and, as a result, we're actively working on improving the Topology Spread feature via three KEPs.
All of these features have reached beta in Kubernetes v1.27 and are enabled by default.

This blog post introduces each feature and the use case behind each of them.

## KEP-3022: min domains in Pod Topology Spread

Pod Topology Spread has the `maxSkew` parameter to define the degree to which Pods may be unevenly distributed.

But there wasn't a way to control the number of domains over which we should spread.
Some users want to force spreading Pods over a minimum number of domains, and if there aren't enough already present, make the cluster autoscaler provision them.

Kubernetes v1.24 introduced the `minDomains` parameter for pod topology spread constraints,
as an alpha feature.
Via the `minDomains` parameter, you can define the minimum number of domains.

For example, assume there are 3 Nodes with enough capacity,
and a newly created ReplicaSet has the following `topologySpreadConstraints` in its Pod template.

```yaml
...
topologySpreadConstraints:
- maxSkew: 1
  minDomains: 5 # requires at least 5 Nodes (because each Node has a unique hostname).
  whenUnsatisfiable: DoNotSchedule # minDomains is valid only when DoNotSchedule is used.
  topologyKey: kubernetes.io/hostname
  labelSelector:
    matchLabels:
      foo: bar
```

In this case, 3 Pods will be scheduled to those 3 Nodes,
but the other 2 Pods from this ReplicaSet will be unschedulable until more Nodes join the cluster.

You can imagine that the cluster autoscaler provisions new Nodes based on these unschedulable Pods,
and as a result, the replicas are finally spread over 5 Nodes.

## KEP-3094: Take taints/tolerations into consideration when calculating podTopologySpread skew

Before this enhancement, when you deploy a pod with `podTopologySpread` configured, kube-scheduler would
take the Nodes that satisfy the Pod's nodeAffinity and nodeSelector into consideration
in filtering and scoring, but would not care about whether the node taints are tolerated by the incoming pod or not.
This may lead to a node with an untolerated taint being the only candidate for spreading, and as a result,
the pod will be stuck in Pending if it doesn't tolerate the taint.

To allow more fine-grained decisions about which Nodes to account for when calculating spreading skew,
Kubernetes 1.25 introduced two new fields within `topologySpreadConstraints` to define node inclusion policies:
`nodeAffinityPolicy` and `nodeTaintsPolicy`.

A manifest that applies these policies looks like the following:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Configure a topology spread constraint
  topologySpreadConstraints:
    - maxSkew: <integer>
      # ...
      nodeAffinityPolicy: [Honor|Ignore]
      nodeTaintsPolicy: [Honor|Ignore]
  # other Pod fields go here
```

The `nodeAffinityPolicy` field indicates how Kubernetes treats a Pod's `nodeAffinity` or `nodeSelector` for
pod topology spreading.
If `Honor`, kube-scheduler filters out nodes not matching `nodeAffinity`/`nodeSelector` in the calculation of
spreading skew.
If `Ignore`, all nodes will be included, regardless of whether they match the Pod's `nodeAffinity`/`nodeSelector`
or not.

For backwards compatibility, `nodeAffinityPolicy` defaults to `Honor`.

The `nodeTaintsPolicy` field defines how Kubernetes considers node taints for pod topology spreading.
If `Honor`, only tainted nodes for which the incoming pod has a toleration will be included in the calculation of spreading skew.
If `Ignore`, kube-scheduler will not consider the node taints at all in the calculation of spreading skew, so a node with
a taint the pod does not tolerate will also be included.

For backwards compatibility, `nodeTaintsPolicy` defaults to `Ignore`.

The feature was introduced in v1.25 as alpha. By default, it was disabled, so if you wanted to use this feature in v1.25,
you had to explicitly enable the feature gate `NodeInclusionPolicyInPodTopologySpread`. In the following v1.26
release, the associated feature graduated to beta and is enabled by default.

## KEP-3243: Respect Pod topology spread after rolling upgrades

Pod Topology Spread uses the field `labelSelector` to identify the group of pods over which
spreading will be calculated. When using topology spreading with Deployments, it is common
practice to use the `labelSelector` of the Deployment as the `labelSelector` in the topology
spread constraints. However, this implies that all pods of a Deployment are part of the spreading
calculation, regardless of whether they belong to different revisions. As a result, when a new revision
is rolled out, spreading will apply across pods from both the old and new ReplicaSets, and so by the
time the new ReplicaSet is completely rolled out and the old one is scaled down, the actual spreading
we are left with may not match expectations because the deleted pods from the older ReplicaSet will cause
skewed distribution for the remaining pods. To avoid this problem, in the past users needed to add a
revision label to the Deployment and update it manually at each rolling upgrade (both the label on the
pod template and the `labelSelector` in the `topologySpreadConstraints`).

To solve this problem with a simpler API, Kubernetes v1.25 introduced a new field named
`matchLabelKeys` to `topologySpreadConstraints`. `matchLabelKeys` is a list of pod label keys to select
the pods over which spreading will be calculated. The keys are used to look up values from the labels of
the Pod being scheduled; those key-value labels are ANDed with `labelSelector` to select the group of
existing pods over which spreading will be calculated for the incoming pod.

With `matchLabelKeys`, you don't need to update the `pod.spec` between different revisions.
The controller or operator managing rollouts just needs to set different values to the same label key for different revisions.
The scheduler resolves the values automatically based on `matchLabelKeys`.
For example, if you are configuring a Deployment, you can use the label keyed with
[pod-template-hash](/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label),
which is added automatically by the Deployment controller, to distinguish between different
revisions in a single Deployment.

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: foo
  matchLabelKeys:
  - pod-template-hash
```

## Getting involved

These features are managed by Kubernetes [SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling).

Please join us and share your feedback. We look forward to hearing from you!

## How can I learn more?

- [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/) in the Kubernetes documentation
- [KEP-3022: min domains in Pod Topology Spread](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/3022-min-domains-in-pod-topology-spread)
- [KEP-3094: Take taints/tolerations into consideration when calculating PodTopologySpread skew](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/3094-pod-topology-spread-considering-taints)
- [KEP-3243: Respect PodTopologySpread after rolling upgrades](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/3243-respect-pod-topology-spread-after-rolling-upgrades)

---
layout: blog
title: "Kubernetes 1.27: Efficient SELinux volume relabeling (Beta)"
date: 2023-04-18T10:00:00-08:00
slug: kubernetes-1-27-efficient-selinux-relabeling-beta
---

**Author:** Jan Šafránek (Red Hat)

# The problem

On Linux with Security-Enhanced Linux (SELinux) enabled, it's traditionally
the container runtime that applies SELinux labels to a Pod and all its volumes.
Kubernetes only passes the SELinux label from a Pod's `securityContext` fields
to the container runtime.

The container runtime then recursively changes the SELinux label on all files that
are visible to the Pod's containers. This can be time-consuming if there are
many files on the volume, especially when the volume is on a remote filesystem.

{{% alert title="Note" color="info" %}}
If a container uses a `subPath` of a volume, only that `subPath` of the whole
volume is relabeled. This allows two pods that have two different SELinux labels
to use the same volume, as long as they use different subpaths of it.
{{% /alert %}}

If a Pod does not have any SELinux label assigned in the Kubernetes API, the
container runtime assigns a unique random one, so a process that potentially
escapes the container boundary cannot access data of any other container on the
host. The container runtime still recursively relabels all pod volumes with this
random SELinux label.

# Improvement using mount options

If a Pod and its volume meet **all** of the following conditions, Kubernetes will
_mount_ the volume directly with the right SELinux label. Such a mount happens
in constant time and the container runtime will not need to recursively
relabel any files on it.

1. The operating system must support SELinux.

   Without SELinux support detected, kubelet and the container runtime do not
   do anything with regard to SELinux.

1. The [feature gates](/docs/reference/command-line-tools-reference/feature-gates/)
   `ReadWriteOncePod` and `SELinuxMountReadWriteOncePod` must be enabled.
   These feature gates are Beta in Kubernetes 1.27 and were Alpha in 1.25.

   With either of these feature gates disabled, SELinux labels will always be
   applied by the container runtime by a recursive walk through the volume
   (or its subPaths).

1. The Pod must have at least `seLinuxOptions.level` assigned in its [Pod Security Context](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) or all Pod containers must have it set in their [Security Contexts](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1).
   Kubernetes will read the default `user`, `role` and `type` from the operating
   system defaults (typically `system_u`, `system_r` and `container_t`).

   Without Kubernetes knowing at least the SELinux `level`, the container
   runtime will assign a random one _after_ the volumes are mounted. The
   container runtime will still relabel the volumes recursively in that case.

1. The volume must be a Persistent Volume with
   [Access Mode](/docs/concepts/storage/persistent-volumes/#access-modes)
   `ReadWriteOncePod`.

   This is a limitation of the initial implementation. As described above,
   two Pods can have different SELinux labels and still use the same volume,
   as long as they use different `subPath`s of it. This use case is not
   possible when the volumes are _mounted_ with the SELinux label, because the
   whole volume is mounted and most filesystems don't support mounting a single
   volume multiple times with multiple SELinux labels.

   If running two Pods with two different SELinux contexts and using
   different `subPath`s of the same volume is necessary in your deployments,
   please comment in the [KEP](https://github.com/kubernetes/enhancements/issues/1710)
   issue (or upvote any existing comment - it's best not to duplicate).
   Such pods may not run when the feature is extended to cover all volume access modes.

1. The volume plugin or the CSI driver responsible for the volume supports
   mounting with SELinux mount options.

   These in-tree volume plugins support mounting with SELinux mount options:
   `fc`, `iscsi`, and `rbd`.

   CSI drivers that support mounting with SELinux mount options must announce
   that in their
   [CSIDriver](/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/)
   instance by setting the `seLinuxMount` field.

   Volumes managed by other volume plugins or CSI drivers that don't
   set `seLinuxMount: true` will be recursively relabeled by the container
   runtime.
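
To make the last two conditions concrete, here is a minimal sketch of a Pod that supplies an SELinux `level` and a CSIDriver that advertises SELinux mount support. The Pod name, driver name, image, and `level` value are hypothetical; only `seLinuxOptions.level` and `seLinuxMount` are the fields this feature actually inspects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-example            # hypothetical name
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c10,c20"          # the level Kubernetes passes down as a mount option
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io      # hypothetical driver name
spec:
  seLinuxMount: true               # driver accepts -o context=<label> mount options
```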

## Mounting with SELinux context

When all aforementioned conditions are met, kubelet will
pass the `-o context=<SELinux label>` mount option to the volume plugin or CSI
driver. CSI driver vendors must ensure that this mount option is supported
by their CSI driver and, if necessary, the CSI driver appends other mount
options that are needed for `-o context` to work.

For example, NFS may need `-o context=<SELinux label>,nosharecache`, so each
volume mounted from the same NFS server can have a different SELinux label
value. Similarly, CIFS may need `-o context=<SELinux label>,nosharesock`.

It's up to the CSI driver vendor to test their CSI driver in an SELinux-enabled
environment before setting `seLinuxMount: true` in the CSIDriver instance.

# How can I learn more?

For SELinux in containers, see the excellent
[visual SELinux guide](https://opensource.com/business/13/11/selinux-policy-guide)
by Daniel J Walsh. Note that the guide is older than Kubernetes; it describes
*Multi-Category Security* (MCS) mode using virtual machines as an example,
but a similar concept is used for containers.

See this series of blog posts for details on how exactly SELinux is applied to
containers by container runtimes:

* [How SELinux separates containers using Multi-Level Security](https://www.redhat.com/en/blog/how-selinux-separates-containers-using-multi-level-security)
* [Why you should be using Multi-Category Security for your Linux containers](https://www.redhat.com/en/blog/why-you-should-be-using-multi-category-security-your-linux-containers)

Read the KEP: [Speed up SELinux volume relabeling using mounts](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling)

---
layout: blog
title: "Kubernetes 1.27: Single Pod Access Mode for PersistentVolumes Graduates to Beta"
date: 2023-04-20
slug: read-write-once-pod-access-mode-beta
---

**Author:** Chris Henzie (Google)

With the release of Kubernetes v1.27, the ReadWriteOncePod feature has graduated
to beta. In this blog post, we'll take a closer look at this feature, what it
does, and how it has evolved in the beta release.

## What is ReadWriteOncePod?

ReadWriteOncePod is a new access mode for
[PersistentVolumes](/docs/concepts/storage/persistent-volumes/#persistent-volumes) (PVs)
and [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVCs)
introduced in Kubernetes v1.22. This access mode enables you to restrict volume
access to a single pod in the cluster, ensuring that only one pod can write to
the volume at a time. This can be particularly useful for stateful workloads
that require single-writer access to storage.

For more context on access modes and how ReadWriteOncePod works, read
[What are access modes and why are they important?](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#what-are-access-modes-and-why-are-they-important)
in the _Introducing Single Pod Access Mode for PersistentVolumes_ article from 2021.

## Changes in the ReadWriteOncePod beta

The ReadWriteOncePod beta adds support for
[scheduler preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
of pods using ReadWriteOncePod PVCs.

Scheduler preemption allows higher-priority pods to preempt lower-priority pods,
so that they can start running on the same node. With this release, pods using
ReadWriteOncePod PVCs can also be preempted if a higher-priority pod requires
the same PVC.

## How can I start using ReadWriteOncePod?

With ReadWriteOncePod now in beta, it is enabled by default in cluster
versions v1.27 and beyond.

Note that ReadWriteOncePod is
[only supported for CSI volumes](/docs/concepts/storage/persistent-volumes/#access-modes).
Before using this feature you will need to update the following
[CSI sidecars](https://kubernetes-csi.github.io/docs/sidecar-containers.html)
to these versions or greater:

- [csi-provisioner:v3.0.0+](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0)
- [csi-attacher:v3.3.0+](https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0)
- [csi-resizer:v1.3.0+](https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0)

To start using ReadWriteOncePod, create a PVC with the ReadWriteOncePod access mode:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: single-writer-only
spec:
  accessModes:
  - ReadWriteOncePod # Allow only a single pod to access single-writer-only.
  resources:
    requests:
      storage: 1Gi
```

If your storage plugin supports
[dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/),
new PersistentVolumes will be created with the ReadWriteOncePod access mode applied.

Read [Migrating existing PersistentVolumes](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/#migrating-existing-persistentvolumes)
for details on migrating existing volumes to use ReadWriteOncePod.

## How can I learn more?

Please see the [alpha blog post](/blog/2021/09/13/read-write-once-pod-access-mode-alpha)
and [KEP-2485](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2485-read-write-once-pod-pv-access-mode/README.md)
for more details on the ReadWriteOncePod access mode and motivations for CSI spec changes.

## How do I get involved?

The [Kubernetes #csi Slack channel](https://kubernetes.slack.com/messages/csi)
and any of the standard
[SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact)
are great mediums to reach out to SIG Storage and the CSI teams.

Special thanks to the following people whose thoughtful reviews and feedback helped shape this feature:

* Abdullah Gharaibeh (ahg-g)
* Aldo Culquicondor (alculquicondor)
* Antonio Ojea (aojea)
* David Eads (deads2k)
* Jan Šafránek (jsafrane)
* Joe Betz (jpbetz)
* Kante Yin (kerthcet)
* Michelle Au (msau42)
* Tim Bannister (sftim)
* Xing Yang (xing-yang)

If you’re interested in getting involved with the design and development of CSI
or any part of the Kubernetes storage system, join the
[Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
We’re rapidly growing and always welcome new contributors.

---
layout: blog
title: "Kubernetes 1.27: Query Node Logs Using The Kubelet API"
date: 2023-04-21
slug: node-log-query-alpha
---

**Author:** Aravindh Puthiyaparambil (Red Hat)

Kubernetes 1.27 introduced a new feature called _Node log query_ that allows
viewing logs of services running on the node.

## What problem does it solve?

Cluster administrators face issues when debugging malfunctioning services
running on the node. They usually have to SSH or RDP into the node to view the
logs of the service to debug the issue. The _Node log query_ feature helps with
this scenario by allowing the cluster administrator to view the logs using
_kubectl_. This is especially useful with Windows nodes, where you run into the
issue of the node going to the ready state but containers not coming up due to
CNI misconfigurations and other issues that are not easily identifiable by
looking at the Pod status.

## How does it work?

The kubelet already has a _/var/log/_ viewer that is accessible via the node
proxy endpoint. The feature supplements this endpoint with a shim that shells
out to `journalctl` on Linux nodes, and the `Get-WinEvent` cmdlet on Windows
nodes. It then uses the existing filters provided by the commands to allow
filtering the logs. The kubelet also uses heuristics to retrieve the logs.
If the user is not aware whether a given system service logs to a file or to the
native system logger, the heuristics first check the native operating system
logger and, if that is not available, attempt to retrieve the first logs
from `/var/log/<servicename>`, `/var/log/<servicename>.log`, or
`/var/log/<servicename>/<servicename>.log`.

On Linux we assume that service logs are available via journald, and that
`journalctl` is installed. On Windows we assume that service logs are available
in the application log provider. Also note that fetching node logs is only
available if you are authorized to do so (in RBAC, that's **get** and
**create** access to `nodes/proxy`). The privileges that you need to fetch node
logs also allow elevation-of-privilege attacks, so be careful about how you
manage them.

## How do I use it?

To use the feature, ensure that the `NodeLogQuery`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is
enabled for that node, and that the kubelet configuration options
`enableSystemLogHandler` and `enableSystemLogQuery` are both set to true. You can
then query the logs from all your nodes or just a subset. Here is an example to
retrieve the kubelet service logs from a node:

```shell
# Fetch kubelet logs from a node named node-1.example
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
```

You can further filter the query to narrow down the results:

```shell
# Fetch kubelet logs from a node named node-1.example that have the word "error"
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error"
```

You can also fetch files from `/var/log/` on a Linux node:

```shell
kubectl get --raw "/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>"
```

You can read the
[documentation](/docs/concepts/cluster-administration/system-logs/#log-query)
for all the available options.
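
The kubelet options mentioned above live in the kubelet configuration file. A minimal sketch, assuming the file-based KubeletConfiguration format and showing only the fields relevant to this feature:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Both options below must be true for the log query endpoint to work;
# the NodeLogQuery feature gate must also be enabled on this node.
enableSystemLogHandler: true
enableSystemLogQuery: true
featureGates:
  NodeLogQuery: true
```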

## How do I help?

Please use the feature and provide feedback by opening GitHub issues or
reaching out to us on the
[#sig-windows](https://kubernetes.slack.com/archives/C0SJ4AFB7) channel on the
Kubernetes Slack or the SIG Windows
[mailing list](https://groups.google.com/g/kubernetes-sig-windows).

---
layout: blog
title: "Kubernetes 1.27: Server Side Field Validation and OpenAPI V3 move to GA"
date: 2023-04-24
slug: openapi-v3-field-validation-ga
---

**Authors**: Jeffrey Ying (Google), Antoine Pelisse (Google)

Before Kubernetes v1.8 (!), typos, mis-indentations or minor errors in
YAMLs could have catastrophic consequences (e.g. a typo like
forgetting the trailing s in `replica: 1000` could cause an outage,
because the mistyped field would be ignored and, with `replicas`
missing, the replica count would be reset back to 1). This was solved
back then by fetching the OpenAPI
v2 in kubectl and using it to verify that fields were correct and
present before applying. Unfortunately, at that time, Custom Resource
Definitions didn’t exist, and the code was written under that
assumption. When CRDs were later introduced, the lack of flexibility
in the validation code forced some hard decisions in the way CRDs
exposed their schema, leaving us in a cycle of bad validation causing
bad OpenAPI and vice-versa. With the new OpenAPI V3 and Server Side
Field Validation being GA in 1.27, we’ve now solved both of these problems.

Server Side Field Validation offers resource validation on create,
update and patch requests to the apiserver. It was added to Kubernetes
in v1.25, graduated to beta in v1.26, and is now GA in v1.27. It
provides all the functionality of client-side kubectl validation on
the server side.

[OpenAPI](https://swagger.io/specification/) is a standard, language
agnostic interface for discovering the set of operations and types
that a Kubernetes cluster supports. OpenAPI V3 is the latest version
of the OpenAPI standard and is an improvement upon [OpenAPI
V2](https://kubernetes.io/blog/2016/12/kubernetes-supports-openapi/),
which has been supported since Kubernetes 1.5. OpenAPI V3 support was
added to Kubernetes in v1.23, moved to beta in v1.24 and is now GA in
v1.27.

## OpenAPI V3

### What does OpenAPI V3 offer over V2

#### Built-in types

Kubernetes offers certain annotations on fields that are not
representable in OpenAPI V2, or sometimes not represented in the
OpenAPI V2 that Kubernetes generates. Most notably, the "default" field
is published in OpenAPI V3 while omitted in OpenAPI V2. A single type
that can represent multiple types is also expressed correctly in
OpenAPI V3 with the oneOf field. This includes proper representations
for IntOrString and Quantity.

#### Custom Resource Definitions

In Kubernetes, Custom Resource Definitions use a structural OpenAPI V3
schema that cannot be represented as OpenAPI V2 without a loss of
certain fields. Some of these include nullable, default, anyOf, oneOf,
not, etc. OpenAPI V3 is a completely lossless representation of the
CustomResourceDefinition structural schema.

### How do I use it?

The OpenAPI V3 root discovery can be found at the `/openapi/v3`
endpoint of a Kubernetes API server. OpenAPI V3 documents are grouped
by group-version to reduce the size of the data transported; the
separate documents can be accessed at
`/openapi/v3/apis/<group>/<version>`, with `/openapi/v3/api/v1`
representing the legacy group version. Please refer to the [Kubernetes
API Documentation](/docs/concepts/overview/kubernetes-api/) for more
information about this endpoint.

Various consumers of the OpenAPI have already been updated to consume
V3, including the entirety of kubectl, and server side apply. An
OpenAPI V3 Golang client is available in
[client-go](https://github.com/kubernetes/client-go/blob/release-1.27/openapi3/root.go).
## Server Side Field Validation
|
||||
|
||||
The query parameter `fieldValidation` may be used to indicate the
|
||||
level of field validation the server should perform. If the parameter
|
||||
is not passed, server side field validation is in `Warn` mode by
|
||||
default.
|
||||
|
||||
- Strict: Strict field validation, errors on validation failure
|
||||
- Warn: Field validation is performed, but errors are exposed as
|
||||
warnings rather than failing the request
|
||||
- Ignore: No server side field validation is performed
|
||||
|
||||
kubectl will skip client side validation and will automatically use
|
||||
server side field validation in `Strict` mode. Controllers by default
|
||||
use server side field validation in `Warn` mode.
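
As an illustration, the server's behavior surfaces in kubectl through its `--validate` flag; the manifest name below is a placeholder:

```shell
# Fail the request if the manifest has unknown or duplicated fields
# (server side field validation in Strict mode)
kubectl apply --validate=strict -f deployment.yaml
```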

With client side validation, we had to be extra lenient because some
fields were missing from OpenAPI V2 and we didn’t want to reject
possibly valid objects. This is all fixed in server side validation.
Additional documentation may be found in
[Field Validation](/docs/reference/using-api/api-concepts/#field-validation).

## What's next?

With Server Side Field Validation and OpenAPI V3 released as GA, we
introduce more accurate representations of Kubernetes resources. It is
recommended to use server side field validation over client side, but
with OpenAPI V3, clients are free to implement their own validation if
necessary (to “shift things left”), and we guarantee a full lossless
schema published by OpenAPI.

Some existing efforts will further improve the information available
through OpenAPI, including [CEL validation and
admission](/docs/reference/using-api/cel/), along with OpenAPI
annotations on built-in types.

Many other tools can be built for authoring and transforming resources
using the type information found in OpenAPI V3.

## How to get involved?

These two features are driven by the SIG API Machinery community,
available on the Slack channel #sig-api-machinery and through the
[mailing list](https://groups.google.com/g/kubernetes-sig-api-machinery); we
meet every other Wednesday at 11:00 AM PT on Zoom.

We offer a huge thanks to all the contributors who helped design,
implement, and review these two features:

- Alexander Zielenski
- Antoine Pelisse
- Daniel Smith
- David Eads
- Jeffrey Ying
- Jordan Liggitt
- Kevin Delgado
- Sean Sullivan

@@ -0,0 +1,64 @@
---
layout: blog
title: Updates to the Auto-refreshing Official CVE Feed
date: 2023-04-25
slug: k8s-cve-feed-beta
---

**Authors**: Cailyn Edwards (Shopify), Mahé Tardy (Isovalent), Pushkar Joglekar

Since launching the [Auto-refreshing Official CVE feed](/docs/reference/issues-security/official-cve-feed/) as an alpha
feature in the 1.25 release, we have made significant improvements and updates. We are excited to announce the release of the
beta version of the feed. This blog post will outline the feedback received, the changes made, and talk about how you can help
as we prepare to make this a stable feature in a future Kubernetes release.

## Feedback from end-users

SIG Security received some feedback from end-users:
- The JSON CVE Feed [did not comply](https://github.com/kubernetes/website/issues/36808)
  with the [JSON Feed specification](https://www.jsonfeed.org/) as its name would suggest.
- The feed could also [support RSS](https://github.com/kubernetes/sig-security/issues/77)
  in addition to the JSON Feed format.
- Some metadata could be [added](https://github.com/kubernetes/sig-security/issues/72) to indicate the freshness of
  the feed overall, or of [specific CVEs](https://github.com/kubernetes/sig-security/issues/63). Another suggestion was
  to [indicate](https://github.com/kubernetes/sig-security/issues/71) which Prow job most recently updated the feed. See
  more ideas directly on the [umbrella issue](https://github.com/kubernetes/sig-security/issues/1).
- The feed Markdown table on the website [should be ordered](https://github.com/kubernetes/sig-security/issues/73)
  from the most recently to the least recently announced CVE.

## Summary of changes

In response, the SIG did a [rework of the script generating the JSON feed](https://github.com/kubernetes/sig-security/pull/76)
so that the feed complies with the JSON Feed specification at generation time, and added a
`last_updated` root field to indicate overall freshness. This redesign needed a
[corresponding fix on the Kubernetes website side](https://github.com/kubernetes/website/pull/38579)
for the CVE feed page to continue to work with the new format.

After that, [RSS feed support](https://github.com/kubernetes/website/pull/39513)
could be added transparently so that end-users can consume the feed in their
preferred format.
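
If you want to try the updated feed yourself, you can fetch it directly. The URL below is the published JSON path at the time of writing; check the official CVE feed page for the canonical location:

```shell
curl -sL https://kubernetes.io/docs/reference/issues-security/official-cve-feed/index.json
```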

Overall, the redesign around the JSON Feed specification, although it broke
backward compatibility this time, will allow future updates that address the rest of
the issues while being more transparent and less disruptive to end-users.

### Updates

| **Title** | **Issue** | **Status** |
| --------- | --------- | ---------- |
| CVE Feed: JSON feed should pass jsonfeed spec validator | [kubernetes/website#36808](https://github.com/kubernetes/website/issues/36808) | closed, addressed by [kubernetes/sig-security#76](https://github.com/kubernetes/sig-security/pull/76) |
| CVE Feed: Add lastUpdatedAt as a metadata field | [kubernetes/sig-security#72](https://github.com/kubernetes/sig-security/issues/72) | closed, addressed by [kubernetes/sig-security#76](https://github.com/kubernetes/sig-security/pull/76) |
| Support RSS feeds by generating data in Atom format | [kubernetes/sig-security#77](https://github.com/kubernetes/sig-security/issues/77) | closed, addressed by [kubernetes/website#39513](https://github.com/kubernetes/website/pull/39513) |
| CVE Feed: Sort Markdown Table from most recent to least recently announced CVE | [kubernetes/sig-security#73](https://github.com/kubernetes/sig-security/issues/73) | closed, addressed by [kubernetes/sig-security#76](https://github.com/kubernetes/sig-security/pull/76) |
| CVE Feed: Include a timestamp field for each CVE indicating when it was last updated | [kubernetes/sig-security#63](https://github.com/kubernetes/sig-security/issues/63) | closed, addressed by [kubernetes/sig-security#76](https://github.com/kubernetes/sig-security/pull/76) |
| CVE Feed: Add Prow job link as a metadata field | [kubernetes/sig-security#71](https://github.com/kubernetes/sig-security/issues/71) | closed, addressed by [kubernetes/sig-security#83](https://github.com/kubernetes/sig-security/pull/83) |

## What's next?

In preparation for [graduating](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-stages) the feed
to stable, i.e. the `General Availability` stage, SIG Security is still gathering feedback from end users who are using the updated beta feed.

To help us continue to improve the feed in future Kubernetes releases, please share feedback by adding a comment to
this [tracking issue](https://github.com/kubernetes/sig-security/issues/1), or
let us know in the [#sig-security-tooling](https://kubernetes.slack.com/archives/C01CUSVMHPY)
Kubernetes Slack channel ([join Kubernetes Slack here](https://slack.k8s.io)).

@@ -0,0 +1,71 @@
---
layout: blog
title: "Kubernetes 1.27: Safer, More Performant Pruning in kubectl apply"
date: 2023-05-09
slug: introducing-kubectl-applyset-pruning
---

**Authors:** Katrina Verey (Shopify) and Justin Santa Barbara (Google)

Declarative configuration management with the `kubectl apply` command is the gold standard approach
to creating or modifying Kubernetes resources. However, one challenge it presents is the deletion
of resources that are no longer needed. In Kubernetes version 1.5, the `--prune` flag was
introduced to address this issue, allowing `kubectl apply` to automatically clean up previously
applied resources that have been removed from the current configuration.

Unfortunately, that existing implementation of `--prune` has design flaws that diminish its
performance and can result in unexpected behaviors. The main issue stems from the lack of explicit
encoding of the previously applied set by the preceding `apply` operation, necessitating
error-prone dynamic discovery. Object leakage, inadvertent over-selection of resources, and limited
compatibility with custom resources are a few notable drawbacks of this implementation. Moreover,
its coupling to client-side apply hinders user upgrades to the superior server-side apply
mechanism.

Version 1.27 of `kubectl` introduces an alpha version of a revamped pruning implementation that
addresses these issues. This new implementation, based on a concept called _ApplySet_, promises
better performance and safety.

An _ApplySet_ is a group of resources associated with a _parent_ object on the cluster, as
identified and configured through standardized labels and annotations. Additional standardized
metadata allows for accurate identification of ApplySet _member_ objects within the cluster,
simplifying operations like pruning.

To leverage ApplySet-based pruning, set the `KUBECTL_APPLYSET=true` environment variable and include
the flags `--prune` and `--applyset` in your `kubectl apply` invocation:

```shell
KUBECTL_APPLYSET=true kubectl apply -f <directory/> --prune --applyset=<name>
```

By default, ApplySet uses a Secret as the parent object. However, you can also use
a ConfigMap with the format `--applyset=configmaps/<name>`. If your desired Secret or
ConfigMap object does not yet exist, `kubectl` will create it for you. Furthermore, custom
resources can be enabled for use as ApplySet parent objects.
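
For example, the ConfigMap-backed form of the same invocation looks like this:

```shell
KUBECTL_APPLYSET=true kubectl apply -f <directory/> --prune --applyset=configmaps/<name>
```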

The ApplySet implementation is based on a new low-level specification that can support higher-level
ecosystem tools by improving their interoperability. The lightweight nature of this specification
enables these tools to continue to use existing object grouping systems while opting in to
ApplySet's metadata conventions to prevent inadvertent changes by other tools (such as `kubectl`).

ApplySet-based pruning offers a promising solution to the shortcomings of the previous `--prune`
implementation in `kubectl` and can help streamline your Kubernetes resource management. Please
give this new feature a try and share your experiences with the community. ApplySet is under active
development, and your feedback is invaluable!

### Additional resources

- For more information on how to use ApplySet-based pruning, read
  [Declarative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/declarative-config/) in the Kubernetes documentation.
- For a deeper dive into the technical design of this feature, or to learn how to implement the
  ApplySet specification in your own tools, refer to [KEP 3659](https://git.k8s.io/enhancements/keps/sig-cli/3659-kubectl-apply-prune/README.md):
  _ApplySet: `kubectl apply --prune` redesign and graduation strategy_.

### How do I get involved?

If you want to get involved in ApplySet development, you can get in touch with the developers at
[SIG CLI](https://git.k8s.io/community/sig-cli). To provide feedback on the feature, please
[file a bug](https://github.com/kubernetes/kubectl/issues/new?assignees=knverey,justinsb&labels=kind%2Fbug&template=bug-report.md)
or [request an enhancement](https://github.com/kubernetes/kubectl/issues/new?assignees=knverey,justinsb&labels=kind%2Fbug&template=enhancement.md)
on the `kubernetes/kubectl` repository.

@@ -0,0 +1,173 @@
---
layout: blog
title: "Kubernetes 1.27: Avoid Collisions Assigning Ports to NodePort Services"
date: 2023-05-11
slug: nodeport-dynamic-and-static-allocation
---

**Author:** Xu Zhenglun (Alibaba)

In Kubernetes, a Service can be used to provide a unified traffic endpoint for
applications running on a set of Pods. Clients can use the virtual IP address (or _VIP_) provided
by the Service for access, and Kubernetes provides load balancing for traffic accessing
different back-end Pods. However, a ClusterIP type of Service is only reachable from
within the cluster; traffic from outside the cluster cannot be routed to it.
One way to solve this problem is to use a `type: NodePort` Service, which sets up a mapping
to a specific port on all nodes in the cluster, thus redirecting traffic from the
outside to the inside of the cluster.

## How does Kubernetes allocate node ports to Services?

When a `type: NodePort` Service is created, its corresponding port(s) are allocated in one
of two ways:

- **Dynamic**: If the Service type is `NodePort` and you do not set a `nodePort`
  value explicitly in the `spec` for that Service, the Kubernetes control plane will
  automatically allocate an unused port to it at creation time.

- **Static**: In addition to the dynamic auto-assignment described above, you can also
  explicitly assign a port that is within the configured node port range.

The value of `nodePort` that you manually assign must be unique across the whole cluster.
Attempting to create a Service of `type: NodePort` where you explicitly specify a node port that
was already allocated results in an error.

## Why do you need to reserve ports of NodePort Services?

Sometimes, you may want to have a NodePort Service running on well-known ports
so that other components and users inside or outside the cluster can use them.

In some complex cluster deployments with a mix of Kubernetes nodes and other servers on the same network,
it may be necessary to use some pre-defined ports for communication. In particular, some fundamental
components cannot rely on the VIPs that back `type: LoadBalancer` Services,
because the virtual IP address mapping implementation for that cluster also relies on
these foundational components.

Now suppose you need to expose a MinIO object storage service on Kubernetes to clients
running outside the Kubernetes cluster, and the agreed port is `30009`. You need to
create a Service as follows:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  ports:
  - name: api
    nodePort: 30009
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: minio
  type: NodePort
```

However, as mentioned before, if port 30009 required by the `minio` Service is not reserved,
and another `type: NodePort` (or possibly `type: LoadBalancer`) Service is created and dynamically
allocated before or concurrently with the `minio` Service, TCP port 30009 might be allocated to that
other Service; if so, creation of the `minio` Service will fail due to a node port collision.

## How can you avoid NodePort Service port conflicts?

Kubernetes 1.24 introduced changes for `type: ClusterIP` Services, dividing the CIDR range for cluster
IP addresses into two blocks that use different allocation policies to [reduce the risk of conflicts](/docs/reference/networking/virtual-ips/#avoiding-collisions).
In Kubernetes 1.27, as an alpha feature, you can adopt a similar policy for `type: NodePort` Services.
You can enable a new [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`ServiceNodePortStaticSubrange`. Turning this on allows you to use a different port allocation strategy
for `type: NodePort` Services, and reduces the risk of collision.
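
On a cluster you administer yourself, the gate is enabled with the standard feature-gates flag on the API server; this invocation is a sketch with every other flag omitted:

```shell
kube-apiserver --feature-gates=ServiceNodePortStaticSubrange=true
```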

The port range for `NodePort` will be divided, based on the formula `min(max(16, nodeport-size / 32), 128)`.
The outcome of the formula will be a number between 16 and 128, with a step size that increases as the
size of the nodeport range increases. The outcome of the formula determines the size of the static port
range. When the port range is smaller than 16, the size of the static port range will be set to 0,
which means that all ports will be dynamically allocated.

Dynamic port assignment will use the upper band by default; once this has been exhausted, it will use the lower band.
This will allow users to use static allocations on the lower band with a low risk of collision.
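
As a quick sanity check, the band-size formula can be evaluated in shell. This sketch is illustrative only (it is not Kubernetes code) and mirrors `min(max(16, nodeport-size / 32), 128)` for the default range of 2768 ports:

```shell
# Illustrative sketch of the band-size formula, not Kubernetes code.
range_size=2768                    # default service-node-port-range 30000-32767
static=$(( range_size / 32 ))      # integer division, as in the worked examples
if (( static < 16 )); then static=16; fi      # lower clamp
if (( static > 128 )); then static=128; fi    # upper clamp
if (( range_size < 16 )); then static=0; fi   # tiny ranges get no static band
echo "$static"                     # prints 86 for the default range
```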

## Examples

### default range: 30000-32767

| Range properties        | Values      |
|-------------------------|-------------|
| service-node-port-range | 30000-32767 |
| Band Offset             | `min(max(16, 2768/32), 128)` <br>= `min(max(16, 86), 128)` <br>= `min(86, 128)` <br>= 86 |
| Static band start       | 30000       |
| Static band end         | 30085       |
| Dynamic band start      | 30086       |
| Dynamic band end        | 32767       |

{{< mermaid >}}
pie showData
    title 30000-32767
    "Static" : 86
    "Dynamic" : 2682
{{< /mermaid >}}

### very small range: 30000-30015

| Range properties        | Values      |
|-------------------------|-------------|
| service-node-port-range | 30000-30015 |
| Band Offset             | 0           |
| Static band start       | -           |
| Static band end         | -           |
| Dynamic band start      | 30000       |
| Dynamic band end        | 30015       |

{{< mermaid >}}
pie showData
    title 30000-30015
    "Static" : 0
    "Dynamic" : 16
{{< /mermaid >}}

### small (lower boundary) range: 30000-30127

| Range properties        | Values      |
|-------------------------|-------------|
| service-node-port-range | 30000-30127 |
| Band Offset             | `min(max(16, 128/32), 128)` <br>= `min(max(16, 4), 128)` <br>= `min(16, 128)` <br>= 16 |
| Static band start       | 30000       |
| Static band end         | 30015       |
| Dynamic band start      | 30016       |
| Dynamic band end        | 30127       |

{{< mermaid >}}
pie showData
    title 30000-30127
    "Static" : 16
    "Dynamic" : 112
{{< /mermaid >}}

### large (upper boundary) range: 30000-34095

| Range properties        | Values      |
|-------------------------|-------------|
| service-node-port-range | 30000-34095 |
| Band Offset             | `min(max(16, 4096/32), 128)` <br>= `min(max(16, 128), 128)` <br>= `min(128, 128)` <br>= 128 |
| Static band start       | 30000       |
| Static band end         | 30127       |
| Dynamic band start      | 30128       |
| Dynamic band end        | 34095       |

{{< mermaid >}}
pie showData
    title 30000-34095
    "Static" : 128
    "Dynamic" : 3968
{{< /mermaid >}}

### very large range: 30000-38191

| Range properties        | Values      |
|-------------------------|-------------|
| service-node-port-range | 30000-38191 |
| Band Offset             | `min(max(16, 8192/32), 128)` <br>= `min(max(16, 256), 128)` <br>= `min(256, 128)` <br>= 128 |
| Static band start       | 30000       |
| Static band end         | 30127       |
| Dynamic band start      | 30128       |
| Dynamic band end        | 38191       |

{{< mermaid >}}
pie showData
    title 30000-38191
    "Static" : 128
    "Dynamic" : 8064
{{< /mermaid >}}

@@ -93,7 +93,15 @@ For self-registration, the kubelet is started with the following options:
  {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).

  No-op if `register-node` is false.
- `--node-ip` - Optional comma-separated list of the IP addresses for the node.
  You can only specify a single address for each address family.
  For example, in a single-stack IPv4 cluster, you set this value to be the IPv4 address that the
  kubelet should use for the node.
  See [configure IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)
  for details of running a dual-stack cluster.

  If you don't provide this argument, the kubelet uses the node's default IPv4 address, if any;
  if the node has no IPv4 addresses then the kubelet uses the node's default IPv6 address.
- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node
  in the cluster (see label restrictions enforced by the
  [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
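
As an illustrative sketch (the addresses are drawn from documentation ranges), a dual-stack self-registration might pass one address per family:

```shell
kubelet --register-node=true --node-ip=192.0.2.10,2001:db8::10
```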

@@ -215,34 +223,20 @@ of the Node resource. For example, the following JSON structure describes a heal
]
```

In some cases when the node is unreachable, the API server is unable to communicate
with the kubelet on the node. The decision to delete the pods cannot be communicated to
the kubelet until communication with the API server is re-established. In the meantime,
the pods that are scheduled for deletion may continue to run on the partitioned node.

The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server and frees up their
names.

When problems occur on nodes, the Kubernetes control plane automatically creates
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
affecting the node. An example of this is when the `status` of the Ready condition
remains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`,
which defaults to 40 seconds. This will cause either a `node.kubernetes.io/unreachable` taint, for an `Unknown` status,
or a `node.kubernetes.io/not-ready` taint, for a `False` status, to be added to the Node.

These taints affect pending pods, as the scheduler takes the Node's taints into consideration when
assigning a pod to a Node. Existing pods scheduled to the node may be evicted due to the application
of `NoExecute` taints. Pods may also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
them schedule to and continue running on a Node even though it has a specific taint.

See [Taint Based Evictions](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) and
[Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
for more details.

### Capacity and Allocatable {#capacity}

@@ -231,6 +231,53 @@ Similar to the container logs, you should rotate system component logs in the `/
In Kubernetes clusters created by the `kube-up.sh` script, log rotation is configured by the `logrotate` tool.
The `logrotate` tool rotates logs daily, or once the log size is greater than 100MB.

## Log query

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows viewing logs of services
running on the node. To use the feature, ensure that the `NodeLogQuery`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled for that node, and that the
kubelet configuration options `enableSystemLogHandler` and `enableSystemLogQuery` are both set to true. On Linux,
we assume that service logs are available via journald. On Windows, we assume that service logs are available
in the application log provider. On both operating systems, logs are also available by reading files within
`/var/log/`.

Provided you are authorized to interact with node objects, you can try out this alpha feature on all your nodes or
just a subset. Here is an example to retrieve the kubelet service logs from a node:

```shell
# Fetch kubelet logs from a node named node-1.example
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
```

You can also fetch files, provided that the files are in a directory that the kubelet allows for log
fetches. For example, you can fetch a log from `/var/log` on a Linux node:

```shell
kubectl get --raw "/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>"
```

The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is
writing logs to the operating system's native logger like journald or to a log file in `/var/log/`. The heuristics
first check the native logger, and if that is not available, attempt to retrieve the first logs from
`/var/log/<servicename>` or `/var/log/<servicename>.log` or `/var/log/<servicename>/<servicename>.log`.

The complete list of options that can be used is:

Option | Description
------ | -----------
`boot` | show messages from a specific system boot
`pattern` | filters log entries by the provided PERL-compatible regular expression
`query` | specifies the service(s) or files from which to return logs (required)
`sinceTime` | an [RFC3339](https://www.rfc-editor.org/rfc/rfc3339) timestamp from which to show logs (inclusive)
`untilTime` | an [RFC3339](https://www.rfc-editor.org/rfc/rfc3339) timestamp until which to show logs (inclusive)
`tailLines` | specify how many lines from the end of the log to retrieve; the default is to fetch the whole log

Example of a more complex query:

```shell
# Fetch kubelet logs from a node named node-1.example that have the word "error"
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error"
```

## {{% heading "whatsnext" %}}

* Read about the [Kubernetes Logging Architecture](/docs/concepts/cluster-administration/logging/)

@@ -9,7 +9,7 @@ weight: 90

<!-- overview -->

{{< feature-state for_k8s_version="v1.27" state="beta" >}}

System component traces record the latency of and relationships between operations in the cluster.

@@ -59,14 +59,12 @@ as the kube-apiserver is often a public endpoint.

#### Enabling tracing in the kube-apiserver

To enable tracing, provide the kube-apiserver with a tracing configuration file
with `--tracing-config-file=<path-to-config>`. This is an example config that records
spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: TracingConfiguration
# default value
#endpoint: localhost:4317

@@ -74,11 +72,11 @@ samplingRatePerMillion: 100
```

For more information about the `TracingConfiguration` struct, see
[API server config API (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/#apiserver-k8s-io-v1beta1-TracingConfiguration).

### kubelet traces

{{< feature-state for_k8s_version="v1.27" state="beta" >}}

The kubelet CRI interface and authenticated http servers are instrumented to generate
trace spans. As with the apiserver, the endpoint and sampling rate are configurable.

@@ -88,10 +86,7 @@ Enabled without a configured endpoint, the default OpenTelemetry Collector recei

#### Enabling tracing in the kubelet

To enable tracing, apply the [tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.27/tracing/api/v1/types.go).
This is an example snippet of a kubelet config that records spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:

```yaml

@@ -105,6 +100,21 @@ tracing:
samplingRatePerMillion: 100
|
||||
```
|
||||
|
||||
If the `samplingRatePerMillion` is set to one million (`1000000`), then every
|
||||
span will be sent to the exporter.
|
||||
|
||||
The kubelet in Kubernetes v{{< skew currentVersion >}} collects spans from
|
||||
the garbage collection, pod synchronization routine as well as every gRPC
|
||||
method. Connected container runtimes like CRI-O and containerd can link the
|
||||
traces to their exported spans to provide additional context of information.
|
||||
|
||||
Please note that exporting spans always comes with a small performance overhead
|
||||
on the networking and CPU side, depending on the overall configuration of the
|
||||
system. If there is any issue like that in a cluster which is running with
|
||||
tracing enabled, then mitigate the problem by either reducing the
|
||||
`samplingRatePerMillion` or disabling tracing completely by removing the
|
||||
configuration.
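Reading the fragments above together, a complete kubelet tracing snippet might look like this. This is a sketch: field names follow the `KubeletConfiguration` tracing API, and the endpoint is shown commented out because `localhost:4317` is the default.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletTracing: true
tracing:
  # default value
  #endpoint: localhost:4317
  # record spans for 1 in 10000 requests
  samplingRatePerMillion: 100
```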
## Stability

Tracing instrumentation is still under active development, and may change

@@ -157,6 +157,48 @@ that Kubernetes will keep trying to pull the image, with an increasing back-off

Kubernetes raises the delay between each attempt until it reaches a compiled-in limit,
which is 300 seconds (5 minutes).

## Serial and parallel image pulls

By default, the kubelet pulls images serially. In other words, the kubelet sends only
one image pull request to the image service at a time. Other image pull requests
have to wait until the one being processed is complete.

Nodes make image pull decisions in isolation. Even when you use serialized image
pulls, two different nodes can pull the same image in parallel.

If you would like to enable parallel image pulls, you can set the field
`serializeImagePulls` to false in the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/).
With `serializeImagePulls` set to false, image pull requests will be sent to the image service immediately,
and multiple images will be pulled at the same time.

When enabling parallel image pulls, please make sure the image service of your
container runtime can handle parallel image pulls.

The kubelet never pulls multiple images in parallel on behalf of one Pod. For example,
if you have a Pod that has an init container and an application container, the image
pulls for the two containers will not be parallelized. However, if you have two
Pods that use different images, the kubelet pulls the images in parallel on
behalf of the two different Pods, when parallel image pulls is enabled.

### Maximum parallel image pulls

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

When `serializeImagePulls` is set to false, the kubelet defaults to no limit on the
maximum number of images being pulled at the same time. If you would like to
limit the number of parallel image pulls, you can set the field `maxParallelImagePulls`
in the kubelet configuration. With `maxParallelImagePulls` set to _n_, only _n_ images
can be pulled at the same time, and any image pull beyond _n_ will have to wait
until at least one ongoing image pull is complete.

Limiting the number of parallel image pulls prevents image pulling from consuming
too much network bandwidth or disk I/O when parallel image pulling is enabled.

You can set `maxParallelImagePulls` to a positive number that is greater than or
equal to 1. If you set `maxParallelImagePulls` to be greater than or equal to 2, you
must set `serializeImagePulls` to false. The kubelet will fail to start with invalid
`maxParallelImagePulls` settings.
## Multi-architecture images with image indexes

As well as providing binary images, a container registry can also serve a

@@ -213,6 +213,7 @@ for these devices:

```gRPC
service PodResourcesLister {
    rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
    rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}
+   rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}
}
```

@@ -223,6 +224,14 @@ id of exclusively allocated CPUs, device id as it was reported by device plugins

the NUMA node where these devices are allocated. Also, for NUMA-based machines, it contains the
information about memory and hugepages reserved for a container.

Starting from Kubernetes v1.27, the `List` endpoint can provide information on resources
of running pods allocated in `ResourceClaims` by the `DynamicResourceAllocation` API. To enable
this feature, `kubelet` must be started with the following flags:

```
--feature-gates=DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true
```

```gRPC
// ListPodResourcesResponse is the response returned by List function
message ListPodResourcesResponse {
@@ -242,6 +251,7 @@ message ContainerResources {
    repeated ContainerDevices devices = 2;
    repeated int64 cpu_ids = 3;
    repeated ContainerMemory memory = 4;
+   repeated DynamicResource dynamic_resources = 5;
}

// ContainerMemory contains information about memory and hugepages assigned to a container
@@ -267,6 +277,28 @@ message ContainerDevices {
    repeated string device_ids = 2;
    TopologyInfo topology = 3;
}

// DynamicResource contains information about the devices assigned to a container by Dynamic Resource Allocation
message DynamicResource {
    string class_name = 1;
    string claim_name = 2;
    string claim_namespace = 3;
    repeated ClaimResource claim_resources = 4;
}

// ClaimResource contains per-plugin resource information
message ClaimResource {
    repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = "CDIDevices"];
}

// CDIDevice specifies a CDI device information
message CDIDevice {
    // Fully qualified CDI device name
    // for example: vendor.com/gpu=gpudevice1
    // see more details in the CDI specification:
    // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md
    string name = 1;
}
```

{{< note >}}
cpu_ids in the `ContainerResources` in the `List` endpoint correspond to exclusive CPUs allocated

@@ -333,6 +365,36 @@ Support for the `PodResourcesLister service` requires `KubeletPodResources`

[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.
### `Get` gRPC endpoint {#grpc-endpoint-get}

{{< feature-state state="alpha" for_k8s_version="v1.27" >}}

The `Get` endpoint provides information on resources of a running Pod. It exposes information
similar to that described in the `List` endpoint. The `Get` endpoint requires `PodName`
and `PodNamespace` of the running Pod.

```gRPC
// GetPodResourcesRequest contains information about the pod
message GetPodResourcesRequest {
    string pod_name = 1;
    string pod_namespace = 2;
}
```

To enable this feature, you must start your kubelet services with the following flag:

```
--feature-gates=KubeletPodResourcesGet=true
```

The `Get` endpoint can provide Pod information related to dynamic resources
allocated by the dynamic resource allocation API. To enable this feature, you must
ensure your kubelet services are started with the following flags:

```
--feature-gates=KubeletPodResourcesGet=true,DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true
```

## Device plugin integration with the Topology Manager

{{< feature-state for_k8s_version="v1.18" state="beta" >}}
@@ -82,17 +82,13 @@ packages that define the API objects.

### OpenAPI V3

-{{< feature-state state="beta" for_k8s_version="v1.24" >}}
+{{< feature-state state="stable" for_k8s_version="v1.27" >}}

-Kubernetes {{< param "version" >}} offers beta support for publishing its APIs as OpenAPI v3; this is a
-beta feature that is enabled by default.
-You can disable the beta feature by turning off the
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) named `OpenAPIV3`
-for the kube-apiserver component.
+Kubernetes supports publishing a description of its APIs as OpenAPI v3.

A discovery endpoint `/openapi/v3` is provided to see a list of all
-group/versions available. This endpoint only returns JSON. These group/versions
-are provided in the following format:
+group/versions available. This endpoint only returns JSON. These
+group/versions are provided in the following format:

```yaml
{
```

@@ -153,11 +149,37 @@ Refer to the table below for accepted request headers.

</tbody>
</table>

A Golang implementation to fetch the OpenAPI V3 is provided in the package `k8s.io/client-go/openapi3`.

## Persistence

Kubernetes stores the serialized state of objects by writing them into
{{< glossary_tooltip term_id="etcd" >}}.

## API Discovery

A list of all group versions supported by a cluster is published at
the `/api` and `/apis` endpoints. Each group version also advertises
the list of resources supported via `/apis/<group>/<version>` (for
example: `/apis/rbac.authorization.k8s.io/v1alpha1`). These endpoints
are used by kubectl to fetch the list of resources supported by a
cluster.

### Aggregated Discovery

{{< feature-state state="beta" for_k8s_version="v1.27" >}}

Kubernetes offers beta support for aggregated discovery, publishing
all resources supported by a cluster through two endpoints (`/api` and
`/apis`) instead of one for every group version. Requesting this
endpoint drastically reduces the number of requests sent to fetch the
discovery data for the average Kubernetes cluster. It may be accessed by
requesting the respective endpoints with an `Accept` header indicating
the aggregated discovery resource:
`Accept: application/json;v=v2beta1;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList`.

The endpoint also supports ETag and protobuf encoding.

## API groups and versioning

To make it easier to eliminate fields or restructure resource representations,
@@ -6,13 +6,13 @@ weight: 90

<!-- overview -->

-In Kubernetes, some objects are *owners* of other objects. For example, a
-{{<glossary_tooltip text="ReplicaSet" term_id="replica-set">}} is the owner of a set of Pods. These owned objects are *dependents*
+In Kubernetes, some {{< glossary_tooltip text="objects" term_id="Object" >}} are *owners* of other objects. For example, a
+{{<glossary_tooltip text="ReplicaSet" term_id="replica-set">}} is the owner of a set of {{<glossary_tooltip text="Pods" term_id="pod">}}. These owned objects are *dependents*
of their owner.

Ownership is different from the [labels and selectors](/docs/concepts/overview/working-with-objects/labels/)
mechanism that some resources also use. For example, consider a Service that
-creates `EndpointSlice` objects. The Service uses labels to allow the control plane to
+creates `EndpointSlice` objects. The Service uses {{<glossary_tooltip text="labels" term_id="label">}} to allow the control plane to
determine which `EndpointSlice` objects are used for that Service. In addition
to the labels, each `EndpointSlice` that is managed on behalf of a Service has
an owner reference. Owner references help different parts of Kubernetes avoid

@@ -21,8 +21,8 @@ interfering with objects they don't control.

## Owner references in object specifications

Dependent objects have a `metadata.ownerReferences` field that references their
-owner object. A valid owner reference consists of the object name and a UID
-within the same namespace as the dependent object. Kubernetes sets the value of
+owner object. A valid owner reference consists of the object name and a {{<glossary_tooltip text="UID" term_id="uid">}}
+within the same {{<glossary_tooltip text="namespace" term_id="namespace">}} as the dependent object. Kubernetes sets the value of
this field automatically for objects that are dependents of other objects like
ReplicaSets, DaemonSets, Deployments, Jobs and CronJobs, and ReplicationControllers.
You can also configure these relationships manually by changing the value of

@@ -66,10 +66,10 @@ When you tell Kubernetes to delete a resource, the API server allows the

managing controller to process any [finalizer rules](/docs/concepts/overview/working-with-objects/finalizers/)
for the resource. {{<glossary_tooltip text="Finalizers" term_id="finalizer">}}
prevent accidental deletion of resources your cluster may still need to function
-correctly. For example, if you try to delete a `PersistentVolume` that is still
+correctly. For example, if you try to delete a [PersistentVolume](/docs/concepts/storage/persistent-volumes/) that is still
in use by a Pod, the deletion does not happen immediately because the
`PersistentVolume` has the `kubernetes.io/pv-protection` finalizer on it.
-Instead, the volume remains in the `Terminating` status until Kubernetes clears
+Instead, the [volume](/docs/concepts/storage/volumes/) remains in the `Terminating` status until Kubernetes clears
the finalizer, which only happens after the `PersistentVolume` is no longer
bound to a Pod.
@@ -86,4 +86,4 @@ object.

* Learn more about [Kubernetes finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
* Learn about [garbage collection](/docs/concepts/architecture/garbage-collection).
-* Read the API reference for [object metadata](/docs/reference/kubernetes-api/common-definitions/object-meta/#System).
+* Read the API reference for [object metadata](/docs/reference/kubernetes-api/common-definitions/object-meta/#System).
@@ -9,7 +9,7 @@ weight: 65

<!-- overview -->

-{{< feature-state for_k8s_version="v1.26" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

Dynamic resource allocation is a new API for requesting and sharing resources
between pods and containers inside a pod. It is a generalization of the

@@ -31,7 +31,7 @@ check the documentation for that version of Kubernetes.

## API

-The new `resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API group"
+The `resource.k8s.io/v1alpha2` {{< glossary_tooltip text="API group"
term_id="api-group" >}} provides four new types:

ResourceClass

@@ -51,7 +51,7 @@ ResourceClaimTemplate

: Defines the spec and some meta data for creating
  ResourceClaims. Created by a user when deploying a workload.

-PodScheduling
+PodSchedulingContext
: Used internally by the control plane and resource drivers
  to coordinate pod scheduling when ResourceClaims need to be allocated
  for a Pod.

@@ -76,7 +76,7 @@ Here is an example for a fictional resource driver. Two ResourceClaim objects

will get created for this Pod and each container gets access to one of them.

```yaml
-apiVersion: resource.k8s.io/v1alpha1
+apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClass
name: resource.example.com
driverName: resource-driver.example.com
@@ -88,7 +88,7 @@ spec:
  color: black
  size: large
---
-apiVersion: resource.k8s.io/v1alpha1
+apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimTemplate
metadata:
  name: large-black-cat-claim-template
```

@@ -162,6 +162,12 @@ gets scheduled onto one node and then cannot run there, which is bad because

such a pending Pod also blocks all other resources like RAM or CPU that were
set aside for it.
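To connect a ResourceClaimTemplate like the `large-black-cat-claim-template` above to a workload, the Pod references it by name. This is a sketch; names are illustrative, and the `resourceClaims` / `resources.claims` field shapes follow the v1.27 Pod API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-cats   # illustrative name
spec:
  containers:
  - name: container0
    image: ubuntu:22.04
    command: ["sleep", "9999"]
    resources:
      claims:
      - name: cat-0   # refers to the entry in spec.resourceClaims below
  resourceClaims:
  - name: cat-0
    source:
      resourceClaimTemplateName: large-black-cat-claim-template
```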
## Monitoring resources

The kubelet provides a gRPC service to enable discovery of dynamic resources of
running Pods. For more information on the gRPC endpoints, see the
[resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).

## Limitations

The scheduler plugin must be involved in scheduling Pods which use

@@ -176,7 +182,7 @@ future.

Dynamic resource allocation is an *alpha feature* and only enabled when the
`DynamicResourceAllocation` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) and the
-`resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API group"
+`resource.k8s.io/v1alpha2` {{< glossary_tooltip text="API group"
term_id="api-group" >}} are enabled. For details on that, see the
`--feature-gates` and `--runtime-config` [kube-apiserver
parameters](/docs/reference/command-line-tools-reference/kube-apiserver/).

@@ -203,8 +209,9 @@ error: the server doesn't have a resource type "resourceclasses"

```
error: the server doesn't have a resource type "resourceclasses"
```

The default configuration of kube-scheduler enables the "DynamicResources"
-plugin if and only if the feature gate is enabled. Custom configurations may
-have to be modified to include it.
+plugin if and only if the feature gate is enabled and when using
+the v1 configuration API. Custom configurations may have to be modified to
+include it.

In addition to enabling the feature in the cluster, a resource driver also has to
be installed. Please refer to the driver's documentation for details.

@@ -55,7 +55,7 @@ kubectl get pod test-pod -o jsonpath='{.spec.schedulingGates}'

The output is:

```none
-[{"name":"foo"},{"name":"bar"}]
+[{"name":"example.com/foo"},{"name":"example.com/bar"}]
```

To inform the scheduler that this Pod is ready for scheduling, you can remove its `schedulingGates` entirely

@@ -89,6 +89,32 @@ The metric `scheduler_pending_pods` comes with a new label `"gated"` to distingu

has been tried for scheduling but claimed as unschedulable, or explicitly marked as not ready for
scheduling. You can use `scheduler_pending_pods{queue="gated"}` to check the metric result.
## Mutable Pod Scheduling Directives

{{< feature-state for_k8s_version="v1.27" state="beta" >}}

You can mutate the scheduling directives of Pods while they have scheduling gates, with certain constraints.
At a high level, you can only tighten the scheduling directives of a Pod. In other words, the updated
directives would cause the Pod to only be able to be scheduled on a subset of the nodes that it would
previously match. More concretely, the rules for updating a Pod's scheduling directives are as follows:

1. For `.spec.nodeSelector`, only additions are allowed. If absent, it will be allowed to be set.

2. For `spec.affinity.nodeAffinity`, if nil, then setting anything is allowed.

3. If `NodeSelectorTerms` was empty, it will be allowed to be set.
   If not empty, then only additions of `NodeSelectorRequirements` to `matchExpressions`
   or `fieldExpressions` are allowed, and no changes to existing `matchExpressions`
   and `fieldExpressions` will be allowed. This is because the terms in
   `.requiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms` are ORed
   while the expressions in `nodeSelectorTerms[].matchExpressions` and
   `nodeSelectorTerms[].fieldExpressions` are ANDed.

4. For `.preferredDuringSchedulingIgnoredDuringExecution`, all updates are allowed.
   This is because preferred terms are not authoritative, and so policy controllers
   don't validate those terms.
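Rule 1 above, for instance, permits adding a key to an existing `nodeSelector` but not changing one. A sketch of a valid update on a gated Pod, with illustrative labels:

```yaml
# before the update
spec:
  nodeSelector:
    disktype: ssd
```

```yaml
# after the update: adding a key tightens the selector, so this is allowed;
# changing disktype to a different value would be rejected
spec:
  nodeSelector:
    disktype: ssd
    zone: us-east-1a
```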
## {{% heading "whatsnext" %}}

* Read the [PodSchedulingReadiness KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/3521-pod-scheduling-readiness) for more details

@@ -52,7 +52,18 @@ equivalent to "Predicate" and "Scoring" is equivalent to "Priority function".

One plugin may register at multiple extension points to perform more complex or
stateful tasks.

-{{< figure src="/images/docs/scheduling-framework-extensions.png" title="scheduling framework extension points" class="diagram-large">}}
+{{< figure src="/images/docs/scheduling-framework-extensions.png" title="Scheduling framework extension points" class="diagram-large">}}

### PreEnqueue {#pre-enqueue}

These plugins are called prior to adding Pods to the internal active queue, where Pods are marked as
ready for scheduling.

Only when all PreEnqueue plugins return `Success` is the Pod allowed to enter the active queue.
Otherwise, it's placed in the internal unschedulable Pods list, and doesn't get an `Unschedulable` condition.

For more details about how internal scheduler queues work, read
[Scheduling queue in kube-scheduler](https://github.com/kubernetes/community/blob/f03b6d5692bd979f07dd472e7b6836b2dad0fd9b/contributors/devel/sig-scheduling/scheduler_queues.md).

### QueueSort {#queue-sort}

@@ -224,6 +224,11 @@ In case a node is to be evicted, the node controller or the kubelet adds relevan

with `NoExecute` effect. If the fault condition returns to normal, the kubelet or node
controller can remove the relevant taint(s).

In some cases when the node is unreachable, the API server is unable to communicate
with the kubelet on the node. The decision to delete the pods cannot be communicated to
the kubelet until communication with the API server is re-established. In the meantime,
the pods that are scheduled for deletion may continue to run on the partitioned node.
{{< note >}}
The control plane limits the rate of adding new taints to nodes. This rate limiting
manages the number of evictions that are triggered when many nodes become unreachable at

@@ -64,7 +64,7 @@ spec:

  topologyKey: <string>
  whenUnsatisfiable: <string>
  labelSelector: <object>
-  matchLabelKeys: <list> # optional; alpha since v1.25
+  matchLabelKeys: <list> # optional; beta since v1.27
  nodeAffinityPolicy: [Honor|Ignore] # optional; beta since v1.26
  nodeTaintsPolicy: [Honor|Ignore] # optional; beta since v1.26
  ### other Pod fields go here
@@ -129,24 +129,36 @@ your cluster. Those fields are:

  for more details.

- **matchLabelKeys** is a list of pod label keys to select the pods over which
-  spreading will be calculated. The keys are used to lookup values from the pod labels, those key-value labels are ANDed with `labelSelector` to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the pod labels will be ignored. A null or empty list means only match against the `labelSelector`.
+  spreading will be calculated. The keys are used to lookup values from the pod labels,
+  those key-value labels are ANDed with `labelSelector` to select the group of existing
+  pods over which spreading will be calculated for the incoming pod. The same key is
+  forbidden to exist in both `matchLabelKeys` and `labelSelector`. `matchLabelKeys` cannot
+  be set when `labelSelector` isn't set. Keys that don't exist in the pod labels will be
+  ignored. A null or empty list means only match against the `labelSelector`.

-  With `matchLabelKeys`, users don't need to update the `pod.spec` between different revisions. The controller/operator just needs to set different values to the same `label` key for different revisions. The scheduler will assume the values automatically based on `matchLabelKeys`. For example, if users use Deployment, they can use the label keyed with `pod-template-hash`, which is added automatically by the Deployment controller, to distinguish between different revisions in a single Deployment.
+  With `matchLabelKeys`, you don't need to update the `pod.spec` between different revisions.
+  The controller/operator just needs to set different values to the same label key for different
+  revisions. The scheduler will assume the values automatically based on `matchLabelKeys`. For
+  example, if you are configuring a Deployment, you can use the label keyed with
+  [pod-template-hash](/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label), which
+  is added automatically by the Deployment controller, to distinguish between different revisions
+  in a single Deployment.

  ```yaml
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: foo
    matchLabelKeys:
    - app
    - pod-template-hash
  ```

  {{< note >}}
-  The `matchLabelKeys` field is an alpha field added in 1.25. You have to enable the
-  `MatchLabelKeysInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-  in order to use it.
+  The `matchLabelKeys` field is a beta-level field and enabled by default in 1.27. You can disable it by disabling the
+  `MatchLabelKeysInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
  {{< /note >}}

- **nodeAffinityPolicy** indicates how we will treat Pod's nodeAffinity/nodeSelector
@@ -79,6 +79,13 @@ An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but

address - see [RFC 4193](https://tools.ietf.org/html/rfc4193))
{{< /note >}}

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

When using an external cloud provider, you can pass a dual-stack `--node-ip` value to
the kubelet if you enable the `CloudDualStackNodeIPs` feature gate in both the kubelet and the
external cloud provider. This is only supported for cloud providers that support dual-stack
clusters.

## Services

You can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both.

@@ -16,8 +16,8 @@ weight: 150

This feature, specifically the alpha `topologyKeys` API, is deprecated since
Kubernetes v1.21.
-[Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/),
-introduced in Kubernetes v1.21, provide similar functionality.
+[Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/),
+introduced in Kubernetes v1.21, provides similar functionality.
{{</ note >}}

_Service Topology_ enables a service to route traffic based upon the Node

@@ -588,6 +588,20 @@ spec:

```yaml
    nodePort: 30007
```

#### Reserve NodePort ranges to avoid collisions when assigning ports

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

The policy for assigning ports to NodePort services applies to both the auto-assignment and
the manual assignment scenarios. When a user wants to create a NodePort service that
uses a specific port, the target port may conflict with another port that has already been assigned.
In this case, you can enable the feature gate `ServiceNodePortStaticSubrange`, which allows you
to use a different port allocation strategy for NodePort Services. The port range for NodePort services
is divided into two bands. Dynamic port assignment uses the upper band by default, and it may use
the lower band once the upper band has been exhausted. Users can then allocate from the lower band
with a lower risk of port collision.
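As a sketch of what this enables: with the default NodePort range of `30000-32767`, manually pinning a port near the bottom of the range keeps it in the static (lower) band, which dynamic allocation avoids until the upper band is exhausted. The Service name, selector, and port value below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    # pinned inside the lower (static) band, so a dynamically
    # assigned port is unlikely to collide with it
    nodePort: 30010
```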
|
||||
|
||||
|
||||
#### Custom IP address configuration for `type: NodePort` Services {#service-nodeport-custom-listen-address}
|
||||
|
||||
You can set up nodes in your cluster to use a particular IP address for serving node port
|
||||
|
|
@ -647,12 +661,6 @@ status:
|
|||
Traffic from the external load balancer is directed at the backend Pods.
|
||||
The cloud provider decides how it is load balanced.
|
||||
|
||||
Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created
|
||||
with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
|
||||
the loadBalancer is set up with an ephemeral IP address. If you specify a `loadBalancerIP`
|
||||
but your cloud provider does not support the feature, the `loadbalancerIP` field that you
|
||||
set is ignored.
|
||||
|
||||
To implement a Service of `type: LoadBalancer`, Kubernetes typically starts off
|
||||
by making the changes that are equivalent to you requesting a Service of
`type: NodePort`. The cloud-controller-manager component then configures the external load balancer to
@@ -662,19 +670,24 @@ You can configure a load balanced Service to
[omit](#load-balancer-nodeport-allocation) assigning a node port, provided that the
cloud provider implementation supports this.

-Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created
-with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified,
-the loadBalancer is set up with an ephemeral IP address. If you specify a `loadBalancerIP`
-but your cloud provider does not support the feature, the `loadbalancerIP` field that you
-set is ignored.
-
{{< note >}}
-On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need
-to create a static type public IP address resource. This public IP address resource should
-be in the same resource group of the other automatically created resources of the cluster.
-For example, `MC_myResourceGroup_myAKSCluster_eastus`.
-
-Specify the assigned IP address as loadBalancerIP. Ensure that you have updated the
-`securityGroupName` in the cloud provider configuration file.
-For information about troubleshooting `CreatingLoadBalancerFailed` permission issues, see
-[Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](https://docs.microsoft.com/en-us/azure/aks/static-ip)
-or [CreatingLoadBalancerFailed on AKS cluster with advanced networking](https://github.com/Azure/AKS/issues/357).
+The `.spec.loadBalancerIP` field for a Service was deprecated in Kubernetes v1.24.
+
+This field was under-specified and its meaning varies across implementations. It also cannot support dual-stack networking. This field may be removed in a future API version.
+
+If you're integrating with a provider that supports specifying the load balancer IP address(es)
+for a Service via a (provider specific) annotation, you should switch to doing that.
+
+If you are writing code for a load balancer integration with Kubernetes, avoid using this field.
+You can integrate with [Gateway](https://gateway-api.sigs.k8s.io/) rather than Service, or you
+can define your own (provider specific) annotations on the Service that specify the equivalent detail.
{{< /note >}}
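The provider-specific annotation approach recommended in the hunk above can be sketched as follows. The annotation key here is a placeholder, not a real Kubernetes API; each load-balancer implementation defines its own key (for example, MetalLB documents `metallb.universe.tf/loadBalancerIPs`), so check your provider's documentation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service          # illustrative name
  annotations:
    # Placeholder key, NOT a real Kubernetes annotation; each provider
    # defines its own equivalent of the deprecated .spec.loadBalancerIP.
    example.com/load-balancer-ip: "203.0.113.10"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - port: 80
```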
@@ -1,11 +1,11 @@
---
reviewers:
- robscott
-title: Topology Aware Hints
+title: Topology Aware Routing
content_type: concept
weight: 100
description: >-
-  _Topology Aware Hints_ provides a mechanism to help keep network traffic within the zone
+  _Topology Aware Routing_ provides a mechanism to help keep network traffic within the zone
  where it originated. Preferring same-zone traffic between Pods in your cluster can help
  with reliability, performance (network latency and throughput), or cost.
---
@@ -15,45 +15,68 @@ description: >-

{{< feature-state for_k8s_version="v1.23" state="beta" >}}

-_Topology Aware Hints_ enable topology aware routing by including suggestions
-for how clients should consume endpoints. This approach adds metadata to enable
-consumers of EndpointSlice (or Endpoints) objects, so that traffic to
-those network endpoints can be routed closer to where it originated.
+{{< note >}}
+Prior to Kubernetes 1.27, this feature was known as _Topology Aware Hints_.
+{{</ note >}}

-For example, you can route traffic within a locality to reduce
-costs, or to improve network performance.
+_Topology Aware Routing_ adjusts routing behavior to prefer keeping traffic in
+the zone it originated from. In some cases this can help reduce costs or improve
+network performance.

<!-- body -->

## Motivation

Kubernetes clusters are increasingly deployed in multi-zone environments.
-_Topology Aware Hints_ provides a mechanism to help keep traffic within the zone
-it originated from. This concept is commonly referred to as "Topology Aware
-Routing". When calculating the endpoints for a {{< glossary_tooltip term_id="Service" >}},
-the EndpointSlice controller considers the topology (region and zone) of each endpoint
-and populates the hints field to allocate it to a zone.
-Cluster components such as the {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
-can then consume those hints, and use them to influence how the traffic is routed
-(favoring topologically closer endpoints).
+_Topology Aware Routing_ provides a mechanism to help keep traffic within the
+zone it originated from. When calculating the endpoints for a {{<
+glossary_tooltip term_id="Service" >}}, the EndpointSlice controller considers
+the topology (region and zone) of each endpoint and populates the hints field to
+allocate it to a zone. Cluster components such as {{< glossary_tooltip
+term_id="kube-proxy" text="kube-proxy" >}} can then consume those hints, and use
+them to influence how the traffic is routed (favoring topologically closer
+endpoints).

-## Using Topology Aware Hints
+## Enabling Topology Aware Routing

-You can activate Topology Aware Hints for a Service by setting the
-`service.kubernetes.io/topology-aware-hints` annotation to `auto`. This tells
-the EndpointSlice controller to set topology hints if it is deemed safe.
-Importantly, this does not guarantee that hints will always be set.
+{{< note >}}
+Prior to Kubernetes 1.27, this behavior was controlled using the
+`service.kubernetes.io/topology-aware-hints` annotation.
+{{</ note >}}

-## How it works {#implementation}
+You can enable Topology Aware Routing for a Service by setting the
+`service.kubernetes.io/topology-mode` annotation to `Auto`. When there are
+enough endpoints available in each zone, Topology Hints will be populated on
+EndpointSlices to allocate individual endpoints to specific zones, resulting in
+traffic being routed closer to where it originated from.

-The functionality enabling this feature is split into two components: The
-EndpointSlice controller and the kube-proxy. This section provides a high level overview
-of how each component implements this feature.
+## When it works best
+
+This feature works best when:
+
+### 1. Incoming traffic is evenly distributed
+
+If a large proportion of traffic is originating from a single zone, that traffic
+could overload the subset of endpoints that have been allocated to that zone.
+This feature is not recommended when incoming traffic is expected to originate
+from a single zone.
+
+### 2. The Service has 3 or more endpoints per zone {#three-or-more-endpoints-per-zone}
+
+In a three zone cluster, this means 9 or more endpoints. If there are fewer than
+3 endpoints per zone, there is a high (≈50%) probability that the EndpointSlice
+controller will not be able to allocate endpoints evenly and instead will fall
+back to the default cluster-wide routing approach.
+
+## How It Works
+
+The "Auto" heuristic attempts to proportionally allocate a number of endpoints
+to each zone. Note that this heuristic works best for Services that have a
+significant number of endpoints.

### EndpointSlice controller {#implementation-control-plane}

The EndpointSlice controller is responsible for setting hints on EndpointSlices
-when this feature is enabled. The controller allocates a proportional amount of
+when this heuristic is enabled. The controller allocates a proportional amount of
endpoints to each zone. This proportion is based on the
[allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
CPU cores for nodes running in that zone. For example, if one zone had 2 CPU
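As a concrete illustration of the annotation described in the hunk above, a Service opting in to topology aware routing could look like this (name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service             # illustrative name
  annotations:
    # Kubernetes 1.27+; on 1.23-1.26 the equivalent was
    # service.kubernetes.io/topology-aware-hints: "auto"
    service.kubernetes.io/topology-mode: "Auto"
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - port: 80
      targetPort: 8080
```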
@@ -145,6 +168,11 @@ zone.
  proportions of each zone. This could have unintended consequences if a large
  portion of nodes are unready.

+* The EndpointSlice controller ignores nodes with the
+  `node-role.kubernetes.io/control-plane` or `node-role.kubernetes.io/master`
+  label set. This could be problematic if workloads are also running on those
+  nodes.
+
* The EndpointSlice controller does not take into account {{< glossary_tooltip
  text="tolerations" term_id="toleration" >}} when deploying or calculating the
  proportions of each zone. If the Pods backing a Service are limited to a
@@ -157,6 +185,17 @@ zone.
  either not picking up on this event, or newly added pods starting in a
  different zone.

+## Custom heuristics
+
+Kubernetes is deployed in many different ways; no single heuristic for
+allocating endpoints to zones will work for every use case. A key goal of this
+feature is to enable custom heuristics to be developed if the built-in heuristic
+does not work for your use case. The first steps to enable custom heuristics
+were included in the 1.27 release. This is a limited implementation that may not
+yet cover some relevant and plausible situations.
+
## {{% heading "whatsnext" %}}

* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
@@ -637,7 +637,8 @@ The access modes are:
: the volume can be mounted as read-write by many nodes.

`ReadWriteOncePod`
-: the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod
+: {{< feature-state for_k8s_version="v1.27" state="beta" >}}
+  the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod
  access mode if you want to ensure that only one pod across the whole cluster can
  read that PVC or write to it. This is only supported for CSI volumes and
  Kubernetes version 1.22+.
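A minimal claim using this access mode might look like the following sketch; the claim and StorageClass names are illustrative, and the class is assumed to be backed by a CSI driver that supports `ReadWriteOncePod`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim    # illustrative name
spec:
  accessModes:
    - ReadWriteOncePod         # only one Pod cluster-wide may use it read-write
  resources:
    requests:
      storage: 1Gi
  storageClassName: my-csi-class   # assumed: a CSI-backed StorageClass
```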
@@ -1168,10 +1168,13 @@ persistent volume:
  secrets are passed. When you have configured secret data for node-initiated
  volume expansion, the kubelet passes that data via the `NodeExpandVolume()`
  call to the CSI driver. In order to use the `nodeExpandSecretRef` field, your
-  cluster should be running Kubernetes version 1.25 or later and you must enable
+  cluster should be running Kubernetes version 1.25 or later.
+  * If you are running Kubernetes version 1.25 or 1.26, you must enable
  the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
  named `CSINodeExpandSecret` for each kube-apiserver and for the kubelet on every
-  node. You must also be using a CSI driver that supports or requires secret data during
+  node. In Kubernetes version 1.27 this feature has been enabled by default
+  and no explicit enablement of the feature gate is required.
+  You must also be using a CSI driver that supports or requires secret data during
  node-initiated storage resize operations.
* `nodePublishSecretRef`: A reference to the secret object containing
  sensitive information to pass to the CSI driver to complete the CSI
@@ -89,10 +89,6 @@ section refers to several key workload abstractions and how they map to Windows.

The `.spec.os.name` field should be set to `windows` to indicate that the current Pod uses Windows containers.

-{{< note >}}
-Starting from 1.25, the `IdentifyPodOS` feature gate is in GA stage and defaults to be enabled.
-{{< /note >}}
-
If you set the `.spec.os.name` field to `windows`,
you must not set the following fields in the `.spec` of that Pod:
@@ -162,10 +162,6 @@ that the containers in that Pod are designed for. For Pods that run Linux containers, set
`.spec.os.name` to `linux`. For Pods that run Windows containers, set `.spec.os.name`
to `windows`.

-{{< note >}}
-Starting from 1.25, the `IdentifyPodOS` feature is in GA stage and defaults to be enabled.
-{{< /note >}}
-
The scheduler does not use the value of `.spec.os.name` when assigning Pods to nodes. You should
use normal Kubernetes mechanisms for
[assigning pods to nodes](/docs/concepts/scheduling-eviction/assign-pod-node/)
@@ -14,9 +14,9 @@ weight: 80

A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repeating schedule.

CronJob is meant for performing regular scheduled actions such as backups, report generation,
and so on. One CronJob object is like one line of a _crontab_ (cron table) file on a
Unix system. It runs a job periodically on a given schedule, written in
[Cron](https://en.wikipedia.org/wiki/Cron) format.

CronJobs have limitations and idiosyncrasies.
@@ -162,19 +162,22 @@ For another way to clean up jobs automatically, see [Clean up finished jobs automatically].

### Time zones

-For CronJobs with no time zone specified, the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}} interprets schedules relative to its local time zone.
+{{< feature-state for_k8s_version="v1.27" state="stable" >}}

-{{< feature-state for_k8s_version="v1.25" state="beta" >}}
+For CronJobs with no time zone specified, the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
+interprets schedules relative to its local time zone.

-If you enable the `CronJobTimeZone` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/),
-you can specify a time zone for a CronJob (if you don't enable that feature gate, or if you are using a version of
-Kubernetes that does not have experimental time zone support, all CronJobs in your cluster have an unspecified
-timezone).
+You can specify a time zone for a CronJob by setting `.spec.timeZone` to the name
+of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
+For example, setting `.spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret
+the schedule relative to Coordinated Universal Time.

-When you have the feature enabled, you can set `.spec.timeZone` to the name of a valid [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). For example, setting
-`.spec.timeZone: "Etc/UTC"` instructs Kubernetes to interpret the schedule relative to Coordinated Universal Time.
+A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is not available on the system.
+
+## CronJob limitations {#cron-job-limitations}
+
+### Unsupported TimeZone specification

{{< caution >}}
The implementation of the CronJob API in Kubernetes {{< skew currentVersion >}} lets you set
the `.spec.schedule` field to include a timezone; for example: `CRON_TZ=UTC * * * * *`
or `TZ=UTC * * * * *`.
@@ -183,14 +186,10 @@ Specifying a timezone that way is **not officially supported** (and never has been).

If you try to set a schedule that includes `TZ` or `CRON_TZ` timezone specification,
Kubernetes reports a [warning](/blog/2020/09/03/warnings/) to the client.
-Future versions of Kubernetes might not implement that unofficial timezone mechanism at all.
+Future versions of Kubernetes will prevent setting the unofficial timezone mechanism entirely.
{{< /caution >}}

-A time zone database from the Go standard library is included in the binaries and used as a fallback in case an external database is not available on the system.
-
-## CronJob limitations {#cron-job-limitations}
-
### Modifying a CronJob

By design, a CronJob contains a template for _new_ Jobs.
If you modify an existing CronJob, the changes you make will apply to new Jobs that
start to run after your modification is complete. Jobs (and their Pods) that have already
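The supported `.spec.timeZone` field covered in the time-zones hunk above can be sketched in a full CronJob manifest like this (the name, schedule, and container are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report         # illustrative name
spec:
  schedule: "0 1 * * *"        # 01:00 every day, interpreted in the zone below
  timeZone: "Etc/UTC"          # supported field; avoid TZ=/CRON_TZ= in the schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox:1.36
              command: ["sh", "-c", "date"]
```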
@@ -54,22 +54,22 @@ Check on the status of the Job with `kubectl`:

{{< tabs name="Check status of Job" >}}
{{< tab name="kubectl describe job pi" codelang="bash" >}}
-Name:             pi
-Namespace:        default
-Selector:         controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578
-Labels:           controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578
-                  job-name=pi
-Annotations:      batch.kubernetes.io/job-tracking:
-Parallelism:      1
-Completions:      1
-Completion Mode:  NonIndexed
-Start Time:       Fri, 28 Oct 2022 13:05:18 +0530
-Completed At:     Fri, 28 Oct 2022 13:05:21 +0530
-Duration:         3s
-Pods Statuses:    0 Active / 1 Succeeded / 0 Failed
+Name:             pi
+Namespace:        default
+Selector:         batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+Labels:           batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+                  batch.kubernetes.io/job-name=pi
+...
+Annotations:      batch.kubernetes.io/job-tracking: ""
+Parallelism:      1
+Completions:      1
+Start Time:       Mon, 02 Dec 2019 15:20:11 +0200
+Completed At:     Mon, 02 Dec 2019 15:21:16 +0200
+Duration:         65s
+Pods Statuses:    0 Running / 1 Succeeded / 0 Failed
Pod Template:
-  Labels:  controller-uid=0cd26dd5-88a2-4a5f-a203-ea19a1d5d578
-           job-name=pi
+  Labels:  batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+           batch.kubernetes.io/job-name=pi
Containers:
  pi:
    Image:      perl:5.34.0
@@ -93,15 +93,13 @@ Events:
apiVersion: batch/v1
kind: Job
metadata:
-  annotations:
-    batch.kubernetes.io/job-tracking: ""
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl:5.34.0","name":"pi"}],"restartPolicy":"Never"}}}}
+  annotations: batch.kubernetes.io/job-tracking: ""
+  ...
  creationTimestamp: "2022-11-10T17:53:53Z"
  generation: 1
  labels:
-    controller-uid: 204fb678-040b-497f-9266-35ffa8716d14
-    job-name: pi
+    batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
+    batch.kubernetes.io/job-name: pi
  name: pi
  namespace: default
  resourceVersion: "4751"
@@ -113,14 +111,14 @@ spec:
  parallelism: 1
  selector:
    matchLabels:
-      controller-uid: 204fb678-040b-497f-9266-35ffa8716d14
+      batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
-        controller-uid: 204fb678-040b-497f-9266-35ffa8716d14
-        job-name: pi
+        batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
+        batch.kubernetes.io/job-name: pi
    spec:
      containers:
      - command:
@@ -152,7 +150,7 @@ To view completed Pods of a Job, use `kubectl get pods`.

To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:

```shell
-pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
+pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
```
@@ -171,6 +169,12 @@ View the standard output of one of the pods:
kubectl logs $pods
```

+Another way to view the logs of a Job:
+
+```shell
+kubectl logs jobs/pi
+```
+
The output is similar to this:

```
@@ -192,6 +196,10 @@ characters.

A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).

+### Job Labels
+
+Job labels will have the `batch.kubernetes.io/` prefix for `job-name` and `controller-uid`.
+
### Pod Template

The `.spec.template` is the only required field of the `.spec`.
@@ -631,14 +639,7 @@ as soon as the Job was resumed.

### Mutable Scheduling Directives

-{{< feature-state for_k8s_version="v1.23" state="beta" >}}
-
-{{< note >}}
-In order to use this behavior, you must enable the `JobMutableNodeSchedulingDirectives`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/).
-It is enabled by default.
-{{< /note >}}
+{{< feature-state for_k8s_version="v1.27" state="stable" >}}

In most cases a parallel job will want the pods to run with constraints,
like all in the same zone, or all either on GPU model x or y but not a mix of both.
@@ -653,7 +654,7 @@ pod-to-node assignment to kube-scheduler. This is allowed only for suspended Jobs that have never
been unsuspended before.

The fields in a Job's pod template that can be updated are node affinity, node selector,
-tolerations, labels and annotations.
+tolerations, labels, annotations and [scheduling gates](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/).

### Specifying your own Pod selector
@@ -696,12 +697,12 @@ metadata:
spec:
  selector:
    matchLabels:
-      controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+      batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...
```

Then you create a new Job with name `new` and you explicitly specify the same selector.
-Since the existing Pods have label `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
+Since the existing Pods have label `batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
they are controlled by Job `new` as well.

You need to specify `manualSelector: true` in the new Job since you are not using
@@ -716,7 +717,7 @@ spec:
  manualSelector: true
  selector:
    matchLabels:
-      controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+      batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...
```
@@ -807,6 +808,17 @@ These are some requirements and semantics of the API:
  - `Count`: use to indicate that the Pod should be handled in the default way.
    The counter towards the `.spec.backoffLimit` should be incremented.

+{{< note >}}
+When you use a `podFailurePolicy`, the job controller only matches Pods in the
+`Failed` phase. Pods with a deletion timestamp that are not in a terminal phase
+(`Failed` or `Succeeded`) are considered still terminating. This implies that
+terminating pods retain a [tracking finalizer](#job-tracking-with-finalizers)
+until they reach a terminal phase.
+Since Kubernetes 1.27, the kubelet transitions deleted pods to a terminal phase
+(see: [Pod Phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase)). This
+ensures that deleted pods have their finalizers removed by the Job controller.
+{{< /note >}}
+
### Job tracking with finalizers

{{< feature-state for_k8s_version="v1.26" state="stable" >}}
@@ -837,6 +849,19 @@ checking if the Job has the annotation
this annotation from Jobs. Instead, you can recreate the Jobs to ensure they
are tracked using Pod finalizers.

+### Elastic Indexed Jobs
+
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}
+
+You can scale Indexed Jobs up or down by mutating both `.spec.parallelism`
+and `.spec.completions` together such that `.spec.parallelism == .spec.completions`.
+When the `ElasticIndexedJob` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/)
+is disabled, `.spec.completions` is immutable.
+
+Use cases for elastic Indexed Jobs include batch workloads which require
+scaling an indexed Job, such as MPI, Horovod, Ray, and PyTorch training jobs.
+
## Alternatives

### Bare Pods
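A sketch of an Indexed Job that can be scaled elastically, as described in the hunk above; the name, image, and sizes are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-training       # illustrative name
spec:
  completionMode: Indexed
  completions: 5
  parallelism: 5               # keep equal to completions when scaling
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo worker $JOB_COMPLETION_INDEX"]
```

To scale, patch both fields to the same new value in one update, for example with something like `kubectl patch job indexed-training --type=merge -p '{"spec":{"parallelism":10,"completions":10}}'`.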
@@ -160,7 +160,7 @@ pods will be assigned ordinals from 0 up through N-1.

### Start ordinal

-{{< feature-state for_k8s_version="v1.26" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}

`.spec.ordinals` is an optional field that allows you to configure the integer
ordinals assigned to each Pod. It defaults to nil. You must enable the
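A hedged sketch of the `.spec.ordinals` field discussed above; everything but the `ordinals` stanza is illustrative boilerplate:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                    # illustrative name
spec:
  serviceName: web
  replicas: 3
  ordinals:
    start: 5                   # Pods are named web-5, web-6, web-7
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # illustrative image
```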
@@ -360,7 +360,7 @@ StatefulSet will then begin to recreate the Pods using the reverted template.

## PersistentVolumeClaim retention

-{{< feature-state for_k8s_version="v1.23" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}

The optional `.spec.persistentVolumeClaimRetentionPolicy` field controls if
and how PVCs are deleted during the lifecycle of a StatefulSet. You must enable the
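The retention policy field mentioned above can be sketched as the following fragment (the StatefulSet name is illustrative and the rest of the spec is elided):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                    # illustrative name
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete        # remove PVCs when the StatefulSet is deleted
    whenScaled: Retain         # keep PVCs for Pods removed by scale-down
  # serviceName, replicas, selector, and template as usual
```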
@@ -136,6 +136,11 @@ against the disruption budget, but workload resources (such as Deployment and StatefulSet)
are not limited by PDBs when doing rolling upgrades. Instead, the handling of failures
during application updates is configured in the spec for the specific workload resource.

+It is recommended to set the `AlwaysAllow` [Unhealthy Pod Eviction Policy](/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy)
+on your PodDisruptionBudgets to support eviction of misbehaving applications during a node drain.
+The default behavior is to wait for the application pods to become [healthy](/docs/tasks/run-application/configure-pdb/#healthiness-of-a-pod)
+before the drain can proceed.
+
When a pod is evicted using the eviction API, it is gracefully
[terminated](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination), honoring the
`terminationGracePeriodSeconds` setting in its [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).
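The recommendation above translates into a PodDisruptionBudget like the following sketch (name and selector are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb             # illustrative name
spec:
  minAvailable: 1
  unhealthyPodEvictionPolicy: AlwaysAllow   # allow evicting pods that never became healthy
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
```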
@@ -231,11 +236,6 @@ can happen, according to:

{{< feature-state for_k8s_version="v1.26" state="beta" >}}

-{{< note >}}
-If you are using an older version of Kubernetes than {{< skew currentVersion >}}
-please refer to the corresponding version of the documentation.
-{{< /note >}}
-
{{< note >}}
In order to use this behavior, you must have the `PodDisruptionConditions`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
@@ -247,7 +247,7 @@ that the Pod is about to be deleted due to a {{<glossary_tooltip term_id="disruption">}}.
The `reason` field of the condition additionally
indicates one of the following reasons for the Pod termination:

-`PreemptionByKubeScheduler`
+`PreemptionByScheduler`
: Pod is due to be {{<glossary_tooltip term_id="preemption" text="preempted">}} by a scheduler in order to accommodate a new Pod with a higher priority. For more information, see [Pod priority preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/).

`DeletionByTaintManager`
@@ -107,10 +107,10 @@ for resources such as CPU and memory.
: A container's memory request

`resource: limits.hugepages-*`
-: A container's hugepages limit (provided that the `DownwardAPIHugePages` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)
+: A container's hugepages limit

`resource: requests.hugepages-*`
-: A container's hugepages request (provided that the `DownwardAPIHugePages` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled)
+: A container's hugepages request

`resource: limits.ephemeral-storage`
: A container's ephemeral-storage limit
@@ -91,6 +91,12 @@ A Pod is granted a term to terminate gracefully, which defaults to 30 seconds.
You can use the flag `--force` to [terminate a Pod by force](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced).
{{< /note >}}

+Since Kubernetes 1.27, the kubelet transitions deleted pods, except for
+[static pods](/docs/tasks/configure-pod-container/static-pod/) and
+[force-deleted pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced)
+without a finalizer, to a terminal phase (`Failed` or `Succeeded` depending on
+the exit statuses of the pod containers) before their deletion from the API server.
+
If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.
@@ -296,10 +302,7 @@ Each probe must define exactly one of these four mechanisms:
  The target should implement
  [gRPC health checks](https://grpc.io/grpc/core/md_doc_health-checking.html).
  The diagnostic is considered successful if the `status`
-  of the response is `SERVING`.
-  gRPC probes are an alpha feature and are only available if you
-  enable the `GRPCContainerProbe`
-  [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
+  of the response is `SERVING`.

`httpGet`
: Performs an HTTP `GET` request against the Pod's IP
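A minimal sketch of the gRPC probe mechanism described above; the Pod name, image, and port are illustrative, and the image is assumed to serve the standard gRPC health-checking service:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-app               # illustrative name
spec:
  containers:
    - name: server
      image: registry.example/grpc-server:1.0   # placeholder image exposing gRPC health checks
      ports:
        - containerPort: 9090
      livenessProbe:
        grpc:
          port: 9090
        initialDelaySeconds: 5
```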
@@ -494,6 +497,8 @@ feature gate `EndpointSliceTerminatingCondition` is enabled.
1. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends
   `SIGKILL` to any processes still running in any container in the Pod.
   The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
+1. The kubelet transitions the pod into a terminal phase (`Failed` or `Succeeded` depending on
+   the end state of its containers). This step is guaranteed since version 1.27.
1. The kubelet triggers forcible removal of the Pod object from the API server, by setting the grace period
   to 0 (immediate deletion).
1. The API server deletes the Pod's API object, which is then no longer visible from any client.
@@ -85,6 +85,22 @@ CPU limit or a CPU request.
Containers in a Pod can request other resources (not CPU or memory) and still be classified as
`BestEffort`.

+## Memory QoS with cgroup v2
+
+{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
+
+Memory QoS uses the memory controller of cgroup v2 to guarantee memory resources in Kubernetes.
+Memory requests and limits of containers in a pod are used to set the specific interfaces `memory.min`
+and `memory.high` provided by the memory controller. When `memory.min` is set to memory requests,
+memory resources are reserved and never reclaimed by the kernel; this is how Memory QoS ensures
+memory availability for Kubernetes pods. If memory limits are set in the container,
+the system needs to limit container memory usage; Memory QoS uses `memory.high`
+to throttle a workload approaching its memory limit, ensuring that the system is not overwhelmed
+by instantaneous memory allocation.
+
+Memory QoS relies on QoS class to determine which settings to apply; however, these are different
+mechanisms that both provide controls over quality of service.
+
## Some behavior is independent of QoS class {#class-independent-behavior}

Certain behavior is independent of the QoS class assigned by Kubernetes. For example:
@@ -31,18 +31,32 @@ mitigate some future vulnerabilities too.

{{% thirdparty-content %}}

This is a Linux-only feature and support is needed in Linux for idmap mounts on
the filesystems used. This means:

* On the node, the filesystem you use for `/var/lib/kubelet/pods/`, or the
  custom directory you configure for this, needs idmap mount support.
* All the filesystems used in the pod's volumes must support idmap mounts.

In practice this means you need at least Linux 6.3, as tmpfs started supporting
idmap mounts in that version. This is usually needed as several Kubernetes
features use tmpfs (the service account token that is mounted by default uses a
tmpfs, Secrets use a tmpfs, etc.)

Some popular filesystems that support idmap mounts in Linux 6.3 are: btrfs,
ext4, xfs, fat, tmpfs, overlayfs.

In addition, support is needed in the
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
to use this feature with Kubernetes stateless pods:

* CRI-O: version 1.25 (and later) supports user namespaces for containers.

* containerd: version 1.7 supports user namespaces for containers, compatible
  with Kubernetes v1.25 and v1.26, but not with later releases. If you are
  running a different version of Kubernetes, check the documentation for that
  Kubernetes release.

Support for this in [cri-dockerd is not planned][CRI-dockerd-issue] yet.
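The kernel-version requirement above can be checked in shell; `kernel_at_least` is a hypothetical helper for illustration, not part of Kubernetes or any CLI:

```shell
# Hypothetical helper: check whether a kernel version string is at least 6.3,
# the first release where tmpfs supports idmap mounts.
# Uses GNU sort's version-aware comparison (-V).
kernel_at_least() {
  # true if version "$1" >= required version "$2"
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel_at_least "6.4.0" "6.3" && echo "tmpfs idmap mounts available"
kernel_at_least "5.15.0" "6.3" || echo "kernel too old for tmpfs idmap mounts"
```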
@@ -154,13 +168,6 @@ volume types are allowed:

* downwardAPI
* emptyDir

## {{% heading "whatsnext" %}}

* Take a look at [Use a User Namespace With a Pod](/docs/tasks/configure-pod-container/user-namespaces/)
@@ -100,7 +100,6 @@ operator to use or manage a cluster.

## Config API for kubeadm

* [v1beta3](/docs/reference/config-api/kubeadm-config.v1beta3/)

## Design Docs
@@ -1221,7 +1221,7 @@ The following `ExecCredential` manifest describes a cluster information sample.

## API access to authentication information for a client {#self-subject-review}

{{< feature-state for_k8s_version="v1.27" state="beta" >}}

If your cluster has the API enabled, you can use the `SelfSubjectReview` API to find out how your Kubernetes cluster maps your authentication
information to identify you as a client. This works whether you are authenticating as a user (typically representing
@@ -1231,11 +1231,11 @@ a real person) or as a ServiceAccount.

Request example (the body would be a `SelfSubjectReview`):

```
POST /apis/authentication.k8s.io/v1beta1/selfsubjectreviews
```

```json
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "SelfSubjectReview"
}
```
@@ -1243,7 +1243,7 @@ Response example:

```json
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "SelfSubjectReview",
  "status": {
    "userInfo": {
@@ -1262,7 +1262,7 @@ Response example:

}
```

For convenience, the `kubectl auth whoami` command is present. Executing this command will produce the following output (though the user attributes shown will differ for your user):

* Simple output example

```
@@ -1352,8 +1352,8 @@ By default, all authenticated users can create `SelfSubjectReview` objects when

You can only make `SelfSubjectReview` requests if:

* the `APISelfSubjectReview`
  [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
  is enabled for your cluster (enabled by default after reaching Beta)
* the API server for your cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1`
  {{< glossary_tooltip term_id="api-group" text="API group" >}}
  enabled.
{{< /note >}}
@@ -4,27 +4,33 @@ reviewers:

- mikedanese
- munnerz
- enj
title: Certificates and Certificate Signing Requests
content_type: concept
weight: 25
---

<!-- overview -->

Kubernetes certificate and trust bundle APIs enable automation of
[X.509](https://www.itu.int/rec/T-REC-X.509) credential provisioning by providing
a programmatic interface for clients of the Kubernetes API to request and obtain
X.509 {{< glossary_tooltip term_id="certificate" text="certificates" >}} from a Certificate Authority (CA).

There is also experimental (alpha) support for distributing [trust bundles](#cluster-trust-bundles).

<!-- body -->

## Certificate signing requests

{{< feature-state for_k8s_version="v1.19" state="stable" >}}

A CertificateSigningRequest (CSR) resource is used to request that a certificate be signed
by a denoted signer, after which the request may be approved or denied before
finally being signed.

### Request signing process

The CertificateSigningRequest resource type allows a client to ask for an X.509 certificate
to be issued, based on a signing request.
@@ -64,12 +70,46 @@ state for some duration:

* Pending requests: automatically deleted after 24 hours
* All requests: automatically deleted after the issued certificate has expired

### Certificate signing authorization {#authorization}

To allow creating a CertificateSigningRequest and retrieving any CertificateSigningRequest:

* Verbs: `create`, `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`

For example:

{{< codenew file="access/certificate-signing-request/clusterrole-create.yaml" >}}

To allow approving a CertificateSigningRequest:

* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/approval`
* Verbs: `approve`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`

For example:

{{< codenew file="access/certificate-signing-request/clusterrole-approve.yaml" >}}

To allow signing a CertificateSigningRequest:

* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/status`
* Verbs: `sign`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`

{{< codenew file="access/certificate-signing-request/clusterrole-sign.yaml" >}}
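The referenced `clusterrole-approve.yaml` manifest is not reproduced in this page; as an illustration, a ClusterRole granting the approval permissions listed above might look like the following sketch (the role name and signer name are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-csr-approver   # placeholder name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/approval"]
  verbs: ["update"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["signers"]
  # placeholder signer; use <signerNameDomain>/<signerNamePath> or <signerNameDomain>/*
  resourceNames: ["example.com/my-signer"]
  verbs: ["approve"]
```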
## Signers

Signers abstractly represent the entity or entities that might sign, or have
signed, a security certificate.

Any signer that is made available for use outside a particular cluster should provide information
about how the signer works, so that consumers can understand what that means for CertificateSigningRequests
and (if enabled) [ClusterTrustBundles](#cluster-trust-bundles).
This includes:

1. **Trust distribution**: how trust anchors (CA certificates or certificate bundles) are distributed.
1. **Permitted subjects**: any restrictions on and behavior when a disallowed subject is requested.
1. **Permitted x509 extensions**: including IP subjectAltNames, DNS subjectAltNames, Email subjectAltNames, URI subjectAltNames etc, and behavior when a disallowed extension is requested.
1. **Permitted key usages / extended key usages**: any restrictions on and behavior when usages different than the signer-determined usages are specified in the CSR.
@@ -77,13 +117,17 @@ This includes:

   and the behavior when the signer-determined expiration is different from the CSR `spec.expirationSeconds` field.
1. **CA bit allowed/disallowed**: and behavior if a CSR contains a request for a CA certificate when the signer does not permit it.

Commonly, the `status.certificate` field of a CertificateSigningRequest contains a
single PEM-encoded X.509 certificate once the CSR is approved and the certificate is issued.
Some signers store multiple certificates into the `status.certificate` field. In
that case, the documentation for the signer should specify the meaning of
additional certificates; for example, this might be the certificate plus
intermediates to be presented during TLS handshakes.
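To see how many certificates a signer placed into such a field, you can count the PEM blocks; a small self-contained sketch (the bundle contents below are fake stand-in data, not real certificates):

```shell
# Count PEM "CERTIFICATE" blocks in a bundle, such as one extracted from a
# CSR's status.certificate field. The bundle below is fake stand-in data.
cat > bundle.pem <<'EOF'
-----BEGIN CERTIFICATE-----
QUFBQQ==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
QkJCQg==
-----END CERTIFICATE-----
EOF
grep -c -- '-----BEGIN CERTIFICATE-----' bundle.pem
```

With two blocks present, the count printed is 2; a value greater than 1 tells you to consult the signer's documentation for the meaning of the extra certificates.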
If you want to make the _trust anchor_ (root certificate) available, this should be done
separately from a CertificateSigningRequest and its `status.certificate` field. For example,
you could use a ClusterTrustBundle.

The PKCS#10 signing request format does not have a standard mechanism to specify a
certificate expiration or lifetime. The expiration or lifetime therefore has to be set
through the `spec.expirationSeconds` field of the CSR object. The built-in signers
@@ -153,9 +197,8 @@ Kubernetes provides built-in signers that each have a well-known `signerName`:

   of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.
1. CA bit allowed/disallowed - not allowed.

The kube-controller-manager implements [control plane signing](#signer-control-plane) for each of the built-in
signers. Failures for all of these are only reported in kube-controller-manager logs.

{{< note >}}
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
@@ -168,156 +211,89 @@ kube-apiserver, but this is not a standard.

None of these usages are related to ServiceAccount token secrets `.data[ca.crt]` in any way. That CA bundle is only
guaranteed to verify a connection to the API server using the default service (`kubernetes.default.svc`).

### Custom signers

You can also introduce your own custom signer, which should have a similar prefixed name but using your
own domain name. For example, if you represent an open source project that uses the domain `open-fictional.example`
then you might use `issuer.open-fictional.example/service-mesh` as a signer name.

A custom signer uses the Kubernetes API to issue a certificate. See [API-based signers](#signer-api).

## Signing

### Control plane signer {#signer-control-plane}

The Kubernetes control plane implements each of the
[Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers),
as part of the kube-controller-manager.

{{< note >}}
Prior to Kubernetes v1.18, the kube-controller-manager would sign any CSRs that
were marked as approved.
{{< /note >}}

{{< note >}}
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
{{< /note >}}

### API-based signers {#signer-api}

Users of the REST API can sign CSRs by submitting an UPDATE request to the `status`
subresource of the CSR to be signed.

As part of this request, the `status.certificate` field should be set to contain the
signed certificate. This field contains one or more PEM-encoded certificates.

All PEM blocks must have the "CERTIFICATE" label, contain no headers,
and the encoded data must be a BER-encoded ASN.1 Certificate structure
as described in [section 4 of RFC5280](https://tools.ietf.org/html/rfc5280#section-4.1).

Example certificate content:

```
-----BEGIN CERTIFICATE-----
MIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b/oAwDQYJKoZIhvcNAQEL
BQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV
BAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4
MB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz
dGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9+ZmU3
+e1zfOywLdoQxrPi+o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm
kmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh
Q7Et13wrvTJqBMJo1GTwQuF+HYOku0NF/DLqbZIcpI08yQKyrBgYz2uO51/oNp8a
sTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7
2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj
YTBfMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB
Af8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC
ggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr
L0/TCrqmuaaliUa42jQTt2OVsVP/L8ofFunj/KjpQU0bvKJPLMRKtmxbhXuQCQi1
qCRkp8o93mHvEz3mTUN+D1cfQ2fpsBENLnpS0F4G/JyY2Vrh19/X8+mImMEK5eOy
o0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY+/XeDhEcg+lJvz9Eyo2
aGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd
M1fLPhLyR54fGaY+7/X8P9AZzPefAkwizeXwe9ii6/a08vWoiE4=
-----END CERTIFICATE-----
```

Non-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated,
to allow for explanatory text as described in [section 5.2 of RFC7468](https://www.rfc-editor.org/rfc/rfc7468#section-5.2).

When encoded in JSON or YAML, this field is base-64 encoded.
A CertificateSigningRequest containing the example certificate above would look like this:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
...
status:
  certificate: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS..."
```

## Approval or rejection {#approval-rejection}

Before a [signer](#signers) issues a certificate based on a CertificateSigningRequest,
the signer typically checks that the issuance for that CSR has been _approved_.

### Control plane automated approval {#approval-rejection-control-plane}

The kube-controller-manager ships with a built-in approver for certificates with
@@ -389,76 +365,236 @@ code using TitleCase; this is a convention but you can set it to anything

you like. If you want to add a note for human consumption, use the
`status.conditions.message` field.

## Cluster trust bundles {#cluster-trust-bundles}

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

{{< note >}}
In Kubernetes {{< skew currentVersion >}}, you must enable the `ClusterTrustBundles`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
_and_ the `certificates.k8s.io/v1alpha1`
{{< glossary_tooltip text="API group" term_id="api-group" >}} in order to use
this API.
{{< /note >}}

A ClusterTrustBundle is a cluster-scoped object for distributing X.509 trust
anchors (root certificates) to workloads within the cluster. They're designed
to work well with the [signer](#signers) concept from CertificateSigningRequests.

ClusterTrustBundles can be used in two modes:
[signer-linked](#ctb-signer-linked) and [signer-unlinked](#ctb-signer-unlinked).

### Common properties and validation {#ctb-common}

All ClusterTrustBundle objects have strong validation on the contents of their
`trustBundle` field. That field must contain one or more X.509 certificates,
DER-serialized, each wrapped in a PEM `CERTIFICATE` block. The certificates
must parse as valid X.509 certificates.

Esoteric PEM features like inter-block data and intra-block headers are either
rejected during object validation, or can be ignored by consumers of the object.
Additionally, consumers are allowed to reorder the certificates in
the bundle with their own arbitrary but stable ordering.

ClusterTrustBundle objects should be considered world-readable within the
cluster. If your cluster uses [RBAC](/docs/reference/access-authn-authz/rbac/)
authorization, all ServiceAccounts have a default grant that allows them to
**get**, **list**, and **watch** all ClusterTrustBundle objects.
If you use your own authorization mechanism and you have enabled
ClusterTrustBundles in your cluster, you should set up an equivalent rule to
make these objects public within the cluster, so that they work as intended.

If you do not have permission to list cluster trust bundles by default in your
cluster, you can impersonate a service account you have access to in order to
see available ClusterTrustBundles:

```bash
kubectl get clustertrustbundles --as='system:serviceaccount:mynamespace:default'
```

### Signer-linked ClusterTrustBundles {#ctb-signer-linked}

Signer-linked ClusterTrustBundles are associated with a _signer name_, like this:

```yaml
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
  name: example.com:mysigner:foo
spec:
  signerName: example.com/mysigner
  trustBundle: "<... PEM data ...>"
```

These ClusterTrustBundles are intended to be maintained by a signer-specific
controller in the cluster, so they have several security features:

* To create or update a signer-linked ClusterTrustBundle, you must be permitted
  to **attest** on the signer (custom authorization verb `attest`,
  API group `certificates.k8s.io`; resource path `signers`). You can configure
  authorization for the specific resource name
  `<signerNameDomain>/<signerNamePath>` or match a pattern such as
  `<signerNameDomain>/*`.
* Signer-linked ClusterTrustBundles **must** be named with a prefix derived from
  their `spec.signerName` field. Slashes (`/`) are replaced with colons (`:`),
  and a final colon is appended. This is followed by an arbitrary name. For
  example, the signer `example.com/mysigner` can be linked to a
  ClusterTrustBundle `example.com:mysigner:<arbitrary-name>`.

Signer-linked ClusterTrustBundles will typically be consumed in workloads
by a combination of a
[field selector](/docs/concepts/overview/working-with-objects/field-selectors/) on the signer name, and a separate
[label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors).

### Signer-unlinked ClusterTrustBundles {#ctb-signer-unlinked}

Signer-unlinked ClusterTrustBundles have an empty `spec.signerName` field, like this:

```yaml
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
  name: foo
spec:
  # no signerName specified, so the field is blank
  trustBundle: "<... PEM data ...>"
```

They are primarily intended for cluster configuration use cases. Each
signer-unlinked ClusterTrustBundle is an independent object, in contrast to the
customary grouping behavior of signer-linked ClusterTrustBundles.

Signer-unlinked ClusterTrustBundles have no `attest` verb requirement.
Instead, you control access to them directly using the usual mechanisms,
such as role-based access control.

To distinguish them from signer-linked ClusterTrustBundles, the names of
signer-unlinked ClusterTrustBundles **must not** contain a colon (`:`).
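The naming rule means you can tell the two modes apart from the object name alone; a small illustrative sketch (`classify_ctb_name` is a hypothetical helper, not an API):

```shell
# Illustration only: the naming convention distinguishes the two
# ClusterTrustBundle modes.
classify_ctb_name() {
  case "$1" in
    *:*) echo "signer-linked" ;;   # name contains a colon
    *)   echo "signer-unlinked" ;; # no colon allowed in this mode
  esac
}

classify_ctb_name "example.com:mysigner:foo"   # prints: signer-linked
classify_ctb_name "foo"                        # prints: signer-unlinked
```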
<!-- TODO this should become a task page -->

## How to issue a certificate for a user {#normal-user}

A few steps are required in order to get a normal user to be able to
authenticate and invoke an API. First, this user must have a certificate issued
by the Kubernetes cluster, and then present that certificate to the Kubernetes API.

### Create private key

The following scripts show how to generate a PKI private key and CSR. It is
important to set the CN and O attributes of the CSR. CN is the name of the user and
O is the group that this user will belong to. You can refer to
[RBAC](/docs/reference/access-authn-authz/rbac/) for standard groups.

```shell
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr
```
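The second command prompts interactively for the subject fields. As a sketch, the same CSR can be produced non-interactively with `-subj`; the username `myuser` and the group `developers` here are illustrative:

```shell
# Generate the key and CSR in one non-interactive pass. CN carries the
# username and O the group; "myuser" and "developers" are example values.
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser/O=developers"
```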

### Create a CertificateSigningRequest {#create-certificatessigningrequest}

Create a CertificateSigningRequest and submit it to a Kubernetes cluster via kubectl. Below is a script to generate the CertificateSigningRequest.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: myuser
spec:
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400  # one day
  usages:
  - client auth
EOF
```

Some points to note:

- `usages` has to be '`client auth`'
- `expirationSeconds` could be made longer (e.g. `864000` for ten days) or shorter (e.g. `3600` for one hour)
- `request` is the base64 encoded value of the CSR file content.
  You can get the content using this command:

  ```shell
  cat myuser.csr | base64 | tr -d "\n"
  ```
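If you want to sanity-check the encoding, this sketch round-trips a file through the same pipeline; `demo.csr` is a stand-in file created just for the check:

```shell
# Round-trip check of the encoding step: encode a stand-in file the same
# way, decode it again, and compare with the original.
printf 'example CSR content\n' > demo.csr
base64 < demo.csr | tr -d '\n' > demo.csr.b64
base64 -d < demo.csr.b64 | diff - demo.csr && echo "round-trip OK"
```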

### Approve the CertificateSigningRequest {#approve-certificate-signing-request}

Use kubectl to create a CSR and approve it.

Get the list of CSRs:

```shell
kubectl get csr
```

Approve the CSR:

```shell
kubectl certificate approve myuser
```

### Get the certificate

Retrieve the certificate from the CSR:

```shell
kubectl get csr/myuser -o yaml
```

The certificate value is in Base64-encoded format under `status.certificate`.

Export the issued certificate from the CertificateSigningRequest.

```shell
kubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt
```

### Create Role and RoleBinding

With the certificate created, it is time to define the Role and RoleBinding for
this user to access Kubernetes cluster resources.

This is a sample command to create a Role for this new user:

```shell
kubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods
```

This is a sample command to create a RoleBinding for this new user:

```shell
kubectl create rolebinding developer-binding-myuser --role=developer --user=myuser
```
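The two imperative commands above correspond to manifests like the following sketch; the names match the commands, and everything else is standard RBAC API shape:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
rules:
- apiGroups: [""]          # "" means the core API group
  resources: ["pods"]
  verbs: ["create", "get", "list", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding-myuser
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: myuser
```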

### Add to kubeconfig

The last step is to add this user into the kubeconfig file.

First, you need to add new credentials:

```shell
kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true
```

Then, you need to add the context:

```shell
kubectl config set-context myuser --cluster=kubernetes --user=myuser
```

To test it, change the context to `myuser`:

```shell
kubectl config use-context myuser
```
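As a sketch, you can then confirm that the certificate and the RBAC setup above line up by asking the API server what the new identity may do; with the `developer` Role bound, pod verbs in the Role's namespace should be allowed while ungranted verbs are denied:

```shell
# From the new 'myuser' context, check effective permissions.
kubectl auth can-i list pods          # granted by the 'developer' Role
kubectl auth can-i create deployments # not granted by the 'developer' Role
```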

## {{% heading "whatsnext" %}}

* Read [Manage TLS Certificates in a Cluster](/docs/tasks/tls/managing-tls-in-a-cluster/)

@@ -719,6 +719,97 @@ webhooks:

The `matchPolicy` for an admission webhook defaults to `Equivalent`.

### Matching requests: `matchConditions`

{{< feature-state state="alpha" for_k8s_version="v1.27" >}}

{{< note >}}
Use of `matchConditions` requires the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`AdmissionWebhookMatchConditions` to be explicitly enabled on the kube-apiserver before this feature can be used.
{{< /note >}}

You can define _match conditions_ for webhooks if you need fine-grained request filtering. These
conditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still
don't provide the filtering you want over when to call out over HTTP. Match conditions are
[CEL expressions](/docs/reference/using-api/cel/). All match conditions must evaluate to true for the
webhook to be called.

Here is an example illustrating a few different uses for match conditions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    matchPolicy: Equivalent
    rules:
      - operations: ['CREATE','UPDATE']
        apiGroups: ['*']
        apiVersions: ['*']
        resources: ['*']
    failurePolicy: 'Ignore' # Fail-open (optional)
    sideEffects: None
    clientConfig:
      service:
        namespace: my-namespace
        name: my-webhook
      caBundle: '<omitted>'
    matchConditions:
      - name: 'exclude-leases' # Each match condition must have a unique name
        expression: '!(request.resource.group == "coordination.k8s.io" && request.resource.resource == "leases")' # Match non-lease resources.
      - name: 'exclude-kubelet-requests'
        expression: '!("system:nodes" in request.userInfo.groups)' # Match requests made by non-node users.
      - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.
        expression: 'request.resource.group != "rbac.authorization.k8s.io"'

  # This example illustrates the use of the 'authorizer'. The authorization check is more expensive
  # than a simple expression, so in this example it is scoped to only RBAC requests by using a second
  # webhook. Both webhooks can be served by the same endpoint.
  - name: rbac.my-webhook.example.com
    matchPolicy: Equivalent
    rules:
      - operations: ['CREATE','UPDATE']
        apiGroups: ['rbac.authorization.k8s.io']
        apiVersions: ['*']
        resources: ['*']
    failurePolicy: 'Fail' # Fail-closed (the default)
    sideEffects: None
    clientConfig:
      service:
        namespace: my-namespace
        name: my-webhook
      caBundle: '<omitted>'
    matchConditions:
      - name: 'breakglass'
        # Skip requests made by users authorized to 'breakglass' on this webhook.
        # The 'breakglass' API verb does not need to exist outside this check.
        expression: '!authorizer.group("admissionregistration.k8s.io").resource("validatingwebhookconfigurations").name("my-webhook.example.com").check("breakglass").allowed()'
```

Match conditions have access to the following CEL variables:

- `object` - The object from the incoming request. The value is null for DELETE requests. The object
  version may be converted based on the [matchPolicy](#matching-requests-matchpolicy).
- `oldObject` - The existing object. The value is null for CREATE requests.
- `request` - The request portion of the [AdmissionReview](#request), excluding `object` and `oldObject`.
- `authorizer` - A CEL Authorizer. May be used to perform authorization checks for the principal
  (authenticated user) of the request. See
  [Authz](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz) in the Kubernetes CEL library
  documentation for more details.
- `authorizer.requestResource` - A shortcut for an authorization check configured with the request
  resource (group, resource, (subresource), namespace, name).

For more information on CEL expressions, refer to the
[Common Expression Language in Kubernetes reference](/docs/reference/using-api/cel/).

In the event of an error evaluating a match condition the webhook is never called. Whether to reject
the request is determined as follows:

1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the webhook.
2. Otherwise:
   - for [`failurePolicy: Fail`](#failure-policy), reject the request (without calling the webhook).
   - for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the webhook.

### Contacting the webhook

Once the API server has determined a request should be sent to a webhook,

@@ -1175,4 +1266,3 @@ cause the control plane components to stop functioning or introduce unknown beha

If your admission webhooks don't intend to modify the behavior of the Kubernetes control
plane, exclude the `kube-system` namespace from being intercepted using a
[`namespaceSelector`](#matching-requests-namespaceselector).

@@ -929,8 +929,8 @@ to a role that grants that permission. To allow a user to create/update role bin

1. Grant them a role that allows them to create/update RoleBinding or ClusterRoleBinding objects, as desired.
2. Grant them permissions needed to bind a particular role:
   * implicitly, by giving them the permissions contained in the role.
   * explicitly, by giving them permission to perform the `bind` verb on the particular Role (or ClusterRole).

For example, this ClusterRole and RoleBinding would allow `user-1` to grant other users the `admin`, `edit`, and `view` roles in the namespace `user-1-namespace`:

@@ -1105,7 +1105,7 @@ Examples:

* Test applying a manifest file of RBAC objects, displaying changes that would be made:

  ```shell
  kubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client
  ```

@@ -1260,7 +1260,7 @@ Here are two approaches for managing this transition:

Run both the RBAC and ABAC authorizers, and specify a policy file that contains
the [legacy ABAC policy](/docs/reference/access-authn-authz/abac/#policy-file-format):

```shell
--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json
```

@@ -92,6 +92,7 @@ metadata:

  name: "demo-binding-test.example.com"
spec:
  policyName: "demo-policy.example.com"
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchLabels:

@@ -107,6 +108,37 @@ ValidatingAdmissionPolicy 'demo-policy.example.com' with binding 'demo-binding-t

The above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured.

#### Validation actions

Each `ValidatingAdmissionPolicyBinding` must specify one or more
`validationActions` to declare how `validations` of a policy are enforced.

The supported `validationActions` are:

- `Deny`: Validation failure results in a denied request.
- `Warn`: Validation failure is reported to the request client
  as a [warning](/blog/2020/09/03/warnings/).
- `Audit`: Validation failure is included in the audit event for the API request.

For example, to both warn clients about a validation failure and to audit the
validation failures, use:

```yaml
validationActions: [Warn, Audit]
```

`Deny` and `Warn` may not be used together since this combination
needlessly duplicates the validation failure both in the
API response body and the HTTP warning headers.

A `validation` that evaluates to false is always enforced according to these
actions. Failures defined by the `failurePolicy` are enforced
according to these actions only if the `failurePolicy` is set to `Fail` (or unset),
otherwise the failures are ignored.

See [Audit Annotations: validation failures](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation_failure)
for more details about the validation failure audit annotation.

#### Parameter resources

Parameter resources allow a policy configuration to be separate from its definition.

@@ -159,6 +191,7 @@ metadata:

  name: "replicalimit-binding-test.example.com"
spec:
  policyName: "replicalimit-policy.example.com"
  validationActions: [Deny]
  paramRef:
    name: "replica-limit-test.example.com"
  matchResources:

@@ -188,6 +221,7 @@ metadata:

  name: "replicalimit-binding-nontest"
spec:
  policyName: "replicalimit-policy.example.com"
  validationActions: [Deny]
  paramRef:
    name: "replica-limit-clusterwide.example.com"
  matchResources:

@@ -219,6 +253,7 @@ metadata:

  name: "replicalimit-binding-global"
spec:
  policyName: "replicalimit-policy.example.com"
  validationActions: [Deny]
  params: "replica-limit-clusterwide.example.com"
  matchResources:
    namespaceSelector:

@@ -299,6 +334,12 @@ variables as well as some other useful variables:

- `request` - Attributes of the [admission request](/docs/reference/config-api/apiserver-admission.v1/#admission-k8s-io-v1-AdmissionRequest).
- `params` - Parameter resource referred to by the policy binding being evaluated. The value is
  null if `ParamKind` is unset.
- `authorizer` - A CEL Authorizer. May be used to perform authorization checks for the principal
  (authenticated user) of the request. See
  [Authz](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz) in the Kubernetes CEL library
  documentation for more details.
- `authorizer.requestResource` - A shortcut for an authorization check configured with the request
  resource (group, resource, (subresource), namespace, name).

The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from
the root of the object. No other metadata properties are accessible.

@@ -323,12 +364,12 @@ For example, `int` in the word “sprint” would not be escaped.

Examples on escaping:

| property name | rule with escaped property name     |
| --------------|-------------------------------------|
| namespace     | `object.__namespace__ > 0`          |
| x-prop        | `object.x__dash__prop > 0`          |
| redact__d     | `object.redact__underscores__d > 0` |
| string        | `object.startsWith('kube')`         |

Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1].
Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type:

@@ -365,3 +406,175 @@ HTTP response code, are used in the HTTP response to the client.

The currently supported reasons are: `Unauthorized`, `Forbidden`, `Invalid`, `RequestEntityTooLarge`.
If not set, `StatusReasonInvalid` is used in the response to the client.

### Matching requests: `matchConditions`

You can define _match conditions_ for a `ValidatingAdmissionPolicy` if you need fine-grained request filtering. These
conditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still
don't provide the filtering you want. Match conditions are
[CEL expressions](/docs/reference/using-api/cel/). All match conditions must evaluate to true for the
resource to be evaluated.

Here is an example illustrating a few different uses for match conditions:

{{< codenew file="access/validating-admission-policy-match-conditions.yaml" >}}

Match conditions have access to the same CEL variables as validation expressions.

In the event of an error evaluating a match condition the policy is not evaluated. Whether to reject
the request is determined as follows:

1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the policy.
2. Otherwise:
   - for [`failurePolicy: Fail`](#failure-policy), reject the request (without evaluating the policy).
   - for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the policy.

### Audit annotations

`auditAnnotations` may be used to include audit annotations in the audit event of the API request.

For example, here is an admission policy with an audit annotation:

{{< codenew file="access/validating-admission-policy-audit-annotation.yaml" >}}

When an API request is validated with this admission policy, the resulting audit event will look like:

```
# the audit event recorded
{
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1",
    "annotations": {
        "demo-policy.example.com/high-replica-count": "Deployment spec.replicas set to 128"
        # other annotations
        ...
    }
    # other fields
    ...
}
```

In this example the annotation will only be included if the `spec.replicas` of the Deployment is more than
50, otherwise the CEL expression evaluates to null and the annotation will not be included.

Note that audit annotation keys are prefixed by the name of the `ValidatingAdmissionPolicy` and a `/`. If
another admission controller, such as an admission webhook, uses the exact same audit annotation key, the
value of the first admission controller to include the audit annotation will be included in the audit
event and all other values will be ignored.

### Message expression

To return a more friendly message when the policy rejects a request, we can use a CEL expression
to compose a message with `spec.validations[i].messageExpression`. Similar to the validation expression,
a message expression has access to `object`, `oldObject`, `request`, and `params`. Unlike validations,
a message expression must evaluate to a string.

For example, to better inform the user of the reason of denial when the policy refers to a parameter,
we can have the following validation:

{{< codenew file="access/deployment-replicas-policy.yaml" >}}

After creating a params object that limits the replicas to 3 and setting up the binding,
when we try to create a deployment with 5 replicas, we will receive the following message.

```
$ kubectl create deploy --image=nginx nginx --replicas=5
error: failed to create deployment: deployments.apps "nginx" is forbidden: ValidatingAdmissionPolicy 'deploy-replica-policy.example.com' with binding 'demo-binding-test.example.com' denied request: object.spec.replicas must be no greater than 3
```

This is more informative than a static message of "too many replicas".

The message expression takes precedence over the static message defined in `spec.validations[i].message` if both are defined.
However, if the message expression fails to evaluate, the static message will be used instead.
Additionally, if the message expression evaluates to a multi-line string,
the evaluation result will be discarded and the static message will be used if present.
Note that the static message is validated against multi-line strings.

### Type checking

When a policy definition is created or updated, the validation process parses the expressions it contains
and reports any syntax errors, rejecting the definition if any errors are found.
Afterward, the referred variables are checked for type errors, including missing fields and type confusion,
against the matched types of `spec.matchConstraints`.
The result of type checking can be retrieved from `status.typeChecking`.
The presence of `status.typeChecking` indicates the completion of type checking,
and an empty `status.typeChecking` means that no errors were detected.

For example, given the following policy definition:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: "deploy-replica-policy.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.replicas > 1" # should be "object.spec.replicas > 1"
    message: "must be replicated"
    reason: Invalid
```

The status will yield the following information:

```yaml
status:
  typeChecking:
    expressionWarnings:
    - fieldRef: spec.validations[0].expression
      warning: |-
        apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'
         | object.replicas > 1
         | ......^
```

If multiple resources are matched in `spec.matchConstraints`, all matched resources will be checked.
For example, the following policy definition

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: "replica-policy.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments","replicasets"]
  validations:
  - expression: "object.replicas > 1" # should be "object.spec.replicas > 1"
    message: "must be replicated"
    reason: Invalid
```

will have multiple types and the type checking result of each type in the warning message.

```yaml
status:
  typeChecking:
    expressionWarnings:
    - fieldRef: spec.validations[0].expression
      warning: |-
        apps/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'
         | object.replicas > 1
         | ......^
        apps/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'
         | object.replicas > 1
         | ......^
```

Type checking has the following limitations:

- No wildcard matching. If `spec.matchConstraints.resourceRules` contains `"*"` in any of `apiGroups`, `apiVersions` or `resources`,
  the types that `"*"` matches will not be checked.
- The number of matched types is limited to 10. This is to prevent a policy that manually specifies too many types
  from consuming excessive computing resources. In ascending order of group, version, and then resource, the 11th combination and beyond are ignored.
- Type checking does not affect the policy behavior in any way. Even if the type checking detects errors, the policy will continue
  to evaluate. If errors do occur during evaluation, the failure policy will decide its outcome.
- Type checking does not apply to CRDs, including matched CRD types and the reference of paramKind. Support for CRDs will come in a future release.

@@ -58,8 +58,22 @@ In the following table:

| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 |
| `CSIDriverRegistry` | `true` | Beta | 1.14 | 1.17 |
| `CSIDriverRegistry` | `true` | GA | 1.18 | 1.21 |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `CSIInlineVolume` | `true` | Beta | 1.16 | 1.24 |
| `CSIInlineVolume` | `true` | GA | 1.25 | 1.26 |
| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
| `CSIMigration` | `true` | Beta | 1.17 | 1.24 |
| `CSIMigration` | `true` | GA | 1.25 | 1.26 |
| `CSIMigrationAWS` | `false` | Alpha | 1.14 | 1.16 |
| `CSIMigrationAWS` | `false` | Beta | 1.17 | 1.22 |
| `CSIMigrationAWS` | `true` | Beta | 1.23 | 1.24 |
| `CSIMigrationAWS` | `true` | GA | 1.25 | 1.26 |
| `CSIMigrationAWSComplete` | `false` | Alpha | 1.17 | 1.20 |
| `CSIMigrationAWSComplete` | - | Deprecated | 1.21 | 1.21 |
| `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 |
| `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | 1.22 |
| `CSIMigrationAzureDisk` | `true` | Beta | 1.23 | 1.23 |
| `CSIMigrationAzureDisk` | `true` | GA | 1.24 | 1.26 |
| `CSIMigrationAzureDiskComplete` | `false` | Alpha | 1.17 | 1.20 |
| `CSIMigrationAzureDiskComplete` | - | Deprecated | 1.21 | 1.21 |
| `CSIMigrationAzureFileComplete` | `false` | Alpha | 1.17 | 1.20 |

@@ -85,14 +99,17 @@ In the following table:

| `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | 1.19 |
| `CSIVolumeFSGroupPolicy` | `true` | Beta | 1.20 | 1.22 |
| `CSIVolumeFSGroupPolicy` | `true` | GA | 1.23 | 1.25 |
| `CSRDuration` | `true` | Beta | 1.22 | 1.23 |
| `CSRDuration` | `true` | GA | 1.24 | 1.25 |
| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 |
| `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | 1.22 |
| `ConfigurableFSGroupPolicy` | `true` | GA | 1.23 | 1.25 |
| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 |
| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | 1.23 |
| `ControllerManagerLeaderMigration` | `true` | GA | 1.24 | 1.26 |
| `CronJobControllerV2` | `false` | Alpha | 1.20 | 1.20 |
| `CronJobControllerV2` | `true` | Beta | 1.21 | 1.21 |
| `CronJobControllerV2` | `true` | GA | 1.22 | 1.23 |
| `CustomPodDNS` | `false` | Alpha | 1.9 | 1.9 |
| `CustomPodDNS` | `true` | Beta | 1.10 | 1.13 |
| `CustomPodDNS` | `true` | GA | 1.14 | 1.16 |

@@ -111,6 +128,9 @@ In the following table:

| `CustomResourceWebhookConversion` | `false` | Alpha | 1.13 | 1.14 |
| `CustomResourceWebhookConversion` | `true` | Beta | 1.15 | 1.15 |
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | 1.18 |
| `DaemonSetUpdateSurge` | `false` | Alpha | 1.21 | 1.21 |
| `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | 1.24 |
| `DaemonSetUpdateSurge` | `true` | GA | 1.25 | 1.26 |
| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 |
| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | 1.23 |
| `DefaultPodTopologySpread` | `true` | GA | 1.24 | 1.25 |

@@ -135,9 +155,21 @@ In the following table:

| `EndpointSliceProxying` | `false` | Alpha | 1.18 | 1.18 |
| `EndpointSliceProxying` | `true` | Beta | 1.19 | 1.21 |
| `EndpointSliceProxying` | `true` | GA | 1.22 | 1.24 |
| `EphemeralContainers` | `false` | Alpha | 1.16 | 1.22 |
| `EphemeralContainers` | `true` | Beta | 1.23 | 1.24 |
| `EphemeralContainers` | `true` | GA | 1.25 | 1.26 |
| `EvenPodsSpread` | `false` | Alpha | 1.16 | 1.17 |
| `EvenPodsSpread` | `true` | Beta | 1.18 | 1.18 |
| `EvenPodsSpread` | `true` | GA | 1.19 | 1.21 |
| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
| `ExpandCSIVolumes` | `true` | Beta | 1.16 | 1.23 |
| `ExpandCSIVolumes` | `true` | GA | 1.24 | 1.26 |
| `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.14 |
| `ExpandInUsePersistentVolumes` | `true` | Beta | 1.15 | 1.23 |
| `ExpandInUsePersistentVolumes` | `true` | GA | 1.24 | 1.26 |
| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 |
| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | 1.23 |
| `ExpandPersistentVolumes` | `true` | GA | 1.24 | 1.26 |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 |
| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated | 1.13 | 1.16 |
| `ExternalPolicyForExternalIP` | `true` | GA | 1.18 | 1.22 |

@@ -157,6 +189,9 @@ In the following table:

| `IPv6DualStack` | `false` | Alpha | 1.15 | 1.20 |
| `IPv6DualStack` | `true` | Beta | 1.21 | 1.22 |
| `IPv6DualStack` | `true` | GA | 1.23 | 1.24 |
| `IdentifyPodOS` | `false` | Alpha | 1.23 | 1.23 |
| `IdentifyPodOS` | `true` | Beta | 1.24 | 1.24 |
| `IdentifyPodOS` | `true` | GA | 1.25 | 1.26 |
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
| `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | 1.20 |
| `ImmutableEphemeralVolumes` | `true` | GA | 1.21 | 1.24 |

@@ -176,6 +211,9 @@ In the following table:

| `LegacyNodeRoleBehavior` | `false` | Alpha | 1.16 | 1.18 |
| `LegacyNodeRoleBehavior` | `true` | Beta | 1.19 | 1.20 |
| `LegacyNodeRoleBehavior` | `false` | GA | 1.21 | 1.22 |
| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | 1.24 |
| `LocalStorageCapacityIsolation` | `true` | GA | 1.25 | 1.26 |
| `MountContainers` | `false` | Alpha | 1.9 | 1.16 |
| `MountContainers` | `false` | Deprecated | 1.17 | 1.17 |
| `MountPropagation` | `false` | Alpha | 1.8 | 1.9 |

@@ -183,6 +221,9 @@ In the following table:

| `MountPropagation` | `true` | GA | 1.12 | 1.14 |
| `NamespaceDefaultLabelName` | `true` | Beta | 1.21 | 1.21 |
| `NamespaceDefaultLabelName` | `true` | GA | 1.22 | 1.23 |
| `NetworkPolicyEndPort` | `false` | Alpha | 1.21 | 1.21 |
| `NetworkPolicyEndPort` | `true` | Beta | 1.22 | 1.24 |
| `NetworkPolicyEndPort` | `true` | GA | 1.25 | 1.26 |
| `NodeDisruptionExclusion` | `false` | Alpha | 1.16 | 1.18 |
| `NodeDisruptionExclusion` | `true` | Beta | 1.19 | 1.20 |
| `NodeDisruptionExclusion` | `true` | GA | 1.21 | 1.22 |

@@ -270,6 +311,9 @@ In the following table:
|
|||
| `StartupProbe` | `false` | Alpha | 1.16 | 1.17 |
| `StartupProbe` | `true` | Beta | 1.18 | 1.19 |
| `StartupProbe` | `true` | GA | 1.20 | 1.23 |
| `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 |
| `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | 1.24 |
| `StatefulSetMinReadySeconds` | `true` | GA | 1.25 | 1.26 |
| `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 |
| `StorageObjectInUseProtection` | `true` | GA | 1.11 | 1.24 |
| `StreamingProxyRedirects` | `false` | Beta | 1.5 | 1.5 |

@@ -385,6 +429,18 @@ In the following table:
- `CSIDriverRegistry`: Enable all logic related to the CSIDriver API object in
  `csi.storage.k8s.io`.

- `CSIInlineVolume`: Enable CSI Inline volumes support for pods.

- `CSIMigration`: Enables shims and translation logic to route volume
  operations from in-tree plugins to corresponding pre-installed CSI plugins

- `CSIMigrationAWS`: Enables shims and translation logic to route volume
  operations from the AWS-EBS in-tree plugin to EBS CSI plugin. Supports
  falling back to in-tree EBS plugin for mount operations to nodes that have
  the feature disabled or that do not have EBS CSI plugin installed and
  configured. Does not support falling back for provision operations, for those
  the CSI plugin must be installed and configured.

- `CSIMigrationAWSComplete`: Stops registering the EBS in-tree plugin in
  kubelet and volume controllers and enables shims and translation logic to
  route volume operations from the AWS-EBS in-tree plugin to EBS CSI plugin.
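As an illustration of what `CSIInlineVolume` enables, a Pod can embed a CSI volume directly in its spec instead of referencing a PersistentVolumeClaim. A minimal sketch; the driver name and attributes below are hypothetical and depend on a CSI driver that supports inline (ephemeral) use:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-inline-demo            # hypothetical example name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      csi:                         # inline CSI volume; guarded by CSIInlineVolume
        driver: inline.example.com # hypothetical driver name
        volumeAttributes:
          share: scratch           # driver-specific attributes (illustrative)
```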

@@ -393,6 +449,14 @@ In the following table:
  been deprecated in favor of the `InTreePluginAWSUnregister` feature flag
  which prevents the registration of in-tree EBS plugin.

- `CSIMigrationAzureDisk`: Enables shims and translation logic to route volume
  operations from the Azure-Disk in-tree plugin to AzureDisk CSI plugin.
  Supports falling back to in-tree AzureDisk plugin for mount operations to
  nodes that have the feature disabled or that do not have AzureDisk CSI plugin
  installed and configured. Does not support falling back for provision
  operations, for those the CSI plugin must be installed and configured.
  Requires CSIMigration feature flag enabled.

- `CSIMigrationAzureDiskComplete`: Stops registering the Azure-Disk in-tree
  plugin in kubelet and volume controllers and enables shims and translation
  logic to route volume operations from the Azure-Disk in-tree plugin to

@@ -469,6 +533,13 @@ In the following table:
  {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} controller. Otherwise,
  version 1 of the same controller is selected.

- `ControllerManagerLeaderMigration`: Enables Leader Migration for
  [kube-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) and
  [cloud-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager)
  which allows a cluster operator to live migrate
  controllers from the kube-controller-manager into an external controller-manager
  (e.g. the cloud-controller-manager) in an HA cluster without downtime.

- `CustomPodDNS`: Enable customizing the DNS settings for a Pod using its `dnsConfig` property.
  Check [Pod's DNS Config](/docs/concepts/services-networking/dns-pod-service/#pods-dns-config)
  for more details.
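The `dnsConfig` property that `CustomPodDNS` guards looks roughly like this; the Pod name, resolver address, and search domain are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo     # hypothetical example name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
  dnsPolicy: None           # ignore the cluster DNS defaults entirely
  dnsConfig:
    nameservers:
      - 192.0.2.10          # illustrative resolver address
    searches:
      - example.internal    # illustrative search domain
    options:
      - name: ndots
        value: "2"
```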

@@ -486,6 +557,10 @@ In the following table:
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
  on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).

- `DaemonSetUpdateSurge`: Enables the DaemonSet workloads to maintain
  availability during update per node.
  See [Perform a Rolling Update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/).

- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
  [default spreading](/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints).
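The per-node surge that `DaemonSetUpdateSurge` enables is configured through the DaemonSet's update strategy. A minimal sketch; the name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent          # hypothetical example name
spec:
  selector:
    matchLabels:
      app: node-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # start the new Pod on a node before stopping the old one
      maxUnavailable: 0     # keep the DaemonSet available throughout the update
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: registry.k8s.io/pause:3.9   # placeholder image
```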

@@ -518,9 +593,21 @@ In the following table:
  Endpoints, enabling scalability and performance improvements. See
  [Enabling Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/).

- `EphemeralContainers`: Enable the ability to add
  {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
  to running Pods.

- `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See
  [Pod Topology Spread Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).

- `ExpandCSIVolumes`: Enable the expanding of CSI volumes.

- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See
  [Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).

- `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See
  [Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).

- `ExperimentalCriticalPodAnnotation`: Enable annotating specific pods as *critical*
  so that their [scheduling is guaranteed](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
  This feature is deprecated by Pod Priority and Preemption as of v1.13.
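The even scheduling that `EvenPodsSpread` enables is expressed per Pod through `topologySpreadConstraints`. A sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo         # hypothetical example name
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1                              # allow at most one Pod of imbalance
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule        # keep the Pod pending rather than skew
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9        # placeholder image
```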

@@ -548,6 +635,11 @@ In the following table:
- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
  support for IPv6.

- `IdentifyPodOS`: Allows the Pod OS field to be specified. This helps in identifying
  the OS of the pod authoritatively during the API server admission time.
  In Kubernetes {{< skew currentVersion >}}, the allowed values for the `pod.spec.os.name`
  are `windows` and `linux`.

- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as
  immutable for better safety and performance.
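The Pod OS field that `IdentifyPodOS` allows is set on the Pod spec itself; a minimal sketch (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-pod-demo      # hypothetical example name
spec:
  os:
    name: linux             # allowed values are linux and windows
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
```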

@@ -573,6 +665,11 @@ In the following table:
  node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
  feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`.

- `LocalStorageCapacityIsolation`: Enable the consumption of
  [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
  and also the `sizeLimit` property of an
  [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).

- `MountContainers`: Enable using utility containers on host as the volume mounter.

- `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods.
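The `sizeLimit` property gated by `LocalStorageCapacityIsolation` is set on the `emptyDir` volume itself; the limit below is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo        # hypothetical example name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 500Mi    # exceeding this local storage limit leads to eviction
```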

@@ -683,6 +780,9 @@ In the following table:
- `StartupProbe`: Enable the [startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe)
  probe in the kubelet.

- `StatefulSetMinReadySeconds`: Allows `minReadySeconds` to be respected by
  the StatefulSet controller.

- `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or
  PersistentVolumeClaim objects if they are still being used.
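The startup probe enabled by the `StartupProbe` gate above is declared on a container; a hedged sketch with an illustrative endpoint and timings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-demo     # hypothetical example name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      startupProbe:
        httpGet:
          path: /healthz    # illustrative endpoint
          port: 8080
        failureThreshold: 30   # allow up to 30 * 10s = 300s for startup
        periodSeconds: 10      # other probes are suppressed until this succeeds
```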

@@ -62,11 +62,15 @@ For a reference to old feature gates that are removed, please refer to
| `APIPriorityAndFairness` | `true` | Beta | 1.20 | |
| `APIResponseCompression` | `false` | Alpha | 1.7 | 1.15 |
| `APIResponseCompression` | `true` | Beta | 1.16 | |
| `APISelfSubjectReview` | `false` | Alpha | 1.26 | |
| `APISelfSubjectReview` | `false` | Alpha | 1.26 | 1.26 |
| `APISelfSubjectReview` | `true` | Beta | 1.27 | |
| `APIServerIdentity` | `false` | Alpha | 1.20 | 1.25 |
| `APIServerIdentity` | `true` | Beta | 1.26 | |
| `APIServerTracing` | `false` | Alpha | 1.22 | |
| `AggregatedDiscoveryEndpoint` | `false` | Alpha | 1.26 | |
| `APIServerTracing` | `false` | Alpha | 1.22 | 1.26 |
| `APIServerTracing` | `true` | Beta | 1.27 | |
| `AdmissionWebhookMatchConditions` | `false` | Alpha | 1.27 | |
| `AggregatedDiscoveryEndpoint` | `false` | Alpha | 1.26 | 1.26 |
| `AggregatedDiscoveryEndpoint` | `true` | Beta | 1.27 | |
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | 1.23 |
| `AnyVolumeDataSource` | `true` | Beta | 1.24 | |
| `AppArmor` | `true` | Beta | 1.4 | |

@@ -77,37 +81,40 @@ For a reference to old feature gates that are removed, please refer to
| `CSIMigrationPortworx` | `false` | Alpha | 1.23 | 1.24 |
| `CSIMigrationPortworx` | `false` | Beta | 1.25 | |
| `CSIMigrationRBD` | `false` | Alpha | 1.23 | |
| `CSINodeExpandSecret` | `false` | Alpha | 1.25 | |
| `CSINodeExpandSecret` | `false` | Alpha | 1.25 | 1.26 |
| `CSINodeExpandSecret` | `true` | Beta | 1.27 | |
| `CSIVolumeHealth` | `false` | Alpha | 1.21 | |
| `ComponentSLIs` | `false` | Alpha | 1.26 | |
| `CloudControllerManagerWebhook` | `false` | Alpha | 1.27 | |
| `CloudDualStackNodeIPs` | `false` | Alpha | 1.27 | |
| `ClusterTrustBundle` | `false` | Alpha | 1.27 | |
| `ComponentSLIs` | `false` | Alpha | 1.26 | 1.26 |
| `ComponentSLIs` | `true` | Beta | 1.27 | |
| `ContainerCheckpoint` | `false` | Alpha | 1.25 | |
| `ContextualLogging` | `false` | Alpha | 1.24 | |
| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 |
| `CronJobTimeZone` | `true` | Beta | 1.25 | |
| `CrossNamespaceVolumeDataSource` | `false` | Alpha | 1.26 | |
| `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | |
| `CustomResourceValidationExpressions` | `false` | Alpha | 1.23 | 1.24 |
| `CustomResourceValidationExpressions` | `true` | Beta | 1.25 | |
| `DisableCloudProviders` | `false` | Alpha | 1.22 | |
| `DisableKubeletCloudCredentialProviders` | `false` | Alpha | 1.23 | |
| `DownwardAPIHugePages` | `false` | Alpha | 1.20 | 1.20 |
| `DownwardAPIHugePages` | `false` | Beta | 1.21 | 1.21 |
| `DownwardAPIHugePages` | `true` | Beta | 1.22 | |
| `DynamicResourceAllocation` | `false` | Alpha | 1.26 | |
| `EventedPLEG` | `false` | Alpha | 1.26 | - |
| `ElasticIndexedJob` | `true` | Beta | 1.27 | |
| `EventedPLEG` | `false` | Alpha | 1.26 | 1.26 |
| `EventedPLEG` | `false` | Beta | 1.27 | - |
| `ExpandedDNSConfig` | `false` | Alpha | 1.22 | 1.25 |
| `ExpandedDNSConfig` | `true` | Beta | 1.26 | |
| `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
| `GRPCContainerProbe` | `false` | Alpha | 1.23 | 1.23 |
| `GRPCContainerProbe` | `true` | Beta | 1.24 | |
| `GracefulNodeShutdown` | `false` | Alpha | 1.20 | 1.20 |
| `GracefulNodeShutdown` | `true` | Beta | 1.21 | |
| `GracefulNodeShutdownBasedOnPodPriority` | `false` | Alpha | 1.23 | 1.23 |
| `GracefulNodeShutdownBasedOnPodPriority` | `true` | Beta | 1.24 | |
| `HPAContainerMetrics` | `false` | Alpha | 1.20 | |
| `HPAContainerMetrics` | `false` | Alpha | 1.20 | 1.26 |
| `HPAContainerMetrics` | `true` | Beta | 1.27 | |
| `HPAScaleToZero` | `false` | Alpha | 1.16 | |
| `HonorPVReclaimPolicy` | `false` | Alpha | 1.23 | |
| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | |
| `IPTablesOwnershipCleanup` | `false` | Alpha | 1.25 | 1.26 |
| `IPTablesOwnershipCleanup` | `true` | Beta | 1.27 | |
| `InPlacePodVerticalScaling` | `false` | Alpha | 1.27 | |
| `InTreePluginAWSUnregister` | `false` | Alpha | 1.21 | |
| `InTreePluginAzureDiskUnregister` | `false` | Alpha | 1.21 | |
| `InTreePluginAzureFileUnregister` | `false` | Alpha | 1.21 | |

@@ -116,51 +123,61 @@ For a reference to old feature gates that are removed, please refer to
| `InTreePluginPortworxUnregister` | `false` | Alpha | 1.23 | |
| `InTreePluginRBDUnregister` | `false` | Alpha | 1.23 | |
| `InTreePluginvSphereUnregister` | `false` | Alpha | 1.21 | |
| `JobMutableNodeSchedulingDirectives` | `true` | Beta | 1.23 | |
| `JobPodFailurePolicy` | `false` | Alpha | 1.25 | 1.25 |
| `JobPodFailurePolicy` | `true` | Beta | 1.26 | |
| `JobReadyPods` | `false` | Alpha | 1.23 | 1.23 |
| `JobReadyPods` | `true` | Beta | 1.24 | |
| `KMSv2` | `false` | Alpha | 1.25 | |
| `KMSv2` | `false` | Alpha | 1.25 | 1.26 |
| `KMSv2` | `true` | Beta | 1.27 | |
| `KubeletInUserNamespace` | `false` | Alpha | 1.22 | |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `KubeletPodResourcesDynamicResources` | `false` | Alpha | 1.27 | |
| `KubeletPodResourcesGet` | `false` | Alpha | 1.27 | |
| `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | 1.22 |
| `KubeletPodResourcesGetAllocatable` | `true` | Beta | 1.23 | |
| `KubeletTracing` | `false` | Alpha | 1.25 | |
| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.25 | |
| `KubeletTracing` | `false` | Alpha | 1.25 | 1.26 |
| `KubeletTracing` | `true` | Beta | 1.27 | |
| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.26 | 1.26 |
| `LegacyServiceAccountTokenTracking` | `true` | Beta | 1.27 | |
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | - |
| `LogarithmicScaleDown` | `false` | Alpha | 1.21 | 1.21 |
| `LogarithmicScaleDown` | `true` | Beta | 1.22 | |
| `LoggingAlphaOptions` | `false` | Alpha | 1.24 | - |
| `LoggingBetaOptions` | `true` | Beta | 1.24 | - |
| `MatchLabelKeysInPodTopologySpread` | `false` | Alpha | 1.25 | |
| `MatchLabelKeysInPodTopologySpread` | `false` | Alpha | 1.25 | 1.26 |
| `MatchLabelKeysInPodTopologySpread` | `true` | Beta | 1.27 | - |
| `MaxUnavailableStatefulSet` | `false` | Alpha | 1.24 | |
| `MemoryManager` | `false` | Alpha | 1.21 | 1.21 |
| `MemoryManager` | `true` | Beta | 1.22 | |
| `MemoryQoS` | `false` | Alpha | 1.22 | |
| `MinDomainsInPodTopologySpread` | `false` | Alpha | 1.24 | 1.24 |
| `MinDomainsInPodTopologySpread` | `false` | Beta | 1.25 | |
| `MinimizeIPTablesRestore` | `false` | Alpha | 1.26 | - |
| `MinDomainsInPodTopologySpread` | `false` | Beta | 1.25 | 1.26 |
| `MinDomainsInPodTopologySpread` | `true` | Beta | 1.27 | |
| `MinimizeIPTablesRestore` | `false` | Alpha | 1.26 | 1.26 |
| `MinimizeIPTablesRestore` | `true` | Beta | 1.27 | |
| `MultiCIDRRangeAllocator` | `false` | Alpha | 1.25 | |
| `MultiCIDRServiceAllocator` | `false` | Alpha | 1.27 | |
| `NetworkPolicyStatus` | `false` | Alpha | 1.24 | |
| `NewVolumeManagerReconstruction` | `true` | Beta | 1.27 | |
| `NodeInclusionPolicyInPodTopologySpread` | `false` | Alpha | 1.25 | 1.25 |
| `NodeInclusionPolicyInPodTopologySpread` | `true` | Beta | 1.26 | |
| `NodeLogQuery` | `false` | Alpha | 1.27 | |
| `NodeOutOfServiceVolumeDetach` | `false` | Alpha | 1.24 | 1.25 |
| `NodeOutOfServiceVolumeDetach` | `true` | Beta | 1.26 | |
| `NodeSwap` | `false` | Alpha | 1.22 | |
| `OpenAPIEnums` | `false` | Alpha | 1.23 | 1.23 |
| `OpenAPIEnums` | `true` | Beta | 1.24 | |
| `OpenAPIV3` | `false` | Alpha | 1.23 | 1.23 |
| `OpenAPIV3` | `true` | Beta | 1.24 | |
| `PDBUnhealthyPodEvictionPolicy` | `false` | Alpha | 1.26 | |
| `PDBUnhealthyPodEvictionPolicy` | `false` | Alpha | 1.26 | 1.26 |
| `PDBUnhealthyPodEvictionPolicy` | `true` | Beta | 1.27 | |
| `PodAndContainerStatsFromCRI` | `false` | Alpha | 1.23 | |
| `PodDeletionCost` | `false` | Alpha | 1.21 | 1.21 |
| `PodDeletionCost` | `true` | Beta | 1.22 | |
| `PodDisruptionConditions` | `false` | Alpha | 1.25 | 1.25 |
| `PodDisruptionConditions` | `true` | Beta | 1.26 | |
| `PodHasNetworkCondition` | `false` | Alpha | 1.25 | |
| `PodSchedulingReadiness` | `false` | Alpha | 1.26 | |
| `PodSchedulingReadiness` | `false` | Alpha | 1.26 | 1.26 |
| `PodSchedulingReadiness` | `true` | Beta | 1.27 | |
| `ProbeTerminationGracePeriod` | `false` | Alpha | 1.21 | 1.21 |
| `ProbeTerminationGracePeriod` | `false` | Beta | 1.22 | 1.24 |
| `ProbeTerminationGracePeriod` | `true` | Beta | 1.25 | |

@@ -168,7 +185,8 @@ For a reference to old feature gates that are removed, please refer to
| `ProxyTerminatingEndpoints` | `false` | Alpha | 1.22 | 1.25 |
| `ProxyTerminatingEndpoints` | `true` | Beta | 1.26 | |
| `QOSReserved` | `false` | Alpha | 1.11 | |
| `ReadWriteOncePod` | `false` | Alpha | 1.22 | |
| `ReadWriteOncePod` | `false` | Alpha | 1.22 | 1.26 |
| `ReadWriteOncePod` | `true` | Beta | 1.27 | |
| `RecoverVolumeExpansionFailure` | `false` | Alpha | 1.23 | |
| `RemainingItemCount` | `false` | Alpha | 1.15 | 1.15 |
| `RemainingItemCount` | `true` | Beta | 1.16 | |

@@ -176,29 +194,30 @@ For a reference to old feature gates that are removed, please refer to
| `RetroactiveDefaultStorageClass` | `true` | Beta | 1.26 | |
| `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 |
| `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | |
| `SELinuxMountReadWriteOncePod` | `false` | Alpha | 1.25 | |
| `SeccompDefault` | `false` | Alpha | 1.22 | 1.24 |
| `SeccompDefault` | `true` | Beta | 1.25 | |
| `ServerSideFieldValidation` | `false` | Alpha | 1.23 | 1.24 |
| `ServerSideFieldValidation` | `true` | Beta | 1.25 | |
| `SELinuxMountReadWriteOncePod` | `false` | Alpha | 1.25 | 1.26 |
| `SELinuxMountReadWriteOncePod` | `true` | Beta | 1.27 | |
| `SecurityContextDeny` | `false` | Alpha | 1.27 | |
| `ServiceNodePortStaticSubrange` | `false` | Alpha | 1.27 | |
| `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | 1.21 |
| `SizeMemoryBackedVolumes` | `true` | Beta | 1.22 | |
| `StatefulSetAutoDeletePVC` | `false` | Alpha | 1.22 | |
| `StatefulSetStartOrdinal` | `false` | Alpha | 1.26 | |
| `StableLoadBalancerNodeGet` | `true` | Beta | 1.27 | |
| `StatefulSetAutoDeletePVC` | `false` | Alpha | 1.22 | 1.26 |
| `StatefulSetAutoDeletePVC` | `false` | Beta | 1.27 | |
| `StatefulSetStartOrdinal` | `false` | Alpha | 1.26 | 1.26 |
| `StatefulSetStartOrdinal` | `true` | Beta | 1.27 | |
| `StorageVersionAPI` | `false` | Alpha | 1.20 | |
| `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 |
| `StorageVersionHash` | `true` | Beta | 1.15 | |
| `TopologyAwareHints` | `false` | Alpha | 1.21 | 1.22 |
| `TopologyAwareHints` | `false` | Beta | 1.23 | 1.23 |
| `TopologyAwareHints` | `true` | Beta | 1.24 | |
| `TopologyManager` | `false` | Alpha | 1.16 | 1.17 |
| `TopologyManager` | `true` | Beta | 1.18 | |
| `TopologyManagerPolicyAlphaOptions` | `false` | Alpha | 1.26 | |
| `TopologyManagerPolicyBetaOptions` | `false` | Beta | 1.26 | |
| `TopologyManagerPolicyOptions` | `false` | Alpha | 1.26 | |
| `UserNamespacesStatelessPodsSupport` | `false` | Alpha | 1.25 | |
| `ValidatingAdmissionPolicy` | `false` | Alpha | 1.26 | |
| `VolumeCapacityPriority` | `false` | Alpha | 1.21 | - |
| `WatchList` | `false` | Alpha | 1.27 | |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |

@@ -217,20 +236,6 @@ For a reference to old feature gates that are removed, please refer to
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | 1.25 |
| `CPUManager` | `true` | GA | 1.26 | - |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `CSIInlineVolume` | `true` | Beta | 1.16 | 1.24 |
| `CSIInlineVolume` | `true` | GA | 1.25 | - |
| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
| `CSIMigration` | `true` | Beta | 1.17 | 1.24 |
| `CSIMigration` | `true` | GA | 1.25 | - |
| `CSIMigrationAWS` | `false` | Alpha | 1.14 | 1.16 |
| `CSIMigrationAWS` | `false` | Beta | 1.17 | 1.22 |
| `CSIMigrationAWS` | `true` | Beta | 1.23 | 1.24 |
| `CSIMigrationAWS` | `true` | GA | 1.25 | - |
| `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | 1.18 |
| `CSIMigrationAzureDisk` | `false` | Beta | 1.19 | 1.22 |
| `CSIMigrationAzureDisk` | `true` | Beta | 1.23 | 1.23 |
| `CSIMigrationAzureDisk` | `true` | GA | 1.24 | |
| `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | 1.20 |
| `CSIMigrationAzureFile` | `false` | Beta | 1.21 | 1.23 |
| `CSIMigrationAzureFile` | `true` | Beta | 1.24 | 1.25 |

@@ -247,12 +252,9 @@ For a reference to old feature gates that are removed, please refer to
| `CSIStorageCapacity` | `true` | Beta | 1.21 | 1.23 |
| `CSIStorageCapacity` | `true` | GA | 1.24 | - |
| `ConsistentHTTPGetHandlers` | `true` | GA | 1.25 | - |
| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 |
| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | 1.23 |
| `ControllerManagerLeaderMigration` | `true` | GA | 1.24 | - |
| `DaemonSetUpdateSurge` | `false` | Alpha | 1.21 | 1.21 |
| `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | 1.24 |
| `DaemonSetUpdateSurge` | `true` | GA | 1.25 | - |
| `CronJobTimeZone` | `false` | Alpha | 1.24 | 1.24 |
| `CronJobTimeZone` | `true` | Beta | 1.25 | 1.26 |
| `CronJobTimeZone` | `true` | GA | 1.27 | - |
| `DelegateFSGroupToCSIDriver` | `false` | Alpha | 1.22 | 1.22 |
| `DelegateFSGroupToCSIDriver` | `true` | Beta | 1.23 | 1.25 |
| `DelegateFSGroupToCSIDriver` | `true` | GA | 1.26 | - |

@@ -262,6 +264,10 @@ For a reference to old feature gates that are removed, please refer to
| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.24 |
| `DisableAcceleratorUsageMetrics` | `true` | GA | 1.25 | - |
| `DownwardAPIHugePages` | `false` | Alpha | 1.20 | 1.20 |
| `DownwardAPIHugePages` | `false` | Beta | 1.21 | 1.21 |
| `DownwardAPIHugePages` | `true` | Beta | 1.22 | 1.26 |
| `DownwardAPIHugePages` | `true` | GA | 1.27 | - |
| `DryRun` | `false` | Alpha | 1.12 | 1.12 |
| `DryRun` | `true` | Beta | 1.13 | 1.18 |
| `DryRun` | `true` | GA | 1.19 | - |

@@ -271,22 +277,12 @@ For a reference to old feature gates that are removed, please refer to
| `EndpointSliceTerminatingCondition` | `false` | Alpha | 1.20 | 1.21 |
| `EndpointSliceTerminatingCondition` | `true` | Beta | 1.22 | 1.25 |
| `EndpointSliceTerminatingCondition` | `true` | GA | 1.26 | |
| `EphemeralContainers` | `false` | Alpha | 1.16 | 1.22 |
| `EphemeralContainers` | `true` | Beta | 1.23 | 1.24 |
| `EphemeralContainers` | `true` | GA | 1.25 | - |
| `ExecProbeTimeout` | `true` | GA | 1.20 | - |
| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
| `ExpandCSIVolumes` | `true` | Beta | 1.16 | 1.23 |
| `ExpandCSIVolumes` | `true` | GA | 1.24 | - |
| `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.14 |
| `ExpandInUsePersistentVolumes` | `true` | Beta | 1.15 | 1.23 |
| `ExpandInUsePersistentVolumes` | `true` | GA | 1.24 | - |
| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 |
| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | 1.23 |
| `ExpandPersistentVolumes` | `true` | GA | 1.24 | - |
| `IdentifyPodOS` | `false` | Alpha | 1.23 | 1.23 |
| `IdentifyPodOS` | `true` | Beta | 1.24 | 1.24 |
| `IdentifyPodOS` | `true` | GA | 1.25 | - |
| `GRPCContainerProbe` | `false` | Alpha | 1.23 | 1.23 |
| `GRPCContainerProbe` | `true` | Beta | 1.24 | 1.26 |
| `GRPCContainerProbe` | `true` | GA | 1.27 | |
| `JobMutableNodeSchedulingDirectives` | `true` | Beta | 1.23 | 1.26 |
| `JobMutableNodeSchedulingDirectives` | `true` | GA | 1.27 | |
| `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 |
| `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 |
| `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | 1.25 |

@@ -296,33 +292,36 @@ For a reference to old feature gates that are removed, please refer to
| `KubeletCredentialProviders` | `true` | GA | 1.26 | - |
| `LegacyServiceAccountTokenNoAutoGeneration` | `true` | Beta | 1.24 | 1.25 |
| `LegacyServiceAccountTokenNoAutoGeneration` | `true` | GA | 1.26 | - |
| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | 1.24 |
| `LocalStorageCapacityIsolation` | `true` | GA | 1.25 | - |
| `MixedProtocolLBService` | `false` | Alpha | 1.20 | 1.23 |
| `MixedProtocolLBService` | `true` | Beta | 1.24 | 1.25 |
| `MixedProtocolLBService` | `true` | GA | 1.26 | - |
| `NetworkPolicyEndPort` | `false` | Alpha | 1.21 | 1.21 |
| `NetworkPolicyEndPort` | `true` | Beta | 1.22 | 1.24 |
| `NetworkPolicyEndPort` | `true` | GA | 1.25 | - |
| `OpenAPIV3` | `false` | Alpha | 1.23 | 1.23 |
| `OpenAPIV3` | `true` | Beta | 1.24 | 1.26 |
| `OpenAPIV3` | `true` | GA | 1.27 | - |
| `PodSecurity` | `false` | Alpha | 1.22 | 1.22 |
| `PodSecurity` | `true` | Beta | 1.23 | 1.24 |
| `PodSecurity` | `true` | GA | 1.25 | |
| `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 |
| `RemoveSelfLink` | `true` | Beta | 1.20 | 1.23 |
| `RemoveSelfLink` | `true` | GA | 1.24 | - |
| `SeccompDefault` | `false` | Alpha | 1.22 | 1.24 |
| `SeccompDefault` | `true` | Beta | 1.25 | 1.26 |
| `SeccompDefault` | `true` | GA | 1.27 | - |
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `ServerSideApply` | `true` | Beta | 1.16 | 1.21 |
| `ServerSideApply` | `true` | GA | 1.22 | - |
| `ServerSideFieldValidation` | `false` | Alpha | 1.23 | 1.24 |
| `ServerSideFieldValidation` | `true` | Beta | 1.25 | 1.26 |
| `ServerSideFieldValidation` | `true` | GA | 1.27 | - |
| `ServiceIPStaticSubrange` | `false` | Alpha | 1.24 | 1.24 |
| `ServiceIPStaticSubrange` | `true` | Beta | 1.25 | 1.25 |
| `ServiceIPStaticSubrange` | `true` | GA | 1.26 | - |
| `ServiceInternalTrafficPolicy` | `false` | Alpha | 1.21 | 1.21 |
| `ServiceInternalTrafficPolicy` | `true` | Beta | 1.22 | 1.25 |
| `ServiceInternalTrafficPolicy` | `true` | GA | 1.26 | - |
| `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 |
| `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | 1.24 |
| `StatefulSetMinReadySeconds` | `true` | GA | 1.25 | - |
| `TopologyManager` | `false` | Alpha | 1.16 | 1.17 |
| `TopologyManager` | `true` | Beta | 1.18 | 1.26 |
| `TopologyManager` | `true` | GA | 1.27 | - |
| `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 |
| `WatchBookmark` | `true` | Beta | 1.16 | 1.16 |
| `WatchBookmark` | `true` | GA | 1.17 | - |

@@ -374,6 +373,8 @@ A *General Availability* (GA) feature is also referred to as a *stable* feature.

Each feature gate is designed for enabling/disabling a specific feature:
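Feature gates such as the ones below can be toggled per component, for example through the kubelet's configuration file; the specific gates chosen here are only examples:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true   # example: enable a beta gate explicitly
  NodeSwap: false              # example: keep an alpha gate disabled
```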
- `AdmissionWebhookMatchConditions`: Enable [match conditions](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchconditions)
  on mutating & validating admission webhooks.
- `APIListChunking`: Enable the API clients to retrieve (`LIST` or `GET`)
  resources from API server in chunks.
- `APIPriorityAndFairness`: Enable managing request concurrency with

@@ -393,14 +394,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
  {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `AppArmor`: Enable use of AppArmor mandatory access control for Pods running on Linux nodes.
  See [AppArmor Tutorial](/docs/tutorials/security/apparmor/) for more details.
- `ContainerCheckpoint`: Enables the kubelet `checkpoint` API.
  See [Kubelet Checkpoint API](/docs/reference/node/kubelet-checkpoint-api/) for more details.
- `ControllerManagerLeaderMigration`: Enables Leader Migration for
  [kube-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#initial-leader-migration-configuration) and
  [cloud-controller-manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/#deploy-cloud-controller-manager)
  which allows a cluster operator to live migrate
  controllers from the kube-controller-manager into an external controller-manager
  (e.g. the cloud-controller-manager) in an HA cluster without downtime.
- `CPUManager`: Enable container level CPU affinity support, see
  [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CPUManagerPolicyAlphaOptions`: This allows fine-tuning of CPUManager policies,

@@ -412,22 +405,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
This feature gate guards *a group* of CPUManager options whose quality level is beta.
|
||||
This feature gate will never graduate to stable.
|
||||
- `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies.
|
||||
- `CSIInlineVolume`: Enable CSI Inline volumes support for pods.
|
||||
- `CSIMigration`: Enables shims and translation logic to route volume
|
||||
operations from in-tree plugins to corresponding pre-installed CSI plugins
|
||||
- `CSIMigrationAWS`: Enables shims and translation logic to route volume
|
||||
operations from the AWS-EBS in-tree plugin to EBS CSI plugin. Supports
|
||||
falling back to in-tree EBS plugin for mount operations to nodes that have
|
||||
the feature disabled or that do not have EBS CSI plugin installed and
|
||||
configured. Does not support falling back for provision operations, for those
|
||||
the CSI plugin must be installed and configured.
|
||||
- `CSIMigrationAzureDisk`: Enables shims and translation logic to route volume
|
||||
operations from the Azure-Disk in-tree plugin to AzureDisk CSI plugin.
|
||||
Supports falling back to in-tree AzureDisk plugin for mount operations to
|
||||
nodes that have the feature disabled or that do not have AzureDisk CSI plugin
|
||||
installed and configured. Does not support falling back for provision
|
||||
operations, for those the CSI plugin must be installed and configured.
|
||||
Requires CSIMigration feature flag enabled.
|
||||
- `CSIMigrationAzureFile`: Enables shims and translation logic to route volume
|
||||
operations from the Azure-File in-tree plugin to AzureFile CSI plugin.
|
||||
Supports falling back to in-tree AzureFile plugin for mount operations to
|
||||
|
|
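To illustrate the CSI migration entries above: once migration is active, volumes keep using existing StorageClasses, but new provisioning is expected to target the CSI driver directly. A hedged sketch (the class name is an assumption; `ebs.csi.aws.com` is the EBS CSI driver that migrated AWS volumes route to):

```yaml
# Illustrative StorageClass backed by the EBS CSI driver, the target
# of CSIMigrationAWS; not a definitive recommendation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                    # hypothetical name
provisioner: ebs.csi.aws.com      # CSI driver serving migrated in-tree volumes
volumeBindingMode: WaitForFirstConsumer
```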
@@ -465,15 +442,20 @@ Each feature gate is designed for enabling/disabling a specific feature:
  [Storage Capacity](/docs/concepts/storage/storage-capacity/).
  Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details.
- `CSIVolumeHealth`: Enable support for CSI volume health monitoring on node.
- `CloudControllerManagerWebhook`: Enable webhooks in cloud controller manager.
- `CloudDualStackNodeIPs`: Enables dual-stack `kubelet --node-ip` with external cloud providers.
  See [Configure IPv4/IPv6 dual-stack](/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)
  for more details.
- `ClusterTrustBundle`: Enable ClusterTrustBundle objects and kubelet integration.
- `ComponentSLIs`: Enable the `/metrics/slis` endpoint on Kubernetes components like
  kubelet, kube-scheduler, kube-proxy, kube-controller-manager, cloud-controller-manager
  allowing you to scrape health check metrics.
- `ConsistentHTTPGetHandlers`: Normalize HTTP get URL and Header passing for lifecycle
  handlers with probers.
- `ContainerCheckpoint`: Enables the kubelet `checkpoint` API.
  See [Kubelet Checkpoint API](/docs/reference/node/kubelet-checkpoint-api/) for more details.
- `ContextualLogging`: When you enable this feature gate, Kubernetes components that support
  contextual logging add extra detail to log output.
- `ControllerManagerLeaderMigration`: Enables leader migration for
  `kube-controller-manager` and `cloud-controller-manager`.
- `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/)
- `CrossNamespaceVolumeDataSource`: Enable the usage of cross namespace volume data source
  to allow you to specify a source namespace in the `dataSourceRef` field of a
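The `CronJobTimeZone` entry above can be made concrete with a short sketch; the name and schedule are illustrative assumptions:

```yaml
# Hedged example of a CronJob using the optional timeZone field.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report          # hypothetical name
spec:
  schedule: "0 2 * * *"         # 02:00 in the configured time zone
  timeZone: "Europe/Berlin"     # honored only while CronJobTimeZone is enabled
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox:1.36
            command: ["date"]
```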
@@ -483,9 +465,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `CustomResourceValidationExpressions`: Enable expression language validation in CRD
  which will validate custom resources based on validation rules written in
  the `x-kubernetes-validations` extension.
- `DaemonSetUpdateSurge`: Enables the DaemonSet workloads to maintain
  availability during update per node.
  See [Perform a Rolling Update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/).
- `DelegateFSGroupToCSIDriver`: If supported by the CSI driver, delegates the
  role of applying `fsGroup` from a Pod's `securityContext` to the driver by
  passing `fsGroup` through the NodeStageVolume and NodePublishVolume CSI calls.
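For the `DelegateFSGroupToCSIDriver` entry above, the `fsGroup` in question comes from an ordinary Pod security context; a hedged sketch (Pod and PVC names are assumptions):

```yaml
# Illustrative Pod whose fsGroup may be delegated to a supporting CSI driver.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo            # hypothetical name
spec:
  securityContext:
    fsGroup: 2000               # passed via NodeStageVolume/NodePublishVolume
                                # when the driver supports delegation
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim       # hypothetical PVC
```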
@@ -502,15 +481,16 @@ Each feature gate is designed for enabling/disabling a specific feature:
  [downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information).
- `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests
  so that validation, merging, and mutation can be tested without committing.
- `DynamicResourceAllocation": Enables support for resources with custom parameters and a lifecycle
- `DynamicResourceAllocation`: Enables support for resources with custom parameters and a lifecycle
  that is independent of a Pod.
- `ElasticIndexedJob`: Enables Indexed Jobs to be scaled up or down by mutating both
  `spec.completions` and `spec.parallelism` together such that `spec.completions == spec.parallelism`.
  See docs on [elastic Indexed Jobs](/docs/concepts/workloads/controllers/job#elastic-indexed-jobs)
  for more details.
- `EndpointSliceTerminatingCondition`: Enables EndpointSlice `terminating` and `serving`
  condition fields.
- `EfficientWatchResumption`: Allows for storage-originated bookmark (progress
  notify) events to be delivered to the users. This is only applied to watch operations.
- `EphemeralContainers`: Enable the ability to add
  {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
  to running pods.
- `EventedPLEG`: Enable support for the kubelet to receive container life cycle events from the
  {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}} via
  an extension to {{<glossary_tooltip term_id="cri" text="CRI">}}.
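The `ElasticIndexedJob` entry above operates on a Job in `Indexed` completion mode; a hedged sketch (name and image are illustrative):

```yaml
# Indexed Job that, with ElasticIndexedJob enabled, can be rescaled by
# patching completions and parallelism to the same new value.
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo            # hypothetical name
spec:
  completionMode: Indexed
  completions: 3                # keep equal to parallelism when rescaling
  parallelism: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo index $JOB_COMPLETION_INDEX"]
```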
@@ -523,15 +503,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
  This feature gate exists in case any of your existing workloads depend on a
  now-corrected fault where Kubernetes ignored exec probe timeouts. See
  [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
- `ExpandCSIVolumes`: Enable the expanding of CSI volumes.
- `ExpandedDNSConfig`: Enable kubelet and kube-apiserver to allow more DNS
  search paths and a longer list of DNS search paths. This feature requires container
  runtime support (containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
  [Expanded DNS Configuration](/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration).
- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See
  [Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).
- `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See
  [Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).
- `ExperimentalHostUserNamespaceDefaulting`: Enabling the defaulting user
  namespace to host. This is for containers that are using other host namespaces,
  host mounts, or containers that are privileged or using specific non-namespaced
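As a sketch of what `ExpandedDNSConfig` permits, here is a Pod whose `dnsConfig` lists more search paths than the legacy limit of six (all names are illustrative assumptions):

```yaml
# Hedged example: a search list this long requires ExpandedDNSConfig
# plus runtime support.
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo                # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  dnsConfig:
    searches:
    - team-a.example.com
    - team-b.example.com
    - team-c.example.com
    - team-d.example.com
    - team-e.example.com
    - team-f.example.com
    - team-g.example.com        # seventh entry exceeds the legacy limit
```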
@@ -555,10 +530,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler`
  resources when using custom or external metrics.
- `IPTablesOwnershipCleanup`: This causes kubelet to no longer create legacy iptables rules.
- `IdentifyPodOS`: Allows the Pod OS field to be specified. This helps in identifying
  the OS of the pod authoritatively during the API server admission time.
  In Kubernetes {{< skew currentVersion >}}, the allowed values for the `pod.spec.os.name`
  are `windows` and `linux`.
- `InPlacePodVerticalScaling`: Enables in-place Pod vertical scaling.
- `InTreePluginAWSUnregister`: Stops registering the aws-ebs in-tree plugin in kubelet
  and volume controllers.
- `InTreePluginAzureDiskUnregister`: Stops registering the azuredisk in-tree plugin in kubelet
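The `IdentifyPodOS` entry above refers to the `pod.spec.os.name` field; a hedged sketch (the Pod name and image are assumptions):

```yaml
# Illustrative Pod declaring its OS authoritatively via spec.os.name.
apiVersion: v1
kind: Pod
metadata:
  name: windows-demo            # hypothetical name
spec:
  os:
    name: windows               # allowed values: windows, linux
  containers:
  - name: app
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022
```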
@@ -597,9 +569,14 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
  [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
  for more details.
- `KubeletPodResourcesGet`: Enable the `Get` gRPC endpoint on the kubelet for Pod resources.
  This API augments the [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources).
- `KubeletPodResourcesGetAllocatable`: Enable the kubelet's pod resources
  `GetAllocatableResources` functionality. This API augments the
  [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)
- `KubeletPodResourcesDynamicResources`: Extend the kubelet's pod resources gRPC endpoint
  to include resources allocated in `ResourceClaims` via `DynamicResourceAllocation` API.
  See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources) for more details.
  with information about the allocatable resources, enabling clients to properly
  track the free compute resources on a node.
- `KubeletTracing`: Add support for distributed tracing in the kubelet.
@@ -610,10 +587,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
  [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens).
- `LegacyServiceAccountTokenTracking`: Track usage of Secret-based
  [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens).
- `LocalStorageCapacityIsolation`: Enable the consumption of
  [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
  and also the `sizeLimit` property of an
  [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation`
  is enabled for
  [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/)
@@ -642,12 +615,25 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
  Service instance.
- `MultiCIDRRangeAllocator`: Enables the MultiCIDR range allocator.
- `NetworkPolicyEndPort`: Enable use of the field `endPort` in NetworkPolicy objects,
  allowing the selection of a port range instead of a single port.
- `MultiCIDRServiceAllocator`: Track IP address allocations for Service cluster IPs using IPAddress objects.
- `NetworkPolicyStatus`: Enable the `status` subresource for NetworkPolicy objects.
- `NewVolumeManagerReconstruction`: Enable improved discovery of mounted volumes during kubelet
  startup.
  <!-- remove next 2 paragraphs when feature graduates to GA -->
  Before Kubernetes v1.25, the kubelet used different default behavior for discovering mounted
  volumes during the kubelet startup. If you disable this feature gate (it's enabled by default), you select
  the legacy discovery behavior.

  In Kubernetes v1.25 and v1.26, this behavior toggle was part of the `SELinuxMountReadWriteOncePod`
  feature gate.
- `NewVolumeManagerReconstruction`: Enables improved discovery of mounted volumes during kubelet
  startup. Since this code has been significantly refactored, we allow opting out in case the kubelet
  gets stuck at startup or is not unmounting volumes from terminated Pods. Note that this
  refactoring was behind the `SELinuxMountReadWriteOncePod` alpha feature gate in Kubernetes 1.25.
- `NodeInclusionPolicyInPodTopologySpread`: Enable using `nodeAffinityPolicy` and `nodeTaintsPolicy` in
  [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
  when calculating pod topology spread skew.
- `NodeLogQuery`: Enables querying logs of node services using the `/logs` endpoint.
- `NodeOutOfServiceVolumeDetach`: When a Node is marked out-of-service using the
  `node.kubernetes.io/out-of-service` taint, Pods on the node will be forcefully deleted
  if they can not tolerate this taint, and the volume detach operations for Pods terminating
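The out-of-service taint mentioned above is applied to a Node object; a hedged sketch of what the relevant part of such a Node looks like (node name and taint value are assumptions — the key and effect are what matter):

```yaml
# Illustrative Node carrying the taint that triggers
# NodeOutOfServiceVolumeDetach behavior.
apiVersion: v1
kind: Node
metadata:
  name: node-1                  # hypothetical node name
spec:
  taints:
  - key: node.kubernetes.io/out-of-service
    value: nodeshutdown         # arbitrary value
    effect: NoExecute
```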
@@ -703,6 +689,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `SeccompDefault`: Enables the use of `RuntimeDefault` as the default seccomp profile
  for all workloads.
  The seccomp profile is specified in the `securityContext` of a Pod and/or a Container.
- `SecurityContextDeny`: This gate signals that the `SecurityContextDeny` admission controller is deprecated.
- `ServerSideApply`: Enables the [Server Side Apply (SSA)](/docs/reference/using-api/server-side-apply/)
  feature on the API Server.
- `ServerSideFieldValidation`: Enables server-side field validation. This means the validation
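The profile that `SeccompDefault` applies implicitly can also be set explicitly in a Pod's security context; a hedged sketch (Pod name and image are assumptions):

```yaml
# Illustrative Pod explicitly requesting the RuntimeDefault seccomp
# profile that SeccompDefault would otherwise apply by default.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo            # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
```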
@@ -717,8 +704,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
  for more details.
- `SizeMemoryBackedVolumes`: Enable kubelets to determine the size limit for
  memory-backed volumes (mainly `emptyDir` volumes).
- `StatefulSetMinReadySeconds`: Allows `minReadySeconds` to be respected by
  the StatefulSet controller.
- `StableLoadBalancerNodeSet`: Enables fewer load balancer re-configurations by
  the service controller (KCCM) as an effect of changing node state.
- `StatefulSetStartOrdinal`: Allow configuration of the start ordinal in a
  StatefulSet. See
  [Start ordinal](/docs/concepts/workloads/controllers/statefulset/#start-ordinal)
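The start ordinal described above is set through `spec.ordinals.start`; a hedged sketch (names and image are illustrative assumptions):

```yaml
# Illustrative StatefulSet starting its Pod ordinals at 3, so the
# replicas are named web-3 and web-4; requires StatefulSetStartOrdinal.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                     # hypothetical name
spec:
  serviceName: web
  replicas: 2
  ordinals:
    start: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: app
        image: registry.k8s.io/nginx-slim:0.8
```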
@@ -748,6 +735,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `VolumeCapacityPriority`: Enable support for prioritizing nodes in different
  topologies based on available PV capacity.
- `WatchBookmark`: Enable support for watch bookmark events.
- `WatchList`: Enable support for [streaming initial state of objects in watch requests](/docs/reference/using-api/api-concepts/#streaming-lists).
- `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows.
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
- `WindowsHostNetwork`: Enables support for joining Windows containers to a host's network namespace.
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -381,7 +381,7 @@ CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br/>
CPUManagerPolicyOptions=true|false (BETA - default=true)<br/>
CSIMigrationPortworx=true|false (BETA - default=false)<br/>
CSIMigrationRBD=true|false (ALPHA - default=false)<br/>
CSINodeExpandSecret=true|false (ALPHA - default=false)<br/>
CSINodeExpandSecret=true|false (BETA - default=true)<br/>
CSIVolumeHealth=true|false (ALPHA - default=false)<br/>
ComponentSLIs=true|false (ALPHA - default=false)<br/>
ContainerCheckpoint=true|false (ALPHA - default=false)<br/>
@@ -72,14 +72,14 @@ It is suitable for correlating log entries between the webhook and apiserver, fo
</td>
</tr>
<tr><td><code>kind</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#groupversionkind-v1-meta"><code>meta/v1.GroupVersionKind</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionkind-v1-meta"><code>meta/v1.GroupVersionKind</code></a>
</td>
<td>
<p>Kind is the fully-qualified type of object being submitted (for example, v1.Pod or autoscaling.v1.Scale)</p>
</td>
</tr>
<tr><td><code>resource</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#groupversionresource-v1-meta"><code>meta/v1.GroupVersionResource</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionresource-v1-meta"><code>meta/v1.GroupVersionResource</code></a>
</td>
<td>
<p>Resource is the fully-qualified resource being requested (for example, v1.pods)</p>
@@ -93,7 +93,7 @@ It is suitable for correlating log entries between the webhook and apiserver, fo
</td>
</tr>
<tr><td><code>requestKind</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#groupversionkind-v1-meta"><code>meta/v1.GroupVersionKind</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionkind-v1-meta"><code>meta/v1.GroupVersionKind</code></a>
</td>
<td>
<p>RequestKind is the fully-qualified type of the original API request (for example, v1.Pod or autoscaling.v1.Scale).
@@ -107,7 +107,7 @@ and <code>requestKind: {group:"apps", version:"v1beta1", kin
</td>
</tr>
<tr><td><code>requestResource</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#groupversionresource-v1-meta"><code>meta/v1.GroupVersionResource</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#groupversionresource-v1-meta"><code>meta/v1.GroupVersionResource</code></a>
</td>
<td>
<p>RequestResource is the fully-qualified resource of the original API request (for example, v1.pods).
@@ -153,7 +153,7 @@ requested. e.g. a patch can result in either a CREATE or UPDATE Operation.</p>
</td>
</tr>
<tr><td><code>userInfo</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
<p>UserInfo is information about the requesting user</p>
@@ -227,7 +227,7 @@ This must be copied over from the corresponding AdmissionRequest.</p>
</td>
</tr>
<tr><td><code>status</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#status-v1-meta"><code>meta/v1.Status</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#status-v1-meta"><code>meta/v1.Status</code></a>
</td>
<td>
<p>Result contains extra details into why an admission request was denied.
@@ -72,14 +72,14 @@ For non-resource requests, this is the lower-cased HTTP method.</p>
</td>
</tr>
<tr><td><code>user</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
<p>Authenticated user information.</p>
</td>
</tr>
<tr><td><code>impersonatedUser</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
<p>Impersonated user information.</p>
@@ -117,7 +117,7 @@ Does not apply for List-type requests, or non-resource requests.</p>
</td>
</tr>
<tr><td><code>responseStatus</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#status-v1-meta"><code>meta/v1.Status</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#status-v1-meta"><code>meta/v1.Status</code></a>
</td>
<td>
<p>The response status, populated even when the ResponseObject is not a Status type.
@@ -145,14 +145,14 @@ at Response Level.</p>
</td>
</tr>
<tr><td><code>requestReceivedTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
<p>Time the request reached the apiserver.</p>
</td>
</tr>
<tr><td><code>stageTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
<p>Time the request reached current audit stage.</p>
@@ -189,7 +189,7 @@ should be short. Annotations are included in the Metadata level.</p>

<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span></td>
@@ -224,7 +224,7 @@ categories are logged.</p>

<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
</td>
<td>
<p>ObjectMeta is included for interoperability with API infrastructure.</p>
@@ -279,7 +279,7 @@ in a rule will override the global default.</p>

<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span></td>
@@ -81,23 +81,11 @@ auto_generated: true
<tr><td><code>kind</code><br/>string</td><td><code>TracingConfiguration</code></td></tr>

<tr><td><code>endpoint</code><br/>
<code>string</code>
<tr><td><code>TracingConfiguration</code> <B>[Required]</B><br/>
<a href="#TracingConfiguration"><code>TracingConfiguration</code></a>
</td>
<td>
<p>Endpoint of the collector that's running on the control-plane node.
The APIServer uses the egressType ControlPlane when sending data to the collector.
The syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md.
Defaults to the otlpgrpc default, localhost:4317
The connection is insecure, and does not support TLS.</p>
</td>
</tr>
<tr><td><code>samplingRatePerMillion</code><br/>
<code>int32</code>
</td>
<td>
<p>SamplingRatePerMillion is the number of samples to collect per million spans.
Defaults to 0.</p>
<td>(Members of <code>TracingConfiguration</code> are embedded into this type.)
<p>Embed the component config tracing configuration struct</p>
</td>
</tr>
</tbody>
@@ -372,4 +360,45 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server
</tr>
</tbody>
</table>

## `TracingConfiguration` {#TracingConfiguration}

**Appears in:**

- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)

- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)

<p>TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>endpoint</code><br/>
<code>string</code>
</td>
<td>
<p>Endpoint of the collector this component will report traces to.
The connection is insecure, and does not currently support TLS.
Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.</p>
</td>
</tr>
<tr><td><code>samplingRatePerMillion</code><br/>
<code>int32</code>
</td>
<td>
<p>SamplingRatePerMillion is the number of samples to collect per million spans.
Recommended is unset. If unset, sampler respects its parent span's sampling
rate, but otherwise never samples.</p>
</td>
</tr>
</tbody>
</table>
@@ -11,6 +11,7 @@ auto_generated: true
- [EgressSelectorConfiguration](#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration)
- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration)
@@ -39,6 +40,31 @@ auto_generated: true
</tbody>
</table>

## `TracingConfiguration` {#apiserver-k8s-io-v1beta1-TracingConfiguration}

<p>TracingConfiguration provides versioned configuration for tracing clients.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>

<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.k8s.io/v1beta1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>TracingConfiguration</code></td></tr>

<tr><td><code>TracingConfiguration</code> <B>[Required]</B><br/>
<a href="#TracingConfiguration"><code>TracingConfiguration</code></a>
</td>
<td>(Members of <code>TracingConfiguration</code> are embedded into this type.)
<p>Embed the component config tracing configuration struct</p>
</td>
</tr>
</tbody>
</table>

## `Connection` {#apiserver-k8s-io-v1beta1-Connection}
|
@ -265,4 +291,47 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## `TracingConfiguration` {#TracingConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
|
||||
- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)
|
||||
|
||||
- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration)
|
||||
|
||||
|
||||
<p>TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>endpoint</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Endpoint of the collector this component will report traces to.
|
||||
The connection is insecure, and does not currently support TLS.
|
||||
Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>samplingRatePerMillion</code><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>SamplingRatePerMillion is the number of samples to collect per million spans.
|
||||
Recommended is unset. If unset, sampler respects its parent span's sampling
|
||||
rate, but otherwise never samples.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
|
@@ -18,7 +18,45 @@ auto_generated: true
<p>EncryptionConfiguration stores the complete configuration for encryption providers.</p>
<p>EncryptionConfiguration stores the complete configuration for encryption providers.
It also allows the use of wildcards to specify the resources that should be encrypted.
Use '*.&lt;group&gt;' to encrypt all resources within a group or '*.*' to encrypt all resources.
'*.' can be used to encrypt all resources in the core group. '*.*' will encrypt all
resources, even custom resources that are added after API server start.
Use of wildcards that overlap within the same resource list or across multiple
entries are not allowed since part of the configuration would be ineffective.
Resource lists are processed in order, with earlier lists taking precedence.</p>
<p>Example:</p>
<pre><code>kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
- resources:
  - events
  providers:
  - identity: {} # do not encrypt events even though *.* is specified below
- resources:
  - secrets
  - configmaps
  - pandas.awesome.bears.example
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: c2VjcmV0IGlzIHNlY3VyZQ==
- resources:
  - '*.apps'
  providers:
  - aescbc:
      keys:
      - name: key2
        secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ/Cg==
- resources:
  - '*.*'
  providers:
  - aescbc:
      keys:
      - name: key3
        secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==</code></pre>

<table class="table">
@@ -114,7 +152,7 @@ Each key has to be 32 bytes long for AES-CBC and 16, 24 or 32 bytes for AES-GCM.
</td>
<td>
<p>cachesize is the maximum number of secrets which are cached in memory. The default value is 1000.
Set to a negative value to disable caching.</p>
Set to a negative value to disable caching. This field is only allowed for KMS v1 providers.</p>
</td>
</tr>
<tr><td><code>endpoint</code> <B>[Required]</B><br/>
@@ -243,7 +281,11 @@ Set to a negative value to disable caching.</p>
<code>[]string</code>
</td>
<td>
<p>resources is a list of kubernetes resources which have to be encrypted.</p>
<p>resources is a list of kubernetes resources which have to be encrypted. The resource names are derived from <code>resource</code> or <code>resource.group</code> of the group/version/resource.
eg: pandas.awesome.bears.example is a custom resource with 'group': awesome.bears.example, 'resource': pandas.
Use '*.*' to encrypt all resources and '*.&lt;group&gt;' to encrypt all resources in a specific group.
eg: '*.awesome.bears.example' will encrypt all resources in the group 'awesome.bears.example'.
eg: '*.' will encrypt all resources in the core group (such as pods, configmaps, etc).</p>
</td>
</tr>
<tr><td><code>providers</code> <B>[Required]</B><br/>
@@ -251,7 +293,7 @@ Set to a negative value to disable caching.</p>
</td>
<td>
<p>providers is a list of transformers to be used for reading and writing the resources to disk.
eg: aesgcm, aescbc, secretbox, identity.</p>
eg: aesgcm, aescbc, secretbox, identity, kms.</p>
</td>
</tr>
</tbody>
@@ -206,7 +206,7 @@ itself should at least be protected via file permissions.</p>

<tr><td><code>expirationTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#time-v1-meta"><code>meta/v1.Time</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#time-v1-meta"><code>meta/v1.Time</code></a>
</td>
<td>
<p>ExpirationTimestamp indicates a time when the provided credentials expire.</p>
@ -206,7 +206,7 @@ itself should at least be protected via file permissions.</p>
|
|||
|
||||
|
||||
<tr><td><code>expirationTimestamp</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#time-v1-meta"><code>meta/v1.Time</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#time-v1-meta"><code>meta/v1.Time</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>ExpirationTimestamp indicates a time when the provided credentials expire.</p>
|
||||
|
|
|
|||
|
|
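The `expirationTimestamp` semantics in the hunks above (a time when the provided credentials expire) amount to a simple clock comparison. A minimal sketch, assuming a hypothetical helper name and an optional clock-skew margin that is not part of the API:

```python
from datetime import datetime, timedelta, timezone


def credentials_expired(expiration_timestamp, now=None, skew=timedelta(0)):
    # Per the ExecCredential field docs, expirationTimestamp indicates when
    # the provided credentials expire; a client should obtain fresh
    # credentials at or after that instant. The skew margin is an
    # assumption of this sketch, not part of the API.
    now = now or datetime.now(timezone.utc)
    return now + skew >= expiration_timestamp
```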
@ -29,7 +29,7 @@ auto_generated: true
|
|||
|
||||
|
||||
<tr><td><code>metadata</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Standard object's metadata.
|
||||
|
|
|
|||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: kube-controller-manager Configuration (v1alpha1)
|
||||
content_type: tool-reference
|
||||
package: controllermanager.config.k8s.io/v1alpha1
|
||||
package: cloudcontrollermanager.config.k8s.io/v1alpha1
|
||||
auto_generated: true
|
||||
---
|
||||
|
||||
|
|
@ -9,11 +9,358 @@ auto_generated: true
|
|||
## Resource Types
|
||||
|
||||
|
||||
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
|
||||
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
|
||||
- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration)
|
||||
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
|
||||
|
||||
|
||||
|
||||
## `NodeControllerConfiguration` {#NodeControllerConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>NodeControllerConfiguration contains elements describing NodeController.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>ConcurrentNodeSyncs</code> <B>[Required]</B><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>ConcurrentNodeSyncs is the number of workers
|
||||
concurrently synchronizing nodes</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `ServiceControllerConfiguration` {#ServiceControllerConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
|
||||
|
||||
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>ServiceControllerConfiguration contains elements describing ServiceController.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>ConcurrentServiceSyncs</code> <B>[Required]</B><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>concurrentServiceSyncs is the number of services that are
|
||||
allowed to sync concurrently. Larger number = more responsive service
|
||||
management, but more CPU (and network) load.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration}
|
||||
|
||||
|
||||
|
||||
<p>CloudControllerManagerConfiguration contains elements describing cloud-controller manager.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>cloudcontrollermanager.config.k8s.io/v1alpha1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>CloudControllerManagerConfiguration</code></td></tr>
|
||||
|
||||
|
||||
<tr><td><code>Generic</code> <B>[Required]</B><br/>
|
||||
<a href="#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration"><code>GenericControllerManagerConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Generic holds configuration for a generic controller-manager</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>KubeCloudShared</code> <B>[Required]</B><br/>
|
||||
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration"><code>KubeCloudSharedConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>KubeCloudSharedConfiguration holds configuration for features shared
|
||||
by both the cloud controller manager and the kube-controller manager.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeController</code> <B>[Required]</B><br/>
|
||||
<a href="#NodeControllerConfiguration"><code>NodeControllerConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>NodeController holds configuration for node controller
|
||||
related features.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ServiceController</code> <B>[Required]</B><br/>
|
||||
<a href="#ServiceControllerConfiguration"><code>ServiceControllerConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>ServiceControllerConfiguration holds configuration for ServiceController
|
||||
related features.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeStatusUpdateFrequency</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>Webhook</code> <B>[Required]</B><br/>
|
||||
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration"><code>WebhookConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Webhook is the configuration for cloud-controller-manager hosted webhooks</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration)
|
||||
|
||||
|
||||
<p>CloudProviderConfiguration contains basic elements describing the cloud provider.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>Name</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Name is the provider for cloud services.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>CloudConfigFile</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>cloudConfigFile is the path to the cloud provider configuration file.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
|
||||
|
||||
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>KubeCloudSharedConfiguration contains elements shared by both kube-controller manager
|
||||
and cloud-controller manager, but not genericconfig.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>CloudProvider</code> <B>[Required]</B><br/>
|
||||
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration"><code>CloudProviderConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>CloudProviderConfiguration holds configuration for CloudProvider related features.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ExternalCloudVolumePlugin</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external".
|
||||
It is currently used by the in-repo cloud providers to handle node and volume control in the KCM.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>UseServiceAccountCredentials</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>useServiceAccountCredentials indicates whether controllers should be run with
|
||||
individual service account credentials.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>AllowUntaggedCloud</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>AllowUntaggedCloud allows the cluster to run with untagged cloud instances.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>RouteReconciliationPeriod</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>routeReconciliationPeriod is the period for reconciling routes created for Nodes by the cloud provider.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeMonitorPeriod</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ClusterName</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>clusterName is the instance prefix for the cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ClusterCIDR</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>clusterCIDR is CIDR Range for Pods in cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>AllocateNodeCIDRs</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if
|
||||
ConfigureCloudRoutes is true, to be set on the cloud provider.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>CIDRAllocatorType</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>CIDRAllocatorType determines what kind of pod CIDR allocator will be used.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ConfigureCloudRoutes</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs
|
||||
to be configured on the cloud provider.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeSyncPeriod</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>nodeSyncPeriod is the period for syncing nodes from the cloud provider. Longer
|
||||
periods will result in fewer calls to the cloud provider, but may delay addition
|
||||
of new nodes to the cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
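The interplay of `AllocateNodeCIDRs` and `ConfigureCloudRoutes` described in the table above can be summarized in a small sketch (illustrative only; the function and action labels are hypothetical, not controller code):

```python
def pod_cidr_actions(allocate_node_cidrs, configure_cloud_routes):
    # Per the field docs: AllocateNodeCIDRs enables CIDRs for Pods to be
    # allocated; when ConfigureCloudRoutes is also true, the allocated
    # CIDRs are additionally configured on the cloud provider.
    actions = []
    if allocate_node_cidrs:
        actions.append("allocate-pod-cidrs")
        if configure_cloud_routes:
            actions.append("configure-cloud-routes")
    return actions
```

Note that `configureCloudRoutes` has no effect on its own: without allocation there are no CIDRs to push to the cloud provider.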
|
||||
|
||||
## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>WebhookConfiguration contains configuration related to
|
||||
cloud-controller-manager hosted webhooks</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>Webhooks</code> <B>[Required]</B><br/>
|
||||
<code>[]string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Webhooks is the list of webhooks to enable or disable:
|
||||
'*' means "all webhooks enabled by default";
|
||||
'foo' means "enable 'foo'";
|
||||
'-foo' means "disable 'foo'";
|
||||
the first item for a particular name wins.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
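The enable/disable semantics of the `Webhooks` list above ('*', 'foo', '-foo', first item for a name wins) can be sketched as follows (illustrative only; `webhook_enabled` is a hypothetical helper, not cloud-controller-manager code):

```python
def webhook_enabled(webhooks, name):
    # Walk the list in order; the first item that applies to `name` wins.
    # '*' applies to every webhook (enabled), 'foo' enables foo, and
    # '-foo' disables foo. A name no item applies to stays disabled.
    for item in webhooks:
        if item == name or item == "*":
            return True
        if item == "-" + name:
            return False
    return False
```

So `["-foo", "*"]` disables only `foo`, while `["*", "-foo"]` enables everything because `'*'` matches `foo` first.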
|
||||
|
||||
|
||||
|
||||
|
||||
## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>controllermanager.config.k8s.io/v1alpha1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>LeaderMigrationConfiguration</code></td></tr>
|
||||
|
||||
|
||||
<tr><td><code>leaderName</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>LeaderName is the name of the leader election resource that protects the migration,
|
||||
e.g. 1-20-KCM-to-1-21-CCM.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceLock</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>ResourceLock indicates the resource object type that will be used to lock.
|
||||
Should be "leases" or "endpoints".</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>controllerLeaders</code> <B>[Required]</B><br/>
|
||||
<a href="#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration"><code>[]ControllerLeaderConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>ControllerLeaders contains a list of migrating leader lock configurations</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
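A minimal validation sketch for the LeaderMigrationConfiguration fields above, assuming hypothetical helper and message names (the field docs only state that resourceLock should be "leases" or "endpoints" and give an example leaderName):

```python
def validate_leader_migration(leader_name, resource_lock):
    # Collects human-readable problems; an empty list means the
    # configuration passes this sketch's checks.
    errors = []
    if not leader_name:
        errors.append("leaderName must not be empty")
    if resource_lock not in ("leases", "endpoints"):
        errors.append('resourceLock should be "leases" or "endpoints"')
    return errors
```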
|
||||
|
||||
## `ControllerLeaderConfiguration` {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration}
|
||||
|
||||
|
||||
|
|
@ -146,48 +493,6 @@ first item for a particular name wins</p>
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>leaderName</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>LeaderName is the name of the leader election resource that protects the migration,
|
||||
e.g. 1-20-KCM-to-1-21-CCM.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>resourceLock</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>ResourceLock indicates the resource object type that will be used to lock.
|
||||
Should be "leases" or "endpoints".</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>controllerLeaders</code> <B>[Required]</B><br/>
|
||||
<a href="#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration"><code>[]ControllerLeaderConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>ControllerLeaders contains a list of migrating leader lock configurations</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
|
|
@ -1115,14 +1420,6 @@ allowed to sync concurrently.</p>
|
|||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>EnableTaintManager</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>If set to true, enables NoExecute taints and will evict all non-tolerating
|
||||
Pods running on Nodes tainted with this kind of taint.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeEvictionRate</code> <B>[Required]</B><br/>
|
||||
<code>float32</code>
|
||||
</td>
|
||||
|
|
@ -1582,230 +1879,4 @@ volume plugin should search for additional third party volume plugins</p>
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
|
||||
## `ServiceControllerConfiguration` {#ServiceControllerConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
|
||||
|
||||
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>ServiceControllerConfiguration contains elements describing ServiceController.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>ConcurrentServiceSyncs</code> <B>[Required]</B><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>concurrentServiceSyncs is the number of services that are
|
||||
allowed to sync concurrently. Larger number = more responsive service
|
||||
management, but more CPU (and network) load.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration}
|
||||
|
||||
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>cloudcontrollermanager.config.k8s.io/v1alpha1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>CloudControllerManagerConfiguration</code></td></tr>
|
||||
|
||||
|
||||
<tr><td><code>Generic</code> <B>[Required]</B><br/>
|
||||
<a href="#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration"><code>GenericControllerManagerConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Generic holds configuration for a generic controller-manager</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>KubeCloudShared</code> <B>[Required]</B><br/>
|
||||
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration"><code>KubeCloudSharedConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>KubeCloudSharedConfiguration holds configuration for features shared
|
||||
by both the cloud controller manager and the kube-controller manager.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ServiceController</code> <B>[Required]</B><br/>
|
||||
<a href="#ServiceControllerConfiguration"><code>ServiceControllerConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>ServiceControllerConfiguration holds configuration for ServiceController
|
||||
related features.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeStatusUpdateFrequency</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration)
|
||||
|
||||
|
||||
<p>CloudProviderConfiguration contains basic elements describing the cloud provider.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>Name</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Name is the provider for cloud services.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>CloudConfigFile</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>cloudConfigFile is the path to the cloud provider configuration file.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
|
||||
|
||||
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
|
||||
|
||||
|
||||
<p>KubeCloudSharedConfiguration contains elements shared by both kube-controller manager
|
||||
and cloud-controller manager, but not genericconfig.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>CloudProvider</code> <B>[Required]</B><br/>
|
||||
<a href="#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration"><code>CloudProviderConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>CloudProviderConfiguration holds configuration for CloudProvider related features.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ExternalCloudVolumePlugin</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external".
|
||||
It is currently used by the in-repo cloud providers to handle node and volume control in the KCM.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>UseServiceAccountCredentials</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>useServiceAccountCredentials indicates whether controllers should be run with
|
||||
individual service account credentials.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>AllowUntaggedCloud</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>AllowUntaggedCloud allows the cluster to run with untagged cloud instances.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>RouteReconciliationPeriod</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>routeReconciliationPeriod is the period for reconciling routes created for Nodes by the cloud provider.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeMonitorPeriod</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ClusterName</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>clusterName is the instance prefix for the cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ClusterCIDR</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>clusterCIDR is CIDR Range for Pods in cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>AllocateNodeCIDRs</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if
|
||||
ConfigureCloudRoutes is true, to be set on the cloud provider.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>CIDRAllocatorType</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>CIDRAllocatorType determines what kind of pod CIDR allocator will be used.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ConfigureCloudRoutes</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs
|
||||
to be configured on the cloud provider.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>NodeSyncPeriod</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>nodeSyncPeriod is the period for syncing nodes from the cloud provider. Longer
|
||||
periods will result in fewer calls to the cloud provider, but may delay addition
|
||||
of new nodes to the cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
@ -531,12 +531,12 @@ will exit with an error.</p>
|
|||
|
||||
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
|
||||
|
||||
|
||||
|
|
@ -593,12 +593,12 @@ client.</p>
|
|||
|
||||
**Appears in:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
|
||||
|
||||
|
||||
|
|
@ -621,7 +621,7 @@ client.</p>
|
|||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>enableContentionProfiling enables lock contention profiling, if
|
||||
<p>enableContentionProfiling enables block profiling, if
|
||||
enableProfiling is true.</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
|
|
|||
|
|
@ -85,6 +85,14 @@ that play a role in the number of candidates shortlisted. Must be at least
|
|||
matching hard affinity to the incoming pod.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ignorePreferredTermsOfExistingPods</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>IgnorePreferredTermsOfExistingPods configures the scheduler to ignore existing pods' preferred affinity
|
||||
rules when scoring candidate nodes, unless the incoming pod has inter-pod affinities.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
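The condition described by `ignorePreferredTermsOfExistingPods` above reduces to a single boolean expression; a sketch under the stated semantics (hypothetical helper name, not scheduler code):

```python
def score_existing_preferred_terms(ignore_preferred_terms, incoming_has_interpod_affinity):
    # Per the field docs: when the flag is set, existing pods' preferred
    # affinity terms are ignored when scoring candidate nodes, unless the
    # incoming pod itself declares inter-pod affinities.
    return (not ignore_preferred_terms) or incoming_has_interpod_affinity
```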
|
||||
|
||||
|
|
@ -202,7 +210,7 @@ with the extender. These extenders are shared by all scheduler profiles.</p>
|
|||
|
||||
|
||||
<tr><td><code>addedAffinity</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>AddedAffinity is applied to all Pods additionally to the NodeAffinity
|
||||
|
|
@ -301,7 +309,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m
|
|||
|
||||
|
||||
<tr><td><code>defaultConstraints</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>DefaultConstraints defines topology spread constraints to be applied to
|
||||
|
|
@ -1176,7 +1184,7 @@ client.</p>
|
|||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>enableContentionProfiling enables lock contention profiling, if
|
||||
<p>enableContentionProfiling enables block profiling, if
|
||||
enableProfiling is true.</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
|
@ -1188,12 +1196,12 @@ enableProfiling is true.</p>
|
|||
|
||||
**Appears in:**
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)
|
||||
|
||||
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
|
||||
|
||||
|
||||
<p>LeaderElectionConfiguration defines the configuration of leader election
|
||||
clients for components that can run with leader election enabled.</p>
|
||||
|
|
|
|||
|
|
@ -85,6 +85,14 @@ that play a role in the number of candidates shortlisted. Must be at least
|
|||
matching hard affinity to the incoming pod.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ignorePreferredTermsOfExistingPods</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>IgnorePreferredTermsOfExistingPods configures the scheduler to ignore existing pods' preferred affinity
|
||||
rules when scoring candidate nodes, unless the incoming pod has inter-pod affinities.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
@ -218,7 +226,7 @@ with the extender. These extenders are shared by all scheduler profiles.</p>
|
|||
|
||||
|
||||
<tr><td><code>addedAffinity</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>AddedAffinity is applied to all Pods additionally to the NodeAffinity
|
||||
|
|
@ -317,7 +325,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m
|
|||
|
||||
|
||||
<tr><td><code>defaultConstraints</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>DefaultConstraints defines topology spread constraints to be applied to
|
||||
|
|
@ -1153,7 +1161,7 @@ client.</p>
|
|||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>enableContentionProfiling enables lock contention profiling, if
|
||||
<p>enableContentionProfiling enables block profiling, if
|
||||
enableProfiling is true.</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
|
|
|||
|
|
@ -85,6 +85,14 @@ that play a role in the number of candidates shortlisted. Must be at least
|
|||
matching hard affinity to the incoming pod.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ignorePreferredTermsOfExistingPods</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>IgnorePreferredTermsOfExistingPods configures the scheduler to ignore existing pods' preferred affinity
|
||||
rules when scoring candidate nodes, unless the incoming pod has inter-pod affinities.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
@@ -202,7 +210,7 @@ with the extender. These extenders are shared by all scheduler profiles.</p>
|
|||
|
||||
|
||||
<tr><td><code>addedAffinity</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>AddedAffinity is applied to all Pods additionally to the NodeAffinity
|
||||
|
|
@@ -301,7 +309,7 @@ The default strategy is LeastAllocated with an equal "cpu" and "m
|
|||
|
||||
|
||||
<tr><td><code>defaultConstraints</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>DefaultConstraints defines topology spread constraints to be applied to
|
||||
|
|
@@ -1157,7 +1165,7 @@ client.</p>
|
|||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>enableContentionProfiling enables lock contention profiling, if
|
||||
<p>enableContentionProfiling enables block profiling, if
|
||||
enableProfiling is true.</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
|
|
|||
File diff suppressed because it is too large
|
|
@@ -30,6 +30,7 @@ the user to configure a directory from which to take patches for components depl
|
|||
<ul>
|
||||
<li>kubeadm v1.15.x and newer can be used to migrate from v1beta1 to v1beta2.</li>
|
||||
<li>kubeadm v1.22.x and newer no longer support v1beta1 and older APIs, but can be used to migrate v1beta2 to v1beta3.</li>
|
||||
<li>kubeadm v1.27.x and newer no longer support v1beta2 and older APIs.</li>
|
||||
</ul>
|
||||
<h2>Basics</h2>
|
||||
<p>The preferred way to configure kubeadm is to pass a YAML configuration file with the <code>--config</code> option. Some of the
|
||||
|
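A minimal sketch of such a configuration file (the version and subnet values are placeholders, not recommendations):

```yaml
# Sketch: a minimal kubeadm configuration file, passed as
#   kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.0
networking:
  podSubnet: 10.244.0.0/16
```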
|
@@ -264,109 +265,6 @@ node only (e.g. the node ip).</p>
|
|||
|
||||
|
||||
|
||||
## `BootstrapToken` {#BootstrapToken}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)
|
||||
|
||||
|
||||
<p>BootstrapToken describes one bootstrap token, stored as a Secret in the cluster</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>token</code> <B>[Required]</B><br/>
|
||||
<a href="#BootstrapTokenString"><code>BootstrapTokenString</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>token</code> is used for establishing bidirectional trust between nodes and control-planes.
|
||||
Used for joining nodes in the cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>description</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>description</code> sets a human-friendly message explaining why this token exists and what it is used
|
||||
for, so other administrators can know its purpose.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ttl</code><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>ttl</code> defines the time to live for this token. Defaults to <code>24h</code>.
|
||||
<code>expires</code> and <code>ttl</code> are mutually exclusive.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>expires</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#time-v1-meta"><code>meta/v1.Time</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>expires</code> specifies the timestamp when this token expires. Defaults to being set
|
||||
dynamically at runtime based on the <code>ttl</code>. <code>expires</code> and <code>ttl</code> are mutually exclusive.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>usages</code><br/>
|
||||
<code>[]string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>usages</code> describes the ways in which this token can be used. By default it can be used
|
||||
for establishing bidirectional trust, but that can be changed here.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>groups</code><br/>
|
||||
<code>[]string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>groups</code> specifies the extra groups that this token will authenticate as when/if
|
||||
used for authentication.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `BootstrapTokenString` {#BootstrapTokenString}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [BootstrapToken](#BootstrapToken)
|
||||
|
||||
|
||||
<p>BootstrapTokenString is a token of the format <code>abcdef.abcdef0123456789</code> that is used
|
||||
both for validation of the identity of the API server from a joining node's point
|
||||
of view and as an authentication method for the node in the bootstrap phase of
|
||||
"kubeadm join". This token is and should be short-lived.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>-</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted">No description provided.</span></td>
|
||||
</tr>
|
||||
<tr><td><code>-</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted">No description provided.</span></td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta3-ClusterConfiguration}
|
||||
|
||||
|
||||
|
|
@@ -1036,7 +934,7 @@ file from which to load cluster information.</p>
|
|||
</td>
|
||||
</tr>
|
||||
<tr><td><code>pathType</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#hostpathtype-v1-core"><code>core/v1.HostPathType</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#hostpathtype-v1-core"><code>core/v1.HostPathType</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>pathType</code> is the type of the <code>hostPath</code>.</p>
|
||||
|
|
@@ -1259,7 +1157,7 @@ This information will be annotated to the Node API object, for later re-use</p>
|
|||
</td>
|
||||
</tr>
|
||||
<tr><td><code>taints</code> <B>[Required]</B><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#taint-v1-core"><code>[]core/v1.Taint</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#taint-v1-core"><code>[]core/v1.Taint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>taints</code> specifies the taints the Node API object should be registered with.
|
||||
|
|
@@ -1290,7 +1188,7 @@ the current node is registered.</p>
|
|||
</td>
|
||||
</tr>
|
||||
<tr><td><code>imagePullPolicy</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#pullpolicy-v1-core"><code>core/v1.PullPolicy</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#pullpolicy-v1-core"><code>core/v1.PullPolicy</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>imagePullPolicy</code> specifies the policy for image pulling during kubeadm "init" and
|
||||
|
|
@@ -1338,4 +1236,107 @@ first alpha-numerically.</p>
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## `BootstrapToken` {#BootstrapToken}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)
|
||||
|
||||
|
||||
<p>BootstrapToken describes one bootstrap token, stored as a Secret in the cluster</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>token</code> <B>[Required]</B><br/>
|
||||
<a href="#BootstrapTokenString"><code>BootstrapTokenString</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>token</code> is used for establishing bidirectional trust between nodes and control-planes.
|
||||
Used for joining nodes in the cluster.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>description</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>description</code> sets a human-friendly message explaining why this token exists and what it is used
|
||||
for, so other administrators can know its purpose.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>ttl</code><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>ttl</code> defines the time to live for this token. Defaults to <code>24h</code>.
|
||||
<code>expires</code> and <code>ttl</code> are mutually exclusive.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>expires</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#time-v1-meta"><code>meta/v1.Time</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>expires</code> specifies the timestamp when this token expires. Defaults to being set
|
||||
dynamically at runtime based on the <code>ttl</code>. <code>expires</code> and <code>ttl</code> are mutually exclusive.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>usages</code><br/>
|
||||
<code>[]string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>usages</code> describes the ways in which this token can be used. By default it can be used
|
||||
for establishing bidirectional trust, but that can be changed here.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>groups</code><br/>
|
||||
<code>[]string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p><code>groups</code> specifies the extra groups that this token will authenticate as when/if
|
||||
used for authentication.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `BootstrapTokenString` {#BootstrapTokenString}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [BootstrapToken](#BootstrapToken)
|
||||
|
||||
|
||||
<p>BootstrapTokenString is a token of the format <code>abcdef.abcdef0123456789</code> that is used
|
||||
both for validation of the identity of the API server from a joining node's point
|
||||
of view and as an authentication method for the node in the bootstrap phase of
|
||||
"kubeadm join". This token is and should be short-lived.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>-</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted">No description provided.</span></td>
|
||||
</tr>
|
||||
<tr><td><code>-</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted">No description provided.</span></td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
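The `abcdef.abcdef0123456789` shape described above (a 6-character token ID and a 16-character token secret, each lowercase alphanumeric, joined by a dot) can be checked with a small sketch like this; the regex is inferred from the documented example, not copied from kubeadm's source:

```python
import re

# Assumed pattern for a kubeadm bootstrap token: 6-char ID, dot,
# 16-char secret, both restricted to [a-z0-9].
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

def is_valid_bootstrap_token(token: str) -> bool:
    """Return True if `token` matches the id.secret bootstrap token shape."""
    return TOKEN_RE.match(token) is not None

print(is_valid_bootstrap_token("abcdef.abcdef0123456789"))  # True
print(is_valid_bootstrap_token("abcdef"))                   # False
```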
|
|
@@ -169,211 +169,4 @@ credential plugin.</p>
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
|
||||
## `FormatOptions` {#FormatOptions}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
|
||||
<p>FormatOptions contains options for the different logging formats.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>json</code> <B>[Required]</B><br/>
|
||||
<a href="#JSONOptions"><code>JSONOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>[Alpha] JSON contains options for logging format "json".
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `JSONOptions` {#JSONOptions}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [FormatOptions](#FormatOptions)
|
||||
|
||||
|
||||
<p>JSONOptions contains options for logging format "json".</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>splitStream</code> <B>[Required]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>[Alpha] SplitStream redirects error messages to stderr while
|
||||
info messages go to stdout, with buffering. The default is to write
|
||||
both to stdout, without buffering. Only available when
|
||||
the LoggingAlphaOptions feature gate is enabled.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>[Alpha] InfoBufferSize sets the size of the info stream when
|
||||
using split streams. The default is zero, which disables buffering.
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `LogFormatFactory` {#LogFormatFactory}
|
||||
|
||||
|
||||
|
||||
<p>LogFormatFactory provides support for a certain additional,
|
||||
non-default log format.</p>
|
||||
|
||||
|
||||
|
||||
|
||||
## `LoggingConfiguration` {#LoggingConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
|
||||
|
||||
<p>LoggingConfiguration contains logging options.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>format</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>The format flag specifies the structure of log messages.
|
||||
The default value of format is <code>text</code>.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>flushFrequency</code> <B>[Required]</B><br/>
|
||||
<a href="https://pkg.go.dev/time#Duration"><code>time.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Maximum number of nanoseconds (i.e. 1s = 1000000000) between log
|
||||
flushes. Ignored if the selected logging backend writes log
|
||||
messages without buffering.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>verbosity</code> <B>[Required]</B><br/>
|
||||
<a href="#VerbosityLevel"><code>VerbosityLevel</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Verbosity is the threshold that determines which log messages are
|
||||
logged. Default is zero which logs only the most important
|
||||
messages. Higher values enable additional messages. Error messages
|
||||
are always logged.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>vmodule</code> <B>[Required]</B><br/>
|
||||
<a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>VModule overrides the verbosity threshold for individual files.
|
||||
Only supported for "text" log format.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>options</code> <B>[Required]</B><br/>
|
||||
<a href="#FormatOptions"><code>FormatOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>[Alpha] Options holds additional parameters that are specific
|
||||
to the different logging formats. Only the options for the selected
|
||||
format get used, but all of them get validated.
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
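As an illustrative sketch (field nesting assumed from the KubeletConfiguration reference; values are placeholders), these options live under `logging`:

```yaml
# Sketch: logging options in a kubelet configuration file.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: text   # the default
  verbosity: 3   # higher values enable additional messages
```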
||||
|
||||
## `TracingConfiguration` {#TracingConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
|
||||
|
||||
<p>TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>endpoint</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Endpoint of the collector this component will report traces to.
|
||||
The connection is insecure, and does not currently support TLS.
|
||||
The recommended value is unset, in which case the endpoint is the OTLP gRPC default, localhost:4317.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>samplingRatePerMillion</code><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>SamplingRatePerMillion is the number of samples to collect per million spans.
|
||||
The recommended value is unset. If unset, the sampler respects its parent span's sampling
|
||||
rate, but otherwise never samples.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `VModuleConfiguration` {#VModuleConfiguration}
|
||||
|
||||
(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`)
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
|
||||
<p>VModuleConfiguration is a collection of individual file names or patterns
|
||||
and the corresponding verbosity threshold.</p>
|
||||
|
||||
|
||||
|
||||
|
||||
## `VerbosityLevel` {#VerbosityLevel}
|
||||
|
||||
(Alias of `uint32`)
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
|
||||
|
||||
<p>VerbosityLevel represents a klog or logr verbosity threshold.</p>
|
||||
|
||||
|
||||
|
||||
|
|
@@ -169,6 +169,4 @@ credential plugin.</p>
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
@@ -262,7 +262,7 @@ Default: 10</p>
|
|||
<td>
|
||||
<p>eventRecordQPS is the maximum event creations per second. If 0, there
|
||||
is no limit enforced. The value cannot be a negative number.
|
||||
Default: 5</p>
|
||||
Default: 50</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>eventBurst</code><br/>
|
||||
|
|
@@ -273,7 +273,7 @@ Default: 5</p>
|
|||
allows event creations to burst to this number, while still not exceeding
|
||||
eventRecordQPS. This field cannot be a negative number and it is only used
|
||||
when eventRecordQPS > 0.
|
||||
Default: 10</p>
|
||||
Default: 100</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>enableDebuggingHandlers</code><br/>
|
||||
|
|
@@ -290,7 +290,7 @@ Default: true</p>
|
|||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>enableContentionProfiling enables lock contention profiling, if enableDebuggingHandlers is true.
|
||||
<p>enableContentionProfiling enables block profiling, if enableDebuggingHandlers is true.
|
||||
Default: false</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
|
@@ -529,8 +529,7 @@ resources;</li>
|
|||
<li><code>single-numa-node</code>: kubelet only allows pods with a single NUMA alignment
|
||||
of CPU and device resources.</li>
|
||||
</ul>
|
||||
<p>Policies other than "none" require the TopologyManager feature gate to be enabled.
|
||||
Default: "none"</p>
|
||||
<p>Default: "none"</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>topologyManagerScope</code><br/>
|
||||
|
|
@@ -543,8 +542,7 @@ that topology manager requests and hint providers generate. Valid values include
|
|||
<li><code>container</code>: topology policy is applied on a per-container basis.</li>
|
||||
<li><code>pod</code>: topology policy is applied on a per-pod basis.</li>
|
||||
</ul>
|
||||
<p>"pod" scope requires the TopologyManager feature gate to be enabled.
|
||||
Default: "container"</p>
|
||||
<p>Default: "container"</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>topologyManagerPolicyOptions</code><br/>
|
||||
|
|
@@ -692,7 +690,7 @@ Default: "application/vnd.kubernetes.protobuf"</p>
|
|||
</td>
|
||||
<td>
|
||||
<p>kubeAPIQPS is the QPS to use while talking with kubernetes apiserver.
|
||||
Default: 5</p>
|
||||
Default: 50</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>kubeAPIBurst</code><br/>
|
||||
|
|
@@ -701,7 +699,7 @@ Default: 5</p>
|
|||
<td>
|
||||
<p>kubeAPIBurst is the burst to allow while talking with kubernetes API server.
|
||||
This field cannot be a negative number.
|
||||
Default: 10</p>
|
||||
Default: 100</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>serializeImagePulls</code><br/>
|
||||
|
|
@@ -715,6 +713,16 @@ Issue #10959 has more details.
|
|||
Default: true</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>maxParallelImagePulls</code><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>MaxParallelImagePulls sets the maximum number of image pulls in parallel.
|
||||
This field cannot be set if SerializeImagePulls is true.
|
||||
Setting it to nil means no limit.
|
||||
Default: nil</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>evictionHard</code><br/>
|
||||
<code>map[string]string</code>
|
||||
</td>
|
||||
|
|
@@ -953,7 +961,7 @@ Default: ""</p>
|
|||
<td>
|
||||
<p>systemReservedCgroup helps the kubelet identify absolute name of top level CGroup used
|
||||
to enforce <code>systemReserved</code> compute resource reservation for OS system daemons.
|
||||
Refer to <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
|
||||
Refer to <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">Node Allocatable</a>
|
||||
doc for more information.
|
||||
Default: ""</p>
|
||||
</td>
|
||||
|
|
@@ -964,7 +972,7 @@ Default: ""</p>
|
|||
<td>
|
||||
<p>kubeReservedCgroup helps the kubelet identify absolute name of top level CGroup used
|
||||
to enforce <code>KubeReserved</code> compute resource reservation for Kubernetes node system daemons.
|
||||
Refer to <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
|
||||
Refer to <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">Node Allocatable</a>
|
||||
doc for more information.
|
||||
Default: ""</p>
|
||||
</td>
|
||||
|
|
@@ -980,7 +988,7 @@ If <code>none</code> is specified, no other options may be specified.
|
|||
When <code>system-reserved</code> is in the list, systemReservedCgroup must be specified.
|
||||
When <code>kube-reserved</code> is in the list, kubeReservedCgroup must be specified.
|
||||
This field is supported only when <code>cgroupsPerQOS</code> is set to true.
|
||||
Refer to <a href="https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md">Node Allocatable</a>
|
||||
Refer to <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">Node Allocatable</a>
|
||||
for more information.
|
||||
Default: ["pods"]</p>
|
||||
</td>
|
||||
|
|
@@ -1042,6 +1050,15 @@ Format: text</p>
|
|||
Default: true</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>enableSystemLogQuery</code><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>enableSystemLogQuery enables the node log query feature on the /logs endpoint.
|
||||
EnableSystemLogHandler has to be enabled in addition for this feature to work.
|
||||
Default: false</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>shutdownGracePeriod</code><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
|
|
@@ -1143,7 +1160,6 @@ Default: true</p>
|
|||
</td>
|
||||
<td>
|
||||
<p>SeccompDefault enables the use of <code>RuntimeDefault</code> as the default seccomp profile for all workloads.
|
||||
This requires the corresponding SeccompDefault feature gate to be enabled as well.
|
||||
Default: false</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
|
@@ -1156,11 +1172,11 @@ when setting the cgroupv2 memory.high value to enforce MemoryQoS.
|
|||
Decreasing this factor will set lower high limit for container cgroups and put heavier reclaim pressure
|
||||
while increasing will put less reclaim pressure.
|
||||
See https://kep.k8s.io/2570 for more details.
|
||||
Default: 0.8</p>
|
||||
Default: 0.9</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>registerWithTaints</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#taint-v1-core"><code>[]core/v1.Taint</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#taint-v1-core"><code>[]core/v1.Taint</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>registerWithTaints are an array of taints to add to a node object when
|
||||
|
|
@@ -1182,7 +1198,8 @@ Default: true</p>
|
|||
</td>
|
||||
<td>
|
||||
<p>Tracing specifies the versioned configuration for OpenTelemetry tracing clients.
|
||||
See https://kep.k8s.io/2832 for more details.</p>
|
||||
See https://kep.k8s.io/2832 for more details.
|
||||
Default: nil</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>localStorageCapacityIsolation</code><br/>
|
||||
|
|
@@ -1199,6 +1216,25 @@ disabled. Once disabled, user should not set request/limit for container's ephem
|
|||
Default: true</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>containerRuntimeEndpoint</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>ContainerRuntimeEndpoint is the endpoint of container runtime.
|
||||
Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.
|
||||
Examples: 'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>imageServiceEndpoint</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>ImageServiceEndpoint is the endpoint of container image service.
|
||||
Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.
|
||||
Examples: 'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'.
|
||||
If not specified, the value in containerRuntimeEndpoint is used.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
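Several of the kubelet fields above (eventRecordQPS/eventBurst, kubeAPIQPS/kubeAPIBurst) follow a QPS-plus-burst rate-limiting pattern, which can be sketched as a token bucket; this is an illustration of the pattern only, not the kubelet's actual implementation:

```python
import time

class TokenBucket:
    """Illustrative token bucket: refill at `qps` tokens per second,
    capped at `burst`; each allowed call consumes one token."""

    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never exceeding the burst cap.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# With qps=50 and burst=100 (the event defaults shown in the diff above),
# the first 100 calls in a tight loop succeed; later calls are paced at ~50/s.
bucket = TokenBucket(qps=50, burst=100)
allowed = sum(bucket.allow() for _ in range(150))
print(allowed)  # at least the initial burst of 100
```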
||||
|
||||
|
|
@@ -1220,7 +1256,7 @@ It exists in the kubeletconfig API group because it is classified as a versioned
|
|||
|
||||
|
||||
<tr><td><code>source</code><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>source is the source that we are serializing.</p>
|
||||
|
|
@@ -1581,7 +1617,7 @@ and groups corresponding to the Organization in the client certificate.</p>
|
|||
<span class="text-muted">No description provided.</span></td>
|
||||
</tr>
|
||||
<tr><td><code>limits</code> <B>[Required]</B><br/>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
|
||||
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<span class="text-muted">No description provided.</span></td>
|
||||
|
|
|
|||
|
|
@@ -17,6 +17,6 @@ tags:
|
|||
|
||||
If your Kubernetes cluster uses etcd as its backing store, make sure you have a
|
||||
[back up](/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster) plan
|
||||
for those data.
|
||||
for the data.
|
||||
|
||||
You can find in-depth information about etcd in the official [documentation](https://etcd.io/docs/).
|
||||
|
|
|
|||
File diff suppressed because it is too large
|
|
@@ -9,13 +9,13 @@ weight: 20
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state for_k8s_version="v1.26" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
|
||||
|
||||
As an alpha feature, Kubernetes lets you configure Service Level Indicator (SLI) metrics
|
||||
By default, Kubernetes {{< skew currentVersion >}} publishes Service Level Indicator (SLI) metrics
|
||||
for each Kubernetes component binary. This metric endpoint is exposed on the serving
|
||||
HTTPS port of each component, at the path `/metrics/slis`. You must enable the
|
||||
HTTPS port of each component, at the path `/metrics/slis`. The
|
||||
`ComponentSLIs` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
for every component from which you want to scrape SLI metrics.
|
||||
defaults to enabled for each Kubernetes component as of v1.27.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
|
|
|||
|
|
@@ -9,7 +9,7 @@ outputs:
|
|||
layout: cve-feed
|
||||
---
|
||||
|
||||
{{< feature-state for_k8s_version="v1.25" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
|
||||
|
||||
This is a community maintained list of official CVEs announced by
|
||||
the Kubernetes Security Response Committee. See
|
||||
|
|
|
|||
|
|
@@ -22,9 +22,11 @@ For a stable output in a script:
|
|||
|
||||
## Subresources
|
||||
|
||||
* You can use the `--subresource` alpha flag for kubectl commands like `get`, `patch`,
|
||||
* You can use the `--subresource` beta flag for kubectl commands like `get`, `patch`,
|
||||
`edit` and `replace` to fetch and update subresources for all resources that
|
||||
support them. Currently, only the `status` and `scale` subresources are supported.
|
||||
* For `kubectl edit`, the `scale` subresource is not supported. If you use `--subresource` with
|
||||
`kubectl edit` and specify `scale` as the subresource, the command will error out.
|
||||
* The API contract against a subresource is identical to a full resource. While updating the
|
||||
`status` subresource to a new value, keep in mind that the subresource could be potentially
|
||||
reconciled by a controller to a different value.
|
||||
|
|
|
|||
|
|
@@ -361,6 +361,14 @@ kubectl [flags]
|
|||
</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">KUBECTL_ENABLE_CMD_SHADOW</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;">When set to true, external plugins can be used as subcommands for builtin commands if the subcommand does not exist. In alpha stage, this feature can only be used for the create command (e.g. kubectl create networkpolicy).
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
|
|
|
|||
|
|
@@ -404,6 +404,11 @@ GET /apis/certificates.k8s.io/v1/certificatesigningrequests
|
|||
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
|
||||
|
||||
|
||||
- **sendInitialEvents** (*in query*): boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
|
||||
|
||||
|
||||
- **timeoutSeconds** (*in query*): integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
|
||||
|
|
@@ -899,6 +904,11 @@ DELETE /apis/certificates.k8s.io/v1/certificatesigningrequests
|
|||
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
|
||||
|
||||
|
||||
- **sendInitialEvents** (*in query*): boolean
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
|
||||
|
||||
|
||||
- **timeoutSeconds** (*in query*): integer
|
||||
|
||||
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
|
||||
|
|
|
|||
|
|
@@ -0,0 +1,506 @@
---
api_metadata:
  apiVersion: "certificates.k8s.io/v1alpha1"
  import: "k8s.io/api/certificates/v1alpha1"
  kind: "ClusterTrustBundle"
content_type: "api_reference"
description: "ClusterTrustBundle is a cluster-scoped container for X."
title: "ClusterTrustBundle v1alpha1"
weight: 5
auto_generated: true
---

<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->

`apiVersion: certificates.k8s.io/v1alpha1`

`import "k8s.io/api/certificates/v1alpha1"`


## ClusterTrustBundle {#ClusterTrustBundle}

ClusterTrustBundle is a cluster-scoped container for X.509 trust anchors (root certificates).

ClusterTrustBundle objects are considered to be readable by any authenticated user in the cluster, because they can be mounted by pods using the `clusterTrustBundle` projection. All service accounts have read access to ClusterTrustBundles by default. Users who only have namespace-level access to a cluster can read ClusterTrustBundles by impersonating a serviceaccount that they have access to.

It can be optionally associated with a particular signer, in which case it contains one valid set of trust anchors for that signer. Signers may have multiple associated ClusterTrustBundles; each is an independent set of trust anchors for that signer. Admission control is used to enforce that only users with permissions on the signer can create or modify the corresponding bundle.

<hr>

- **apiVersion**: certificates.k8s.io/v1alpha1


- **kind**: ClusterTrustBundle


- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)

  metadata contains the object metadata.

- **spec** (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundleSpec" >}}">ClusterTrustBundleSpec</a>), required

  spec contains the signer (if any) and trust anchors.




## ClusterTrustBundleSpec {#ClusterTrustBundleSpec}

ClusterTrustBundleSpec contains the signer and trust anchors.

<hr>

- **trustBundle** (string), required

  trustBundle contains the individual X.509 trust anchors for this bundle, as PEM bundle of PEM-wrapped, DER-formatted X.509 certificates.

  The data must consist only of PEM certificate blocks that parse as valid X.509 certificates. Each certificate must include a basic constraints extension with the CA bit set. The API server will reject objects that contain duplicate certificates, or that use PEM block headers.

  Users of ClusterTrustBundles, including Kubelet, are free to reorder and deduplicate certificate blocks in this file according to their own logic, as well as to drop PEM block headers and inter-block data.

- **signerName** (string)

  signerName indicates the associated signer, if any.

  In order to create or update a ClusterTrustBundle that sets signerName, you must have the following cluster-scoped permission: group=certificates.k8s.io resource=signers resourceName=\<the signer name> verb=attest.

  If signerName is not empty, then the ClusterTrustBundle object must be named with the signer name as a prefix (translating slashes to colons). For example, for the signer name `example.com/foo`, valid ClusterTrustBundle object names include `example.com:foo:abc` and `example.com:foo:v1`.

  If signerName is empty, then the ClusterTrustBundle object's name must not have such a prefix.

  List/watch requests for ClusterTrustBundles can filter on this field using a `spec.signerName=NAME` field selector.
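Tying the two spec fields together, a minimal manifest sketch (the signer name, object name, and certificate contents below are placeholders, not working values):

```yaml
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
  # With spec.signerName set, the object name must carry the signer-name
  # prefix, translating slashes to colons.
  name: example.com:foo:v1
spec:
  signerName: example.com/foo
  trustBundle: |
    -----BEGIN CERTIFICATE-----
    <PEM-wrapped, DER-formatted X.509 CA certificate>
    -----END CERTIFICATE-----
```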
## ClusterTrustBundleList {#ClusterTrustBundleList}

ClusterTrustBundleList is a collection of ClusterTrustBundle objects.

<hr>

- **apiVersion**: certificates.k8s.io/v1alpha1


- **kind**: ClusterTrustBundleList


- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)

  metadata contains the list metadata.

- **items** ([]<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>), required

  items is a collection of ClusterTrustBundle objects.




## Operations {#Operations}



<hr>



### `get` read the specified ClusterTrustBundle

#### HTTP Request

GET /apis/certificates.k8s.io/v1alpha1/clustertrustbundles/{name}

#### Parameters


- **name** (*in path*): string, required

  name of the ClusterTrustBundle


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>



#### Response


200 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): OK

401: Unauthorized


### `list` list or watch objects of kind ClusterTrustBundle

#### HTTP Request

GET /apis/certificates.k8s.io/v1alpha1/clustertrustbundles

#### Parameters


- **allowWatchBookmarks** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>


- **continue** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>


- **fieldSelector** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>


- **labelSelector** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>


- **limit** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>


- **resourceVersion** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>


- **resourceVersionMatch** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>


- **watch** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>



#### Response


200 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundleList" >}}">ClusterTrustBundleList</a>): OK

401: Unauthorized


### `create` create a ClusterTrustBundle

#### HTTP Request

POST /apis/certificates.k8s.io/v1alpha1/clustertrustbundles

#### Parameters


- **body**: <a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>, required


- **dryRun** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>


- **fieldManager** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>


- **fieldValidation** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>



#### Response


200 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): OK

201 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): Created

202 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): Accepted

401: Unauthorized


### `update` replace the specified ClusterTrustBundle

#### HTTP Request

PUT /apis/certificates.k8s.io/v1alpha1/clustertrustbundles/{name}

#### Parameters


- **name** (*in path*): string, required

  name of the ClusterTrustBundle


- **body**: <a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>, required


- **dryRun** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>


- **fieldManager** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>


- **fieldValidation** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>



#### Response


200 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): OK

201 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): Created

401: Unauthorized


### `patch` partially update the specified ClusterTrustBundle

#### HTTP Request

PATCH /apis/certificates.k8s.io/v1alpha1/clustertrustbundles/{name}

#### Parameters


- **name** (*in path*): string, required

  name of the ClusterTrustBundle


- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>, required


- **dryRun** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>


- **fieldManager** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>


- **fieldValidation** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>


- **force** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>



#### Response


200 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): OK

201 (<a href="{{< ref "../authentication-resources/cluster-trust-bundle-v1alpha1#ClusterTrustBundle" >}}">ClusterTrustBundle</a>): Created

401: Unauthorized


### `delete` delete a ClusterTrustBundle

#### HTTP Request

DELETE /apis/certificates.k8s.io/v1alpha1/clustertrustbundles/{name}

#### Parameters


- **name** (*in path*): string, required

  name of the ClusterTrustBundle


- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>


- **dryRun** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>


- **gracePeriodSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>


- **propagationPolicy** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>



#### Response


200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK

202 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): Accepted

401: Unauthorized


### `deletecollection` delete collection of ClusterTrustBundle

#### HTTP Request

DELETE /apis/certificates.k8s.io/v1alpha1/clustertrustbundles

#### Parameters


- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>


- **continue** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>


- **dryRun** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>


- **fieldSelector** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>


- **gracePeriodSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>


- **labelSelector** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>


- **limit** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>


- **propagationPolicy** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>


- **resourceVersion** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>


- **resourceVersionMatch** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>



#### Response


200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK

401: Unauthorized

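The naming rule for signer-associated bundles described above (slashes in the signer name translate to colons, and the result prefixes the object name) can be sketched as a small helper. `required_name_prefix` is a hypothetical illustration, not part of any Kubernetes client library:

```python
def required_name_prefix(signer_name: str) -> str:
    """Object-name prefix a ClusterTrustBundle must carry when its
    spec.signerName is set: slashes become colons, followed by a colon
    separating the prefix from the bundle's own suffix."""
    return signer_name.replace("/", ":") + ":"

# Both example names from the API documentation carry the prefix.
prefix = required_name_prefix("example.com/foo")
print(prefix)  # example.com:foo:
print(all(name.startswith(prefix)
          for name in ("example.com:foo:abc", "example.com:foo:v1")))  # True
```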
@@ -0,0 +1,142 @@
---
api_metadata:
  apiVersion: "authentication.k8s.io/v1beta1"
  import: "k8s.io/api/authentication/v1beta1"
  kind: "SelfSubjectReview"
content_type: "api_reference"
description: "SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request."
title: "SelfSubjectReview v1beta1"
weight: 6
auto_generated: true
---

<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->

`apiVersion: authentication.k8s.io/v1beta1`

`import "k8s.io/api/authentication/v1beta1"`


## SelfSubjectReview {#SelfSubjectReview}

SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. When using impersonation, users will receive the user info of the user being impersonated. If impersonation or request header authentication is used, any extra keys will have their case ignored and returned as lowercase.

<hr>

- **apiVersion**: authentication.k8s.io/v1beta1


- **kind**: SelfSubjectReview


- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)

  Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **status** (<a href="{{< ref "../authentication-resources/self-subject-review-v1beta1#SelfSubjectReviewStatus" >}}">SelfSubjectReviewStatus</a>)

  Status is filled in by the server with the user attributes.




## SelfSubjectReviewStatus {#SelfSubjectReviewStatus}

SelfSubjectReviewStatus is filled by the kube-apiserver and sent back to a user.

<hr>

- **userInfo** (UserInfo)

  User attributes of the user making this request.

  <a name="UserInfo"></a>
  *UserInfo holds the information about the user needed to implement the user.Info interface.*

  - **userInfo.extra** (map[string][]string)

    Any additional information provided by the authenticator.

  - **userInfo.groups** ([]string)

    The names of groups this user is a part of.

  - **userInfo.uid** (string)

    A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs.

  - **userInfo.username** (string)

    The name that uniquely identifies this user among all active users.
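Because the status is populated entirely by the server, the request body for the create operation needs only the type information. A minimal sketch of such a body:

```yaml
# Minimal request body: the server fills in status.userInfo with the
# attributes of the authenticated caller.
apiVersion: authentication.k8s.io/v1beta1
kind: SelfSubjectReview
```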
## Operations {#Operations}



<hr>



### `create` create a SelfSubjectReview

#### HTTP Request

POST /apis/authentication.k8s.io/v1beta1/selfsubjectreviews

#### Parameters


- **body**: <a href="{{< ref "../authentication-resources/self-subject-review-v1beta1#SelfSubjectReview" >}}">SelfSubjectReview</a>, required


- **dryRun** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>


- **fieldManager** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>


- **fieldValidation** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>


- **pretty** (*in query*): string

  <a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>



#### Response


200 (<a href="{{< ref "../authentication-resources/self-subject-review-v1beta1#SelfSubjectReview" >}}">SelfSubjectReview</a>): OK

201 (<a href="{{< ref "../authentication-resources/self-subject-review-v1beta1#SelfSubjectReview" >}}">SelfSubjectReview</a>): Created

202 (<a href="{{< ref "../authentication-resources/self-subject-review-v1beta1#SelfSubjectReview" >}}">SelfSubjectReview</a>): Accepted

401: Unauthorized

@@ -182,6 +182,11 @@ GET /api/v1/namespaces/{namespace}/serviceaccounts
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -250,6 +255,11 @@ GET /api/v1/serviceaccounts
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -560,6 +570,11 @@ DELETE /api/v1/namespaces/{namespace}/serviceaccounts
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -200,6 +200,11 @@ GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -485,6 +490,11 @@ DELETE /apis/rbac.authorization.k8s.io/v1/clusterrolebindings
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -196,6 +196,11 @@ GET /apis/rbac.authorization.k8s.io/v1/clusterroles
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -481,6 +486,11 @@ DELETE /apis/rbac.authorization.k8s.io/v1/clusterroles
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -210,6 +210,11 @@ GET /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -278,6 +283,11 @@ GET /apis/rbac.authorization.k8s.io/v1/rolebindings
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -588,6 +598,11 @@ DELETE /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -195,6 +195,11 @@ GET /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -263,6 +268,11 @@ GET /apis/rbac.authorization.k8s.io/v1/roles
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>

@@ -573,6 +583,11 @@ DELETE /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles
  <a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>


- **sendInitialEvents** (*in query*): boolean

  <a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>


- **timeoutSeconds** (*in query*): integer

  <a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>