From da08874e48564961d72e1ec959e2715822fe04f1 Mon Sep 17 00:00:00 2001 From: Michael Taufen Date: Mon, 8 Jun 2020 15:52:49 -0700 Subject: [PATCH 02/74] Update docs for ServiceAccountIssuerDiscovery beta https://github.com/kubernetes/enhancements/issues/1393 --- .../reference/command-line-tools-reference/feature-gates.md | 3 ++- .../tasks/configure-pod-container/configure-service-account.md | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 806d7a2021..0e06617017 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -142,7 +142,8 @@ different Kubernetes components. | `ServiceAppProtocol` | `true` | Beta | 1.19 | | | `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | | `ServerSideApply` | `true` | Beta | 1.16 | | -| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | | +| `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | 1.19 | +| `ServiceAccountIssuerDiscovery` | `true` | Beta | 1.20 | | | `ServiceAppProtocol` | `false` | Alpha | 1.18 | | | `ServiceNodeExclusion` | `false` | Alpha | 1.8 | 1.18 | | `ServiceNodeExclusion` | `true` | Beta | 1.19 | | diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index e3f97dd5ce..c1a3a8f1c2 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -324,7 +324,7 @@ The application is responsible for reloading the token when it rotates. Periodic ## Service Account Issuer Discovery -{{< feature-state for_k8s_version="v1.18" state="alpha" >}} +{{< feature-state for_k8s_version="v1.20" state="beta" >}} The Service Account Issuer Discovery feature is enabled by enabling the `ServiceAccountIssuerDiscovery` [feature gate](/docs/reference/command-line-tools-reference/feature-gates) From ecf851c495ad9031ea1d14c508e2ebf569ff9d20 Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Wed, 16 Sep 2020 18:15:13 +0000 Subject: [PATCH 03/74] promote SupportNodePidsLimit and SupportPodPidsLimit to GA --- .../command-line-tools-reference/feature-gates.md | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 806d7a2021..b6092e7c54 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -152,10 +152,6 @@ different Kubernetes components. | `StartupProbe` | `true` | Beta | 1.18 | | | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 | | `StorageVersionHash` | `true` | Beta | 1.15 | | -| `SupportNodePidsLimit` | `false` | Alpha | 1.14 | 1.14 | -| `SupportNodePidsLimit` | `true` | Beta | 1.15 | | -| `SupportPodPidsLimit` | `false` | Alpha | 1.10 | 1.13 | -| `SupportPodPidsLimit` | `true` | Beta | 1.14 | | | `Sysctls` | `true` | Beta | 1.11 | | | `TokenRequest` | `false` | Alpha | 1.10 | 1.11 | | `TokenRequest` | `true` | Beta | 1.12 | | @@ -290,6 +286,12 @@ different Kubernetes components. 
| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 | | `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 | | `SupportIPVSProxyMode` | `true` | GA | 1.11 | - | +| `SupportNodePidsLimit` | `false` | Alpha | 1.14 | 1.14 | +| `SupportNodePidsLimit` | `true` | Beta | 1.15 | 1.19 | +| `SupportNodePidsLimit` | `true` | GA | 1.20 | - | +| `SupportPodPidsLimit` | `false` | Alpha | 1.10 | 1.13 | +| `SupportPodPidsLimit` | `true` | Beta | 1.14 | 1.19 | +| `SupportPodPidsLimit` | `true` | GA | 1.20 | - | | `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 | | `TaintBasedEvictions` | `true` | Beta | 1.13 | 1.17 | | `TaintBasedEvictions` | `true` | GA | 1.18 | - | @@ -531,6 +533,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `SupportIPVSProxyMode`: Enable providing in-cluster service load balancing using IPVS. See [service proxies](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) for more details. - `SupportPodPidsLimit`: Enable the support to limiting PIDs in Pods. +- `SupportNodePidsLimit`: Enable the support to limiting PIDs on the Node. The parameter `pid=` in the `--system-reserved` and `--kube-reserved` options can be specified to ensure that the specified number of process IDs will be reserved for the system as a whole and for Kubernetes system daemons respectively. - `Sysctls`: Enable support for namespaced kernel parameters (sysctls) that can be set for each pod. See [sysctls](/docs/tasks/administer-cluster/sysctl-cluster/) for more details. - `TaintBasedEvictions`: Enable evicting pods from nodes based on taints on nodes and tolerations on Pods. From 7b7ed6bb10a6a6aa9b58ef2bb6612833ed7482cc Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Thu, 17 Sep 2020 17:00:28 +0000 Subject: [PATCH 04/74] documentation for pid limiting functionality --- .../manage-resources-containers.md | 4 + .../en/docs/concepts/policy/pid-limiting.md | 99 +++++++++++++++++++ .../concepts/policy/pod-security-policy.md | 2 +- .../docs/concepts/policy/resource-quotas.md | 2 +- 4 files changed, 105 insertions(+), 2 deletions(-) create mode 100644 content/en/docs/concepts/policy/pid-limiting.md diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md index 1f907ad0d0..cb5b13efe4 100644 --- a/content/en/docs/concepts/configuration/manage-resources-containers.md +++ b/content/en/docs/concepts/configuration/manage-resources-containers.md @@ -594,6 +594,10 @@ spec: example.com/foo: 1 ``` +## PID limiting + +Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See [Pid Limiting](/docs/concepts/policy/pid-limiting/) for information. + ## Troubleshooting ### My Pods are pending with event message failedScheduling diff --git a/content/en/docs/concepts/policy/pid-limiting.md b/content/en/docs/concepts/policy/pid-limiting.md new file mode 100644 index 0000000000..f29484b34d --- /dev/null +++ b/content/en/docs/concepts/policy/pid-limiting.md @@ -0,0 +1,99 @@ +--- +reviewers: +- derekwaynecarr +title: Process ID Limits And Reservations +content_type: concept +weight: 40 +--- + + + +{{< feature-state for_k8s_version="v1.20" state="stable" >}} + +Kubernetes allow you to limit the number of process IDs (PIDs) that a {{< glossary_tooltip term_id="Pod" text="Pod" >}} can use. 
+You can also reserve a number of allocatable PIDs for each {{< glossary_tooltip term_id="node" text="node" >}} +for use by the operating system and daemons (rather than by Pods). + + + +Process IDs (PIDs) are a fundamental resource on nodes. It is trivial to hit the +task limit without hitting any other resource limits, which can then cause +instability to a host machine. + +Cluster administrators require mechanisms to ensure that Pods running in the +cluster cannot induce PID exhaustion that prevents host daemons (such as the +{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} or +{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}, +and potentially also the container runtime) from running. +In addition, it is important to ensure that PIDs are limited among Pods in order +to ensure they have limited impact on other workloads on the same node. + +{{< note >}} +On certain Linux installations, the operating system sets the PIDs limit to a low default, +such as `32768`. Consider raising the value of `/proc/sys/kernel/pid_max`. +{{< /note >}} + +You can configure a kubelet to limit the number of PIDs a given pod can consume. +For example, if your node's host OS is set to use a maximum of `262144` PIDs and +expect to host less than `250` pods, one can give each pod a budget of `1000` +PIDs to prevent using up that node's overall number of available PIDs. If the +admin wants to overcommit PIDs similar to CPU or memory, they may do so as well +with some additional risks. Either way, a single pod will not be able to bring +the whole machine down. This kind of resource limiting helps to prevent simple +fork bombs from affecting operation of an entire cluster. + +Per-pod PID limiting allows administrators to protect one pod from another, but +does not ensure that all Pods scheduled onto that host are unable to impact the node overall. +Per-Pod limiting also does not protect the node agents themselves from PID exhaustion. + +You can also reserve an amount of PIDs for node overhead, separate from the +allocation to Pods. This is similar to how you can reserve CPU, memory, or other +resources for use by the operating system and other facilities outside of Pods +and their containers. + +PID limiting is a an important sibling to [compute +resource](/docs/concepts/configuration/manage-resources-containers/) requests +and limits. However, you specify it in a different way: rather than defining a +Pod's resource limit in the `.spec` for a Pod, you configure the limit as a +setting on the kubelet. Pod-defined PID limits are not currently supported. + +{{< caution >}} +This means that the limit that applies to a Pod may be different depending on +where the Pod is scheduled. To make things simple, it's easiest if all Nodes use +the same PID resource limits and reservations. +{{< /caution >}} + +## Node PID limits + +Kubernetes allows you to reserve a number of process IDs for the system use. To +configure the reservation, use the parameter `pid=` in the +`--system-reserved` and `--kube-reserved` command line options to the kubelet. +The value you specified declares that the specified number of process IDs will +be reserved for the system as a whole and for Kubernetes system daemons +respectively. + +{{< note >}} +Before Kubernetes version 1.20, PID resource limiting with Node-level +reservations required enabling the [feature +gate](/docs/reference/command-line-tools-reference/feature-gates/) +`SupportNodePidsLimit` to work. 
+{{< /note >}} + +## Pod PID limits + +Kubernetes allows you to limit the number of processes running in a Pod. You +specify this limit at the node level, rather than configuring it as a resource +limit for a particular Pod. Each Node can have a different PID limit. +To configure the limit, you can specify the command line parameter `--pod-max-pids` to the kubelet, or set `PodPidsLimit` in the kubelet [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/). + +{{< note >}} +Before Kubernetes version 1.20, PID resource limiting for Pods required enabling +the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +`SupportPodPidsLimit` to work. +{{< /note >}} + +## {{% heading "whatsnext" %}} + +- Refer to the [PID Limiting enhancement document](https://github.com/kubernetes/enhancements/blob/097b4d8276bc9564e56adf72505d43ce9bc5e9e8/keps/sig-node/20190129-pid-limiting.md) for more information. +- For historical context, read [Process ID Limiting for Stability Improvements in Kubernetes 1.14](/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/). +- Read [Managing Resources for Containers](/docs/concepts/configuration/manage-resources-containers/). diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index 22cac5f2a2..6ef71ec733 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -4,7 +4,7 @@ reviewers: - tallclair title: Pod Security Policies content_type: concept -weight: 20 +weight: 30 --- diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 6a29cb1741..a02bd60d40 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -3,7 +3,7 @@ reviewers: - derekwaynecarr title: Resource Quotas content_type: concept -weight: 10 +weight: 20 --- From 957ad74c2ee43138833b6205a6826a88f499d3f6 Mon Sep 17 00:00:00 2001 From: Anna Jung Date: Fri, 25 Sep 2020 13:55:01 -0500 Subject: [PATCH 05/74] Update config.toml to show 1.20 as the current version --- config.toml | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/config.toml b/config.toml index 0915eeb4c3..756ddf13c4 100644 --- a/config.toml +++ b/config.toml @@ -115,10 +115,10 @@ time_format_blog = "Monday, January 02, 2006" description = "Production-Grade Container Orchestration" showedit = true -latest = "v1.19" +latest = "v1.20" -fullversion = "v1.19.0" -version = "v1.19" +fullversion = "v1.20.0" +version = "v1.20" githubbranch = "master" docsbranch = "master" deprecated = false @@ -156,12 +156,19 @@ js = [ "script" ] +[[params.versions]] +fullversion = "v1.20.0" +version = "v1.20" +githubbranch = "v1.20.0" +docsbranch = "master" +url = "https://kubernetes.io" + [[params.versions]] fullversion = "v1.19.0" version = "v1.19" githubbranch = "v1.19.0" docsbranch = "master" -url = "https://kubernetes.io" +url = "https://v1-19.docs.kubernetes.io" [[params.versions]] fullversion = "v1.18.8" @@ -184,13 +191,6 @@ githubbranch = "v1.16.14" docsbranch = "release-1.16" url = "https://v1-16.docs.kubernetes.io" -[[params.versions]] -fullversion = "v1.15.12" -version = "v1.15" -githubbranch = "v1.15.12" -docsbranch = "release-1.15" -url = "https://v1-15.docs.kubernetes.io" - # User interface configuration [params.ui] From 096499936e42f6ef0c1b5839cbb2493007958ec3 Mon Sep 17 
00:00:00 2001 From: Matthias Bertschy Date: Wed, 7 Oct 2020 08:39:48 +0200 Subject: [PATCH 06/74] Promote startupProbe to GA in 1.20 --- content/en/docs/concepts/workloads/pods/pod-lifecycle.md | 2 +- .../reference/command-line-tools-reference/feature-gates.md | 5 +++-- .../docs/reference/command-line-tools-reference/kubelet.md | 1 - 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md index 9dc37c357c..7d4a7e0071 100644 --- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md @@ -314,7 +314,7 @@ to stop. ### When should you use a startup probe? -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.20" state="stable" >}} Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness interval, you can configure diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 942fa2221b..024029fbcf 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -148,8 +148,6 @@ different Kubernetes components. | `ServiceNodeExclusion` | `true` | Beta | 1.19 | | | `ServiceTopology` | `false` | Alpha | 1.17 | | | `SetHostnameAsFQDN` | `false` | Alpha | 1.19 | | -| `StartupProbe` | `false` | Alpha | 1.16 | 1.17 | -| `StartupProbe` | `true` | Beta | 1.18 | | | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 | | `StorageVersionHash` | `true` | Beta | 1.15 | | | `Sysctls` | `true` | Beta | 1.11 | | @@ -277,6 +275,9 @@ different Kubernetes components. | `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 | | `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 | | `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | - | +| `StartupProbe` | `false` | Alpha | 1.16 | 1.17 | +| `StartupProbe` | `true` | Beta | 1.18 | 1.19 | +| `StartupProbe` | `true` | GA | 1.20 | - | | `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | | `StorageObjectInUseProtection` | `true` | GA | 1.11 | - | | `StreamingProxyRedirects` | `false` | Beta | 1.5 | 1.5 | diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index 2a123796a1..38d5026cb7 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -617,7 +617,6 @@ ServiceAppProtocol=true|false (BETA - default=true)
ServiceNodeExclusion=true|false (BETA - default=true)
ServiceTopology=true|false (ALPHA - default=false)
SetHostnameAsFQDN=true|false (ALPHA - default=false)
-StartupProbe=true|false (BETA - default=true)
StorageVersionHash=true|false (BETA - default=true)
SupportNodePidsLimit=true|false (BETA - default=true)
SupportPodPidsLimit=true|false (BETA - default=true)
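To illustrate the startup probe behaviour described in the `pod-lifecycle.md` change above, here is a minimal sketch of a slow-starting container that gets up to 300 seconds (30 * 10) to come up before the regular liveness probe takes over. The `/healthz` path, the port name, and the port number are illustrative assumptions, not part of this patch.

```yaml
ports:
- name: liveness-port
  containerPort: 8080   # assumed application port
livenessProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30   # allows 30 * periodSeconds for startup
  periodSeconds: 10
```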
From f37f4732102a85f1dd36c72d84c904fc9c7a1827 Mon Sep 17 00:00:00 2001 From: Han Kang Date: Fri, 24 Jul 2020 11:10:42 -0700 Subject: [PATCH 07/74] add documentation for system:monitoring rbac policy --- content/en/docs/reference/access-authn-authz/rbac.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 2be833826c..464cd3f4ff 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -801,7 +801,12 @@ This is commonly used by add-on API servers for unified authentication and autho None Allows access to the resources required by most dynamic volume provisioners. - + +system:monitoring +system:monitoring group +Allows read access to control-plane monitoring endpoints (i.e. {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} liveness and readiness endpoints (/healthz, /livez, /readyz), the individual health-check endpoints (/healthz/*, /livez/*, /readyz/*), and /metrics). Note that individual health check endpoints and the metric endpoint may expose sensitive information. + + ### Roles for built-in controllers {#controller-roles} From be23194dadd22e9a7c7e024159e5a2f42f024b57 Mon Sep 17 00:00:00 2001 From: Dan Winship Date: Thu, 15 Oct 2020 21:08:54 -0400 Subject: [PATCH 08/74] SCTP is GA in 1.20 --- .../services-networking/network-policies.md | 11 --- .../concepts/services-networking/service.md | 75 ++++++++----------- .../feature-gates.md | 5 +- 3 files changed, 35 insertions(+), 56 deletions(-) diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index 774d7b7808..78836074e7 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -208,17 +208,6 @@ You can create a "default" policy for a namespace which prevents all ingress AND This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic. -## SCTP support - -{{< feature-state for_k8s_version="v1.19" state="beta" >}} - -As a beta feature, this is enabled by default. To disable SCTP at a cluster level, you (or your cluster administrator) will need to disable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=false,…`. -When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`. - -{{< note >}} -You must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that supports SCTP protocol NetworkPolicies. -{{< /note >}} - # What you CAN'T do with network policies (at least, not yet) As of Kubernetes 1.20, the following functionality does not exist in the NetworkPolicy API, but you might be able to implement workarounds using Operating System components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress controllers, Service Mesh implementations) or admission controllers. In case you are new to network security in Kubernetes, its worth noting that the following User Stories cannot (yet) be implemented using the NetworkPolicy API. Some (but not all) of these user stories are actively being discussed for future releases of the NetworkPolicy API. 
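As a sketch of the namespace-wide "default" deny policy mentioned above: an empty `podSelector` selects every Pod in the namespace, and listing both policy types blocks all ingress and egress for Pods not matched by any other NetworkPolicy. The policy name is illustrative.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all   # illustrative name
spec:
  podSelector: {}           # selects every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```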
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 83b7850364..31dd0aca5c 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -579,8 +579,8 @@ status: Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced. For LoadBalancer type of Services, when there is more than one port defined, all -ports must have the same protocol and the protocol must be one of `TCP`, `UDP`, -and `SCTP`. +ports must have the same protocol, and the protocol must be one which is supported +by the cloud provider. Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified, @@ -588,11 +588,6 @@ the loadBalancer is set up with an ephemeral IP address. If you specify a `loadB but your cloud provider does not support the feature, the `loadbalancerIP` field that you set is ignored. -{{< note >}} -If you're using SCTP, see the [caveat](#caveat-sctp-loadbalancer-service-type) below about the -`LoadBalancer` Service type. -{{< /note >}} - {{< note >}} On **Azure**, if you want to use a user-specified public type `loadBalancerIP`, you first need @@ -1184,6 +1179,36 @@ You can use TCP for any kind of Service, and it's the default network protocol. You can use UDP for most Services. For type=LoadBalancer Services, UDP support depends on the cloud provider offering this facility. +### SCTP + +{{< feature-state for_k8s_version="v1.20" state="stable" >}} + +When using a network plugin that supports SCTP traffic, you can use SCTP for +most Services. For type=LoadBalancer Services, SCTP support depends on the cloud +provider offering this facility. (Most do not). + +#### Warnings {#caveat-sctp-overview} + +##### Support for multihomed SCTP associations {#caveat-sctp-multihomed} + +{{< warning >}} +The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod. + +NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules. +{{< /warning >}} + +##### Windows {#caveat-sctp-windows-os} + +{{< note >}} +SCTP is not supported on Windows based nodes. +{{< /note >}} + +##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace} + +{{< warning >}} +The kube-proxy does not support the management of SCTP associations when it is in userspace mode. +{{< /warning >}} + ### HTTP If your cloud provider supports it, you can use a Service in LoadBalancer mode @@ -1211,42 +1236,6 @@ PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n followed by the data from the client. -### SCTP - -{{< feature-state for_k8s_version="v1.19" state="beta" >}} - -Kubernetes supports SCTP as a `protocol` value in Service, Endpoints, EndpointSlice, NetworkPolicy and Pod definitions. As a beta feature, this is enabled by default. To disable SCTP at a cluster level, you (or your cluster administrator) will need to disable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=false,…`. - -When the feature gate is enabled, you can set the `protocol` field of a Service, Endpoints, EndpointSlice, NetworkPolicy or Pod to `SCTP`. 
Kubernetes sets up the network accordingly for the SCTP associations, just like it does for TCP connections. - -#### Warnings {#caveat-sctp-overview} - -##### Support for multihomed SCTP associations {#caveat-sctp-multihomed} - -{{< warning >}} -The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod. - -NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules. -{{< /warning >}} - -##### Service with type=LoadBalancer {#caveat-sctp-loadbalancer-service-type} - -{{< warning >}} -You can only create a Service with `type` LoadBalancer plus `protocol` SCTP if the cloud provider's load balancer implementation supports SCTP as a protocol. Otherwise, the Service creation request is rejected. The current set of cloud load balancer providers (Azure, AWS, CloudStack, GCE, OpenStack) all lack support for SCTP. -{{< /warning >}} - -##### Windows {#caveat-sctp-windows-os} - -{{< warning >}} -SCTP is not supported on Windows based nodes. -{{< /warning >}} - -##### Userspace kube-proxy {#caveat-sctp-kube-proxy-userspace} - -{{< warning >}} -The kube-proxy does not support the management of SCTP associations when it is in userspace mode. -{{< /warning >}} - ## {{% heading "whatsnext" %}} * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 5dc99be46f..b265d24a88 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -136,8 +136,6 @@ different Kubernetes components. | `RunAsGroup` | `true` | Beta | 1.14 | | | `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 | | `RuntimeClass` | `true` | Beta | 1.14 | | -| `SCTPSupport` | `false` | Alpha | 1.12 | 1.18 | -| `SCTPSupport` | `true` | Beta | 1.19 | | | `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 | | `ServiceAppProtocol` | `true` | Beta | 1.19 | | | `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | @@ -273,6 +271,9 @@ different Kubernetes components. | `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | | `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | 1.16 | | `ScheduleDaemonSetPods` | `true` | GA | 1.17 | - | +| `SCTPSupport` | `false` | Alpha | 1.12 | 1.18 | +| `SCTPSupport` | `true` | Beta | 1.19 | 1.19 | +| `SCTPSupport` | `true` | GA | 1.20 | - | | `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 | | `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 | | `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | - | From 6fc4e102b89326cef416e4b6c47e8bfc5447b812 Mon Sep 17 00:00:00 2001 From: Andrew Keesler Date: Mon, 19 Oct 2020 10:22:38 -0400 Subject: [PATCH 09/74] exec credential provider: cluster info details Signed-off-by: Andrew Keesler --- .../access-authn-authz/authentication.md | 32 +++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index a97dca823f..439b6c5900 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -882,11 +882,20 @@ users: On Fedora: dnf install example-client-go-exec-plugin ... 
+ + # Whether or not to provide cluster information, which could potentially contain + # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO + # environment variable. + provideClusterInfo: true clusters: - name: my-cluster cluster: server: "https://172.17.4.100:6443" certificate-authority: "/etc/kubernetes/ca.pem" + extensions: + - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config + extension: + some-config-per-cluster: config-data # arbitrary config contexts: - name: my-cluster context: @@ -968,3 +977,26 @@ RFC3339 timestamp. Presence or absence of an expiry has the following impact: } ``` +The plugin can optionally be called with an environment variable, `KUBERNETES_EXEC_INFO`, +that contains information about the cluster for which this plugin is obtaining +credentials. This information can be used to perform cluster-specific credential +acquisition logic. In order to enable this behavior, the `provideClusterInfo` field must +be set on the exec user field in the +[kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/). Here is an +example of the aforementioned `KUBERNETES_EXEC_INFO` environment variable. + +```json +{ + "apiVersion": "client.authentication.k8s.io/v1beta1", + "kind": "ExecCredential", + "spec": { + "cluster": { + "server": "https://172.17.4.100:6443", + "certificate-authority-data": "LS0t...", + "config": { + "some-config-per-cluster": "config-data" + } + } + } +} +``` From c29185dac504d11dff176c80c4116ad71bfb4503 Mon Sep 17 00:00:00 2001 From: Javier Diaz-Montes Date: Fri, 2 Oct 2020 17:59:59 -0400 Subject: [PATCH 10/74] Updating doc to reflect that setHostnameAsFQDN feature will be beta in v1.20 --- .../en/docs/concepts/services-networking/dns-pod-service.md | 6 +----- .../reference/command-line-tools-reference/feature-gates.md | 3 ++- 2 files changed, 3 insertions(+), 6 deletions(-) diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index 1a55acf3d8..93474f24fa 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -168,11 +168,7 @@ record unless `publishNotReadyAddresses=True` is set on the Service. ### Pod's setHostnameAsFQDN field {#pod-sethostnameasfqdn-field} -{{< feature-state for_k8s_version="v1.19" state="alpha" >}} - -**Prerequisites**: The `SetHostnameAsFQDN` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) -must be enabled for the -{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} +{{< feature-state for_k8s_version="v1.20" state="beta" >}} When a Pod is configured to have fully qualified domain name (FQDN), its hostname is the short hostname. For example, if you have a Pod with the fully qualified domain name `busybox-1.default-subdomain.my-namespace.svc.cluster-domain.example`, then by default the `hostname` command inside that Pod returns `busybox-1` and the `hostname --fqdn` command returns the FQDN. diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 5dc99be46f..ab8fef3d48 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -148,7 +148,8 @@ different Kubernetes components. 
| `ServiceNodeExclusion` | `false` | Alpha | 1.8 | 1.18 | | `ServiceNodeExclusion` | `true` | Beta | 1.19 | | | `ServiceTopology` | `false` | Alpha | 1.17 | | -| `SetHostnameAsFQDN` | `false` | Alpha | 1.19 | | +| `SetHostnameAsFQDN` | `false` | Alpha | 1.19 | 1.19 | +| `SetHostnameAsFQDN` | `true` | Beta | 1.20 | | | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 | | `StorageVersionHash` | `true` | Beta | 1.15 | | | `Sysctls` | `true` | Beta | 1.11 | | From 72a66b696911d6839d8b28844c4c03fd76b5dde9 Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Fri, 23 Oct 2020 20:57:54 +0000 Subject: [PATCH 11/74] RuntimeClass GA --- content/en/docs/concepts/containers/runtime-class.md | 4 ++-- .../en/docs/concepts/scheduling-eviction/pod-overhead.md | 2 +- .../reference/access-authn-authz/admission-controllers.md | 2 +- .../reference/command-line-tools-reference/feature-gates.md | 6 ++++-- .../windows/user-guide-windows-containers.md | 2 +- 5 files changed, 9 insertions(+), 7 deletions(-) diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index 6589590bc4..06fc69e336 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -9,7 +9,7 @@ weight: 20 -{{< feature-state for_k8s_version="v1.14" state="beta" >}} +{{< feature-state for_k8s_version="v1.20" state="GA" >}} This page describes the RuntimeClass resource and runtime selection mechanism. @@ -66,7 +66,7 @@ The RuntimeClass resource currently only has 2 significant fields: the RuntimeCl (`metadata.name`) and the handler (`handler`). The object definition looks like this: ```yaml -apiVersion: node.k8s.io/v1beta1 # RuntimeClass is defined in the node.k8s.io API group +apiVersion: node.k8s.io/v1 # RuntimeClass is defined in the node.k8s.io API group kind: RuntimeClass metadata: name: myclass # The name the RuntimeClass will be referenced by diff --git a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md index 5eced7954f..989db38a7f 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md @@ -48,7 +48,7 @@ that uses around 120MiB per Pod for the virtual machine and the guest OS: ```yaml --- kind: RuntimeClass -apiVersion: node.k8s.io/v1beta1 +apiVersion: node.k8s.io/v1 metadata: name: kata-fc handler: kata-fc diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index b4823245d3..8ea7e1accd 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -725,7 +725,7 @@ See the [resourceQuota design doc](https://git.k8s.io/community/contributors/des ### RuntimeClass {#runtimeclass} -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.18" state="beta" >}} For [RuntimeClass](/docs/concepts/containers/runtime-class/) definitions which describe an overhead associated with running a pod, this admission controller will set the pod.Spec.Overhead field accordingly. 
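For context, a fuller sketch of the `kata-fc` RuntimeClass shown in the Pod Overhead example above, with its `overhead` stanza spelled out; the exact CPU and memory figures are illustrative values for the roughly 120MiB per-Pod overhead described earlier.

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-fc
handler: kata-fc
overhead:
  podFixed:
    memory: "120Mi"   # approximate per-Pod VM and guest OS overhead
    cpu: "250m"       # illustrative value
```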
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index bdd1850694..0d441ffb98 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -128,14 +128,13 @@ different Kubernetes components. | `PodDisruptionBudget` | `false` | Alpha | 1.3 | 1.4 | | `PodDisruptionBudget` | `true` | Beta | 1.5 | | | `PodOverhead` | `false` | Alpha | 1.16 | - | +| `PodOverhead` | `true` | Beta | 1.18 | - | | `ProcMountType` | `false` | Alpha | 1.12 | | | `QOSReserved` | `false` | Alpha | 1.11 | | | `RemainingItemCount` | `false` | Alpha | 1.15 | | | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 | | `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | | | `RunAsGroup` | `true` | Beta | 1.14 | | -| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 | -| `RuntimeClass` | `true` | Beta | 1.14 | | | `ServiceAppProtocol` | `false` | Alpha | 1.18 | 1.18 | | `ServiceAppProtocol` | `true` | Beta | 1.19 | | | `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | @@ -269,6 +268,9 @@ different Kubernetes components. | `ResourceQuotaScopeSelectors` | `true` | GA | 1.17 | - | | `RotateKubeletClientCertificate` | `true` | Beta | 1.8 | 1.18 | | `RotateKubeletClientCertificate` | `true` | GA | 1.19 | - | +| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 | +| `RuntimeClass` | `true` | Beta | 1.14 | 1.19 | +| `RuntimeClass` | `true` | GA | 1.20 | - | | `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | | `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | 1.16 | | `ScheduleDaemonSetPods` | `true` | GA | 1.17 | - | diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md index e28afeb9f2..4ea12f5ca2 100644 --- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -177,7 +177,7 @@ This label reflects the Windows major, minor, and build number that need to matc 1. Save this file to `runtimeClasses.yml`. It includes the appropriate `nodeSelector` for the Windows OS, architecture, and version. ```yaml -apiVersion: node.k8s.io/v1beta1 +apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows-2019 From ca7cb78cab9157d16ac8e858c8ab4870332b7c0e Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Mon, 26 Oct 2020 15:27:38 -0700 Subject: [PATCH 12/74] Update content/en/docs/concepts/containers/runtime-class.md Co-authored-by: Tim Bannister --- content/en/docs/concepts/containers/runtime-class.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md index 06fc69e336..1ba631e350 100644 --- a/content/en/docs/concepts/containers/runtime-class.md +++ b/content/en/docs/concepts/containers/runtime-class.md @@ -9,7 +9,7 @@ weight: 20 -{{< feature-state for_k8s_version="v1.20" state="GA" >}} +{{< feature-state for_k8s_version="v1.20" state="stable" >}} This page describes the RuntimeClass resource and runtime selection mechanism. @@ -186,4 +186,3 @@ are accounted for in Kubernetes. 
- Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept - [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md) - From ca462f973d0fb1e30cb99d4917b9088c2048f14f Mon Sep 17 00:00:00 2001 From: Shihang Zhang Date: Tue, 27 Oct 2020 10:04:00 -0700 Subject: [PATCH 13/74] add doc for CSIServiceAccountToken --- .../docs/reference/command-line-tools-reference/feature-gates.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index bdd1850694..7ada9a3e3b 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -79,6 +79,7 @@ different Kubernetes components. | `CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | | | `CSIMigrationvSphere` | `false` | Beta | 1.19 | | | `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | | +| `CSIServiceAccountToken` | `false` | Alpha | 1.20 | | | `CSIStorageCapacity` | `false` | Alpha | 1.19 | | | `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | | | `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | | From 6d51948652450024fa5dc120c32457805efaa650 Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Thu, 29 Oct 2020 17:19:11 -0700 Subject: [PATCH 14/74] Update content/en/docs/reference/access-authn-authz/admission-controllers.md Co-authored-by: Tim Bannister --- .../access-authn-authz/admission-controllers.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index 8ea7e1accd..3cfd7543d8 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -727,8 +727,13 @@ See the [resourceQuota design doc](https://git.k8s.io/community/contributors/des {{< feature-state for_k8s_version="v1.18" state="beta" >}} -For [RuntimeClass](/docs/concepts/containers/runtime-class/) definitions which describe an overhead associated with running a pod, -this admission controller will set the pod.Spec.Overhead field accordingly. +If you enable the `PodOverhead` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), and define a RuntimeClass with [Pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/) configured, this admission controller checks incoming +Pods. When enabled, this admission controller rejects any Pod create requests that have the overhead already set. +For Pods that have a RuntimeClass is configured and selected in their `.spec`, this admission controller sets `.spec.overhead` in the Pod based on the value defined in the corresponding RuntimeClass. + +{{< note >}} +The `.spec.overhead` field for Pod and the `.overhead` field for RuntimeClass are both in beta. If you do not enable the `PodOverhead` feature gate, all Pods are treated as if `.spec.overhead` is unset. +{{< /note >}} See also [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) for more information. 
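A minimal sketch of a Pod that selects a RuntimeClass by name, assuming the `myclass` RuntimeClass from the example earlier in this series; the Pod name, container name, and image are placeholders. If that RuntimeClass declares `overhead`, the RuntimeClass admission controller copies it into the Pod's `.spec.overhead` as described above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod                      # placeholder name
spec:
  runtimeClassName: myclass        # must match an existing RuntimeClass
  containers:
  - name: app
    image: registry.example/app:v1 # placeholder image
```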
From 63283f5c310e5cc5cd884f2e431a46c251e6f4df Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Thu, 29 Oct 2020 17:22:26 -0700 Subject: [PATCH 15/74] Update content/en/docs/reference/access-authn-authz/admission-controllers.md --- .../docs/reference/access-authn-authz/admission-controllers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index 3cfd7543d8..ee7984c025 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -725,7 +725,7 @@ See the [resourceQuota design doc](https://git.k8s.io/community/contributors/des ### RuntimeClass {#runtimeclass} -{{< feature-state for_k8s_version="v1.18" state="beta" >}} +{{< feature-state for_k8s_version="v1.20" state="stable" >}} If you enable the `PodOverhead` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), and define a RuntimeClass with [Pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/) configured, this admission controller checks incoming Pods. When enabled, this admission controller rejects any Pod create requests that have the overhead already set. From 06fda8221a5613f69dc839c8e9987146c043e757 Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Thu, 29 Oct 2020 17:23:43 -0700 Subject: [PATCH 16/74] Update content/en/docs/reference/command-line-tools-reference/feature-gates.md --- .../reference/command-line-tools-reference/feature-gates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 0d441ffb98..f289a0bb51 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -127,7 +127,7 @@ different Kubernetes components. | `NonPreemptingPriority` | `true` | Beta | 1.19 | | | `PodDisruptionBudget` | `false` | Alpha | 1.3 | 1.4 | | `PodDisruptionBudget` | `true` | Beta | 1.5 | | -| `PodOverhead` | `false` | Alpha | 1.16 | - | +| `PodOverhead` | `false` | Alpha | 1.16 | 1.17 | | `PodOverhead` | `true` | Beta | 1.18 | - | | `ProcMountType` | `false` | Alpha | 1.12 | | | `QOSReserved` | `false` | Alpha | 1.11 | | From 21362d8e4c77a1a98899b40338111dd5c6b77acc Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Thu, 29 Oct 2020 17:24:02 -0700 Subject: [PATCH 17/74] Update content/en/docs/reference/command-line-tools-reference/feature-gates.md --- .../reference/command-line-tools-reference/feature-gates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index f289a0bb51..a1082be6e6 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -128,7 +128,7 @@ different Kubernetes components. 
| `PodDisruptionBudget` | `false` | Alpha | 1.3 | 1.4 | | `PodDisruptionBudget` | `true` | Beta | 1.5 | | | `PodOverhead` | `false` | Alpha | 1.16 | 1.17 | -| `PodOverhead` | `true` | Beta | 1.18 | - | +| `PodOverhead` | `true` | Beta | 1.18 | | | `ProcMountType` | `false` | Alpha | 1.12 | | | `QOSReserved` | `false` | Alpha | 1.11 | | | `RemainingItemCount` | `false` | Alpha | 1.15 | | From f2ef3d0e80db46a6b5d0ad42ae57321bbc4c592e Mon Sep 17 00:00:00 2001 From: Renaud Gaubert Date: Thu, 29 Oct 2020 18:43:03 -0700 Subject: [PATCH 18/74] Graduate KubeletPodResources to GA Signed-off-by: Renaud Gaubert --- .../extend-kubernetes/compute-storage-net/device-plugins.md | 3 ++- .../reference/command-line-tools-reference/feature-gates.md | 5 +++-- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index 314a01354a..e2b7ab1f34 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -204,7 +204,8 @@ DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a {{< glossary_tooltip term_id="volume" >}} in the plugin's [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core). -Support for the "PodResources service" requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It is enabled by default starting with Kubernetes 1.15. +Support for the "PodResources service" requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. +It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20. ## Device Plugin integration with the Topology Manager diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index bdd1850694..c340a294ea 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -114,8 +114,6 @@ different Kubernetes components. | `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 | | `ImmutableEphemeralVolumes` | `true` | Beta | 1.19 | | | `IPv6DualStack` | `false` | Alpha | 1.16 | | -| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 | -| `KubeletPodResources` | `true` | Beta | 1.15 | | | `LegacyNodeRoleBehavior` | `true` | Alpha | 1.16 | | | `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 | | `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | | @@ -241,6 +239,9 @@ different Kubernetes components. 
| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 | | `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 | | `KubeletPluginsWatcher` | `true` | GA | 1.13 | - | +| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 | +| `KubeletPodResources` | `true` | Beta | 1.15 | | +| `KubeletPodResources` | `true` | GA | 1.20 | | | `MountPropagation` | `false` | Alpha | 1.8 | 1.9 | | `MountPropagation` | `true` | Beta | 1.10 | 1.11 | | `MountPropagation` | `true` | GA | 1.12 | - | From 99ecc57389c4a22f28efcd6704a83c7bb3ec1be7 Mon Sep 17 00:00:00 2001 From: Renaud Gaubert Date: Thu, 29 Oct 2020 18:50:08 -0700 Subject: [PATCH 19/74] Graduate DisableAcceleratorUsageMetrics to beta Signed-off-by: Renaud Gaubert --- .../docs/concepts/cluster-administration/system-metrics.md | 2 +- .../reference/command-line-tools-reference/feature-gates.md | 5 +++-- .../docs/reference/command-line-tools-reference/kubelet.md | 2 +- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/content/en/docs/concepts/cluster-administration/system-metrics.md b/content/en/docs/concepts/cluster-administration/system-metrics.md index 3dba687f4e..0db05c8b65 100644 --- a/content/en/docs/concepts/cluster-administration/system-metrics.md +++ b/content/en/docs/concepts/cluster-administration/system-metrics.md @@ -104,7 +104,7 @@ The kubelet collects accelerator metrics through cAdvisor. To collect these metr The responsibility for collecting accelerator metrics now belongs to the vendor rather than the kubelet. Vendors must provide a container that collects metrics and exposes them to the metrics service (for example, Prometheus). -The [`DisableAcceleratorUsageMetrics` feature gate](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false) disables metrics collected by the kubelet, with a [timeline for enabling this feature by default](https://github.com/kubernetes/enhancements/tree/411e51027db842355bd489691af897afc1a41a5e/keps/sig-node/1867-disable-accelerator-usage-metrics#graduation-criteria). +The [`DisableAcceleratorUsageMetrics` feature gate](/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features:~:text= DisableAcceleratorUsageMetrics,-false) disables metrics collected by the kubelet. ## Component metrics diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index bdd1850694..a5ffe35a93 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -88,7 +88,8 @@ different Kubernetes components. | `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | | | `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 | | `DevicePlugins` | `true` | Beta | 1.10 | | -| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.20 | +| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 | +| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | 1.22 | | `DryRun` | `false` | Alpha | 1.12 | 1.12 | | `DryRun` | `true` | Beta | 1.13 | | | `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 | @@ -429,7 +430,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `CustomResourceWebhookConversion`: Enable webhook-based conversion on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/). 
troubleshoot a running Pod. -- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/). +- `DisableAcceleratorUsageMetrics`: [Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics). - `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/) based resource provisioning on nodes. - `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md index 38d5026cb7..aea0e23b5b 100644 --- a/content/en/docs/reference/command-line-tools-reference/kubelet.md +++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md @@ -579,7 +579,7 @@ ConfigurableFSGroupPolicy=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
DefaultPodTopologySpread=true|false (ALPHA - default=false)
DevicePlugins=true|false (BETA - default=true)
-DisableAcceleratorUsageMetrics=true|false (ALPHA - default=false)
+DisableAcceleratorUsageMetrics=true|false (BETA - default=true)
DynamicKubeletConfig=true|false (BETA - default=true)
EndpointSlice=true|false (BETA - default=true)
EndpointSliceProxying=true|false (BETA - default=true)
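For clusters that still rely on the kubelet's accelerator metrics while vendor-provided collectors are being rolled out, a minimal sketch of opting back in by turning the gate off in the kubelet configuration file instead of on the command line:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # keep collecting the legacy kubelet accelerator metrics
  DisableAcceleratorUsageMetrics: false
```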
From f5a8dbe61e6c05c3999dc15af57380b5e3c47e5a Mon Sep 17 00:00:00 2001 From: Hugo Fonseca <1098293+fonsecas72@users.noreply.github.com> Date: Sat, 31 Oct 2020 17:12:42 +0000 Subject: [PATCH 20/74] HTTP Probe - Documenting about default headers --- ...igure-liveness-readiness-startup-probes.md | 32 +++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index e72c86255a..150fe720e1 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -336,6 +336,8 @@ liveness. Minimum value is 1. try `failureThreshold` times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1. +### HTTP probes + [HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core) have additional fields that can be set on `httpGet`: @@ -357,6 +359,36 @@ and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common case, you should not use `host`, but rather set the `Host` header in `httpHeaders`. +For an HTTP probe, the kubelet sends three request headers in addition to the mandatory `Host` header: +`User-Agent`, `Accept-Encoding` and `Accept`. The default values for these headers are `kube-probe/{{< skew latestVersion >}}` +(where `{{< skew latestVersion >}}` is the version of the kubelet ), `gzip` and `*/*` respectively. + +You can override the default headers by defining `.httpHeaders` for the probe; for example + +```yaml +livenessProbe: + httpHeaders: + Accept: application/json + +startupProbe: + httpHeaders: + User-Agent: MyUserAgent +``` + +You can also remove these two headers by defining them with an empty value. + +```yaml +livenessProbe: + httpHeaders: + Accept: "" + +startupProbe: + httpHeaders: + User-Agent: "" +``` + +### TCP probes + For a TCP probe, the kubelet makes the probe connection at the node, not in the pod, which means that you can not use a service name in the `host` parameter since the kubelet is unable to resolve it. 
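To illustrate the TCP probe behaviour just described, a minimal container-spec sketch; port `8080` is an assumption about the application. The kubelet opens these connections from the node, so no HTTP headers or hostnames are involved.

```yaml
readinessProbe:
  tcpSocket:
    port: 8080          # assumed application port
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```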
From b838012e1e218778c15137711d5994cb08346645 Mon Sep 17 00:00:00 2001 From: Arjun Naik Date: Mon, 2 Nov 2020 20:44:15 +0100 Subject: [PATCH 21/74] Added docs about container resource metric source for HPA (#23523) * Added docs about container resource metric source for HPA Signed-off-by: Arjun Naik * Update content/en/docs/tasks/run-application/horizontal-pod-autoscale.md Co-authored-by: Tim Bannister * Update content/en/docs/tasks/run-application/horizontal-pod-autoscale.md Co-authored-by: Tim Bannister * Update content/en/docs/tasks/run-application/horizontal-pod-autoscale.md Co-authored-by: Guy Templeton Co-authored-by: Tim Bannister Co-authored-by: Guy Templeton --- .../horizontal-pod-autoscale.md | 70 ++++++++++++++++++- 1 file changed, 69 insertions(+), 1 deletion(-) diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 927c146cf5..35111434b2 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -234,6 +234,75 @@ the delay value is set too short, the scale of the replicas set may keep thrashi usual. {{< /note >}} +## Support for resource metrics + +Any HPA target can be scaled based on the resource usage of the pods in the scaling target. +When defining the pod specification the resource requests like `cpu` and `memory` should +be specified. This is used to determine the resource utilization and used by the HPA controller +to scale the target up or down. To use resource utilization based scaling specify a metric source +like this: + +```yaml +type: Resource +resource: + name: cpu + target: + type: Utilization + averageUtilization: 60 +``` +With this metric the HPA controller will keep the average utilization of the pods in the scaling +target at 60%. Utilization is the ratio between the current usage of resource to the requested +resources of the pod. See [Algorithm](#algorithm-details) for more details about how the utilization +is calculated and averaged. + +{{< note >}} +Since the resource usages of all the containers are summed up the total pod utilization may not +accurately represent the individual container resource usage. This could lead to situations where +a single container might be running with high usage and the HPA will not scale out because the overall +pod usage is still within acceptable limits. +{{< /note >}} + +### Container Resource Metrics + +{{< feature-state for_k8s_version="v1.20" state="alpha" >}} + +`HorizontalPodAutoscaler` also supports a container metric source where the HPA can track the +resource usage of individual containers across a set of Pods, in order to scale the target resource. +This lets you configure scaling thresholds for the containers that matter most in a particular Pod. +For example, if you have a web application and a logging sidecar, you can scale based on the resource +use of the web application, ignoring the sidecar container and its resource use. + +If you revise the target resource to have a new Pod specification with a different set of containers, +you should revise the HPA spec if that newly added container should also be used for +scaling. If the specified container in the metric source is not present or only present in a subset +of the pods then those pods are ignored and the recommendation is recalculated. See [Algorithm](#algorithm-details) +for more details about the calculation. 
To use container resources for autoscaling define a metric +source as follows: +```yaml +type: ContainerResource +containerResource: + name: cpu + container: application + target: + type: Utilization + averageUtilization: 60 +``` + +In the above example the HPA controller scales the target such that the average utilization of the cpu +in the `application` container of all the pods is 60%. + +{{< note >}} +If you change the name of a container that a HorizontalPodAutoscaler is tracking, you can +make that change in a specific order to ensure scaling remains available and effective +whilst the change is being applied. Before you update the resource that defines the container +(such as a Deployment), you should update the associated HPA to track both the new and +old container names. This way, the HPA is able to calculate a scaling recommendation +throughout the update process. + +Once you have rolled out the container name change to the workload resource, tidy up by removing +the old container name from the HPA specification. +{{< /note >}} + ## Support for multiple metrics Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API @@ -441,4 +510,3 @@ behavior: * Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md). * kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). * Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). - From ac3d7d5642f4cf3d5d1d4e7b5d76812cba2b9992 Mon Sep 17 00:00:00 2001 From: Aldo Culquicondor Date: Mon, 2 Nov 2020 11:03:47 -0500 Subject: [PATCH 22/74] Graduate default pod topology spread to beta --- .../pods/pod-topology-spread-constraints.md | 23 +++++++++++++++---- .../feature-gates.md | 3 ++- 2 files changed, 21 insertions(+), 5 deletions(-) diff --git a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 28b844d474..4b33db8703 100644 --- a/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/en/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -284,8 +284,6 @@ There are some implicit conventions worth noting here: ### Cluster-level default constraints -{{< feature-state for_k8s_version="v1.19" state="beta" >}} - It is possible to set default topology spread constraints for a cluster. Default topology spread constraints are applied to a Pod if, and only if: @@ -312,6 +310,7 @@ profiles: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: ScheduleAnyway + defaultingType: List ``` {{< note >}} @@ -324,9 +323,9 @@ using default constraints for `PodTopologySpread`. #### Internal default constraints -{{< feature-state for_k8s_version="v1.19" state="alpha" >}} +{{< feature-state for_k8s_version="v1.20" state="beta" >}} -When you enable the `DefaultPodTopologySpread` feature gate, the +With the `DefaultPodTopologySpread` feature gate, enabled by default, the legacy `SelectorSpread` plugin is disabled. kube-scheduler uses the following default topology constraints for the `PodTopologySpread` plugin configuration: @@ -353,6 +352,22 @@ The `PodTopologySpread` plugin does not score the nodes that don't have the topology keys specified in the spreading constraints. 
{{< /note >}} +If you don't want to use the default Pod spreading constraints for your cluster, +you can disable those defaults by setting `defaultingType` to `List` and leaving +empty `defaultConstraints` in the `PodTopologySpread` plugin configuration: + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta1 +kind: KubeSchedulerConfiguration + +profiles: + - pluginConfig: + - name: PodTopologySpread + args: + defaultConstraints: [] + defaultingType: List +``` + ## Comparison with PodAffinity/PodAntiAffinity In Kubernetes, directives related to "Affinity" control how Pods are diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index a5ffe35a93..9b13e34d69 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -85,7 +85,8 @@ different Kubernetes components. | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | | | `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 | | `CustomResourceDefaulting` | `true` | Beta | 1.16 | | -| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | | +| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 | +| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | | | `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 | | `DevicePlugins` | `true` | Beta | 1.10 | | | `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 | From c855d5d68cf05f2fee08db8030ef439ad8fbafa4 Mon Sep 17 00:00:00 2001 From: Andrew Keesler Date: Tue, 3 Nov 2020 12:19:16 -0500 Subject: [PATCH 23/74] exec credential provider: make arbitrary JSON more explicit Signed-off-by: Andrew Keesler --- .../docs/reference/access-authn-authz/authentication.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index 439b6c5900..f727644992 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -895,7 +895,9 @@ clusters: extensions: - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config extension: - some-config-per-cluster: config-data # arbitrary config + arbitrary: config + this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo + you: ["can", "put", "anything", "here"] contexts: - name: my-cluster context: @@ -994,7 +996,9 @@ example of the aforementioned `KUBERNETES_EXEC_INFO` environment variable. 
"server": "https://172.17.4.100:6443", "certificate-authority-data": "LS0t...", "config": { - "some-config-per-cluster": "config-data" + "arbitrary": "config", + "this": "can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo", + "you": ["can", "put", "anything", "here"] } } } From 3590d7300892d8dbf6a1a9dd894ed2540514c9ab Mon Sep 17 00:00:00 2001 From: Shihang Zhang Date: Wed, 4 Nov 2020 11:24:53 -0800 Subject: [PATCH 24/74] TokenRequest and TokenRequestProjection are GA now (#24823) --- .../command-line-tools-reference/feature-gates.md | 10 ++++++---- .../configure-service-account.md | 11 +++++------ 2 files changed, 11 insertions(+), 10 deletions(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 9b13e34d69..303c90aae5 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -153,10 +153,6 @@ different Kubernetes components. | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 | | `StorageVersionHash` | `true` | Beta | 1.15 | | | `Sysctls` | `true` | Beta | 1.11 | | -| `TokenRequest` | `false` | Alpha | 1.10 | 1.11 | -| `TokenRequest` | `true` | Beta | 1.12 | | -| `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 | -| `TokenRequestProjection` | `true` | Beta | 1.12 | | | `TTLAfterFinished` | `false` | Alpha | 1.12 | | | `TopologyManager` | `false` | Alpha | 1.16 | | | `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 | @@ -304,6 +300,12 @@ different Kubernetes components. | `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 | | `TaintNodesByCondition` | `true` | Beta | 1.12 | 1.16 | | `TaintNodesByCondition` | `true` | GA | 1.17 | - | +| `TokenRequest` | `false` | Alpha | 1.10 | 1.11 | +| `TokenRequest` | `true` | Beta | 1.12 | 1.19 | +| `TokenRequest` | `true` | GA | 1.20 | - | +| `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 | +| `TokenRequestProjection` | `true` | Beta | 1.12 | 1.19 | +| `TokenRequestProjection` | `true` | GA | 1.20 | - | | `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 | | `VolumePVCDataSource` | `true` | Beta | 1.16 | 1.17 | | `VolumePVCDataSource` | `true` | GA | 1.18 | - | diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index c1a3a8f1c2..4cd5eaa905 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -286,15 +286,16 @@ TODO: Test and explain how to use additional non-K8s secrets with an existing se ## Service Account Token Volume Projection -{{< feature-state for_k8s_version="v1.12" state="beta" >}} +{{< feature-state for_k8s_version="v1.20" state="stable" >}} {{< note >}} -This ServiceAccountTokenVolumeProjection is __beta__ in 1.12 and -enabled by passing all of the following flags to the API server: +To enable and use token request projection, you must specify each of the following +command line arguments to `kube-apiserver`: * `--service-account-issuer` +* `--service-account-key-file` * `--service-account-signing-key-file` -* `--service-account-api-audiences` +* `--api-audiences` {{< /note >}} @@ -385,5 +386,3 @@ See also: - [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/) - 
[Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190730-oidc-discovery.md)
- [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)
-
-
From 1bcc07a674fc7f92d7f6f3571da2ee9f3966004e Mon Sep 17 00:00:00 2001
From: Ravi Gudimetla
Date: Mon, 9 Nov 2020 08:45:36 +0530
Subject: [PATCH 25/74] Introduce windows-priorityclass flag to kubelet

Introduce windows-priorityclass flag to kubelet
---
 .../windows/intro-windows-in-kubernetes.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 812d86faf7..a7d25016dd 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -270,6 +270,7 @@ The behavior of the flags behave differently as described below:
 * MemoryPressure Condition is not implemented
 * There are no OOM eviction actions taken by the kubelet
 * Kubelet running on the windows node does not have memory restrictions. `--kubelet-reserve` and `--system-reserve` do not set limits on kubelet or processes running on the host. This means kubelet or a process on the host could cause memory resource starvation outside the node-allocatable and scheduler
+* An additional flag to set the priority of the kubelet process is available on the Windows nodes called `--windows-priorityclass`. This flag allows the kubelet process to get more CPU time slices when compared to other processes running on the Windows host. More information on the allowable values and their meaning is available at [Windows Priority Classes](https://docs.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities#priority-class). In order for the kubelet to always have enough CPU cycles, it is recommended to set this flag to `ABOVE_NORMAL_PRIORITY_CLASS` or above

 #### Storage

From 179c821b0229f04b5b7eb8c92c65ebee9c134f57 Mon Sep 17 00:00:00 2001
From: Lee Verberne
Date: Mon, 9 Nov 2020 13:46:25 +0100
Subject: [PATCH 26/74] Update kubectl debug docs for 1.20 release (#24847)

* Update kubectl debug docs for 1.20 release

* Apply suggestions from code review

Co-authored-by: Zach Corleissen

Co-authored-by: Zach Corleissen
---
 .../workloads/pods/ephemeral-containers.md | 2 +-
 .../debug-running-pod.md | 181 ++++++++++++++++--
 2 files changed, 166 insertions(+), 17 deletions(-)

diff --git a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
index c1852df707..153f2cf3ae 100644
--- a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
+++ b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
@@ -90,7 +90,7 @@ enabled, and Kubernetes client and server version v1.16 or later.
 {{< /note >}}
 
 The examples in this section demonstrate how ephemeral containers appear in
-the API. You would normally use `kubectl alpha debug` or another `kubectl`
+the API. You would normally use `kubectl debug` or another `kubectl`
 [plugin](/docs/tasks/extend-kubectl/kubectl-plugins/) to automate these steps
 rather than invoking the API directly.
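The `--windows-priorityclass` flag introduced in the Windows patch above is described only in prose. As a rough sketch, on a Windows node where the kubelet is started from a script or service wrapper with explicit arguments, the flag is simply appended to the existing argument list; the binary path and the other flags below are assumptions for illustration, not part of that patch:

```powershell
# Illustration only: the paths and the other flags are assumptions;
# only --windows-priorityclass and its recommended value come from the
# Windows node documentation above.
C:\k\kubelet.exe --windows-priorityclass=ABOVE_NORMAL_PRIORITY_CLASS --config=C:\k\kubelet-config.yaml --kubeconfig=C:\k\kubeconfig
```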
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md index 5e67585705..18fc046dec 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -91,18 +91,16 @@ The examples in this section require the `EphemeralContainers` [feature gate]( cluster and `kubectl` version v1.18 or later. {{< /note >}} -You can use the `kubectl alpha debug` command to add ephemeral containers to a +You can use the `kubectl debug` command to add ephemeral containers to a running Pod. First, create a pod for the example: ```shell kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never ``` -{{< note >}} -This section use the `pause` container image in examples because it does not +The examples in this section use the `pause` container image because it does not contain userland debugging utilities, but this method works with all container images. -{{< /note >}} If you attempt to use `kubectl exec` to create a shell you will see an error because there is no shell in this container image. @@ -115,12 +113,12 @@ kubectl exec -it ephemeral-demo -- sh OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown ``` -You can instead add a debugging container using `kubectl alpha debug`. If you +You can instead add a debugging container using `kubectl debug`. If you specify the `-i`/`--interactive` argument, `kubectl` will automatically attach to the console of the Ephemeral Container. ```shell -kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo +kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo ``` ``` @@ -172,20 +170,171 @@ Use `kubectl delete` to remove the Pod when you're finished: kubectl delete pod ephemeral-demo ``` - +### Copying a Pod while adding a new container + +Adding a new container can be useful when your application is running but not +behaving as you expect and you'd like to add additional troubleshooting +utilities to the Pod. + +For example, maybe your application's container images are built on `busybox` +but you need debugging utilities not included in `busybox`. You can simulate +this scenario using `kubectl run`: + +```shell +kubectl run myapp --image=busybox --restart=Never -- sleep 1d +``` + +Run this command to create a copy of `myapp` named `myapp-copy` that adds a +new Ubuntu container for debugging: + +```shell +kubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug +``` + +``` +Defaulting debug container name to debugger-w7xmf. +If you don't see a command prompt, try pressing enter. +root@myapp-debug:/# +``` + +{{< note >}} +* `kubectl debug` automatically generates a container name if you don't choose + one using the `--container` flag. +* The `-i` flag causes `kubectl debug` to attach to the new container by + default. You can prevent this by specifying `--attach=false`. If your session + becomes disconnected you can reattach using `kubectl attach`. +* The `--share-processes` allows the containers in this Pod to see processes + from the other containers in the Pod. For more information about how this + works, see [Share Process Namespace between Containers in a Pod]( + /docs/tasks/configure-pod-container/share-process-namespace/). 
+{{< /note >}} + +Don't forget to clean up the debugging Pod when you're finished with it: + +```shell +kubectl delete pod myapp myapp-debug +``` + +### Copying a Pod while changing its command + +Sometimes it's useful to change the command for a container, for example to +add a debugging flag or because the application is crashing. + +To simulate a crashing application, use `kubectl run` to create a container +that immediately exists: + +``` +kubectl run --image=busybox myapp -- false +``` + +You can see using `kubectl describe pod myapp` that this container is crashing: + +``` +Containers: + myapp: + Image: busybox + ... + Args: + false + State: Waiting + Reason: CrashLoopBackOff + Last State: Terminated + Reason: Error + Exit Code: 1 +``` + +You can use `kubectl debug` to create a copy of this Pod with the command +changed to an interactive shell: + +``` +kubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh +``` + +``` +If you don't see a command prompt, try pressing enter. +/ # +``` + +Now you have an interactive shell that you can use to perform tasks like +checking filesystem paths or running the container command manually. + +{{< note >}} +* To change the command of a specific container you must + specify its name using `--container` or `kubectl debug` will instead + create a new container to run the command you specified. +* The `-i` flag causes `kubectl debug` to attach to the container by default. + You can prevent this by specifying `--attach=false`. If your session becomes + disconnected you can reattach using `kubectl attach`. +{{< /note >}} + +Don't forget to clean up the debugging Pod when you're finished with it: + +```shell +kubectl delete pod myapp myapp-debug +``` + +### Copying a Pod while changing container images + +In some situations you may want to change a misbehaving Pod from its normal +production container images to an image containing a debugging build or +additional utilities. + +As an example, create a Pod using `kubectl run`: + +``` +kubectl run myapp --image=busybox --restart=Never -- sleep 1d +``` + +Now use `kubectl debug` to make a copy and change its container image +to `ubuntu`: + +``` +kubectl debug myapp --copy-to=myapp-debug --set-image=*=ubuntu +``` + +The syntax of `--set-image` uses the same `container_name=image` syntax as +`kubectl set image`. `*=ubuntu` means change the image of all containers +to `ubuntu`. + +Don't forget to clean up the debugging Pod when you're finished with it: + +```shell +kubectl delete pod myapp myapp-debug +``` ## Debugging via a shell on the node {#node-shell-session} -If none of these approaches work, you can find the host machine that the pod is -running on and SSH into that host, but this should generally not be necessary -given tools in the Kubernetes API. Therefore, if you find yourself needing to -ssh into a machine, please file a feature request on GitHub describing your use -case and why these tools are insufficient. +If none of these approaches work, you can find the Node on which the Pod is +running and create a privileged Pod running in the host namespaces. To create +an interactive shell on a node using `kubectl debug`, run: +```shell +kubectl debug node/mynode -it --image=ubuntu +``` +``` +Creating debugging pod node-debugger-mynode-pdx84 with container debugger on node mynode. +If you don't see a command prompt, try pressing enter. 
+root@ek8s:/# +``` + +When creating a debugging session on a node, keep in mind that: + +* `kubectl debug` automatically generates the name of the new Pod based on + the name of the Node. +* The container runs in the host IPC, Network, and PID namespaces. +* The root filesystem of the Node will be mounted at `/host`. + +Don't forget to clean up the debugging Pod when you're finished with it: + +```shell +kubectl delete pod node-debugger-mynode-pdx84 +``` From c5ffbec1eae4c62f8536486aa7dba0444b299bbc Mon Sep 17 00:00:00 2001 From: Matthew Cary Date: Thu, 5 Nov 2020 19:50:39 +0000 Subject: [PATCH 27/74] placeholder CL for fsgroup policy beta --- .../reference/command-line-tools-reference/feature-gates.md | 3 ++- .../en/docs/tasks/configure-pod-container/security-context.md | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 303c90aae5..989189180f 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -81,7 +81,8 @@ different Kubernetes components. | `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | | | `CSIStorageCapacity` | `false` | Alpha | 1.19 | | | `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | | -| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | | +| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 | +| `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | | | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | | | `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 | | `CustomResourceDefaulting` | `true` | Beta | 1.16 | | diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index 10afee8640..104dc0003f 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -149,7 +149,7 @@ exit ## Configure volume permission and ownership change policy for Pods -{{< feature-state for_k8s_version="v1.18" state="alpha" >}} +{{< feature-state for_k8s_version="v1.20" state="beta" >}} By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the `fsGroup` specified in a Pod's `securityContext` when that volume is From 3b68d53372289d4b81f4a9f79c583c02498138b0 Mon Sep 17 00:00:00 2001 From: Adhityaa Chandrasekar Date: Thu, 5 Nov 2020 15:31:09 +0000 Subject: [PATCH 28/74] flow control metrics: switch to snake_case for labels Signed-off-by: Adhityaa Chandrasekar --- .../cluster-administration/flow-control.md | 37 +++++++++++-------- 1 file changed, 22 insertions(+), 15 deletions(-) diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index 5b5653260a..8ec8d5f8da 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -331,6 +331,13 @@ PriorityLevelConfigurations. ### Metrics +{{< note >}} +In versions of Kubernetes before v1.20, the labels `flow_schema` and +`priority_level` were inconsistently named `flowSchema` and `priorityLevel`, +respectively. If you're running Kubernetes versions v1.19 and earlier, you +should refer to the documentation for your version. 
+{{< /note >}} + When you enable the API Priority and Fairness feature, the kube-apiserver exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling important traffic, or find @@ -338,8 +345,8 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_rejected_requests_total` is a counter vector (cumulative since server start) of requests that were rejected, - broken down by the labels `flowSchema` (indicating the one that - matched the request), `priorityLevel` (indicating the one to which + broken down by the labels `flow_schema` (indicating the one that + matched the request), `priority_level` (indicating the one to which the request was assigned), and `reason`. The `reason` label will be have one of the following values: * `queue-full`, indicating that too many requests were already @@ -352,8 +359,8 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_dispatched_requests_total` is a counter vector (cumulative since server start) of requests that began - executing, broken down by the labels `flowSchema` (indicating the - one that matched the request) and `priorityLevel` (indicating the + executing, broken down by the labels `flow_schema` (indicating the + one that matched the request) and `priority_level` (indicating the one to which the request was assigned). * `apiserver_current_inqueue_requests` is a gauge vector of recent @@ -384,17 +391,17 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector holding the instantaneous number of queued (not executing) requests, - broken down by the labels `priorityLevel` and `flowSchema`. + broken down by the labels `priority_level` and `flow_schema`. * `apiserver_flowcontrol_current_executing_requests` is a gauge vector holding the instantaneous number of executing (not waiting in a - queue) requests, broken down by the labels `priorityLevel` and - `flowSchema`. + queue) requests, broken down by the labels `priority_level` and + `flow_schema`. * `apiserver_flowcontrol_priority_level_request_count_samples` is a histogram vector of observations of the then-current number of requests broken down by the labels `phase` (which takes on the - values `waiting` and `executing`) and `priorityLevel`. Each + values `waiting` and `executing`) and `priority_level`. Each histogram gets observations taken periodically, up through the last activity of the relevant sort. The observations are made at a high rate. @@ -402,7 +409,7 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_priority_level_request_count_watermarks` is a histogram vector of high or low water marks of the number of requests broken down by the labels `phase` (which takes on the - values `waiting` and `executing`) and `priorityLevel`; the label + values `waiting` and `executing`) and `priority_level`; the label `mark` takes on values `high` and `low`. The water marks are accumulated over windows bounded by the times when an observation was added to @@ -411,7 +418,7 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_request_queue_length_after_enqueue` is a histogram vector of queue lengths for the queues, broken down by - the labels `priorityLevel` and `flowSchema`, as sampled by the + the labels `priority_level` and `flow_schema`, as sampled by the enqueued requests. 
Each request that gets queued contributes one sample to its histogram, reporting the length of the queue just after the request was added. Note that this produces different @@ -428,12 +435,12 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector hoding the computed concurrency limit (based on the API server's total concurrency limit and PriorityLevelConfigurations' concurrency - shares), broken down by the label `priorityLevel`. + shares), broken down by the label `priority_level`. * `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram vector of how long requests spent queued, broken down by the labels - `flowSchema` (indicating which one matched the request), - `priorityLevel` (indicating the one to which the request was + `flow_schema` (indicating which one matched the request), + `priority_level` (indicating the one to which the request was assigned), and `execute` (indicating whether the request started executing). {{< note >}} @@ -445,8 +452,8 @@ poorly-behaved workloads that may be harming system health. * `apiserver_flowcontrol_request_execution_seconds` is a histogram vector of how long requests took to actually execute, broken down by - the labels `flowSchema` (indicating which one matched the request) - and `priorityLevel` (indicating the one to which the request was + the labels `flow_schema` (indicating which one matched the request) + and `priority_level` (indicating the one to which the request was assigned). ### Debug endpoints From 220a7b201b42c919824c679cbf08ef7997ce78cd Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Tue, 10 Nov 2020 00:10:11 +0000 Subject: [PATCH 29/74] ExecProbeTimeout feature gate introduction --- .../feature-gates.md | 2 ++ ...igure-liveness-readiness-startup-probes.md | 23 +++++++++++++++---- 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 303c90aae5..b16b4d722b 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -227,6 +227,7 @@ different Kubernetes components. | `EvenPodsSpread` | `false` | Alpha | 1.16 | 1.17 | | `EvenPodsSpread` | `true` | Beta | 1.18 | 1.18 | | `EvenPodsSpread` | `true` | GA | 1.19 | - | +| `ExecProbeTimeout` | `true` | GA | 1.20 | - | | `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 | | `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - | | `HugePages` | `false` | Alpha | 1.8 | 1.9 | @@ -450,6 +451,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `EphemeralContainers`: Enable the ability to add {{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}} to running pods. - `EvenPodsSpread`: Enable pods to be scheduled evenly across topology domains. See [Pod Topology Spread Constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/). +- `ExecProbeTimeout`: Ensure kubelet respects exec probe timeouts. This feature gate exists in case any of your existing workloads depend on a now-corrected fault where Kubernetes ignored exec probe timeouts. See [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). - `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. 
See [Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim). - `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See [Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims). - `ExperimentalCriticalPodAnnotation`: Enable annotating specific pods as *critical* so that their [scheduling is guaranteed](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/). diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 150fe720e1..e55f8cb789 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -336,6 +336,25 @@ liveness. Minimum value is 1. try `failureThreshold` times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1. +{{< note >}} +Before Kubernetes 1.20, the field `timeoutSeconds` was not respected for exec probes: +probes continued running indefinitely, even past their configured deadline, +until a result was returned. + +This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior, +even without realizing it, as the default timeout is 1 second. +As a cluster administrator, you can disable the feature gate `ExecProbeTimeout` (set it to `false`) +on kubelet to restore the behavior from older versions, then remove that override +once all the exec probes in the cluster have a `timeoutSeconds` value set. + +With the fix of the defect, for exec probes, on Kubernetes `1.20+` with the `dockershim` container runtime, +the process inside the container may keep running even after probe returned failure because of the timeout. +{{< /note >}} +{{< caution >}} +Incorrect implementation of readiness probes may result in an ever growing number +of processes in the container, and resource starvation if this is left unchecked. 
+{{< /caution >}} + ### HTTP probes [HTTP probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core) @@ -406,7 +425,3 @@ You can also read the API references for: * [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) * [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core) * [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) - - - - From 45da527a318d6d552a83ed8da66f2fc1b0920e5f Mon Sep 17 00:00:00 2001 From: Aldo Culquicondor Date: Thu, 5 Nov 2020 16:28:06 -0500 Subject: [PATCH 30/74] Add usage for per-profile node affinity --- .../scheduling-eviction/assign-pod-node.md | 43 +++++++++++++++++++ 1 file changed, 43 insertions(+) diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index 5123f34ca3..e684b6ea60 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -158,6 +158,49 @@ If you remove or change the label of the node where the pod is scheduled, the po The `weight` field in `preferredDuringSchedulingIgnoredDuringExecution` is in the range 1-100. For each node that meets all of the scheduling requirements (resource request, RequiredDuringScheduling affinity expressions, etc.), the scheduler will compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding MatchExpressions. This score is then combined with the scores of other priority functions for the node. The node(s) with the highest total score are the most preferred. +#### Node affinity per scheduling profile + +{{< feature-state for_k8s_version="v1.20" state="beta" >}} + +When configuring multiple [scheduling profiles](/docs/reference/scheduling/config/#multiple-profiles), you can associate +a profile with a Node affinity, which is useful if a profile only applies to a specific set of Nodes. +To do so, add an `addedAffinity` to the args of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins) +in the [scheduler configuration](/docs/reference/scheduling/config/). For example: + +```yaml +apiVersion: kubescheduler.config.k8s.io/v1beta1 +kind: KubeSchedulerConfiguration + +profiles: + - schedulerName: default-scheduler + - schedulerName: foo-scheduler + pluginConfig: + - name: NodeAffinity + args: + addedAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: scheduler-profile + operator: In + values: + - foo +``` + +The `addedAffinity` is applied to all Pods that set `.spec.schedulerName` to `foo-scheduler`, in addition to the +NodeAffinity specified in the PodSpec. +That is, in order to match the Pod, Nodes need to satisfy `addedAffinity` and the Pod's `.spec.NodeAffinity`. + +Since the `addedAffinity` is not visible to end users, its behavior might be unexpected to them. We +recommend to use node labels that have clear correlation with the profile's scheduler name. + +{{< note >}} +The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler) +is not aware of scheduling profiles. For this reason, it is recommended that you keep a scheduler profile, such as the +`default-scheduler`, without any `addedAffinity`. 
Then, the Daemonset's Pod template should use this scheduler name. +Otherwise, some Pods created by the Daemonset controller might remain unschedulable. +{{< /note >}} + ### Inter-pod affinity and anti-affinity Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled *based on From 8a3244fdd1ab0c9b762b33d28b99f6fee8c1c9d6 Mon Sep 17 00:00:00 2001 From: Bridget Kromhout Date: Mon, 26 Oct 2020 13:06:20 -0500 Subject: [PATCH 31/74] Dual-stack docs for Kubernetes 1.20 Signed-off-by: Bridget Kromhout Co-authored-by: Tim Bannister Co-authored-by: Lachlan Evenson --- .../services-networking/dual-stack.md | 172 +++++++++++++++--- .../docs/tasks/network/validate-dual-stack.md | 119 +++++++++--- .../networking/dual-stack-default-svc.yaml | 3 +- ...c.yaml => dual-stack-ipfamilies-ipv6.yaml} | 6 +- ...aml => dual-stack-prefer-ipv6-lb-svc.yaml} | 5 +- .../dual-stack-preferred-ipfamilies-svc.yaml | 16 ++ .../networking/dual-stack-preferred-svc.yaml | 13 ++ 7 files changed, 285 insertions(+), 49 deletions(-) rename content/en/examples/service/networking/{dual-stack-ipv4-svc.yaml => dual-stack-ipfamilies-ipv6.yaml} (73%) rename content/en/examples/service/networking/{dual-stack-ipv6-lb-svc.yaml => dual-stack-prefer-ipv6-lb-svc.yaml} (76%) create mode 100644 content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml create mode 100644 content/en/examples/service/networking/dual-stack-preferred-svc.yaml diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md index 234e16dc25..2981bffec8 100644 --- a/content/en/docs/concepts/services-networking/dual-stack.md +++ b/content/en/docs/concepts/services-networking/dual-stack.md @@ -3,6 +3,7 @@ reviewers: - lachie83 - khenidak - aramase +- bridgetkromhout title: IPv4/IPv6 dual-stack feature: title: IPv4/IPv6 dual-stack @@ -30,14 +31,17 @@ If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the c Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features: * Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod) - * IPv4 and IPv6 enabled Services (each Service must be for a single address family) + * IPv4 and IPv6 enabled Services * Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces ## Prerequisites The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters: - * Kubernetes 1.16 or later + * Kubernetes 1.20 or later + For information about using dual-stack services with earlier + Kubernetes versions, refer to the documentation for that version + of Kubernetes. * Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) * A network plugin that supports dual-stack (such as Kubenet or Calico) @@ -68,47 +72,173 @@ An example of an IPv6 CIDR: `fdXY:IJKL:MNOP:15::/64` (this shows the format but ## Services -If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting a field, `.spec.ipFamily`, on that Service. -You can only set this field when creating a new Service. 
Setting the `.spec.ipFamily` field is optional and should only be used if you plan to enable IPv4 and IPv6 {{< glossary_tooltip text="Services" term_id="service" >}} and {{< glossary_tooltip text="Ingresses" term_id="ingress" >}} on your cluster. The configuration of this field not a requirement for [egress](#egress-traffic) traffic. +If your cluster has dual-stack enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both. + +The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager). + +When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you +set the `.spec.ipFamilyPolicy` field to one of the following values: + +* `SingleStack`: Single-stack service. The control plane allocates a cluster IP for the Service, using the first configured service cluster IP range. +* `PreferDualStack`: + * Only used if the cluster has dual-stack enabled. Allocates IPv4 and IPv6 cluster IPs for the Service + * If the cluster does not have dual-stack enabled, this setting follows the same behavior as `SingleStack`. +* `RequireDualStack`: Allocates Service `.spec.ClusterIPs` from both IPv4 and IPv6 address ranges. + * Selects the `.spec.ClusterIP` from the list of `.spec.ClusterIPs` based on the address family of the first element in the `.spec.ipFamilies` array. + * The cluster must have dual-stack networking configured. + +If you would like to define which IP family to use for single stack or define the order of IP families for dual-stack, you can choose the address families by setting an optional field, `.spec.ipFamilies`, on the Service. {{< note >}} -The default address family for your cluster is the address family of the first service cluster IP range configured via the `--service-cluster-ip-range` flag to the kube-controller-manager. +The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a Service that already exists. If you want to change `.spec.ipFamilies`, delete and recreate the Service. {{< /note >}} -You can set `.spec.ipFamily` to either: +You can set `.spec.ipFamilies` to any of the following array values: - * `IPv4`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv4` - * `IPv6`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv6` +- `["IPv4"]` +- `["IPv6"]` +- `["IPv4","IPv6"]` (dual stack) +- `["IPv6","IPv4"]` (dual stack) -The following Service specification does not include the `ipFamily` field. Kubernetes will assign an IP address (also known as a "cluster IP") from the first configured `service-cluster-ip-range` to this Service. +The first family you list is used for the legacy `.spec.ClusterIP` field. + +### Dual-stack Service configuration scenarios + +These examples demonstrate the behavior of various dual-stack Service configuration scenarios. + +#### Dual-stack options on new Services + +1. This Service specification does not explicitly define `.spec.ipFamilyPolicy`. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured `service-cluster-ip-range` and sets the `.spec.ipFamilyPolicy` to `SingleStack`. 
([Services without selectors](/docs/concepts/services-networking/service/#services-without-selectors) and [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors will behave in this same way.) {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} -The following Service specification includes the `ipFamily` field. Kubernetes will assign an IPv6 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service. +1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the service. The control plane updates the `.spec` for the Service to record the IP address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from `.spec.ClusterIPs`. + + * For the `.spec.ClusterIP` field, the control plane records the IP address that is from the same address family as the first service cluster IP range. + * On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list one address. + * On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy` behaves the same as `PreferDualStack`. -{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}} +{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}} -For comparison, the following Service specification will be assigned an IPv4 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service. +1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and IPv4 address in `.spec.ClusterIPs`, `.spec.ClusterIP` is set to the IPv6 address because that is the first element in the `.spec.ClusterIPs` array, overriding the default. -{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}} +{{< codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" >}} -### Type LoadBalancer +#### Dual-stack defaults on existing Services -On cloud providers which support IPv6 enabled external load balancers, setting the `type` field to `LoadBalancer` in additional to setting `ipFamily` field to `IPv6` provisions a cloud load balancer for your Service. +These examples demonstrate the default behavior when dual-stack is newly enabled on a cluster where Services already exist. -## Egress Traffic +1. When dual-stack is enabled on a cluster, existing Services (whether `IPv4` or `IPv6`) are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP will be stored in `.spec.ClusterIPs`. -The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying {{< glossary_tooltip text="CNI" term_id="cni" >}} provider is able to implement the transport. If you have a Pod that uses non-publicly routable IPv6 and want that Pod to reach off-cluster destinations (eg. the public Internet), you must set up IP masquerading for the egress traffic and any replies. 
The [ip-masq-agent](https://github.com/kubernetes-sigs/ip-masq-agent) is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters. +{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} -## Known Issues + You can validate this behavior by using kubectl to inspect an existing service. - * Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr) +```shell +kubectl get svc my-service -o yaml +``` +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: MyApp + name: my-service +spec: + clusterIP: 10.0.197.123 + clusterIPs: + - 10.0.197.123 + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: MyApp + type: ClusterIP +status: + loadBalancer: {} +``` +1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`. + +{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} + + You can validate this behavior by using kubectl to inspect an existing headless service with selectors. + +```shell +kubectl get svc my-service -o yaml +``` + +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: MyApp + name: my-service +spec: + clusterIP: None + clusterIPs: + - None + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: MyApp +``` + +#### Switching Services between single-stack and dual-stack + +Services can be changed from single-stack to dual-stack and from dual-stack to single-stack. + +1. To change a Service from single-stack to dual-stack, change `.spec.ipFamilyPolicy` from `SingleStack` to `PreferDualStack` or `RequireDualStack` as desired. When you change this Service from single-stack to dual-stack, Kubernetes assigns the missing address family so that the Service now has IPv4 and IPv6 addresses. + + Edit the Service specification updating the `.spec.ipFamilyPolicy` from `SingleStack` to `PreferDualStack`. + +Before: +```yaml +spec: + ipFamilyPolicy: SingleStack +``` +After: +```yaml +spec: + ipFamilyPolicy: PreferDualStack +``` + +1. To change a Service from dual-stack to single-stack, change `.spec.ipFamilyPolicy` from `PreferDualStack` or `RequireDualStack` to `SingleStack`. When you change this Service from dual-stack to single-stack, Kubernetes retains only the first element in the `.spec.ClusterIPs` array, and sets `.spec.ClusterIP` to that IP address and sets `.spec.ipFamilies` to the address family of `.spec.ClusterIPs`. + +### Headless Services without selector + +For [Headless Services without selectors](/docs/concepts/services-networking/service/#without-selectors) and without `.spec.ipFamilyPolicy` explicitly set, the `.spec.ipFamilyPolicy` field defaults to `RequireDualStack`. + +### Service type LoadBalancer + +To provision a dual-stack load balancer for your Service: + * Set the `.spec.type` field to `LoadBalancer` + * Set `.spec.ipFamilyPolicy` field to `PreferDualStack` or `RequireDualStack` + +{{< note >}} +To use a dual-stack `LoadBalancer` type Service, your cloud provider must support IPv4 and IPv6 load balancers. 
+{{< /note >}} + +## Egress traffic + +If you want to enable egress traffic in order to reach off-cluster destinations (eg. the public Internet) from a Pod that uses non-publicly routable IPv6 addresses, you need to enable the Pod to use a publicly routed IPv6 address via a mechanism such as transparent proxying or IP masquerading. The [ip-masq-agent](https://github.com/kubernetes-sigs/ip-masq-agent) project supports IP masquerading on dual-stack clusters. + +{{< note >}} +Ensure your {{< glossary_tooltip text="CNI" term_id="cni" >}} provider supports IPv6. +{{< /note >}} ## {{% heading "whatsnext" %}} * [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking - - diff --git a/content/en/docs/tasks/network/validate-dual-stack.md b/content/en/docs/tasks/network/validate-dual-stack.md index 305f78db28..5e11cb3057 100644 --- a/content/en/docs/tasks/network/validate-dual-stack.md +++ b/content/en/docs/tasks/network/validate-dual-stack.md @@ -2,7 +2,8 @@ reviewers: - lachie83 - khenidak -min-kubernetes-server-version: v1.16 +- bridgetkromhout +min-kubernetes-server-version: v1.20 title: Validate IPv4/IPv6 dual-stack content_type: task --- @@ -97,31 +98,31 @@ a00:100::4 pod01 ## Validate Services -Create the following Service without the `ipFamily` field set. When this field is not set, the Service gets an IP from the first configured range via `--service-cluster-ip-range` flag on the kube-controller-manager. +Create the following Service that does not explicitly define `.spec.ipFamilyPolicy`. Kubernetes will assign a cluster IP for the Service from the first configured `service-cluster-ip-range` and set the `.spec.ipFamilyPolicy` to `SingleStack`. {{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} -By viewing the YAML for the Service you can observe that the Service has the `ipFamily` field has set to reflect the address family of the first configured range set via `--service-cluster-ip-range` flag on kube-controller-manager. +Use `kubectl` to view the YAML for the Service. ```shell kubectl get svc my-service -o yaml ``` +The Service has `.spec.ipFamilyPolicy` set to `SingleStack` and `.spec.clusterIP` set to an IPv4 address from the first configured range set via `--service-cluster-ip-range` flag on kube-controller-manager. + ```yaml apiVersion: v1 kind: Service metadata: - creationTimestamp: "2019-09-03T20:45:13Z" - labels: - app: MyApp name: my-service namespace: default - resourceVersion: "485836" - selfLink: /api/v1/namespaces/default/services/my-service - uid: b6fa83ef-fe7e-47a3-96a1-ac212fa5b030 spec: - clusterIP: 10.0.29.179 - ipFamily: IPv4 + clusterIP: 10.0.217.164 + clusterIPs: + - 10.0.217.164 + ipFamilies: + - IPv4 + ipFamilyPolicy: SingleStack ports: - port: 80 protocol: TCP @@ -134,28 +135,100 @@ status: loadBalancer: {} ``` -Create the following Service with the `ipFamily` field set to `IPv6`. +Create the following Service that explicitly defines `IPv6` as the first array element in `.spec.ipFamilies`. Kubernetes will assign a cluster IP for the Service from the IPv6 range configured `service-cluster-ip-range` and set the `.spec.ipFamilyPolicy` to `SingleStack`. -{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}} +{{< codenew file="service/networking/dual-stack-ipfamilies-ipv6.yaml" >}} -Validate that the Service gets a cluster IP address from the IPv6 address block. You may then validate access to the service via the IP and port. +Use `kubectl` to view the YAML for the Service. 
+ +```shell +kubectl get svc my-service -o yaml ``` - kubectl get svc -l app=MyApp -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -my-service ClusterIP fe80:20d::d06b 80/TCP 9s + +The Service has `.spec.ipFamilyPolicy` set to `SingleStack` and `.spec.clusterIP` set to an IPv6 address from the IPv6 range set via `--service-cluster-ip-range` flag on kube-controller-manager. + +```yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: MyApp + name: my-service +spec: + clusterIP: fd00::5118 + clusterIPs: + - fd00::5118 + ipFamilies: + - IPv6 + ipFamilyPolicy: SingleStack + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: MyApp + sessionAffinity: None + type: ClusterIP +status: + loadBalancer: {} +``` + +Create the following Service that explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. Kubernetes will assign both IPv4 and IPv6 addresses (as this cluster has dual-stack enabled) and select the `.spec.ClusterIP` from the list of `.spec.ClusterIPs` based on the address family of the first element in the `.spec.ipFamilies` array. + +{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}} + +{{< note >}} +The `kubectl get svc` command will only show the primary IP in the `CLUSTER-IP` field. + +```shell +kubectl get svc -l app=MyApp + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-service ClusterIP 10.0.216.242 80/TCP 5s +``` +{{< /note >}} + +Validate that the Service gets cluster IPs from the IPv4 and IPv6 address blocks using `kubectl describe`. You may then validate access to the service via the IPs and ports. + +```shell +kubectl describe svc -l app=MyApp +``` + +``` +Name: my-service +Namespace: default +Labels: app=MyApp +Annotations: +Selector: app=MyApp +Type: ClusterIP +IP Family Policy: PreferDualStack +IP Families: IPv4,IPv6 +IP: 10.0.216.242 +IPs: 10.0.216.242,fd00::af55 +Port: 80/TCP +TargetPort: 9376/TCP +Endpoints: +Session Affinity: None +Events: ``` ### Create a dual-stack load balanced Service -If the cloud provider supports the provisioning of IPv6 enabled external load balancer, create the following Service with both the `ipFamily` field set to `IPv6` and the `type` field set to `LoadBalancer` +If the cloud provider supports the provisioning of IPv6 enabled external load balancers, create the following Service with `PreferDualStack` in `.spec.ipFamilyPolicy`, `IPv6` as the first element of the `.spec.ipFamilies` array and the `type` field set to `LoadBalancer`. -{{< codenew file="service/networking/dual-stack-ipv6-lb-svc.yaml" >}} +{{< codenew file="service/networking/dual-stack-prefer-ipv6-lb-svc.yaml" >}} + +Check the Service: + +```shell +kubectl get svc -l app=MyApp +``` Validate that the Service receives a `CLUSTER-IP` address from the IPv6 address block along with an `EXTERNAL-IP`. You may then validate access to the service via the IP and port. 
-``` - kubectl get svc -l app=MyApp -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -my-service ClusterIP fe80:20d::d06b 2001:db8:f100:4002::9d37:c0d7 80:31868/TCP 30s + +```shell +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +my-service LoadBalancer fd00::7ebc 2603:1030:805::5 80:30790/TCP 35s ``` diff --git a/content/en/examples/service/networking/dual-stack-default-svc.yaml b/content/en/examples/service/networking/dual-stack-default-svc.yaml index 00ed87ba19..86eadd5478 100644 --- a/content/en/examples/service/networking/dual-stack-default-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-default-svc.yaml @@ -2,10 +2,11 @@ apiVersion: v1 kind: Service metadata: name: my-service + labels: + app: MyApp spec: selector: app: MyApp ports: - protocol: TCP port: 80 - targetPort: 9376 \ No newline at end of file diff --git a/content/en/examples/service/networking/dual-stack-ipv4-svc.yaml b/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml similarity index 73% rename from content/en/examples/service/networking/dual-stack-ipv4-svc.yaml rename to content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml index a875f44d6d..7c7239cae6 100644 --- a/content/en/examples/service/networking/dual-stack-ipv4-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-ipfamilies-ipv6.yaml @@ -2,11 +2,13 @@ apiVersion: v1 kind: Service metadata: name: my-service + labels: + app: MyApp spec: - ipFamily: IPv4 + ipFamilies: + - IPv6 selector: app: MyApp ports: - protocol: TCP port: 80 - targetPort: 9376 \ No newline at end of file diff --git a/content/en/examples/service/networking/dual-stack-ipv6-lb-svc.yaml b/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml similarity index 76% rename from content/en/examples/service/networking/dual-stack-ipv6-lb-svc.yaml rename to content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml index 2586ec9b39..0949a75428 100644 --- a/content/en/examples/service/networking/dual-stack-ipv6-lb-svc.yaml +++ b/content/en/examples/service/networking/dual-stack-prefer-ipv6-lb-svc.yaml @@ -5,11 +5,12 @@ metadata: labels: app: MyApp spec: - ipFamily: IPv6 + ipFamilyPolicy: PreferDualStack + ipFamilies: + - IPv6 type: LoadBalancer selector: app: MyApp ports: - protocol: TCP port: 80 - targetPort: 9376 \ No newline at end of file diff --git a/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml b/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml new file mode 100644 index 0000000000..c31acfec58 --- /dev/null +++ b/content/en/examples/service/networking/dual-stack-preferred-ipfamilies-svc.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service + labels: + app: MyApp +spec: + ipFamilyPolicy: PreferDualStack + ipFamilies: + - IPv6 + - IPv4 + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 diff --git a/content/en/examples/service/networking/dual-stack-preferred-svc.yaml b/content/en/examples/service/networking/dual-stack-preferred-svc.yaml new file mode 100644 index 0000000000..8fb5bfa3d3 --- /dev/null +++ b/content/en/examples/service/networking/dual-stack-preferred-svc.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + name: my-service + labels: + app: MyApp +spec: + ipFamilyPolicy: PreferDualStack + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 From 0b4952dd88ee8a012b7d144948a9079dee510308 Mon Sep 17 00:00:00 2001 From: Shihang Zhang Date: Thu, 5 Nov 2020 09:51:57 -0800 
Subject: [PATCH 32/74] separate RootCAConfigMap from BoundServiceAccountToken
 and Beta

---
 .../access-authn-authz/service-accounts-admin.md  | 13 ++++++++-----
 .../command-line-tools-reference/feature-gates.md |  4 ++++
 2 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
index df653a206f..fdce9513cc 100644
--- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
+++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
@@ -10,7 +10,7 @@ weight: 50
 ---

-This is a Cluster Administrator guide to service accounts. You should be familiar with
+This is a Cluster Administrator guide to service accounts. You should be familiar with
 [configuring Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/).

 Support for authorization and user accounts is planned but incomplete. Sometimes
@@ -59,9 +59,13 @@ It acts synchronously to modify pods as they are created or updated. When this p
 1. It adds a `volume` to the pod which contains a token for API access.
 1. It adds a `volumeSource` to each container of the pod mounted at `/var/run/secrets/kubernetes.io/serviceaccount`.

-Starting from v1.13, you can migrate a service account volume to a projected volume when
-the `BoundServiceAccountTokenVolume` feature gate is enabled.
-The service account token will expire after 1 hour or the pod is deleted. See more details about [projected volume](/docs/tasks/configure-pod-container/configure-projected-volume-storage/).
+#### Bound Service Account Token Volume
+{{< feature-state for_k8s_version="v1.13" state="alpha" >}}
+
+When the `BoundServiceAccountTokenVolume` feature gate is enabled, the service account admission controller will
+add a projected service account token volume instead of a secret volume. The service account token expires after 1 hour by default, or when the pod is deleted. See more details about [projected volume](/docs/tasks/configure-pod-container/configure-projected-volume-storage/).
+
+This feature depends on the `RootCAConfigMap` feature gate being enabled, which publishes a "kube-root-ca.crt" ConfigMap to every namespace. This ConfigMap contains a CA bundle used for verifying connections to the kube-apiserver.

 ### Token Controller

@@ -115,4 +119,3 @@ kubectl delete secret mysecretname

 Service Account Controller manages ServiceAccount inside namespaces, and
 ensures a ServiceAccount named "default" exists in every active namespace.
-
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index f113627a85..96df5b7c54 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -131,6 +131,8 @@ different Kubernetes components.
 | `ProcMountType` | `false` | Alpha | 1.12 | |
 | `QOSReserved` | `false` | Alpha | 1.11 | |
 | `RemainingItemCount` | `false` | Alpha | 1.15 | |
+| `RootCAConfigMap` | `false` | Alpha | 1.13 | 1.19 |
+| `RootCAConfigMap` | `true` | Beta | 1.20 | |
 | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 |
 | `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | |
 | `RunAsGroup` | `true` | Beta | 1.14 | |
@@ -513,6 +515,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
 the input Pod's cpu and memory limits.
The intent is to break ties between nodes with same scores.
 - `ResourceQuotaScopeSelectors`: Enable resource quota scope selectors.
+- `RootCAConfigMap`: Configure the kube-controller-manager to publish a {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} named `kube-root-ca.crt` to every namespace. This ConfigMap contains a CA bundle used for verifying connections to the kube-apiserver.
+  See [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md) for more details.
 - `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet.
   See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) for more details.
 - `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet.

From d81ee2342ad67342516ef4082f5a4b35a2bf8658 Mon Sep 17 00:00:00 2001
From: Sergey Kanzhelev
Date: Wed, 11 Nov 2020 01:38:09 -0800
Subject: [PATCH 33/74] Update
 content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md

Co-authored-by: Tim Bannister
---
 .../configure-liveness-readiness-startup-probes.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
index e55f8cb789..e4add99e2a 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -345,7 +345,10 @@ This defect was corrected in Kubernetes v1.20. You may have been relying on the
 even without realizing it, as the default timeout is 1 second.
 As a cluster administrator, you can disable the feature gate `ExecProbeTimeout` (set it to `false`)
 on kubelet to restore the behavior from older versions, then remove that override
-once all the exec probes in the cluster have a `timeoutSeconds` value set.
+once all the exec probes in the cluster have a `timeoutSeconds` value set.
+If you have pods that are impacted by the default 1 second timeout,
+you should update their probe timeout so that you're ready for the
+eventual removal of that feature gate.

 With the fix of the defect, for exec probes, on Kubernetes `1.20+` with the `dockershim` container runtime,
 the process inside the container may keep running even after probe returned failure because of the timeout.

From 1a13c6ba45e1110edd03dcd4fc1ca578d6918f10 Mon Sep 17 00:00:00 2001
From: Sergey Kanzhelev
Date: Wed, 11 Nov 2020 01:38:45 -0800
This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior, even without realizing it, as the default timeout is 1 second. -As a cluster administrator, you can disable the feature gate `ExecProbeTimeout` (set it to `false`) +As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ExecProbeTimeout` (set it to `false`) on kubelet to restore the behavior from older versions, then remove that override once all the exec probes in the cluster have a `timeoutSeconds` value set. If you have pods that are impacted from the default 1 second timeout, From 1f306541c7fde10ccf8a725221cf6a3e6d79c057 Mon Sep 17 00:00:00 2001 From: Sergey Kanzhelev Date: Wed, 11 Nov 2020 01:39:07 -0800 Subject: [PATCH 35/74] Update content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md Co-authored-by: Tim Bannister --- .../configure-liveness-readiness-startup-probes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 4138fd7a3a..cdbcddb5d0 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -344,7 +344,7 @@ until a result was returned. This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior, even without realizing it, as the default timeout is 1 second. As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ExecProbeTimeout` (set it to `false`) -on kubelet to restore the behavior from older versions, then remove that override +on each kubelet to restore the behavior from older versions, then remove that override once all the exec probes in the cluster have a `timeoutSeconds` value set. If you have pods that are impacted from the default 1 second timeout, you should update their probe timeout so that you're ready for the From 4f0068f333103c5b44c2efd06e51c0e68fa1e039 Mon Sep 17 00:00:00 2001 From: Maciej Szulik Date: Wed, 4 Nov 2020 17:54:40 +0100 Subject: [PATCH 36/74] Add information how to enable cronjob controller v2 --- .../docs/concepts/workloads/controllers/cron-jobs.md | 10 ++++++++-- .../command-line-tools-reference/feature-gates.md | 2 ++ 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index af12bcba25..c4206facb5 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -32,8 +32,6 @@ The name must be no longer than 52 characters. This is because the CronJob contr append 11 characters to the job name provided and there is a constraint that the maximum length of a Job name is no more than 63 characters. - - ## CronJob @@ -82,6 +80,14 @@ be down for the same period as the previous example (`08:29:00` to `10:21:00`,) The CronJob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of the Pods it represents. 
+## New controller + +There's an alternative implementation of the CronJob controller, available as an alpha feature since Kubernetes 1.20. To select version 2 of the CronJob controller, pass the following [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) flag to the {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}. + +``` +--feature-gates="CronJobControllerV2=true" +``` + ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 156b4c86df..f670d829d0 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -83,6 +83,7 @@ different Kubernetes components. | `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | | | `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 | | `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | | +| `CronJobControllerV2` | `false` | Alpha | 1.20 | | | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | | | `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 | | `CustomResourceDefaulting` | `true` | Beta | 1.16 | | @@ -402,6 +403,7 @@ Each feature gate is designed for enabling/disabling a specific feature: Check [Service Account Token Volumes](https://git.k8s.io/community/contributors/design-proposals/storage/svcacct-token-volume-source.md) for more details. - `ConfigurableFSGroupPolicy`: Allows user to configure volume permission change policy for fsGroups when mounting a volume in a Pod. See [Configure volume permission and ownership change policy for Pods](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods) for more details. +- `CronJobControllerV2`: Use an alternative implementation of the {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} controller. Otherwise, version 1 of the same controller is selected. The version 2 controller provides experimental performance improvements. - `CPUManager`: Enable container level CPU affinity support, see [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/). - `CRIContainerLogRotation`: Enable container log rotation for cri container runtime. - `CSIBlockVolume`: Enable external CSI volume drivers to support block storage. See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support) documentation for more details. From 2300a3e0fe1e682eb2b89ddb45af8f458eb9eda9 Mon Sep 17 00:00:00 2001 From: Hugo Fonseca <1098293+fonsecas72@users.noreply.github.com> Date: Thu, 12 Nov 2020 15:47:40 +0000 Subject: [PATCH 37/74] HTTP Probe - Update documentation about default headers --- .../configure-liveness-readiness-startup-probes.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index cdbcddb5d0..0384ff8dbb 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -381,9 +381,9 @@ and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should to 127.0.0.1. 
If your pod relies on virtual hosts, which is probably the more common case, you should not use `host`, but rather set the `Host` header in `httpHeaders`. -For an HTTP probe, the kubelet sends three request headers in addition to the mandatory `Host` header: -`User-Agent`, `Accept-Encoding` and `Accept`. The default values for these headers are `kube-probe/{{< skew latestVersion >}}` -(where `{{< skew latestVersion >}}` is the version of the kubelet ), `gzip` and `*/*` respectively. +For an HTTP probe, the kubelet sends two request headers in addition to the mandatory `Host` header: +`User-Agent`, and `Accept`. The default values for these headers are `kube-probe/{{< skew latestVersion >}}` +(where `{{< skew latestVersion >}}` is the version of the kubelet ), and `*/*` respectively. You can override the default headers by defining `.httpHeaders` for the probe; for example From d0c6d303c30329deda5c17a0cdda82eee9a86392 Mon Sep 17 00:00:00 2001 From: "Lubomir I. Ivanov" Date: Thu, 12 Nov 2020 21:28:25 +0200 Subject: [PATCH 38/74] kubeadm: promote the "kubeadm certs" command to GA (#24410) The command resided under "kubeadm alpha certs". It will be promoted to GA in 1.20 as "kubeadm certs". The existing command "kubeadm alpha" will remain present for one more release, but it will be hidden from documentation as it is deprecated. --- ...ubeadm_alpha_certs.md => kubeadm_certs.md} | 0 ...ey.md => kubeadm_certs_certificate-key.md} | 2 +- ...n.md => kubeadm_certs_check-expiration.md} | 2 +- ...e-csr.md => kubeadm_certs_generate-csr.md} | 4 +- ..._certs_renew.md => kubeadm_certs_renew.md} | 2 +- ...f.md => kubeadm_certs_renew_admin.conf.md} | 2 +- ...enew_all.md => kubeadm_certs_renew_all.md} | 2 +- ...eadm_certs_renew_apiserver-etcd-client.md} | 2 +- ...m_certs_renew_apiserver-kubelet-client.md} | 2 +- ...er.md => kubeadm_certs_renew_apiserver.md} | 2 +- ...dm_certs_renew_controller-manager.conf.md} | 2 +- ...dm_certs_renew_etcd-healthcheck-client.md} | 2 +- ...er.md => kubeadm_certs_renew_etcd-peer.md} | 2 +- ....md => kubeadm_certs_renew_etcd-server.md} | 2 +- ...kubeadm_certs_renew_front-proxy-client.md} | 2 +- ... 
=> kubeadm_certs_renew_scheduler.conf.md} | 2 +- .../setup-tools/kubeadm/kubeadm-alpha.md | 59 --------------- .../setup-tools/kubeadm/kubeadm-certs.md | 73 +++++++++++++++++++ .../setup-tools/kubeadm/kubeadm-init.md | 4 +- .../tools/kubeadm/high-availability.md | 6 +- .../kubeadm/kubeadm-certs.md | 16 ++-- 21 files changed, 102 insertions(+), 88 deletions(-) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs.md => kubeadm_certs.md} (100%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_certificate-key.md => kubeadm_certs_certificate-key.md} (95%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_check-expiration.md => kubeadm_certs_check-expiration.md} (97%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_generate-csr.md => kubeadm_certs_generate-csr.md} (94%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew.md => kubeadm_certs_renew.md} (95%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_admin.conf.md => kubeadm_certs_renew_admin.conf.md} (98%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_all.md => kubeadm_certs_renew_all.md} (98%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_apiserver-etcd-client.md => kubeadm_certs_renew_apiserver-etcd-client.md} (97%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_apiserver-kubelet-client.md => kubeadm_certs_renew_apiserver-kubelet-client.md} (97%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_apiserver.md => kubeadm_certs_renew_apiserver.md} (98%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_controller-manager.conf.md => kubeadm_certs_renew_controller-manager.conf.md} (97%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_etcd-healthcheck-client.md => kubeadm_certs_renew_etcd-healthcheck-client.md} (97%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_etcd-peer.md => kubeadm_certs_renew_etcd-peer.md} (98%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_etcd-server.md => kubeadm_certs_renew_etcd-server.md} (98%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_front-proxy-client.md => kubeadm_certs_renew_front-proxy-client.md} (97%) rename content/en/docs/reference/setup-tools/kubeadm/generated/{kubeadm_alpha_certs_renew_scheduler.conf.md => kubeadm_certs_renew_scheduler.conf.md} (98%) create mode 100644 content/en/docs/reference/setup-tools/kubeadm/kubeadm-certs.md diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md similarity index 100% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs.md diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_certificate-key.md similarity index 95% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md rename 
to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_certificate-key.md index 534851d42b..2de0366641 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_certificate-key.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_certificate-key.md @@ -11,7 +11,7 @@ generate and print one for you. ``` -kubeadm alpha certs certificate-key [flags] +kubeadm certs certificate-key [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_check-expiration.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_check-expiration.md similarity index 97% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_check-expiration.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_check-expiration.md index 5f51836f25..50a3cb8bf2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_check-expiration.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_check-expiration.md @@ -5,7 +5,7 @@ Checks expiration for the certificates in the local PKI managed by kubeadm. ``` -kubeadm alpha certs check-expiration [flags] +kubeadm certs check-expiration [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_generate-csr.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md similarity index 94% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_generate-csr.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md index 3e0bc4828f..afa8374577 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_generate-csr.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_generate-csr.md @@ -9,14 +9,14 @@ This command is designed for use in [Kubeadm External CA Mode](https://kubernete The PEM encoded signed certificates should then be saved alongside the key files, using ".crt" as the file extension, or in the case of kubeconfig files, the PEM encoded signed certificate should be base64 encoded and added to the kubeconfig file in the "users > user > client-certificate-data" field. ``` -kubeadm alpha certs generate-csr [flags] +kubeadm certs generate-csr [flags] ``` ### Examples ``` # The following command will generate keys and CSRs for all control-plane certificates and kubeconfig files: - kubeadm alpha certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s/pki + kubeadm certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s/pki ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew.md similarity index 95% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew.md index e0bfc54c5b..8b627a595d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew.md @@ -5,7 +5,7 @@ This command is not meant to be run on its own. See list of available subcommands. 
``` -kubeadm alpha certs renew [flags] +kubeadm certs renew [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_admin.conf.md similarity index 98% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_admin.conf.md index 2c342c8bc6..536164c45a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_admin.conf.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. ``` -kubeadm alpha certs renew admin.conf [flags] +kubeadm certs renew admin.conf [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_all.md similarity index 98% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_all.md index 979bd4f5bc..13c12ed0d0 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_all.md @@ -5,7 +5,7 @@ Renew all known certificates necessary to run the control plane. Renewals are run unconditionally, regardless of expiration date. Renewals can also be run individually for more control. ``` -kubeadm alpha certs renew all [flags] +kubeadm certs renew all [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver-etcd-client.md similarity index 97% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver-etcd-client.md index 9414ea2087..fac6861a7c 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver-etcd-client.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. 
``` -kubeadm alpha certs renew apiserver-etcd-client [flags] +kubeadm certs renew apiserver-etcd-client [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver-kubelet-client.md similarity index 97% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver-kubelet-client.md index f945da440e..030fb1425a 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver-kubelet-client.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. ``` -kubeadm alpha certs renew apiserver-kubelet-client [flags] +kubeadm certs renew apiserver-kubelet-client [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver.md similarity index 98% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver.md index afbb0f97c4..8ab01efd89 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_apiserver.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. ``` -kubeadm alpha certs renew apiserver [flags] +kubeadm certs renew apiserver [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_controller-manager.conf.md similarity index 97% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_controller-manager.conf.md index 2479220831..10b44f7c3e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_controller-manager.conf.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. 
``` -kubeadm alpha certs renew controller-manager.conf [flags] +kubeadm certs renew controller-manager.conf [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-healthcheck-client.md similarity index 97% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-healthcheck-client.md index 6076f031d5..b9ddadd6f1 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-healthcheck-client.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. ``` -kubeadm alpha certs renew etcd-healthcheck-client [flags] +kubeadm certs renew etcd-healthcheck-client [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-peer.md similarity index 98% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-peer.md index c19189fc86..3b15fa02f0 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-peer.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. ``` -kubeadm alpha certs renew etcd-peer [flags] +kubeadm certs renew etcd-peer [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-server.md similarity index 98% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-server.md index 8ba3e0f4a8..82b9e43e34 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_etcd-server.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. 
``` -kubeadm alpha certs renew etcd-server [flags] +kubeadm certs renew etcd-server [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_front-proxy-client.md similarity index 97% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_front-proxy-client.md index c592d5ea91..b1f3bc0c84 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_front-proxy-client.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. ``` -kubeadm alpha certs renew front-proxy-client [flags] +kubeadm certs renew front-proxy-client [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_scheduler.conf.md similarity index 98% rename from content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md rename to content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_scheduler.conf.md index 3f3b6ca76f..f26fbc22b1 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_renew_scheduler.conf.md @@ -11,7 +11,7 @@ Renewal by default tries to use the certificate authority in the local PKI manag After renewal, in order to make changes effective, is required to restart control-plane components and eventually re-distribute the renewed certificate in case the file is used elsewhere. ``` -kubeadm alpha certs renew scheduler.conf [flags] +kubeadm certs renew scheduler.conf [flags] ``` ### Options diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md index f84c62c01d..eaef0f5140 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-alpha.md @@ -1,7 +1,4 @@ --- -reviewers: -- luxas -- jbeda title: kubeadm alpha content_type: concept weight: 90 @@ -12,62 +9,6 @@ weight: 90 from the community. Please try it out and give us feedback! {{< /caution >}} -## kubeadm alpha certs {#cmd-certs} - -A collection of operations for operating Kubernetes certificates. - -{{< tabs name="tab-certs" >}} -{{< tab name="overview" include="generated/kubeadm_alpha_certs.md" />}} -{{< /tabs >}} - -## kubeadm alpha certs renew {#cmd-certs-renew} - -You can renew all Kubernetes certificates using the `all` subcommand or renew them selectively. -For more details about certificate expiration and renewal see the [certificate management documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/). 
- -{{< tabs name="tab-certs-renew" >}} -{{< tab name="renew" include="generated/kubeadm_alpha_certs_renew.md" />}} -{{< tab name="all" include="generated/kubeadm_alpha_certs_renew_all.md" />}} -{{< tab name="admin.conf" include="generated/kubeadm_alpha_certs_renew_admin.conf.md" />}} -{{< tab name="apiserver-etcd-client" include="generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md" />}} -{{< tab name="apiserver-kubelet-client" include="generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md" />}} -{{< tab name="apiserver" include="generated/kubeadm_alpha_certs_renew_apiserver.md" />}} -{{< tab name="controller-manager.conf" include="generated/kubeadm_alpha_certs_renew_controller-manager.conf.md" />}} -{{< tab name="etcd-healthcheck-client" include="generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md" />}} -{{< tab name="etcd-peer" include="generated/kubeadm_alpha_certs_renew_etcd-peer.md" />}} -{{< tab name="etcd-server" include="generated/kubeadm_alpha_certs_renew_etcd-server.md" />}} -{{< tab name="front-proxy-client" include="generated/kubeadm_alpha_certs_renew_front-proxy-client.md" />}} -{{< tab name="scheduler.conf" include="generated/kubeadm_alpha_certs_renew_scheduler.conf.md" />}} -{{< /tabs >}} - -## kubeadm alpha certs certificate-key {#cmd-certs-certificate-key} - -This command can be used to generate a new control-plane certificate key. -The key can be passed as `--certificate-key` to `kubeadm init` and `kubeadm join` -to enable the automatic copy of certificates when joining additional control-plane nodes. - -{{< tabs name="tab-certs-certificate-key" >}} -{{< tab name="certificate-key" include="generated/kubeadm_alpha_certs_certificate-key.md" />}} -{{< /tabs >}} - -## kubeadm alpha certs generate-csr {#cmd-certs-generate-csr} - -This command can be used to generate certificate signing requests (CSRs) which -can be submitted to a certificate authority (CA) for signing. - -{{< tabs name="tab-certs-generate-csr" >}} -{{< tab name="certificate-generate-csr" include="generated/kubeadm_alpha_certs_generate-csr.md" />}} -{{< /tabs >}} - -## kubeadm alpha certs check-expiration {#cmd-certs-check-expiration} - -This command checks expiration for the certificates in the local PKI managed by kubeadm. -For more details about certificate expiration and renewal see the [certificate management documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/). - -{{< tabs name="tab-certs-check-expiration" >}} -{{< tab name="check-expiration" include="generated/kubeadm_alpha_certs_check-expiration.md" />}} -{{< /tabs >}} - ## kubeadm alpha kubeconfig user {#cmd-phase-kubeconfig} The `user` subcommand can be used for the creation of kubeconfig files for additional users. diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-certs.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-certs.md new file mode 100644 index 0000000000..3bce10ccf0 --- /dev/null +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-certs.md @@ -0,0 +1,73 @@ +--- +title: kubeadm certs +content_type: concept +weight: 90 +--- + +`kubeadm certs` provides utilities for managing certificates. +For more details on how these commands can be used, see +[Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/). + +## kubeadm certs {#cmd-certs} + +A collection of operations for operating Kubernetes certificates. 
+ +{{< tabs name="tab-certs" >}} +{{< tab name="overview" include="generated/kubeadm_certs.md" />}} +{{< /tabs >}} + +## kubeadm certs renew {#cmd-certs-renew} + +You can renew all Kubernetes certificates using the `all` subcommand or renew them selectively. +For more details see [Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal). + +{{< tabs name="tab-certs-renew" >}} +{{< tab name="renew" include="generated/kubeadm_certs_renew.md" />}} +{{< tab name="all" include="generated/kubeadm_certs_renew_all.md" />}} +{{< tab name="admin.conf" include="generated/kubeadm_certs_renew_admin.conf.md" />}} +{{< tab name="apiserver-etcd-client" include="generated/kubeadm_certs_renew_apiserver-etcd-client.md" />}} +{{< tab name="apiserver-kubelet-client" include="generated/kubeadm_certs_renew_apiserver-kubelet-client.md" />}} +{{< tab name="apiserver" include="generated/kubeadm_certs_renew_apiserver.md" />}} +{{< tab name="controller-manager.conf" include="generated/kubeadm_certs_renew_controller-manager.conf.md" />}} +{{< tab name="etcd-healthcheck-client" include="generated/kubeadm_certs_renew_etcd-healthcheck-client.md" />}} +{{< tab name="etcd-peer" include="generated/kubeadm_certs_renew_etcd-peer.md" />}} +{{< tab name="etcd-server" include="generated/kubeadm_certs_renew_etcd-server.md" />}} +{{< tab name="front-proxy-client" include="generated/kubeadm_certs_renew_front-proxy-client.md" />}} +{{< tab name="scheduler.conf" include="generated/kubeadm_certs_renew_scheduler.conf.md" />}} +{{< /tabs >}} + +## kubeadm certs certificate-key {#cmd-certs-certificate-key} + +This command can be used to generate a new control-plane certificate key. +The key can be passed as `--certificate-key` to [`kubeadm init`](/docs/reference/setup-tools/kubeadm/kubeadm-init) +and [`kubeadm join`](/docs/reference/setup-tools/kubeadm/kubeadm-join) +to enable the automatic copy of certificates when joining additional control-plane nodes. + +{{< tabs name="tab-certs-certificate-key" >}} +{{< tab name="certificate-key" include="generated/kubeadm_certs_certificate-key.md" />}} +{{< /tabs >}} + +## kubeadm certs check-expiration {#cmd-certs-check-expiration} + +This command checks expiration for the certificates in the local PKI managed by kubeadm. +For more details see +[Check certificate expiration](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration). + +{{< tabs name="tab-certs-check-expiration" >}} +{{< tab name="check-expiration" include="generated/kubeadm_certs_check-expiration.md" />}} +{{< /tabs >}} + +## kubeadm certs generate-csr {#cmd-certs-generate-csr} + +This command can be used to generate keys and CSRs for all control-plane certificates and kubeconfig files. +The user can then sign the CSRs with a CA of their choice. 
+ +{{< tabs name="tab-certs-generate-csr" >}} +{{< tab name="generate-csr" include="generated/kubeadm_certs_generate-csr.md" />}} +{{< /tabs >}} + +## {{% heading "whatsnext" %}} + +* [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init/) to bootstrap a Kubernetes control-plane node +* [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to connect a node to the cluster +* [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made to this host by `kubeadm init` or `kubeadm join` diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 7a210ba5de..3d4b977102 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -178,7 +178,7 @@ If the flag `--certificate-key` is not passed to `kubeadm init` and The following command can be used to generate a new key on demand: ```shell -kubeadm alpha certs certificate-key +kubeadm certs certificate-key ``` ### Certificate management with kubeadm @@ -246,7 +246,7 @@ or use a DNS name or an address of a load balancer. nodes. The key can be generated using: ```shell - kubeadm alpha certs certificate-key + kubeadm certs certificate-key ``` Once the cluster is up, you can grab the admin credentials from the control-plane node diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md index b8c3236b73..e387e2d41c 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md @@ -133,10 +133,10 @@ option. Your cluster requirements may need a different configuration. ... You can now join any number of control-plane node by running the following command on each as a root: kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07 - + Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward. - + Then you can join any number of worker nodes by running the following on each as root: kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 ``` @@ -155,7 +155,7 @@ option. Your cluster requirements may need a different configuration. To generate such a key you can use the following command: ```sh - kubeadm alpha certs certificate-key + kubeadm certs certificate-key ``` {{< note >}} diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index abd072b5a3..d9d8a5929e 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -52,7 +52,7 @@ setting up a cluster to use an external CA. 
You can use the `check-expiration` subcommand to check when certificates expire: ``` -kubeadm alpha certs check-expiration +kubeadm certs check-expiration ``` The output is similar to this: @@ -120,7 +120,7 @@ command. In that case, you should explicitly set `--certificate-renewal=true`. ## Manual certificate renewal -You can renew your certificates manually at any time with the `kubeadm alpha certs renew` command. +You can renew your certificates manually at any time with the `kubeadm certs renew` command. This command performs the renewal using CA (or front-proxy-CA) certificate and key stored in `/etc/kubernetes/pki`. @@ -129,10 +129,10 @@ If you are running an HA cluster, this command needs to be executed on all the c {{< /warning >}} {{< note >}} -`alpha certs renew` uses the existing certificates as the authoritative source for attributes (Common Name, Organization, SAN, etc.) instead of the kubeadm-config ConfigMap. It is strongly recommended to keep them both in sync. +`certs renew` uses the existing certificates as the authoritative source for attributes (Common Name, Organization, SAN, etc.) instead of the kubeadm-config ConfigMap. It is strongly recommended to keep them both in sync. {{< /note >}} -`kubeadm alpha certs renew` provides the following options: +`kubeadm certs renew` provides the following options: The Kubernetes certificates normally reach their expiration date after one year. @@ -170,14 +170,14 @@ controllerManager: ### Create certificate signing requests (CSR) -You can create the certificate signing requests for the Kubernetes certificates API with `kubeadm alpha certs renew --use-api`. +You can create the certificate signing requests for the Kubernetes certificates API with `kubeadm certs renew --use-api`. If you set up an external signer such as [cert-manager](https://github.com/jetstack/cert-manager), certificate signing requests (CSRs) are automatically approved. Otherwise, you must manually approve certificates with the [`kubectl certificate`](/docs/setup/best-practices/certificates/) command. The following kubeadm command outputs the name of the certificate to approve, then blocks and waits for approval to occur: ```shell -sudo kubeadm alpha certs renew apiserver --use-api & +sudo kubeadm certs renew apiserver --use-api & ``` The output is similar to this: ``` @@ -211,13 +211,13 @@ In kubeadm terms, any certificate that would normally be signed by an on-disk CA ### Create certificate signing requests (CSR) -You can create certificate signing requests with `kubeadm alpha certs renew --csr-only`. +You can create certificate signing requests with `kubeadm certs renew --csr-only`. Both the CSR and the accompanying private key are given in the output. You can pass in a directory with `--csr-dir` to output the CSRs to the specified location. If `--csr-dir` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used. -Certificates can be renewed with `kubeadm alpha certs renew --csr-only`. +Certificates can be renewed with `kubeadm certs renew --csr-only`. As with `kubeadm init`, an output directory can be specified with the `--csr-dir` flag. A CSR contains a certificate's name, domains, and IPs, but it does not specify usages. 
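As a rough sketch of the workflow described above: once `kubeadm certs renew --csr-only` has produced a PEM-encoded CSR, one possible way to hand it to a signer is to wrap it in a CertificateSigningRequest object. The object name, the `signerName`, and the usages below are assumptions to adapt to your own CA setup, and the `request` field must carry your actual base64-encoded CSR.

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: apiserver-renewal              # illustrative name
spec:
  # placeholder: base64-encoded contents of the CSR file generated by kubeadm
  request: <base64-encoded-CSR>
  signerName: example.com/cluster-ca   # assumption: the signer your external CA watches
  usages:
  - digital signature
  - key encipherment
  - server auth
```

Whichever signer issues the certificate, the result is then saved back alongside the key file with the `.crt` extension, as described earlier.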
From e62b6e1b189f3c6830bb943cdbe8396cb35357d2 Mon Sep 17 00:00:00 2001 From: Xing Yang Date: Fri, 13 Nov 2020 08:08:24 -0500 Subject: [PATCH 39/74] Add doc for snapshot GA (#24849) --- .../en/docs/concepts/storage/persistent-volumes.md | 8 +++----- .../concepts/storage/volume-snapshot-classes.md | 4 ++-- .../en/docs/concepts/storage/volume-snapshots.md | 14 +++++++------- .../command-line-tools-reference/feature-gates.md | 5 +++-- 4 files changed, 15 insertions(+), 16 deletions(-) diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index cf915bfa52..1c829858fd 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -723,12 +723,10 @@ Only statically provisioned volumes are supported for alpha release. Administrat ## Volume Snapshot and Restore Volume from Snapshot Support -{{< feature-state for_k8s_version="v1.17" state="beta" >}} +{{< feature-state for_k8s_version="v1.20" state="stable" >}} -Volume snapshot feature was added to support CSI Volume Plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/). - -To enable support for restoring a volume from a volume snapshot data source, enable the -`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager. +Volume snapshots only support the out-of-tree CSI volume plugins. For details, see [Volume Snapshots](/docs/concepts/storage/volume-snapshots/). +In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the [Volume Plugin FAQ] (https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md). ### Create a PersistentVolumeClaim from a Volume Snapshot {#create-persistent-volume-claim-from-volume-snapshot} diff --git a/content/en/docs/concepts/storage/volume-snapshot-classes.md b/content/en/docs/concepts/storage/volume-snapshot-classes.md index f3b7025270..06382e5fba 100644 --- a/content/en/docs/concepts/storage/volume-snapshot-classes.md +++ b/content/en/docs/concepts/storage/volume-snapshot-classes.md @@ -40,7 +40,7 @@ of a class when first creating VolumeSnapshotClass objects, and the objects cann be updated once they are created. ```yaml -apiVersion: snapshot.storage.k8s.io/v1beta1 +apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snapclass @@ -54,7 +54,7 @@ that don't request any particular class to bind to by adding the `snapshot.storage.kubernetes.io/is-default-class: "true"` annotation: ```yaml -apiVersion: snapshot.storage.k8s.io/v1beta1 +apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snapclass diff --git a/content/en/docs/concepts/storage/volume-snapshots.md b/content/en/docs/concepts/storage/volume-snapshots.md index a93ea27e05..7673ecf2b4 100644 --- a/content/en/docs/concepts/storage/volume-snapshots.md +++ b/content/en/docs/concepts/storage/volume-snapshots.md @@ -13,7 +13,6 @@ weight: 20 -{{< feature-state for_k8s_version="v1.17" state="beta" >}} In Kubernetes, a _VolumeSnapshot_ represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes [persistent volumes](/docs/concepts/storage/persistent-volumes/). 
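The "Create a PersistentVolumeClaim from a Volume Snapshot" section touched above has no example visible in this patch, so here is a rough sketch of such a claim. The claim name, StorageClass, and requested size are illustrative assumptions; `new-snapshot-test` matches the VolumeSnapshot example that appears later in this patch.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                 # illustrative name
spec:
  storageClassName: csi-hostpath-sc  # assumption: a StorageClass backed by a snapshot-capable CSI driver
  dataSource:
    name: new-snapshot-test          # the VolumeSnapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                  # illustrative size
```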
@@ -37,7 +36,8 @@ Users need to be aware of the following when using this feature: * API Objects `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass` are {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRDs" >}}, not part of the core API. * `VolumeSnapshot` support is only available for CSI drivers. -* As part of the deployment process in the beta version of `VolumeSnapshot`, the Kubernetes team provides a snapshot controller to be deployed into the control plane, and a sidecar helper container called csi-snapshotter to be deployed together with the CSI driver. The snapshot controller watches `VolumeSnapshot` and `VolumeSnapshotContent` objects and is responsible for the creation and deletion of `VolumeSnapshotContent` object in dynamic provisioning. The sidecar csi-snapshotter watches `VolumeSnapshotContent` objects and triggers `CreateSnapshot` and `DeleteSnapshot` operations against a CSI endpoint. +* As part of the deployment process of `VolumeSnapshot`, the Kubernetes team provides a snapshot controller to be deployed into the control plane, and a sidecar helper container called csi-snapshotter to be deployed together with the CSI driver. The snapshot controller watches `VolumeSnapshot` and `VolumeSnapshotContent` objects and is responsible for the creation and deletion of `VolumeSnapshotContent` object. The sidecar csi-snapshotter watches `VolumeSnapshotContent` objects and triggers `CreateSnapshot` and `DeleteSnapshot` operations against a CSI endpoint. +* There is also a validating webhook server which provides tightened validation on snapshot objects. This should be installed by the Kubernetes distros along with the snapshot controller and CRDs, not CSI drivers. It should be installed in all Kubernetes clusters that has the snapshot feature enabled. * CSI drivers may or may not have implemented the volume snapshot functionality. The CSI drivers that have provided support for volume snapshot will likely use the csi-snapshotter. See [CSI Driver documentation](https://kubernetes-csi.github.io/docs/) for details. * The CRDs and snapshot controller installations are the responsibility of the Kubernetes distribution. @@ -78,7 +78,7 @@ Deletion is triggered by deleting the `VolumeSnapshot` object, and the `Deletion Each VolumeSnapshot contains a spec and a status. ```yaml -apiVersion: snapshot.storage.k8s.io/v1beta1 +apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: new-snapshot-test @@ -97,7 +97,7 @@ using the attribute `volumeSnapshotClassName`. If nothing is set, then the defau For pre-provisioned snapshots, you need to specify a `volumeSnapshotContentName` as the source for the snapshot as shown in the following example. The `volumeSnapshotContentName` source field is required for pre-provisioned snapshots. ```yaml -apiVersion: snapshot.storage.k8s.io/v1beta1 +apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: test-snapshot @@ -111,7 +111,7 @@ spec: Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates `VolumeSnapshotContent` objects. Here is an example: ```yaml -apiVersion: snapshot.storage.k8s.io/v1beta1 +apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455 @@ -132,7 +132,7 @@ spec: For pre-provisioned snapshots, you (as cluster administrator) are responsible for creating the `VolumeSnapshotContent` object as follows. 
```yaml -apiVersion: snapshot.storage.k8s.io/v1beta1 +apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotContent metadata: name: new-snapshot-content-test @@ -154,4 +154,4 @@ You can provision a new volume, pre-populated with data from a snapshot, by usin the *dataSource* field in the `PersistentVolumeClaim` object. For more details, see -[Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support). \ No newline at end of file +[Volume Snapshot and Restore Volume from Snapshot](/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support). diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index ee08f65a98..67f27bc36c 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -159,8 +159,6 @@ different Kubernetes components. | `TopologyManager` | `false` | Alpha | 1.16 | | | `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 | | `ValidateProxyRedirects` | `true` | Beta | 1.14 | | -| `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | 1.16 | -| `VolumeSnapshotDataSource` | `true` | Beta | 1.17 | - | | `WindowsEndpointSliceProxying` | `false` | Alpha | 1.19 | | | `WindowsGMSA` | `false` | Alpha | 1.14 | | | `WindowsGMSA` | `true` | Beta | 1.16 | | @@ -315,6 +313,9 @@ different Kubernetes components. | `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 | | `TokenRequestProjection` | `true` | Beta | 1.12 | 1.19 | | `TokenRequestProjection` | `true` | GA | 1.20 | - | +| `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | 1.16 | +| `VolumeSnapshotDataSource` | `true` | Beta | 1.17 | 1.19 | +| `VolumeSnapshotDataSource` | `true` | GA | 1.20 | - | | `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 | | `VolumePVCDataSource` | `true` | Beta | 1.16 | 1.17 | | `VolumePVCDataSource` | `true` | GA | 1.18 | - | From 4b95114b63963a5268da4a45dc9ac16bb9b890e7 Mon Sep 17 00:00:00 2001 From: Christian Huffman Date: Fri, 13 Nov 2020 12:18:47 -0500 Subject: [PATCH 40/74] Move CSIVolumeFSGroupPolicy to beta --- .../reference/command-line-tools-reference/feature-gates.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 67f27bc36c..2adfade6b7 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -81,7 +81,8 @@ different Kubernetes components. 
| `CSIMigrationvSphereComplete` | `false` | Beta | 1.19 | | | `CSIServiceAccountToken` | `false` | Alpha | 1.20 | | | `CSIStorageCapacity` | `false` | Alpha | 1.19 | | -| `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | | +| `CSIVolumeFSGroupPolicy` | `false` | Alpha | 1.19 | 1.19 | +| `CSIVolumeFSGroupPolicy` | `true` | Beta | 1.20 | | | `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 | | `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | | | `CronJobControllerV2` | `false` | Alpha | 1.20 | | From bf23ba2aa94ce5e268271e30d2ae27e2e1ed53da Mon Sep 17 00:00:00 2001 From: Mike Spreitzer Date: Sat, 14 Nov 2020 04:55:04 -0500 Subject: [PATCH 41/74] Update API Priority and Fairness doc for graduatino to beta (#24975) --- .../cluster-administration/flow-control.md | 38 +++++++++++-------- .../feature-gates.md | 3 +- .../health-for-strangers.yaml | 2 +- 3 files changed, 25 insertions(+), 18 deletions(-) diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index 8ec8d5f8da..1a28d734ac 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -6,7 +6,7 @@ min-kubernetes-server-version: v1.18 -{{< feature-state state="alpha" for_k8s_version="v1.18" >}} +{{< feature-state state="beta" for_k8s_version="v1.20" >}} Controlling the behavior of the Kubernetes API server in an overload situation is a key task for cluster administrators. The {{< glossary_tooltip @@ -37,25 +37,30 @@ Fairness feature enabled. -## Enabling API Priority and Fairness +## Enabling/Disabling API Priority and Fairness The API Priority and Fairness feature is controlled by a feature gate -and is not enabled by default. See +and is enabled by default. See [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/) -for a general explanation of feature gates and how to enable and disable them. The -name of the feature gate for APF is "APIPriorityAndFairness". This -feature also involves an {{< glossary_tooltip term_id="api-group" -text="API Group" >}} that must be enabled. You can do these -things by adding the following command-line flags to your -`kube-apiserver` invocation: +for a general explanation of feature gates and how to enable and +disable them. The name of the feature gate for APF is +"APIPriorityAndFairness". This feature also involves an {{< +glossary_tooltip term_id="api-group" text="API Group" >}} with: (a) a +`v1alpha1` version, disabled by default, and (b) a `v1beta1` +version, enabled by default. You can disable the feature +gate and API group v1beta1 version by adding the following +command-line flags to your `kube-apiserver` invocation: ```shell kube-apiserver \ ---feature-gates=APIPriorityAndFairness=true \ ---runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true \ +--feature-gates=APIPriorityAndFairness=false \ +--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=false \ # …and other flags as usual ``` +Alternatively, you can enable the v1alpha1 version of the API group +with `--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=true`. + The command-line flag `--enable-priority-and-fairness=false` will disable the API Priority and Fairness feature, even if other flags have enabled it. @@ -189,12 +194,14 @@ that originate from outside your cluster. ## Resources The flow control API involves two kinds of resources. 
-[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io) +[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1beta1-flowcontrol-apiserver-k8s-io) define the available isolation classes, the share of the available concurrency budget that each can handle, and allow for fine-tuning queuing behavior. -[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io) -are used to classify individual inbound requests, matching each to a single -PriorityLevelConfiguration. +[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1beta1-flowcontrol-apiserver-k8s-io) +are used to classify individual inbound requests, matching each to a +single PriorityLevelConfiguration. There is also a `v1alpha1` version +of the same API group, and it has the same Kinds with the same syntax and +semantics. ### PriorityLevelConfiguration A PriorityLevelConfiguration represents a single isolation class. Each @@ -522,4 +529,3 @@ For background information on design details for API priority and fairness, see the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md). You can make suggestions and feature requests via [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery). - diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 2adfade6b7..dfc56430d2 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -51,7 +51,8 @@ different Kubernetes components. 
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | | | `APIListChunking` | `false` | Alpha | 1.8 | 1.8 | | `APIListChunking` | `true` | Beta | 1.9 | | -| `APIPriorityAndFairness` | `false` | Alpha | 1.17 | | +| `APIPriorityAndFairness` | `false` | Alpha | 1.17 | 1.19 | +| `APIPriorityAndFairness` | `true` | Beta | 1.20 | | | `APIResponseCompression` | `false` | Alpha | 1.7 | | | `AppArmor` | `true` | Beta | 1.4 | | | `BalanceAttachedNodeVolumes` | `false` | Alpha | 1.11 | | diff --git a/content/en/examples/priority-and-fairness/health-for-strangers.yaml b/content/en/examples/priority-and-fairness/health-for-strangers.yaml index 79ee80ab17..ec74077bbd 100644 --- a/content/en/examples/priority-and-fairness/health-for-strangers.yaml +++ b/content/en/examples/priority-and-fairness/health-for-strangers.yaml @@ -1,4 +1,4 @@ -apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1 +apiVersion: flowcontrol.apiserver.k8s.io/v1beta1 kind: FlowSchema metadata: name: health-for-strangers From d91e7f094a967cd8eb623d9b1e6d0017633246c5 Mon Sep 17 00:00:00 2001 From: Laszlo Janosi Date: Tue, 3 Nov 2020 19:17:03 +0000 Subject: [PATCH 42/74] Document the use of mixed protocol values for LoadBalancer Type of Services --- .../concepts/services-networking/service.md | 21 +++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 31dd0aca5c..f4f82e71b0 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -578,10 +578,6 @@ status: Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced. -For LoadBalancer type of Services, when there is more than one port defined, all -ports must have the same protocol, and the protocol must be one which is supported -by the cloud provider. - Some cloud providers allow you to specify the `loadBalancerIP`. In those cases, the load-balancer is created with the user-specified `loadBalancerIP`. If the `loadBalancerIP` field is not specified, the loadBalancer is set up with an ephemeral IP address. If you specify a `loadBalancerIP` @@ -599,6 +595,23 @@ Specify the assigned IP address as loadBalancerIP. Ensure that you have updated {{< /note >}} +#### Load balancers with mixed protocol types + +{{< feature-state for_k8s_version="v1.20" state="alpha" >}} + +By default, for LoadBalancer type of Services, when there is more than one port defined, all +ports must have the same protocol, and the protocol must be one which is supported +by the cloud provider. + +If the feature gate `MixedProtocolLBService` is enabled for the kube-apiserver it is allowed to use different protocols when there is more than one port defined. + +{{< note >}} + +The set of protocols that can be used for LoadBalancer type of Services is still defined by the cloud provider. 
+ +{{< /note >}} + + #### Internal load balancer In a mixed environment it is sometimes necessary to route traffic from Services inside the same From ebf1a6148d0b40f5f5f83613b5c0e12e96615bd0 Mon Sep 17 00:00:00 2001 From: Laszlo Janosi Date: Sat, 14 Nov 2020 15:37:24 +0000 Subject: [PATCH 43/74] Add the MixedProtocolLBService to the feature gate list --- .../docs/reference/command-line-tools-reference/feature-gates.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index dfc56430d2..5bbc1015b8 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -125,6 +125,7 @@ different Kubernetes components. | `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 | | `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | | | `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | | +| `MixedProtocolLBService` | `false` | Alpha | 1.20 | | | `MountContainers` | `false` | Alpha | 1.9 | | | `NodeDisruptionExclusion` | `false` | Alpha | 1.16 | 1.18 | | `NodeDisruptionExclusion` | `true` | Beta | 1.19 | | From dd402eff767e16d940d4ff29eebc1c029adf3be7 Mon Sep 17 00:00:00 2001 From: "Lubomir I. Ivanov" Date: Thu, 5 Nov 2020 19:44:10 +0200 Subject: [PATCH 44/74] kubeadm: remove general output from "kubeadm init" Leave only the message: "Your Kubernetes control-plane has initialized..." and the details bellow it. --- .../tools/kubeadm/create-cluster-kubeadm.md | 51 +------------------ 1 file changed, 1 insertion(+), 50 deletions(-) diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md index daf4aaec1a..9c6acf5560 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md @@ -151,58 +151,9 @@ have container image support for this architecture. `kubeadm init` first runs a series of prechecks to ensure that the machine is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init` then downloads and installs the cluster control plane components. This may take several minutes. 
-The output should look like: +After it finishes you should see: ```none -[init] Using Kubernetes version: vX.Y.Z -[preflight] Running pre-flight checks -[preflight] Pulling images required for setting up a Kubernetes cluster -[preflight] This might take a minute or two, depending on the speed of your internet connection -[preflight] You can also perform this action in beforehand using 'kubeadm config images pull' -[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" -[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" -[kubelet-start] Activating the kubelet service -[certs] Using certificateDir folder "/etc/kubernetes/pki" -[certs] Generating "etcd/ca" certificate and key -[certs] Generating "etcd/server" certificate and key -[certs] etcd/server serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1] -[certs] Generating "etcd/healthcheck-client" certificate and key -[certs] Generating "etcd/peer" certificate and key -[certs] etcd/peer serving cert is signed for DNS names [kubeadm-cp localhost] and IPs [10.138.0.4 127.0.0.1 ::1] -[certs] Generating "apiserver-etcd-client" certificate and key -[certs] Generating "ca" certificate and key -[certs] Generating "apiserver" certificate and key -[certs] apiserver serving cert is signed for DNS names [kubeadm-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4] -[certs] Generating "apiserver-kubelet-client" certificate and key -[certs] Generating "front-proxy-ca" certificate and key -[certs] Generating "front-proxy-client" certificate and key -[certs] Generating "sa" key and public key -[kubeconfig] Using kubeconfig folder "/etc/kubernetes" -[kubeconfig] Writing "admin.conf" kubeconfig file -[kubeconfig] Writing "kubelet.conf" kubeconfig file -[kubeconfig] Writing "controller-manager.conf" kubeconfig file -[kubeconfig] Writing "scheduler.conf" kubeconfig file -[control-plane] Using manifest folder "/etc/kubernetes/manifests" -[control-plane] Creating static Pod manifest for "kube-apiserver" -[control-plane] Creating static Pod manifest for "kube-controller-manager" -[control-plane] Creating static Pod manifest for "kube-scheduler" -[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" -[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s -[apiclient] All control plane components are healthy after 31.501735 seconds -[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace -[kubelet] Creating a ConfigMap "kubelet-config-X.Y" in namespace kube-system with the configuration for the kubelets in the cluster -[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm-cp" as an annotation -[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the label "node-role.kubernetes.io/master=''" -[mark-control-plane] Marking the node kubeadm-cp as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] -[bootstrap-token] Using token: -[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles -[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials -[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token -[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster -[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace -[addons] Applied essential addon: CoreDNS -[addons] Applied essential addon: kube-proxy - Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: From b9cd9ddaa4c5242b7dc0e10e177d3490c593592a Mon Sep 17 00:00:00 2001 From: Shihang Zhang Date: Fri, 13 Nov 2020 14:56:18 -0800 Subject: [PATCH 45/74] add description for CSIServiceAccountToken --- .../docs/reference/command-line-tools-reference/feature-gates.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index dfc56430d2..ebafca2a16 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -430,6 +430,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `CSIPersistentVolume`: Enable discovering and mounting volumes provisioned through a [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) compatible volume plugin. +- `CSIServiceAccountToken`: Enable CSI drivers to receive the pods' service account token that they mount volumes for. See [Token Requests](https://kubernetes-csi.github.io/docs/token-requests.html). - `CSIStorageCapacity`: Enables CSI drivers to publish storage capacity information and the Kubernetes scheduler to use that information when scheduling pods. See [Storage Capacity](/docs/concepts/storage/storage-capacity/). Check the [`csi` volume type](/docs/concepts/storage/volumes/#csi) documentation for more details. - `CSIVolumeFSGroupPolicy`: Allows CSIDrivers to use the `fsGroupPolicy` field. This field controls whether volumes created by a CSIDriver support volume ownership and permission modifications when these volumes are mounted. 
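For illustration, the `fsGroupPolicy` setting described above is declared on the CSIDriver object itself. A minimal sketch (the driver name `hostpath.csi.k8s.io` is only a placeholder; use the name your CSI driver registers) could look like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.k8s.io  # placeholder driver name
spec:
  # File: Kubernetes may change ownership and permissions of the volume
  # to match the Pod's fsGroup, regardless of fstype or access mode.
  fsGroupPolicy: File
```

As of the beta of this feature, the accepted values are `ReadWriteOnceWithFSType` (the default), `File`, and `None`.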
From c640aee6031bfaf1db660c063416a463419e6dd7 Mon Sep 17 00:00:00 2001 From: Laszlo Janosi Date: Tue, 17 Nov 2020 19:49:17 +0000 Subject: [PATCH 46/74] explain the new MixedProtocolLBService feature flag --- .../docs/reference/command-line-tools-reference/feature-gates.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 5bbc1015b8..664bf7356b 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -500,6 +500,7 @@ Each feature gate is designed for enabling/disabling a specific feature: - `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the feature-specific labels provided by `NodeDisruptionExclusion` and `ServiceNodeExclusion`. - `LocalStorageCapacityIsolation`: Enable the consumption of [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and also the `sizeLimit` property of an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir). - `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation` is enabled for [local ephemeral storage](/docs/concepts/configuration/manage-resources-containers/) and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy. +- `MixedProtocolLBService`: Enable using different protocols in the same LoadBalancer type Service instance. - `MountContainers`: Enable using utility containers on host as the volume mounter. - `MountPropagation`: Enable sharing volume mounted by one container to other containers or pods. For more details, please see [mount propagation](/docs/concepts/storage/volumes/#mount-propagation). From 8d96fcb42354b6cd70594f828ae4af7c4881a04b Mon Sep 17 00:00:00 2001 From: Jordan Liggitt Date: Tue, 17 Nov 2020 17:38:06 -0500 Subject: [PATCH 47/74] Update GC cross-namespace note --- .../controllers/garbage-collection.md | 21 ++++++++++++++----- 1 file changed, 16 insertions(+), 5 deletions(-) diff --git a/content/en/docs/concepts/workloads/controllers/garbage-collection.md b/content/en/docs/concepts/workloads/controllers/garbage-collection.md index 93a3c912fb..0e9b4f746a 100644 --- a/content/en/docs/concepts/workloads/controllers/garbage-collection.md +++ b/content/en/docs/concepts/workloads/controllers/garbage-collection.md @@ -59,11 +59,22 @@ metadata: ``` {{< note >}} -Cross-namespace owner references are disallowed by design. This means: -1) Namespace-scoped dependents can only specify owners in the same namespace, -and owners that are cluster-scoped. -2) Cluster-scoped dependents can only specify cluster-scoped owners, but not -namespace-scoped owners. +Cross-namespace owner references are disallowed by design. + +Namespaced dependents can specify cluster-scoped or namespaced owners. +A namespaced owner **must** exist in the same namespace as the dependent. +If it does not, the owner reference is treated as absent, and the dependent +is subject to deletion once all owners are verified absent. 
+ +Cluster-scoped dependents can only specify cluster-scoped owners. +In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner, +it is treated as having an unresolveable owner reference, and is not able to be garbage collected. + +In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`, +or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event +with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported. +You can check for that kind of Event by running +`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`. {{< /note >}} ## Controlling how the garbage collector deletes dependents From d046f6d5a47a3607beb9f060cfadb18222d60c25 Mon Sep 17 00:00:00 2001 From: "Lubomir I. Ivanov" Date: Wed, 18 Nov 2020 03:15:52 +0200 Subject: [PATCH 48/74] layouts/shortcodes/skew.html: add latestVersionAddMinor For site.Params.latest equals v1.20, The following shortcode: skew latestVersionAddMinor N "-" would return: 1-(20+N) negative N are supported. --- layouts/shortcodes/skew.html | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/layouts/shortcodes/skew.html b/layouts/shortcodes/skew.html index 16734ea33a..13f44091b9 100644 --- a/layouts/shortcodes/skew.html +++ b/layouts/shortcodes/skew.html @@ -43,10 +43,22 @@ {{- $oldestMinorVersion -}} {{- end -}} + +{{- if eq $version "latestVersionAddMinor" -}} + {{- $seperator := .Get 2 -}} + {{- if eq $seperator "" -}} + {{- $seperator = "." -}} + {{- end -}} + {{- $latestVersionAddMinor := int (.Get 1) -}} + {{- $latestVersionAddMinor = add $minorVersion $latestVersionAddMinor -}} + {{- $latestVersionAddMinor = printf "%s%s%d" (index $versionArray 0) $seperator $latestVersionAddMinor -}} + {{- $latestVersionAddMinor -}} +{{- end -}} \ No newline at end of file From b675adf796c3d88d6f5fc3804d920be3d842703e Mon Sep 17 00:00:00 2001 From: "Lubomir I. Ivanov" Date: Thu, 5 Nov 2020 19:56:54 +0200 Subject: [PATCH 49/74] kubeadm: upgrade the upgrade documentation for 1.20 Update the kubeadm upgrade documentation: - Re-order the steps to drain / upgrade kubelet/kubectl / uncordon - Remove the detailed output from apply and plan. - Use {{ skew ... }} markers so that we can avoid more verbose changes to the document. - Apply minor / general cleanup (whitespace, missing ':') --- .../kubeadm/kubeadm-upgrade.md | 305 ++++++------------ 1 file changed, 92 insertions(+), 213 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index f58b689f72..0164d5aea1 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -4,85 +4,88 @@ reviewers: title: Upgrading kubeadm clusters content_type: task weight: 20 -min-kubernetes-server-version: 1.19 --- This page explains how to upgrade a Kubernetes cluster created with kubeadm from version -1.18.x to version 1.19.x, and from version 1.19.x to 1.19.y (where `y > x`). +{{< skew latestVersionAddMinor -1 >}}.x to version {{< skew latestVersion >}}.x, and from version +{{< skew latestVersion >}}.x to {{< skew latestVersion >}}.y (where `y > x`). Skipping MINOR versions +when upgrading is unsupported. 
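As a quick sketch (an illustrative aside, not an exhaustive pre-flight check), you can confirm the versions you are starting from before planning the upgrade:

```shell
# run on a control plane node
kubeadm version
kubectl version --short
kubectl get nodes
```

The reported server and node versions tell you which of the upgrade pages below applies to your cluster.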
To see information about upgrading clusters created using older versions of kubeadm, please refer to following pages instead: -- [Upgrading kubeadm cluster from 1.17 to 1.18](https://v1-18.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) -- [Upgrading kubeadm cluster from 1.16 to 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) -- [Upgrading kubeadm cluster from 1.15 to 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) -- [Upgrading kubeadm cluster from 1.14 to 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/) -- [Upgrading kubeadm cluster from 1.13 to 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/) +- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -2 >}} to {{< skew latestVersionAddMinor -1 >}}](https://v{{< skew latestVersionAddMinor -1 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) +- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -3 >}} to {{< skew latestVersionAddMinor -2 >}}](https://v{{< skew latestVersionAddMinor -2 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) +- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -4 >}} to {{< skew latestVersionAddMinor -3 >}}](https://v{{< skew latestVersionAddMinor -3 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) +- [Upgrading a kubeadm cluster from {{< skew latestVersionAddMinor -5 >}} to {{< skew latestVersionAddMinor -4 >}}](https://v{{< skew latestVersionAddMinor -4 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) The upgrade workflow at high level is the following: -1. Upgrade the primary control plane node. +1. Upgrade a primary control plane node. 1. Upgrade additional control plane nodes. 1. Upgrade worker nodes. ## {{% heading "prerequisites" %}} -- You need to have a kubeadm Kubernetes cluster running version 1.18.0 or later. -- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux). -- The cluster should use a static control plane and etcd pods or external etcd. - Make sure you read the [release notes]({{< latest-release-notes >}}) carefully. +- The cluster should use a static control plane and etcd pods or external etcd. - Make sure to back up any important components, such as app-level state stored in a database. `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice. +- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux). ### Additional information +- [Draining nodes](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version + upgrades is required. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads. - All containers are restarted after upgrade, because the container spec hash value is changed. -- You only can upgrade from one MINOR version to the next MINOR version, - or between PATCH versions of the same MINOR. That is, you cannot skip MINOR versions when you upgrade. - For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2. 
## Determine which version to upgrade to -Find the latest stable 1.19 version: +Find the latest stable {{< skew latestVersion >}} version using the OS package manager: {{< tabs name="k8s_install_versions" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} apt update apt-cache madison kubeadm - # find the latest 1.19 version in the list - # it should look like 1.19.x-00, where x is the latest patch + # find the latest {{< skew latestVersion >}} version in the list + # it should look like {{< skew latestVersion >}}.x-00, where x is the latest patch {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes - # find the latest 1.19 version in the list - # it should look like 1.19.x-0, where x is the latest patch + # find the latest {{< skew latestVersion >}} version in the list + # it should look like {{< skew latestVersion >}}.x-0, where x is the latest patch {{% /tab %}} {{< /tabs >}} ## Upgrading control plane nodes -### Upgrade the first control plane node +The upgrade procedure on control plane nodes should be executed one node at a time. +Pick a control plane node that you wish to upgrade first. It must have the `/etc/kubernetes/admin.conf` file. -- On your first control plane node, upgrade kubeadm: +### Call "kubeadm upgrade" + +**For the first control plane node** + +- Upgrade kubeadm: {{< tabs name="k8s_install_kubeadm_first_cp" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.19.x-00 with the latest patch version + # replace x in {{< skew latestVersion >}}.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.19.x-00 && \ + apt-get update && apt-get install -y kubeadm={{< skew latestVersion >}}.x-00 && \ apt-mark hold kubeadm - # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00 + apt-get install -y --allow-change-held-packages kubeadm={{< skew latestVersion >}}.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.19.x-0 with the latest patch version - yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes + # replace x in {{< skew latestVersion >}}.x-0 with the latest patch version + yum install -y kubeadm-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -92,63 +95,10 @@ Find the latest stable 1.19 version: kubeadm version ``` -- Drain the control plane node: +- Verify the upgrade plan: ```shell - # replace with the name of your control plane node - kubectl drain --ignore-daemonsets - ``` - -- On the control plane node, run: - - ```shell - sudo kubeadm upgrade plan - ``` - - You should see output similar to this: - - ``` - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... - [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - [preflight] Running pre-flight checks. 
- [upgrade] Running cluster health checks - [upgrade] Fetching available versions to upgrade to - [upgrade/versions] Cluster version: v1.18.4 - [upgrade/versions] kubeadm version: v1.19.0 - [upgrade/versions] Latest stable version: v1.19.0 - [upgrade/versions] Latest version in the v1.18 series: v1.18.4 - - Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': - COMPONENT CURRENT AVAILABLE - Kubelet 1 x v1.18.4 v1.19.0 - - Upgrade to the latest version in the v1.18 series: - - COMPONENT CURRENT AVAILABLE - API Server v1.18.4 v1.19.0 - Controller Manager v1.18.4 v1.19.0 - Scheduler v1.18.4 v1.19.0 - Kube Proxy v1.18.4 v1.19.0 - CoreDNS 1.6.7 1.7.0 - Etcd 3.4.3-0 3.4.7-0 - - You can now apply the upgrade by executing the following command: - - kubeadm upgrade apply v1.19.0 - - _____________________________________________________________________ - - The table below shows the current state of component configs as understood by this version of kubeadm. - Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or - resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually - upgrade to is denoted in the "PREFERRED VERSION" column. - - API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED - kubeproxy.config.k8s.io v1alpha1 v1alpha1 no - kubelet.config.k8s.io v1beta1 v1beta1 no - _____________________________________________________________________ - + kubeadm upgrade plan ``` This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. @@ -170,90 +120,13 @@ Failing to do so will cause `kubeadm upgrade apply` to exit with an error and no ```shell # replace x with the patch version you picked for this upgrade - sudo kubeadm upgrade apply v1.19.x + sudo kubeadm upgrade apply v{{< skew latestVersion >}}.x ``` - - You should see output similar to this: + Once the command finishes you should see: ``` - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... - [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - [preflight] Running pre-flight checks. - [upgrade] Running cluster health checks - [upgrade/version] You have chosen to change the cluster version to "v1.19.0" - [upgrade/versions] Cluster version: v1.18.4 - [upgrade/versions] kubeadm version: v1.19.0 - [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y - [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster - [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection - [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull' - [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"... 
- Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 - Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2 - Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a - [upgrade/etcd] Upgrading to TLS for etcd - Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf - [upgrade/staticpods] Preparing for "etcd" upgrade - [upgrade/staticpods] Renewing etcd-server certificate - [upgrade/staticpods] Renewing etcd-peer certificate - [upgrade/staticpods] Renewing etcd-healthcheck-client certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/etcd.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf - Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf - Static pod: etcd-kind-control-plane hash: 59e40b2aab1cd7055e64450b5ee438f0 - [apiclient] Found 1 Pods for label selector component=etcd - [upgrade/staticpods] Component "etcd" upgraded successfully! - [upgrade/etcd] Waiting for etcd to become available - [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests999800980" - [upgrade/staticpods] Preparing for "kube-apiserver" upgrade - [upgrade/staticpods] Renewing apiserver certificate - [upgrade/staticpods] Renewing apiserver-kubelet-client certificate - [upgrade/staticpods] Renewing front-proxy-client certificate - [upgrade/staticpods] Renewing apiserver-etcd-client certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-apiserver.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 - Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 - Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 - Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003 - Static pod: kube-apiserver-kind-control-plane hash: f717874150ba572f020dcd89db8480fc - [apiclient] Found 1 Pods for label selector component=kube-apiserver - [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
- [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade - [upgrade/staticpods] Renewing controller-manager.conf certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-controller-manager.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2 - Static pod: kube-controller-manager-kind-control-plane hash: b155b63c70e798b806e64a866e297dd0 - [apiclient] Found 1 Pods for label selector component=kube-controller-manager - [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! - [upgrade/staticpods] Preparing for "kube-scheduler" upgrade - [upgrade/staticpods] Renewing scheduler.conf certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-scheduler.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a - Static pod: kube-scheduler-kind-control-plane hash: 260018ac854dbf1c9fe82493e88aec31 - [apiclient] Found 1 Pods for label selector component=kube-scheduler - [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! - [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace - [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster - [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" - [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes - [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials - [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token - [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster - W0713 16:26:14.074656 2986 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained. - [addons] Applied essential addon: CoreDNS - [addons] Applied essential addon: kube-proxy - - [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy! + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v{{< skew latestVersion >}}.x". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. ``` @@ -266,14 +139,7 @@ Failing to do so will cause `kubeadm upgrade apply` to exit with an error and no This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet. 
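As an aside, one quick way to check whether your CNI provider runs as a DaemonSet (a sketch; the namespace and object names depend on the provider) is:

```shell
kubectl get daemonsets --namespace kube-system
```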
-- Uncordon the control plane node: - - ```shell - # replace with the name of your control plane node - kubectl uncordon - ``` - -### Upgrade additional control plane nodes +**For the other control plane nodes** Same as the first control plane node but use: @@ -287,35 +153,57 @@ instead of: sudo kubeadm upgrade apply ``` -Also `sudo kubeadm upgrade plan` is not needed. +Also calling `kubeadm upgrade plan` and upgrading the CNI provider plugin is no longer needed. + +### Drain the node + +- Prepare the node for maintenance by marking it unschedulable and evicting the workloads: + + ```shell + # replace with the name of your node you are draining + kubectl drain --ignore-daemonsets + ``` ### Upgrade kubelet and kubectl -Upgrade the kubelet and kubectl on all control plane nodes: +- Upgrade the kubelet and kubectl {{< tabs name="k8s_install_kubelet" >}} -{{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.19.x-00 with the latest patch version +{{< tab name="Ubuntu, Debian or HypriotOS" >}} +
+    # replace x in {{< skew latestVersion >}}.x-00 with the latest patch version
     apt-mark unhold kubelet kubectl && \
-    apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \
+    apt-get update && apt-get install -y kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00 && \
     apt-mark hold kubelet kubectl
     -
     # since apt-get version 1.1 you can also use the following method
     apt-get update && \
-    apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00
-{{% /tab %}}
-{{% tab name="CentOS, RHEL or Fedora" %}}
-    # replace x in 1.19.x-0 with the latest patch version
-    yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
-{{% /tab %}}
+    apt-get install -y --allow-change-held-packages kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00
+    
+{{< /tab >}} +{{< tab name="CentOS, RHEL or Fedora" >}} +
+    # replace x in {{< skew latestVersion >}}.x-0 with the latest patch version
+    yum install -y kubelet-{{< skew latestVersion >}}.x-0 kubectl-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes
+    
+{{< /tab >}} {{< /tabs >}} -Restart the kubelet +- Restart the kubelet: -```shell -sudo systemctl daemon-reload -sudo systemctl restart kubelet -``` + ```shell + sudo systemctl daemon-reload + sudo systemctl restart kubelet + ``` + +### Uncordon the node + +- Bring the node back online by marking it schedulable: + + ```shell + # replace with the name of your node + kubectl uncordon + ``` ## Upgrade worker nodes @@ -324,22 +212,22 @@ without compromising the minimum required capacity for running your workloads. ### Upgrade kubeadm -- Upgrade kubeadm on all worker nodes: +- Upgrade kubeadm: {{< tabs name="k8s_install_kubeadm_worker_nodes" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.19.x-00 with the latest patch version + # replace x in {{< skew latestVersion >}}.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.19.x-00 && \ + apt-get update && apt-get install -y kubeadm={{< skew latestVersion >}}.x-00 && \ apt-mark hold kubeadm - # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00 + apt-get install -y --allow-change-held-packages kubeadm={{< skew latestVersion >}}.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.19.x-0 with the latest patch version - yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes + # replace x in {{< skew latestVersion >}}.x-0 with the latest patch version + yum install -y kubeadm-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -352,17 +240,9 @@ without compromising the minimum required capacity for running your workloads. kubectl drain --ignore-daemonsets ``` - You should see output similar to this: +### Call "kubeadm upgrade" - ``` - node/ip-172-31-85-18 cordoned - WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx - node/ip-172-31-85-18 drained - ``` - -### Upgrade the kubelet configuration - -- Call the following command: +- For worker nodes this upgrades the local kubelet configuration: ```shell sudo kubeadm upgrade node @@ -370,26 +250,26 @@ without compromising the minimum required capacity for running your workloads. 
### Upgrade kubelet and kubectl -- Upgrade the kubelet and kubectl on all worker nodes: +- Upgrade the kubelet and kubectl: {{< tabs name="k8s_kubelet_and_kubectl" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.19.x-00 with the latest patch version + # replace x in {{< skew latestVersion >}}.x-00 with the latest patch version apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \ + apt-get update && apt-get install -y kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00 && \ apt-mark hold kubelet kubectl - # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00 + apt-get install -y --allow-change-held-packages kubelet={{< skew latestVersion >}}.x-00 kubectl={{< skew latestVersion >}}.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.19.x-0 with the latest patch version - yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes + # replace x in {{< skew latestVersion >}}.x-0 with the latest patch version + yum install -y kubelet-{{< skew latestVersion >}}.x-0 kubectl-{{< skew latestVersion >}}.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} -- Restart the kubelet +- Restart the kubelet: ```shell sudo systemctl daemon-reload @@ -407,7 +287,8 @@ without compromising the minimum required capacity for running your workloads. ## Verify the status of the cluster -After the kubelet is upgraded on all nodes verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster: +After the kubelet is upgraded on all nodes verify that all nodes are available again by running the following command +from anywhere kubectl can access the cluster: ```shell kubectl get nodes @@ -415,8 +296,6 @@ kubectl get nodes The `STATUS` column should show `Ready` for all your nodes, and the version number should be updated. - - ## Recovering from a failure state If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, you can run `kubeadm upgrade` again. @@ -428,11 +307,11 @@ During upgrade kubeadm writes the following backup folders under `/etc/kubernete - `kubeadm-backup-etcd--