Merge branch 'main' into dev-1.23

pull/29657/head
Jesse Butler 2021-09-09 14:36:19 -04:00
commit 78ab3b9b0a
439 changed files with 10807 additions and 8038 deletions

OWNERS

@ -8,7 +8,9 @@ approvers:
emeritus_approvers:
# - chenopis, commented out to disable PR assignments
# - irvifa, commented out to disable PR assignments
# - jaredbhatti, commented out to disable PR assignments
# - kbarnard10, commented out to disable PR assignments
# - steveperry-53, commented out to disable PR assignments
- stewart-yu
# - zacharysarah, commented out to disable PR assignments


@ -1,10 +1,8 @@
aliases:
sig-docs-blog-owners: # Approvers for blog content
- kbarnard10
- onlydole
- mrbobbytables
sig-docs-blog-reviewers: # Reviewers for blog content
- kbarnard10
- mrbobbytables
- onlydole
- sftim
@ -20,9 +18,8 @@ aliases:
- annajung
- bradtopol
- celestehorgan
- irvifa
- jimangel
- kbarnard10
- jlbutler
- kbhawkey
- onlydole
- pi-victor
@ -35,7 +32,6 @@ aliases:
- celestehorgan
- daminisatya
- jimangel
- kbarnard10
- kbhawkey
- onlydole
- rajeshdeshpande02
@ -88,7 +84,6 @@ aliases:
- danninov
- girikuncoro
- habibrosyad
- irvifa
- phanama
- wahyuoi
sig-docs-id-reviews: # PR reviews for Indonesian content
@ -96,7 +91,6 @@ aliases:
- danninov
- girikuncoro
- habibrosyad
- irvifa
- phanama
- wahyuoi
sig-docs-it-owners: # Admins for Italian content
@ -133,14 +127,13 @@ aliases:
- gochist
- ianychoi
- jihoon-seo
- jmyung
- pjhwa
- seokho-son
- yoonian
- ysyukr
sig-docs-leads: # Website chairs and tech leads
- irvifa
- jimangel
- kbarnard10
- kbhawkey
- onlydole
- sftim
@ -256,4 +249,4 @@ aliases:
- sethmccombs # Release Manager Associate
- thejoycekung # Release Manager Associate
- verolop # Release Manager Associate
- wilsonehusin # Release Manager Associate
- wilsonehusin # Release Manager Associate


@ -18,7 +18,7 @@ Aby móc skorzystać z tego repozytorium, musisz lokalnie zainstalować:
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- Środowisko obsługi kontenerów, np. [Docker-a](https://www.docker.com/).
- Środowisko obsługi kontenerów, np. [Dockera](https://www.docker.com/).
Przed rozpoczęciem zainstaluj niezbędne zależności. Sklonuj repozytorium i przejdź do odpowiedniego katalogu:
@ -43,7 +43,9 @@ make container-image
make container-serve
```
Aby obejrzeć zawartość serwisu otwórz w przeglądarce adres http://localhost:1313. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) i [Windows](https://docs.docker.com/docker-for-windows/#resources)).
Aby obejrzeć zawartość serwisu, otwórz w przeglądarce adres http://localhost:1313. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
## Jak uruchomić lokalną kopię strony przy pomocy Hugo?


@ -1,6 +1,6 @@
# Документация по Kubernetes
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Данный репозиторий содержит все необходимые файлы для сборки [сайта Kubernetes и документации](https://kubernetes.io/). Мы благодарим вас за желание внести свой вклад!


@ -4,8 +4,6 @@
Join the [kubernetes-security-announce] group for security and vulnerability announcements.
You can also subscribe to an RSS feed of the above using [this link][kubernetes-security-announce-rss].
## Reporting a Vulnerability
Instructions for reporting a vulnerability can be found on the
@ -17,6 +15,5 @@ Information about supported Kubernetes versions can be found on the
[Kubernetes version and version skew support policy] page on the Kubernetes website.
[kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce
[kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50
[Kubernetes version and version skew support policy]: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
[Kubernetes Security and Disclosure Information]: https://kubernetes.io/docs/reference/issues-security/security/#report-a-vulnerability


@ -1,6 +1,6 @@
# Defined below are the security contacts for this repo.
#
# They are the contact point for the Product Security Committee to reach out
# They are the contact point for the Security Response Committee to reach out
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
@ -10,7 +10,5 @@
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
irvifa
jimangel
kbarnard10
sftim

@ -1 +1 @@
Subproject commit 78e64febda1b53cafc79979c5978b42162cea276
Subproject commit 55bce686224caba37f93e1e1eb53c0c9fc104ed4


@ -216,6 +216,8 @@ url = "https://v1-19.docs.kubernetes.io"
[params.ui]
# Enable to show the side bar menu in its compact state.
sidebar_menu_compact = false
# https://github.com/gohugoio/hugo/issues/8918#issuecomment-903314696
sidebar_cache_limit = 1
# Set to true to disable breadcrumb navigation.
breadcrumb_disable = false
# Set to true to hide the sidebar search box (the top nav search box will still be displayed if search is enabled)


@ -9,7 +9,7 @@ cid: home
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) ist ein Open-Source-System zur Automatisierung der Bereitstellung, Skalierung und Verwaltung von containerisierten Anwendungen.
Es gruppiert Container, aus denen sich eine Anwendung zusammensetzt, in logische Einheiten, um die Verwaltung und Erkennung zu erleichtern. Kubernetes baut auf [15 Jahre Erfahrung in Bewältigung von Produktions-Workloads bei Google] (http://queue.acm.org/detail.cfm?id=2898444), kombiniert mit Best-of-Breed-Ideen und Praktiken aus der Community.
Es gruppiert Container, aus denen sich eine Anwendung zusammensetzt, in logische Einheiten, um die Verwaltung und Erkennung zu erleichtern. Kubernetes baut auf [15 Jahre Erfahrung in Bewältigung von Produktions-Workloads bei Google](http://queue.acm.org/detail.cfm?id=2898444), kombiniert mit Best-of-Breed-Ideen und Praktiken aus der Community.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
@ -57,4 +57,4 @@ Kubernetes ist Open Source und bietet Dir die Freiheit, die Infrastruktur vor Or
{{< blocks/kubernetes-features >}}
{{< blocks/case-studies >}}
{{< blocks/case-studies >}}


@ -5,7 +5,7 @@ weight: 20
<!DOCTYPE html>
<html lang="en">
<html lang="de">
<body>


@ -5,7 +5,7 @@ weight: 20
<!DOCTYPE html>
<html lang="en">
<html lang="de">
<body>


@ -5,7 +5,7 @@ weight: 10
<!DOCTYPE html>
<html lang="en">
<html lang="de">
<body>


@ -5,7 +5,7 @@ weight: 20
<!DOCTYPE html>
<html lang="en">
<html lang="de">
<body>


@ -5,7 +5,7 @@ weight: 10
<!DOCTYPE html>
<html lang="en">
<html lang="de">
<body>


@ -5,7 +5,7 @@ weight: 20
<!DOCTYPE html>
<html lang="en">
<html lang="de">
<body>


@ -5,7 +5,7 @@ weight: 10
<!DOCTYPE html>
<html lang="en">
<html lang="de">
<body>


@ -125,7 +125,7 @@ You may wish to, but you cannot create a hierarchy of namespaces. Namespaces can
Namespaces are easy to create and use but its also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible and this will help. The other way to avoid using the wrong namespace is to set a [kubectl context](/docs/user-guide/kubectl/kubectl_config_set-context/).&nbsp;
Namespaces are easy to create and use but it's also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible and this will help. The other way to avoid using the wrong namespace is to set a [kubectl context](/docs/reference/generated/kubectl/kubectl-commands#-em-set-context-em-).
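A kubectl context binds a cluster, a user, and optionally a namespace together in your kubeconfig. As a minimal sketch (the cluster, user, and namespace names below are hypothetical), the relevant kubeconfig excerpt looks like this:

```yaml
# Hypothetical kubeconfig excerpt: the "staging" context pins kubectl to the staging namespace.
contexts:
- name: staging
  context:
    cluster: my-cluster
    user: my-user
    namespace: staging
current-context: staging
```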


@ -5,6 +5,11 @@ slug: visualize-kubelet-performance-with-node-dashboard
url: /blog/2016/11/Visualize-Kubelet-Performance-With-Node-Dashboard
---
_Since this article was published, the Node Performance Dashboard was retired and is no longer available._
_This retirement happened in early 2019, as part of the_ `kubernetes/contrib`
_[repository deprecation](https://github.com/kubernetes-retired/contrib/issues/3007)_.
In Kubernetes 1.4, we introduced a new node performance analysis tool, called the _node performance dashboard_, to visualize and explore the behavior of the Kubelet in much richer detail. This new feature will make it easy to understand and improve code performance for Kubelet developers, and lets cluster maintainers set configuration according to provided Service Level Objectives (SLOs).
**Background**


@ -37,7 +37,7 @@ If you run your storage application on high-end hardware or extra-large instance
[ZooKeeper](https://zookeeper.apache.org/doc/current/) is an interesting use case for StatefulSet for two reasons. First, it demonstrates that StatefulSet can be used to run a distributed, strongly consistent storage application on Kubernetes. Second, it's a prerequisite for running workloads like [Apache Hadoop](http://hadoop.apache.org/) and [Apache Kafka](https://kafka.apache.org/) on Kubernetes. An [in-depth tutorial](/docs/tutorials/stateful-application/zookeeper/) on deploying a ZooKeeper ensemble on Kubernetes is available in the Kubernetes documentation, and we'll outline a few of the key features below.
**Creating a ZooKeeper Ensemble**
Creating an ensemble is as simple as using [kubectl create](/docs/user-guide/kubectl/kubectl_create/) to generate the objects stored in the manifest.
Creating an ensemble is as simple as using [kubectl create](/docs/reference/generated/kubectl/kubectl-commands#create) to generate the objects stored in the manifest.
```
@ -297,7 +297,7 @@ zk-0 0/1 Terminating 0 15m
You can use [kubectl apply](/docs/user-guide/kubectl/kubectl_apply/) to recreate the zk StatefulSet and redeploy the ensemble.
You can use [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) to recreate the zk StatefulSet and redeploy the ensemble.


@ -19,8 +19,9 @@ is that they have been superseded by a newer, stable (“GA”) API.
Kubernetes 1.22, due for release in August 2021, will remove a number of deprecated
APIs.
[Kubernetes 1.22 Release Information](https://www.kubernetes.dev/resources/release/)
has details on the schedule for the v1.22 release.
_Update_:
[Kubernetes 1.22: Reaching New Peaks](/blog/2021/08/04/kubernetes-1-22-release-announcement/)
has details on the v1.22 release.
## API removals for Kubernetes v1.22 {#api-changes}


@ -54,7 +54,7 @@ An alpha feature for default seccomp profiles has been added to the kubelet, alo
A new alpha feature allows running the `kubeadm` control plane components as non-root users. This is a long requested security measure in `kubeadm`. To try it you must enable the `kubeadm` specific RootlessControlPlane feature gate. When you deploy a cluster using this alpha feature, your control plane runs with lower privileges.
For `kubeadm`, Kubernetes 1.22 also brings a new [v1beta3 configuration API](https://github.com/kubernetes/kubeadm/issues/1796). This iteration adds some long requested features and deprecates some existing ones. The v1beta3 version is now the preferred API version; the v1beta2 API also remains available and is not yet deprecated.
For `kubeadm`, Kubernetes 1.22 also brings a new [v1beta3 configuration API](/docs/reference/config-api/kubeadm-config.v1beta3/). This iteration adds some long requested features and deprecates some existing ones. The v1beta3 version is now the preferred API version; the v1beta2 API also remains available and is not yet deprecated.
## Major Changes
@ -140,7 +140,7 @@ In the v1.22 release cycle, which ran for 15 weeks (April 26 to August 4), we sa
# Upcoming release webinar
Join members of the Kubernetes 1.22 release team on September 7, 2021 to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the [event page](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-122-release/) on the CNCF Online Programs site.
Join members of the Kubernetes 1.22 release team on October 5, 2021 to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the [event page](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-122-release/) on the CNCF Online Programs site.
# Get Involved


@ -0,0 +1,177 @@
---
layout: blog
title: "Kubernetes 1.22: Server Side Apply moves to GA"
date: 2021-08-06
slug: server-side-apply-ga
---
**Authors:** Jeffrey Ying, Google & Joe Betz, Google
Server-side Apply (SSA) has been promoted to GA in the Kubernetes v1.22 release. The GA milestone means you can depend on the feature and its API, without fear of future backwards-incompatible changes. GA features are protected by the Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/).
## What is Server-side Apply?
Server-side Apply helps users and controllers manage their resources through declarative configurations. Server-side Apply replaces the client side apply feature implemented by “kubectl apply” with a server-side implementation, permitting use by tools/clients other than kubectl. Server-side Apply is a new merging algorithm, as well as tracking of field ownership, running on the Kubernetes api-server. Server-side Apply enables new features like conflict detection, so the system knows when two actors are trying to edit the same field. Refer to the [Server-side Apply Documentation](/docs/reference/using-api/server-side-apply/) and [Beta 2 release announcement](https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/) for more information.
## What's new since Beta?
Since the [Beta 2 release](https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/), support for subresources has been added, and both client-go and Kubebuilder have gained comprehensive support for Server-side Apply. This completes the Server-side Apply functionality required to make controller development practical.
### Support for subresources
Server-side Apply now fully supports subresources like `status` and `scale`. This is particularly important for [controllers](/docs/concepts/architecture/controller/), which are often responsible for writing to subresources.
## Server-side Apply support in client-go
Previously, Server-side Apply could only be called from the client-go typed client using the `Patch` function, with `PatchType` set to `ApplyPatchType`. Now, `Apply` functions are included in the client to allow for a more direct and typesafe way of calling Server-side Apply. Each `Apply` function takes an "apply configuration" type as an argument, which is a structured representation of an Apply request. For example:
```go
import (
    ...
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    v1ac "k8s.io/client-go/applyconfigurations/autoscaling/v1"
)

// hpav1client is assumed to be a typed client for autoscaling/v1 HorizontalPodAutoscalers,
// for example clientset.AutoscalingV1().HorizontalPodAutoscalers(ns).
hpaApplyConfig := v1ac.HorizontalPodAutoscaler(autoscalerName, ns).
    WithSpec(v1ac.HorizontalPodAutoscalerSpec().
        WithMinReplicas(0))

return hpav1client.Apply(ctx, hpaApplyConfig, metav1.ApplyOptions{FieldManager: "mycontroller", Force: true})
```
Note in this example that `HorizontalPodAutoscaler` is imported from an "applyconfigurations" package. Each "apply configuration" type represents the same Kubernetes object kind as the corresponding go struct, but where all fields are pointers to make them optional, allowing apply requests to be accurately represented. For example, when the apply configuration in the above example is marshalled to YAML, it produces:
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myHPA
  namespace: myNamespace
spec:
  minReplicas: 0
```
To understand why this is needed, note that the above YAML cannot be produced by the `v1.HorizontalPodAutoscaler` Go struct. Take, for example:
```go
hpa := v1.HorizontalPodAutoscaler{
    TypeMeta: metav1.TypeMeta{
        APIVersion: "autoscaling/v1",
        Kind:       "HorizontalPodAutoscaler",
    },
    ObjectMeta: metav1.ObjectMeta{
        Namespace: ns,
        Name:      autoscalerName,
    },
    Spec: v1.HorizontalPodAutoscalerSpec{
        MinReplicas: pointer.Int32Ptr(0),
    },
}
```
The above code attempts to declare the same apply configuration as shown in the previous examples, but when marshalled to YAML, produces:
```yaml
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v1
metadata:
  name: myHPA
  namespace: myNamespace
  creationTimestamp: null
spec:
  scaleTargetRef:
    kind: ""
    name: ""
  minReplicas: 0
  maxReplicas: 0
```
Which, among other things, contains `spec.maxReplicas` set to `0`. This is almost certainly not what the caller intended (the intended apply configuration says nothing about the `maxReplicas` field), and could have serious consequences on a production system: it directs the autoscaler to downscale to zero pods. The problem here originates from the fact that the go structs contain required fields that are zero valued if not set explicitly. The go structs work as intended for create and update operations, but are fundamentally incompatible with apply, which is why we have introduced the generated "apply configuration" types.
The "apply configurations" also have convenience `With<FieldName>` functions that make it easier to build apply requests. This allows developers to set fields without having to deal with the fact that all the fields in the "apply configuration" types are pointers, and are inconvenient to set using go. For example `MinReplicas: &0` is not legal go code, so without the `With` functions, developers would work around this problem by using a library, e.g. `MinReplicas: pointer.Int32Ptr(0)`, but string enumerations like `corev1.Protocol` are still a problem since they cannot be supported by a general purpose library. In addition to the convenience, the `With` functions also isolate developers from the underlying representation, which makes it safer for the underlying representation to be changed to support additional features in the future.
## Using Server-side Apply in a controller
You can use the new support for Server-side Apply no matter how you implemented your controller. However, the new client-go support makes it easier to use Server-side Apply in controllers.
When authoring new controllers to use Server-side Apply, a good approach is to have the controller recreate the apply configuration for an object each time it reconciles that object. This ensures that the controller fully reconciles all the fields that it is responsible for. Controllers typically should unconditionally set all the fields they own by setting `Force: true` in the `ApplyOptions`. Controllers must also provide a `FieldManager` name that is unique to the reconciliation loop that apply is called from.
When upgrading existing controllers to use Server-side Apply, the same approach often works well: migrate the controller to recreate the apply configuration each time it reconciles an object. Unfortunately, the controller might have multiple code paths that update different parts of an object depending on various conditions. Migrating a controller like this to Server-side Apply can be risky, because if the controller forgets to include in an apply configuration any field that was included in a previous apply request, that field can be accidentally deleted. To ease this type of migration, client-go apply support provides a way to replace any controller reconciliation code that performs a "read/modify-in-place/update" (or patch) workflow with an "extract/modify-in-place/apply" workflow. Here's an example of the new workflow:
```go
fieldMgr := "my-field-manager"
deploymentClient := clientset.AppsV1().Deployments("default")

// read, could also be read from a shared informer
deployment, err := deploymentClient.Get(ctx, "example-deployment", metav1.GetOptions{})
if err != nil {
    // handle error
}

// extract
deploymentApplyConfig, err := appsv1ac.ExtractDeployment(deployment, fieldMgr)
if err != nil {
    // handle error
}

// modify-in-place
deploymentApplyConfig.Spec.Template.Spec.WithContainers(corev1ac.Container().
    WithName("modify-slice").
    WithImage("nginx:1.14.2"),
)

// apply
applied, err := deploymentClient.Apply(ctx, deploymentApplyConfig, metav1.ApplyOptions{FieldManager: fieldMgr})
```
For developers using Custom Resource Definitions (CRDs), the Kubebuilder apply support will provide the same capabilities. Documentation will be included in the Kubebuilder book when available.
## Server-side Apply and CustomResourceDefinitions
It is strongly recommended that all [Custom Resource Definitions](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRDs) have a schema. CRDs without a schema are treated as unstructured data by Server-side Apply. Keys are treated as fields in a struct and lists are assumed to be atomic.
CRDs that specify a schema are able to specify additional annotations in the schema. Please refer to the documentation for the full list of available annotations.
New annotations since beta:
**Defaulting:** Values for fields that appliers do not express explicit interest in should be defaulted. This prevents an applier from unintentionally owning a defaulted field that might cause conflicts with other appliers. If unspecified, the default value is nil or the nil equivalent for the corresponding type.
- Usage: see the [CRD Defaulting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting) documentation for more details.
- Golang: `+default=<value>`
- OpenAPI extension: `default: <value>`
Atomic for maps and structs:
**Maps:** By default maps are granular. A different manager is able to manage each map entry. They can also be configured to be atomic such that a single manager owns the entire map.
- Usage: Refer to [Merge Strategy](/docs/reference/using-api/server-side-apply/#merge-strategy) for a more detailed overview
- Golang: `+mapType=granular/atomic`
- OpenAPI extension: `x-kubernetes-map-type: granular/atomic`
**Structs:** By default structs are granular and a separate applier may own each field. For certain kinds of structs, atomicity may be desired. This is most commonly seen in small coordinate-like structs such as Field/Object/Namespace Selectors, Object References, RGB values, Endpoints (Protocol/Port pairs), etc.
- Usage: Refer to [Merge Strategy](/docs/reference/using-api/server-side-apply/#merge-strategy) for a more detailed overview
- Golang: `+structType=granular/atomic`
- OpenAPI extension: `x-kubernetes-map-type: atomic/granular`
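To make the annotations above concrete, here is a minimal sketch of a CRD schema excerpt that uses both a default value and an atomic struct. The field names (`replicas`, `selector`) are hypothetical and only show where the markers go:

```yaml
# Hypothetical excerpt of a CRD's openAPIV3Schema (a structural schema is assumed)
properties:
  spec:
    type: object
    properties:
      replicas:
        type: integer
        default: 1                      # appliers that omit this field do not take ownership of it
      selector:
        type: object
        x-kubernetes-map-type: atomic   # a single manager owns the whole struct
        properties:
          matchLabels:
            type: object
            additionalProperties:
              type: string
```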
## What's Next?
After Server Side Apply, the next focus for the API Expression working-group is around improving the expressiveness and size of the published Kubernetes API schema. To see the full list of items we are working on, please join our working group and refer to the work items document.
## How to get involved?
The working group for apply is [wg-api-expression](https://github.com/kubernetes/community/tree/master/wg-api-expression). It is available on Slack ([#wg-api-expression](https://kubernetes.slack.com/archives/C0123CNN8F3)) and through the [mailing list](https://groups.google.com/g/kubernetes-wg-api-expression), and we also meet every other Tuesday at 9:30 PT on Zoom.
We would also like to use the opportunity to thank the hard work of all the contributors involved in making this promotion to GA possible:
- Andrea Nodari
- Antoine Pelisse
- Daniel Smith
- Jeffrey Ying
- Jenny Buckley
- Joe Betz
- Julian Modesto
- Kevin Delgado
- Kevin Wiesmüller
- Maria Ntalla


@ -0,0 +1,142 @@
---
layout: blog
title: 'New in Kubernetes v1.22: alpha support for using swap memory'
date: 2021-08-09
slug: run-nodes-with-swap-alpha
---
**Author:** Elana Hashman (Red Hat)
The 1.22 release introduced alpha support for configuring swap memory usage for
Kubernetes workloads on a per-node basis.
In prior releases, Kubernetes did not support the use of swap memory on Linux,
as it is difficult to provide guarantees and account for pod memory utilization
when swap is involved. As part of Kubernetes' earlier design, swap support was
considered out of scope, and a kubelet would by default fail to start if swap
was detected on a node.
However, there are a number of [use cases](https://github.com/kubernetes/enhancements/blob/9d127347773ad19894ca488ee04f1cd3af5774fc/keps/sig-node/2400-node-swap/README.md#user-stories)
that would benefit from Kubernetes nodes supporting swap, including improved
node stability, better support for applications with high memory overhead but
smaller working sets, the use of memory-constrained devices, and memory
flexibility.
Hence, over the past two releases, [SIG Node](https://github.com/kubernetes/community/tree/master/sig-node#readme) has
been working to gather appropriate use cases and feedback, and propose a design
for adding swap support to nodes in a controlled, predictable manner so that
Kubernetes users can perform testing and provide data to continue building
cluster capabilities on top of swap. The alpha graduation of swap memory
support for nodes is our first milestone towards this goal!
## How does it work?
There are a number of possible ways that one could envision swap use on a node.
To keep the scope manageable for this initial implementation, when swap is
already provisioned and available on a node, [we have proposed](https://github.com/kubernetes/enhancements/blob/9d127347773ad19894ca488ee04f1cd3af5774fc/keps/sig-node/2400-node-swap/README.md#proposal)
the kubelet should be able to be configured such that:
- It can start with swap on.
- It will direct the Container Runtime Interface to allocate zero swap memory
to Kubernetes workloads by default.
- You can configure the kubelet to specify swap utilization for the entire
node.
Swap configuration on a node is exposed to a cluster admin via the
[`memorySwap` in the KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/).
As a cluster administrator, you can specify the node's behaviour in the
presence of swap memory by setting `memorySwap.swapBehavior`.
This is possible through the addition of a `memory_swap_limit_in_bytes` field
to the container runtime interface (CRI). The kubelet's config will control how
much swap memory the kubelet instructs the container runtime to allocate to
each container via the CRI. The container runtime will then write the swap
settings to the container level cgroup.
## How do I use it?
On a node where swap memory is already provisioned, Kubernetes use of swap on a
node can be enabled by enabling the `NodeSwap` feature gate on the kubelet, and
disabling the `failSwapOn` [configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
or the `--fail-swap-on` command line flag.
You can also optionally configure `memorySwap.swapBehavior` in order to
specify how a node will use swap memory. For example,
```yaml
memorySwap:
  swapBehavior: LimitedSwap
```
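Putting these settings together, a sketch of a kubelet configuration file for a node opting in to this alpha feature (using the `failSwapOn`, `featureGates`, and `memorySwap` fields referenced above) might look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true        # enable the alpha feature gate
failSwapOn: false       # let the kubelet start on a node that has swap enabled
memorySwap:
  swapBehavior: LimitedSwap
```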
The available configuration options for `swapBehavior` are:
- `LimitedSwap` (default): Kubernetes workloads are limited in how much swap
they can use. Workloads on the node not managed by Kubernetes can still swap.
- `UnlimitedSwap`: Kubernetes workloads can use as much swap memory as they
request, up to the system limit.
If configuration for `memorySwap` is not specified and the feature gate is
enabled, by default the kubelet will apply the same behaviour as the
`LimitedSwap` setting.
The behaviour of the `LimitedSwap` setting depends on whether the node is running with
v1 or v2 of control groups (also known as "cgroups"):
- **cgroups v1:** Kubernetes workloads can use any combination of memory and
swap, up to the pod's memory limit, if set.
- **cgroups v2:** Kubernetes workloads cannot use swap memory.
### Caveats
Having swap available on a system reduces predictability. Swap's performance is
worse than regular memory, sometimes by many orders of magnitude, which can
cause unexpected performance regressions. Furthermore, swap changes a system's
behaviour under memory pressure, and applications cannot directly control what
portions of their memory usage are swapped out. Since enabling swap permits
greater memory usage for workloads in Kubernetes that cannot be predictably
accounted for, it also increases the risk of noisy neighbours and unexpected
packing configurations, as the scheduler cannot account for swap memory usage.
The performance of a node with swap memory enabled depends on the underlying
physical storage. When swap memory is in use, performance will be significantly
worse in an I/O operations per second (IOPS) constrained environment, such as a
cloud VM with I/O throttling, when compared to faster storage mediums like
solid-state drives or NVMe.
Hence, we do not recommend the use of swap for certain performance-constrained
workloads or environments. Cluster administrators and developers should
benchmark their nodes and applications before using swap in production
scenarios, and [we need your help](#how-do-i-get-involved) with that!
## Looking ahead
The Kubernetes 1.22 release introduces alpha support for swap memory on nodes,
and we will continue to work towards beta graduation in the 1.23 release. This
will include:
* Adding support for controlling swap consumption at the Pod level via cgroups.
* This will include the ability to set a system-reserved quantity of swap
from what kubelet detects on the host.
* Determining a set of metrics for node QoS in order to evaluate the
performance and stability of nodes with and without swap enabled.
* Collecting feedback from user test cases.
* We will consider introducing new configuration modes for swap, such as a
node-wide swap limit for workloads.
## How can I learn more?
You can review the current [documentation](https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory)
on the Kubernetes website.
For more information, and to assist with testing and provide feedback, please
see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
## How do I get involved?
Your feedback is always welcome! SIG Node [meets regularly](https://github.com/kubernetes/community/tree/master/sig-node#meetings)
and [can be reached](https://github.com/kubernetes/community/tree/master/sig-node#contact)
via [Slack](https://slack.k8s.io/) (channel **#sig-node**), or the SIG's
[mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node).
Feel free to reach out to me, Elana Hashman (**@ehashman** on Slack and GitHub)
if you'd like to help.


@ -0,0 +1,76 @@
---
layout: blog
title: 'Kubernetes 1.22: CSI Windows Support (with CSI Proxy) reaches GA'
date: 2021-08-09
slug: csi-windows-support-with-csi-proxy-reaches-ga
---
**Authors:** Mauricio Poppe (Google), Jing Xu (Google), and Deep Debroy (Apple)
*The stable version of CSI Proxy for Windows has been released alongside Kubernetes 1.22. CSI Proxy enables CSI Drivers running on Windows nodes to perform privileged storage operations.*
## Background
Container Storage Interface (CSI) for Kubernetes went GA in the Kubernetes 1.13 release. CSI has become the standard for exposing block and file storage to containerized workloads on Container Orchestration systems (COs) like Kubernetes. It enables third-party storage providers to write and deploy plugins without the need to alter the core Kubernetes codebase. Legacy in-tree drivers are deprecated and new storage features are introduced in CSI, therefore it is important to get CSI Drivers to work on Windows.
A CSI Driver in Kubernetes has two main components: a controller plugin which runs in the control plane and a node plugin which runs on every node.
- The controller plugin generally does not need direct access to the host and can perform all its operations through the Kubernetes API and external control plane services.
- The node plugin, however, requires direct access to the host for making block devices and/or file systems available to the Kubernetes kubelet. Because containers on Windows nodes could not run privileged operations, [CSI Proxy was introduced as alpha in Kubernetes 1.18](https://kubernetes.io/blog/2020/04/03/kubernetes-1-18-feature-windows-csi-support-alpha/) as a way to enable containers to perform privileged storage operations. This enables containerized CSI Drivers to run on Windows nodes.
## What's CSI Proxy and how do CSI drivers interact with it?
When a workload that uses persistent volumes is scheduled, it'll go through a sequence of steps defined in the [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md). First, the workload will be scheduled to run on a node. Then the controller component of a CSI Driver will attach the persistent volume to the node. Finally the node component of a CSI Driver will mount the persistent volume on the node.
The node component of a CSI Driver needs to run on Windows nodes to support Windows workloads. Various privileged operations like scanning of disk devices, mounting of file systems, etc. cannot be done from a containerized application running on Windows nodes yet ([Windows HostProcess containers](https://github.com/kubernetes/enhancements/issues/1981), introduced in Kubernetes 1.22 as alpha, enable functionalities that require host access like the operations mentioned before). However, we can perform these operations through a binary (CSI Proxy) that's pre-installed on the Windows nodes. CSI Proxy has a client-server architecture and allows CSI drivers to issue privileged storage operations through a gRPC interface exposed over named pipes created during the startup of CSI Proxy.
![CSI Proxy Architecture](/images/blog/2021-08-09-csi-windows-support-with-csi-proxy-reaches-ga/csi-proxy.png)
## CSI Proxy reaches GA
The CSI Proxy development team has worked closely with storage vendors, many of whom started integrating CSI Proxy into their CSI Drivers and provided feedback as early as the CSI Proxy design proposal. This cooperation uncovered use cases where additional APIs were needed, found bugs, and identified areas for documentation improvement.
The CSI Proxy design [KEP](https://github.com/kubernetes/enhancements/pull/2737) has been updated to reflect the current CSI Proxy architecture. Additional [development documentation](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/DEVELOPMENT.md) is included for contributors interested in helping with new features or bug fixes.
Before we reached GA we wanted to make sure that our API is simple and consistent. We went through an extensive API review of the v1beta API groups where we made sure that the CSI Proxy API methods and messages are consistent with the naming conventions defined in the [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md). As part of this effort we're graduating the [Disk](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/disk_v1.md), [Filesystem](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/filesystem_v1.md), [SMB](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/smb_v1.md) and [Volume](https://github.com/kubernetes-csi/csi-proxy/blob/master/docs/apis/volume_v1.md) API groups to v1.
Additional Windows system APIs to get information from the Windows nodes and support to mount iSCSI targets in Windows nodes, are available as alpha APIs in the [System API](https://github.com/kubernetes-csi/csi-proxy/tree/v1.0.0/client/api/system/v1alpha1) and the [iSCSI API](https://github.com/kubernetes-csi/csi-proxy/tree/v1.0.0/client/api/iscsi/v1alpha2). These APIs will continue to be improved before we graduate them to v1.
CSI Proxy v1 is compatible with all the previous v1betaX releases. The GA `csi-proxy.exe` binary can handle requests from v1betaX clients thanks to the autogenerated conversion layer that transforms any versioned client request to a version-agnostic request that the server can process. Several [integration tests](https://github.com/kubernetes-csi/csi-proxy/tree/v1.0.0/integrationtests) were added for all the API versions of the API groups that are graduating to v1 to ensure that CSI Proxy is backwards compatible.
Version drift between CSI Proxy and the CSI Drivers that interact with it was also carefully considered. A [connection fallback mechanism](https://github.com/kubernetes-csi/csi-proxy/pull/124) has been provided for CSI Drivers to handle multiple versions of CSI Proxy for a smooth upgrade to v1. This allows CSI Drivers, like the GCE PD CSI Driver, [to recognize which version of the CSI Proxy binary is running](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/738) and handle multiple versions of the CSI Proxy binary deployed on the node.
CSI Proxy v1 is already being used by many CSI Drivers, including the [AWS EBS CSI Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/966), [Azure Disk CSI Driver](https://github.com/kubernetes-sigs/azuredisk-csi-driver/pull/919), [GCE PD CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/738), and [SMB CSI Driver](https://github.com/kubernetes-csi/csi-driver-smb/pull/319).
## Future plans
We're very excited for the future of CSI Proxy. With the upcoming [Windows HostProcess containers](https://github.com/kubernetes/enhancements/issues/1981), we are considering converting CSI Proxy into a library consumed by CSI Drivers, in addition to the current client/server design. This will allow us to iterate faster on new features because the `csi-proxy.exe` binary will no longer be needed.
## How to get involved?
This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together. Those interested in getting involved with the design and development of CSI Proxy, or any part of the Kubernetes Storage system, may join the Kubernetes Storage Special Interest Group (SIG). We're rapidly growing and always welcome new contributors.
For those interested in more details about CSI support in Windows please reach out in the [#csi-windows](https://kubernetes.slack.com/messages/csi-windows) Kubernetes slack channel.
## Acknowledgments
CSI-Proxy received many contributions from members of the Kubernetes community. We thank all of the people that contributed to CSI Proxy with design reviews, bug reports, bug fixes, and for their continuous support in reaching this milestone:
- [Andy Zhang](https://github.com/andyzhangx)
- [Dan Ilan](https://github.com/jmpfar)
- [Deep Debroy](https://github.com/ddebroy)
- [Humble Devassy Chirammal](https://github.com/humblec)
- [Jing Xu](https://github.com/jingxu97)
- [Jean Rougé](https://github.com/wk8)
- [Jordan Liggitt](https://github.com/liggitt)
- [Kalya Subramanian](https://github.com/ksubrmnn)
- [Krishnakumar R](https://github.com/kkmsft)
- [Manuel Tellez](https://github.com/manueltellez)
- [Mark Rossetti](https://github.com/marosset)
- [Mauricio Poppe](https://github.com/mauriciopoppe)
- [Matthew Wong](https://github.com/wongma7)
- [Michelle Au](https://github.com/msau42)
- [Patrick Lang](https://github.com/PatrickLang)
- [Saad Ali](https://github.com/saad-ali)
- [Yuju Hong](https://github.com/yujuhong)


@ -0,0 +1,144 @@
---
layout: blog
title: "Kubernetes Memory Manager moves to beta"
date: 2021-08-11
slug: kubernetes-1-22-feature-memory-manager-moves-to-beta
---
**Authors:** Artyom Lukianov (Red Hat), Cezary Zukowski (Samsung)
This blog post explains some of the internals of the _Memory Manager_, a beta feature
of Kubernetes 1.22. In Kubernetes, the Memory Manager is a
[kubelet](https://kubernetes.io/docs/concepts/overview/components/#kubelet) subcomponent.
The Memory Manager provides guaranteed memory (and hugepages)
allocation for pods in the `Guaranteed` [QoS class](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes).
This blog post covers:
1. [Why do you need it?](#Why-do-you-need-it?)
2. [The internal details of how the **MemoryManager** works](#How-does-it-work?)
3. [Current limitations of the **MemoryManager**](#Current-limitations)
4. [Future work for the **MemoryManager**](#Future-work-for-the-Memory-Manager)
## Why do you need it?
Some Kubernetes workloads run on nodes with
[non-uniform memory access](https://en.wikipedia.org/wiki/Non-uniform_memory_access) (NUMA).
Suppose you have NUMA nodes in your cluster. In that case, you'll know about the potential for extra latency when
compute resources need to access memory on a different NUMA locality.
To get the best performance and latency for your workload, container CPUs,
peripheral devices, and memory should all be aligned to the same NUMA
locality.
Before Kubernetes v1.22, the kubelet already provided a set of managers to
align CPUs and PCI devices, but you did not have a way to align memory.
The Linux kernel was able to make best-effort attempts to allocate
memory for tasks from the same NUMA node where the container is
executing, but without any guarantee about that placement.
## How does it work?
The Memory Manager does two main things:
- it provides topology hints to the Topology Manager
- it allocates memory for containers and updates its state
The overall sequence of the Memory Manager under the kubelet is shown in the diagram below.
![MemoryManagerDiagram](/images/blog/2021-08-11-memory-manager-moves-to-beta/MemoryManagerDiagram.svg "MemoryManagerDiagram")
During the Admission phase:
1. When first handling a new pod, the kubelet calls the TopologyManager's `Admit()` method.
2. The Topology Manager calls `GetTopologyHints()` for every hint provider, including the Memory Manager.
3. The Memory Manager calculates all possible NUMA node combinations for every container inside the pod and returns hints to the Topology Manager.
4. The Topology Manager calls `Allocate()` for every hint provider, including the Memory Manager.
5. The Memory Manager allocates memory and records it in its state, according to the hint that the Topology Manager chose.
During Pod creation:
1. The kubelet calls `PreCreateContainer()`.
2. For each container, the Memory Manager looks up the NUMA nodes where it allocated the
memory for the container and then returns that information to the kubelet.
3. The kubelet creates the container, via CRI, using a container specification
that incorporates the NUMA affinity information from the Memory Manager.
### Let's talk about the configuration
By default, the Memory Manager runs with the `None` policy, meaning it will just
relax and not do anything. To make use of the Memory Manager, you should set
two command line options for the kubelet:
- `--memory-manager-policy=Static`
- `--reserved-memory="<numaNodeID>:<resourceName>=<quantity>"`
The value for `--memory-manager-policy` is straightforward: `Static`. Deciding what to specify for `--reserved-memory` takes more thought. To configure it correctly, you should follow two main rules:
- The amount of reserved memory for the `memory` resource must be greater than zero.
- The amount of reserved memory for the resource type must be equal
to [NodeAllocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
(`kube-reserved + system-reserved + eviction-hard`) for the resource.
You can read more about memory reservations in [Reserve Compute Resources for System Daemons](/docs/tasks/administer-cluster/reserve-compute-resources/).
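As a sketch, assuming the configuration-file equivalents of these flags (`memoryManagerPolicy` and `reservedMemory` in the kubelet configuration), the setup for a machine with a single NUMA node might look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static
reservedMemory:
  - numaNode: 0
    limits:
      # must match kube-reserved + system-reserved + eviction-hard for memory
      memory: 1100Mi
```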
![Reserved memory](/images/blog/2021-08-11-memory-manager-moves-to-beta/ReservedMemory.svg)
## Current limitations
The 1.22 release and promotion to beta brings along enhancements and fixes, but the Memory Manager still has several limitations.
### Single vs Cross NUMA node allocation
A NUMA node cannot have both single and cross NUMA node allocations. When a container's memory is pinned to two or more NUMA nodes, we cannot know from which NUMA node the container will consume the memory.
![Single vs Cross NUMA allocation](/images/blog/2021-08-11-memory-manager-moves-to-beta/SingleCrossNUMAAllocation.svg "SingleCrossNUMAAllocation")
1. `container1` starts on NUMA node 0; it requests *5Gi* of memory but is currently consuming only *3Gi*.
2. `container2` requests *10Gi* of memory, and no single NUMA node can satisfy it.
3. `container2` consumes *3.5Gi* of memory from NUMA node 0, but once `container1` requires more memory, it will not have it, and the kernel will kill one of the containers with an *OOM* error.
To prevent such issues, the Memory Manager will fail the admission of `container2` until the machine has two NUMA nodes without a single NUMA node allocation.
### Works only for Guaranteed pods
The Memory Manager cannot guarantee memory allocation for Burstable pods,
even when the Burstable pod specifies an equal memory limit and request.
Let's assume you have two Burstable pods: `pod1` has containers with
equal memory requests and limits, and `pod2` has containers with only a
memory request set. You want to guarantee memory allocation for `pod1`.
To the Linux kernel, processes in either pod have the same *OOM score*;
once the kernel finds that it does not have enough memory, it can kill
processes that belong to `pod1`.
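For contrast, here is a minimal sketch of a pod that does land in the `Guaranteed` QoS class, the only class the Memory Manager covers: every container sets requests equal to limits for both CPU and memory (the image and sizes are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
      limits:
        cpu: "2"       # requests == limits places the pod in the Guaranteed QoS class
        memory: 2Gi
```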
### Memory fragmentation
The sequence of Pods and containers that start and stop can fragment the memory on NUMA nodes.
The alpha implementation of the Memory Manager does not have any mechanism to balance pods and defragment memory back.
## Future work for the Memory Manager
We do not want to stop with the current state of the Memory Manager and are looking to
make improvements, including in the following areas.
### Make the Memory Manager allocation algorithm smarter
The current algorithm ignores distances between NUMA nodes during the
calculation of the allocation. If same-node placement isn't available, we can still
provide better performance compared to the current implementation, by changing the
Memory Manager to prefer the closest NUMA nodes for cross-node allocation.
### Reduce the number of admission errors
The default Kubernetes scheduler is not aware of the node's NUMA topology, and it can be a reason for many admission errors during the pod start.
We're hoping to add a KEP (Kubernetes Enhancement Proposal) to cover improvements in this area.
Follow [Topology aware scheduler plugin in kube-scheduler](https://github.com/kubernetes/enhancements/issues/2044) to see how this idea progresses.
## Conclusion
With the promotion of the Memory Manager to beta in 1.22, we encourage everyone to give it a try and look forward to any feedback you may have. While there are still several limitations, we have a set of enhancements planned to address them and look forward to providing you with many new features in upcoming releases.
If you have ideas for additional enhancements or a desire for certain features, please let us know. The team is always open to suggestions to enhance and improve the Memory Manager.
We hope you have found this blog informative and helpful! Let us know if you have any questions or comments.
You can contact us via:
- The Kubernetes [#sig-node](https://kubernetes.slack.com/messages/sig-node)
channel in Slack (visit https://slack.k8s.io/ for an invitation if you need one)
- The SIG Node mailing list, [kubernetes-sig-node@googlegroups.com](https://groups.google.com/g/kubernetes-sig-node)

Binary file not shown (new image added, 71 KiB).


@ -0,0 +1,79 @@
---
layout: blog
title: 'Alpha in v1.22: Windows HostProcess Containers'
date: 2021-08-16
slug: windows-hostprocess-containers
---
**Authors:** Brandon Smith (Microsoft)
Kubernetes v1.22 introduced a new alpha feature for clusters that
include Windows nodes: HostProcess containers.
HostProcess containers aim to extend the Windows container model to enable a wider
range of Kubernetes cluster management scenarios. HostProcess containers run
directly on the host and maintain behavior and access similar to that of a regular
process. With HostProcess containers, users can package and distribute management
operations and functionalities that require host access while retaining versioning
and deployment methods provided by containers. This allows Windows containers to
be used for a variety of device plugin, storage, and networking management scenarios
in Kubernetes. With this comes the enablement of host network mode—allowing
HostProcess containers to be created within the host's network namespace instead of
their own. HostProcess containers can also be built on top of existing Windows Server
2019 (or later) base images, managed through the Windows container runtime, and run
as any user that is available on or in the domain of the host machine.
Linux privileged containers are currently used for a variety of key scenarios in
Kubernetes, including kube-proxy (via kubeadm), storage, and networking scenarios.
Support for these scenarios in Windows previously required workarounds via proxies
or other implementations. Using HostProcess containers, cluster operators no longer
need to log onto and individually configure each Windows node for administrative
tasks and management of Windows services. Operators can now utilize the container
model to deploy management logic to as many clusters as needed with ease.
## How does it work?
Windows HostProcess containers are implemented with Windows _Job Objects_, a break from the
previous container model using server silos. Job objects are components of the Windows OS which offer the ability to
manage a group of processes as a group (a.k.a. _jobs_) and assign resource constraints to the
group as a whole. Job objects are specific to the Windows OS and are not associated with the Kubernetes [Job API](https://kubernetes.io/docs/concepts/workloads/controllers/job/). They have no process or file system isolation,
enabling the privileged payload to view and edit the host file system with the
correct permissions, among other host resources. The init process, and any processes
it launches or that are explicitly launched by the user, are all assigned to the
job object of that container. When the init process exits or is signaled to exit,
all the processes in the job will be signaled to exit, the job handle will be
closed and the storage will be unmounted.
HostProcess and Linux privileged containers enable similar scenarios but differ
greatly in their implementation (hence the naming difference). HostProcess containers
have their own pod security policies. Those used to configure Linux privileged
containers **do not** apply. Enabling privileged access to a Windows host is a
fundamentally different process than with Linux so the configuration and
capabilities of each differ significantly. Below is a diagram detailing the
overall architecture of Windows HostProcess containers:
{{< figure src="hostprocess-architecture.png" alt="HostProcess Architecture" >}}
## How do I use it?
HostProcess containers can be run from within a
[HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod).
With the feature enabled on Kubernetes version 1.22, a containerd container runtime of
1.5.4 or higher, and the latest version of hcsshim, deploying a pod spec with the
[correct HostProcess configuration](/docs/tasks/configure-pod-container/create-hostprocess-pod/#before-you-begin)
will enable you to run HostProcess containers. To get started with running
Windows containers, see the general guidance for [Windows in Kubernetes](/docs/setup/production-environment/windows/).
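As an illustrative sketch only, a HostProcess pod spec would look roughly like the following; the exact alpha fields are described in the task page linked above, the `windowsOptions.hostProcess` and `runAsUserName` settings below mirror that documentation, and the image name is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example
spec:
  nodeSelector:
    kubernetes.io/os: windows
  hostNetwork: true                       # HostProcess pods use the host's network namespace
  securityContext:
    windowsOptions:
      hostProcess: true                   # run the containers as host processes
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  containers:
  - name: admin-task
    image: example.com/windows-admin-tool:latest   # placeholder image
```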
## How can I learn more?
- Work through [Create a Windows HostProcess Pod](/docs/tasks/configure-pod-container/create-hostprocess-pod/)
- Read about Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
- Read the enhancement proposal [Windows Privileged Containers and Host Networking Mode](https://github.com/kubernetes/enhancements/tree/master/keps/sig-windows/1981-windows-privileged-container-support) (KEP-1981)
## How do I get involved?
HostProcess containers are in active development. SIG Windows welcomes suggestions from the community.
Get involved with [SIG Windows](https://github.com/kubernetes/community/tree/master/sig-windows)
to contribute!


@ -0,0 +1,267 @@
---
layout: blog
title: "Enable seccomp for all workloads with a new v1.22 alpha feature"
date: 2021-08-25
slug: seccomp-default
---
**Author:** Sascha Grunert, Red Hat
This blog post is about a new Kubernetes feature introduced in v1.22, which adds
an additional security layer on top of the existing seccomp support. Seccomp is
a security mechanism for Linux processes to filter system calls (syscalls) based
on a set of defined rules. Applying seccomp profiles to containerized workloads
is one of the key tasks when it comes to enhancing the security of the
application deployment. Developers, site reliability engineers and
infrastructure administrators have to work hand in hand to create, distribute
and maintain the profiles over the application's life cycle.
The [`securityContext`][seccontext] field of Pods and their containers can be
used to adjust security related configuration of the workload. Kubernetes
introduced dedicated [seccomp related API fields][seccontext] in this
`SecurityContext` with the [graduation of seccomp to General Availability
(GA)][ga] in v1.19.0. This enhancement made it easier to specify whether the
whole pod or a specific container should run as one of the following (see the
sketch after this list):
[seccontext]: /docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1
[ga]: https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/#graduated-to-stable
- `Unconfined`: seccomp will not be enabled
- `RuntimeDefault`: the container runtimes default profile will be used
- `Localhost`: a node local profile will be applied, which is being referenced
by a relative path to the seccomp profile root (`<kubelet-root-dir>/seccomp`)
of the kubelet
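For example, a minimal pod that opts in to the runtime's default profile at the pod level looks like this (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-profile-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # could also be set per container
  containers:
  - name: app
    image: nginx:1.21
```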
With the graduation of seccomp, nothing has changed from an overall security
perspective, because `Unconfined` is still the default. This is totally fine
from the perspective of upgrade paths and backwards compatibility across
Kubernetes releases. But it also means that it is more likely that a workload
runs without seccomp at all, which should be fixed in the long term.
## `SeccompDefault` to the rescue
Kubernetes v1.22.0 introduces a new kubelet [feature gate][gate],
`SeccompDefault`, which has been added in `alpha` state, like every other new
feature. This means that it is disabled by default and can be enabled manually
for every single Kubernetes node.
[gate]: /docs/reference/command-line-tools-reference/feature-gates
What does the feature do? Well, it just changes the default seccomp profile from
`Unconfined` to `RuntimeDefault`. If not specified differently in the pod
manifest, then the feature will add a higher set of security constraints by
using the default profile of the container runtime. These profiles may differ
between runtimes like [CRI-O][crio] or [containerd][ctrd]. They also differ
depending on the hardware architecture in use. But generally speaking, those default profiles
allow a common amount of syscalls while blocking the more dangerous ones, which
are unlikely or unsafe to be used in a containerized application.
[crio]: https://github.com/cri-o/cri-o/blob/fe30d62/vendor/github.com/containers/common/pkg/seccomp/default_linux.go#L45
[ctrd]: https://github.com/containerd/containerd/blob/e1445df/contrib/seccomp/seccomp_default.go#L51
### Enabling the feature
Two kubelet configuration changes have to be made to enable the feature:
1. **Enable the feature gate** by setting `SeccompDefault=true` via the command
   line (`--feature-gates`) or the [kubelet configuration][kubelet] file.
2. **Turn on the feature** by adding the `--seccomp-default` command line flag
   or by setting `seccompDefault: true` in the [kubelet configuration][kubelet]
   file (see the configuration sketch below).
[kubelet]: /docs/tasks/administer-cluster/kubelet-config-file
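For example, here is a minimal sketch of the relevant part of a kubelet
configuration file that satisfies both steps (all other kubelet settings are
omitted):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SeccompDefault: true # step 1: enable the feature gate
seccompDefault: true # step 2: turn on the feature
```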
The kubelet will error on startup if only one of the above steps has been done.
### Trying it out
If the feature is enabled on a node, then you can create a new workload like
this:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-container
image: nginx:1.21
```
Now it is possible to inspect the applied seccomp profile by using
[`crictl`][crictl] while investigating the container's [runtime
specification][rspec]:
[crictl]: https://github.com/kubernetes-sigs/cri-tools
[rspec]: https://github.com/opencontainers/runtime-spec/blob/0c021c1/config-linux.md#seccomp
```bash
CONTAINER_ID=$(sudo crictl ps -q --name=test-container)
sudo crictl inspect $CONTAINER_ID | jq .info.runtimeSpec.linux.seccomp
```
```json
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"],
"syscalls": [
{
"names": ["_llseek", "_newselect", "accept", …, "write", "writev"],
"action": "SCMP_ACT_ALLOW"
},
]
}
```
You can see that the lower-level container runtime ([CRI-O][crio-home] and
[runc][runc] in our case) successfully applied the default seccomp profile.
This profile denies all syscalls by default, while allowing commonly used ones
like [`accept`][accept] or [`write`][write].
[crio-home]: https://github.com/cri-o/cri-o
[runc]: https://github.com/opencontainers/runc
[accept]: https://man7.org/linux/man-pages/man2/accept.2.html
[write]: https://man7.org/linux/man-pages/man2/write.2.html
Please note that the feature will not influence any Kubernetes API for now.
Therefore, it is not possible to retrieve the applied seccomp profile via
`kubectl get` or `kubectl describe` if the [`SeccompProfile`][api] field is
unset within the `SecurityContext`.
[api]: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1
The feature also works when using multiple containers within a pod, for example
if you create a pod like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-container-nginx
image: nginx:1.21
securityContext:
seccompProfile:
type: Unconfined
- name: test-container-redis
image: redis:6.2
```
then you should see that the `test-container-nginx` runs without a seccomp profile:
```bash
sudo crictl inspect $(sudo crictl ps -q --name=test-container-nginx) |
jq '.info.runtimeSpec.linux.seccomp == null'
true
```
Whereas the container `test-container-redis` runs with `RuntimeDefault`:
```bash
sudo crictl inspect $(sudo crictl ps -q --name=test-container-redis) |
jq '.info.runtimeSpec.linux.seccomp != null'
true
```
The same applies to the pod itself, which also runs with the default profile:
```bash
sudo crictl inspectp $(sudo crictl pods -q --name test-pod) |
jq '.info.runtimeSpec.linux.seccomp != null'
true
```
### Upgrade strategy
It is recommended to enable the feature in multiple steps, where different
risks and mitigations exist for each one.
#### Feature gate enabling
Enabling the feature gate at the kubelet level will not turn on the feature, but
will make it possible to turn it on using the `seccompDefault` kubelet
configuration option or the `--seccomp-default` CLI flag. This can be done by an
administrator for the whole cluster or only a set of nodes.
#### Testing the Application
If you're trying this within a dedicated test environment, you have to ensure
that the application code does not trigger syscalls blocked by the
`RuntimeDefault` profile before enabling the feature on a node. This can be done
by:
- _Recommended_: Analyzing the code (manually or by running the application with
  [strace][strace]) for any executed syscalls which may be blocked by the
  default profiles. If that's the case, then you can override the default by
  explicitly setting the pod or container to run as `Unconfined`. Alternatively,
  you can create a custom seccomp profile based on the default by adding the
  additional syscalls to the `"action": "SCMP_ACT_ALLOW"` section (see the
  optional steps below).
- _Recommended_: Manually set the profile on the target workload and use a
  rolling upgrade to deploy into production. Roll back the deployment if the
  application does not work as intended.
- _Optional_: Run the application against an end-to-end test suite to trigger
all relevant code paths with `RuntimeDefault` enabled. If a test fails, use
the same mitigation as mentioned above.
- _Optional_: Create a custom seccomp profile based on the default and change
  its default action from `SCMP_ACT_ERRNO` to `SCMP_ACT_LOG`. This means that
  the seccomp filter for unknown syscalls will have no effect on the application
  at all, but the system logs will now indicate which syscalls may be blocked.
  This requires a kernel version of at least 4.14 as well as a recent [runc][runc]
  release. Monitor the application host's audit logs (defaults to
  `/var/log/audit/audit.log`) or syslog entries (defaults to `/var/log/syslog`)
  for syscalls via `type=SECCOMP` (for audit) or `type=1326` (for syslog).
  Compare the syscall IDs with those [listed in the Linux kernel
  sources][syscalls] and add them to the custom profile. Be aware that custom
  audit policies may lead to missed syscalls, depending on the configuration
  of auditd. A minimal sketch of such a profile appears after this list.
- _Optional_: Use cluster additions like the [Security Profiles Operator][spo]
for profiling the application via its [log enrichment][logs] capabilities or
recording a profile by using its [recording feature][rec]. This makes the
above-mentioned manual log investigation obsolete.
[syscalls]: https://github.com/torvalds/linux/blob/7bb7f2a/arch/x86/entry/syscalls/syscall_64.tbl
[spo]: https://github.com/kubernetes-sigs/security-profiles-operator
[logs]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/c90ef3a/installation-usage.md#using-the-log-enricher
[rec]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/c90ef3a/installation-usage.md#record-profiles-from-workloads-with-profilerecordings
[strace]: https://man7.org/linux/man-pages/man1/strace.1.html
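For illustration, here is a minimal sketch of such a custom profile with the
default action switched to `SCMP_ACT_LOG`; the allowed syscall list is
shortened and purely illustrative:

```json
{
  "defaultAction": "SCMP_ACT_LOG",
  "architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"],
  "syscalls": [
    {
      "names": ["accept", "bind", "read", "write"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Such a profile can then be referenced from the pod or container
`securityContext` by setting `seccompProfile.type: Localhost` together with a
`localhostProfile` path relative to the kubelet's seccomp profile root
directory.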
#### Deploying the modified application
Based on the outcome of the application tests, it may be required to change the
application deployment by either specifying `Unconfined` or a custom seccomp
profile. This is not the case if the application works as intended with
`RuntimeDefault`.
#### Enable the kubelet configuration
If everything went well, then the feature is ready to be enabled by the kubelet
configuration or its corresponding CLI flag. This should be done on a per-node
basis to reduce the overall risk of having missed a syscall during the
application tests. If it's possible to monitor audit logs within the cluster,
then it's recommended to do so to catch any seccomp events that were missed
earlier. If the application works as intended, then the feature can be enabled
for further nodes within the cluster.
## Conclusion
Thank you for reading this blog post! I hope you enjoyed seeing how the usage of
seccomp profiles has evolved in Kubernetes over the past releases as much as I
did. On your own cluster, change the default seccomp profile to
`RuntimeDefault` (using this new feature) and see the security benefits, and, of
course, feel free to reach out any time with feedback or questions.
---
_Editor's note: If you have any questions or feedback about this blog post, feel
free to reach out via the [Kubernetes slack in #sig-node][slack]._
[slack]: https://kubernetes.slack.com/messages/sig-node

View File

@ -0,0 +1,48 @@
---
layout: blog
title: 'Minimum Ready Seconds for StatefulSets'
date: 2021-08-27
slug: minreadyseconds-statefulsets
---
**Authors:** Ravi Gudimetla (Red Hat), Maciej Szulik (Red Hat)
This blog describes the notion of Availability for `StatefulSet` workloads, and a new alpha feature in Kubernetes 1.22 which adds `minReadySeconds` configuration for `StatefulSets`.
## What problems does this solve?
Prior to the Kubernetes 1.22 release, once a `StatefulSet` `Pod` is in the `Ready` state it is considered `Available` to receive traffic. For some `StatefulSet` workloads, that may not be the case. For example, for a workload like Prometheus with multiple instances of Alertmanager, an instance should be considered `Available` only when Alertmanager's state transfer is complete, not when the `Pod` is in the `Ready` state. Since `minReadySeconds` adds a buffer, the state transfer may be complete before the `Pod` becomes `Available`. While this is not a foolproof way of identifying whether the state transfer is complete, it gives end users a way to express their intention of waiting for some time before the `Pod` is considered `Available` and ready to serve requests.
Another case where `minReadySeconds` helps is when using `LoadBalancer` `Services` with cloud providers. Since `minReadySeconds` adds latency after a `Pod` is `Ready`, it provides buffer time to prevent killing pods in rotation before new pods show up. Imagine a load balancer on an unhappy path taking 10-15 seconds to propagate changes. If you have 2 replicas, you'd kill the second replica only after the first one is up, but in reality the first replica cannot be seen by the load balancer because it is not yet ready to serve requests.
So, in general, the notion of `Availability` in `StatefulSets` is pretty useful, and this feature helps in solving the above problems. This capability already exists for `Deployments` and `DaemonSets`, and we now have it for `StatefulSets` too, to give users a consistent workload experience.
## How does it work?
The statefulSet controller watches for both `StatefulSets` and the `Pods` associated with them. When the feature gate associated with this feature is enabled, the statefulSet controller identifies how long a particular `Pod` associated with a `StatefulSet` has been in the `Running` state.
If this value is greater than or equal to the time specified by the end user in `.spec.minReadySeconds` field, the statefulSet controller updates a field called `availableReplicas` in the `StatefulSet`'s status subresource to include this `Pod`. The `status.availableReplicas` in `StatefulSet`'s status is an integer field which tracks the number of pods that are `Available`.
## How do I use it?
You are required to prepare the following things in order to try out the feature:
- Download and install kubectl version v1.22.0 or later
- Switch on the feature gate with the command line flag `--feature-gates=StatefulSetMinReadySeconds=true` on `kube-apiserver` and `kube-controller-manager`
After successfully starting `kube-apiserver` and `kube-controller-manager`, you will see `availableReplicas` in the status and `minReadySeconds` in the spec (with a default value of 0).
Specify a value for `minReadySeconds` for any StatefulSet and you can check whether `Pods` are available by checking the `availableReplicas` field using:
`kubectl get statefulset/<name_of_the_statefulset> -o yaml`
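For example, here is a minimal sketch of a StatefulSet that uses the new field (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  minReadySeconds: 10 # Pods must stay Ready for 10 seconds before they count as Available
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
```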
## How can I learn more?
- Read the KEP: [minReadySeconds for StatefulSets](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2599-minreadyseconds-for-statefulsets#readme)
- Read the documentation: [Minimum ready seconds](/docs/concepts/workloads/controllers/statefulset/#minimum-ready-seconds) for StatefulSet
- Review the [API definition](/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/) for StatefulSet
## How do I get involved?
Please reach out to us in the [#sig-apps](https://kubernetes.slack.com/archives/C18NZM5K9) channel on Slack (visit https://slack.k8s.io/ for an invitation if you need one), or on the SIG Apps mailing list: kubernetes-sig-apps@googlegroups.com

View File

@ -0,0 +1,219 @@
---
layout: blog
title: "Kubernetes 1.22: A New Design for Volume Populators"
date: 2021-08-30
slug: volume-populators-redesigned
---
**Authors:**
Ben Swartzlander (NetApp)
Kubernetes v1.22, released earlier this month, introduced a redesigned approach for volume
populators. Originally implemented
in v1.18, the API suffered from backwards compatibility issues. Kubernetes v1.22 includes a new API
field called `dataSourceRef` that fixes these problems.
## Data sources
Earlier Kubernetes releases already added a `dataSource` field into the
[PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) API,
used for cloning volumes and creating volumes from snapshots. You could use the `dataSource` field when
creating a new PVC, referencing either an existing PVC or a VolumeSnapshot in the same namespace.
That also modified the normal provisioning process so that instead of yielding an empty volume, the
new PVC contained the same data as either the cloned PVC or the cloned VolumeSnapshot.
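For example, a PVC that clones an existing PVC this way might look like the following
sketch (the names are placeholders, and the storage class must support cloning):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  dataSource:
    kind: PersistentVolumeClaim
    name: existing-pvc
```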
Volume populators embrace the same design idea, but extend it to any type of object, as long
as there exists a [custom resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to define the data source, and a populator controller to implement the logic. Initially,
the `dataSource` field was directly extended to allow arbitrary objects, if the `AnyVolumeDataSource`
feature gate was enabled on a cluster. That change unfortunately caused backwards compatibility
problems, and so the new `dataSourceRef` field was born.
In v1.22 if the `AnyVolumeDataSource` feature gate is enabled, the `dataSourceRef` field is
added, which behaves similarly to the `dataSource` field except that it allows arbitrary
objects to be specified. The API server ensures that the two fields always have the same
contents, and neither of them is mutable. The difference is that at creation time
`dataSource` allows only PVCs or VolumeSnapshots, and ignores all other values, while
`dataSourceRef` allows most types of objects, and in the few cases it doesn't allow an
object (core objects other than PVCs) a validation error occurs.
When this API change graduates to stable, we plan to deprecate using `dataSource` and recommend
using the `dataSourceRef` field for all use cases.
In the v1.22 release, `dataSourceRef` is available (as an alpha feature) specifically for cases
where you want to use custom volume populators.
## Using populators
Every volume populator must have one or more CRDs that it supports. Administrators may
install the CRD and the populator controller; PVCs whose `dataSourceRef` specifies
a CR of a type that the populator supports will then be handled by the populator controller
instead of the CSI driver directly.
Underneath the covers, the CSI driver is still invoked to create an empty volume, which
the populator controller fills with the appropriate data. The PVC doesn't bind to the PV
until it's fully populated, so it's safe to define a whole application manifest including
pod and PVC specs and the pods won't begin running until everything is ready, just as if
the PVC was a clone of another PVC or VolumeSnapshot.
## How it works
PVCs with data sources are still noticed by the external-provisioner sidecar for the
related storage class (assuming a CSI provisioner is used), but because the sidecar
doesn't understand the data source kind, it doesn't do anything. The populator controller
is also watching for PVCs with data sources of a kind that it understands and when it
sees one, it creates a temporary PVC of the same size, volume mode, storage class,
and even on the same topology (if topology is used) as the original PVC. The populator
controller creates a worker pod that attaches to the volume and writes the necessary
data to it, then detaches from the volume and the populator controller rebinds the PV
from the temporary PVC to the original PVC.
## Trying it out
The following things are required to use volume populators:
* Enable the `AnyVolumeDataSource` feature gate
* Install a CRD for the specific data source / populator
* Install the populator controller itself
Populator controllers may use the [lib-volume-populator](https://github.com/kubernetes-csi/lib-volume-populator)
library to do most of the Kubernetes API level work. Individual populators only need to
provide logic for actually writing data into the volume based on a particular CR
instance. This library provides a sample populator implementation.
These optional components improve user experience:
* Install the VolumePopulator CRD
* Create a VolumePopulator custom resource for each specific data source
* Install the [volume data source validator](https://github.com/kubernetes-csi/volume-data-source-validator)
controller (alpha)
The purpose of these components is to generate warning events on PVCs with data sources
for which there is no populator.
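As a sketch, registering the sample populator with the validator could look like
this (treat the exact `apiVersion` as an assumption for the version of the
VolumePopulator CRD you installed):

```yaml
apiVersion: populator.storage.k8s.io/v1beta1
kind: VolumePopulator
metadata:
  name: hello-populator
sourceKind:
  group: hello.k8s.io
  kind: Hello
```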
## Putting it all together
To see how this works, you can install the sample "hello" populator and try it
out.
First install the volume-data-source-validator controller.
```terminal
kubectl apply -f https://github.com/kubernetes-csi/volume-data-source-validator/blob/master/deploy/kubernetes/rbac-data-source-validator.yaml
kubectl apply -f https://github.com/kubernetes-csi/volume-data-source-validator/blob/master/deploy/kubernetes/setup-data-source-validator.yaml
```
Next install the example populator.
```terminal
kubectl apply -f https://github.com/kubernetes-csi/lib-volume-populator/blob/master/example/hello-populator/crd.yaml
kubectl apply -f https://github.com/kubernetes-csi/lib-volume-populator/blob/master/example/hello-populator/deploy.yaml
```
Create an instance of the `Hello` CR, with some text.
```yaml
apiVersion: hello.k8s.io/v1alpha1
kind: Hello
metadata:
name: example-hello
spec:
fileName: example.txt
fileContents: Hello, world!
```
Create a PVC that refers to that CR as its data source.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
dataSourceRef:
apiGroup: hello.k8s.io
kind: Hello
name: example-hello
volumeMode: Filesystem
```
Next, run a job that reads the file in the PVC.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: example-job
spec:
template:
spec:
containers:
- name: example-container
image: busybox:latest
command:
- cat
- /mnt/example.txt
volumeMounts:
- name: vol
mountPath: /mnt
restartPolicy: Never
volumes:
- name: vol
persistentVolumeClaim:
claimName: example-pvc
```
Wait for the job to complete (including all of its dependencies).
```terminal
kubectl wait --for=condition=Complete job/example-job
```
Finally, examine the log from the job.
```terminal
kubectl logs job/example-job
Hello, world!
```
Note that the volume already contained a text file with the string contents from
the CR. This is only the simplest example. Actual populators can set up the volume
to contain arbitrary contents.
## How to write your own volume populator
Developers interested in writing new populators are encouraged to use the
[lib-volume-populator](https://github.com/kubernetes-csi/lib-volume-populator) library
and to supply only a small controller wrapper around the library, plus a pod image
capable of attaching to volumes and writing the appropriate data to the volume.
Individual populators can be extremely generic such that they work with every type
of PVC, or they can do vendor specific things to rapidly fill a volume with data
if the volume was provisioned by a specific CSI driver from the same vendor, for
example, by communicating directly with the storage for that volume.
## The future
As this feature is still in alpha, we expect to update the out-of-tree controllers
with more tests and documentation. The community plans to eventually re-implement
the populator library as a sidecar, for ease of operations.
We hope to see some official community-supported populators for some widely-shared
use cases. Also, we expect that volume populators will be used by backup vendors
as a way to "restore" backups to volumes, and possibly a standardized API to do
this will evolve.
## How can I learn more?
The enhancement proposal,
[Volume Populators](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1495-volume-populators), includes lots of detail about the history and technical implementation
of this feature.
[Volume populators and data sources](/docs/concepts/storage/persistent-volumes/#volume-populators-and-data-sources), within the documentation topic about persistent volumes,
explains how to use this feature in your cluster.
Please get involved by joining the Kubernetes storage SIG to help us enhance this
feature. There are a lot of good ideas already and we'd be thrilled to have more!

View File

@ -0,0 +1,67 @@
---
layout: blog
title: 'Alpha in Kubernetes v1.22: API Server Tracing'
date: 2021-09-03
slug: api-server-tracing
---
**Authors:** David Ashpole (Google)
In distributed systems, it can be hard to figure out where problems are. You grep through one component's logs just to discover that the source of your problem is in another component. You search there only to discover that you need to enable debug logs to figure out what really went wrong... And it goes on. The more complex the path your request takes, the harder it is to answer questions about where it went. I've personally spent many hours doing this dance with a variety of Kubernetes components. Distributed tracing is a tool which is designed to help in these situations, and the Kubernetes API Server is, perhaps, the most important Kubernetes component to be able to debug. At Kubernetes' SIG Instrumentation, our mission is to make it easier to understand what's going on in your cluster, and we are happy to announce that distributed tracing in the Kubernetes API Server reached alpha in 1.22.
## What is Tracing?
Distributed tracing links together a bunch of super-detailed information from multiple different sources, and structures that telemetry into a single tree for that request. Unlike logging, which limits the quantity of data ingested by using log levels, tracing collects all of the details and uses sampling to collect only a small percentage of requests. This means that once you have a trace which demonstrates an issue, you should have all the information you need to root-cause the problem--no grepping for object UID required! My favorite aspect, though, is how useful the visualizations of traces are. Even if you don't understand the inner workings of the API Server, or don't have a clue what an etcd "Transaction" is, I'd wager you (yes, you!) could tell me roughly what the order of events was, and which components were involved in the request. If some step takes a long time, it is easy to tell where the problem is.
## Why OpenTelemetry?
It's important that Kubernetes works well for everyone, regardless of who manages your infrastructure, or which vendors you choose to integrate with. That is particularly true for Kubernetes' integrations with telemetry solutions. OpenTelemetry, being a CNCF project, shares these core values, and is creating exactly what we need in Kubernetes: A set of open standards for Tracing client library APIs and a standard trace format. By using OpenTelemetry, we can ensure users have the freedom to choose their backend, and ensure vendors have a level playing field. The timing couldn't be better: the OpenTelemetry golang API and SDK are very close to their 1.0 release, and will soon offer backwards-compatibility for these open standards.
## Why instrument the API Server?
The Kubernetes API Server is a great candidate for tracing for a few reasons:
* It follows the standard "RPC" model (serve a request by making requests to downstream components), which makes it easy to instrument.
* Users are latency-sensitive: If a request takes more than 10 seconds to complete, many clients will time-out.
* It has a complex service topology: A single request could require consulting a dozen webhooks, or involve multiple requests to etcd.
## Trying out APIServer Tracing with a webhook
### Enabling API Server Tracing
1. Enable the APIServerTracing [feature-gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/).
2. Set our configuration for tracing by pointing the `--tracing-config-file` flag on the kube-apiserver at our config file, which contains:
```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
# 1% sampling rate
samplingRatePerMillion: 10000
```
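Putting both steps together, a sketch of the relevant kube-apiserver flags might
look like this (the config file path is a placeholder, and all other required
kube-apiserver flags are omitted):

```bash
kube-apiserver \
  --feature-gates=APIServerTracing=true \
  --tracing-config-file=/etc/kubernetes/tracing-config.yaml
```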
### Enabling Etcd Tracing
Add `--experimental-enable-distributed-tracing`, `--experimental-distributed-tracing-address=0.0.0.0:4317`, `--experimental-distributed-tracing-service-name=etcd` flags to etcd to enable tracing. Note that this traces every request, so it will probably generate a lot of traces if you enable it.
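For reference, a sketch of the corresponding etcd invocation (all other etcd flags omitted):

```bash
etcd \
  --experimental-enable-distributed-tracing \
  --experimental-distributed-tracing-address=0.0.0.0:4317 \
  --experimental-distributed-tracing-service-name=etcd
```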
### Example Trace: List Nodes
I could've used any trace backend, but decided to use Jaeger, since it is one of the most popular open-source tracing projects. I deployed [the Jaeger All-in-one container](https://hub.docker.com/r/jaegertracing/all-in-one) in my cluster, deployed [the OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector) on my control-plane node ([example](https://github.com/dashpole/dashpole_demos/tree/master/otel/controlplane)), and captured traces like this one:
![Jaeger screenshot showing API server and etcd trace](/images/blog/2021-09-03-api-server-tracing/example-trace-1.png "Jaeger screenshot showing API server and etcd trace")
The teal lines are from the API Server; they include it serving a request to `/api/v1/nodes`, and issuing a gRPC `Range` RPC to etcd. The yellowish line is from etcd handling the `Range` RPC.
### Example Trace: Create Pod with Mutating Webhook
I instrumented the [example webhook](https://github.com/kubernetes-sigs/controller-runtime/tree/master/examples/builtins) with OpenTelemetry (I had to [patch](https://github.com/dashpole/controller-runtime/commit/85fdda7ba03dd2c22ef62c1a3dbdf5aa651f90da) controller-runtime, but it makes a neat demo), and routed traces to Jaeger as well. I collected traces like this one:
![Jaeger screenshot showing API server, admission webhook, and etcd trace](/images/blog/2021-09-03-api-server-tracing/example-trace-2.png "Jaeger screenshot showing API server, admission webhook, and etcd trace")
Compared with the previous trace, there are two new spans: A teal span from the API Server making a request to the admission webhook, and a brown span from the admission webhook serving the request. Even if you didn't instrument your webhook, you would still get the span from the API Server making the request to the webhook.
## Get involved!
As this is our first attempt at adding distributed tracing to a Kubernetes component, there is probably a lot we can improve! If my struggles resonated with you, or if you just want to try out the latest Kubernetes has to offer, please give the feature a try and open issues with any problem you encountered and ways you think the feature could be improved.
This is just the very beginning of what we can do with distributed tracing in Kubernetes. If there are other components you think would benefit from distributed tracing, or want to help bring API Server Tracing to GA, join sig-instrumentation at our [regular meetings](https://github.com/kubernetes/community/tree/master/sig-instrumentation#instrumentation-special-interest-group) and get involved!

View File

@ -37,6 +37,24 @@ to the labels, each `EndpointSlice` that is managed on behalf of a Service has
an owner reference. Owner references help different parts of Kubernetes avoid
interfering with objects they don't control.
{{< note >}}
Cross-namespace owner references are disallowed by design.
Namespaced dependents can specify cluster-scoped or namespaced owners.
A namespaced owner **must** exist in the same namespace as the dependent.
If it does not, the owner reference is treated as absent, and the dependent
is subject to deletion once all owners are verified absent.
Cluster-scoped dependents can only specify cluster-scoped owners.
In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner,
it is treated as having an unresolvable owner reference, and is not able to be garbage collected.
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
You can check for that kind of Event by running
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
{{< /note >}}
## Cascading deletion {#cascading-deletion}
Kubernetes checks for and deletes objects that no longer have owner

View File

@ -122,6 +122,9 @@ To mark a Node unschedulable, run:
kubectl cordon $NODENAME
```
See [Safely Drain a Node](/docs/tasks/administer-cluster/safely-drain-node/)
for more details.
{{< note >}}
Pods that are part of a {{< glossary_tooltip term_id="daemonset" >}} tolerate
being run on an unschedulable Node. DaemonSets typically provide node-local services
@ -162,8 +165,8 @@ The `conditions` field describes the status of all `Running` nodes. Examples of
| Node Condition | Description |
|----------------------|-------------|
| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
| `DiskPressure` | `True` if pressure exists on the disk size--that is, if the disk capacity is low; otherwise `False` |
| `MemoryPressure` | `True` if pressure exists on the node memory--that is, if the node memory is low; otherwise `False` |
| `DiskPressure` | `True` if pressure exists on the disk size—that is, if the disk capacity is low; otherwise `False` |
| `MemoryPressure` | `True` if pressure exists on the node memory—that is, if the node memory is low; otherwise `False` |
| `PIDPressure` | `True` if pressure exists on the processes—that is, if there are too many processes on the node; otherwise `False` |
| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
{{< /table >}}
@ -174,7 +177,8 @@ If you use command-line tools to print details of a cordoned Node, the Condition
cordoned nodes are marked Unschedulable in their spec.
{{< /note >}}
The node condition is represented as a JSON object. For example, the following structure describes a healthy node:
In the Kubernetes API, a node's condition is represented as part of the `.status`
of the Node resource. For example, the following JSON structure describes a healthy node:
```json
"conditions": [
@ -189,7 +193,17 @@ The node condition is represented as a JSON object. For example, the following s
]
```
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), then all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
If the `status` of the Ready condition remains `Unknown` or `False` for longer
than the `pod-eviction-timeout` (an argument passed to the
{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager"
>}}), then the [node controller](#node-controller) triggers
{{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
for all Pods assigned to that node. The default eviction timeout duration is
**five minutes**.
In some cases when the node is unreachable, the API server is unable to communicate
with the kubelet on the node. The decision to delete the pods cannot be communicated to
the kubelet until communication with the API server is re-established. In the meantime,
the pods that are scheduled for deletion may continue to run on the partitioned node.
The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
@ -199,10 +213,12 @@ may need to delete the node object by hand. Deleting the node object from Kubern
all the Pod objects running on the node to be deleted from the API server and frees up their
names.
The node lifecycle controller automatically creates
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that represent conditions.
When problems occur on nodes, the Kubernetes control plane automatically creates
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
affecting the node.
The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
Pods can also have tolerations which let them tolerate a Node's taints.
Pods can also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
them run on a Node even though it has a specific taint.
See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
for more details.
@ -222,10 +238,43 @@ on a Node.
### Info
Describes general information about the node, such as kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), and OS name.
This information is gathered by Kubelet from the node.
Describes general information about the node, such as kernel version, Kubernetes
version (kubelet and kube-proxy version), container runtime details, and which
operating system the node uses.
The kubelet gathers this information from the node and publishes it into
the Kubernetes API.
### Node controller
## Heartbeats
Heartbeats, sent by Kubernetes nodes, help your cluster determine the
availability of each node, and to take action when failures are detected.
For nodes there are two forms of heartbeats:
* updates to the `.status` of a Node
* [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects
within the `kube-node-lease`
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
Each Node has an associated Lease object.
Compared to updates to `.status` of a Node, a Lease is a lightweight resource.
Using Leases for heartbeats reduces the performance impact of these updates
for large clusters.
The kubelet is responsible for creating and updating the `.status` of Nodes,
and for updating their related Leases.
- The kubelet updates the node's `.status` either when there is change in status
or if there has been no update for a configured interval. The default interval
for `.status` updates to Nodes is 5 minutes, which is much longer than the 40
second default timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from
updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
using exponential backoff that starts at 200 milliseconds and is capped at 7 seconds.
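For example, assuming you have `kubectl` access to the cluster, you can list the
per-node Lease objects with:

```shell
kubectl get leases --namespace kube-node-lease
```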
## Node controller
The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a
Kubernetes control plane component that manages various aspects of nodes.
@ -241,39 +290,18 @@ controller deletes the node from its list of nodes.
The third is monitoring the nodes' health. The node controller is
responsible for:
- Updating the NodeReady condition of NodeStatus to ConditionUnknown when a node
becomes unreachable, as the node controller stops receiving heartbeats for some
reason such as the node being down.
- Evicting all the pods from the node using graceful termination if
the node continues to be unreachable. The default timeouts are 40s to start
reporting ConditionUnknown and 5m after that to start evicting pods.
- In the case that a node becomes unreachable, updating the NodeReady condition
  within the Node's `.status`. In this case the node controller sets the
  NodeReady condition to `ConditionUnknown`.
- If a node remains unreachable: triggering
[API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)
for all of the Pods on the unreachable node. By default, the node controller
waits 5 minutes between marking the node as `ConditionUnknown` and submitting
the first eviction request.
The node controller checks the state of each node every `--node-monitor-period` seconds.
#### Heartbeats
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
There are two forms of heartbeats: updates of `NodeStatus` and the
[Lease object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io).
Each Node has an associated Lease object in the `kube-node-lease`
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
Lease is a lightweight resource, which improves the performance
of the node heartbeats as the cluster scales.
The kubelet is responsible for creating and updating the `NodeStatus` and
a Lease object.
- The kubelet updates the `NodeStatus` either when there is change in status
or if there has been no update for a configured interval. The default interval
for `NodeStatus` updates is 5 minutes, which is much longer than the 40 second default
timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from the
`NodeStatus` updates. If the Lease update fails, the kubelet retries with
exponential backoff starting at 200 milliseconds and capped at 7 seconds.
#### Reliability
### Rate limits on eviction
In most cases, the node controller limits the eviction rate to
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
@ -281,7 +309,7 @@ from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
the same time:
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
(default 0.55), then the eviction rate is reduced.
@ -293,15 +321,17 @@ the same time:
The reason these policies are implemented per availability zone is because one
availability zone might become partitioned from the master while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,
then there is only one availability zone (i.e. the whole cluster).
then the eviction mechanism does not take per-zone unavailability into account.
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy, then the node controller evicts at
the normal rate of `--node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
case, the node controller assumes that there is some problem with master
connectivity and stops all evictions until some connectivity is restored.
completely unhealthy (none of the nodes in the cluster are healthy). In such a
case, the node controller assumes that there is some problem with connectivity
between the control plane and the nodes, and doesn't perform any evictions.
(If there has been an outage and some nodes reappear, the node controller does
evict pods from the remaining nodes that are unhealthy or unreachable).
The node controller is also responsible for evicting pods running on nodes with
`NoExecute` taints, unless those pods tolerate that taint.
@ -309,7 +339,7 @@ The node controller also adds {{< glossary_tooltip text="taints" term_id="taint"
corresponding to node problems like node unreachable or not ready. This means
that the scheduler won't place Pods onto unhealthy nodes.
### Node capacity
## Resource capacity tracking {#node-capacity}
Node objects track information about the Node's resource capacity: for example, the amount
of memory available and the number of CPUs.

View File

@ -81,7 +81,7 @@ rotate an application's logs automatically.
As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh).
When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure.
The kubelet sends this information to the CRI container runtime and the runtime writes the container logs to the given location.

View File

@ -160,7 +160,7 @@ If you're interested in learning more about `kubectl`, go ahead and read [kubect
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
```yaml
labels:

View File

@ -61,6 +61,11 @@ You can write a Pod `spec` that refers to a ConfigMap and configures the contain
in that Pod based on the data in the ConfigMap. The Pod and the ConfigMap must be in
the same {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
{{< note >}}
The `spec` of a {{< glossary_tooltip text="static Pod" term_id="static-pod" >}} cannot refer to a ConfigMap
or any other API objects.
{{< /note >}}
Here's an example ConfigMap that has some keys with single values,
and other keys where the value looks like a fragment of a configuration
format.

View File

@ -181,8 +181,9 @@ When using Docker:
flag in the `docker run` command.
- The `spec.containers[].resources.limits.cpu` is converted to its millicore value and
multiplied by 100. The resulting value is the total amount of CPU time that a container can use
every 100ms. A container cannot use more than its share of CPU time during this interval.
multiplied by 100. The resulting value is the total amount of CPU time in microseconds
that a container can use every 100ms. A container cannot use more than its share of
CPU time during this interval.
{{< note >}}
The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
@ -337,6 +338,9 @@ spec:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
volumeMounts:
- name: ephemeral
mountPath: "/tmp"
- name: log-aggregator
image: images.my-company.example/log-aggregator:v6
resources:
@ -344,6 +348,12 @@ spec:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
volumeMounts:
- name: ephemeral
mountPath: "/tmp"
volumes:
- name: ephemeral
emptyDir: {}
```
### How Pods with ephemeral-storage requests are scheduled

View File

@ -21,7 +21,7 @@ This is a living document. If you think of something that is not on this list bu
- Write your configuration files using YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly.
- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/guestbook-all-in-one.yaml) file as an example of this syntax.
- Note also that many `kubectl` commands can be called on a directory. For example, you can call `kubectl apply` on a directory of config files.
@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN
## Using Labels
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/) app for examples of this approach.
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.
A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).
@ -73,32 +73,6 @@ A desired state of an object is described by a Deployment, and if changes to tha
- You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).
## Container Images
The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull the specified image.
- `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally.
- `imagePullPolicy: Always`: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.
- `imagePullPolicy` is omitted and either the image tag is `:latest` or it is omitted: `imagePullPolicy` is automatically set to `Always`. Note that this will _not_ be updated to `IfNotPresent` if the tag changes value.
- `imagePullPolicy` is omitted and the image tag is present but not `:latest`: `imagePullPolicy` is automatically set to `IfNotPresent`. Note that this will _not_ be updated to `Always` if the tag is later removed or changed to `:latest`.
- `imagePullPolicy: Never`: the image is assumed to exist locally. No attempt is made to pull the image.
{{< note >}}
To make sure the container always uses the same version of the image, you can specify its [digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier); replace `<image-name>:<tag>` with `<image-name>@<digest>` (for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`). The digest uniquely identifies a specific version of the image, so it is never updated by Kubernetes unless you change the digest value.
{{< /note >}}
{{< note >}}
You should avoid using the `:latest` tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
{{< /note >}}
{{< note >}}
The caching semantics of the underlying image provider make even `imagePullPolicy: Always` efficient, as long as the registry is reliably accessible. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
{{< /note >}}
## Using kubectl
- Use `kubectl apply -f <directory>`. This looks for Kubernetes configuration in all `.yaml`, `.yml`, and `.json` files in `<directory>` and passes it to `apply`.

View File

@ -75,9 +75,9 @@ precedence.
## Types of Secret {#secret-types}
When creating a Secret, you can specify its type using the `type` field of
the [`Secret`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
resource, or certain equivalent `kubectl` command line flags (if available).
The Secret type is used to facilitate programmatic handling of the Secret data.
a Secret resource, or certain equivalent `kubectl` command line flags (if available).
The `type` of a Secret is used to facilitate programmatic handling of different
kinds of confidential data.
Kubernetes provides several builtin types for some common usage scenarios.
These types vary in terms of the validations performed and the constraints
@ -833,7 +833,10 @@ are obtained from the API server.
This includes any Pods created using `kubectl`, or indirectly via a replication
controller. It does not include Pods created as a result of the kubelet
`--manifest-url` flag, its `--config` flag, or its REST API (these are
not common ways to create Pods.)
not common ways to create Pods).
The `spec` of a {{< glossary_tooltip text="static Pod" term_id="static-pod" >}} cannot refer to a Secret
or any other API objects.
Secrets must be created before they are consumed in Pods as environment
variables unless they are marked as optional. References to secrets that do
@ -1252,3 +1255,4 @@ for secret data, so that the secrets are not stored in the clear into {{< glossa
- Learn how to [manage Secret using `kubectl`](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)
- Learn how to [manage Secret using config file](/docs/tasks/configmap-secret/managing-secret-using-config-file/)
- Learn how to [manage Secret using kustomize](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)
- Read the [API reference](/docs/reference/kubernetes-api/config-and-storage-resources/secret-v1/) for `Secret`

View File

@ -52,7 +52,7 @@ FOO_SERVICE_PORT=<the port the service is running on>
```
Services have dedicated IP addresses and are available to the Container via DNS,
if [DNS addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled. 
if [DNS addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/) is enabled. 

View File

@ -39,14 +39,6 @@ There are additional rules about where you can place the separator
characters (`_`, `-`, and `.`) inside an image tag.
If you don't specify a tag, Kubernetes assumes you mean the tag `latest`.
{{< caution >}}
You should avoid using the `latest` tag when deploying containers in production,
as it is harder to track which version of the image is running and more difficult
to roll back to a working version.
Instead, specify a meaningful tag such as `v1.42.0`.
{{< /caution >}}
## Updating images
When you first create a {{< glossary_tooltip text="Deployment" term_id="deployment" >}},
@ -57,13 +49,68 @@ specified. This policy causes the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip pulling an
image if it already exists.
If you would like to always force a pull, you can do one of the following:
### Image pull policy
- set the `imagePullPolicy` of the container to `Always`.
- omit the `imagePullPolicy` and use `:latest` as the tag for the image to use;
Kubernetes will set the policy to `Always`.
- omit the `imagePullPolicy` and the tag for the image to use.
- enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
The `imagePullPolicy` for a container and the tag of the image affect when the
[kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull (download) the specified image.
Here's a list of the values you can set for `imagePullPolicy` and the effects
these values have:
`IfNotPresent`
: the image is pulled only if it is not already present locally.
`Always`
: every time the kubelet launches a container, the kubelet queries the container
image registry to resolve the name to an image
[digest](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier). If the kubelet has a
container image with that exact digest cached locally, the kubelet uses its cached
image; otherwise, the kubelet pulls the image with the resolved digest,
and uses that image to launch the container.
`Never`
: the kubelet does not try fetching the image. If the image is somehow already present
locally, the kubelet attempts to start the container; otherwise, startup fails.
See [pre-pulled images](#pre-pulled-images) for more details.
The caching semantics of the underlying image provider make even
`imagePullPolicy: Always` efficient, as long as the registry is reliably accessible.
Your container runtime can notice that the image layers already exist on the node
so that they don't need to be downloaded again.
{{< note >}}
You should avoid using the `:latest` tag when deploying containers in production as
it is harder to track which version of the image is running and more difficult to
roll back properly.
Instead, specify a meaningful tag such as `v1.42.0`.
{{< /note >}}
To make sure the Pod always uses the same version of a container image, you can specify
the image's digest;
replace `<image-name>:<tag>` with `<image-name>@<digest>`
(for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`).
When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs the same code every time it starts a container with that image name and digest specified. Specifying an image fixes the code that you run so that a change at the registry cannot lead to that mix of versions.
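As an illustrative sketch (the image name and digest below are placeholders rather
than a real published image), a Pod pinned to a digest looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: digest-pinned-pod
spec:
  containers:
  - name: app
    image: registry.example/app@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```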
There are third-party [admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
that mutate Pods (and pod templates) when they are created, so that the
running workload is defined based on an image digest rather than a tag.
That might be useful if you want to make sure that all your workload is
running the same code no matter what tag changes happen at the registry.
#### Default image pull policy {#imagepullpolicy-defaulting}
When you (or a controller) submit a new Pod to the API server, your cluster sets the
`imagePullPolicy` field when specific conditions are met:
- if you omit the `imagePullPolicy` field, and the tag for the container image is
`:latest`, `imagePullPolicy` is automatically set to `Always`;
- if you omit the `imagePullPolicy` field, and you don't specify the tag for the
container image, `imagePullPolicy` is automatically set to `Always`;
- if you omit the `imagePullPolicy` field, and you specify a tag for the
  container image that isn't `:latest`, the `imagePullPolicy` is automatically set to
  `IfNotPresent`.
{{< note >}}
The value of `imagePullPolicy` of the container is always set when the object is
@ -75,7 +122,17 @@ For example, if you create a Deployment with an image whose tag is _not_
the pull policy of any object after its initial creation.
{{< /note >}}
When `imagePullPolicy` is defined without a specific value, it is also set to `Always`.
#### Required image pull
If you would like to always force a pull, you can do one of the following:
- Set the `imagePullPolicy` of the container to `Always`.
- Omit the `imagePullPolicy` and use `:latest` as the tag for the image to use;
Kubernetes will set the policy to `Always` when you submit the Pod.
- Omit the `imagePullPolicy` and the tag for the image to use;
Kubernetes will set the policy to `Always` when you submit the Pod.
- Enable the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller.
### ImagePullBackOff
@ -328,6 +385,7 @@ common use cases and suggested solutions.
If you need access to multiple registries, you can create one secret for each registry.
The kubelet merges any `imagePullSecrets` into a single virtual `.docker/config.json`.
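For example, a Pod that pulls from two private registries could reference one Secret per registry; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod              # hypothetical names throughout
spec:
  containers:
  - name: app
    image: registry-1.example/app:1.0
  imagePullSecrets:
  - name: regcred-registry-1         # one Secret per registry
  - name: regcred-registry-2
```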
## {{% heading "whatsnext" %}}
* Read the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/master/manifest.md).


@ -1,5 +1,5 @@
---
title: Extending the Kubernetes API with the aggregation layer
title: Kubernetes API Aggregation Layer
reviewers:
- lavalamp
- cheftako
@ -34,7 +34,7 @@ If your extension API server cannot achieve that latency requirement, consider m
* To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/).
* Then, [setup an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer.
* Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
* Read the specification for [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io)
* Read about [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) in the API reference
Alternatively: learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).


@ -167,7 +167,7 @@ CRDs are easier to create than Aggregated APIs.
| CRDs | Aggregated API |
| --------------------------- | -------------- |
| Do not require programming. Users can choose any language for a CRD controller. | Requires programming in Go and building binary and image. |
| Do not require programming. Users can choose any language for a CRD controller. | Requires programming and building binary and image. |
| No additional service to run; CRDs are handled by API server. | An additional service to create and that could fail. |
| No ongoing support once the CRD is created. Any bug fixes are picked up as part of normal Kubernetes Master upgrades. | May need to periodically pickup bug fixes from upstream and rebuild and update the Aggregated API server. |
| No need to handle multiple versions of your API; for example, when you control the client for this resource, you can upgrade it in sync with the API. | You need to handle multiple versions of your API; for example, when developing an extension to share with the world. |


@ -114,6 +114,7 @@ Operator.
* [Charmed Operator Framework](https://juju.is/)
* [kubebuilder](https://book.kubebuilder.io/)
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) along with WebHooks that
you implement yourself


@ -16,7 +16,7 @@ card:
When you deploy Kubernetes, you get a cluster.
{{< glossary_definition term_id="cluster" length="all" prepend="A Kubernetes cluster consists of">}}
This document outlines the various components you need to have
This document outlines the various components you need to have for
a complete and working Kubernetes cluster.
Here's the diagram of a Kubernetes cluster with all the components tied together.


@ -47,7 +47,7 @@ and the controller deletes the volume.
Like {{<glossary_tooltip text="labels" term_id="label">}}, [owner references](/concepts/overview/working-with-objects/owners-dependents/)
describe the relationships between objects in Kubernetes, but are used for a
different purpose. When a
{{<glossary_tooltip text="controllers" term_id="controller">}} manages objects
{{<glossary_tooltip text="controller" term_id="controller">}} manages objects
like Pods, it uses labels to track changes to groups of related objects. For
example, when a {{<glossary_tooltip text="Job" term_id="job">}} creates one or
more Pods, the Job controller applies labels to those pods and tracks changes to
@ -77,4 +77,4 @@ your cluster.
## {{% heading "whatsnext" %}}
* Read [Using Finalizers to Control Deletion](/blog/2021/05/14/using-finalizers-to-control-deletion/)
on the Kubernetes blog.
on the Kubernetes blog.


@ -81,12 +81,11 @@ In the `.yaml` file for the Kubernetes object you want to create, you'll need to
* `metadata` - Data that helps uniquely identify the object, including a `name` string, `UID`, and optional `namespace`
* `spec` - What state you desire for the object
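For illustration, a minimal manifest that sets these fields could look like the following sketch (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```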
The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) can help you find the spec format for all of the objects you can create using Kubernetes.
For example, the `spec` format for a Pod can be found in
[PodSpec v1 core](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core),
and the `spec` format for a Deployment can be found in
[DeploymentSpec v1 apps](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deploymentspec-v1-apps).
The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/) can help you find the spec format for all of the objects you can create using Kubernetes.
For example, the reference for Pod details the [`spec` field](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec)
for a Pod in the API, and the reference for Deployment details the [`spec` field](/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentSpec) for Deployments.
In those API reference pages you'll see mention of PodSpec and DeploymentSpec. These names are implementation details of the Golang code that Kubernetes uses to implement its API.
## {{% heading "whatsnext" %}}


@ -62,7 +62,10 @@ Kubernetes starts with four initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
* `kube-public` This namespace is created automatically and is readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
* `kube-node-lease` This namespace for the lease objects associated with each node which improves the performance of the node heartbeats as the cluster scales.
* `kube-node-lease` This namespace holds [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/)
objects associated with each node. Node leases allow the kubelet to send
[heartbeats](/docs/concepts/architecture/nodes/#heartbeats) so that the control plane
can detect node failure.
### Setting the namespace for a request


@ -42,6 +42,24 @@ A Kubernetes admission controller controls user access to change this field for
dependent resources, based on the delete permissions of the owner. This control
prevents unauthorized users from delaying owner object deletion.
{{< note >}}
Cross-namespace owner references are disallowed by design.
Namespaced dependents can specify cluster-scoped or namespaced owners.
A namespaced owner **must** exist in the same namespace as the dependent.
If it does not, the owner reference is treated as absent, and the dependent
is subject to deletion once all owners are verified absent.
Cluster-scoped dependents can only specify cluster-scoped owners.
In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner,
it is treated as having an unresolvable owner reference, and is not able to be garbage collected.
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
You can check for that kind of Event by running
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
{{< /note >}}
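For context, an owner reference lives in the dependent object's `metadata.ownerReferences`. A sketch of what a Pod owned by a Job in the same namespace might carry (all names and the UID are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pi-zlt6c                  # hypothetical Pod created by a Job
  namespace: default              # a namespaced owner must be in this same namespace
  ownerReferences:
  - apiVersion: batch/v1
    kind: Job
    name: pi                      # hypothetical owner name
    uid: 14a03dca-37e4-4526-b19a-43a6bc9bc1d3   # illustrative UID
    controller: true
    blockOwnerDeletion: true
```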
## Ownership and finalizers
When you tell Kubernetes to delete a resource, the API server allows the


@ -47,7 +47,7 @@ functions to score the feasible Nodes and picks a Node with the highest
score among the feasible ones to run the Pod. The scheduler then notifies
the API server about this decision in a process called _binding_.
Factors that need taken into account for scheduling decisions include
Factors that need to be taken into account for scheduling decisions include
individual and collective resource requirements, hardware / software /
policy constraints, affinity and anti-affinity specifications, data
locality, inter-workload interference, and so on.


@ -92,9 +92,9 @@ shape:
``` yaml
resources:
- name: CPU
- name: cpu
weight: 1
- name: Memory
- name: memory
weight: 1
```
@ -104,9 +104,9 @@ It can be used to add extended resources as follows:
resources:
- name: intel.com/foo
weight: 5
- name: CPU
- name: cpu
weight: 3
- name: Memory
- name: memory
weight: 1
```
@ -123,16 +123,16 @@ Requested resources:
```
intel.com/foo : 2
Memory: 256MB
CPU: 2
memory: 256MB
cpu: 2
```
Resource weights:
```
intel.com/foo : 5
Memory: 1
CPU: 3
memory: 1
cpu: 3
```
FunctionShapePoint {{0, 0}, {100, 10}}
@ -142,13 +142,13 @@ Node 1 spec:
```
Available:
intel.com/foo: 4
Memory: 1 GB
CPU: 8
memory: 1 GB
cpu: 8
Used:
intel.com/foo: 1
Memory: 256MB
CPU: 1
memory: 256MB
cpu: 1
```
Node score:
@ -161,13 +161,13 @@ intel.com/foo = resourceScoringFunction((2+1),4)
= rawScoringFunction(75)
= 7 # floor(75/10)
Memory = resourceScoringFunction((256+256),1024)
memory = resourceScoringFunction((256+256),1024)
= (100 -((1024-512)*100/1024))
= 50 # requested + used = 50% * available
= rawScoringFunction(50)
= 5 # floor(50/10)
CPU = resourceScoringFunction((2+1),8)
cpu = resourceScoringFunction((2+1),8)
= (100 -((8-3)*100/8))
= 37.5 # requested + used = 37.5% * available
= rawScoringFunction(37.5)
@ -182,12 +182,12 @@ Node 2 spec:
```
Available:
intel.com/foo: 8
Memory: 1GB
CPU: 8
memory: 1GB
cpu: 8
Used:
intel.com/foo: 2
Memory: 512MB
CPU: 6
memory: 512MB
cpu: 6
```
Node score:
@ -200,13 +200,13 @@ intel.com/foo = resourceScoringFunction((2+2),8)
= rawScoringFunction(50)
= 5
Memory = resourceScoringFunction((256+512),1024)
memory = resourceScoringFunction((256+512),1024)
= (100 -((1024-768)*100/1024))
= 75
= rawScoringFunction(75)
= 7
CPU = resourceScoringFunction((2+6),8)
cpu = resourceScoringFunction((2+6),8)
= (100 -((8-8)*100/8))
= 100
= rawScoringFunction(100)


@ -142,7 +142,7 @@ By default, the Kubernetes API server serves HTTP on 2 ports:
- is intended for testing and bootstrap, and for other components of the master node
(scheduler, controller-manager) to talk to the API
- no TLS
- default is port 8080, change with `--insecure-port` flag.
- default is port 8080
- default IP is localhost, change with `--insecure-bind-address` flag.
- request **bypasses** authentication and authorization modules.
- request handled by admission control module(s).


@ -62,9 +62,9 @@ takes if a potential violation is detected:
{{< table caption="Pod Security Admission modes" >}}
Mode | Description
:---------|:------------
**`enforce`** | Policy violations will cause the pod to be rejected.
**`audit`** | Policy violations will trigger the addition of an audit annotation, but are otherwise allowed.
**`warn`** | Policy violations will trigger a user-facing warning, but are otherwise allowed.
**enforce** | Policy violations will cause the pod to be rejected.
**audit** | Policy violations will trigger the addition of an audit annotation to the event recorded in the [audit log](/docs/tasks/debug-application-cluster/audit/), but are otherwise allowed.
**warn** | Policy violations will trigger a user-facing warning, but are otherwise allowed.
{{< /table >}}
A namespace can configure any or all modes, or even set a different level for different modes.
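As a sketch, the modes are selected with namespace labels; for example, a namespace could enforce the `baseline` level while only warning about `restricted` violations (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace                              # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```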
@ -91,7 +91,7 @@ Check out [Enforce Pod Security Standards with Namespace Labels](/docs/tasks/con
## Workload resources and Pod templates
Pods are often created indirectly, by creating a [workload
object](https://kubernetes.io/docs/concepts/workloads/controllers/) such as a {{< glossary_tooltip
object](/docs/concepts/workloads/controllers/) such as a {{< glossary_tooltip
term_id="deployment" >}} or {{< glossary_tooltip term_id="job">}}. The workload object defines a
_Pod template_ and a {{< glossary_tooltip term_id="controller" text="controller" >}} for the
workload resource creates Pods based on that template. To help catch violations early, both the
@ -103,7 +103,7 @@ applied to workload resources, only to the resulting pod objects.
You can define _exemptions_ from pod security enforcement in order to allow the creation of pods that
would have otherwise been prohibited due to the policy associated with a given namespace.
Exemptions can be statically configured in the
[Admission Controller configuration](#configuring-the-admission-controller).
[Admission Controller configuration](/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller).
Exemptions must be explicitly enumerated. Requests meeting exemption criteria are _ignored_ by the
Admission Controller (all `enforce`, `audit` and `warn` behaviors are skipped). Exemption dimensions include:
@ -142,4 +142,4 @@ current policy level:
- [Enforcing Pod Security Standards](/docs/setup/best-practices/enforcing-pod-security-standards)
- [Enforce Pod Security Standards by Configuring the Built-in Admission Controller](/docs/tasks/configure-pod-container/enforce-standards-admission-controller)
- [Enforce Pod Security Standards with Namespace Labels](/docs/tasks/configure-pod-container/enforce-standards-namespace-labels)
- [Migrating from PodSecurityPolicy to PodSecurity](/docs/tasks/secure-pods/migrate-from-psp)
- [Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](/docs/tasks/configure-pod-container/migrate-from-psp)


@ -495,8 +495,13 @@ as well as other related parameters outside the Security Context. As of July 202
[Pod Security Policies](/docs/concepts/policy/pod-security-policy/) are deprecated in favor of the
built-in [Pod Security Admission Controller](/docs/concepts/security/pod-security-admission/).
{{% thirdparty-content %}}
Other alternatives for enforcing security profiles are being developed in the Kubernetes
ecosystem, such as [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper).
ecosystem, such as:
- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper).
- [Kubewarden](https://github.com/kubewarden).
- [Kyverno](https://kyverno.io/policies/pod-security/).
### What profiles should I apply to my Windows Pods?


@ -133,7 +133,7 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip
Kubernetes supports 2 primary modes of finding a Service - environment variables
and DNS. The former works out of the box while the latter requires the
[CoreDNS cluster addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns).
[CoreDNS cluster addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns).
{{< note >}}
If the service environment variables are not desired (because of possible clashes with expected program variables,
too many variables to process, only using DNS, and so on), you can disable this mode by setting the `enableServiceLinks`
@ -231,7 +231,7 @@ Till now we have only accessed the nginx server from within the cluster. Before
* An nginx server configured to use the certificates
* A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods
You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:
```shell
make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt
@ -303,7 +303,7 @@ Now modify your nginx replicas to start an https server using the certificate in
Noteworthy points about the nginx-secure-app manifest:
- It contains both Deployment and Service specification in the same file.
- The [nginx server](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/default.conf)
- The [nginx server](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/default.conf)
serves HTTP traffic on port 80 and HTTPS traffic on 443, and nginx Service
exposes both ports.
- Each container has access to the keys through a volume mounted at `/etc/nginx/ssl`.


@ -154,6 +154,7 @@ contains two elements in the `from` array, and allows connections from Pods in t
When in doubt, use `kubectl describe` to see how Kubernetes has interpreted the policy.
<a name="behavior-of-ipblock-selectors"></a>
__ipBlock__: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
Cluster ingress and egress mechanisms often require rewriting the source or destination IP
@ -251,7 +252,7 @@ spec:
endPort: 32768
```
The above rule allows any Pod with label `db` on the namespace `default` to communicate
The above rule allows any Pod with label `role=db` on the namespace `default` to communicate
with any IP within the range `10.0.0.0/24` over TCP, provided that the target
port is between the range 32000 and 32768.
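Assembled as a complete manifest, that rule might look roughly like this sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress     # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768
```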


@ -242,9 +242,25 @@ There are a few reasons for using proxying for Services:
on the DNS records could impose a high load on DNS that then becomes
difficult to manage.
Later in this page you can read about how the various kube-proxy implementations work. Overall,
you should note that, when running `kube-proxy`, kernel level rules may be
modified (for example, iptables rules might get created), which won't get cleaned up,
in some cases until you reboot. Thus, running kube-proxy is something that should
only be done by an administrator who understands the consequences of having a
low-level, privileged network proxying service on a computer. Although the `kube-proxy`
executable supports a `cleanup` function, this function is not an official feature and
thus is only available to use as-is.
### Configuration
Note that the kube-proxy starts up in different modes, which are determined by its configuration.
- The kube-proxy's configuration is done via a ConfigMap, and the ConfigMap for kube-proxy effectively deprecates the behaviour for almost all of the flags for the kube-proxy.
- The ConfigMap for the kube-proxy does not support live reloading of configuration.
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup. For example, if your operating system doesn't allow you to run iptables commands, the standard kernel kube-proxy implementation will not work. Likewise, if you have an operating system which doesn't support `netsh`, it will not run in Windows userspace mode.
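As a rough sketch, that ConfigMap typically embeds a `KubeProxyConfiguration` object; the fragment below only shows the mode selection:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"            # or "iptables"; an empty value selects the platform default
```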
### User space proxy mode {#proxy-mode-userspace}
In this mode, kube-proxy watches the Kubernetes control plane for the addition and
In this (legacy) mode, kube-proxy watches the Kubernetes control plane for the addition and
removal of Service and Endpoint objects. For each Service it opens a
port (randomly chosen) on the local node. Any connections to this "proxy port"
are proxied to one of the Service's backend Pods (as reported via
@ -429,7 +445,7 @@ variables and DNS.
When a Pod is run on a Node, the kubelet adds a set of environment variables
for each active Service. It supports both [Docker links
compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
[makeLinkVariables](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
[makeLinkVariables](https://releases.k8s.io/{{< param "fullversion" >}}/pkg/kubelet/envvars/envvars.go#L49))
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.


@ -509,7 +509,7 @@ it will become fully deprecated in a future Kubernetes release.
For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
{{< /note >}}
A PV can specify [node affinity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core) to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity.
A PV can specify node affinity to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity. To specify node affinity, set `nodeAffinity` in the `.spec` of a PV. The [PersistentVolume](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) API reference has more details on this field.
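For example, a `local` PersistentVolume pinned to a single node might declare node affinity like this sketch (the path and node name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv            # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt/disks/ssd1           # hypothetical path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1            # hypothetical node name
```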
### Phase
@ -897,16 +897,15 @@ and need persistent storage, it is recommended that you use the following patter
or the cluster has no storage system (in which case the user cannot deploy
config requiring PVCs).
## {{% heading "whatsnext" %}}
## {{% heading "whatsnext" %}}
* Learn more about [Creating a PersistentVolume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume).
* Learn more about [Creating a PersistentVolumeClaim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim).
* Read the [Persistent Storage design document](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md).
### Reference
### API references {#reference}
* [PersistentVolume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolume-v1-core)
* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core)
* [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)
Read about the APIs described in this page:
* [`PersistentVolume`](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/)
* [`PersistentVolumeClaim`](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-claim-v1/)


@ -130,7 +130,7 @@ and the kubelet, set the `InTreePluginAWSUnregister` flag to `true`.
The `azureDisk` volume type mounts a Microsoft Azure [Data Disk](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) into a pod.
For more details, see the [`azureDisk` volume plugin](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/azure_disk/README.md).
For more details, see the [`azureDisk` volume plugin](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_disk/README.md).
#### azureDisk CSI migration
@ -148,7 +148,7 @@ features must be enabled.
The `azureFile` volume type mounts a Microsoft Azure File volume (SMB 2.1 and 3.0)
into a pod.
For more details, see the [`azureFile` volume plugin](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/azure_file/README.md).
For more details, see the [`azureFile` volume plugin](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_file/README.md).
#### azureFile CSI migration
@ -176,7 +176,7 @@ writers simultaneously.
You must have your own Ceph server running with the share exported before you can use it.
{{< /note >}}
See the [CephFS example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/cephfs/) for more details.
See the [CephFS example](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/) for more details.
### cinder
@ -347,7 +347,7 @@ You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to th
beforehand so that Kubernetes hosts can access them.
{{< /note >}}
See the [fibre channel example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/fibre_channel) for more details.
See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel) for more details.
### flocker (deprecated) {#flocker}
@ -365,7 +365,7 @@ can be shared between pods as required.
You must have your own Flocker installation running before you can use it.
{{< /note >}}
See the [Flocker example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/flocker) for more details.
See the [Flocker example](https://github.com/kubernetes/examples/tree/master/staging/volumes/flocker) for more details.
### gcePersistentDisk
@ -533,7 +533,7 @@ simultaneously.
You must have your own GlusterFS installation running before you can use it.
{{< /note >}}
See the [GlusterFS example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/glusterfs) for more details.
See the [GlusterFS example](https://github.com/kubernetes/examples/tree/master/volumes/glusterfs) for more details.
### hostPath {#hostpath}
@ -661,7 +661,7 @@ and then serve it in parallel from as many Pods as you need. Unfortunately,
iSCSI volumes can only be mounted by a single consumer in read-write mode.
Simultaneous writers are not allowed.
See the [iSCSI example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/iscsi) for more details.
See the [iSCSI example](https://github.com/kubernetes/examples/tree/master/volumes/iscsi) for more details.
### local
@ -749,7 +749,7 @@ writers simultaneously.
You must have your own NFS server running with the share exported before you can use it.
{{< /note >}}
See the [NFS example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/nfs) for more details.
See the [NFS example](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs) for more details.
### persistentVolumeClaim {#persistentvolumeclaim}
@ -797,7 +797,7 @@ Make sure you have an existing PortworxVolume with name `pxvol`
before using it in the Pod.
{{< /note >}}
For more details, see the [Portworx volume](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/volumes/portworx/README.md) examples.
For more details, see the [Portworx volume](https://github.com/kubernetes/examples/tree/master/staging/volumes/portworx/README.md) examples.
### projected
@ -811,7 +811,7 @@ Currently, the following types of volume sources can be projected:
* `serviceAccountToken`
All sources are required to be in the same namespace as the Pod. For more details,
see the [all-in-one volume design document](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/node/all-in-one-volume.md).
see the [all-in-one volume design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/all-in-one-volume.md).
#### Example configuration with a secret, a downwardAPI, and a configMap {#example-configuration-secret-downwardapi-configmap}
@ -972,7 +972,7 @@ and then serve it in parallel from as many pods as you need. Unfortunately,
RBD volumes can only be mounted by a single consumer in read-write mode.
Simultaneous writers are not allowed.
See the [RBD example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/volumes/rbd)
See the [RBD example](https://github.com/kubernetes/examples/tree/master/volumes/rbd)
for more details.
### secret


@ -17,6 +17,8 @@ A _CronJob_ creates {{< glossary_tooltip term_id="job" text="Jobs" >}} on a repe
One CronJob object is like one line of a _crontab_ (cron table) file. It runs a job periodically
on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format.
In addition, the CronJob schedule supports timezone handling: you can specify the timezone by adding `CRON_TZ=<time zone>` at the beginning of the CronJob schedule, and it is recommended to always set `CRON_TZ`.
{{< caution >}}
All **CronJob** `schedule:` times are based on the timezone of the
{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}.
@ -53,15 +55,16 @@ takes you through this example in more detail).
### Cron schedule syntax
```
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
# ┌────────────────── timezone (optional)
# | ┌───────────── minute (0 - 59)
# | │ ┌───────────── hour (0 - 23)
# | │ │ ┌───────────── day of the month (1 - 31)
# | │ │ │ ┌───────────── month (1 - 12)
# | │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# | │ │ │ │ │ 7 is also Sunday on some systems)
# | │ │ │ │ │
# | │ │ │ │ │
# CRON_TZ=UTC * * * * *
```
@ -75,9 +78,9 @@ takes you through this example in more detail).
For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight:
For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight (in UTC):
`0 0 13 * 5`
`CRON_TZ=UTC 0 0 13 * 5`
To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/).
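Putting the pieces together, a minimal CronJob that runs every five minutes might look like this sketch (name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello                      # hypothetical name
spec:
  schedule: "*/5 * * * *"          # optionally prefixed with CRON_TZ=<time zone>
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            command: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure
```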


@ -632,7 +632,7 @@ of custom controller for those Pods. This allows the most flexibility, but may
complicated to get started with and offers less integration with Kubernetes.
One example of this pattern would be a Job which starts a Pod which runs a script that in turn
starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), runs a spark
starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/master/staging/spark/README.md)), runs a spark
driver, and then cleans up.
An advantage of this approach is that the overall process gets the completion guarantee of a Job


@ -39,7 +39,7 @@ that provides a set of stateless replicas.
## Limitations
* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
* The storage for a given Pod must either be provisioned by a [PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.
* StatefulSets currently require a [Headless Service](/docs/concepts/services-networking/service/#headless-services) to be responsible for the network identity of the Pods. You are responsible for creating this Service.
* StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is possible to scale the StatefulSet down to 0 prior to deletion.
@ -173,9 +173,7 @@ Cluster Domain will be set to `cluster.local` unless
### Stable Storage
Kubernetes creates one [PersistentVolume](/docs/concepts/storage/persistent-volumes/) for each
VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
with a StorageClass of `my-storage-class` and 1 Gib of provisioned storage. If no StorageClass
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that, the PersistentVolumes associated with the


@ -283,6 +283,13 @@ on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there.
{{< note >}}
The `spec` of a static Pod cannot refer to other API objects
(e.g., {{< glossary_tooltip text="ServiceAccount" term_id="service-account" >}},
{{< glossary_tooltip text="ConfigMap" term_id="configmap" >}},
{{< glossary_tooltip text="Secret" term_id="secret" >}}, etc).
{{< /note >}}
## Container probes
A _probe_ is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke different actions:


@ -32,9 +32,11 @@ If a Pod's init container fails, the kubelet repeatedly restarts that init conta
However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.
To specify an init container for a Pod, add the `initContainers` field into
the Pod specification, as an array of objects of type
[Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core),
alongside the app `containers` array.
the [Pod specification](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec),
as an array of `container` items (similar to the app `containers` field and its contents).
See [Container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container) in the
API reference for more details.
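A minimal sketch of that layout, with placeholder names and images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                  # hypothetical names throughout
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting; sleep 2; done']
  containers:
  - name: myapp
    image: registry.example/myapp:1.0
```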
The status of the init containers is returned in `.status.initContainerStatuses`
field as an array of the container statuses (similar to the `.status.containerStatuses`
field).
@ -278,9 +280,11 @@ Init containers have all of the fields of an app container. However, Kubernetes
prohibits `readinessProbe` from being used because init containers cannot
define readiness distinct from completion. This is enforced during validation.
Use `activeDeadlineSeconds` on the Pod and `livenessProbe` on the container to
prevent init containers from failing forever. The active deadline includes init
containers.
Use `activeDeadlineSeconds` on the Pod to prevent init containers from failing forever.
The active deadline includes init containers.
However, it is recommended to use `activeDeadlineSeconds` only if you deploy your application
as a Job, because `activeDeadlineSeconds` has an effect even after the init containers have finished:
a Pod that is otherwise running correctly would be killed once the deadline is exceeded.
The name of each app and init container in a Pod must be unique; a
validation error is thrown for any container sharing a name with another.


@ -91,7 +91,7 @@ will be different in your situation.
Here's an example of editing a comment in the Kubernetes source code.
In your local kubernetes/kubernetes repository, check out the master branch,
In your local kubernetes/kubernetes repository, check out the default branch,
and make sure it is up to date:
```shell
@ -100,7 +100,7 @@ git checkout master
git pull https://github.com/kubernetes/kubernetes master
```
Suppose this source file in the master branch has the typo "atmost":
Suppose this source file in that default branch has the typo "atmost":
[kubernetes/kubernetes/staging/src/k8s.io/api/apps/v1/types.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/apps/v1/types.go)
@ -183,12 +183,13 @@ In the preceding section, you edited a file in the master branch and then ran sc
to generate an OpenAPI spec and related files. Then you submitted your changes in a pull request
to the master branch of the kubernetes/kubernetes repository. Now suppose you want to backport
your change into a release branch. For example, suppose the master branch is being used to develop
Kubernetes version 1.10, and you want to backport your change into the release-1.9 branch.
Kubernetes version {{< skew latestVersion >}}, and you want to backport your change into the
release-{{< skew prevMinorVersion >}} branch.
Recall that your pull request has two commits: one for editing `types.go`
and one for the files generated by scripts. The next step is to propose a cherry pick of your first
commit into the release-1.9 branch. The idea is to cherry pick the commit that edited `types.go`, but not
the commit that has the results of running the scripts. For instructions, see
commit into the release-{{< skew prevMinorVersion >}} branch. The idea is to cherry pick the commit
that edited `types.go`, but not the commit that has the results of running the scripts. For instructions, see
[Propose a Cherry Pick](https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md).
{{< note >}}
@ -197,8 +198,9 @@ pull request. If you don't have those permissions, you will need to work with so
and milestone for you.
{{< /note >}}
When you have a pull request in place for cherry picking your one commit into the release-1.9 branch,
the next step is to run these scripts in the release-1.9 branch of your local environment.
When you have a pull request in place for cherry picking your one commit into the
release-{{< skew prevMinorVersion >}} branch, the next step is to run these scripts in the
release-{{< skew prevMinorVersion >}} branch of your local environment.
```shell
hack/update-generated-swagger-docs.sh
@ -208,14 +210,15 @@ hack/update-api-reference-docs.sh
```
Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec
and related files. Monitor your pull request until it gets merged into the release-1.9 branch.
and related files. Monitor your pull request until it gets merged into the
release-{{< skew prevMinorVersion >}} branch.
At this point, both the master branch and the release-1.9 branch have your updated `types.go`
At this point, both the master branch and the release-{{< skew prevMinorVersion >}} branch have your updated `types.go`
file and a set of generated files that reflect the change you made to `types.go`. Note that the
generated OpenAPI spec and other generated files in the release-1.9 branch are not necessarily
the same as the generated files in the master branch. The generated files in the release-1.9 branch
contain API elements only from Kubernetes 1.9. The generated files in the master branch might contain
API elements that are not in 1.9, but are under development for 1.10.
generated OpenAPI spec and other generated files in the release-{{< skew prevMinorVersion >}} branch are not necessarily
the same as the generated files in the master branch. The generated files in the release-{{< skew prevMinorVersion >}} branch
contain API elements only from Kubernetes {{< skew prevMinorVersion >}}. The generated files in the master branch might contain
API elements that are not in {{< skew prevMinorVersion >}}, but are under development for {{< skew latestVersion >}}.
## Generating the published reference docs


@ -86,12 +86,12 @@ The remaining steps refer to your base directory as `<rdocs-base>`.
In your local k8s.io/kubernetes repository, check out the branch of interest,
and make sure it is up to date. For example, if you want to generate docs for
Kubernetes 1.17, you could use these commands:
Kubernetes {{< skew prevMinorVersion >}}.0, you could use these commands:
```shell
cd <k8s-base>
git checkout v1.17.0
git pull https://github.com/kubernetes/kubernetes v1.17.0
git checkout v{{< skew prevMinorVersion >}}.0
git pull https://github.com/kubernetes/kubernetes v{{< skew prevMinorVersion >}}.0
```
If you do not need to edit the `kubectl` source code, follow the instructions for
@ -109,7 +109,7 @@ local kubernetes/kubernetes repository, and then submit a pull request to the ma
is an example of a pull request that fixes a typo in the kubectl source code.
Monitor your pull request, and respond to reviewer comments. Continue to monitor your
pull request until it is merged into the master branch of the kubernetes/kubernetes repository.
pull request until it is merged into the target branch of the kubernetes/kubernetes repository.
## Cherry picking your change into a release branch
@ -118,9 +118,10 @@ Kubernetes release. If you want your change to appear in the docs for a Kubernet
version that has already been released, you need to propose that your change be
cherry picked into the release branch.
For example, suppose the master branch is being used to develop Kubernetes 1.10,
and you want to backport your change to the release-1.15 branch. For instructions
on how to do this, see
For example, suppose the master branch is being used to develop Kubernetes
{{< skew currentVersion >}}
and you want to backport your change to the release-{{< skew prevMinorVersion >}} branch. For
instructions on how to do this, see
[Propose a Cherry Pick](https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md).
Monitor your cherry-pick pull request until it is merged into the release branch.
@ -138,14 +139,14 @@ Go to `<rdocs-base>`. On you command line, set the following environment variabl
* Set `K8S_ROOT` to `<k8s-base>`.
* Set `K8S_WEBROOT` to `<web-base>`.
* Set `K8S_RELEASE` to the version of the docs you want to build.
For example, if you want to build docs for Kubernetes 1.17, set `K8S_RELEASE` to 1.17.
For example, if you want to build docs for Kubernetes {{< skew prevMinorVersion >}}, set `K8S_RELEASE` to {{< skew prevMinorVersion >}}.
For example:
```shell
export K8S_WEBROOT=$GOPATH/src/github.com/<your-username>/website
export K8S_ROOT=$GOPATH/src/k8s.io/kubernetes
export K8S_RELEASE=1.17
export K8S_RELEASE={{< skew prevMinorVersion >}}
```
## Creating a versioned directory
@ -165,13 +166,14 @@ make createversiondirs
In your local `<k8s-base>` repository, checkout the branch that has
the version of Kubernetes that you want to document. For example, if you want
to generate docs for Kubernetes 1.17, checkout the `v1.17.0` tag. Make sure
to generate docs for Kubernetes {{< skew prevMinorVersion >}}.0, check out the
`v{{< skew prevMinorVersion >}}.0` tag. Make sure
your local branch is up to date.
```shell
cd <k8s-base>
git checkout v1.17.0
git pull https://github.com/kubernetes/kubernetes v1.17.0
git checkout v{{< skew prevMinorVersion >}}.0
git pull https://github.com/kubernetes/kubernetes v{{< skew prevMinorVersion >}}.0
```
## Running the doc generation code


@ -308,6 +308,12 @@ Localizing site strings lets you customize site-wide text and features: for exam
Some language teams have their own language-specific style guide and glossary. For example, see the [Korean Localization Guide](/ko/docs/contribute/localization_ko/).
### Language specific Zoom meetings
If the localization project needs a separate meeting time, contact a SIG Docs Co-Chair or Tech Lead to create a new recurring Zoom meeting and calendar invite. This is only needed when the team is large enough to sustain and require a separate meeting.
Per CNCF policy, the localization teams must upload their meetings to the SIG Docs YouTube playlist. A SIG Docs Co-Chair or Tech Lead can help with the process until SIG Docs automates it.
## Branching strategy
Because localization projects are highly collaborative efforts, we


@ -22,9 +22,9 @@ overview: >
Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (<a href="https://www.cncf.io/about">CNCF</a>).
cards:
- name: concepts
title: "Understand the basics"
title: "Understand Kubernetes"
description: "Learn about Kubernetes and its fundamental concepts."
button: "Learn Concepts"
button: "View Concepts"
button_path: "/docs/concepts"
- name: tutorials
title: "Try Kubernetes"


@ -127,7 +127,7 @@ up the verbosity:
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}}
```
[Complete file example](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/auth/authorizer/abac/example_policy_file.jsonl)
[Complete file example](https://releases.k8s.io/{{< param "fullversion" >}}/pkg/auth/authorizer/abac/example_policy_file.jsonl)
## A quick note on service accounts


@ -70,7 +70,7 @@ controller on the controller manager.
Each valid token is backed by a secret in the `kube-system` namespace. You can
find the full design doc
[here](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md).
[here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md).
Here is what the secret looks like.
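As a sketch, a bootstrap token Secret has roughly the following shape (the token ID, token secret, and expiration below are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # the name must be of the form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b          # illustrative token ID
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "Example bootstrap token"
  token-id: "07401b"                    # illustrative
  token-secret: "f395accd246ae52d"      # illustrative
  expiration: "2022-03-10T03:22:11Z"    # illustrative
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```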


@ -279,8 +279,10 @@ rules:
```
{{< note >}}
You cannot restrict `create` or `deletecollection` requests by resourceName. For `create`, this
limitation is because the object name is not known at authorization time.
You cannot restrict `create` or `deletecollection` requests by their resource name.
For `create`, this limitation is because the name of the new object may not be known at authorization time.
If you restrict `list` or `watch` by resourceName, clients must include a `metadata.name` field selector in their `list` or `watch` request that matches the specified resourceName in order to be authorized.
For example, `kubectl get configmaps --field-selector=metadata.name=my-configmap`
{{< /note >}}
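To make the restriction concrete, a Role that only allows reading that single ConfigMap might look like this sketch (namespace and names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-reader          # hypothetical Role name
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["get", "list"]          # "create" cannot be limited by resourceNames
```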
@ -683,12 +685,13 @@ When used in a <b>RoleBinding</b>, it gives full control over every resource in
<td><b>admin</b></td>
<td>None</td>
<td>Allows admin access, intended to be granted within a namespace using a <b>RoleBinding</b>.
If used in a <b>RoleBinding</b>, allows read/write access to most resources in a namespace,
including the ability to create roles and role bindings within the namespace.
This role does not allow write access to resource quota or to the namespace itself.
This role also does not allow write access to Endpoints in clusters created
using Kubernetes v1.22+. More information is available in the ["Write Access for
Endpoints" section](#write-access-for-endpoints).</td>
using Kubernetes v1.22+. More information is available in the
["Write Access for Endpoints" section](#write-access-for-endpoints).</td>
</tr>
<tr>
<td><b>edit</b></td>


@ -172,5 +172,5 @@ Access to other non-resource paths can be disallowed without restricting access
to the REST api.
For further documentation refer to the authorization.v1beta1 API objects and
[webhook.go](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go).
[webhook.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go).


@ -165,8 +165,8 @@ different Kubernetes components.
| `PreferNominatedNode` | `true` | Beta | 1.22 | |
| `ProbeTerminationGracePeriod` | `false` | Alpha | 1.21 | 1.21 |
| `ProbeTerminationGracePeriod` | `false` | Beta | 1.22 | |
| `ProxyTerminatingEndpoints` | `false` | Alpha | 1.22 | |
| `ProcMountType` | `false` | Alpha | 1.12 | |
| `ProxyTerminatingEndpoints` | `false` | Alpha | 1.22 | |
| `QOSReserved` | `false` | Alpha | 1.11 | |
| `ReadWriteOncePod` | `false` | Alpha | 1.22 | |
| `RemainingItemCount` | `false` | Alpha | 1.15 | 1.15 |
@ -789,10 +789,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
and volume controllers.
- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
controller to manage Pod completions per completion index.
- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job)
completions without relying on Pods remaining in the cluster indefinitely.
The Job controller uses Pod finalizers and a field in the Job status to keep
track of the finished Pods to count towards completion.
- `IngressClassNamespacedParams`: Allow namespace-scoped parameters reference in
`IngressClass` resource. This feature adds two fields - `Scope` and `Namespace`
to `IngressClass.spec.parameters`.
@ -800,10 +796,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
Initializers admission plugin.
- `IPv6DualStack`: Enable [dual stack](/docs/concepts/services-networking/dual-stack/)
support for IPv6.
- `JobTrackingWithFinalizers`: Enables the tracking of Job completion without
relying on Pods remaining in the cluster indefinitely. Pod finalizers, in
addition to a field in the Job status, allow the Job controller to track
Pods that it didn't account for yet.
- `JobTrackingWithFinalizers`: Enables tracking [Job](/docs/concepts/workloads/controllers/job)
completions without relying on Pods remaining in the cluster indefinitely.
The Job controller uses Pod finalizers and a field in the Job status to keep
track of the finished Pods to count towards completion.
- `KubeletConfigFile`: Enable loading kubelet configuration from
a file specified using a config file.
See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/)
@ -1012,18 +1008,16 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `WatchBookmark`: Enable support for watch bookmark events.
- `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows.
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
- `WindowsEndpointSliceProxying`: When enabled, kube-proxy running on Windows
will use EndpointSlices as the primary data source instead of Endpoints,
enabling scalability and performance improvements. See
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `WindowsGMSA`: Enables passing of GMSA credential specs from pods to container runtimes.
- `WindowsHostProcessContainers`: Enables support for Windows HostProcess containers.
- `WindowsRunAsUserName` : Enable support for running applications in Windows containers
as a non-default user. See
[Configuring RunAsUserName](/docs/tasks/configure-pod-container/configure-runasusername)
for more details.
- `WindowsEndpointSliceProxying`: When enabled, kube-proxy running on Windows
will use EndpointSlices as the primary data source instead of Endpoints,
enabling scalability and performance improvements. See
[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `WindowsHostProcessContainers`: Enables the support for `HostProcess`
containers on Windows nodes.
## {{% heading "whatsnext" %}}


@ -18,7 +18,7 @@ The normal process of bootstrapping these components, especially worker nodes th
can be a challenging process as it is often outside of the scope of Kubernetes and requires significant additional work.
This, in turn, can make it challenging to initialize or scale a cluster.
In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request and signing API to simplify the process. The proposal can be
In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request and signing API. The proposal can be
found [here](https://github.com/kubernetes/kubernetes/pull/20439).
This document describes the process of node initialization, how to set up TLS client certificate bootstrapping for


@ -81,7 +81,7 @@ For non-resource requests, this is the lower-cased HTTP method.</td>
<tr><td><code>user</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
Authenticated user information.</td>
@ -89,7 +89,7 @@ For non-resource requests, this is the lower-cased HTTP method.</td>
<tr><td><code>impersonatedUser</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
Impersonated user information.</td>
@ -123,7 +123,7 @@ Does not apply for List-type requests, or non-resource requests.</td>
<tr><td><code>responseStatus</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#status-v1-meta"><code>meta/v1.Status</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#status-v1-meta"><code>meta/v1.Status</code></a>
</td>
<td>
The response status, populated even when the ResponseObject is not a Status type.
@ -154,7 +154,7 @@ at Response Level.</td>
<tr><td><code>requestReceivedTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
Time the request reached the apiserver.</td>
@ -162,7 +162,7 @@ at Response Level.</td>
<tr><td><code>stageTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
Time the request reached current audit stage.</td>
@ -206,7 +206,7 @@ EventList is a list of audit Events.
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
@ -252,7 +252,7 @@ categories are logged.
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
</td>
<td>
ObjectMeta is included for interoperability with API infrastructure. Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field.</td>
@ -303,7 +303,7 @@ PolicyList is a list of audit Policies.
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
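The `Policy` and `Event` types documented above are what an audit policy file is built from. The snippet below is a minimal, illustrative policy only; the rule levels and the choice of resources are assumptions made for the sake of the example.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record Pod requests together with request and response bodies.
  - level: RequestResponse
    resources:
    - group: ""              # the core API group
      resources: ["pods"]
  # Record everything else at the Metadata level.
  - level: Metadata
```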

View File

@ -187,6 +187,14 @@ ExecConfig.ProvideClusterInfo).</td>
</tr>
<tr><td><code>interactive</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
Interactive declares whether stdin has been passed to this exec plugin.</td>
</tr>
</tbody>
</table>
@ -215,7 +223,7 @@ itself should at least be protected via file permissions.
<tr><td><code>expirationTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta"><code>meta/v1.Time</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta"><code>meta/v1.Time</code></a>
</td>
<td>
ExpirationTimestamp indicates a time when the provided credentials expire.</td>
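To make the `expirationTimestamp` field above concrete, the sketch below shows the kind of `ExecCredential` status an exec plugin might print on stdout; the token value and timestamp are placeholders.

```yaml
# Sketch of the ExecCredential an exec plugin returns; values are placeholders.
apiVersion: client.authentication.k8s.io/v1beta1
kind: ExecCredential
status:
  token: my-bearer-token                        # placeholder credential
  expirationTimestamp: "2021-09-09T20:00:00Z"   # client refreshes after this time
```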

View File

@ -546,6 +546,10 @@ this always falls back to the userspace proxy.
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
ClientConnectionConfiguration contains details for constructing a client.
@ -597,5 +601,180 @@ client.</td>
</tr>
</tbody>
</table>
## `DebuggingConfiguration` {#DebuggingConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
DebuggingConfiguration holds configuration for Debugging related features.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>enableProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableProfiling enables profiling via web interface host:port/debug/pprof/</td>
</tr>
<tr><td><code>enableContentionProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableContentionProfiling enables lock contention profiling, if
enableProfiling is true.</td>
</tr>
</tbody>
</table>
## `LeaderElectionConfiguration` {#LeaderElectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
LeaderElectionConfiguration defines the configuration of leader election
clients for components that can run with leader election enabled.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>leaderElect</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
leaderElect enables a leader election client to gain leadership
before executing the main loop. Enable this when running replicated
components for high availability.</td>
</tr>
<tr><td><code>leaseDuration</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
leaseDuration is the duration that non-leader candidates will wait
after observing a leadership renewal until attempting to acquire
leadership of a led but unrenewed leader slot. This is effectively the
maximum duration that a leader can be stopped before it is replaced
by another candidate. This is only applicable if leader election is
enabled.</td>
</tr>
<tr><td><code>renewDeadline</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
renewDeadline is the interval between attempts by the acting master to
renew a leadership slot before it stops leading. This must be less
than or equal to the lease duration. This is only applicable if leader
election is enabled.</td>
</tr>
<tr><td><code>retryPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
retryPeriod is the duration the clients should wait between attempting
acquisition and renewal of a leadership. This is only applicable if
leader election is enabled.</td>
</tr>
<tr><td><code>resourceLock</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceLock indicates the resource object type that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceName indicates the name of resource object that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceNamespace</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceNamespace indicates the namespace of the resource object that will be used to lock
during leader election cycles.</td>
</tr>
</tbody>
</table>
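For a concrete picture of how these fields are consumed, the sketch below embeds a `LeaderElectionConfiguration` in a KubeSchedulerConfiguration file; the durations shown are the commonly used defaults rather than recommendations.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
  leaseDuration: 15s          # commonly the default
  renewDeadline: 10s          # must be less than or equal to leaseDuration
  retryPeriod: 2s
  resourceLock: leases
  resourceName: kube-scheduler
  resourceNamespace: kube-system
```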
## `LoggingConfiguration` {#LoggingConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
LoggingConfiguration contains logging options.
Refer to [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Format flag specifies the structure of log messages.
The default value of format is `text`.</td>
</tr>
<tr><td><code>sanitization</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.</td>
</tr>
</tbody>
</table>
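As an illustration, a `LoggingConfiguration` typically surfaces as the `logging` stanza of a KubeletConfiguration file; the sketch below assumes you want structured JSON output.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json    # structured logs; omit or use "text" for the default format
```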

View File

@ -13,16 +13,250 @@ auto_generated: true
- [InterPodAffinityArgs](#kubescheduler-config-k8s-io-v1beta2-InterPodAffinityArgs)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [NodeAffinityArgs](#kubescheduler-config-k8s-io-v1beta2-NodeAffinityArgs)
- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesBalancedAllocationArgs)
- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesFitArgs)
- [NodeResourcesLeastAllocatedArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesLeastAllocatedArgs)
- [NodeResourcesMostAllocatedArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesMostAllocatedArgs)
- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadArgs)
- [RequestedToCapacityRatioArgs](#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioArgs)
- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs)
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
ClientConnectionConfiguration contains details for constructing a client.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>kubeconfig</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
kubeconfig is the path to a KubeConfig file.</td>
</tr>
<tr><td><code>acceptContentTypes</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
default value of 'application/json'. This field will control all connections to the server used by a particular
client.</td>
</tr>
<tr><td><code>contentType</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
contentType is the content type used when sending data to the server from this client.</td>
</tr>
<tr><td><code>qps</code> <B>[Required]</B><br/>
<code>float32</code>
</td>
<td>
qps controls the number of queries per second allowed for this connection.</td>
</tr>
<tr><td><code>burst</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
burst allows extra queries to accumulate when a client is exceeding its rate.</td>
</tr>
</tbody>
</table>
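As a rough sketch, these client connection fields appear under `clientConnection` in a KubeSchedulerConfiguration file; the kubeconfig path and rate-limit values below are illustrative, not recommendations.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf       # illustrative path
  contentType: application/vnd.kubernetes.protobuf
  qps: 50
  burst: 100
```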
## `DebuggingConfiguration` {#DebuggingConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
DebuggingConfiguration holds configuration for Debugging related features.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>enableProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableProfiling enables profiling via web interface host:port/debug/pprof/</td>
</tr>
<tr><td><code>enableContentionProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableContentionProfiling enables lock contention profiling, if
enableProfiling is true.</td>
</tr>
</tbody>
</table>
## `LeaderElectionConfiguration` {#LeaderElectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
LeaderElectionConfiguration defines the configuration of leader election
clients for components that can run with leader election enabled.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>leaderElect</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
leaderElect enables a leader election client to gain leadership
before executing the main loop. Enable this when running replicated
components for high availability.</td>
</tr>
<tr><td><code>leaseDuration</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
leaseDuration is the duration that non-leader candidates will wait
after observing a leadership renewal until attempting to acquire
leadership of a led but unrenewed leader slot. This is effectively the
maximum duration that a leader can be stopped before it is replaced
by another candidate. This is only applicable if leader election is
enabled.</td>
</tr>
<tr><td><code>renewDeadline</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
renewDeadline is the interval between attempts by the acting master to
renew a leadership slot before it stops leading. This must be less
than or equal to the lease duration. This is only applicable if leader
election is enabled.</td>
</tr>
<tr><td><code>retryPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
retryPeriod is the duration the clients should wait between attempting
acquisition and renewal of a leadership. This is only applicable if
leader election is enabled.</td>
</tr>
<tr><td><code>resourceLock</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceLock indicates the resource object type that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceName indicates the name of resource object that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceNamespace</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceNamespace indicates the namespace of the resource object that will be used to lock
during leader election cycles.</td>
</tr>
</tbody>
</table>
## `LoggingConfiguration` {#LoggingConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
LoggingConfiguration contains logging options.
Refer to [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Format flag specifies the structure of log messages.
The default value of format is `text`.</td>
</tr>
<tr><td><code>sanitization</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.</td>
</tr>
</tbody>
</table>
## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta2-DefaultPreemptionArgs}
@ -254,7 +488,7 @@ NodeAffinityArgs holds arguments to configure the NodeAffinity plugin.
<tr><td><code>addedAffinity</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
</td>
<td>
AddedAffinity is applied to all Pods additionally to the NodeAffinity
@ -271,6 +505,37 @@ a specific Node (such as Daemonset Pods) might remain unschedulable.</td>
## `NodeResourcesBalancedAllocationArgs` {#kubescheduler-config-k8s-io-v1beta2-NodeResourcesBalancedAllocationArgs}
NodeResourcesBalancedAllocationArgs holds arguments used to configure NodeResourcesBalancedAllocation plugin.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubescheduler.config.k8s.io/v1beta2</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>NodeResourcesBalancedAllocationArgs</code></td></tr>
<tr><td><code>resources</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-ResourceSpec"><code>[]ResourceSpec</code></a>
</td>
<td>
Resources to be managed; the default is "cpu" and "memory" if not specified.</td>
</tr>
</tbody>
</table>
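A hedged example of how these arguments are passed in practice, via a profile's `pluginConfig` entry; the resource list simply restates the documented defaults.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesBalancedAllocation
    args:
      resources:           # same as the defaults; listed here for illustration
      - name: cpu
      - name: memory
```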
## `NodeResourcesFitArgs` {#kubescheduler-config-k8s-io-v1beta2-NodeResourcesFitArgs}
@ -294,7 +559,7 @@ NodeResourcesFitArgs holds arguments used to configure the NodeResourcesFit plug
</td>
<td>
IgnoredResources is the list of resources that NodeResources fit filter
should ignore.</td>
should ignore. This doesn't apply to scoring.</td>
</tr>
@ -305,73 +570,16 @@ should ignore.</td>
IgnoredResourceGroups defines the list of resource groups that NodeResources fit filter should ignore.
e.g. if group is ["example.com"], it will ignore all resource names that begin
with "example.com", such as "example.com/aaa" and "example.com/bbb".
A resource group name can't contain '/'.</td>
A resource group name can't contain '/'. This doesn't apply to scoring.</td>
</tr>
</tbody>
</table>
## `NodeResourcesLeastAllocatedArgs` {#kubescheduler-config-k8s-io-v1beta2-NodeResourcesLeastAllocatedArgs}
NodeResourcesLeastAllocatedArgs holds arguments used to configure NodeResourcesLeastAllocated plugin.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubescheduler.config.k8s.io/v1beta2</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>NodeResourcesLeastAllocatedArgs</code></td></tr>
<tr><td><code>resources</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-ResourceSpec"><code>[]ResourceSpec</code></a>
<tr><td><code>scoringStrategy</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy"><code>ScoringStrategy</code></a>
</td>
<td>
Resources to be managed, if no resource is provided, default resource set with both
the weight of "cpu" and "memory" set to "1" will be applied.
Resource with "0" weight will not accountable for the final score.</td>
</tr>
</tbody>
</table>
## `NodeResourcesMostAllocatedArgs` {#kubescheduler-config-k8s-io-v1beta2-NodeResourcesMostAllocatedArgs}
NodeResourcesMostAllocatedArgs holds arguments used to configure NodeResourcesMostAllocated plugin.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubescheduler.config.k8s.io/v1beta2</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>NodeResourcesMostAllocatedArgs</code></td></tr>
<tr><td><code>resources</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-ResourceSpec"><code>[]ResourceSpec</code></a>
</td>
<td>
Resources to be managed, if no resource is provided, default resource set with both
the weight of "cpu" and "memory" set to "1" will be applied.
Resource with "0" weight will not accountable for the final score.</td>
ScoringStrategy selects the node resource scoring strategy.
The default strategy is LeastAllocated with an equal "cpu" and "memory" weight.</td>
</tr>
@ -399,7 +607,7 @@ PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread pl
<tr><td><code>defaultConstraints</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
</td>
<td>
DefaultConstraints defines topology spread constraints to be applied to
@ -432,45 +640,6 @@ and to "System" if enabled.</td>
## `RequestedToCapacityRatioArgs` {#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioArgs}
RequestedToCapacityRatioArgs holds arguments used to configure RequestedToCapacityRatio plugin.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubescheduler.config.k8s.io/v1beta2</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>RequestedToCapacityRatioArgs</code></td></tr>
<tr><td><code>shape</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-UtilizationShapePoint"><code>[]UtilizationShapePoint</code></a>
</td>
<td>
Points defining priority function shape</td>
</tr>
<tr><td><code>resources</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-ResourceSpec"><code>[]ResourceSpec</code></a>
</td>
<td>
Resources to be managed</td>
</tr>
</tbody>
</table>
## `VolumeBindingArgs` {#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs}
@ -499,6 +668,24 @@ If this value is nil, the default value (600) will be used.</td>
</tr>
<tr><td><code>shape</code><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-UtilizationShapePoint"><code>[]UtilizationShapePoint</code></a>
</td>
<td>
Shape specifies the points defining the score function shape, which is
used to score nodes based on the utilization of statically provisioned
PVs. The utilization is calculated by dividing the total requested
storage of the pod by the total capacity of feasible PVs on each node.
Each point contains utilization (ranges from 0 to 100) and its
associated score (ranges from 0 to 10). You can turn the priority by
specifying different scores for different utilization numbers.
The default shape points are:
1) 0 for 0 utilization
2) 10 for 100 utilization
All points must be sorted in increasing order by utilization.</td>
</tr>
</tbody>
</table>
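A sketch of a custom shape passed to the VolumeBinding plugin through `pluginConfig`; the points shown restate the documented default shape, and the example assumes a cluster where this shape-based scoring is available (it is feature-gated at the time of writing).

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: VolumeBinding
    args:
      bindTimeoutSeconds: 600
      shape:                    # matches the documented default shape
      - utilization: 0
        score: 0
      - utilization: 100
        score: 10
```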
@ -800,6 +987,8 @@ If an array is empty, missing, or nil, default plugins at that extension point w
</td>
<td>
Enabled specifies plugins that should be enabled in addition to default plugins.
If a default plugin is also configured in the scheduler config file, the weight of that plugin will
be overridden accordingly.
These are called after default plugins and in the same order specified here.</td>
</tr>
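A minimal sketch of the situation described above: re-enabling a default score plugin with an explicit weight so that the configured weight takes precedence.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation
        weight: 5              # overrides this plugin's default weight
```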
@ -952,6 +1141,37 @@ for the PodTopologySpread plugin.
## `RequestedToCapacityRatioParam` {#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam}
**Appears in:**
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy)
RequestedToCapacityRatioParam defines RequestedToCapacityRatio parameters.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>shape</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-UtilizationShapePoint"><code>[]UtilizationShapePoint</code></a>
</td>
<td>
Shape is a list of points defining the scoring function shape.</td>
</tr>
</tbody>
</table>
## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta2-ResourceSpec}
@ -959,14 +1179,12 @@ for the PodTopologySpread plugin.
**Appears in:**
- [NodeResourcesLeastAllocatedArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesLeastAllocatedArgs)
- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesBalancedAllocationArgs)
- [NodeResourcesMostAllocatedArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesMostAllocatedArgs)
- [RequestedToCapacityRatioArgs](#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioArgs)
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy)
ResourceSpec represents single resource and weight for bin packing of priority RequestedToCapacityRatioArguments.
ResourceSpec represents a single resource.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
@ -978,7 +1196,7 @@ ResourceSpec represents single resource and weight for bin packing of priority R
<code>string</code>
</td>
<td>
Name of the resource to be managed by RequestedToCapacityRatio function.</td>
Name of the resource.</td>
</tr>
@ -995,6 +1213,72 @@ ResourceSpec represents single resource and weight for bin packing of priority R
## `ScoringStrategy` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy}
**Appears in:**
- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesFitArgs)
ScoringStrategy defines the ScoringStrategyType for the node resource plugin.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>type</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-ScoringStrategyType"><code>ScoringStrategyType</code></a>
</td>
<td>
Type selects which strategy to run.</td>
</tr>
<tr><td><code>resources</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-ResourceSpec"><code>[]ResourceSpec</code></a>
</td>
<td>
Resources to consider when scoring.
The default resource set includes "cpu" and "memory" with an equal weight.
Allowed weights go from 1 to 100.
Weight defaults to 1 if not specified or explicitly set to 0.</td>
</tr>
<tr><td><code>requestedToCapacityRatio</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam"><code>RequestedToCapacityRatioParam</code></a>
</td>
<td>
Arguments specific to RequestedToCapacityRatio strategy.</td>
</tr>
</tbody>
</table>
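Putting the fields above together, a hedged example of a NodeResourcesFit configuration that selects the RequestedToCapacityRatio strategy; the weights and shape points are illustrative.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: RequestedToCapacityRatio
        resources:                     # illustrative weights
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
        requestedToCapacityRatio:
          shape:                       # illustrative shape points
          - utilization: 0
            score: 0
          - utilization: 100
            score: 10
```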
## `ScoringStrategyType` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategyType}
(Alias of `string`)
**Appears in:**
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy)
ScoringStrategyType is the type of scoring strategy used in the NodeResourcesFit plugin.
## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta2-UtilizationShapePoint}
@ -1002,7 +1286,9 @@ ResourceSpec represents single resource and weight for bin packing of priority R
**Appears in:**
- [RequestedToCapacityRatioArgs](#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioArgs)
- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs)
- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam)
UtilizationShapePoint represents a single point of the priority function shape.
@ -1820,199 +2106,3 @@ UtilizationShapePoint represents single point of priority function shape.
</table>
## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
ClientConnectionConfiguration contains details for constructing a client.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>kubeconfig</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
kubeconfig is the path to a KubeConfig file.</td>
</tr>
<tr><td><code>acceptContentTypes</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
default value of 'application/json'. This field will control all connections to the server used by a particular
client.</td>
</tr>
<tr><td><code>contentType</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
contentType is the content type used when sending data to the server from this client.</td>
</tr>
<tr><td><code>qps</code> <B>[Required]</B><br/>
<code>float32</code>
</td>
<td>
qps controls the number of queries per second allowed for this connection.</td>
</tr>
<tr><td><code>burst</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
burst allows extra queries to accumulate when a client is exceeding its rate.</td>
</tr>
</tbody>
</table>
## `DebuggingConfiguration` {#DebuggingConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
DebuggingConfiguration holds configuration for Debugging related features.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>enableProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableProfiling enables profiling via web interface host:port/debug/pprof/</td>
</tr>
<tr><td><code>enableContentionProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableContentionProfiling enables lock contention profiling, if
enableProfiling is true.</td>
</tr>
</tbody>
</table>
## `LeaderElectionConfiguration` {#LeaderElectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
LeaderElectionConfiguration defines the configuration of leader election
clients for components that can run with leader election enabled.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>leaderElect</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
leaderElect enables a leader election client to gain leadership
before executing the main loop. Enable this when running replicated
components for high availability.</td>
</tr>
<tr><td><code>leaseDuration</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
leaseDuration is the duration that non-leader candidates will wait
after observing a leadership renewal until attempting to acquire
leadership of a led but unrenewed leader slot. This is effectively the
maximum duration that a leader can be stopped before it is replaced
by another candidate. This is only applicable if leader election is
enabled.</td>
</tr>
<tr><td><code>renewDeadline</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
renewDeadline is the interval between attempts by the acting master to
renew a leadership slot before it stops leading. This must be less
than or equal to the lease duration. This is only applicable if leader
election is enabled.</td>
</tr>
<tr><td><code>retryPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
retryPeriod is the duration the clients should wait between attempting
acquisition and renewal of a leadership. This is only applicable if
leader election is enabled.</td>
</tr>
<tr><td><code>resourceLock</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceLock indicates the resource object type that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceName indicates the name of resource object that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceNamespace</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceName indicates the namespace of resource object that will be used to lock
during leader election cycles.</td>
</tr>
</tbody>
</table>

View File

@ -89,7 +89,7 @@ of the predicates after it finds one predicate that failed.</td>
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender)
- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender)
- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender)
@ -132,7 +132,7 @@ resource when applying predicates.</td>
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender)
- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender)
- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender)

View File

@ -1413,9 +1413,15 @@ first alpha-numerically.</td>
</tbody>
</table>
## `BootstrapToken` {#BootstrapToken}
**Appears in:**
- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)

View File

@ -0,0 +1,18 @@
---
title: Eviction
id: eviction
date: 2021-05-08
full_link: /docs/concepts/scheduling-eviction/
short_description: >
Process of terminating one or more Pods on Nodes
aka:
tags:
- operation
---
Eviction is the process of terminating one or more Pods on Nodes.
<!--more-->
There are two kinds of eviction:
* [Node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
* [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)
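For API-initiated eviction, the request body is an `Eviction` object submitted to the Pod's `eviction` subresource. A minimal sketch, with placeholder Pod name and namespace:

```yaml
apiVersion: policy/v1
kind: Eviction
metadata:
  name: my-pod          # placeholder Pod name
  namespace: default    # placeholder namespace
```

The API server honours any applicable PodDisruptionBudget before allowing the eviction.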

View File

@ -29,7 +29,7 @@ To make a report, submit your vulnerability to the [Kubernetes bug bounty progra
You can also email the private [security@kubernetes.io](mailto:security@kubernetes.io) list with the security details and the details expected for [all Kubernetes bug reports](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md).
You may encrypt your email to this list using the GPG keys of the [Product Security Committee members](https://git.k8s.io/security/README.md#product-security-committee-psc). Encryption using GPG is NOT required to make a disclosure.
You may encrypt your email to this list using the GPG keys of the [Security Response Committee members](https://git.k8s.io/security/README.md#product-security-committee-psc). Encryption using GPG is NOT required to make a disclosure.
### When Should I Report a Vulnerability?
@ -47,13 +47,13 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur
## Security Vulnerability Response
Each report is acknowledged and analyzed by Product Security Committee members within 3 working days. This will set off the [Security Release Process](https://git.k8s.io/security/security-release-process.md#disclosures).
Each report is acknowledged and analyzed by Security Response Committee members within 3 working days. This will set off the [Security Release Process](https://git.k8s.io/security/security-release-process.md#disclosures).
Any vulnerability information shared with Product Security Committee stays within Kubernetes project and will not be disseminated to other projects unless it is necessary to get the issue fixed.
Any vulnerability information shared with the Security Response Committee stays within the Kubernetes project and will not be disseminated to other projects unless it is necessary to get the issue fixed.
As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated.
## Public Disclosure Timing
A public disclosure date is negotiated by the Kubernetes Product Security Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Product Security Committee holds the final say when setting a disclosure date.
A public disclosure date is negotiated by the Kubernetes Security Response Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Security Response Committee holds the final say when setting a disclosure date.

View File

@ -71,6 +71,32 @@ Flags that you specify from the command line override default values and any cor
If you need help, run `kubectl help` from the terminal window.
## In-cluster authentication and namespace overrides
By default, `kubectl` first determines whether it is running within a Pod, and thus in a cluster. It starts by checking for the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment variables and the existence of a service account token file at `/var/run/secrets/kubernetes.io/serviceaccount/token`. If all three are found, in-cluster authentication is assumed.
To maintain backwards compatibility, if the `POD_NAMESPACE` environment variable is set during in-cluster authentication it will override the default namespace from the service account token. Any manifests or tools relying on namespace defaulting will be affected by this.
**`POD_NAMESPACE` environment variable**
If the `POD_NAMESPACE` environment variable is set, CLI operations on namespaced resources will default to the variable value. For example, if the variable is set to `seattle`, `kubectl get pods` would return pods in the `seattle` namespace. This is because pods are a namespaced resource, and no namespace was provided in the command. Review the output of `kubectl api-resources` to determine if a resource is namespaced.
Explicit use of `--namespace <value>` overrides this behavior.
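One common way this variable gets set is through the downward API in the Pod spec that runs kubectl; a minimal sketch, with an illustrative Pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-runner                  # illustrative name
spec:
  containers:
  - name: tools
    image: example.com/tools:latest     # illustrative image containing kubectl
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace   # downward API: the Pod's own namespace
```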
**How kubectl handles ServiceAccount tokens**
If:
* there is a Kubernetes service account token file mounted at
`/var/run/secrets/kubernetes.io/serviceaccount/token`, and
* the `KUBERNETES_SERVICE_HOST` environment variable is set, and
* the `KUBERNETES_SERVICE_PORT` environment variable is set, and
* you don't explicitly specify a namespace on the kubectl command line
then kubectl assumes it is running in your cluster. The kubectl tool looks up the
namespace of that ServiceAccount (this is the same as the namespace of the Pod)
and acts against that namespace. This is different from what happens outside of a
cluster; when kubectl runs outside a cluster and you don't specify a namespace,
the kubectl command acts against the `default` namespace.
## Operations
The following table includes short descriptions and the general syntax for all of the `kubectl` operations:

View File

@ -15,7 +15,7 @@ file and passing its path as a command line argument.
A scheduling Profile allows you to configure the different stages of scheduling
in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
Each stage is exposed in a extension point. Plugins provide scheduling behaviors
Each stage is exposed in an extension point. Plugins provide scheduling behaviors
by implementing one or more of these extension points.
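As a rough illustration, a single configuration file can carry several profiles, each adjusting plugins at particular extension points; the second scheduler name below is hypothetical.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: no-scoring-scheduler      # hypothetical second profile
  plugins:
    score:
      disabled:
      - name: '*'                          # disable all score plugins for this profile
```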
You can specify scheduling profiles by running `kube-scheduler --config <filename>`,

View File

@ -8,7 +8,7 @@ card:
weight: 40
---
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice "fast paths" for creating Kubernetes clusters.
<img src="/images/kubeadm-stacked-color.png" align="right" width="150px">Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice "fast paths" for creating Kubernetes clusters.
kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have addons, like the Kubernetes Dashboard, monitoring solutions, and cloud-specific addons, is not in scope.
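Both commands can also be driven from a configuration file passed with `--config`; the sketch below shows a minimal ClusterConfiguration, with the version, endpoint, and Pod CIDR values being purely illustrative.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0                      # illustrative version
controlPlaneEndpoint: "cluster-endpoint:6443"   # illustrative endpoint
networking:
  podSubnet: 10.244.0.0/16                      # illustrative Pod CIDR
```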

View File

@ -1,61 +0,0 @@
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference conent, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
Kubeadm experimental sub-commands
### Synopsis
Kubeadm experimental sub-commands
### Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for alpha</p></td>
</tr>
</tbody>
</table>
### Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--rootfs string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[EXPERIMENTAL] The path to the 'real' host root filesystem.</p></td>
</tr>
</tbody>
</table>

View File

@ -1,63 +0,0 @@
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference conent, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
Kubeconfig file utilities
### Synopsis
Kubeconfig file utilities.
Alpha Disclaimer: this command is currently alpha.
### Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for kubeconfig</p></td>
</tr>
</tbody>
</table>
### Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--rootfs string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[EXPERIMENTAL] The path to the 'real' host root filesystem.</p></td>
</tr>
</tbody>
</table>

View File

@ -1,102 +0,0 @@
<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference conent, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->
Output a kubeconfig file for an additional user
### Synopsis
Output a kubeconfig file for an additional user.
Alpha Disclaimer: this command is currently alpha.
```
kubeadm alpha kubeconfig user [flags]
```
### Examples
```
# Output a kubeconfig file for an additional user named foo using a kubeadm config file bar
kubeadm alpha kubeconfig user --client-name=foo --config=bar
```
### Options
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--client-name string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The name of user. It will be used as the CN if client certificates are created</p></td>
</tr>
<tr>
<td colspan="2">--config string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>help for user</p></td>
</tr>
<tr>
<td colspan="2">--org strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The orgnizations of the client certificate. It will be used as the O if client certificates are created</p></td>
</tr>
<tr>
<td colspan="2">--token string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The token that should be used as the authentication mechanism for this kubeconfig, instead of client certificates</p></td>
</tr>
</tbody>
</table>
### Options inherited from parent commands
<table style="width: 100%; table-layout: fixed;">
<colgroup>
<col span="1" style="width: 10px;" />
<col span="1" />
</colgroup>
<tbody>
<tr>
<td colspan="2">--rootfs string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[EXPERIMENTAL] The path to the 'real' host root filesystem.</p></td>
</tr>
</tbody>
</table>

View File

@ -17,7 +17,7 @@ Generate keys and certificate signing requests
Generates keys and certificate signing requests (CSRs) for all the certificates required to run the control plane. This command also generates partial kubeconfig files with private key data in the "users &gt; user &gt; client-key-data" field, and for each kubeconfig file an accompanying ".csr" file is created.
This command is designed for use in [Kubeadm External CA Mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing.
This command is designed for use in [Kubeadm External CA Mode](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing.
The PEM encoded signed certificates should then be saved alongside the key files, using ".crt" as the file extension, or in the case of kubeconfig files, the PEM encoded signed certificate should be base64 encoded and added to the kubeconfig file in the "users &gt; user &gt; client-certificate-data" field.
@ -29,7 +29,7 @@ kubeadm certs generate-csr [flags]
```
# The following command will generate keys and CSRs for all control-plane certificates and kubeconfig files:
kubeadm alpha certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s/pki
kubeadm certs generate-csr --kubeconfig-dir /tmp/etc-k8s --cert-dir /tmp/etc-k8s/pki
```
### Options

View File

@ -50,20 +50,6 @@ kubeadm certs renew admin.conf [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr>
<tr>
<td colspan="2">--csr-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to output the CSRs and private keys to</p></td>
</tr>
<tr>
<td colspan="2">--csr-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create CSRs instead of generating certificates</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>

View File

@ -44,20 +44,6 @@ kubeadm certs renew all [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr>
<tr>
<td colspan="2">--csr-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to output the CSRs and private keys to</p></td>
</tr>
<tr>
<td colspan="2">--csr-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create CSRs instead of generating certificates</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>

View File

@ -50,20 +50,6 @@ kubeadm certs renew apiserver-etcd-client [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr>
<tr>
<td colspan="2">--csr-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to output the CSRs and private keys to</p></td>
</tr>
<tr>
<td colspan="2">--csr-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create CSRs instead of generating certificates</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>

View File

@ -50,20 +50,6 @@ kubeadm certs renew apiserver-kubelet-client [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr>
<tr>
<td colspan="2">--csr-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to output the CSRs and private keys to</p></td>
</tr>
<tr>
<td colspan="2">--csr-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create CSRs instead of generating certificates</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>

View File

@ -50,20 +50,6 @@ kubeadm certs renew apiserver [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to a kubeadm configuration file.</p></td>
</tr>
<tr>
<td colspan="2">--csr-dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The path to output the CSRs and private keys to</p></td>
</tr>
<tr>
<td colspan="2">--csr-only</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Create CSRs instead of generating certificates</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>

Some files were not shown because too many files have changed in this diff.