Merge pull request #31224 from nate-double-u/merged-main-dev-1.24

Merged main into dev 1.24
pull/31338/head
Kubernetes Prow Robot 2022-01-11 11:57:15 -08:00 committed by GitHub
commit c8c474d07d
203 changed files with 62772 additions and 8406 deletions


@ -34,6 +34,6 @@ Note that code issues should be filed against the main kubernetes repository, wh
### Submitting Documentation Pull Requests
If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/home/contribute/create-pull-request/).
If you're fixing an issue in the existing documentation, you should submit a PR against the main branch. Follow [these instructions to create a documentation pull request against the kubernetes.io repository](http://kubernetes.io/docs/home/contribute/create-pull-request/).
For more information, see [contributing to Kubernetes docs](https://kubernetes.io/docs/contribute/).


@ -2,10 +2,12 @@ aliases:
sig-docs-blog-owners: # Approvers for blog content
- onlydole
- mrbobbytables
- sftim
sig-docs-blog-reviewers: # Reviewers for blog content
- mrbobbytables
- onlydole
- sftim
- nate-double-u
sig-docs-de-owners: # Admins for German content
- bene2k1
- mkorbi
@ -125,6 +127,7 @@ aliases:
- ClaudiaJKang
- gochist
- ianychoi
- jihoon-seo
- seokho-son
- ysyukr
sig-docs-ko-reviews: # PR reviews for Korean content
@ -242,19 +245,18 @@ aliases:
- saschagrunert # SIG Chair
release-engineering-approvers:
- cpanato # Release Manager
- hasheddan # subproject owner / Release Manager
- palnabarun # Release Manager
- puerco # Release Manager
- saschagrunert # subproject owner / Release Manager
- justaugustus # subproject owner / Release Manager
- Verolop # Release Manager
- xmudrii # Release Manager
release-engineering-reviewers:
- ameukam # Release Manager Associate
- jimangel # Release Manager Associate
- markyjackson-taulia # Release Manager Associate
- mkorbi # Release Manager Associate
- palnabarun # Release Manager Associate
- onlydole # Release Manager Associate
- sethmccombs # Release Manager Associate
- thejoycekung # Release Manager Associate
- verolop # Release Manager Associate
- wilsonehusin # Release Manager Associate


@ -4,10 +4,12 @@ title: 'Health checking gRPC servers on Kubernetes'
date: 2018-10-01
---
_Built-in gRPC probes were introduced in Kubernetes 1.23. To learn more, see [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)._
**Author**: [Ahmet Alp Balkan](https://twitter.com/ahmetb) (Google)
**Update (December 2021):** _Kubernetes now has built-in gRPC health probes starting in v1.23.
To learn more, see [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
This article was originally written about an external tool to achieve the same task._
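_For reference, a minimal sketch of the built-in probe mentioned above (in v1.23 it requires the `GRPCContainerProbe` feature gate; the image and port below are illustrative):_
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-liveness-demo               # illustrative name
spec:
  containers:
  - name: server
    image: example.com/grpc-server:1.0   # must implement the gRPC Health Checking Protocol
    ports:
    - containerPort: 50051
    livenessProbe:
      grpc:
        port: 50051                      # the kubelet calls the standard gRPC health service on this port
      initialDelaySeconds: 10
```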
[gRPC](https://grpc.io) is on its way to becoming the lingua franca for
communication between cloud-native microservices. If you are deploying gRPC
applications to Kubernetes today, you may be wondering about the best way to


@ -12,6 +12,8 @@ on the deprecation of Docker as a container runtime for Kubernetes kubelets, and
what that means, check out the blog post
[Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/).
Also, you can read [check whether Dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) to check whether it does.
### Why is dockershim being deprecated?
Maintaining dockershim has become a heavy burden on the Kubernetes maintainers.


@ -1,6 +1,6 @@
---
layout: blog
title: 'Pod Security Graduates to Beta'
title: 'Kubernetes 1.23: Pod Security Graduates to Beta'
date: 2021-12-09
slug: pod-security-admission-beta
---


@ -1,6 +1,6 @@
---
layout: blog
title: "Kubernetes 1.23 Prevent PersistentVolume leaks when deleting out of order"
title: "Kubernetes 1.23: Prevent PersistentVolume leaks when deleting out of order"
date: 2021-12-15T10:00:00-08:00
slug: kubernetes-1-23-prevent-persistentvolume-leaks-when-deleting-out-of-order
---


@ -0,0 +1,103 @@
---
layout: blog
title: 'Kubernetes 1.23: StatefulSet PVC Auto-Deletion (alpha)'
date: 2021-12-16
slug: kubernetes-1-23-statefulset-pvc-auto-deletion
---
**Author:** Matthew Cary (Google)
Kubernetes v1.23 introduced a new, alpha-level policy for
[StatefulSets](docs/concepts/workloads/controllers/statefulset/) that controls the lifetime of
[PersistentVolumeClaims](docs/concepts/storage/persistent-volumes/) (PVCs) generated from the
StatefulSet spec template for cases when they should be deleted automatically when the StatefulSet
is deleted or pods in the StatefulSet are scaled down.
## What problem does this solve?
A StatefulSet spec can include Pod and PVC templates. When a replica is first created, the
Kubernetes control plane creates a PVC for that replica if one does not already exist. The behavior
before Kubernetes v1.23 was that the control plane never cleaned up the PVCs created for
StatefulSets - this was left up to the cluster administrator, or to some add-on automation that
you'd have to find, check suitability, and deploy. The common pattern for managing PVCs, either
manually or through tools such as Helm, is that the PVCs are tracked by the tool that manages them,
with explicit lifecycle. Workflows that use StatefulSets must determine on their own what PVCs are
created by a StatefulSet and what their lifecycle should be.
Before this new feature, when a StatefulSet-managed replica disappears, either because the
StatefulSet is reducing its replica count, or because its StatefulSet is deleted, the PVC and its
backing volume remains and must be manually deleted. While this behavior is appropriate when the
data is critical, in many cases the persistent data in these PVCs is either temporary, or can be
reconstructed from another source. In those cases, PVCs and their backing volumes remaining after
their StatefulSet or replicas have been deleted are not necessary, incur cost, and require manual
cleanup.
## The new StatefulSet PVC retention policy
If you enable the alpha feature, a StatefulSet spec includes a PersistentVolumeClaim retention
policy. This is used to control if and when PVCs created from a StatefulSet's `volumeClaimTemplate`
are deleted. This first iteration of the retention policy contains two situations where PVCs may be
deleted.
The first situation is when the StatefulSet resource is deleted (which implies that all replicas are
also deleted). This is controlled by the `whenDeleted` policy. The second situation, controlled by
`whenScaled` is when the StatefulSet is scaled down, which removes some but not all of the replicas
in a StatefulSet. In both cases the policy can either be `Retain`, where the corresponding PVCs are
not touched, or `Delete`, which means that PVCs are deleted. The deletion is done with a normal
[object deletion](/docs/concepts/architecture/garbage-collection/), so that, for example, all
retention policies for the underlying PV are respected.
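As a sketch of what this looks like in a manifest (assuming the alpha `persistentVolumeClaimRetentionPolicy` field; the names and sizes below are illustrative):
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                          # illustrative name
spec:
  serviceName: web
  replicas: 3
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete              # remove PVCs when the StatefulSet is deleted
    whenScaled: Retain               # keep PVCs for replicas removed by a scale-down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21            # illustrative image
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```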
This policy forms a matrix with four cases. I'll walk through and give an example for each one.
* **`whenDeleted` and `whenScaled` are both `Retain`.** This matches the existing behavior for
StatefulSets, where no PVCs are deleted. This is also the default retention policy. It's
appropriate to use when data on StatefulSet volumes may be irreplaceable and should only be
deleted manually.
* **`whenDeleted` is `Delete` and `whenScaled` is `Retain`.** In this case, PVCs are deleted only when
the entire StatefulSet is deleted. If the StatefulSet is scaled down, PVCs are not touched,
meaning they are available to be reattached if a scale-up occurs with any data from the previous
replica. This might be used for a temporary StatefulSet, such as in a CI instance or ETL
pipeline, where the data on the StatefulSet is needed only during the lifetime of the
StatefulSet, but while the task is running the data is not easily reconstructible. Any
retained state is needed for any replicas that scale down and then up.
* **`whenDeleted` and `whenScaled` are both `Delete`.** PVCs are deleted immediately when their
replica is no longer needed. Note this does not include when a Pod is deleted and a new version
rescheduled, for example when a node is drained and Pods need to migrate elsewhere. The PVC is
deleted only when the replica is no longer needed as signified by a scale-down or StatefulSet
deletion. This use case is for when data does not need to live beyond the life of its
replica. Perhaps the data is easily reconstructible and the cost savings of deleting unused PVCs
are more important than quick scale-up, or perhaps, when a new replica is created, any data
from a previous replica is not usable and must be reconstructed anyway.
* **`whenDeleted` is `Retain` and `whenScaled` is `Delete`.** This is similar to the previous case,
when there is little benefit to keeping PVCs for fast reuse during scale-up. An example of a
situation where you might use this is an Elasticsearch cluster. Typically you would scale that
workload up and down to match demand, whilst ensuring a minimum number of replicas (for example:
3). When scaling down, data is migrated away from removed replicas and there is no benefit to
retaining those PVCs. However, it can be useful to bring the entire Elasticsearch cluster down
temporarily for maintenance. If you need to take the Elasticsearch system offline, you can do
this by temporarily deleting the StatefulSet, and then bringing the Elasticsearch cluster back
by recreating the StatefulSet. The PVCs holding the Elasticsearch data will still exist and the
new replicas will automatically use them.
Visit the
[documentation](docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) to
see all the details.
## What's next?
Enable the feature and try it out! Enable the `StatefulSetAutoDeletePVC` feature gate on a cluster,
then create a StatefulSet using the new policy. Test it out and tell us what you think!
I'm very curious to see if this owner reference mechanism works well in practice. For example, we
realized there is no mechanism in Kubernetes for knowing who set a reference, so it's possible that
the StatefulSet controller may fight with custom controllers that set their own
references. Fortunately, maintaining the existing retention behavior does not involve any new owner
references, so default behavior will be compatible.
Please tag any issues you report with the label `sig/apps` and assign them to Matthew Cary
([@mattcary](https://github.com/mattcary) at GitHub).
Enjoy!


@ -0,0 +1,148 @@
---
layout: blog
title: "What's new in Security Profiles Operator v0.4.0"
date: 2021-12-17
slug: security-profiles-operator
---
**Authors:** Jakub Hrozek, Juan Antonio Osorio, Paulo Gomes, Sascha Grunert
---
The [Security Profiles Operator (SPO)](https://sigs.k8s.io/security-profiles-operator)
is an out-of-tree Kubernetes enhancement to make the management of
[seccomp](https://en.wikipedia.org/wiki/Seccomp),
[SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) and
[AppArmor](https://en.wikipedia.org/wiki/AppArmor) profiles easier and more
convenient. We're happy to announce that we recently [released
v0.4.0](https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.4.0)
of the operator, which contains a ton of new features, fixes and usability
improvements.
## What's new
It has been a while since the last
[v0.3.0](https://github.com/kubernetes-sigs/security-profiles-operator/releases/tag/v0.3.0)
release of the operator. We added new features, fine-tuned existing ones and
reworked our documentation in 290 commits over the past half year.
One of the highlights is that we're now able to record seccomp and SELinux
profiles using the operator's [log enricher](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#log-enricher-based-recording).
This reduces the dependencies required for profile recording to having
[auditd](https://linux.die.net/man/8/auditd) or
[syslog](https://en.wikipedia.org/wiki/Syslog) (as a fallback) running on the
nodes. All profile recordings in the operator work in the same way by using the
`ProfileRecording` CRD as well as their corresponding [label
selectors](/docs/concepts/overview/working-with-objects/labels). The log
enricher itself can also be used to gather meaningful insights about the seccomp and
SELinux messages of a node. Check out the [official
documentation](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#using-the-log-enricher)
to learn more about it.
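For example, a log-enricher-based recording of a workload can be requested with a `ProfileRecording` roughly like the following (the names are illustrative; see the linked documentation for the exact fields):
```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: ProfileRecording
metadata:
  name: my-recording           # illustrative name
spec:
  kind: SeccompProfile         # record a seccomp profile (SELinux recording works analogously)
  recorder: logs               # use the log enricher based recorder
  podSelector:
    matchLabels:
      app: my-app              # record pods carrying this label
```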
### seccomp related improvements
Besides the log enricher based recording, we now offer an alternative way to record
seccomp profiles by utilizing [ebpf](https://ebpf.io). This optional feature can
be enabled by setting `enableBpfRecorder` to `true`. This results in running a
dedicated container, which ships a custom bpf module on every node to collect
the syscalls for containers. It even supports older kernel versions which do not
expose the [BPF Type Format (BTF)](https://www.kernel.org/doc/html/latest/bpf/btf.html) by
default, as well as the `amd64` and `arm64` architectures. Check out
[our documentation](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#ebpf-based-recording)
to see it in action. By the way, we now add the seccomp profile architecture of
the recorder host to the recorded profile as well.
We also graduated the seccomp profile API from `v1alpha1` to `v1beta1`. This
aligns with our overall goal to stabilize the CRD APIs over time. The only thing
which has changed is that the seccomp profile type `Architectures` now points to
`[]Arch` instead of `[]*Arch`.
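For illustration, a minimal profile in the graduated `v1beta1` API might look roughly like this (the default action and syscall list here are only examples):
```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: example-profile        # illustrative name
spec:
  defaultAction: SCMP_ACT_ERRNO
  architectures:
  - SCMP_ARCH_X86_64           # architectures is now a plain []Arch
  syscalls:
  - action: SCMP_ACT_ALLOW
    names:
    - exit_group
    - futex
    - read
    - write
```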
### SELinux enhancements
Managing SELinux policies (an equivalent to using `semodule` that
you would normally call on a single server) is not done by SPO
itself, but by another container called selinuxd to provide better
isolation. This release switched from using selinuxd containers from
a personal repository to images located under [our team's quay.io
repository](https://quay.io/organization/security-profiles-operator).
The selinuxd repository has moved as well to [the containers GitHub
organization](https://github.com/containers/selinuxd).
Please note that selinuxd links dynamically to `libsemanage` and mounts the
SELinux directories from the nodes, which means that the selinuxd container
must be running the same distribution as the cluster nodes. SPO defaults
to using CentOS-8 based containers, but we also build Fedora based ones.
If you are using another distribution and would like us to add support for
it, please file [an issue against selinuxd](https://github.com/containers/selinuxd/issues).
#### Profile Recording
This release adds support for recording of SELinux profiles.
The recording itself is managed via an instance of a `ProfileRecording` Custom
Resource as seen in an
[example](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/examples/profilerecording-selinux-logs.yaml)
in our repository. From the user's point of view it works pretty much the same
as recording of seccomp profiles.
Under the hood, to know what the workload is doing SPO installs a special
permissive policy called [selinuxrecording](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/deploy/base/profiles/selinuxrecording.cil)
on startup which allows everything and logs all AVCs to `audit.log`.
These AVC messages are scraped by the log enricher component and when
the recorded workload exits, the policy is created.
#### `SELinuxProfile` CRD graduation
A `v1alpha2` version of the `SelinuxProfile` object has been introduced. This
removes the raw Common Intermediate Language (CIL) from the object itself and
instead adds a simple policy language to ease the writing and parsing
experience.
Alongside, a `RawSelinuxProfile` object was also introduced. This contains a
wrapped and raw representation of the policy. This was intended for folks to be
able to take their existing policies into use as soon as possible. However, no
validations are done here.
### AppArmor support
This version introduces the initial support for AppArmor, allowing users to load and
unload AppArmor profiles into cluster nodes by using the new [AppArmorProfile](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/deploy/base/crds/apparmorprofile.yaml) CRD.
To enable AppArmor support, use the [enableAppArmor feature gate](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/examples/config.yaml#L10) switch in your SPO configuration.
Then use our [apparmor example](https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/examples/apparmorprofile.yaml) to deploy your first profile across your cluster.
### Metrics
The operator now exposes metrics, which are described in detail in
our new [metrics documentation](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#using-metrics).
We decided to secure the metrics retrieval process by using
[kube-rbac-proxy](https://github.com/brancz/kube-rbac-proxy), while we ship an
additional `spo-metrics-client` cluster role (and binding) to retrieve the
metrics from within the cluster. If you're using
[OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift),
then we provide an out of the box working
[`ServiceMonitor`](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#automatic-servicemonitor-deployment)
to access the metrics.
#### Debuggability and robustness
Besides all those new features, we decided to restructure parts of the Security
Profiles Operator internally to make it easier to debug and more robust. For
example, we now maintain an internal [gRPC](https://grpc.io) API to communicate
within the operator across different features. We also improved the performance
of the log enricher, which now caches results for faster retrieval of the log
data. The operator can be put into a more [verbose log mode](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#set-logging-verbosity)
by setting `verbosity` from `0` to `1`.
We also print the used `libseccomp` and `libbpf` versions on startup, as well as
expose CPU and memory profiling endpoints for each container via the
[`enableProfiling` option](https://github.com/kubernetes-sigs/security-profiles-operator/blob/71b3915/installation-usage.md#enable-cpu-and-memory-profiling).
Dedicated liveness and startup probes inside the operator daemon now
additionally improve the lifecycle of the operator.
## Conclusion
Thank you for reading this update. We're looking forward to future enhancements
of the operator and would love to get your feedback about the latest release.
Feel free to reach out to us via the Kubernetes Slack channel
[#security-profiles-operator](https://kubernetes.slack.com/messages/security-profiles-operator)
with any feedback or questions.


@ -0,0 +1,201 @@
---
layout: blog
title: "Kubernetes-in-Kubernetes and the WEDOS PXE bootable server farm"
slug: kubernetes-in-kubernetes-and-pxe-bootable-server-farm
date: 2021-12-22
---
**Author**: Andrei Kvapil (WEDOS)
When you own two data centers, thousands of physical servers, virtual machines and hosting for hundreds of thousands of sites, Kubernetes can actually simplify the management of all these things. As practice has shown, by using Kubernetes, you can declaratively describe and manage not only applications, but also the infrastructure itself. I work for the largest Czech hosting provider **WEDOS Internet a.s** and today I'll show you two of my projects — [Kubernetes-in-Kubernetes](https://github.com/kvaps/kubernetes-in-kubernetes) and [Kubefarm](https://github.com/kvaps/kubefarm).
With their help you can deploy a fully working Kubernetes cluster inside another Kubernetes cluster using Helm in just a couple of commands. How and why?
Let me introduce you to how our infrastructure works. All our physical servers can be divided into two groups: **control-plane** and **compute** nodes. Control-plane nodes are usually set up manually, have a stable OS installed, and are designed to run all cluster services, including the Kubernetes control plane. The main task of these nodes is to ensure the smooth operation of the cluster itself. Compute nodes do not have any operating system installed by default; instead, they boot the OS image over the network directly from the control-plane nodes. Their job is to carry out the workload.
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/scheme01.svg" alt="Kubernetes cluster layout" >}}
Once nodes have downloaded their image, they can continue to work without keeping a connection to the PXE server. That is, the PXE server just keeps the rootfs image and does not hold any other complex logic. After our nodes have booted, we can safely restart the PXE server; nothing critical will happen to them.
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/scheme02.svg" alt="Kubernetes cluster after bootstrapping" >}}
After booting, the first thing our nodes do is join the existing Kubernetes cluster, namely, execute the **kubeadm join** command so that kube-scheduler can schedule pods on them and launch various workloads afterwards. From the beginning we used a scheme where nodes were joined into the same cluster used for the control-plane nodes.
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/scheme03.svg" alt="Kubernetes scheduling containers to the compute nodes" >}}
This scheme worked stably for over two years. However, later we decided to add containerized Kubernetes to it. Now we can spawn new Kubernetes clusters very easily right on our control-plane nodes, which are now members of special admin-clusters, and compute nodes can be joined directly to their own clusters, depending on the configuration.
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/scheme04.svg" alt="Multiple clusters are running in single Kubernetes, compute nodes joined to them" >}}
## Kubefarm
This project was created with the goal of enabling anyone to deploy such an infrastructure in just a couple of commands using Helm and end up with about the same setup.
At this point, we moved away from the idea of a monocluster, because it turned out to be not very convenient for managing the work of several development teams in the same cluster. The fact is that Kubernetes was never designed as a multi-tenant solution, and at the moment it does not provide sufficient means of isolation between projects. Therefore, running separate clusters for each team turned out to be a good idea. However, there should not be too many clusters, so that they remain convenient to manage, nor too few, so that development teams keep sufficient independence from each other.
The scalability of our clusters became noticeably better after that change. The more clusters you have for a given number of nodes, the smaller the failure domain and the more stably they work. And as a bonus, we got a fully declaratively described infrastructure. Thus, you can now deploy a new Kubernetes cluster in the same way as deploying any other application in Kubernetes.
It uses [Kubernetes-in-Kubernetes](http://github.com/kvaps/kubernetes-in-kubernetes) as a basis, [LTSP](https://github.com/ltsp/ltsp/) as the PXE server from which the nodes are booted, and automates the DHCP server configuration using [dnsmasq-controller](https://github.com/kvaps/dnsmasq-controller):
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/kubefarm.png" alt="Kubefarm" >}}
## How it works
Now let's see how it works. In general, if you look at Kubernetes from an application perspective, you can see that it follows all the principles of [The Twelve-Factor App](https://12factor.net/) and is actually written very well. Thus, running Kubernetes as an app in a different Kubernetes cluster shouldn't be a big deal.
### Running Kubernetes in Kubernetes
Now let's take a look at the [Kubernetes-in-Kubernetes](https://github.com/kvaps/kubernetes-in-kubernetes) project, which provides a ready-made Helm chart for running Kubernetes in Kubernetes.
Here are the parameters that you can pass to Helm in the values file:
* [**kubernetes/values.yaml**](https://github.com/kvaps/kubernetes-in-kubernetes/tree/v0.13.1/deploy/helm/kubernetes)
<img alt="Kubernetes is just five binaries" style="float: right; max-height: 280px;" src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/5binaries.png">
Besides **persistence** (storage parameters for the cluster), the Kubernetes control-plane components are described here, namely the **etcd cluster**, **apiserver**, **controller-manager** and **scheduler**. These are pretty much standard Kubernetes components. There is a light-hearted saying that “Kubernetes is just five binaries”. So here is where the configuration for these binaries is located.
If you ever tried to bootstrap a cluster using kubeadm, this config will remind you of its configuration. But in addition to the Kubernetes entities, you also have an admin container. In fact, it is a container which holds two binaries: **kubectl** and **kubeadm**. They are used to generate the kubeconfig for the above components and to perform the initial configuration of the cluster. Also, in an emergency, you can always exec into it to check and manage your cluster.
After the release [has been deployed](https://asciinema.org/a/407280), you can see a list of pods: **admin-container**, **apiserver** in two replicas, **controller-manager**, **etcd-cluster**, **scheduler** and the initial job that initializes the cluster. At the end you have a command which allows you to get a shell into the admin container; you can use it to see what is happening inside:
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot01.svg)](https://asciinema.org/a/407280?autoplay=1)
Also, let's take a look at the certificates. If you've ever installed Kubernetes, then you know that it has a _scary_ directory `/etc/kubernetes/pki` with a bunch of certificates. In the case of Kubernetes-in-Kubernetes, their management is fully automated with cert-manager. Thus, it is enough to pass all the certificate parameters to Helm during installation, and all the certificates will automatically be generated for your cluster.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot02.svg)](https://asciinema.org/a/407280?t=15&autoplay=1)
Looking at one of the certificates, e.g. the apiserver one, you can see that it has a list of DNS names and IP addresses. If you want to make this cluster accessible from outside, just describe the additional DNS names in the values file and update the release. This will update the certificate resource, and cert-manager will regenerate the certificate. You'll no longer need to think about this. Whereas kubeadm certificates need to be renewed at least once a year, here cert-manager will take care of renewing them automatically.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot03.svg)](https://asciinema.org/a/407280?t=25&autoplay=1)
Now let's log into the admin container and look at the cluster and nodes. Of course, there are no nodes yet, because at the moment you have deployed just a blank Kubernetes control plane. But in the kube-system namespace you can see some coredns pods waiting for scheduling, and some configmaps have already appeared. That is, you can conclude that the cluster is working:
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot04.svg)](https://asciinema.org/a/407280?t=30&autoplay=1)
Here is the [diagram of the deployed cluster](https://kvaps.github.io/images/posts/Kubernetes-in-Kubernetes-and-PXE-bootable-servers-farm/Argo_CD_kink_network.html). You can see services for all Kubernetes components: **apiserver**, **controller-manager**, **etcd-cluster** and **scheduler**. And, on the right side, the pods to which they forward traffic.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/argocd01.png)](https://kvaps.github.io/images/posts/Kubernetes-in-Kubernetes-and-PXE-bootable-servers-farm/Argo_CD_kink_network.html)
*By the way, the diagram is drawn in [ArgoCD](https://argoproj.github.io/argo-cd/) — the GitOps tool we use to manage our clusters, and cool diagrams are one of its features.*
### Orchestrating physical servers
OK, now you can see how our Kubernetes control plane is deployed, but what about worker nodes, how do we add them? As I already said, all our servers are bare metal. We do not use virtualization to run Kubernetes; instead, we orchestrate all the physical servers ourselves.
Also, we use the Linux network boot feature very actively. Moreover, this is genuine network booting, not some kind of automated installation. When the nodes boot, they simply run a ready-made system image. That is, to update any node, we just need to reboot it and it will download a new image. It is very easy, simple and convenient.
For this purpose, the [Kubefarm](https://github.com/kvaps/kubefarm) project was created, which allows you to automate this process. The most commonly used examples can be found in the [examples](https://github.com/kvaps/kubefarm/tree/v0.13.1/examples) directory. The most standard of them is named [generic](https://github.com/kvaps/kubefarm/tree/v0.13.1/examples/generic). Let's take a look at values.yaml:
* [**generic/values.yaml**](https://github.com/kvaps/kubefarm/blob/v0.13.1/examples/generic/values.yaml)
Here you can specify the parameters which are passed to the upstream Kubernetes-in-Kubernetes chart. In order for your control plane to be accessible from the outside, it is enough to specify the IP address here, but if you wish, you can specify a DNS name instead.
In the PXE server configuration you can specify a timezone. You can also add an SSH key for logging in without a password (but you can also specify a password), as well as kernel modules and parameters that should be applied during booting the system.
Next comes the **nodePools** configuration, i.e. the nodes themselves. If you've ever used the Terraform module for GKE, this logic will remind you of it. Here you statically describe all the nodes with a set of parameters:
- **Name** (hostname);
- **MAC-addresses** — we have nodes with two network cards, and each one can boot from any of the MAC addresses specified here.
- **IP-address**, which the DHCP server should issue to this node.
In this example, you have two pools: the first has five nodes and the second has only one; the second pool also has two tags assigned. Tags are the way to describe configuration for specific nodes. For example, you can add specific DHCP options for some pools, options for the PXE server for booting (e.g. here the debug option is enabled) and a set of **kubernetesLabels** and **kubernetesTaints** options. What does that mean?
For example, in this configuration you have a second nodePool with one node. The pool has the **debug** and **foo** tags assigned. Now look at the options for the **foo** tag in **kubernetesLabels**. This means that the m1c43 node will boot with these two labels and the taint assigned. Everything seems simple; a rough sketch of such a configuration is shown below. Then [let's try](https://asciinema.org/a/407282) this in practice.
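A purely illustrative sketch of such a static node description (the field names below are hypothetical; the authoritative schema is the generic values.yaml linked above):
```yaml
nodePools:
- name: generic                 # hypothetical pool of five nodes
  nodes:
  - name: m1c1                  # hostname
    mac: 02:00:00:00:01:01      # MAC address the node may boot from
    ip: 10.28.0.11              # address the DHCP server should issue to this node
- name: debug-pool              # hypothetical second pool with one node and two tags
  tags:
  - debug
  - foo
  nodes:
  - name: m1c43
    mac: 02:00:00:00:01:43
    ip: 10.28.0.53
tags:
  foo:
    kubernetesLabels:
      node-role.kubernetes.io/foo: ""   # label applied when the node joins
    kubernetesTaints:
    - key: foo
      effect: NoSchedule              # taint applied when the node joins
```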
### Demo
Go to [examples](https://github.com/kvaps/kubefarm/tree/v0.13.1/examples) and update the previously deployed chart to Kubefarm. Just use the [generic](https://github.com/kvaps/kubefarm/tree/v0.13.1/examples/generic) parameters and look at the pods. You can see that a PXE server and one more job were added. This job essentially goes to the deployed Kubernetes cluster and creates a new token. It now runs repeatedly, every 12 hours, to generate a new token so that the nodes can connect to your cluster.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot05.svg)](https://asciinema.org/a/407282?autoplay=1)
In a [graphical representation](https://kvaps.github.io/images/posts/Kubernetes-in-Kubernetes-and-PXE-bootable-servers-farm/Argo_CD_Applications_kubefarm-network.html), it looks about the same, but now the apiserver is exposed externally.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/argocd02.png)](https://kvaps.github.io/images/posts/Kubernetes-in-Kubernetes-and-PXE-bootable-servers-farm/Argo_CD_Applications_kubefarm-network.html)
In the diagram, the IP is highlighted in green; the PXE server can be reached through it. At the moment, Kubernetes does not allow creating a single LoadBalancer service for both TCP and UDP protocols by default, so you have to create two different services with the same IP address: one for TFTP, and the second for HTTP, through which the system image is downloaded.
But this simple example is not always enough; sometimes you might need to modify the logic at boot time. For example, here is the directory [advanced_network](https://github.com/kvaps/kubefarm/tree/v0.13.1/examples/advanced_network), inside which there is a [values file](https://github.com/kvaps/kubefarm/tree/v0.13.1/examples/advanced_network) with a simple shell script. Let's call it `network.sh`:
* [**network.sh**](https://github.com/kvaps/kubefarm/blob/v0.13.1/examples/advanced_network/values.yaml#L14-L78)
All this script does is take environment variables at boot time and generate a network configuration based on them. It creates a directory and puts the netplan config inside. For example, a bonding interface is created here. Basically, this script can contain everything you need: it can hold the network configuration, generate system services, add some hooks or describe any other logic. Anything that can be described in bash or shell will work here, and it will be executed at boot time.
Let's see how it can be [deployed](https://asciinema.org/a/407284). Let's pass the generic values file as the first parameter, and an additional values file as the second parameter. This is a standard Helm feature. This way you can also pass secrets, but in this case the configuration is just extended by the second file:
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot06.svg)](https://asciinema.org/a/407284?autoplay=1)
Let's look at the configmap **foo-kubernetes-ltsp** for the netboot server and make sure that the `network.sh` script is really there. These commands are used to configure the network at boot time:
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot07.svg)](https://asciinema.org/a/407284?t=15&autoplay=1)
[Here](https://asciinema.org/a/407286) you can see how it works in principle. The chassis interface (we use HPE Moonshot 1500) hosts the nodes; you can enter the `show node list` command to get a list of all the nodes. Now you can see the booting process.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot08.svg)](https://asciinema.org/a/407286?autoplay=1)
You can also get their MAC addresses with the `show node macaddr all` command. We have a clever operator that collects MAC addresses from the chassis automatically and passes them to the DHCP server. Actually, it just creates custom configuration resources for the dnsmasq-controller running in the same admin Kubernetes cluster. Also, through this interface you can control the nodes themselves, e.g. turn them on and off.
If you have no way to access the chassis through iLO and collect a list of MAC addresses for your nodes, you can consider using the [catchall cluster](https://asciinema.org/a/407287) pattern. Strictly speaking, it is just a cluster with a dynamic DHCP pool. Thus, all nodes that are not described in the configuration of other clusters will automatically join this cluster.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot09.svg)](https://asciinema.org/a/407287?autoplay=1)
For example, you can see a special cluster with some nodes in it. They have joined the cluster with an auto-generated name based on their MAC address. From this point you can connect to them and see what happens there. Here you can prepare them in some way, for example, set up the file system, and then rejoin them to another cluster.
Now let's try connecting to the node terminal and see how it boots. After the BIOS, the network card is configured; it sends a request to the DHCP server from a specific MAC address, and the DHCP server redirects it to a specific PXE server. Later, the kernel and initrd image are downloaded from the server using the standard HTTP protocol:
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot10.svg)](https://asciinema.org/a/407286?t=28&autoplay=1)
After loading the kernel, the node downloads the rootfs image and transfers control to systemd. Then the booting proceeds as usual, and after that the node joins Kubernetes:
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot11.svg)](https://asciinema.org/a/407286?t=80&autoplay=1)
If you take a look at **fstab**, you can see only two entries there: **/var/lib/docker** and **/var/lib/kubelet**; they are mounted as **tmpfs** (in fact, from RAM). At the same time, the root partition is mounted as **overlayfs**, so all changes that you make on the system will be lost on the next reboot.
Looking at the block devices on the node, you can see an NVMe disk, but it has not yet been mounted anywhere. There is also a loop device - this is the exact rootfs image downloaded from the server. At the moment it is located in RAM, occupies 653 MB and is mounted with the **loop** option.
If you look in **/etc/ltsp**, you find the `network.sh` file that was executed at boot. Among the containers, you can see a running `kube-proxy` and the `pause` container for it.
[![](/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/screenshot12.svg)](https://asciinema.org/a/407286?t=100&autoplay=1)
## Details
### Network Boot Image
But where does the main image come from? There is a little trick here. The image for the nodes is built through the [Dockerfile](https://github.com/kvaps/kubefarm/tree/v0.13.1/build/ltsp) along with the server. The [Docker multi-stage build](https://docs.docker.com/develop/develop-images/multistage-build/) feature allows you to easily add any packages and kernel modules exactly at the stage of the image build. It looks like this:
* [**Dockerfile**](https://github.com/kvaps/kubefarm/blob/v0.13.1/build/ltsp/Dockerfile)
What's going on here? First, we take a regular Ubuntu 20.04 and install all the packages we need: the **kernel**, **lvm**, **systemd**, **ssh**. In general, everything that you want to see on the final node should be described here. We also install `docker` together with `kubelet` and `kubeadm`, which are used to join the node to the cluster.
And then we perform additional configuration. In the last stage, we simply install `tftp` and `nginx` (which serve our image to clients) and **grub** (the bootloader). Then the root of the previous stage is copied into the final image and a squashed image is generated from it. That is, in fact, we get a Docker image which contains both the server and the boot image for our nodes. At the same time, it can be easily updated by changing the Dockerfile.
### Webhooks and API aggregation layer
I want to pay special attention to the problem of webhooks and the aggregation layer. In general, webhooks are a Kubernetes feature that allows you to respond to the creation or modification of any resource. Thus, you can add a handler so that when resources are applied, Kubernetes sends a request to some pod to check whether the configuration of the resource is correct, or to make additional changes to it.
But the point is, in order for webhooks to work, the apiserver must have direct access to the cluster for which it is running. If it is started in a separate cluster, as in our case, or even separately from any cluster, then the Konnectivity service can help us here. Konnectivity is one of the optional but officially supported Kubernetes components.
Let's take a cluster of four nodes as an example; each of them is running a `kubelet`, and we have the other Kubernetes components running outside: `kube-apiserver`, `kube-scheduler` and `kube-controller-manager`. By default, all these components interact with the apiserver directly - this is the best-known part of the Kubernetes logic. But in fact, there is also a reverse connection. For example, when you want to view the logs or run a `kubectl exec` command, the API server establishes a connection to the specific kubelet on its own:
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/konnectivity01.svg" alt="Kubernetes apiserver reaching kubelet" >}}
But the problem is that if we have a webhook, it usually runs as a standard pod with a service in our cluster. When the apiserver tries to reach it, it will fail, because it will try to access an in-cluster service named **webhook.namespace.svc** while being outside of the cluster where that service actually runs:
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/konnectivity02.svg" alt="Kubernetes apiserver can't reach webhook" >}}
And here Konnectivity comes to our rescue. Konnectivity is a tricky proxy server developed especially for Kubernetes. It can be deployed as a server next to the apiserver, and the Konnectivity agent is deployed in several replicas directly in the cluster you want to access. The agent establishes a connection to the server and sets up a stable channel that lets the apiserver access all webhooks and all kubelets in the cluster. Thus, all communication with the cluster now takes place through the Konnectivity server:
{{< figure src="/images/blog/2021-12-22-kubernetes-in-kubernetes-and-pxe-bootable-server-farm/konnectivity03.svg" alt="Kubernetes apiserver reaching webhook via konnectivity" >}}
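For reference, the apiserver is pointed at the Konnectivity server through an egress selector configuration. A minimal sketch, following the upstream Konnectivity setup documentation (the socket path is illustrative):
```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster                 # traffic to nodes, pods and webhooks in the cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```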
## Our plans
Of course, we are not going to stop at this stage. People interested in the project often write to me, and if there are enough interested people, I hope to move the Kubernetes-in-Kubernetes project under [Kubernetes SIGs](https://github.com/kubernetes-sigs), presenting it in the form of an official Kubernetes Helm chart. Perhaps, by making this project independent, we'll gather an even larger community.
I am also thinking of integrating it with the Machine Controller Manager, which would allow creating worker nodes not only from physical servers, but also, for example, from virtual machines created with KubeVirt and run in the same Kubernetes cluster. By the way, it would also allow spawning virtual machines in the clouds while having a control plane deployed locally.
I am also considering the option of integrating with the Cluster API so that you can create physical Kubefarm clusters directly through the Kubernetes environment. But at the moment I'm not completely sure about this idea. If you have any thoughts on this matter, I'll be happy to hear them.


@ -137,7 +137,7 @@ collection, which deletes images in order based on the last time they were used,
starting with the oldest first. The kubelet deletes images
until disk usage reaches the `LowThresholdPercent` value.
### Container image garbage collection {#container-image-garbage-collection}
### Container garbage collection {#container-image-garbage-collection}
The kubelet garbage collects unused containers based on the following variables,
which you can define:
@ -152,11 +152,11 @@ which you can define:
In addition to these variables, the kubelet garbage collects unidentified and
deleted containers, typically starting with the oldest first.
`MaxPerPodContainer` and `MaxContainer` may potentially conflict with each other
`MaxPerPodContainer` and `MaxContainers` may potentially conflict with each other
in situations where retaining the maximum number of containers per Pod
(`MaxPerPodContainer`) would go outside the allowable total of global dead
containers (`MaxContainers`). In this situation, the kubelet adjusts
`MaxPodPerContainer` to address the conflict. A worst-case scenario would be to
`MaxPerPodContainer` to address the conflict. A worst-case scenario would be to
downgrade `MaxPerPodContainer` to `1` and evict the oldest containers.
Additionally, containers owned by pods that have been deleted are removed once
they are older than `MinAge`.


@ -442,7 +442,7 @@ Message: Pod was terminated in response to imminent node shutdown.
To provide more flexibility during graceful node shutdown around the ordering
of pods during shutdown, graceful node shutdown honors the PriorityClass for
Pods, provided that you enabled this feature in your cluster. The feature
allows allows cluster administers to explicitly define the ordering of pods
allows cluster administrators to explicitly define the ordering of pods
during graceful node shutdown based on [priority
classes](docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass).
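A sketch of what this looks like in the kubelet configuration, assuming the `shutdownGracePeriodByPodPriority` field described in the graceful node shutdown documentation (the priority values and periods below are only examples):
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriodByPodPriority:
- priority: 100000                 # grace period for pods in this priority class value range
  shutdownGracePeriodSeconds: 10
- priority: 0                      # grace period for all remaining pods
  shutdownGracePeriodSeconds: 60
```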


@ -30,55 +30,7 @@ insert dynamic port numbers into configuration blocks, services have to know
how to find each other, etc. Rather than deal with this, Kubernetes takes a
different approach.
## The Kubernetes network model
Every `Pod` gets its own IP address, maximum one per IP family. This means you
do not need to deal with mapping container ports to host ports in order to
expose the `Pods` services on the network. This creates a clean,
backwards-compatible model where `Pods` can be treated much like VMs or physical
hosts from the perspectives of port allocation, naming, service discovery, load
balancing, application configuration, and migration.
Kubernetes IP addresses exist at the `Pod` scope, in the `status.PodIPs` field
- containers within a `Pod` share their network namespaces - including their IP
address. This means that containers within a `Pod` can all reach each other's
ports on `localhost`. This also means that containers within a `Pod` must
coordinate port usage, but this is no different from processes in a VM. This is
called the _IP-per-pod_ model.
In every cluster, there exists an abstract pod-network to which pods
are connected by default, unless explicitly configured to use the
host-network (on platforms that support it). Even if a host has
multiple IPs, host-network pods only have one Kubernetes IP address at
the `Pod` scope, that is, the `status.PodIPs` field contains one
IP per address family (for now), so the "IP-per-pod" model is guaranteed.
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
* any pod-network pod on any node can communicate with all other pod-network
pods on all nodes without NAT.
* non-pod processes on a node (the kubelet, and also for example any other system daemon) can
communicate with all pods on that node.
In addition, for platforms and runtimes that support running pods in the host OS network:
* host-network pods of a node can connect directly with all pods IPs on all
nodes, however, unlike pod-network pods, the source IP address might not be
present in the `Pod` `status.PodIPs` field.
This model is principally compatible with the desire for Kubernetes to enable
low-friction porting of apps from VMs to containers. If your workload previously ran
in a VM, your VM typically had a single IP address; everything in that VM could talk to
other VMs on your network.
This is the same basic model but less complex overall.
How this is implemented is a detail of the particular container runtime in use. Likewise, the networking option you choose may support [dual-stack IPv4/IPv6 networking](/docs/concepts/services-networking/dual-stack/); implementations vary.
It is possible to request ports on the `Node` itself which forward to your `Pod`
(called host ports), but this is a very niche operation. How that forwarding is
implemented is also a detail of the container runtime. The `Pod` itself is
blind to the existence or non-existence of host ports.
To learn about the Kubernetes networking model, see [here](/docs/concepts/services-networking/).
## How to implement the Kubernetes networking model


@ -239,6 +239,10 @@ propagation delay, where the cache propagation delay depends on the chosen cache
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
{{< note >}}
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive ConfigMap updates.
{{< /note >}}
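For reference, a minimal sketch of the subPath pattern that the note refers to (names are illustrative); a container mounted this way keeps serving the originally mounted data even after the ConfigMap is updated:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-subpath-demo      # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/app/app.properties
      subPath: app.properties       # this file will NOT receive ConfigMap updates
  volumes:
  - name: config
    configMap:
      name: app-config              # illustrative ConfigMap name
```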
## Immutable ConfigMaps {#configmap-immutable}
{{< feature-state for_k8s_version="v1.21" state="stable" >}}


@ -1,22 +1,24 @@
---
title: Managing Resources for Containers
title: Resource Management for Pods and Containers
content_type: concept
weight: 40
feature:
title: Automatic bin packing
description: >
Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability.
Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
---
<!-- overview -->
When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how
much of each resource a {{< glossary_tooltip text="Container" term_id="container" >}} needs.
much of each resource a {{< glossary_tooltip text="container" term_id="container" >}} needs.
The most common resources to specify are CPU and memory (RAM); there are others.
When you specify the resource _request_ for Containers in a Pod, the scheduler uses this
When you specify the resource _request_ for containers in a Pod, the
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this
information to decide which node to place the Pod on. When you specify a resource _limit_
for a Container, the kubelet enforces those limits so that the running container is not
for a container, the kubelet enforces those limits so that the running container is not
allowed to use more of that resource than the limit you set. The kubelet also reserves
at least the _request_ amount of that system resource specifically for that container
to use.
@ -33,7 +35,7 @@ For example, if you set a `memory` request of 256 MiB for a container, and that
a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use
more RAM.
If you set a `memory` limit of 4GiB for that Container, the kubelet (and
If you set a `memory` limit of 4GiB for that container, the kubelet (and
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}) enforce the limit.
The runtime prevents the container from using more than the configured resource limit. For example:
when a process in the container tries to consume more than the allowed amount of memory,
@ -45,8 +47,8 @@ or by enforcement (the system prevents the container from ever exceeding the lim
runtimes can have different ways to implement the same restrictions.
{{< note >}}
If a Container specifies its own memory limit, but does not specify a memory request, Kubernetes
automatically assigns a memory request that matches the limit. Similarly, if a Container specifies its own
If a container specifies its own memory limit, but does not specify a memory request, Kubernetes
automatically assigns a memory request that matches the limit. Similarly, if a container specifies its own
CPU limit, but does not specify a CPU request, Kubernetes automatically assigns a CPU request that matches
the limit.
{{< /note >}}
@ -56,7 +58,7 @@ the limit.
*CPU* and *memory* are each a *resource type*. A resource type has a base unit.
CPU represents compute processing and is specified in units of [Kubernetes CPUs](#meaning-of-cpu).
Memory is specified in units of bytes.
If you're using Kubernetes v1.14 or newer, you can specify _huge page_ resources.
For Linux workloads, you can specify _huge page_ resources.
Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory
that are much larger than the default page size.
@ -76,9 +78,10 @@ consumed. They are distinct from
[Services](/docs/concepts/services-networking/service/) are objects that can be read and modified
through the Kubernetes API server.
## Resource requests and limits of Pod and Container
## Resource requests and limits of Pod and container
Each Container of a Pod can specify one or more of the following:
For each container, you can specify resource limits and requests,
including the following:
* `spec.containers[].resources.limits.cpu`
* `spec.containers[].resources.limits.memory`
@ -87,49 +90,64 @@ Each Container of a Pod can specify one or more of the following:
* `spec.containers[].resources.requests.memory`
* `spec.containers[].resources.requests.hugepages-<size>`
Although requests and limits can only be specified on individual Containers, it
is convenient to talk about Pod resource requests and limits. A
*Pod resource request/limit* for a particular resource type is the sum of the
resource requests/limits of that type for each Container in the Pod.
Although you can only specify requests and limits for individual containers,
it is also useful to think about the overall resource requests and limits for
a Pod.
For a particular resource, a *Pod resource request/limit* is the sum of the
resource requests/limits of that type for each container in the Pod.
## Resource units in Kubernetes
### Meaning of CPU
### CPU resource units {#meaning-of-cpu}
Limits and requests for CPU resources are measured in *cpu* units.
One cpu, in Kubernetes, is equivalent to **1 vCPU/Core** for cloud providers and **1 hyperthread** on bare-metal Intel processors.
In Kubernetes, 1 CPU unit is equivalent to **1 physical CPU core**,
or **1 virtual core**, depending on whether the node is a physical host
or a virtual machine running inside a physical machine.
Fractional requests are allowed. When you define a container with
`spec.containers[].resources.requests.cpu` set to `0.5`, you are requesting half
as much CPU time compared to if you asked for `1.0` CPU.
For CPU resource units, the expression `0.1` is equivalent to the
For CPU resource units, the [quantity](/docs/reference/kubernetes-api/common-definitions/quantity/) expression `0.1` is equivalent to the
expression `100m`, which can be read as "one hundred millicpu". Some people say
"one hundred millicores", and this is understood to mean the same thing. A
request with a decimal point, like `0.1`, is converted to `100m` by the API, and
precision finer than `1m` is not allowed. For this reason, the form `100m` might
be preferred.
"one hundred millicores", and this is understood to mean the same thing.
CPU is always requested as an absolute quantity, never as a relative quantity;
0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
CPU resource is always specified as an absolute amount of resource, never as a relative amount. For example,
`500m` CPU represents the roughly same amount of computing power whether that container
runs on a single-core, dual-core, or 48-core machine.
### Meaning of memory
{{< note >}}
Kubernetes doesn't allow you to specify CPU resources with a precision finer than
`1m`. Because of this, it's useful to specify CPU units less than `1.0` or `1000m` using
the milliCPU form; for example, `5m` rather than `0.005`.
{{< /note >}}
### Memory resource units {#meaning-of-memory}
Limits and requests for `memory` are measured in bytes. You can express memory as
a plain integer or as a fixed-point number using one of these
[quantity](/docs/reference/kubernetes-api/common-definitions/quantity/) suffixes:
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following represent roughly the same value:
```shell
128974848, 129e6, 129M, 128974848000m, 123Mi
```
Take care about case for suffixes. If you request `400m` of memory, this is a request
for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (`400Mi`)
or 400 megabytes (`400M`).
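
For example, a `resources` fragment that asks for mebibytes, rather than a fraction of a
byte, might look like this sketch:

```yaml
resources:
  requests:
    memory: 400Mi   # 400 mebibytes - probably what you meant
    # memory: 400m  # would mean 0.4 bytes - almost certainly a mistake
```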
## Container resources example {#example-1}
The following Pod has two containers. Both containers are defined with a request for
0.25 CPU
and 64MiB (2<sup>26</sup> bytes) of memory. Each container has a limit of 0.5
CPU and 128MiB of memory. You can say the Pod has a request of 0.5 CPU and 128
MiB of memory, and a limit of 1 CPU and 256MiB of memory.
```yaml
---
apiVersion: v1
kind: Pod
metadata:
@ -162,56 +180,54 @@ When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
run on. Each node has a maximum capacity for each of the resource types: the
amount of CPU and memory it can provide for Pods. The scheduler ensures that,
for each resource type, the sum of the resource requests of the scheduled
containers is less than the capacity of the node.

Note that although actual memory
or CPU resource usage on nodes is very low, the scheduler still refuses to place
a Pod on a node if the capacity check fails. This protects against a resource
shortage on a node when resource usage later increases, for example, during a
daily peak in request rate.
## How Kubernetes applies resource requests and limits {#how-pods-with-resource-limits-are-run}

When the kubelet starts a container as part of a Pod, the kubelet passes that container's
requests and limits for memory and CPU to the container runtime.
On Linux, the container runtime typically configures
kernel {{< glossary_tooltip text="cgroups" term_id="cgroup" >}} that apply and enforce the
limits you defined.
- The CPU limit defines a hard ceiling on how much CPU time the container can use.
During each scheduling interval (time slice), the Linux kernel checks to see if this
limit is exceeded; if so, the kernel waits before allowing that cgroup to resume execution.
- The CPU request typically defines a weighting. If several different containers (cgroups)
want to run on a contended system, workloads with larger CPU requests are allocated more
CPU time than workloads with small requests.
- The memory request is mainly used during (Kubernetes) Pod scheduling. On a node that uses
cgroups v2, the container runtime might use the memory request as a hint to set
`memory.min` and `memory.low`.
- The memory limit defines a memory limit for that cgroup. If the container tries to
allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates
and, typically, intervenes by stopping one of the processes in the container that tried
to allocate memory. If that process is the container's PID 1, and the container is marked
as restartable, Kubernetes restarts the container.
- The memory limit for the Pod or container can also apply to pages in memory backed
volumes, such as an `emptyDir`. The kubelet tracks `tmpfs` emptyDir volumes as container
memory use, rather than as local ephemeral storage.

If a container exceeds its memory request and the node that it runs on becomes short of
memory overall, it is likely that the Pod the container belongs to will be
{{< glossary_tooltip text="evicted" term_id="eviction" >}}.
{{< note >}}
The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
{{</ note >}}
A container might or might not be allowed to exceed its CPU limit for extended periods of time.
However, container runtimes don't terminate Pods or containers for excessive CPU usage.
To determine whether a container cannot be scheduled or is being killed due to resource limits,
see the [Troubleshooting](#troubleshooting) section.
### Monitoring compute & memory resource usage
The kubelet reports the resource usage of a Pod as part of the Pod
[`status`](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status).
If optional [tools for monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
are available in your cluster, then Pod resource usage can be retrieved either
@ -309,21 +325,26 @@ than as local ephemeral storage.
### Setting requests and limits for local ephemeral storage
You can specify `ephemeral-storage` for managing local ephemeral storage. Each
container of a Pod can specify either or both of the following:
* `spec.containers[].resources.limits.ephemeral-storage`
* `spec.containers[].resources.requests.ephemeral-storage`
Limits and requests for `ephemeral-storage` are measured in byte quantities.
You can express storage as a plain integer or as a fixed-point number using one of these suffixes:
E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi,
Mi, Ki. For example, the following quantities all represent roughly the same value:

- `128974848`
- `129e6`
- `129M`
- `123Mi`
In the following example, the Pod has two containers. Each container has a request of
2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral
storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and
a limit of 8GiB of local ephemeral storage.
```yaml
apiVersion: v1
@ -360,9 +381,11 @@ spec:
### How Pods with ephemeral-storage requests are scheduled
When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods.
For more information, see
[Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).

The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node.
### Ephemeral storage consumption management {#resource-emphemeralstorage-consumption}
@ -376,7 +399,7 @@ kubelet measures storage use in:
If a Pod is using more ephemeral storage than you allow it to, the kubelet
sets an eviction signal that triggers Pod eviction.
For container-level isolation, if a container's writable layer and log
usage exceeds its storage limit, the kubelet marks the Pod for eviction.
For pod-level isolation the kubelet works out an overall Pod storage limit by
@ -493,15 +516,19 @@ Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
for how to advertise device plugin managed resources on each node.
##### Other resources
To advertise a new node-level extended resource, the cluster operator can
submit a `PATCH` HTTP request to the API server to specify the available
quantity in the `status.capacity` for a node in the cluster. After this
operation, the node's `status.capacity` will include a new resource. The
`status.allocatable` field is updated automatically with the new resource
asynchronously by the kubelet.

Because the scheduler uses the node's `status.allocatable` value when
evaluating Pod fitness, the scheduler only takes account of the new value after
that asynchronous update. There may be a short delay between patching the
node capacity with a new resource and the time when the first Pod that requests
the resource can be scheduled on that node.
**Example:**
@ -529,7 +556,7 @@ Cluster-level extended resources are not tied to nodes. They are usually managed
by scheduler extenders, which handle the resource consumption and resource quota.
You can specify the extended resources that are handled by scheduler extenders
in [scheduler configuration](/docs/reference/config-api/kube-scheduler-config.v1beta3/).
**Example:**
@ -611,27 +638,32 @@ spec:
## PID limiting
Process ID (PID) limits allow for the configuration of a kubelet
to limit the number of PIDs that a given Pod can consume. See
[PID Limiting](/docs/concepts/policy/pid-limiting/) for information.
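
If you manage kubelet settings through a configuration file, a per-Pod PID limit is set
with the `podPidsLimit` field. The following is a minimal sketch; the value shown is
only an illustration:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Each Pod on this node may use at most this many process IDs (illustrative value).
podPidsLimit: 4096
```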
## Troubleshooting
### My Pods are pending with event message `FailedScheduling`
If the scheduler cannot find any node where a Pod can fit, the Pod remains
unscheduled until a place can be found. An
[Event](/docs/reference/kubernetes-api/cluster-resources/event-v1/) is produced
each time the scheduler fails to find a place for the Pod. You can use `kubectl`
to view the events for a Pod; for example:
```shell
kubectl describe pod frontend | grep -A 9999999999 Events
```
```
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  23s   default-scheduler  0/42 nodes available: insufficient cpu
```
In the preceding example, the Pod named "frontend" fails to be scheduled due to
insufficient CPU resource on any node. Similar error messages can also suggest
failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod
is pending with a message of this type, there are several things to try:
@ -640,6 +672,9 @@ is pending with a message of this type, there are several things to try:
- Check that the Pod is not larger than all the nodes. For example, if all the
nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will
never be scheduled.
- Check for node taints. If most of your nodes are tainted, and the new Pod does
not tolerate that taint, the scheduler only considers placements onto the
remaining nodes that don't have that taint. One option is to add a matching
toleration to the Pod, as in the sketch after this list.
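
The following is a minimal, illustrative sketch of such a toleration; the taint key and
value, the Pod name, and the image are all hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                      # hypothetical Pod name
spec:
  tolerations:
  - key: "dedicated"                  # hypothetical taint key on the nodes
    operator: "Equal"
    value: "experimental"             # hypothetical taint value
    effect: "NoSchedule"
  containers:
  - name: app
    image: registry.example/app:v1    # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 128Mi
```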
You can check node capacities and amounts allocated with the
`kubectl describe nodes` command. For example:
@ -674,31 +709,46 @@ Allocated resources:
680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)
```
In the preceding output, you can see that if a Pod requests more than 1.120 CPUs,
or more than 6.23Gi of memory, that Pod will not fit on the node.

By looking at the “Pods” section, you can see which Pods are taking up space on
the node.
The amount of resources available to Pods is less than the node capacity, because
system daemons use a portion of the available resources. Within the Kubernetes API,
each Node has a `.status.allocatable` field
(see [NodeStatus](/docs/reference/kubernetes-api/cluster-resources/node-v1/#NodeStatus)
for details).

The `.status.allocatable` field describes the amount of resources that are available
to Pods on that node (for example: 15 virtual CPUs and 7538 MiB of memory).
For more information on node allocatable resources in Kubernetes, see
[Reserve Compute Resources for System Daemons](/docs/tasks/administer-cluster/reserve-compute-resources/).
You can configure [resource quotas](/docs/concepts/policy/resource-quotas/)
to limit the total amount of resources that a namespace can consume.
Kubernetes enforces quotas for objects in a particular namespace when there is a
ResourceQuota in that namespace.
For example, if you assign specific namespaces to different teams, you
can add ResourceQuotas into those namespaces. Setting resource quotas helps to
prevent one team from using so much of any resource that this over-use affects other teams.

You should also consider what access you grant to that namespace:
**full** write access to a namespace allows someone with that access to remove any
resource, including a configured ResourceQuota.
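
As an illustrative sketch (the namespace name and the amounts are placeholders), a
ResourceQuota that caps the total CPU and memory requests and limits in a namespace
could look like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-compute      # hypothetical name
  namespace: team-a         # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU requests across the namespace
    requests.memory: 8Gi    # total memory requests across the namespace
    limits.cpu: "8"
    limits.memory: 16Gi
```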
### My container is terminated
Your container might get terminated because it is resource-starved. To check
whether a container is being killed because it is hitting a resource limit, call
`kubectl describe pod` on the Pod of interest:
```shell
kubectl describe pod simmemleak-hra99
```
The output is similar to:
```
Name: simmemleak-hra99
Namespace: default
@ -709,57 +759,48 @@ Status: Running
Reason:
Message:
IP: 10.244.2.75
Containers:
simmemleak:
    Image:  saadali/simmemleak:latest
Limits:
cpu: 100m
memory: 50Mi
State: Running
      Started:      Tue, 07 Jul 2019 12:54:41 -0700
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Fri, 07 Jul 2019 12:54:30 -0700
      Finished:     Fri, 07 Jul 2019 12:54:33 -0700
Ready: False
Restart Count: 5
Conditions:
Type Status
Ready False
Events:
  Type      Reason     Age   From               Message
---- ------ ---- ---- -------
Normal Scheduled 42s default-scheduler Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
Normal Pulled 41s kubelet Container image "saadali/simmemleak:latest" already present on machine
Normal Created 41s kubelet Created container simmemleak
Normal Started 40s kubelet Started container simmemleak
Normal Killing 32s kubelet Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod
```
In the preceding example, the `Restart Count: 5` indicates that the `simmemleak`
container in the Pod was terminated and restarted five times (so far).
The `OOMKilled` reason shows that the container tried to use more memory than its limit.
You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status
of previously terminated containers:
```shell
kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
```
```
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
```
You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory.
Your next step might be to check the application code for a memory leak. If you
find that the application is behaving how you expect, consider setting a higher
memory limit (and possibly request) for that container.
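
For instance, raising the limit might look like this small fragment; the values are
illustrative only:

```yaml
resources:
  requests:
    memory: 128Mi
  limits:
    memory: 256Mi   # raised from an earlier, lower limit
```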
## {{% heading "whatsnext" %}}
* Get hands-on experience [assigning Memory resources to containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
* Get hands-on experience [assigning CPU resources to containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
* Read how the API reference defines a [container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
  and its [resource requirements](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
* Read more about the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)

View File

@ -234,8 +234,8 @@ so the above scenario will not apply if the node is, for example, under `DiskPre
`Guaranteed` pods are guaranteed only when requests and limits are specified for
all the containers and they are equal. These pods will never be evicted because
of another pod's resource consumption. If a system daemon (such as `kubelet`
and `journald`) is consuming more resources than were reserved via
`system-reserved` or `kube-reserved` allocations, and the node only has
`Guaranteed` or `Burstable` pods using less resources than requests left on it,
then the kubelet must choose to evict one of these pods to preserve node stability

View File

@ -5,8 +5,50 @@ description: >
Concepts and resources behind networking in Kubernetes.
---
## The Kubernetes network model
Every [`Pod`](/docs/concepts/workloads/pods/) gets its own IP address.
This means you do not need to explicitly create links between `Pods` and you
almost never need to deal with mapping container ports to host ports.
This creates a clean, backwards-compatible model where `Pods` can be treated
much like VMs or physical hosts from the perspectives of port allocation,
naming, service discovery, [load balancing](/docs/concepts/services-networking/ingress/#load-balancing), application configuration,
and migration.
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
* pods on a [node](/docs/concepts/architecture/nodes/) can communicate with all pods on all nodes without NAT
* agents on a node (e.g. system daemons, kubelet) can communicate with all
pods on that node
Note: For those platforms that support `Pods` running in the host network (e.g.
Linux):
* pods in the host network of a node can communicate with all pods on all
nodes without NAT
This model is not only less complex overall, but it is principally compatible
with the desire for Kubernetes to enable low-friction porting of apps from VMs
to containers. If your job previously ran in a VM, your VM had an IP and could
talk to other VMs in your project. This is the same basic model.
Kubernetes IP addresses exist at the `Pod` scope - containers within a `Pod`
share their network namespaces - including their IP address and MAC address.
This means that containers within a `Pod` can all reach each other's ports on
`localhost`. This also means that containers within a `Pod` must coordinate port
usage, but this is no different from processes in a VM. This is called the
"IP-per-pod" model.
How this is implemented is a detail of the particular container runtime in use.
It is possible to request ports on the `Node` itself which forward to your `Pod`
(called host ports), but this is a very niche operation. How that forwarding is
implemented is also a detail of the container runtime. The `Pod` itself is
blind to the existence or non-existence of host ports.
Kubernetes networking addresses four concerns:
- Containers within a Pod [use networking to communicate](/docs/concepts/services-networking/dns-pod-service/) via loopback.
- Cluster networking provides communication between different Pods.
- The [Service resource](/docs/concepts/services-networking/service/) lets you [expose an application running in Pods](/docs/concepts/services-networking/connect-applications-service/) to be reachable from outside your cluster.
- You can also use Services to [publish services only for consumption inside your cluster](/docs/concepts/services-networking/service-traffic-policy/).

View File

@ -33,7 +33,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
* [EnRoute](https://getenroute.io/) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md) is an [Easegress](https://megaease.com/easegress/) based API gateway that can run as an ingress controller.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
lets you use an Ingress to configure F5 BIG-IP virtual servers.
* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io),

View File

@ -80,7 +80,7 @@ The name of an Ingress object must be a valid
For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md).
Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for
your choice of Ingress controller to learn which annotations are supported.
The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)

View File

@ -10,7 +10,7 @@ weight: 50
<!-- overview -->
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a {{< glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network. NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to other connections.
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
@ -27,15 +27,17 @@ Meanwhile, when IP based NetworkPolicies are created, we define policies based o
Network policies are implemented by the [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
## The Two Sorts of Pod Isolation
There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress. They concern what connections may be established. "Isolation" here is not absolute, rather it means "some restrictions apply". The alternative, "non-isolated for $direction", means that no restrictions apply in the stated direction. The two sorts of isolation (or not) are declared independently, and are both relevant for a connection from one pod to another.

By default, a pod is non-isolated for egress; all outbound connections are allowed. A pod is isolated for egress if there is any NetworkPolicy that both selects the pod and has "Egress" in its `policyTypes`; we say that such a policy applies to the pod for egress. When a pod is isolated for egress, the only allowed connections from the pod are those allowed by the `egress` list of some NetworkPolicy that applies to the pod for egress. The effects of those `egress` lists combine additively.

By default, a pod is non-isolated for ingress; all inbound connections are allowed. A pod is isolated for ingress if there is any NetworkPolicy that both selects the pod and has "Ingress" in its `policyTypes`; we say that such a policy applies to the pod for ingress. When a pod is isolated for ingress, the only allowed connections into the pod are those from the pod's node and those allowed by the `ingress` list of some NetworkPolicy that applies to the pod for ingress. The effects of those `ingress` lists combine additively.

Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod are the union of what the applicable policies allow. Thus, order of evaluation does not affect the policy result.
For a connection from a source pod to a destination pod to be allowed, both the egress policy on the source pod and the ingress policy on the destination pod need to allow the connection. If either side does not allow the connection, it will not happen.
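
To make this concrete, here is a minimal sketch (the policy name, namespace, and labels are hypothetical) of a policy that isolates the selected pods for ingress only, allowing connections solely from pods labelled `role: frontend`:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend            # pods this policy applies to (hypothetical label)
  policyTypes:
  - Ingress                   # isolates the selected pods for ingress only
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
```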
## The NetworkPolicy resource {#networkpolicy-resource}
@ -176,18 +178,20 @@ in that namespace.
### Default deny all ingress traffic
You can create a "default" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods.
You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods.
{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}}
This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated for ingress. This policy does not affect isolation for egress from any pod.
### Allow all ingress traffic

If you want to allow all incoming connections to all pods in a namespace, you can create a policy that explicitly allows that.
{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}}
With this policy in place, no additional policy or policies can cause any incoming connection to those pods to be denied. This policy has no effect on isolation for egress from any pod.
### Default deny all egress traffic
You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods.
@ -195,14 +199,16 @@ You can create a "default" egress isolation policy for a namespace by creating a
{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}}
This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not
change the ingress isolation behavior of any pod.
### Allow all egress traffic

If you want to allow all connections from all pods in a namespace, you can create a policy that explicitly allows all outgoing connections from pods in that namespace.
{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}}
With this policy in place, no additional policy or policies can cause any outgoing connection from those pods to be denied. This policy has no effect on isolation for ingress to any pod.
### Default deny all ingress and all egress traffic
You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace.

View File

@ -254,6 +254,16 @@ To request a larger volume for a PVC, edit the PVC object and specify a larger
size. This triggers expansion of the volume that backs the underlying PersistentVolume. A
new PersistentVolume is never created to satisfy the claim. Instead, an existing volume is resized.
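
For illustration (the claim name, storage class, and sizes are placeholders, and the StorageClass must allow expansion), requesting a larger volume means editing the claim's `spec.resources.requests.storage`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                         # hypothetical PVC name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-expandable    # hypothetical StorageClass with allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi                        # increased from an earlier value such as 10Gi
```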
{{< warning >}}
Directly editing the size of a PersistentVolume can prevent an automatic resize of that volume.
If you edit the capacity of a PersistentVolume, and then edit the `.spec` of a matching
PersistentVolumeClaim to make the size of the PersistentVolumeClaim match the PersistentVolume,
then no storage resize happens.
The Kubernetes control plane will see that the desired state of both resources matches,
conclude that the backing volume size has been manually
increased and that no resize is necessary.
{{< /warning >}}
#### CSI Volume expansion
{{< feature-state for_k8s_version="v1.16" state="beta" >}}

View File

@ -39,6 +39,10 @@ request a particular class. Administrators set the name and other parameters
of a class when first creating VolumeSnapshotClass objects, and the objects cannot
be updated once they are created.
{{< note >}}
Installation of the CRDs is the responsibility of the Kubernetes distribution. Without the required CRDs present, the creation of a VolumeSnapshotClass fails.
{{< /note >}}
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass

View File

@ -110,15 +110,13 @@ membership in the Kubernetes organization.
## Serve as a SIG Co-chair
SIG Docs [members](/docs/contribute/participate/roles-and-responsibilities/#members)
can serve a term as a co-chair of SIG Docs.
### Prerequisites
A Kubernetes member must meet the following requirements to be a co-chair:
- Have been a SIG Docs approver for at least 6 months
- Have [led a Kubernetes docs release](/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release) or shadowed two releases
- Understand SIG Docs workflows and tooling: git, Hugo, localization, blog subproject
- Understand how other Kubernetes SIGs and repositories affect the SIG Docs
workflow, including:
@ -126,6 +124,8 @@ Approvers must meet the following requirements to be a co-chair:
[process in k/community](https://github.com/kubernetes/community/tree/master/sig-docs),
plugins in [k/test-infra](https://github.com/kubernetes/test-infra/), and the role of
[SIG Architecture](https://github.com/kubernetes/community/tree/master/sig-architecture).
In addition, understand how the [Kubernetes docs release process](/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release) works.
- Approved by the SIG Docs community either directly or via lazy consensus.
- Commit at least 5 hours per week (and often more) to the role for a minimum of 6 months
### Responsibilities

View File

@ -85,7 +85,11 @@ class third,fourth white
- Reading the PR description to understand the changes made, and read any linked issues
- Reading any comments by other reviewers
- Clicking the **Files changed** tab to see the files and lines changed
- Previewing the changes in the Netlify preview build by scrolling to the PR's build check section at the bottom of the **Conversation** tab.
Here's a screenshot (this shows GitHub's desktop site; if you're reviewing
on a tablet or smartphone device, the GitHub web UI is slightly different):
{{< figure src="/images/docs/github_netlify_deploy_preview.png" alt="GitHub pull request details including link to Netlify preview" >}}
To open the preview, click on the **Details** link of the **deploy/netlify** line in the list of checks.
4. Go to the **Files changed** tab to start your review.
1. Click on the `+` symbol beside the line you want to comment on.

View File

@ -0,0 +1,671 @@
---
title: Diagram Guide
linktitle: Diagram guide
content_type: concept
weight: 15
---
<!--Overview-->
This guide shows you how to create, edit and share diagrams using the Mermaid JavaScript library. Mermaid.js allows you to generate diagrams using a simple markdown-like syntax inside Markdown files. You can also use Mermaid to generate `.svg` or `.png` image files that you can add to your documentation.
The target audience for this guide is anybody wishing to learn about Mermaid and/or how to create and add diagrams to Kubernetes documentation.
Figure 1 outlines the topics covered in this section.
{{< mermaid >}}
flowchart LR
subgraph m[Mermaid.js]
direction TB
S[ ]-.-
C[build<br>diagrams<br>with markdown] -->
D[on-line<br>live editor]
end
A[Why are diagrams<br>useful?] --> m
m --> N[3 x methods<br>for creating<br>diagrams]
N --> T[Examples]
T --> X[Styling<br>and<br>captions]
X --> V[Tips]
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
class A,C,D,N,X,m,T,V box
class S spacewhite
%% you can hyperlink Mermaid diagram nodes to a URL using click statements
click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank
click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank
click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank
click N "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank
click T "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank
click X "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank
click V "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank
{{< /mermaid >}}
Figure 1. Topics covered in this section.
All you need to begin working with Mermaid is the following:
* Basic understanding of markdown.
* Using the Mermaid live editor.
* Using [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/).
* Using the [Hugo {{</* figure */>}} shortcode](https://gohugo.io/content-management/shortcodes/#figure).
* Performing [Hugo local previews](https://kubernetes.io/docs/contribute/new-content/open-a-pr/#preview-locally).
* Familiarity with the [Contributing new content](/docs/contribute/new-content/) process.
{{< note >}}
You can click on each diagram in this section to view the code and rendered diagram in the Mermaid live editor.
{{< /note >}}
<!--body-->
## Why you should use diagrams in documentation
Diagrams improve documentation clarity and comprehension. There are advantages for both the user and the contributor.
The user benefits include:
* __Friendly landing spot__. A detailed text-only greeting page could intimidate users, in particular, first-time Kubernetes users.
* __Faster grasp of concepts__. A diagram can help users understand the key points of a complex topic. Your diagram can serve as a visual learning guide to dive into the topic details.
* __Better retention__. For some, it is easier to recall pictures rather than text.
The contributor benefits include:
* __Assist in developing the structure and content__ of your contribution. For example, you can start with a simple diagram covering the high-level points and then dive into details.
* __Expand and grow the user community__. Easily consumed documentation augmented with diagrams attracts new users who might previously have been reluctant to engage due to perceived complexities.
You should consider your target audience. In addition to experienced K8s users, you will have many who are new to Kubernetes. Even a simple diagram can assist new users in absorbing Kubernetes concepts. They become emboldened and more confident to further explore Kubernetes and the documentation.
## Mermaid
[Mermaid](https://mermaid-js.github.io/mermaid/#/) is an open source JavaScript library that allows you to create, edit and easily share diagrams using a simple, markdown-like syntax configured inline in Markdown files.
The following lists features of Mermaid:
* Simple code syntax.
* Includes a web-based tool allowing you to code and preview your diagrams.
* Supports multiple formats including flowchart, state and sequence.
* Easy collaboration with colleagues by sharing a per-diagram URL.
* Broad selection of shapes, lines, themes and styling.
The following lists advantages of using Mermaid:
* No need for separate, non-Mermaid diagram tools.
* Adheres to existing PR workflow. You can think of Mermaid code as just Markdown text included in your PR.
* Simple tool builds simple diagrams. You don't want to get bogged down (re)crafting an overly complex and detailed picture. Keep it simple!
Mermaid provides a simple, open and transparent method for the SIG communities to add, edit and collaborate on diagrams for new or existing documentation.
{{< note >}}
You can still use Mermaid to create/edit diagrams even if it's not supported in your environment. This method is called __Mermaid+SVG__ and is explained below.
{{< /note >}}
### Live editor
The [Mermaid live editor](https://mermaid-js.github.io/mermaid-live-editor) is a web-based tool that enables you to create, edit and review diagrams.
The following lists live editor functions:
* Displays Mermaid code and rendered diagram.
* Generates a URL for each saved diagram. The URL is displayed in the URL field of your browser. You can share the URL with colleagues who can access and modify the diagram.
* Option to download `.svg` or `.png` files.
{{< note >}}
The live editor is the easiest and fastest way to create and edit Mermaid diagrams.
{{< /note >}}
## Methods for creating diagrams
Figure 2 outlines the three methods to generate and add diagrams.
{{< mermaid >}}
graph TB
A[Contributor]
B[Inline<br><br>Mermaid code<br>added to .md file]
C[Mermaid+SVG<br><br>Add mermaid-generated<br>svg file to .md file]
D[External tool<br><br>Add external-tool-<br>generated svg file<br>to .md file]
A --> B
A --> C
A --> D
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
class A,B,C,D box
%% you can hyperlink Mermaid diagram nodes to a URL using click statements
click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
{{< /mermaid >}}
Figure 2. Methods to create diagrams.
### Inline
Figure 3 outlines the steps to follow for adding a diagram using the Inline method.
{{< mermaid >}}
graph LR
A[1. Use live editor<br> to create/edit<br>diagram] -->
B[2. Store diagram<br>URL somewhere] -->
C[3. Copy Mermaid code<br>to page markdown file] -->
D[4. Add caption]
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
class A,B,C,D box
%% you can hyperlink Mermaid diagram nodes to a URL using click statements
click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank
{{< /mermaid >}}
Figure 3. Inline Method steps.
The following lists the steps you should follow for adding a diagram using the Inline method:
1. Create your diagram using the live editor.
2. Store the diagram URL somewhere for later access.
3. Copy the mermaid code to the location in your `.md` file where you want the diagram to appear.
4. Add a caption below the diagram using Markdown text.
A Hugo build runs the Mermaid code and turns it into a diagram.
{{< note >}}
You may find that keeping track of diagram URLs is cumbersome. If so, add a note in the `.md` file stating that the Mermaid code is self-documenting. Contributors can copy the Mermaid code to and from the live editor to make diagram edits.
{{< /note >}}
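For example, you could add a comment like the following just above the inline Mermaid block; this is only a sketch and the exact wording is up to you:
```text
<!-- The Mermaid code below is the source of truth for this diagram.
To edit it, copy the code into the Mermaid live editor, make your changes,
and paste the updated code back here. -->
```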
Here is a sample code snippet contained in an `.md` file:
```
---
title: My PR
---
Figure 17 shows a simple A to B process.
some markdown text
...
{{</* mermaid */>}}
graph TB
A --> B
{{</* /mermaid */>}}
Figure 17. A to B
more text
```
{{< note >}}
You must include the `{{</* mermaid */>}}`, `{{</* /mermaid */>}}` shortcode tags at the start and end of the Mermaid code block. You should add a diagram caption below the diagram.
{{< /note >}}
For more details on diagram captions, see [How to use captions](#how-to-use-captions).
The following lists advantages of the Inline method:
* Live editor tool.
* Easy to copy Mermaid code to and from the live editor and your `.md` file.
* No need for separate `.svg` image file handling.
* Content text, diagram code and diagram caption contained in the same `.md` file.
You should use the [local](https://kubernetes.io/docs/contribute/new-content/open-a-pr/#preview-locally) and Netlify previews to verify the diagram is properly rendered.
{{< caution >}}
The Mermaid version used by the live editor might not match the Mermaid version used by the K8s/website. A diagram that renders in the live editor might show a syntax error or a blank screen after the Hugo build. If that is the case, consider using the Mermaid+SVG method.
{{< /caution >}}
### Mermaid+SVG
Figure 4 outlines the steps to follow for adding a diagram using the Mermaid+SVG method.
{{< mermaid >}}
flowchart LR
A[1. Use live editor<br> to create/edit<br>diagram]
B[2. Store diagram<br>URL somewhere]
C[3. Generate .svg file<br>and download to<br>images/ folder]
subgraph w[ ]
direction TB
D[4. Use figure shortcode<br>to reference .svg<br>file in page<br>.md file] -->
E[5. Add caption]
end
A --> B
B --> C
C --> w
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
class A,B,C,D,E,w box
click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
click E "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
{{< /mermaid >}}
Figure 4. Mermaid+SVG method steps.
The following lists the steps you should follow for adding a diagram using the Mermaid+SVG method:
1. Create your diagram using the live editor.
2. Store the diagram URL somewhere for later access.
3. Generate an `.svg` image file for the diagram and download it to the appropriate `images/` folder.
4. Use the `{{</* figure */>}}` shortcode to reference the diagram in the `.md` file.
5. Add a caption using the `{{</* figure */>}}` shortcode's `caption` parameter.
For example, use the live editor to create a diagram called `boxnet`. Store the diagram URL somewhere for later access. Generate and download a `boxnet.svg` file to the appropriate `../images/` folder.
Use the `{{</* figure */>}}` shortcode in your PR's `.md` file to reference the `.svg` image file and add a caption.
```text
{{</* figure src="/static/images/boxnet.svg" alt="Boxnet figure" class="diagram-large" caption="Figure 14. Boxnet caption" */>}}
```
For more details on diagram captions, see [How to use captions](#how-to-use-captions).
{{< note >}}
The `{{</* figure */>}}` shortcode is the preferred method for adding `.svg` image files to your documentation. You can also use the standard Markdown image syntax: `![my boxnet diagram](static/images/boxnet.svg)`. If you do, you must add a caption below the diagram, as shown in the example after this note.
{{< /note >}}
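For example, using the plain Markdown image syntax with the hypothetical `boxnet` diagram from above, the diagram and its caption would look like this:
```text
![my boxnet diagram](static/images/boxnet.svg)

Figure 14. Boxnet caption
```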
You should add the live editor URL as a comment block in the `.svg` image file using a text editor. For example, you would include the following at the beginning of the `.svg` image file:
```
<!-- To view or edit the mermaid code, use the following URL: -->
<!-- https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb ... <remainder of the URL> -->
```
The following lists advantages of the Mermaid+SVG method:
* Live editor tool.
* Live editor tool supports the most current Mermaid feature set.
* Employ existing K8s/website methods for handling `.svg` image files.
* Environment doesn't require Mermaid support.
Be sure to check that your diagram renders properly using the [local](https://kubernetes.io/docs/contribute/new-content/open-a-pr/#preview-locally) and Netlify previews.
### External tool
Figure 5 outlines the steps to follow for adding a diagram using the External Tool method.
First, use your external tool to create the diagram and save it as an `.svg` or `.png` image file. After that, use the same steps as the __Mermaid+SVG__ method for adding `.svg` image files.
{{< mermaid >}}
flowchart LR
A[1. Use external<br>tool to create/edit<br>diagram]
B[2. If possible, save<br>diagram coordinates<br>for contributor<br>access]
C[3. Generate .svg <br>or .png file<br>and download to<br>appropriate<br>images/ folder]
subgraph w[ ]
direction TB
D[4. Use figure shortcode<br>to reference svg or<br>png file in<br>page .md file] -->
E[5. Add caption]
end
A --> B
B --> C
C --> w
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
class A,B,C,D,E,w box
click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ"
click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ"
click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ"
click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ"
click E "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ"
{{< /mermaid >}}
Figure 5. External Tool method steps.
The following lists the steps you should follow for adding a diagram using the External Tool method:
1. Use your external tool to create a diagram.
2. Save the diagram coordinates for contributor access. For example, your tool may offer a link to the diagram image, or you could place the source code file, such as an `.xml` file, in a public repository for later contributor access.
3. Generate and save the diagram as an `.svg` or `.png` image file. Download this file to the appropriate `../images/` folder.
4. Use the `{{</* figure */>}}` shortcode to reference the diagram in the `.md` file.
5. Add a caption using the `{{</* figure */>}}` shortcode's `caption` parameter.
Here is the `{{</* figure */>}}` shortcode for the `images/apple.svg` diagram:
```text
{{</* figure src="/static/images/apple.svg" alt="red-apple-figure" class="diagram-large" caption="Figure 9. A Big Red Apple" */>}}
```
If your external drawing tool permits:
* You can incorporate multiple `.svg` or `.png` logos, icons and images into your diagram. However, make sure you observe copyright and follow the Kubernetes documentation
[guidelines](/docs/contribute/style/content-guide/) on the use of third party content.
* You should save the diagram source coordinates for later contributor access. For example, your tool may offer a link to the diagram image, or you could place the source code file, such as an `.xml` file, somewhere for contributor access, as shown in the sketch after this list.
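As with the Mermaid+SVG method, you can record where the diagram source lives inside the `.svg` image file itself. Here is a sketch of such a comment block; the repository path is hypothetical:
```
<!-- To view or edit this diagram, use the source file stored at: -->
<!-- https://github.com/<your-org>/<your-repo>/blob/main/diagrams/apple.xml (example location) -->
```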
For more information on K8s and CNCF logos and images, check out [CNCF Artwork](https://github.com/cncf/artwork).
The following lists advantages of the External Tool method:
* Contributor familiarity with external tool.
* Suited for diagrams that require more detail than Mermaid can offer.
Don't forget to check that your diagram renders correctly using the [local](https://kubernetes.io/docs/contribute/new-content/open-a-pr/#preview-locally) and Netlify previews.
## Examples
This section shows several examples of Mermaid diagrams.
{{< note >}}
The code block examples omit the Hugo `{{</* mermaid */>}}`, `{{</* /mermaid */>}}` shortcode tags. This allows you to copy the code block into the live editor to experiment on your own. Note that the live editor doesn't recognize Hugo shortcodes.
{{< /note >}}
### Example 1 - Pod topology spread constraints
Figure 6 shows the diagram appearing in the [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#node-labels) page.
{{< mermaid >}}
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
click n3 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
click n4 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
click n1 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
click n2 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank
{{< /mermaid >}}
Figure 6. Pod Topology Spread Constraints.
Code block:
```
graph TB
subgraph "zoneB"
n3(Node3)
n4(Node4)
end
subgraph "zoneA"
n1(Node1)
n2(Node2)
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class n1,n2,n3,n4 k8s;
class zoneA,zoneB cluster;
```
### Example 2 - Ingress
Figure 7 shows the diagram appearing in the [What is Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress) page.
{{< mermaid >}}
graph LR;
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress];
ingress-->|routing rule|service[Service];
subgraph cluster
ingress;
service-->pod1[Pod];
service-->pod2[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service,pod1,pod2 k8s;
class client plain;
class cluster cluster;
click client "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank
click ingress "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank
click service "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank
click pod1 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank
click pod2 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank
{{< /mermaid >}}
Figure 7. Ingress.
Code block:
```mermaid
graph LR;
client([client])-. Ingress-managed <br> load balancer .->ingress[Ingress];
ingress-->|routing rule|service[Service];
subgraph cluster
ingress;
service-->pod1[Pod];
service-->pod2[Pod];
end
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class ingress,service,pod1,pod2 k8s;
class client plain;
class cluster cluster;
```
### Example 3 - K8s system flow
Figure 8 depicts a Mermaid sequence diagram showing the system flow between K8s components to start a container.
{{< figure src="/docs/images/diagram-guide-example-3.svg" alt="K8s system flow diagram" class="diagram-large" caption="Figure 8. K8s system flow diagram" link="https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiJSV7aW5pdDp7XCJ0aGVtZVwiOlwibmV1dHJhbFwifX0lJVxuc2VxdWVuY2VEaWFncmFtXG4gICAgYWN0b3IgbWVcbiAgICBwYXJ0aWNpcGFudCBhcGlTcnYgYXMgY29udHJvbCBwbGFuZTxicj48YnI-YXBpLXNlcnZlclxuICAgIHBhcnRpY2lwYW50IGV0Y2QgYXMgY29udHJvbCBwbGFuZTxicj48YnI-ZXRjZCBkYXRhc3RvcmVcbiAgICBwYXJ0aWNpcGFudCBjbnRybE1nciBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5jb250cm9sbGVyPGJyPm1hbmFnZXJcbiAgICBwYXJ0aWNpcGFudCBzY2hlZCBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5zY2hlZHVsZXJcbiAgICBwYXJ0aWNpcGFudCBrdWJlbGV0IGFzIG5vZGU8YnI-PGJyPmt1YmVsZXRcbiAgICBwYXJ0aWNpcGFudCBjb250YWluZXIgYXMgbm9kZTxicj48YnI-Y29udGFpbmVyPGJyPnJ1bnRpbWVcbiAgICBtZS0-PmFwaVNydjogMS4ga3ViZWN0bCBjcmVhdGUgLWYgcG9kLnlhbWxcbiAgICBhcGlTcnYtLT4-ZXRjZDogMi4gc2F2ZSBuZXcgc3RhdGVcbiAgICBjbnRybE1nci0-PmFwaVNydjogMy4gY2hlY2sgZm9yIGNoYW5nZXNcbiAgICBzY2hlZC0-PmFwaVNydjogNC4gd2F0Y2ggZm9yIHVuYXNzaWduZWQgcG9kcyhzKVxuICAgIGFwaVNydi0-PnNjaGVkOiA1LiBub3RpZnkgYWJvdXQgcG9kIHcgbm9kZW5hbWU9XCIgXCJcbiAgICBzY2hlZC0-PmFwaVNydjogNi4gYXNzaWduIHBvZCB0byBub2RlXG4gICAgYXBpU3J2LS0-PmV0Y2Q6IDcuIHNhdmUgbmV3IHN0YXRlXG4gICAga3ViZWxldC0-PmFwaVNydjogOC4gbG9vayBmb3IgbmV3bHkgYXNzaWduZWQgcG9kKHMpXG4gICAgYXBpU3J2LT4-a3ViZWxldDogOS4gYmluZCBwb2QgdG8gbm9kZVxuICAgIGt1YmVsZXQtPj5jb250YWluZXI6IDEwLiBzdGFydCBjb250YWluZXJcbiAgICBrdWJlbGV0LT4-YXBpU3J2OiAxMS4gdXBkYXRlIHBvZCBzdGF0dXNcbiAgICBhcGlTcnYtLT4-ZXRjZDogMTIuIHNhdmUgbmV3IHN0YXRlIiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjp0cnVlfQ" >}}
Code block:
```
%%{init:{"theme":"neutral"}}%%
sequenceDiagram
actor me
participant apiSrv as control plane<br><br>api-server
participant etcd as control plane<br><br>etcd datastore
participant cntrlMgr as control plane<br><br>controller<br>manager
participant sched as control plane<br><br>scheduler
participant kubelet as node<br><br>kubelet
participant container as node<br><br>container<br>runtime
me->>apiSrv: 1. kubectl create -f pod.yaml
apiSrv-->>etcd: 2. save new state
cntrlMgr->>apiSrv: 3. check for changes
sched->>apiSrv: 4. watch for unassigned pod(s)
apiSrv->>sched: 5. notify about pod w nodename=" "
sched->>apiSrv: 6. assign pod to node
apiSrv-->>etcd: 7. save new state
kubelet->>apiSrv: 8. look for newly assigned pod(s)
apiSrv->>kubelet: 9. bind pod to node
kubelet->>container: 10. start container
kubelet->>apiSrv: 11. update pod status
apiSrv-->>etcd: 12. save new state
```
## How to style diagrams
You can style one or more diagram elements using well-known CSS nomenclature. You accomplish this using two types of statements in the Mermaid code.
* `classDef` defines a class of style attributes.
* `class` defines one or more elements to apply the class to.
In the code for [figure 7](https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0), you can see examples of both.
```
%% defines the style for the k8s class
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
%% applies the k8s class to the elements ingress, service, pod1 and pod2
class ingress,service,pod1,pod2 k8s;
```
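Here is a minimal, self-contained sketch (with hypothetical node names) that you can paste into the live editor to see both statement types in action:
```
graph LR
  A[Control plane] --> B[Node]

  %% define a style class using the official K8s blue, then apply it to both nodes
  classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
  class A,B k8s;
```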
You can include one or more `classDef` and `class` statements in your diagram. You can also use the official K8s `#326ce5` hex color code for K8s components.
For more information on styling and classes, see [Mermaid Styling and classes docs](https://mermaid-js.github.io/mermaid/#/flowchart?id=styling-and-classes).
## How to use captions
A caption is a brief description of a diagram, such as a title or a short summary of what the diagram shows. Captions aren't meant to replace explanatory text you have in your documentation. Rather, they serve as a "context link" between that text and your diagram.
The combination of text and a diagram, tied together with a caption, helps provide a concise representation of the information you wish to convey to the user.
Without captions, you are asking the user to scan the text above or below the diagram to figure out its meaning. This can be frustrating for the user.
Figure 9 lays out the three components for proper captioning: diagram, diagram caption and the diagram referral.
{{< mermaid >}}
flowchart
A[Diagram<br><br>Inline Mermaid or<br>SVG image files]
B[Diagram Caption<br><br>Add Figure Number. and<br>Caption Text]
C[Diagram Referral<br><br>Reference Figure Number<br>in text]
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
class A,B,C box
click A "https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZmxvd2NoYXJ0XG4gICAgQVtEaWFncmFtPGJyPjxicj5JbmxpbmUgTWVybWFpZCBvcjxicj5TVkcgaW1hZ2UgZmlsZXNdXG4gICAgQltEaWFncmFtIENhcHRpb248YnI-PGJyPkFkZCBGaWd1cmUgTnVtYmVyLiBhbmQ8YnI-Q2FwdGlvbiBUZXh0XVxuICAgIENbRGlhZ3JhbSBSZWZlcnJhbDxicj48YnI-UmVmZXJlbmVuY2UgRmlndXJlIE51bWJlcjxicj5pbiB0ZXh0XVxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMgYm94IiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0" _blank
click B "https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZmxvd2NoYXJ0XG4gICAgQVtEaWFncmFtPGJyPjxicj5JbmxpbmUgTWVybWFpZCBvcjxicj5TVkcgaW1hZ2UgZmlsZXNdXG4gICAgQltEaWFncmFtIENhcHRpb248YnI-PGJyPkFkZCBGaWd1cmUgTnVtYmVyLiBhbmQ8YnI-Q2FwdGlvbiBUZXh0XVxuICAgIENbRGlhZ3JhbSBSZWZlcnJhbDxicj48YnI-UmVmZXJlbmVuY2UgRmlndXJlIE51bWJlcjxicj5pbiB0ZXh0XVxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMgYm94IiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0" _blank
click C "https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZmxvd2NoYXJ0XG4gICAgQVtEaWFncmFtPGJyPjxicj5JbmxpbmUgTWVybWFpZCBvcjxicj5TVkcgaW1hZ2UgZmlsZXNdXG4gICAgQltEaWFncmFtIENhcHRpb248YnI-PGJyPkFkZCBGaWd1cmUgTnVtYmVyLiBhbmQ8YnI-Q2FwdGlvbiBUZXh0XVxuICAgIENbRGlhZ3JhbSBSZWZlcnJhbDxicj48YnI-UmVmZXJlbmVuY2UgRmlndXJlIE51bWJlcjxicj5pbiB0ZXh0XVxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMgYm94IiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0" _blank
{{< /mermaid >}}
Figure 9. Caption Components.
{{< note >}}
You should always add a caption to each diagram in your documentation.
{{< /note >}}
**Diagram**
The `Mermaid+SVG` and `External Tool` methods generate `.svg` image files.
Here is the `{{</* figure */>}}` shortcode for the diagram defined in an `.svg` image file saved to `/images/docs/components-of-kubernetes.svg`:
```text
{{</* figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetes pod running inside a cluster" class="diagram-large" caption="Figure 4. Kubernetes Architecture Components" */>}}
```
You should pass the `src`, `alt`, `class` and `caption` values into the `{{</* figure */>}}` shortcode. You can adjust the size of the diagram using `diagram-large`, `diagram-medium` and `diagram-small` classes.
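For example, to render the same diagram at a smaller size, you could keep everything else and change only the class; this is a sketch reusing the file referenced above:
```text
{{</* figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetes pod running inside a cluster" class="diagram-medium" caption="Figure 4. Kubernetes Architecture Components" */>}}
```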
{{< note >}}
Diagrams created using the `Inline` method don't use the `{{</* figure */>}}` shortcode. The Mermaid code defines how the diagram will render on your page.
{{< /note >}}
See [Methods for creating diagrams](#methods-for-creating-diagrams) for more information on the different methods for creating diagrams.
**Diagram Caption**
Next, add a diagram caption.
If you define your diagram in an `.svg` image file, then you should use the `{{</* figure */>}}` shortcode's `caption` parameter.
```text
{{</* figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetes pod running inside a cluster" class="diagram-large" caption="Figure 4. Kubernetes Architecture Components" */>}}
```
If you define your diagram using inline Mermaid code, then you should use Markdown text.
```text
Figure 4. Kubernetes Architecture Components
```
The following lists several items to consider when adding diagram captions:
* Use the `{{</* figure */>}}` shortcode to add a diagram caption for `Mermaid+SVG` and `External Tool` diagrams.
* Use simple Markdown text to add a diagram caption for the `Inline` method.
* Prepend your diagram caption with `Figure NUMBER.`. You must use `Figure` and the number must be unique for each diagram in your documentation page. Add a period after the number.
* Add your diagram caption text after the `Figure NUMBER.` on the same line. You must punctuate the caption with a period. Keep the caption text short.
* Position your diagram caption __BELOW__ your diagram.
**Diagram Referral**
Finally, you can add a diagram referral. This is used inside your text and should precede the diagram itself. It allows a user to connect your text with the associated diagram. The `Figure NUMBER` in your referral and caption must match.
You should avoid using spatial references such as `..the image below..` or `..the following figure..`.
Here is an example of a diagram referral:
```text
Figure 10 depicts the components of the Kubernetes architecture. The control plane ...
```
Diagram referrals are optional and there are cases where they might not be suitable. If you are not sure, add a diagram referral to your text to see if it looks and sounds okay. When in doubt, use a diagram referral.
**Complete picture**
Figure 10 shows the Kubernetes Architecture diagram that includes the diagram, diagram caption and diagram referral. The `{{</* figure */>}}` shortcode renders the diagram, adds the caption and includes the optional `link` parameter so you can hyperlink the diagram. The diagram referral is contained in this paragraph.
Here is the `{{</* figure */>}}` shortcode for this diagram:
```
{{</* figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetes pod running inside a cluster" class="diagram-large" caption="Figure 10. Kubernetes Architecture." link="https://kubernetes.io/docs/concepts/overview/components/" */>}}
```
{{< figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetes pod running inside a cluster" class="diagram-large" caption="Figure 10. Kubernetes Architecture." link="https://kubernetes.io/docs/concepts/overview/components/" >}}
## Tips
* Always use the live editor to create/edit your diagram.
* Always use Hugo local and Netlify previews to check out how the diagram appears in the documentation.
* Include diagram source pointers such as a URL or source code location, or indicate that the code is self-documenting.
* Always use diagram captions.
* Very helpful to include the diagram `.svg` or `.png` image and/or Mermaid source code in issues and PRs.
* With the `Mermaid+SVG` and `External Tool` methods, use `.svg` image files because they stay sharp when you zoom in on the diagram.
* Best practice for `.svg` files is to load them into an SVG editing tool and use the
“Convert text to paths” function. This ensures that the diagram renders the same on all systems, regardless of font availability and font rendering support.
* No Mermaid support for additional icons or artwork.
* Hugo Mermaid shortcodes don't work in the live editor.
* Any time you modify a diagram in the live editor, you __must save__ it to generate a new URL for the diagram.
* Click on the diagrams in this section to view the code and diagram rendering in the live editor.
* Look over the source code of this page, `diagram-guide.md`, for more examples.
* Check out the [Mermaid docs](https://mermaid-js.github.io/mermaid/#/) for explanations and examples.
Most important, __Keep Diagrams Simple__. This will save time for you and fellow contributors, and allow for easier reading by new and experienced users.

File diff suppressed because one or more lines are too long


View File

@ -73,14 +73,16 @@ configure kubernetes components or tools. Most of these APIs are not exposed
by the API server in a RESTful way though they are essential for a user or an
operator to use or manage a cluster.
* [kube-apiserver configuration (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/)
* [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/)
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/)
* [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
* [kube-scheduler policy reference (v1)](/docs/reference/config-api/kube-scheduler-policy-config.v1/)
* [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and
[kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/)
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/) and
[kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
* [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
* [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/)
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) and
[Client authentication API (v1)](/docs/reference/config-api/client-authentication.v1/)
* [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/)
## Config API for kubeadm

View File

@ -1232,3 +1232,5 @@ The following `ExecCredential` manifest describes a cluster information sample.
## {{% heading "whatsnext" %}}
* Read the [client authentication reference (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
* Read the [client authentication reference (v1)](/docs/reference/config-api/client-authentication.v1/)

View File

@ -65,7 +65,10 @@ different Kubernetes components.
| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
| `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | |
| `CPUManagerPolicyAlphaOptions` | `false` | Alpha | 1.23 | |
| `CPUManagerPolicyBetaOptions` | `true` | Beta | 1.23 | |
| `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | 1.22 |
| `CPUManagerPolicyOptions` | `true` | Beta | 1.23 | |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `CSIInlineVolume` | `true` | Beta | 1.16 | - |
| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |
@ -90,8 +93,6 @@ different Kubernetes components.
| `CSIStorageCapacity` | `true` | Beta | 1.21 | |
| `CSIVolumeHealth` | `false` | Alpha | 1.21 | |
| `CSRDuration` | `true` | Beta | 1.22 | |
| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 |
| `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | |
| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 |
| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | |
| `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | |
@ -100,12 +101,14 @@ different Kubernetes components.
| `DaemonSetUpdateSurge` | `true` | Beta | 1.22 | |
| `DefaultPodTopologySpread` | `false` | Alpha | 1.19 | 1.19 |
| `DefaultPodTopologySpread` | `true` | Beta | 1.20 | |
| `DelegateFSGroupToCSIDriver` | `false` | Alpha | 1.22 | |
| `DelegateFSGroupToCSIDriver` | `false` | Alpha | 1.22 | 1.22 |
| `DelegateFSGroupToCSIDriver` | `true` | Beta | 1.23 | |
| `DevicePlugins` | `false` | Alpha | 1.8 | 1.9 |
| `DevicePlugins` | `true` | Beta | 1.10 | |
| `DisableAcceleratorUsageMetrics` | `false` | Alpha | 1.19 | 1.19 |
| `DisableAcceleratorUsageMetrics` | `true` | Beta | 1.20 | |
| `DisableCloudProviders` | `false` | Alpha | 1.22 | |
| `DisableKubeletCloudCredentialProviders` | `false` | Alpha | 1.23 | |
| `DownwardAPIHugePages` | `false` | Alpha | 1.20 | 1.20 |
| `DownwardAPIHugePages` | `false` | Beta | 1.21 | |
| `EfficientWatchResumption` | `false` | Alpha | 1.20 | 1.20 |
@ -124,7 +127,9 @@ different Kubernetes components.
| `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
| `GracefulNodeShutdown` | `false` | Alpha | 1.20 | 1.20 |
| `GracefulNodeShutdown` | `true` | Beta | 1.21 | |
| `GracefulNodeShutdownBasedOnPodPriority` | `false` | Alpha | 1.23 | |
| `GRPCContainerProbe` | `false` | Alpha | 1.23 | |
| `HonorPVReclaimPolicy` | `false` | Alpha | 1.23 | |
| `HPAContainerMetrics` | `false` | Alpha | 1.20 | |
| `HPAScaleToZero` | `false` | Alpha | 1.16 | |
| `IdentifyPodOS` | `false` | Alpha | 1.23 | |
@ -135,6 +140,8 @@ different Kubernetes components.
| `InTreePluginAzureFileUnregister` | `false` | Alpha | 1.21 | |
| `InTreePluginGCEUnregister` | `false` | Alpha | 1.21 | |
| `InTreePluginOpenStackUnregister` | `false` | Alpha | 1.21 | |
| `InTreePluginPortworxUnregister` | `false` | Alpha | 1.23 | |
| `InTreePluginRBDUnregister` | `false` | Alpha | 1.23 | |
| `InTreePluginvSphereUnregister` | `false` | Alpha | 1.21 | |
| `JobMutableNodeSchedulingDirectives` | `true` | Beta | 1.23 | |
| `JobReadyPods` | `false` | Alpha | 1.23 | |
@ -142,7 +149,10 @@ different Kubernetes components.
| `JobTrackingWithFinalizers` | `true` | Beta | 1.23 | |
| `KubeletCredentialProviders` | `false` | Alpha | 1.20 | |
| `KubeletInUserNamespace` | `false` | Alpha | 1.22 | |
| `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | 1.22 |
| `KubeletPodResourcesGetAllocatable` | `false` | Beta | 1.23 | |
| `LocalStorageCapacityIsolation` | `false` | Alpha | 1.7 | 1.9 |
| `LocalStorageCapacityIsolation` | `true` | Beta | 1.10 | |
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | |
@ -157,10 +167,13 @@ different Kubernetes components.
| `NodeSwap` | `false` | Alpha | 1.22 | |
| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |
| `NonPreemptingPriority` | `true` | Beta | 1.19 | |
| `PodDeletionCost` | `false` | Alpha | 1.21 | 1.21 |
| `PodDeletionCost` | `true` | Beta | 1.22 | |
| `OpenAPIEnum` | `false` | Alpha | 1.23 | |
| `OpenAPIv3` | `false` | Alpha | 1.23 | |
| `PodAndContainerStatsFromCRI` | `false` | Alpha | 1.23 | |
| `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 |
| `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | |
| `PodDeletionCost` | `false` | Alpha | 1.21 | 1.21 |
| `PodDeletionCost` | `true` | Beta | 1.22 | |
| `PodOverhead` | `false` | Alpha | 1.16 | 1.17 |
| `PodOverhead` | `true` | Beta | 1.18 | |
| `PodSecurity` | `false` | Alpha | 1.22 | 1.22 |
@ -189,6 +202,7 @@ different Kubernetes components.
| `ServiceLoadBalancerClass` | `true` | Beta | 1.22 | |
| `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | 1.21 |
| `SizeMemoryBackedVolumes` | `true` | Beta | 1.22 | |
| `StatefulSetAutoDeletePVC` | `false` | Alpha | 1.22 | |
| `StatefulSetMinReadySeconds` | `false` | Alpha | 1.22 | 1.22 |
| `StatefulSetMinReadySeconds` | `true` | Beta | 1.23 | |
| `StorageVersionAPI` | `false` | Alpha | 1.20 | |
@ -197,13 +211,14 @@ different Kubernetes components.
| `SuspendJob` | `false` | Alpha | 1.21 | 1.21 |
| `SuspendJob` | `true` | Beta | 1.22 | |
| `TopologyAwareHints` | `false` | Alpha | 1.21 | 1.22 |
| `TopologyAwareHints` | `true` | Beta | 1.23 | |
| `TopologyAwareHints` | `false` | Beta | 1.23 | |
| `TopologyManager` | `false` | Alpha | 1.16 | 1.17 |
| `TopologyManager` | `true` | Beta | 1.18 | |
| `VolumeCapacityPriority` | `false` | Alpha | 1.21 | - |
| `WinDSR` | `false` | Alpha | 1.14 | |
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
| `WinOverlay` | `true` | Beta | 1.20 | |
| `WindowsHostProcessContainers` | `false` | Alpha | 1.22 | 1.22 |
| `WindowsHostProcessContainers` | `false` | Beta | 1.23 | |
{{< /table >}}
@ -233,6 +248,8 @@ different Kubernetes components.
| `BoundServiceAccountTokenVolume` | `false` | Alpha | 1.13 | 1.20 |
| `BoundServiceAccountTokenVolume` | `true` | Beta | 1.21 | 1.21 |
| `BoundServiceAccountTokenVolume` | `true` | GA | 1.22 | - |
| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | 1.19 |
| `ConfigurableFSGroupPolicy` | `true` | Beta | 1.20 | 1.22 |
| `ConfigurableFSGroupPolicy` | `true` | GA | 1.23 | |
| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
| `CRIContainerLogRotation` | `true` | Beta | 1.11 | 1.20 |
@ -312,12 +329,12 @@ different Kubernetes components.
| `EndpointSliceProxying` | `false` | Alpha | 1.18 | 1.18 |
| `EndpointSliceProxying` | `true` | Beta | 1.19 | 1.21 |
| `EndpointSliceProxying` | `true` | GA | 1.22 | - |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 |
| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated | 1.13 | - |
| `EvenPodsSpread` | `false` | Alpha | 1.16 | 1.17 |
| `EvenPodsSpread` | `true` | Beta | 1.18 | 1.18 |
| `EvenPodsSpread` | `true` | GA | 1.19 | - |
| `ExecProbeTimeout` | `true` | GA | 1.20 | - |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | 1.12 |
| `ExperimentalCriticalPodAnnotation` | `false` | Deprecated | 1.13 | - |
| `ExternalPolicyForExternalIP` | `true` | GA | 1.18 | - |
| `GCERegionalPersistentDisk` | `true` | Beta | 1.10 | 1.12 |
| `GCERegionalPersistentDisk` | `true` | GA | 1.13 | - |
@ -330,9 +347,6 @@ different Kubernetes components.
| `HugePages` | `false` | Alpha | 1.8 | 1.9 |
| `HugePages` | `true` | Beta| 1.10 | 1.13 |
| `HugePages` | `true` | GA | 1.14 | - |
| `HugePageStorageMediumSize` | `false` | Alpha | 1.18 | 1.18 |
| `HugePageStorageMediumSize` | `true` | Beta | 1.19 | 1.21 |
| `HugePageStorageMediumSize` | `true` | GA | 1.22 | - |
| `HyperVContainer` | `false` | Alpha | 1.10 | 1.19 |
| `HyperVContainer` | `false` | Deprecated | 1.20 | - |
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | 1.18 |
@ -351,9 +365,6 @@ different Kubernetes components.
| `KubeletPluginsWatcher` | `false` | Alpha | 1.11 | 1.11 |
| `KubeletPluginsWatcher` | `true` | Beta | 1.12 | 1.12 |
| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `KubeletPodResources` | `true` | GA | 1.20 | |
| `LegacyNodeRoleBehavior` | `false` | Alpha | 1.16 | 1.18 |
| `LegacyNodeRoleBehavior` | `true` | Beta | 1.19 | 1.20 |
| `LegacyNodeRoleBehavior` | `false` | GA | 1.21 | - |
@ -375,7 +386,6 @@ different Kubernetes components.
| `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 |
| `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 |
| `PersistentLocalVolumes` | `true` | GA | 1.14 | - |
| `PodAndContainerStatsFromCRI` | `false` | Alpha | 1.23 | |
| `PodDisruptionBudget` | `false` | Alpha | 1.3 | 1.4 |
| `PodDisruptionBudget` | `true` | Beta | 1.5 | 1.20 |
| `PodDisruptionBudget` | `true` | GA | 1.21 | - |
@ -497,7 +507,6 @@ different Kubernetes components.
| `WindowsGMSA` | `false` | Alpha | 1.14 | 1.15 |
| `WindowsGMSA` | `true` | Beta | 1.16 | 1.17 |
| `WindowsGMSA` | `true` | GA | 1.18 | - |
| `WindowsHostProcessContainers` | `false` | Alpha | 1.22 |
| `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 |
| `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 |
| `WindowsRunAsUserName` | `true` | GA | 1.18 | - |
@ -586,6 +595,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
(e.g. the cloud-controller-manager) in an HA cluster without downtime.
- `CPUManager`: Enable container level CPU affinity support, see
[CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CPUManagerPolicyAlphaOptions`: This allows fine-tuning of CPUManager policies, experimental, Alpha-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is alpha.
This feature gate will never graduate to beta or stable.
- `CPUManagerPolicyBetaOptions`: This allows fine-tuning of CPUManager policies, experimental, Beta-quality options.
This feature gate guards *a group* of CPUManager options whose quality level is beta.
This feature gate will never graduate to stable.
- `CPUManagerPolicyOptions`: Allow fine-tuning of CPUManager policies.
- `CRIContainerLogRotation`: Enable container log rotation for CRI container runtime. The default max size of a log file is 10MB and the
default max number of log files allowed for a container is 5. These values can be configured in the kubelet config.
@ -728,6 +743,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `DisableCloudProviders`: Disables any functionality in `kube-apiserver`,
`kube-controller-manager` and `kubelet` related to the `--cloud-provider`
component flag.
- `DisableKubeletCloudCredentialProviders`: Disable the in-tree functionality in kubelet
to authenticate to a cloud provider container registry for image pull credentials.
- `DownwardAPIHugePages`: Enables usage of hugepages in
[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information).
- `DryRun`: Enable server-side [dry run](/docs/reference/using-api/api-concepts/#dry-run) requests
@ -792,7 +809,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
and gracefully terminate pods running on the node. See
[Graceful Node Shutdown](/docs/concepts/architecture/nodes/#graceful-node-shutdown)
for more details.
- `GracefulNodeShutdownBasedOnPodPriority`: Enables the kubelet to check Pod priorities
when shutting down a node gracefully.
- `GRPCContainerProbe`: Enables the gRPC probe method for {Liveness,Readiness,Startup}Probe. See [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
- `HonorPVReclaimPolicy`: Honor persistent volume reclaim policy when it is `Delete` irrespective of PV-PVC deletion ordering.
- `HPAContainerMetrics`: Enable the `HorizontalPodAutoscaler` to scale based on
metrics from individual containers in target pods.
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler`
@ -818,6 +838,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
and volume controllers.
- `InTreePluginOpenStackUnregister`: Stops registering the OpenStack cinder in-tree plugin in kubelet
and volume controllers.
- `InTreePluginPortworxUnregister`: Stops registering the Portworx in-tree plugin in kubelet
and volume controllers.
- `InTreePluginRBDUnregister`: Stops registering the RBD in-tree plugin in kubelet
and volume controllers.
- `InTreePluginvSphereUnregister`: Stops registering the vSphere in-tree plugin in kubelet
and volume controllers.
- `IndexedJob`: Allows the [Job](/docs/concepts/workloads/controllers/job/)
@ -889,6 +913,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
Must be used with `KubeletConfiguration.failSwapOn` set to false.
For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory)
- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod.
- `OpenAPIEnum`: Enables populating "enum" fields of OpenAPI schemas in the
spec returned from the API server.
- `OpenAPIv3`: Enables the API server to publish OpenAPI v3.
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
being deleted when it is still used by any Pod.
- `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)

View File

@ -916,7 +916,7 @@ WindowsHostProcessContainers=true|false (ALPHA - default=false)<br/>
</tr>
<tr>
<td colspan="2">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>k8s.gcr.io/pause:3.5</code></td>
<td colspan="2">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>k8s.gcr.io/pause:3.6</code></td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">Specified image will not be pruned by the image garbage collector. When container-runtime is set to <code>docker</code>, all containers in each pod will use the network/IPC namespaces from this image. Other CRI implementations have their own configuration to set this image.</td>

View File

@ -16,14 +16,12 @@ auto_generated: true
## `Event` {#audit-k8s-io-v1-Event}
**Appears in:**
- [EventList](#audit-k8s-io-v1-EventList)
@ -81,7 +79,7 @@ For non-resource requests, this is the lower-cased HTTP method.</td>
<tr><td><code>user</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
Authenticated user information.</td>
@ -89,7 +87,7 @@ For non-resource requests, this is the lower-cased HTTP method.</td>
<tr><td><code>impersonatedUser</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
Impersonated user information.</td>
@ -123,7 +121,7 @@ Does not apply for List-type requests, or non-resource requests.</td>
<tr><td><code>responseStatus</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#status-v1-meta"><code>meta/v1.Status</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#status-v1-meta"><code>meta/v1.Status</code></a>
</td>
<td>
The response status, populated even when the ResponseObject is not a Status type.
@ -154,7 +152,7 @@ at Response Level.</td>
<tr><td><code>requestReceivedTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
Time the request reached the apiserver.</td>
@ -162,7 +160,7 @@ at Response Level.</td>
<tr><td><code>stageTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
Time the request reached current audit stage.</td>
@ -185,8 +183,6 @@ should be short. Annotations are included in the Metadata level.</td>
</tbody>
</table>
## `EventList` {#audit-k8s-io-v1-EventList}
@ -206,7 +202,7 @@ EventList is a list of audit Events.
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
@ -226,15 +222,12 @@ EventList is a list of audit Events.
</tbody>
</table>
## `Policy` {#audit-k8s-io-v1-Policy}
**Appears in:**
- [PolicyList](#audit-k8s-io-v1-PolicyList)
@ -252,7 +245,7 @@ categories are logged.
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
</td>
<td>
ObjectMeta is included for interoperability with API infrastructure. Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field.</td>
@ -295,8 +288,6 @@ in a rule will override the global default.</td>
</tbody>
</table>
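
As a minimal sketch, a `Policy` file might log Secret access at the `Metadata` level and everything else at `Request`; the rule set shown here is illustrative, not a recommended production policy.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Drop events from the RequestReceived stage (illustrative choice).
omitStages:
  - "RequestReceived"
rules:
  # Secrets are logged at Metadata level so payloads never reach the audit log.
  - level: Metadata
    resources:
      - group: ""                  # core API group
        resources: ["secrets"]
  # Everything else is logged with request bodies.
  - level: Request
```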
## `PolicyList` {#audit-k8s-io-v1-PolicyList}
@ -316,7 +307,7 @@ PolicyList is a list of audit Policies.
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
@ -336,15 +327,12 @@ PolicyList is a list of audit Policies.
</tbody>
</table>
## `GroupResources` {#audit-k8s-io-v1-GroupResources}
**Appears in:**
- [PolicyRule](#audit-k8s-io-v1-PolicyRule)
@ -398,17 +386,13 @@ An empty list implies that every instance of the resource is matched.</td>
</tbody>
</table>
## `Level` {#audit-k8s-io-v1-Level}
(Alias of `string`)
**Appears in:**
- [Event](#audit-k8s-io-v1-Event)
- [PolicyRule](#audit-k8s-io-v1-PolicyRule)
@ -416,15 +400,12 @@ Level defines the amount of information logged during auditing
## `ObjectReference` {#audit-k8s-io-v1-ObjectReference}
**Appears in:**
- [Event](#audit-k8s-io-v1-Event)
@ -510,15 +491,12 @@ The empty string represents the core API group.</td>
</tbody>
</table>
## `PolicyRule` {#audit-k8s-io-v1-PolicyRule}
**Appears in:**
- [Policy](#audit-k8s-io-v1-Policy)
@ -625,19 +603,14 @@ Policy.OmitManagedFields will stand.</td>
</tbody>
</table>
## `Stage` {#audit-k8s-io-v1-Stage}
(Alias of `string`)
**Appears in:**
- [Event](#audit-k8s-io-v1-Event)
- [Policy](#audit-k8s-io-v1-Policy)
- [PolicyRule](#audit-k8s-io-v1-PolicyRule)
@ -645,4 +618,3 @@ Stage defines the stages in request handling that audit events may be generated.

View File

@ -0,0 +1,91 @@
---
title: kube-apiserver Configuration (v1)
content_type: tool-reference
package: apiserver.config.k8s.io/v1
auto_generated: true
---
Package v1 is the v1 version of the API.
## Resource Types
- [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration)
## `AdmissionConfiguration` {#apiserver-config-k8s-io-v1-AdmissionConfiguration}
AdmissionConfiguration provides versioned configuration for admission controllers.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.config.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>AdmissionConfiguration</code></td></tr>
<tr><td><code>plugins</code><br/>
<a href="#apiserver-config-k8s-io-v1-AdmissionPluginConfiguration"><code>[]AdmissionPluginConfiguration</code></a>
</td>
<td>
Plugins allows specifying a configuration per admission control plugin.</td>
</tr>
</tbody>
</table>
## `AdmissionPluginConfiguration` {#apiserver-config-k8s-io-v1-AdmissionPluginConfiguration}
**Appears in:**
- [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration)
AdmissionPluginConfiguration provides the configuration for a single plug-in.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Name is the name of the admission controller.
It must match the registered admission plugin name.</td>
</tr>
<tr><td><code>path</code><br/>
<code>string</code>
</td>
<td>
Path is the path to a configuration file that contains the plugin's
configuration</td>
</tr>
<tr><td><code>configuration</code><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime#Unknown"><code>k8s.io/apimachinery/pkg/runtime.Unknown</code></a>
</td>
<td>
Configuration is an embedded configuration object to be used as the plugin's
configuration. If present, it will be used instead of the path to the configuration file.</td>
</tr>
</tbody>
</table>
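
A minimal sketch of an `AdmissionConfiguration` file, assuming two registered plugins; the plugin names, the file path, and the embedded `PodSecurityConfiguration` defaults are illustrative.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  # Plugin configured through an external file (illustrative path).
  - name: EventRateLimit
    path: /etc/kubernetes/eventratelimit.yaml
  # Plugin configured inline via the embedded `configuration` object.
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1beta1
      kind: PodSecurityConfiguration
      defaults:
        enforce: baseline
```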

View File

@ -1,311 +0,0 @@
---
title: kube-apiserver Configuration (v1beta1)
content_type: tool-reference
package: apiserver.k8s.io/v1beta1
auto_generated: true
---
Package v1beta1 is the v1beta1 version of the API.
## Resource Types
- [EgressSelectorConfiguration](#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration)
## `EgressSelectorConfiguration` {#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration}
EgressSelectorConfiguration provides versioned configuration for egress selector clients.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.k8s.io/v1beta1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>EgressSelectorConfiguration</code></td></tr>
<tr><td><code>egressSelections</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-EgressSelection"><code>[]EgressSelection</code></a>
</td>
<td>
connectionServices contains a list of egress selection client configurations</td>
</tr>
</tbody>
</table>
## `Connection` {#apiserver-k8s-io-v1beta1-Connection}
**Appears in:**
- [EgressSelection](#apiserver-k8s-io-v1beta1-EgressSelection)
Connection provides the configuration for a single egress selection client.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>proxyProtocol</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-ProtocolType"><code>ProtocolType</code></a>
</td>
<td>
Protocol is the protocol used to connect from client to the konnectivity server.</td>
</tr>
<tr><td><code>transport</code><br/>
<a href="#apiserver-k8s-io-v1beta1-Transport"><code>Transport</code></a>
</td>
<td>
Transport defines the transport configurations we use to dial to the konnectivity server.
This is required if ProxyProtocol is HTTPConnect or GRPC.</td>
</tr>
</tbody>
</table>
## `EgressSelection` {#apiserver-k8s-io-v1beta1-EgressSelection}
**Appears in:**
- [EgressSelectorConfiguration](#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration)
EgressSelection provides the configuration for a single egress selection client.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
name is the name of the egress selection.
Currently supported values are "controlplane", "master", "etcd" and "cluster"
The "master" egress selector is deprecated in favor of "controlplane"</td>
</tr>
<tr><td><code>connection</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-Connection"><code>Connection</code></a>
</td>
<td>
connection is the exact information used to configure the egress selection</td>
</tr>
</tbody>
</table>
## `ProtocolType` {#apiserver-k8s-io-v1beta1-ProtocolType}
(Alias of `string`)
**Appears in:**
- [Connection](#apiserver-k8s-io-v1beta1-Connection)
ProtocolType is a set of valid values for Connection.ProtocolType
## `TCPTransport` {#apiserver-k8s-io-v1beta1-TCPTransport}
**Appears in:**
- [Transport](#apiserver-k8s-io-v1beta1-Transport)
TCPTransport provides the information to connect to konnectivity server via TCP
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>url</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
URL is the location of the konnectivity server to connect to.
As an example it might be "https://127.0.0.1:8131"</td>
</tr>
<tr><td><code>tlsConfig</code><br/>
<a href="#apiserver-k8s-io-v1beta1-TLSConfig"><code>TLSConfig</code></a>
</td>
<td>
TLSConfig is the config needed to use TLS when connecting to konnectivity server</td>
</tr>
</tbody>
</table>
## `TLSConfig` {#apiserver-k8s-io-v1beta1-TLSConfig}
**Appears in:**
- [TCPTransport](#apiserver-k8s-io-v1beta1-TCPTransport)
TLSConfig provides the authentication information to connect to konnectivity server
Only used with TCPTransport
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>caBundle</code><br/>
<code>string</code>
</td>
<td>
caBundle is the file location of the CA to be used to determine trust with the konnectivity server.
Must be absent/empty if TCPTransport.URL is prefixed with http://
If absent while TCPTransport.URL is prefixed with https://, default to system trust roots.</td>
</tr>
<tr><td><code>clientKey</code><br/>
<code>string</code>
</td>
<td>
clientKey is the file location of the client key to be used in mtls handshakes with the konnectivity server.
Must be absent/empty if TCPTransport.URL is prefixed with http://
Must be configured if TCPTransport.URL is prefixed with https://</td>
</tr>
<tr><td><code>clientCert</code><br/>
<code>string</code>
</td>
<td>
clientCert is the file location of the client certificate to be used in mtls handshakes with the konnectivity server.
Must be absent/empty if TCPTransport.URL is prefixed with http://
Must be configured if TCPTransport.URL is prefixed with https://</td>
</tr>
</tbody>
</table>
## `Transport` {#apiserver-k8s-io-v1beta1-Transport}
**Appears in:**
- [Connection](#apiserver-k8s-io-v1beta1-Connection)
Transport defines the transport configurations we use to dial to the konnectivity server
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>tcp</code><br/>
<a href="#apiserver-k8s-io-v1beta1-TCPTransport"><code>TCPTransport</code></a>
</td>
<td>
TCP is the TCP configuration for communicating with the konnectivity server via TCP
ProxyProtocol of GRPC is not supported with TCP transport at the moment
Requires at least one of TCP or UDS to be set</td>
</tr>
<tr><td><code>uds</code><br/>
<a href="#apiserver-k8s-io-v1beta1-UDSTransport"><code>UDSTransport</code></a>
</td>
<td>
UDS is the UDS configuration for communicating with the konnectivity server via UDS
Requires at least one of TCP or UDS to be set</td>
</tr>
</tbody>
</table>
## `UDSTransport` {#apiserver-k8s-io-v1beta1-UDSTransport}
**Appears in:**
- [Transport](#apiserver-k8s-io-v1beta1-Transport)
UDSTransport provides the information to connect to konnectivity server via UDS
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>udsName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
UDSName is the name of the unix domain socket to connect to konnectivity server
This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket)</td>
</tr>
</tbody>
</table>

View File

@ -13,7 +13,6 @@ Package v1 is the v1 version of the API.
## `WebhookAdmission` {#apiserver-config-k8s-io-v1-WebhookAdmission}
@ -43,4 +42,3 @@ WebhookAdmission provides configuration for the webhook admission controller.
</tbody>
</table>

View File

@ -0,0 +1,249 @@
---
title: Client Authentication (v1)
content_type: tool-reference
package: client.authentication.k8s.io/v1
auto_generated: true
---
## Resource Types
- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)
## `ExecCredential` {#client-authentication-k8s-io-v1-ExecCredential}
ExecCredential is used by exec-based plugins to communicate credentials to
HTTP transports.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>client.authentication.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>ExecCredential</code></td></tr>
<tr><td><code>spec</code> <B>[Required]</B><br/>
<a href="#client-authentication-k8s-io-v1-ExecCredentialSpec"><code>ExecCredentialSpec</code></a>
</td>
<td>
Spec holds information passed to the plugin by the transport.</td>
</tr>
<tr><td><code>status</code><br/>
<a href="#client-authentication-k8s-io-v1-ExecCredentialStatus"><code>ExecCredentialStatus</code></a>
</td>
<td>
Status is filled in by the plugin and holds the credentials that the transport
should use to contact the API.</td>
</tr>
</tbody>
</table>
## `Cluster` {#client-authentication-k8s-io-v1-Cluster}
**Appears in:**
- [ExecCredentialSpec](#client-authentication-k8s-io-v1-ExecCredentialSpec)
Cluster contains information to allow an exec plugin to communicate
with the kubernetes cluster being authenticated to.
To ensure that this struct contains everything someone would need to communicate
with a kubernetes cluster (just like they would via a kubeconfig), the fields
should shadow "k8s.io/client-go/tools/clientcmd/api/v1".Cluster, with the exception
of CertificateAuthority, since CA data will always be passed to the plugin as bytes.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>server</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Server is the address of the kubernetes cluster (https://hostname:port).</td>
</tr>
<tr><td><code>tls-server-name</code><br/>
<code>string</code>
</td>
<td>
TLSServerName is passed to the server for SNI and is used in the client to
check server certificates against. If ServerName is empty, the hostname
used to contact the server is used.</td>
</tr>
<tr><td><code>insecure-skip-tls-verify</code><br/>
<code>bool</code>
</td>
<td>
InsecureSkipTLSVerify skips the validity check for the server's certificate.
This will make your HTTPS connections insecure.</td>
</tr>
<tr><td><code>certificate-authority-data</code><br/>
<code>[]byte</code>
</td>
<td>
CAData contains PEM-encoded certificate authority certificates.
If empty, system roots should be used.</td>
</tr>
<tr><td><code>proxy-url</code><br/>
<code>string</code>
</td>
<td>
ProxyURL is the URL to the proxy to be used for all requests to this
cluster.</td>
</tr>
<tr><td><code>config</code><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime/#RawExtension"><code>k8s.io/apimachinery/pkg/runtime.RawExtension</code></a>
</td>
<td>
Config holds additional config data that is specific to the exec
plugin with regards to the cluster being authenticated to.
This data is sourced from the clientcmd Cluster object's
extensions[client.authentication.k8s.io/exec] field:
clusters:
- name: my-cluster
cluster:
...
extensions:
- name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config
extension:
audience: 06e3fbd18de8 # arbitrary config
In some environments, the user config may be exactly the same across many clusters
(i.e. call this exec plugin) minus some details that are specific to each cluster
such as the audience. This field allows the per cluster config to be directly
specified with the cluster info. Using this field to store secret data is not
recommended as one of the prime benefits of exec plugins is that no secrets need
to be stored directly in the kubeconfig.</td>
</tr>
</tbody>
</table>
## `ExecCredentialSpec` {#client-authentication-k8s-io-v1-ExecCredentialSpec}
**Appears in:**
- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)
ExecCredentialSpec holds request and runtime specific information provided by
the transport.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>cluster</code><br/>
<a href="#client-authentication-k8s-io-v1-Cluster"><code>Cluster</code></a>
</td>
<td>
Cluster contains information to allow an exec plugin to communicate with the
kubernetes cluster being authenticated to. Note that Cluster is non-nil only
when provideClusterInfo is set to true in the exec provider config (i.e.,
ExecConfig.ProvideClusterInfo).</td>
</tr>
<tr><td><code>interactive</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
Interactive declares whether stdin has been passed to this exec plugin.</td>
</tr>
</tbody>
</table>
## `ExecCredentialStatus` {#client-authentication-k8s-io-v1-ExecCredentialStatus}
**Appears in:**
- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)
ExecCredentialStatus holds credentials for the transport to use.
Token and ClientKeyData are sensitive fields. This data should only be
transmitted in-memory between client and exec plugin process. Exec plugin
itself should at least be protected via file permissions.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>expirationTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#time-v1-meta"><code>meta/v1.Time</code></a>
</td>
<td>
ExpirationTimestamp indicates a time when the provided credentials expire.</td>
</tr>
<tr><td><code>token</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Token is a bearer token used by the client for request authentication.</td>
</tr>
<tr><td><code>clientCertificateData</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
PEM-encoded client TLS certificates (including intermediates, if any).</td>
</tr>
<tr><td><code>clientKeyData</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
PEM-encoded private key for the above certificate.</td>
</tr>
</tbody>
</table>
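
For illustration, a successful plugin invocation might produce an `ExecCredential` like the one below (rendered as YAML for readability; a real plugin writes the equivalent JSON to stdout). The token and timestamp are placeholders.

```yaml
apiVersion: client.authentication.k8s.io/v1
kind: ExecCredential
status:
  token: "<opaque-bearer-token>"                 # placeholder value returned by the plugin
  expirationTimestamp: "2022-01-11T20:00:00Z"    # placeholder; the client re-runs the plugin after expiry
```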

View File

@ -13,13 +13,10 @@ auto_generated: true
## `ExecCredential` {#client-authentication-k8s-io-v1beta1-ExecCredential}
ExecCredential is used by exec-based plugins to communicate credentials to
HTTP transports.
@ -31,40 +28,31 @@ HTTP transports.
<tr><td><code>kind</code><br/>string</td><td><code>ExecCredential</code></td></tr>
<tr><td><code>spec</code> <B>[Required]</B><br/>
<a href="#client-authentication-k8s-io-v1beta1-ExecCredentialSpec"><code>ExecCredentialSpec</code></a>
</td>
<td>
Spec holds information passed to the plugin by the transport.</td>
Spec holds information passed to the plugin by the transport.
</td>
</tr>
<tr><td><code>status</code><br/>
<a href="#client-authentication-k8s-io-v1beta1-ExecCredentialStatus"><code>ExecCredentialStatus</code></a>
</td>
<td>
Status is filled in by the plugin and holds the credentials that the transport
should use to contact the API.</td>
should use to contact the API.
</td>
</tr>
</tbody>
</table>
## `Cluster` {#client-authentication-k8s-io-v1beta1-Cluster}
**Appears in:**
- [ExecCredentialSpec](#client-authentication-k8s-io-v1beta1-ExecCredentialSpec)
Cluster contains information to allow an exec plugin to communicate
with the kubernetes cluster being authenticated to.
@ -78,52 +66,46 @@ of CertificateAuthority, since CA data will always be passed to the plugin as by
<tbody>
<tr><td><code>server</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Server is the address of the kubernetes cluster (https://hostname:port).</td>
Server is the address of the kubernetes cluster (https://hostname:port).
</td>
</tr>
<tr><td><code>tls-server-name</code><br/>
<code>string</code>
</td>
<td>
TLSServerName is passed to the server for SNI and is used in the client to
check server certificates against. If ServerName is empty, the hostname
used to contact the server is used.</td>
used to contact the server is used.
</td>
</tr>
<tr><td><code>insecure-skip-tls-verify</code><br/>
<code>bool</code>
</td>
<td>
InsecureSkipTLSVerify skips the validity check for the server's certificate.
This will make your HTTPS connections insecure.</td>
This will make your HTTPS connections insecure.
</td>
</tr>
<tr><td><code>certificate-authority-data</code><br/>
<code>[]byte</code>
</td>
<td>
CAData contains PEM-encoded certificate authority certificates.
If empty, system roots should be used.</td>
If empty, system roots should be used.
</td>
</tr>
<tr><td><code>proxy-url</code><br/>
<code>string</code>
</td>
<td>
ProxyURL is the URL to the proxy to be used for all requests to this
cluster.</td>
cluster.
</td>
</tr>
<tr><td><code>config</code><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime/#RawExtension"><code>k8s.io/apimachinery/pkg/runtime.RawExtension</code></a>
</td>
@ -148,25 +130,19 @@ In some environments, the user config may be exactly the same across many cluste
such as the audience. This field allows the per cluster config to be directly
specified with the cluster info. Using this field to store secret data is not
recommended as one of the prime benefits of exec plugins is that no secrets need
to be stored directly in the kubeconfig.</td>
to be stored directly in the kubeconfig.
</td>
</tr>
</tbody>
</table>
## `ExecCredentialSpec` {#client-authentication-k8s-io-v1beta1-ExecCredentialSpec}
**Appears in:**
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
ExecCredentialSpec holds request and runtime specific information provided by
the transport.
@ -175,7 +151,6 @@ the transport.
<tbody>
<tr><td><code>cluster</code><br/>
<a href="#client-authentication-k8s-io-v1beta1-Cluster"><code>Cluster</code></a>
</td>
@ -183,33 +158,26 @@ the transport.
Cluster contains information to allow an exec plugin to communicate with the
kubernetes cluster being authenticated to. Note that Cluster is non-nil only
when provideClusterInfo is set to true in the exec provider config (i.e.,
ExecConfig.ProvideClusterInfo).</td>
ExecConfig.ProvideClusterInfo).
</td>
</tr>
<tr><td><code>interactive</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
Interactive declares whether stdin has been passed to this exec plugin.</td>
Interactive declares whether stdin has been passed to this exec plugin.
</td>
</tr>
</tbody>
</table>
## `ExecCredentialStatus` {#client-authentication-k8s-io-v1beta1-ExecCredentialStatus}
**Appears in:**
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
ExecCredentialStatus holds credentials for the transport to use.
Token and ClientKeyData are sensitive fields. This data should only be
@ -221,40 +189,34 @@ itself should at least be protected via file permissions.
<tbody>
<tr><td><code>expirationTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta"><code>meta/v1.Time</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#time-v1-meta"><code>meta/v1.Time</code></a>
</td>
<td>
ExpirationTimestamp indicates a time when the provided credentials expire.</td>
ExpirationTimestamp indicates a time when the provided credentials expire.
</td>
</tr>
<tr><td><code>token</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Token is a bearer token used by the client for request authentication.</td>
Token is a bearer token used by the client for request authentication.
</td>
</tr>
<tr><td><code>clientCertificateData</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
PEM-encoded client TLS certificates (including intermediates, if any).</td>
PEM-encoded client TLS certificates (including intermediates, if any).
</td>
</tr>
<tr><td><code>clientKeyData</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
PEM-encoded private key for the above certificate.</td>
PEM-encoded private key for the above certificate.
</td>
</tr>
</tbody>
</table>

View File

@ -13,13 +13,10 @@ auto_generated: true
## `KubeProxyConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration}
KubeProxyConfiguration contains everything necessary to configure the
Kubernetes proxy server.
@ -31,155 +28,136 @@ Kubernetes proxy server.
<tr><td><code>kind</code><br/>string</td><td><code>KubeProxyConfiguration</code></td></tr>
<tr><td><code>featureGates</code> <B>[Required]</B><br/>
<code>map[string]bool</code>
</td>
<td>
featureGates is a map of feature names to bools that enable or disable alpha/experimental features.</td>
featureGates is a map of feature names to bools that enable or disable alpha/experimental features.
</td>
</tr>
<tr><td><code>bindAddress</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
bindAddress is the IP address for the proxy server to serve on (set to 0.0.0.0
for all interfaces)</td>
for all interfaces)
</td>
</tr>
<tr><td><code>healthzBindAddress</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
healthzBindAddress is the IP address and port for the health check server to serve on,
defaulting to 0.0.0.0:10256</td>
defaulting to 0.0.0.0:10256
</td>
</tr>
<tr><td><code>metricsBindAddress</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
metricsBindAddress is the IP address and port for the metrics server to serve on,
defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces)</td>
defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces)
</td>
</tr>
<tr><td><code>bindAddressHardFail</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
bindAddressHardFail, if true, kube-proxy will treat failure to bind to a port as fatal and exit</td>
bindAddressHardFail, if true, kube-proxy will treat failure to bind to a port as fatal and exit
</td>
</tr>
<tr><td><code>enableProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableProfiling enables profiling via web interface on /debug/pprof handler.
Profiling handlers will be handled by metrics server.</td>
Profiling handlers will be handled by metrics server.
</td>
</tr>
<tr><td><code>clusterCIDR</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
clusterCIDR is the CIDR range of the pods in the cluster. It is used to
bridge traffic coming from outside of the cluster. If not provided,
no off-cluster bridging will be performed.</td>
no off-cluster bridging will be performed.
</td>
</tr>
<tr><td><code>hostnameOverride</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
hostnameOverride, if non-empty, will be used as the identity instead of the actual hostname.</td>
hostnameOverride, if non-empty, will be used as the identity instead of the actual hostname.
</td>
</tr>
<tr><td><code>clientConnection</code> <B>[Required]</B><br/>
<a href="#ClientConnectionConfiguration"><code>ClientConnectionConfiguration</code></a>
</td>
<td>
clientConnection specifies the kubeconfig file and client connection settings for the proxy
server to use when communicating with the apiserver.</td>
server to use when communicating with the apiserver.
</td>
</tr>
<tr><td><code>iptables</code> <B>[Required]</B><br/>
<a href="#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration"><code>KubeProxyIPTablesConfiguration</code></a>
</td>
<td>
iptables contains iptables-related configuration options.</td>
iptables contains iptables-related configuration options.
</td>
</tr>
<tr><td><code>ipvs</code> <B>[Required]</B><br/>
<a href="#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPVSConfiguration"><code>KubeProxyIPVSConfiguration</code></a>
</td>
<td>
ipvs contains ipvs-related configuration options.</td>
ipvs contains ipvs-related configuration options.
</td>
</tr>
<tr><td><code>oomScoreAdj</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
oomScoreAdj is the oom-score-adj value for kube-proxy process. Values must be within
the range [-1000, 1000]</td>
the range [-1000, 1000]
</td>
</tr>
<tr><td><code>mode</code> <B>[Required]</B><br/>
<a href="#kubeproxy-config-k8s-io-v1alpha1-ProxyMode"><code>ProxyMode</code></a>
</td>
<td>
mode specifies which proxy mode to use.</td>
mode specifies which proxy mode to use.
</td>
</tr>
<tr><td><code>portRange</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
portRange is the range of host ports (beginPort-endPort, inclusive) that may be consumed
in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen.</td>
in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen.
</td>
</tr>
<tr><td><code>udpIdleTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
udpIdleTimeout is how long an idle UDP connection will be kept open (e.g. '250ms', '2s').
Must be greater than 0. Only applicable for proxyMode=userspace.</td>
Must be greater than 0. Only applicable for proxyMode=userspace.
</td>
</tr>
<tr><td><code>conntrack</code> <B>[Required]</B><br/>
<a href="#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConntrackConfiguration"><code>KubeProxyConntrackConfiguration</code></a>
</td>
<td>
conntrack contains conntrack-related configuration options.</td>
conntrack contains conntrack-related configuration options.
</td>
</tr>
<tr><td><code>configSyncPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
configSyncPeriod is how often configuration from the apiserver is refreshed. Must be greater
than 0.</td>
than 0.
</td>
</tr>
<tr><td><code>nodePortAddresses</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
@ -190,49 +168,40 @@ In case someone would like to expose a service on localhost for local visit and
particular purpose, a list of IP blocks would do that.
If set it to "127.0.0.0/8", kube-proxy will only select the loopback interface for NodePort.
If set it to a non-zero IP block, kube-proxy will filter that down to just the IPs that applied to the node.
An empty string slice is meant to select all network interfaces.</td>
An empty string slice is meant to select all network interfaces.
</td>
</tr>
<tr><td><code>winkernel</code> <B>[Required]</B><br/>
<a href="#kubeproxy-config-k8s-io-v1alpha1-KubeProxyWinkernelConfiguration"><code>KubeProxyWinkernelConfiguration</code></a>
</td>
<td>
winkernel contains winkernel-related configuration options.</td>
winkernel contains winkernel-related configuration options.
</td>
</tr>
<tr><td><code>showHiddenMetricsForVersion</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
ShowHiddenMetricsForVersion is the version for which you want to show hidden metrics.</td>
ShowHiddenMetricsForVersion is the version for which you want to show hidden metrics.
</td>
</tr>
<tr><td><code>detectLocalMode</code> <B>[Required]</B><br/>
<a href="#kubeproxy-config-k8s-io-v1alpha1-LocalMode"><code>LocalMode</code></a>
</td>
<td>
DetectLocalMode determines mode to use for detecting local traffic, defaults to LocalModeClusterCIDR</td>
DetectLocalMode determines mode to use for detecting local traffic, defaults to LocalModeClusterCIDR
</td>
</tr>
</tbody>
</table>
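
A minimal `KubeProxyConfiguration` sketch covering a few of the fields above, plus the `iptables` and `conntrack` sub-structures described below; the CIDR and kubeconfig path are illustrative.

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clusterCIDR: 10.244.0.0/16                          # illustrative pod CIDR
mode: iptables
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf   # illustrative path
iptables:
  masqueradeAll: false
  syncPeriod: 30s
conntrack:
  maxPerCore: 32768
  tcpEstablishedTimeout: 24h
```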
## `KubeProxyConntrackConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConntrackConfiguration}
**Appears in:**
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
KubeProxyConntrackConfiguration contains conntrack settings for
the Kubernetes proxy server.
@ -241,59 +210,49 @@ the Kubernetes proxy server.
<tbody>
<tr><td><code>maxPerCore</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
maxPerCore is the maximum number of NAT connections to track
per CPU core (0 to leave the limit as-is and ignore min).</td>
per CPU core (0 to leave the limit as-is and ignore min).
</td>
</tr>
<tr><td><code>min</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
min is the minimum value of connect-tracking records to allocate,
regardless of conntrackMaxPerCore (set maxPerCore=0 to leave the limit as-is).</td>
regardless of conntrackMaxPerCore (set maxPerCore=0 to leave the limit as-is).
</td>
</tr>
<tr><td><code>tcpEstablishedTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
tcpEstablishedTimeout is how long an idle TCP connection will be kept open
(e.g. '2s'). Must be greater than 0 to set.</td>
(e.g. '2s'). Must be greater than 0 to set.
</td>
</tr>
<tr><td><code>tcpCloseWaitTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
tcpCloseWaitTimeout is how long an idle conntrack entry
in CLOSE_WAIT state will remain in the conntrack
table. (e.g. '60s'). Must be greater than 0 to set.</td>
table. (e.g. '60s'). Must be greater than 0 to set.
</td>
</tr>
</tbody>
</table>
## `KubeProxyIPTablesConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration}
**Appears in:**
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
KubeProxyIPTablesConfiguration contains iptables-related configuration
details for the Kubernetes proxy server.
@ -302,57 +261,47 @@ details for the Kubernetes proxy server.
<tbody>
<tr><td><code>masqueradeBit</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using
the pure iptables proxy mode. Values must be within the range [0, 31].</td>
the pure iptables proxy mode. Values must be within the range [0, 31].
</td>
</tr>
<tr><td><code>masqueradeAll</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode.</td>
masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode.
</td>
</tr>
<tr><td><code>syncPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
syncPeriod is the period that iptables rules are refreshed (e.g. '5s', '1m',
'2h22m'). Must be greater than 0.</td>
'2h22m'). Must be greater than 0.
</td>
</tr>
<tr><td><code>minSyncPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
minSyncPeriod is the minimum period that iptables rules are refreshed (e.g. '5s', '1m',
'2h22m').</td>
'2h22m').
</td>
</tr>
</tbody>
</table>
## `KubeProxyIPVSConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPVSConfiguration}
**Appears in:**
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
KubeProxyIPVSConfiguration contains ipvs-related configuration
details for the Kubernetes proxy server.
@ -361,93 +310,79 @@ details for the Kubernetes proxy server.
<tbody>
<tr><td><code>syncPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
syncPeriod is the period that ipvs rules are refreshed (e.g. '5s', '1m',
'2h22m'). Must be greater than 0.</td>
'2h22m'). Must be greater than 0.
</td>
</tr>
<tr><td><code>minSyncPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
minSyncPeriod is the minimum period that ipvs rules are refreshed (e.g. '5s', '1m',
'2h22m').</td>
'2h22m').
</td>
</tr>
<tr><td><code>scheduler</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
ipvs scheduler</td>
ipvs scheduler
</td>
</tr>
<tr><td><code>excludeCIDRs</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
excludeCIDRs is a list of CIDRs which the ipvs proxier should not touch
when cleaning up ipvs services.</td>
when cleaning up ipvs services.
</td>
</tr>
<tr><td><code>strictARP</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
strictARP configures arp_ignore and arp_announce to avoid answering ARP queries
from kube-ipvs0 interface</td>
from kube-ipvs0 interface
</td>
</tr>
<tr><td><code>tcpTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
tcpTimeout is the timeout value used for idle IPVS TCP sessions.
The default value is 0, which preserves the current timeout value on the system.</td>
The default value is 0, which preserves the current timeout value on the system.
</td>
</tr>
<tr><td><code>tcpFinTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
tcpFinTimeout is the timeout value used for IPVS TCP sessions after receiving a FIN.
The default value is 0, which preserves the current timeout value on the system.</td>
The default value is 0, which preserves the current timeout value on the system.
</td>
</tr>
<tr><td><code>udpTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
udpTimeout is the timeout value used for IPVS UDP packets.
The default value is 0, which preserves the current timeout value on the system.</td>
The default value is 0, which preserves the current timeout value on the system.
</td>
</tr>
</tbody>
</table>
## `KubeProxyWinkernelConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyWinkernelConfiguration}
**Appears in:**
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
KubeProxyWinkernelConfiguration contains Windows/HNS settings for
the Kubernetes proxy server.
@ -456,65 +391,51 @@ the Kubernetes proxy server.
<tbody>
<tr><td><code>networkName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
networkName is the name of the network kube-proxy will use
to create endpoints and policies</td>
to create endpoints and policies
</td>
</tr>
<tr><td><code>sourceVip</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
sourceVip is the IP address of the source VIP endpoint used for
NAT when loadbalancing</td>
NAT when loadbalancing
</td>
</tr>
<tr><td><code>enableDSR</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableDSR tells kube-proxy whether HNS policies should be created
with DSR</td>
with DSR
</td>
</tr>
</tbody>
</table>
## `LocalMode` {#kubeproxy-config-k8s-io-v1alpha1-LocalMode}
(Alias of `string`)
**Appears in:**
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
LocalMode represents modes to detect local traffic from the node
## `ProxyMode` {#kubeproxy-config-k8s-io-v1alpha1-ProxyMode}
(Alias of `string`)
**Appears in:**
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
ProxyMode represents modes used by the Kubernetes proxy server.
Currently, three proxy modes are available on the Linux platform: 'userspace' (older, going to be EOL), 'iptables'
@ -536,23 +457,13 @@ this always falls back to the userspace proxy.
## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
**Appears in:**
- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
ClientConnectionConfiguration contains details for constructing a client.
<table class="table">
@ -560,102 +471,51 @@ ClientConnectionConfiguration contains details for constructing a client.
<tbody>
<tr><td><code>kubeconfig</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
kubeconfig is the path to a KubeConfig file.</td>
kubeconfig is the path to a KubeConfig file.
</td>
</tr>
<tr><td><code>acceptContentTypes</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
default value of 'application/json'. This field will control all connections to the server used by a particular
client.</td>
client.
</td>
</tr>
<tr><td><code>contentType</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
contentType is the content type used when sending data to the server from this client.</td>
contentType is the content type used when sending data to the server from this client.
</td>
</tr>
<tr><td><code>qps</code> <B>[Required]</B><br/>
<code>float32</code>
</td>
<td>
qps controls the number of queries per second allowed for this connection.</td>
qps controls the number of queries per second allowed for this connection.
</td>
</tr>
<tr><td><code>burst</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
burst allows extra queries to accumulate when a client is exceeding its rate.</td>
</tr>
</tbody>
</table>
## `DebuggingConfiguration` {#DebuggingConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
DebuggingConfiguration holds configuration for Debugging related features.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>enableProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
burst allows extra queries to accumulate when a client is exceeding its rate.
</td>
<td>
enableProfiling enables profiling via web interface host:port/debug/pprof/</td>
</tr>
<tr><td><code>enableContentionProfiling</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
enableContentionProfiling enables lock contention profiling, if
enableProfiling is true.</td>
</tr>
</tbody>
</table>
## `FormatOptions` {#FormatOptions}
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
FormatOptions contains options for the different logging formats.
@ -665,28 +525,23 @@ FormatOptions contains options for the different logging formats.
<tbody>
<tr><td><code>json</code> <B>[Required]</B><br/>
<a href="#JSONOptions"><code>JSONOptions</code></a>
</td>
<td>
[Experimental] JSON contains options for logging format "json".</td>
[Experimental] JSON contains options for logging format "json".
</td>
</tr>
</tbody>
</table>
## `JSONOptions` {#JSONOptions}
**Appears in:**
- [FormatOptions](#FormatOptions)
JSONOptions contains options for logging format "json".
<table class="table">
@ -694,213 +549,31 @@ JSONOptions contains options for logging format "json".
<tbody>
<tr><td><code>splitStream</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] SplitStream redirects error messages to stderr while
info messages go to stdout, with buffering. The default is to write
both to stdout, without buffering.</td>
both to stdout, without buffering.
</td>
</tr>
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/>
<code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
</td>
<td>
[Experimental] InfoBufferSize sets the size of the info stream when
using split streams. The default is zero, which disables buffering.</td>
</tr>
</tbody>
</table>
## `LeaderElectionConfiguration` {#LeaderElectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)
LeaderElectionConfiguration defines the configuration of leader election
clients for components that can run with leader election enabled.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>leaderElect</code> <B>[Required]</B><br/>
<code>bool</code>
using split streams. The default is zero, which disables buffering.
</td>
<td>
leaderElect enables a leader election client to gain leadership
before executing the main loop. Enable this when running replicated
components for high availability.</td>
</tr>
<tr><td><code>leaseDuration</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
leaseDuration is the duration that non-leader candidates will wait
after observing a leadership renewal until attempting to acquire
leadership of a led but unrenewed leader slot. This is effectively the
maximum duration that a leader can be stopped before it is replaced
by another candidate. This is only applicable if leader election is
enabled.</td>
</tr>
<tr><td><code>renewDeadline</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
renewDeadline is the interval between attempts by the acting master to
renew a leadership slot before it stops leading. This must be less
than or equal to the lease duration. This is only applicable if leader
election is enabled.</td>
</tr>
<tr><td><code>retryPeriod</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
retryPeriod is the duration the clients should wait between attempting
acquisition and renewal of a leadership. This is only applicable if
leader election is enabled.</td>
</tr>
<tr><td><code>resourceLock</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceLock indicates the resource object type that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceName indicates the name of resource object that will be used to lock
during leader election cycles.</td>
</tr>
<tr><td><code>resourceNamespace</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
resourceName indicates the namespace of resource object that will be used to lock
during leader election cycles.</td>
</tr>
</tbody>
</table>
## `LoggingConfiguration` {#LoggingConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
LoggingConfiguration contains logging options
Refer [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Format Flag specifies the structure of log messages.
default value of format is `text`</td>
</tr>
<tr><td><code>flushFrequency</code> <B>[Required]</B><br/>
<a href="https://godoc.org/time#Duration"><code>time.Duration</code></a>
</td>
<td>
Maximum number of seconds between log flushes. Ignored if the
selected logging backend writes log messages without buffering.</td>
</tr>
<tr><td><code>verbosity</code> <B>[Required]</B><br/>
<code>uint32</code>
</td>
<td>
Verbosity is the threshold that determines which log messages are
logged. Default is zero which logs only the most important
messages. Higher values enable additional messages. Error messages
are always logged.</td>
</tr>
<tr><td><code>vmodule</code> <B>[Required]</B><br/>
<a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a>
</td>
<td>
VModule overrides the verbosity threshold for individual files.
Only supported for "text" log format.</td>
</tr>
<tr><td><code>sanitization</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.`)</td>
</tr>
<tr><td><code>options</code> <B>[Required]</B><br/>
<a href="#FormatOptions"><code>FormatOptions</code></a>
</td>
<td>
[Experimental] Options holds additional parameters that are specific
to the different logging formats. Only the options for the selected
format get used, but all of them get validated.</td>
</tr>
</tbody>
</table>
## `VModuleConfiguration` {#VModuleConfiguration}
(Alias of `[]k8s.io/component-base/config/v1alpha1.VModuleItem`)
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
VModuleConfiguration is a collection of individual file names or patterns

View File

@ -20,7 +20,6 @@ auto_generated: true
## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta2-DefaultPreemptionArgs}
@ -68,8 +67,6 @@ that play a role in the number of candidates shortlisted. Must be at least
</tbody>
</table>
## `InterPodAffinityArgs` {#kubescheduler-config-k8s-io-v1beta2-InterPodAffinityArgs}
@ -100,8 +97,6 @@ matching hard affinity to the incoming pod.</td>
</tbody>
</table>
## `KubeSchedulerConfiguration` {#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration}
@ -230,8 +225,6 @@ with the extender. These extenders are shared by all scheduler profiles.</td>
</tbody>
</table>
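
A minimal `KubeSchedulerConfiguration` (v1beta2) sketch with one profile and one plugin configuration; the kubeconfig path and the scoring weights are illustrative.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # illustrative path
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: LeastAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
```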
## `NodeAffinityArgs` {#kubescheduler-config-k8s-io-v1beta2-NodeAffinityArgs}
@ -251,7 +244,7 @@ NodeAffinityArgs holds arguments to configure the NodeAffinity plugin.
<tr><td><code>addedAffinity</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
</td>
<td>
AddedAffinity is applied to all Pods additionally to the NodeAffinity
@ -266,8 +259,6 @@ a specific Node (such as Daemonset Pods) might remain unschedulable.</td>
</tbody>
</table>
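
For instance, `addedAffinity` can restrict a scheduling profile to a specific zone; the profile name and node label values below are illustrative.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: zone-a-scheduler              # illustrative profile name
    pluginConfig:
      - name: NodeAffinity
        args:
          addedAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: topology.kubernetes.io/zone
                      operator: In
                      values:
                        - zone-a                 # illustrative zone value
```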
## `NodeResourcesBalancedAllocationArgs` {#kubescheduler-config-k8s-io-v1beta2-NodeResourcesBalancedAllocationArgs}
@ -297,8 +288,6 @@ NodeResourcesBalancedAllocationArgs holds arguments used to configure NodeResour
</tbody>
</table>
## `NodeResourcesFitArgs` {#kubescheduler-config-k8s-io-v1beta2-NodeResourcesFitArgs}
@ -349,8 +338,6 @@ The default strategy is LeastAllocated with an equal "cpu" and "memory" weight.<
</tbody>
</table>
## `PodTopologySpreadArgs` {#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadArgs}
@ -370,7 +357,7 @@ PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread pl
<tr><td><code>defaultConstraints</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
</td>
<td>
DefaultConstraints defines topology spread constraints to be applied to
@ -401,8 +388,6 @@ and to "System" if enabled.</td>
</tbody>
</table>
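
A sketch of cluster-level default constraints for the PodTopologySpread plugin; the single zone-spreading constraint shown is illustrative.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
          defaultingType: List                   # use the constraints listed above instead of the built-in defaults
```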
## `VolumeBindingArgs` {#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs}
@ -452,15 +437,12 @@ All points must be sorted in increasing order by utilization.</td>
</tbody>
</table>
## `Extender` {#kubescheduler-config-k8s-io-v1beta2-Extender}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
@ -586,15 +568,12 @@ fail when the extender returns an error or is not reachable.</td>
</tbody>
</table>
## `ExtenderManagedResource` {#kubescheduler-config-k8s-io-v1beta2-ExtenderManagedResource}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender)
@ -627,15 +606,12 @@ resource when applying predicates.</td>
</tbody>
</table>
## `ExtenderTLSConfig` {#kubescheduler-config-k8s-io-v1beta2-ExtenderTLSConfig}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta2-Extender)
@ -719,15 +695,12 @@ CAData takes precedence over CAFile</td>
</tbody>
</table>
## `KubeSchedulerProfile` {#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
@ -777,15 +750,12 @@ for that plugin.</td>
</tbody>
</table>
## `Plugin` {#kubescheduler-config-k8s-io-v1beta2-Plugin}
**Appears in:**
- [PluginSet](#kubescheduler-config-k8s-io-v1beta2-PluginSet)
@ -816,15 +786,12 @@ Plugin specifies a plugin name and its weight when applicable. Weight is used on
</tbody>
</table>
## `PluginConfig` {#kubescheduler-config-k8s-io-v1beta2-PluginConfig}
**Appears in:**
- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile)
@ -857,15 +824,12 @@ It is up to the plugin to process these Args.
</tbody>
</table>
## `PluginSet` {#kubescheduler-config-k8s-io-v1beta2-PluginSet}
**Appears in:**
- [Plugins](#kubescheduler-config-k8s-io-v1beta2-Plugins)
@ -901,15 +865,12 @@ When all default plugins need to be disabled, an array containing only one "&low
</tbody>
</table>
## `Plugins` {#kubescheduler-config-k8s-io-v1beta2-Plugins}
**Appears in:**
- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerProfile)
@ -1026,15 +987,12 @@ The scheduler call these plugins in order. Scheduler skips the rest of these plu
</tbody>
</table>
## `PodTopologySpreadConstraintsDefaulting` {#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadConstraintsDefaulting}
(Alias of `string`)
**Appears in:**
- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta2-PodTopologySpreadArgs)
@ -1043,15 +1001,12 @@ for the PodTopologySpread plugin.
## `RequestedToCapacityRatioParam` {#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam}
**Appears in:**
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy)
@ -1074,17 +1029,13 @@ RequestedToCapacityRatioParam define RequestedToCapacityRatio parameters
</tbody>
</table>
## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta2-ResourceSpec}
**Appears in:**
- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesBalancedAllocationArgs)
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy)
@ -1115,15 +1066,12 @@ ResourceSpec represents a single resource.
</tbody>
</table>
## `ScoringStrategy` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy}
**Appears in:**
- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta2-NodeResourcesFitArgs)
@ -1165,15 +1113,12 @@ Weight defaults to 1 if not specified or explicitly set to 0.</td>
</tbody>
</table>
## `ScoringStrategyType` {#kubescheduler-config-k8s-io-v1beta2-ScoringStrategyType}
(Alias of `string`)
**Appears in:**
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta2-ScoringStrategy)
@ -1181,17 +1126,13 @@ ScoringStrategyType the type of scoring strategy used in NodeResourcesFit plugin
## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta2-UtilizationShapePoint}
**Appears in:**
- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta2-VolumeBindingArgs)
- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1beta2-RequestedToCapacityRatioParam)
@ -1225,14 +1166,12 @@ UtilizationShapePoint represents single point of priority function shape.
## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
@ -1295,7 +1234,6 @@ client.</td>
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
@ -1333,7 +1271,6 @@ enableProfiling is true.</td>
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
@ -1362,7 +1299,6 @@ FormatOptions contains options for the different logging formats.
**Appears in:**
- [FormatOptions](#FormatOptions)
@ -1385,7 +1321,7 @@ both to stdout, without buffering.</td>
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/>
<code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
</td>
<td>
[Experimental] InfoBufferSize sets the size of the info stream when
@ -1402,7 +1338,6 @@ using split streams. The default is zero, which disables buffering.</td>
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
@ -1495,7 +1430,6 @@ during leader election cycles.</td>
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1574,7 +1508,6 @@ format get used, but all of them get validated.</td>
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)


@ -20,7 +20,6 @@ auto_generated: true
## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta3-DefaultPreemptionArgs}
@ -68,8 +67,6 @@ that play a role in the number of candidates shortlisted. Must be at least
</tbody>
</table>
## `InterPodAffinityArgs` {#kubescheduler-config-k8s-io-v1beta3-InterPodAffinityArgs}
@ -100,8 +97,6 @@ matching hard affinity to the incoming pod.</td>
</tbody>
</table>
## `KubeSchedulerConfiguration` {#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration}
@ -212,8 +207,6 @@ with the extender. These extenders are shared by all scheduler profiles.</td>
</tbody>
</table>
## `NodeAffinityArgs` {#kubescheduler-config-k8s-io-v1beta3-NodeAffinityArgs}
@ -233,7 +226,7 @@ NodeAffinityArgs holds arguments to configure the NodeAffinity plugin.
<tr><td><code>addedAffinity</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#nodeaffinity-v1-core"><code>core/v1.NodeAffinity</code></a>
</td>
<td>
AddedAffinity is applied to all Pods additionally to the NodeAffinity
@ -248,8 +241,6 @@ a specific Node (such as Daemonset Pods) might remain unschedulable.</td>
</tbody>
</table>
## `NodeResourcesBalancedAllocationArgs` {#kubescheduler-config-k8s-io-v1beta3-NodeResourcesBalancedAllocationArgs}
@ -279,8 +270,6 @@ NodeResourcesBalancedAllocationArgs holds arguments used to configure NodeResour
</tbody>
</table>
## `NodeResourcesFitArgs` {#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs}
@ -331,8 +320,6 @@ The default strategy is LeastAllocated with an equal "cpu" and "memory" weight.<
</tbody>
</table>
## `PodTopologySpreadArgs` {#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadArgs}
@ -352,7 +339,7 @@ PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread pl
<tr><td><code>defaultConstraints</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#topologyspreadconstraint-v1-core"><code>[]core/v1.TopologySpreadConstraint</code></a>
</td>
<td>
DefaultConstraints defines topology spread constraints to be applied to
@ -383,8 +370,6 @@ and to "System" if enabled.</td>
</tbody>
</table>
## `VolumeBindingArgs` {#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs}
@ -434,15 +419,12 @@ All points must be sorted in increasing order by utilization.</td>
</tbody>
</table>
## `Extender` {#kubescheduler-config-k8s-io-v1beta3-Extender}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
@ -568,15 +550,12 @@ fail when the extender returns an error or is not reachable.</td>
</tbody>
</table>
## `ExtenderManagedResource` {#kubescheduler-config-k8s-io-v1beta3-ExtenderManagedResource}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta3-Extender)
@ -609,15 +588,12 @@ resource when applying predicates.</td>
</tbody>
</table>
## `ExtenderTLSConfig` {#kubescheduler-config-k8s-io-v1beta3-ExtenderTLSConfig}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta3-Extender)
@ -701,15 +677,12 @@ CAData takes precedence over CAFile</td>
</tbody>
</table>
## `KubeSchedulerProfile` {#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerProfile}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
@ -759,15 +732,12 @@ for that plugin.</td>
</tbody>
</table>
## `Plugin` {#kubescheduler-config-k8s-io-v1beta3-Plugin}
**Appears in:**
- [PluginSet](#kubescheduler-config-k8s-io-v1beta3-PluginSet)
@ -798,15 +768,12 @@ Plugin specifies a plugin name and its weight when applicable. Weight is used on
</tbody>
</table>
## `PluginConfig` {#kubescheduler-config-k8s-io-v1beta3-PluginConfig}
**Appears in:**
- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerProfile)
@ -839,15 +806,12 @@ It is up to the plugin to process these Args.
</tbody>
</table>
## `PluginSet` {#kubescheduler-config-k8s-io-v1beta3-PluginSet}
**Appears in:**
- [Plugins](#kubescheduler-config-k8s-io-v1beta3-Plugins)
@ -883,15 +847,12 @@ When all default plugins need to be disabled, an array containing only one "&low
</tbody>
</table>
## `Plugins` {#kubescheduler-config-k8s-io-v1beta3-Plugins}
**Appears in:**
- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerProfile)
@ -1023,15 +984,12 @@ plugin through MultiPoint. This follows the same behavior as all other extension
</tbody>
</table>
## `PodTopologySpreadConstraintsDefaulting` {#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadConstraintsDefaulting}
(Alias of `string`)
**Appears in:**
- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadArgs)
@ -1040,15 +998,12 @@ for the PodTopologySpread plugin.
## `RequestedToCapacityRatioParam` {#kubescheduler-config-k8s-io-v1beta3-RequestedToCapacityRatioParam}
**Appears in:**
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy)
@ -1071,17 +1026,13 @@ RequestedToCapacityRatioParam define RequestedToCapacityRatio parameters
</tbody>
</table>
## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta3-ResourceSpec}
**Appears in:**
- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1beta3-NodeResourcesBalancedAllocationArgs)
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy)
@ -1112,15 +1063,12 @@ ResourceSpec represents a single resource.
</tbody>
</table>
## `ScoringStrategy` {#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy}
**Appears in:**
- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta3-NodeResourcesFitArgs)
@ -1162,15 +1110,12 @@ Weight defaults to 1 if not specified or explicitly set to 0.</td>
</tbody>
</table>
## `ScoringStrategyType` {#kubescheduler-config-k8s-io-v1beta3-ScoringStrategyType}
(Alias of `string`)
**Appears in:**
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1beta3-ScoringStrategy)
@ -1178,17 +1123,13 @@ ScoringStrategyType the type of scoring strategy used in NodeResourcesFit plugin
## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta3-UtilizationShapePoint}
**Appears in:**
- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs)
- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1beta3-RequestedToCapacityRatioParam)
@ -1222,16 +1163,13 @@ UtilizationShapePoint represents single point of priority function shape.
## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
@ -1294,10 +1232,8 @@ client.</td>
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
DebuggingConfiguration holds configuration for Debugging related features.
@ -1334,7 +1270,6 @@ enableProfiling is true.</td>
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
@ -1363,7 +1298,6 @@ FormatOptions contains options for the different logging formats.
**Appears in:**
- [FormatOptions](#FormatOptions)
@ -1386,7 +1320,7 @@ both to stdout, without buffering.</td>
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/>
<code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
</td>
<td>
[Experimental] InfoBufferSize sets the size of the info stream when
@ -1403,9 +1337,7 @@ using split streams. The default is zero, which disables buffering.</td>
**Appears in:**
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta2-KubeSchedulerConfiguration)
- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
@ -1498,7 +1430,6 @@ during leader election cycles.</td>
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1577,7 +1508,6 @@ format get used, but all of them get validated.</td>
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)


@ -1,799 +0,0 @@
---
title: kube-scheduler Policy Configuration (v1)
content_type: tool-reference
package: kubescheduler.config.k8s.io/v1
auto_generated: true
---
## Resource Types
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
## `Policy` {#kubescheduler-config-k8s-io-v1-Policy}
Policy describes a struct for a policy resource used in api.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubescheduler.config.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>Policy</code></td></tr>
<tr><td><code>predicates</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PredicatePolicy"><code>[]PredicatePolicy</code></a>
</td>
<td>
Holds the information to configure the fit predicate functions</td>
</tr>
<tr><td><code>priorities</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PriorityPolicy"><code>[]PriorityPolicy</code></a>
</td>
<td>
Holds the information to configure the priority functions</td>
</tr>
<tr><td><code>extenders</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-LegacyExtender"><code>[]LegacyExtender</code></a>
</td>
<td>
Holds the information to communicate with the extender(s)</td>
</tr>
<tr><td><code>hardPodAffinitySymmetricWeight</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule
corresponding to every RequiredDuringScheduling affinity rule.
HardPodAffinitySymmetricWeight represents the weight of implicit PreferredDuringScheduling affinity rule, in the range 1-100.</td>
</tr>
<tr><td><code>alwaysCheckAllPredicates</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
When AlwaysCheckAllPredicates is set to true, scheduler checks all
the configured predicates even after one or more of them fails.
When the flag is set to false, scheduler skips checking the rest
of the predicates after it finds one predicate that failed.</td>
</tr>
</tbody>
</table>
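To show how these fields fit together, here is a minimal, hypothetical Policy file; the predicate and priority names are placeholders rather than values defined by this reference.
```yaml
# A minimal, hypothetical Policy file based on the fields described above.
# "ExamplePredicate" and "ExamplePriority" are placeholder names, not
# predicates or priorities defined by this reference.
apiVersion: kubescheduler.config.k8s.io/v1
kind: Policy
predicates:
  - name: ExamplePredicate
priorities:
  - name: ExamplePriority
    weight: 1
hardPodAffinitySymmetricWeight: 10
alwaysCheckAllPredicates: false
```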
## `ExtenderManagedResource` {#kubescheduler-config-k8s-io-v1-ExtenderManagedResource}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender)
- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender)
ExtenderManagedResource describes the arguments of extended resources
managed by an extender.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Name is the extended resource name.</td>
</tr>
<tr><td><code>ignoredByScheduler</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
IgnoredByScheduler indicates whether kube-scheduler should ignore this
resource when applying predicates.</td>
</tr>
</tbody>
</table>
## `ExtenderTLSConfig` {#kubescheduler-config-k8s-io-v1-ExtenderTLSConfig}
**Appears in:**
- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender)
- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender)
ExtenderTLSConfig contains settings to enable TLS with extender
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>insecure</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
Server should be accessed without verifying the TLS certificate. For testing only.</td>
</tr>
<tr><td><code>serverName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
ServerName is passed to the server for SNI and is used in the client to check server
certificates against. If ServerName is empty, the hostname used to contact the
server is used.</td>
</tr>
<tr><td><code>certFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Server requires TLS client certificate authentication</td>
</tr>
<tr><td><code>keyFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Server requires TLS client certificate authentication</td>
</tr>
<tr><td><code>caFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Trusted root certificates for server</td>
</tr>
<tr><td><code>certData</code> <B>[Required]</B><br/>
<code>[]byte</code>
</td>
<td>
CertData holds PEM-encoded bytes (typically read from a client certificate file).
CertData takes precedence over CertFile</td>
</tr>
<tr><td><code>keyData</code> <B>[Required]</B><br/>
<code>[]byte</code>
</td>
<td>
KeyData holds PEM-encoded bytes (typically read from a client certificate key file).
KeyData takes precedence over KeyFile</td>
</tr>
<tr><td><code>caData</code> <B>[Required]</B><br/>
<code>[]byte</code>
</td>
<td>
CAData holds PEM-encoded bytes (typically read from a root certificates bundle).
CAData takes precedence over CAFile</td>
</tr>
</tbody>
</table>
## `LabelPreference` {#kubescheduler-config-k8s-io-v1-LabelPreference}
**Appears in:**
- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument)
LabelPreference holds the parameters that are used to configure the corresponding priority function
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>label</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Used to identify node "groups"</td>
</tr>
<tr><td><code>presence</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
This is a boolean flag
If true, higher priority is given to nodes that have the label
If false, higher priority is given to nodes that do not have the label</td>
</tr>
</tbody>
</table>
## `LabelsPresence` {#kubescheduler-config-k8s-io-v1-LabelsPresence}
**Appears in:**
- [PredicateArgument](#kubescheduler-config-k8s-io-v1-PredicateArgument)
LabelsPresence holds the parameters that are used to configure the corresponding predicate in scheduler policy configuration.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>labels</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
The list of labels that identify node "groups"
All of the labels should be either present (or absent) for the node to be considered a fit for hosting the pod</td>
</tr>
<tr><td><code>presence</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
The boolean flag that indicates whether the labels should be present or absent from the node</td>
</tr>
</tbody>
</table>
## `LegacyExtender` {#kubescheduler-config-k8s-io-v1-LegacyExtender}
**Appears in:**
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
LegacyExtender holds the parameters used to communicate with the extender. If a verb is unspecified/empty,
it is assumed that the extender chose not to provide that extension.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>urlPrefix</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
URLPrefix at which the extender is available</td>
</tr>
<tr><td><code>filterVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to extender.</td>
</tr>
<tr><td><code>preemptVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the preempt call, empty if not supported. This verb is appended to the URLPrefix when issuing the preempt call to extender.</td>
</tr>
<tr><td><code>prioritizeVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the prioritize call, empty if not supported. This verb is appended to the URLPrefix when issuing the prioritize call to extender.</td>
</tr>
<tr><td><code>weight</code> <B>[Required]</B><br/>
<code>int64</code>
</td>
<td>
The numeric multiplier for the node scores that the prioritize call generates.
The weight should be a positive integer</td>
</tr>
<tr><td><code>bindVerb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb for the bind call, empty if not supported. This verb is appended to the URLPrefix when issuing the bind call to extender.
If this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender
can implement this function.</td>
</tr>
<tr><td><code>enableHttps</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
EnableHTTPS specifies whether https should be used to communicate with the extender</td>
</tr>
<tr><td><code>tlsConfig</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ExtenderTLSConfig"><code>ExtenderTLSConfig</code></a>
</td>
<td>
TLSConfig specifies the transport layer security config</td>
</tr>
<tr><td><code>httpTimeout</code> <B>[Required]</B><br/>
<a href="https://godoc.org/time#Duration"><code>time.Duration</code></a>
</td>
<td>
HTTPTimeout specifies the timeout duration for a call to the extender. Filter timeout fails the scheduling of the pod. Prioritize
timeout is ignored, k8s/other extenders priorities are used to select the node.</td>
</tr>
<tr><td><code>nodeCacheCapable</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
NodeCacheCapable specifies that the extender is capable of caching node information,
so the scheduler should only send minimal information about the eligible nodes
assuming that the extender already cached full details of all nodes in the cluster</td>
</tr>
<tr><td><code>managedResources</code><br/>
<a href="#kubescheduler-config-k8s-io-v1-ExtenderManagedResource"><code>[]ExtenderManagedResource</code></a>
</td>
<td>
ManagedResources is a list of extended resources that are managed by
this extender.
- A pod will be sent to the extender on the Filter, Prioritize and Bind
(if the extender is the binder) phases iff the pod requests at least
one of the extended resources in this list. If empty or unspecified,
all pods will be sent to this extender.
- If IgnoredByScheduler is set to true for a resource, kube-scheduler
will skip checking the resource in predicates.</td>
</tr>
<tr><td><code>ignorable</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
Ignorable specifies if the extender is ignorable, i.e. scheduling should not
fail when the extender returns an error or is not reachable.</td>
</tr>
</tbody>
</table>
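As a rough illustration, a single hypothetical entry in the `extenders` list of a Policy file could look like the sketch below; the URL, verbs, and extended resource name are placeholders.
```yaml
# A hypothetical entry in the extenders list of a Policy file.
# The URL, verbs, and extended resource name are placeholders.
extenders:
  - urlPrefix: "https://extender.example.com/scheduler"
    filterVerb: "filter"
    prioritizeVerb: "prioritize"
    weight: 1
    enableHttps: true
    nodeCacheCapable: false
    managedResources:
      - name: example.com/foo
        ignoredByScheduler: true
    ignorable: false
```
Verbs that are left empty are treated as extensions the extender chose not to provide.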
## `PredicateArgument` {#kubescheduler-config-k8s-io-v1-PredicateArgument}
**Appears in:**
- [PredicatePolicy](#kubescheduler-config-k8s-io-v1-PredicatePolicy)
PredicateArgument represents the arguments to configure predicate functions in scheduler policy configuration.
Only one of its members may be specified
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>serviceAffinity</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ServiceAffinity"><code>ServiceAffinity</code></a>
</td>
<td>
The predicate that provides affinity for pods belonging to a service
It uses a label to identify nodes that belong to the same "group"</td>
</tr>
<tr><td><code>labelsPresence</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-LabelsPresence"><code>LabelsPresence</code></a>
</td>
<td>
The predicate that checks whether a particular node has a certain label
defined or not, regardless of value</td>
</tr>
</tbody>
</table>
## `PredicatePolicy` {#kubescheduler-config-k8s-io-v1-PredicatePolicy}
**Appears in:**
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
PredicatePolicy describes a struct of a predicate policy.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Identifier of the predicate policy
For a custom predicate, the name can be user-defined
For the Kubernetes provided predicates, the name is the identifier of the pre-defined predicate</td>
</tr>
<tr><td><code>argument</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PredicateArgument"><code>PredicateArgument</code></a>
</td>
<td>
Holds the parameters to configure the given predicate</td>
</tr>
</tbody>
</table>
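For example, a hypothetical predicate policy that uses the `labelsPresence` argument could be sketched as follows; the policy name and label key are placeholders.
```yaml
# A hypothetical predicate policy using the labelsPresence argument.
# The policy name and label key are placeholders.
predicates:
  - name: ExampleLabelPredicate
    argument:
      labelsPresence:
        labels:
          - example.com/dedicated
        presence: true
```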
## `PriorityArgument` {#kubescheduler-config-k8s-io-v1-PriorityArgument}
**Appears in:**
- [PriorityPolicy](#kubescheduler-config-k8s-io-v1-PriorityPolicy)
PriorityArgument represents the arguments to configure priority functions in scheduler policy configuration.
Only one of its members may be specified
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>serviceAntiAffinity</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ServiceAntiAffinity"><code>ServiceAntiAffinity</code></a>
</td>
<td>
The priority function that ensures a good spread (anti-affinity) for pods belonging to a service
It uses a label to identify nodes that belong to the same "group"</td>
</tr>
<tr><td><code>labelPreference</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-LabelPreference"><code>LabelPreference</code></a>
</td>
<td>
The priority function that checks whether a particular node has a certain label
defined or not, regardless of value</td>
</tr>
<tr><td><code>requestedToCapacityRatioArguments</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments"><code>RequestedToCapacityRatioArguments</code></a>
</td>
<td>
The RequestedToCapacityRatio priority function is parametrized with function shape.</td>
</tr>
</tbody>
</table>
## `PriorityPolicy` {#kubescheduler-config-k8s-io-v1-PriorityPolicy}
**Appears in:**
- [Policy](#kubescheduler-config-k8s-io-v1-Policy)
PriorityPolicy describes a struct of a priority policy.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Identifier of the priority policy
For a custom priority, the name can be user-defined
For the Kubernetes provided priority functions, the name is the identifier of the pre-defined priority function</td>
</tr>
<tr><td><code>weight</code> <B>[Required]</B><br/>
<code>int64</code>
</td>
<td>
The numeric multiplier for the node scores that the priority function generates
The weight should be non-zero and can be a positive or a negative integer</td>
</tr>
<tr><td><code>argument</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-PriorityArgument"><code>PriorityArgument</code></a>
</td>
<td>
Holds the parameters to configure the given priority function</td>
</tr>
</tbody>
</table>
## `RequestedToCapacityRatioArguments` {#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments}
**Appears in:**
- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument)
RequestedToCapacityRatioArguments holds arguments specific to RequestedToCapacityRatio priority function.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>shape</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-UtilizationShapePoint"><code>[]UtilizationShapePoint</code></a>
</td>
<td>
Array of point defining priority function shape.</td>
</tr>
<tr><td><code>resources</code> <B>[Required]</B><br/>
<a href="#kubescheduler-config-k8s-io-v1-ResourceSpec"><code>[]ResourceSpec</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
</tbody>
</table>
## `ResourceSpec` {#kubescheduler-config-k8s-io-v1-ResourceSpec}
**Appears in:**
- [RequestedToCapacityRatioArguments](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments)
ResourceSpec represents single resource and weight for bin packing of priority RequestedToCapacityRatioArguments.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Name of the resource to be managed by RequestedToCapacityRatio function.</td>
</tr>
<tr><td><code>weight</code> <B>[Required]</B><br/>
<code>int64</code>
</td>
<td>
Weight of the resource.</td>
</tr>
</tbody>
</table>
## `ServiceAffinity` {#kubescheduler-config-k8s-io-v1-ServiceAffinity}
**Appears in:**
- [PredicateArgument](#kubescheduler-config-k8s-io-v1-PredicateArgument)
ServiceAffinity holds the parameters that are used to configure the corresponding predicate in scheduler policy configuration.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>labels</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
The list of labels that identify node "groups"
All of the labels should match for the node to be considered a fit for hosting the pod</td>
</tr>
</tbody>
</table>
## `ServiceAntiAffinity` {#kubescheduler-config-k8s-io-v1-ServiceAntiAffinity}
**Appears in:**
- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument)
ServiceAntiAffinity holds the parameters that are used to configure the corresponding priority function
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>label</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Used to identify node "groups"</td>
</tr>
</tbody>
</table>
## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1-UtilizationShapePoint}
**Appears in:**
- [RequestedToCapacityRatioArguments](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments)
UtilizationShapePoint represents single point of priority function shape.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>utilization</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
Utilization (x axis). Valid values are 0 to 100. Fully utilized node maps to 100.</td>
</tr>
<tr><td><code>score</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
Score assigned to given utilization (y axis). Valid values are 0 to 10.</td>
</tr>
</tbody>
</table>
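Putting several of the above types together, a hypothetical priority policy using `requestedToCapacityRatioArguments` might be sketched as follows; the shape points and resource weights are illustrative only.
```yaml
# A hypothetical priority policy using requestedToCapacityRatioArguments.
# The shape points and resource weights are illustrative only.
priorities:
  - name: ExampleBinPackingPriority
    weight: 2
    argument:
      requestedToCapacityRatioArguments:
        shape:
          - utilization: 0
            score: 0
          - utilization: 100
            score: 10
        resources:
          - name: cpu
            weight: 1
          - name: memory
            weight: 1
```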


@ -0,0 +1,281 @@
---
title: Kubelet Configuration (v1alpha1)
content_type: tool-reference
package: kubelet.config.k8s.io/v1alpha1
auto_generated: true
---
## Resource Types
- [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig)
## `FormatOptions` {#FormatOptions}
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
FormatOptions contains options for the different logging formats.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>json</code> <B>[Required]</B><br/>
<a href="#JSONOptions"><code>JSONOptions</code></a>
</td>
<td>
[Experimental] JSON contains options for logging format "json".</td>
</tr>
</tbody>
</table>
## `JSONOptions` {#JSONOptions}
**Appears in:**
- [FormatOptions](#FormatOptions)
JSONOptions contains options for logging format "json".
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>splitStream</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] SplitStream redirects error messages to stderr while
info messages go to stdout, with buffering. The default is to write
both to stdout, without buffering.</td>
</tr>
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
</td>
<td>
[Experimental] InfoBufferSize sets the size of the info stream when
using split streams. The default is zero, which disables buffering.</td>
</tr>
</tbody>
</table>
## `VModuleConfiguration` {#VModuleConfiguration}
(Alias of `[]k8s.io/component-base/config/v1alpha1.VModuleItem`)
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
VModuleConfiguration is a collection of individual file names or patterns
and the corresponding verbosity threshold.
## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig}
CredentialProviderConfig is the configuration containing information about
each exec credential provider. Kubelet reads this configuration from disk and enables
each provider as specified by the CredentialProvider type.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubelet.config.k8s.io/v1alpha1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>CredentialProviderConfig</code></td></tr>
<tr><td><code>providers</code> <B>[Required]</B><br/>
<a href="#kubelet-config-k8s-io-v1alpha1-CredentialProvider"><code>[]CredentialProvider</code></a>
</td>
<td>
providers is a list of credential provider plugins that will be enabled by the kubelet.
Multiple providers may match against a single image, in which case credentials
from all providers will be returned to the kubelet. If multiple providers are called
for a single image, the results are combined. If providers return overlapping
auth keys, the value from the provider earlier in this list is used.</td>
</tr>
</tbody>
</table>
## `CredentialProvider` {#kubelet-config-k8s-io-v1alpha1-CredentialProvider}
**Appears in:**
- [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig)
CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only
invoked when an image being pulled matches the images handled by the plugin (see matchImages).
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
name is the required name of the credential provider. It must match the name of the
provider executable as seen by the kubelet. The executable must be in the kubelet's
bin directory (set by the --image-credential-provider-bin-dir flag).</td>
</tr>
<tr><td><code>matchImages</code> <B>[Required]</B><br/>
<code>[]string</code>
</td>
<td>
matchImages is a required list of strings used to match against images in order to
determine if this provider should be invoked. If one of the strings matches the
requested image from the kubelet, the plugin will be invoked and given a chance
to provide credentials. Images are expected to contain the registry domain
and URL path.
Each entry in matchImages is a pattern which can optionally contain a port and a path.
Globs can be used in the domain, but not in the port or the path. Globs are supported
as subdomains like '&lowast;.k8s.io' or 'k8s.&lowast;.io', and top-level-domains such as 'k8s.&lowast;'.
Matching partial subdomains like 'app&lowast;.k8s.io' is also supported. Each glob can only match
a single subdomain segment, so &lowast;.io does not match &lowast;.k8s.io.
A match exists between an image and a matchImage when all of the below are true:
- Both contain the same number of domain parts and each part matches.
- The URL path of an imageMatch must be a prefix of the target image URL path.
- If the imageMatch contains a port, then the port must match in the image as well.
Example values of matchImages:
- 123456789.dkr.ecr.us-east-1.amazonaws.com
- &lowast;.azurecr.io
- gcr.io
- &lowast;.&lowast;.registry.io
- registry.io:8080/path</td>
</tr>
<tr><td><code>defaultCacheDuration</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
defaultCacheDuration is the default duration the plugin will cache credentials in-memory
if a cache duration is not provided in the plugin response. This field is required.</td>
</tr>
<tr><td><code>apiVersion</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse
MUST use the same encoding version as the input. Current supported values are:
- credentialprovider.kubelet.k8s.io/v1alpha1</td>
</tr>
<tr><td><code>args</code><br/>
<code>[]string</code>
</td>
<td>
Arguments to pass to the command when executing it.</td>
</tr>
<tr><td><code>env</code><br/>
<a href="#kubelet-config-k8s-io-v1alpha1-ExecEnvVar"><code>[]ExecEnvVar</code></a>
</td>
<td>
Env defines additional environment variables to expose to the process. These
are unioned with the host's environment, as well as variables client-go uses
to pass argument to the plugin.</td>
</tr>
</tbody>
</table>
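Taken together, a minimal CredentialProviderConfig file might look like the sketch below; the provider name, image pattern, arguments, and environment values are placeholders and are not defined by this reference.
```yaml
# A minimal sketch of a CredentialProviderConfig file.
# The provider name, image pattern, args, and env values are placeholders.
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
  - name: example-credential-provider
    matchImages:
      - "*.registry.example.com"
    defaultCacheDuration: "10m"
    apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
    args:
      - get-credentials
    env:
      - name: EXAMPLE_REGION
        value: us-east-1
```
With a configuration like this, the kubelet would look for an executable named `example-credential-provider` in the directory given by `--image-credential-provider-bin-dir` and invoke it for images matching the pattern.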
## `ExecEnvVar` {#kubelet-config-k8s-io-v1alpha1-ExecEnvVar}
**Appears in:**
- [CredentialProvider](#kubelet-config-k8s-io-v1alpha1-CredentialProvider)
ExecEnvVar is used for setting environment variables when executing an exec-based
credential plugin.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>value</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
</tbody>
</table>


@ -14,6 +14,166 @@ auto_generated: true
## `FormatOptions` {#FormatOptions}
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
FormatOptions contains options for the different logging formats.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>json</code> <B>[Required]</B><br/>
<a href="#JSONOptions"><code>JSONOptions</code></a>
</td>
<td>
[Experimental] JSON contains options for logging format "json".</td>
</tr>
</tbody>
</table>
## `JSONOptions` {#JSONOptions}
**Appears in:**
- [FormatOptions](#FormatOptions)
JSONOptions contains options for logging format "json".
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>splitStream</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] SplitStream redirects error messages to stderr while
info messages go to stdout, with buffering. The default is to write
both to stdout, without buffering.</td>
</tr>
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
</td>
<td>
[Experimental] InfoBufferSize sets the size of the info stream when
using split streams. The default is zero, which disables buffering.</td>
</tr>
</tbody>
</table>
## `LoggingConfiguration` {#LoggingConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
LoggingConfiguration contains logging options
Refer [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Format Flag specifies the structure of log messages.
The default value of format is `text`.</td>
</tr>
<tr><td><code>flushFrequency</code> <B>[Required]</B><br/>
<a href="https://godoc.org/time#Duration"><code>time.Duration</code></a>
</td>
<td>
Maximum number of seconds between log flushes. Ignored if the
selected logging backend writes log messages without buffering.</td>
</tr>
<tr><td><code>verbosity</code> <B>[Required]</B><br/>
<code>uint32</code>
</td>
<td>
Verbosity is the threshold that determines which log messages are
logged. Default is zero which logs only the most important
messages. Higher values enable additional messages. Error messages
are always logged.</td>
</tr>
<tr><td><code>vmodule</code> <B>[Required]</B><br/>
<a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a>
</td>
<td>
VModule overrides the verbosity threshold for individual files.
Only supported for "text" log format.</td>
</tr>
<tr><td><code>sanitization</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.</td>
</tr>
<tr><td><code>options</code> <B>[Required]</B><br/>
<a href="#FormatOptions"><code>FormatOptions</code></a>
</td>
<td>
[Experimental] Options holds additional parameters that are specific
to the different logging formats. Only the options for the selected
format get used, but all of them get validated.</td>
</tr>
</tbody>
</table>
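As a rough sketch, the fragment below shows how these options might appear under the `logging` field of a KubeletConfiguration; the field placement and values are illustrative, not recommendations.
```yaml
# An illustrative fragment of a KubeletConfiguration showing the logging
# options described above; the values are examples, not recommendations.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json
  verbosity: 3
  options:
    json:
      splitStream: true
      infoBufferSize: "0"
```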
## `VModuleConfiguration` {#VModuleConfiguration}
(Alias of `[]k8s.io/component-base/config/v1alpha1.VModuleItem`)
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
VModuleConfiguration is a collection of individual file names or patterns
and the corresponding verbosity threshold.
## `KubeletConfiguration` {#kubelet-config-k8s-io-v1beta1-KubeletConfiguration}
@ -1517,7 +1677,7 @@ Default: 0.8</td>
<tr><td><code>registerWithTaints</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#taint-v1-core"><code>[]core/v1.Taint</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#taint-v1-core"><code>[]core/v1.Taint</code></a>
</td>
<td>
registerWithTaints are an array of taints to add to a node object when
@ -1539,8 +1699,6 @@ Default: true</td>
</tbody>
</table>
## `SerializedNodeConfigSource` {#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource}
@ -1562,7 +1720,7 @@ It exists in the kubeletconfig API group because it is classified as a versioned
<tr><td><code>source</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a>
</td>
<td>
source is the source that we are serializing.</td>
@ -1572,28 +1730,12 @@ It exists in the kubeletconfig API group because it is classified as a versioned
</tbody>
</table>
## `HairpinMode` {#kubelet-config-k8s-io-v1beta1-HairpinMode}
(Alias of `string`)
HairpinMode denotes how the kubelet should configure networking to handle
hairpin packets.
## `KubeletAnonymousAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication}
**Appears in:**
- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication)
@ -1620,15 +1762,12 @@ Anonymous requests have a username of `system:anonymous`, and a group name of
</tbody>
</table>
## `KubeletAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAuthentication}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1667,15 +1806,12 @@ Anonymous requests have a username of `system:anonymous`, and a group name of
</tbody>
</table>
## `KubeletAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorization}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1708,15 +1844,12 @@ Webhook mode uses the SubjectAccessReview API to determine authorization.</td>
</tbody>
</table>
## `KubeletAuthorizationMode` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorizationMode}
(Alias of `string`)
**Appears in:**
- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization)
@ -1724,15 +1857,12 @@ Webhook mode uses the SubjectAccessReview API to determine authorization.</td>
## `KubeletWebhookAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthentication}
**Appears in:**
- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication)
@ -1764,15 +1894,12 @@ tokenreviews.authentication.k8s.io API.</td>
</tbody>
</table>
## `KubeletWebhookAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthorization}
**Appears in:**
- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization)
@ -1805,15 +1932,12 @@ the webhook authorizer.</td>
</tbody>
</table>
## `KubeletX509Authentication` {#kubelet-config-k8s-io-v1beta1-KubeletX509Authentication}
**Appears in:**
- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication)
@ -1839,15 +1963,12 @@ and groups corresponding to the Organization in the client certificate.</td>
</tbody>
</table>
## `MemoryReservation` {#kubelet-config-k8s-io-v1beta1-MemoryReservation}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1869,7 +1990,7 @@ MemoryReservation specifies the memory reservation of different types for each N
<tr><td><code>limits</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
@ -1880,15 +2001,12 @@ MemoryReservation specifies the memory reservation of different types for each N
</tbody>
</table>
## `MemorySwapConfiguration` {#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1913,15 +2031,12 @@ MemoryReservation specifies the memory reservation of different types for each N
</tbody>
</table>
## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy}
(Alias of `string`)
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1930,15 +2045,12 @@ managers (secret, configmap) are discovering object changes.
## `ShutdownGracePeriodByPodPriority` {#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -1969,169 +2081,3 @@ ShutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods ba
</tbody>
</table>
## `FormatOptions` {#FormatOptions}
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
FormatOptions contains options for the different logging formats.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>json</code> <B>[Required]</B><br/>
<a href="#JSONOptions"><code>JSONOptions</code></a>
</td>
<td>
[Experimental] JSON contains options for logging format "json".</td>
</tr>
</tbody>
</table>
## `JSONOptions` {#JSONOptions}
**Appears in:**
- [FormatOptions](#FormatOptions)
JSONOptions contains options for logging format "json".
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>splitStream</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] SplitStream redirects error messages to stderr while
info messages go to stdout, with buffering. The default is to write
both to stdout, without buffering.</td>
</tr>
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/>
<code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code>
</td>
<td>
[Experimental] InfoBufferSize sets the size of the info stream when
using split streams. The default is zero, which disables buffering.</td>
</tr>
</tbody>
</table>
## `LoggingConfiguration` {#LoggingConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
LoggingConfiguration contains logging options
Refer [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Format Flag specifies the structure of log messages.
The default value of format is `text`.</td>
</tr>
<tr><td><code>flushFrequency</code> <B>[Required]</B><br/>
<a href="https://godoc.org/time#Duration"><code>time.Duration</code></a>
</td>
<td>
Maximum number of seconds between log flushes. Ignored if the
selected logging backend writes log messages without buffering.</td>
</tr>
<tr><td><code>verbosity</code> <B>[Required]</B><br/>
<code>uint32</code>
</td>
<td>
Verbosity is the threshold that determines which log messages are
logged. Default is zero which logs only the most important
messages. Higher values enable additional messages. Error messages
are always logged.</td>
</tr>
<tr><td><code>vmodule</code> <B>[Required]</B><br/>
<a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a>
</td>
<td>
VModule overrides the verbosity threshold for individual files.
Only supported for "text" log format.</td>
</tr>
<tr><td><code>sanitization</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.</td>
</tr>
<tr><td><code>options</code> <B>[Required]</B><br/>
<a href="#FormatOptions"><code>FormatOptions</code></a>
</td>
<td>
[Experimental] Options holds additional parameters that are specific
to the different logging formats. Only the options for the selected
format get used, but all of them get validated.</td>
</tr>
</tbody>
</table>
## `VModuleConfiguration` {#VModuleConfiguration}
(Alias of `[]k8s.io/component-base/config/v1alpha1.VModuleItem`)
**Appears in:**
- [LoggingConfiguration](#LoggingConfiguration)
VModuleConfiguration is a collection of individual file names or patterns
and the corresponding verbosity threshold.


@ -17,4 +17,4 @@ This scheduling policy is not supported since Kubernetes v1.23. Associated flags
* Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/)
* Learn about [kube-scheduler Configuration](/docs/reference/scheduling/config/)
* Read the [kube-scheduler configuration reference (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
* Read the [kube-scheduler Policy reference (v1)](/docs/reference/config-api/kube-scheduler-policy-config.v1/)


@ -55,7 +55,7 @@ kubeadm config images list [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>


@ -48,7 +48,7 @@ kubeadm config images pull [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>


@ -138,7 +138,7 @@ kubeadm init [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>


@ -62,7 +62,7 @@ kubeadm init phase addon all [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>


@ -41,7 +41,7 @@ kubeadm init phase addon coredns [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>


@ -101,7 +101,7 @@ kubeadm init phase control-plane all [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>


@ -83,7 +83,7 @@ kubeadm init phase control-plane apiserver [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>


@ -157,6 +157,13 @@ kubeadm join [api-server-endpoint] [flags]
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.</p></td>
</tr>
<tr>
<td colspan="2">--dry-run</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Don't apply any changes; just output what would be done.</p></td>
</tr>
<tr>
<td colspan="2">-h, --help</td>
</tr>


@ -20,7 +20,6 @@ Performs a best effort revert of changes made to this host by 'kubeadm init' or
The "reset" command executes the following phases:
```
preflight Run reset pre-flight checks
update-cluster-status Remove this node from the ClusterStatus object (DEPRECATED).
remove-etcd-member Remove a local etcd member.
cleanup-node Run cleanup node.
```


@ -76,7 +76,7 @@ kubeadm upgrade apply [version]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>

View File

@ -55,7 +55,7 @@ kubeadm upgrade plan [version] [flags]
<td colspan="2">--feature-gates string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>IPv6DualStack=true|false (BETA - default=true)<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)</p></td>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A set of key=value pairs that describe feature gates for various features. Options are:<br/>PublicKeysECDSA=true|false (ALPHA - default=false)<br/>RootlessControlPlane=true|false (ALPHA - default=false)<br/>UnversionedKubeletConfigMap=true|false (ALPHA - default=false)</p></td>
</tr>
<tr>

View File

@ -67,6 +67,7 @@ their authors, not the Kubernetes team.
| PHP | [github.com/travisghansen/kubernetes-client-php](https://github.com/travisghansen/kubernetes-client-php) |
| PHP | [github.com/renoki-co/php-k8s](https://github.com/renoki-co/php-k8s) |
| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) |
| Python | [github.com/gtsystem/lightkube](https://github.com/gtsystem/lightkube) |
| Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) |
| Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) |
| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) |

View File

@ -24,6 +24,14 @@ deprecated API versions to newer and more stable API versions.
The **v1.26** release will stop serving the following deprecated API versions:
#### Flow control resources {#flowcontrol-resources-v126}
The **flowcontrol.apiserver.k8s.io/v1beta1** API version of FlowSchema and PriorityLevelConfiguration will no longer be served in v1.26.
* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io/v1beta2** API version, available since v1.23.
* All existing persisted objects are accessible via the new API
* No notable changes
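For example, one quick way to check that your cluster already serves the replacement API version is to list the existing objects through the fully qualified v1beta2 resource names (a sketch only; it assumes you have cluster-wide read access):

```shell
# List FlowSchema and PriorityLevelConfiguration objects via the
# flowcontrol.apiserver.k8s.io/v1beta2 API explicitly.
kubectl get flowschemas.v1beta2.flowcontrol.apiserver.k8s.io
kubectl get prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io
```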
#### HorizontalPodAutoscaler {#horizontalpodautoscaler-v126}
The **autoscaling/v2beta2** API version of HorizontalPodAutoscaler will no longer be served in v1.26.

View File

@ -77,7 +77,7 @@ Required certificates:
| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------|
| kube-etcd | etcd-ca | | server, client | `localhost`, `127.0.0.1` |
| kube-etcd | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |

View File

@ -43,7 +43,9 @@ similar to the following example:
kubeadm init --pod-network-cidr=10.244.0.0/16,2001:db8:42:0::/56 --service-cidr=10.96.0.0/16,2001:db8:42:1::/112
```
To make things clearer, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for the primary dual-stack control plane node.
To make things clearer, here is an example kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
`kubeadm-config.yaml` for the primary dual-stack control plane node.
```yaml
---
@ -81,7 +83,8 @@ The `--apiserver-advertise-address` flag does not support dual-stack.
Before joining a node, make sure that the node has IPv6 routable network interface and allows IPv6 forwarding.
Here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for joining a worker node to the cluster.
Here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
`kubeadm-config.yaml` for joining a worker node to the cluster.
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
@ -98,7 +101,9 @@ nodeRegistration:
node-ip: 10.100.0.3,fd00:1:2:3::3
```
Also, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for joining another control plane node to the cluster.
Also, here is an example kubeadm [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
`kubeadm-config.yaml` for joining another control plane node to the cluster.
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
@ -132,7 +137,9 @@ Dual-stack support doesn't mean that you need to use dual-stack addressing.
You can deploy a single-stack cluster that has the dual-stack networking feature enabled.
{{< /note >}}
To make things more clear, here is an example kubeadm [configuration file](https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3) `kubeadm-config.yaml` for the single-stack control plane node.
To make things more clear, here is an example kubeadm
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/)
`kubeadm-config.yaml` for the single-stack control plane node.
```yaml
apiVersion: kubeadm.k8s.io/v1beta3

View File

@ -612,12 +612,12 @@ network port spaces). Kubernetes uses pause containers to allow for worker conta
crashing or restarting without losing any of the networking configuration.
Kubernetes maintains a multi-architecture image that includes support for Windows.
For Kubernetes v1.22 the recommended pause image is `k8s.gcr.io/pause:3.5`.
For Kubernetes v{{< skew currentVersion >}} the recommended pause image is `k8s.gcr.io/pause:3.6`.
The [source code](https://github.com/kubernetes/kubernetes/tree/master/build/pause)
is available on GitHub.
Microsoft maintains a different multi-architecture image, with Linux and Windows
amd64 support, that you can find as `mcr.microsoft.com/oss/kubernetes/pause:3.5`.
amd64 support, that you can find as `mcr.microsoft.com/oss/kubernetes/pause:3.6`.
This image is built from the same source as the Kubernetes maintained image but
all of the Windows binaries are [authenticode signed](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/authenticode) by Microsoft.
The Kubernetes project recommends using the Microsoft maintained image if you are
@ -661,14 +661,15 @@ On Windows nodes, strict compatibility rules apply where the host OS version mus
match the container base image OS version. Only Windows containers with a container
operating system of Windows Server 2019 are fully supported.
For Kubernetes v1.22, operating system compatibility for Windows nodes (and Pods)
For Kubernetes v{{< skew currentVersion >}}, operating system compatibility for Windows nodes (and Pods)
is as follows:
Windows Server LTSC release
: Windows Server 2019
: Windows Server 2022
Windows Server SAC release
: Windows Server version 2004, Windows Server version 20H2
: Windows Server version 20H2
The Kubernetes [version-skew policy](/docs/setup/release/version-skew-policy/) also applies.
@ -774,9 +775,9 @@ SIG Windows [contributing guide on gathering logs](https://github.com/kubernetes
nssm start flanneld
# Register kubelet.exe
# Microsoft releases the pause infrastructure container at mcr.microsoft.com/oss/kubernetes/pause:1.4.1
# Microsoft releases the pause infrastructure container at mcr.microsoft.com/oss/kubernetes/pause:3.6
nssm install kubelet C:\k\kubelet.exe
nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/oss/kubernetes/pause:1.4.1 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/oss/kubernetes/pause:3.6 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
nssm set kubelet AppDirectory C:\k
nssm start kubelet
@ -922,7 +923,7 @@ SIG Windows [contributing guide on gathering logs](https://github.com/kubernetes
1. `kubectl port-forward` fails with "unable to do port forwarding: wincat not found"
This was implemented in Kubernetes 1.15 by including `wincat.exe` in the pause infrastructure container `mcr.microsoft.com/oss/kubernetes/pause:1.4.1`. Be sure to use a supported version of Kubernetes.
This was implemented in Kubernetes 1.15 by including `wincat.exe` in the pause infrastructure container `mcr.microsoft.com/oss/kubernetes/pause:3.6`. Be sure to use a supported version of Kubernetes.
If you would like to build your own pause infrastructure container be sure to include [wincat](https://github.com/kubernetes/kubernetes/tree/master/build/pause/windows/wincat).
1. My Kubernetes installation is failing because my Windows Server node is behind a proxy

View File

@ -214,7 +214,7 @@ In each case, the credentials of the pod are used to communicate securely with t
## Accessing services running on the cluster
The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/access-application-cluster/access-cluster/)
The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/administer-cluster/access-cluster-services/)
## Requesting redirects

View File

@ -219,7 +219,7 @@ allocated resources, events and pods running on the node.
Shows all applications running in the selected namespace.
The view lists applications by workload kind (for example: Deployments, ReplicaSets, StatefulSets).
and each workload kind can be viewed separately.
Each workload kind can be viewed separately.
The lists summarize actionable information about the workloads,
such as the number of ready pods for a ReplicaSet or current memory usage for a Pod.

View File

@ -283,7 +283,7 @@ the node identity with an out of band mechanism.
{{% thirdparty-content %}}
Third party custom controllers can be used:
- [kubelet-rubber-stamp](https://github.com/kontena/kubelet-rubber-stamp)
- [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver)
Such a controller is not a secure mechanism unless it not only verifies the CommonName
in the CSR but also verifies the requested IPs and domain names. This would prevent

View File

@ -13,11 +13,15 @@ min-kubernetes-server-version: v1.21
{{< feature-state state="beta" for_k8s_version="v1.22" >}}
The Kubernetes *Memory Manager* enables the feature of guaranteed memory (and hugepages) allocation for pods in the `Guaranteed` {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}.
The Kubernetes *Memory Manager* enables the feature of guaranteed memory (and hugepages)
allocation for pods in the `Guaranteed` {{< glossary_tooltip text="QoS class" term_id="qos-class" >}}.
The Memory Manager employs hint generation protocol to yield the most suitable NUMA affinity for a pod. The Memory Manager feeds the central manager (*Topology Manager*) with these affinity hints. Based on both the hints and Topology Manager policy, the pod is rejected or admitted to the node.
The Memory Manager employs a hint generation protocol to yield the most suitable NUMA affinity for a pod.
The Memory Manager feeds the central manager (*Topology Manager*) with these affinity hints.
Based on both the hints and Topology Manager policy, the pod is rejected or admitted to the node.
Moreover, the Memory Manager ensures that the memory which a pod requests is allocated from a minimum number of NUMA nodes.
Moreover, the Memory Manager ensures that the memory which a pod requests
is allocated from a minimum number of NUMA nodes.
The Memory Manager is only pertinent to Linux based hosts.
@ -25,11 +29,15 @@ The Memory Manager is only pertinent to Linux based hosts.
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
To align memory resources with other requested resources in a Pod Spec:
- the CPU Manager should be enabled and proper CPU Manager policy should be configured on a Node. See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/);
- the Topology Manager should be enabled and proper Topology Manager policy should be configured on a Node. See [control Topology Management Policies](/docs/tasks/administer-cluster/topology-manager/).
To align memory resources with other requested resources in a Pod spec:
Starting from v1.22, the Memory Manager is enabled by default through `MemoryManager` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
- the CPU Manager should be enabled and proper CPU Manager policy should be configured on a Node.
See [control CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/);
- the Topology Manager should be enabled and proper Topology Manager policy should be configured on a Node.
See [control Topology Management Policies](/docs/tasks/administer-cluster/topology-manager/).
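For example, a kubelet that satisfies both prerequisites might be started with flags along these lines (a sketch only; the policy values shown are assumptions and should be chosen to match your workloads):

```shell
# Example kubelet flags enabling static CPU pinning and strict NUMA alignment
--cpu-manager-policy=static
--topology-manager-policy=single-numa-node
```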
Starting from v1.22, the Memory Manager is enabled by default through `MemoryManager`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
Preceding v1.22, the `kubelet` must be started with the following flag:
@ -39,37 +47,59 @@ in order to enable the Memory Manager feature.
## How Memory Manager Operates?
The Memory Manager currently offers the guaranteed memory (and hugepages) allocation for Pods in Guaranteed QoS class. To immediately put the Memory Manager into operation follow the guidelines in the section [Memory Manager configuration](#memory-manager-configuration), and subsequently, prepare and deploy a `Guaranteed` pod as illustrated in the section [Placing a Pod in the Guaranteed QoS class](#placing-a-pod-in-the-guaranteed-qos-class).
The Memory Manager currently offers the guaranteed memory (and hugepages) allocation
for Pods in Guaranteed QoS class.
To immediately put the Memory Manager into operation follow the guidelines in the section
[Memory Manager configuration](#memory-manager-configuration), and subsequently,
prepare and deploy a `Guaranteed` pod as illustrated in the section
[Placing a Pod in the Guaranteed QoS class](#placing-a-pod-in-the-guaranteed-qos-class).
The Memory Manager is a Hint Provider, and it provides topology hints for the Topology Manager which then aligns the requested resources according to these topology hints. It also enforces `cgroups` (i.e. `cpuset.mems`) for pods. The complete flow diagram concerning pod admission and deployment process is illustrated in [Memory Manager KEP: Design Overview][4] and below:
The Memory Manager is a Hint Provider, and it provides topology hints for
the Topology Manager which then aligns the requested resources according to these topology hints.
It also enforces `cgroups` (i.e. `cpuset.mems`) for pods.
The complete flow diagram concerning pod admission and deployment process is illustrated in
[Memory Manager KEP: Design Overview][4] and below:
![Memory Manager in the pod admission and deployment process](/images/docs/memory-manager-diagram.svg)
During this process, the Memory Manager updates its internal counters stored in [Node Map and Memory Maps][2] to manage guaranteed memory allocation.
During this process, the Memory Manager updates its internal counters stored in
[Node Map and Memory Maps][2] to manage guaranteed memory allocation.
The Memory Manager updates the Node Map during the startup and runtime as follows.
### Startup
This occurs once a node administrator employs `--reserved-memory` (section [Reserved memory flag](#reserved-memory-flag)). In this case, the Node Map becomes updated to reflect this reservation as illustrated in [Memory Manager KEP: Memory Maps at start-up (with examples)][5].
This occurs once a node administrator employs `--reserved-memory` (section
[Reserved memory flag](#reserved-memory-flag)).
In this case, the Node Map becomes updated to reflect this reservation as illustrated in
[Memory Manager KEP: Memory Maps at start-up (with examples)][5].
The administrator must provide `--reserved-memory` flag when `Static` policy is configured.
### Runtime
Reference [Memory Manager KEP: Memory Maps at runtime (with examples)][6] illustrates how a successful pod deployment affects the Node Map, and it also relates to how potential Out-of-Memory (OOM) situations are handled further by Kubernetes or operating system.
Reference [Memory Manager KEP: Memory Maps at runtime (with examples)][6] illustrates
how a successful pod deployment affects the Node Map, and it also relates to
how potential Out-of-Memory (OOM) situations are handled further by Kubernetes or operating system.
Important topic in the context of Memory Manager operation is the management of NUMA groups. Each time pod's memory request is in excess of single NUMA node capacity, the Memory Manager attempts to create a group that comprises several NUMA nodes and features extend memory capacity. The problem has been solved as elaborated in [Memory Manager KEP: How to enable the guaranteed memory allocation over many NUMA nodes?][3]. Also, reference [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1] illustrates how the management of groups occurs.
An important topic in the context of Memory Manager operation is the management of NUMA groups.
Each time a pod's memory request exceeds the capacity of a single NUMA node, the Memory Manager
attempts to create a group that comprises several NUMA nodes and features extended memory capacity.
The problem has been solved as elaborated in
[Memory Manager KEP: How to enable the guaranteed memory allocation over many NUMA nodes?][3].
Also, reference [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1]
illustrates how the management of groups occurs.
## Memory Manager configuration
Other Managers should be first pre-configured (section [Pre-configuration](#pre-configuration)). Next, the Memory Manger feature should be enabled (section [Enable the Memory Manager feature](#enable-the-memory-manager-feature)) and be run with `Static` policy (section [Static policy](#static-policy)). Optionally, some amount of memory can be reserved for system or kubelet processes to increase node stability (section [Reserved memory flag](#reserved-memory-flag)).
Other Managers should be pre-configured first. Next, the Memory Manager feature should be enabled
and be run with `Static` policy (section [Static policy](#policy-static)).
Optionally, some amount of memory can be reserved for system or kubelet processes to increase
node stability (section [Reserved memory flag](#reserved-memory-flag)).
### Policies
Memory Manager supports two policies. You can select a policy via a `kubelet` flag `--memory-manager-policy`.
Two policies can be selected:
Memory Manager supports two policies. You can select a policy via a `kubelet` flag `--memory-manager-policy`:
* `None` (default)
* `Static`
@ -79,40 +109,63 @@ Two policies can be selected:
This is the default policy and does not affect the memory allocation in any way.
It acts the same as if the Memory Manager is not present at all.
The `None` policy returns default topology hint. This special hint denotes that Hint Provider (Memory Manger in this case) has no preference for NUMA affinity with any resource.
The `None` policy returns a default topology hint. This special hint denotes that the Hint Provider
(Memory Manager in this case) has no preference for NUMA affinity with any resource.
#### Static policy {#policy-static}
In the case of the `Guaranteed` pod, the `Static` Memory Manger policy returns topology hints relating to the set of NUMA nodes where the memory can be guaranteed, and reserves the memory through updating the internal [NodeMap][2] object.
In the case of the `Guaranteed` pod, the `Static` Memory Manager policy returns topology hints
relating to the set of NUMA nodes where the memory can be guaranteed,
and reserves the memory through updating the internal [NodeMap][2] object.
In the case of the `BestEffort` or `Burstable` pod, the `Static` Memory Manager policy sends back the default topology hint as there is no request for the guaranteed memory, and does not reserve the memory in the internal [NodeMap][2] object.
In the case of the `BestEffort` or `Burstable` pod, the `Static` Memory Manager policy sends back
the default topology hint as there is no request for the guaranteed memory,
and does not reserve the memory in the internal [NodeMap][2] object.
### Reserved memory flag
The [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) mechanism is commonly used by node administrators to reserve K8S node system resources for the kubelet or operating system processes in order to enhance the node stability. A dedicated set of flags can be used for this purpose to set the total amount of reserved memory for a node. This pre-configured value is subsequently utilized to calculate the real amount of node's "allocatable" memory available to pods.
The [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) mechanism
is commonly used by node administrators to reserve K8S node system resources for the kubelet
or operating system processes in order to enhance the node stability.
A dedicated set of flags can be used for this purpose to set the total amount of reserved memory
for a node. This pre-configured value is subsequently utilized to calculate
the real amount of node's "allocatable" memory available to pods.
The Kubernetes scheduler incorporates "allocatable" to optimise pod scheduling process. The foregoing flags include `--kube-reserved`, `--system-reserved` and `--eviction-threshold`. The sum of their values will account for the total amount of reserved memory.
The Kubernetes scheduler incorporates "allocatable" to optimise pod scheduling process.
The foregoing flags include `--kube-reserved`, `--system-reserved` and `--eviction-threshold`.
The sum of their values will account for the total amount of reserved memory.
A new `--reserved-memory` flag was added to Memory Manager to allow for this total reserved memory to be split (by a node administrator) and accordingly reserved across many NUMA nodes.
A new `--reserved-memory` flag was added to Memory Manager to allow for this total reserved memory
to be split (by a node administrator) and accordingly reserved across many NUMA nodes.
The flag specifies a comma-separated list of memory reservations per NUMA node.
This parameter is only useful in the context of the Memory Manager feature.
The Memory Manager will not use this reserved memory for the allocation of container workloads.
For example, if you have a NUMA node "NUMA0" with `10Gi` of memory available, and the `--reserved-memory` was specified to reserve `1Gi` of memory at "NUMA0", the Memory Manager assumes that only `9Gi` is available for containers.
For example, if you have a NUMA node "NUMA0" with `10Gi` of memory available, and
the `--reserved-memory` was specified to reserve `1Gi` of memory at "NUMA0",
the Memory Manager assumes that only `9Gi` is available for containers.
You can omit this parameter, however, you should be aware that the quantity of reserved memory from all NUMA nodes should be equal to the quantity of memory specified by the [Node Allocatable feature](/docs/tasks/administer-cluster/reserve-compute-resources/). If at least one node allocatable parameter is non-zero, you will need to specify `--reserved-memory` for at least one NUMA node. In fact, `eviction-hard` threshold value is equal to `100Mi` by default, so if `Static` policy is used, `--reserved-memory` is obligatory.
You can omit this parameter; however, you should be aware that the quantity of reserved memory
from all NUMA nodes should be equal to the quantity of memory specified by the
[Node Allocatable feature](/docs/tasks/administer-cluster/reserve-compute-resources/).
If at least one node allocatable parameter is non-zero, you will need to specify
`--reserved-memory` for at least one NUMA node.
In fact, `eviction-hard` threshold value is equal to `100Mi` by default, so
if `Static` policy is used, `--reserved-memory` is obligatory.
Also, avoid the following configurations:
1. duplicates, i.e. the same NUMA node or memory type, but with a different value;
2. setting zero limit for any of memory types;
3. NUMA node IDs that do not exist in the machine hardware;
4. memory type names different than `memory` or `hugepages-<size>` (hugepages of particular `<size>` should also exist).
1. setting zero limit for any of memory types;
1. NUMA node IDs that do not exist in the machine hardware;
1. memory type names different than `memory` or `hugepages-<size>`
(hugepages of particular `<size>` should also exist).
Syntax:
`--reserved-memory N:memory-type1=value1,memory-type2=value2,...`
* `N` (integer) - NUMA node index, e.g. `0`
* `memory-type` (string) - represents memory type:
* `memory` - conventional memory
@ -127,7 +180,9 @@ or
`--reserved-memory 0:memory=1Gi --reserved-memory 1:memory=2Gi`
When you specify values for `--reserved-memory` flag, you must comply with the setting that you prior provided via Node Allocatable Feature flags. That is, the following rule must be obeyed for each memory type:
When you specify values for the `--reserved-memory` flag, you must comply with the settings that
you previously provided via Node Allocatable Feature flags.
That is, the following rule must be obeyed for each memory type:
`sum(reserved-memory(i)) = kube-reserved + system-reserved + eviction-threshold`,
@ -135,17 +190,22 @@ where `i` is an index of a NUMA node.
If you do not follow the formula above, the Memory Manager will show an error on startup.
In other words, the example above illustrates that for the conventional memory (`type=memory`), we reserve `3Gi` in total, i.e.:
In other words, the example above illustrates that for the conventional memory (`type=memory`),
we reserve `3Gi` in total, i.e.:
`sum(reserved-memory(i)) = reserved-memory(0) + reserved-memory(1) = 1Gi + 2Gi = 3Gi`
An example of kubelet command-line arguments relevant to the node Allocatable configuration:
* `--kube-reserved=cpu=500m,memory=50Mi`
* `--system-reserved=cpu=123m,memory=333Mi`
* `--eviction-hard=memory.available<500Mi`
{{< note >}}
The default hard eviction threshold is 100MiB, and **not** zero. Remember to increase the quantity of memory that you reserve by setting `--reserved-memory` by that hard eviction threshold. Otherwise, the kubelet will not start Memory Manager and display an error.
The default hard eviction threshold is 100MiB, and **not** zero.
Remember to increase the quantity of memory that you reserve via `--reserved-memory`
by that hard eviction threshold. Otherwise, the kubelet will not start the Memory Manager and
will display an error.
{{< /note >}}
Here is an example of a correct configuration:
@ -157,15 +217,21 @@ Here is an example of a correct configuration:
--memory-manager-policy=Static
--reserved-memory 0:memory=3Gi --reserved-memory 1:memory=2148Mi
```
Let us validate the configuration above:
1. `kube-reserved + system-reserved + eviction-hard(default) = reserved-memory(0) + reserved-memory(1)`
2. `4GiB + 1GiB + 100MiB = 3GiB + 2148MiB`
3. `5120MiB + 100MiB = 3072MiB + 2148MiB`
4. `5220MiB = 5220MiB` (which is correct)
1. `4GiB + 1GiB + 100MiB = 3GiB + 2148MiB`
1. `5120MiB + 100MiB = 3072MiB + 2148MiB`
1. `5220MiB = 5220MiB` (which is correct)
## Placing a Pod in the Guaranteed QoS class
If the selected policy is anything other than `None`, the Memory Manager identifies pods that are in the `Guaranteed` QoS class. The Memory Manager provides specific topology hints to the Topology Manager for each `Guaranteed` pod. For pods in a QoS class other than `Guaranteed`, the Memory Manager provides default topology hints to the Topology Manager.
If the selected policy is anything other than `None`, the Memory Manager identifies pods
that are in the `Guaranteed` QoS class.
The Memory Manager provides specific topology hints to the Topology Manager for each `Guaranteed` pod.
For pods in a QoS class other than `Guaranteed`, the Memory Manager provides default topology hints
to the Topology Manager.
The following excerpts from pod manifests assign a pod to the `Guaranteed` QoS class.
@ -209,30 +275,37 @@ Notice that both CPU and memory requests must be specified for a Pod to lend it
## Troubleshooting
The following means can be used to troubleshoot the reason why a pod could not be deployed or became rejected at a node:
The following means can be used to troubleshoot the reason why a pod could not be deployed or
was rejected on a node:
- pod status - indicates topology affinity errors
- system logs - include valuable information for debugging, e.g., about generated hints
- state file - the dump of internal state of the Memory Manager (includes [Node Map and Memory Maps][2])
- state file - the dump of internal state of the Memory Manager
(includes [Node Map and Memory Maps][2])
- starting from v1.22, the [device plugin resource API](#device-plugin-resource-api) can be used
to retrieve information about the memory reserved for containers
### Pod status (TopologyAffinityError) {#TopologyAffinityError}
This error typically occurs in the following situations:
* a node has not enough resources available to satisfy the pod's request
* the pod's request is rejected due to particular Topology Manager policy constraints
The error appears in the status of a pod:
```shell
# kubectl get pods
kubectl get pods
```
```none
NAME READY STATUS RESTARTS AGE
guaranteed 0/1 TopologyAffinityError 0 113s
```
Use `kubectl describe pod <id>` or `kubectl get events` to obtain detailed error message:
```shell
```none
Warning TopologyAffinityError 10m kubelet, dell8 Resources cannot be allocated with Topology locality
```
@ -246,13 +319,17 @@ Also, the set of hints generated by CPU Manager should be present in the logs.
Topology Manager merges these hints to calculate a single best hint.
The best hint should be also present in the logs.
The best hint indicates where to allocate all the resources. Topology Manager tests this hint against its current policy, and based on the verdict, it either admits the pod to the node or rejects it.
The best hint indicates where to allocate all the resources.
Topology Manager tests this hint against its current policy, and based on the verdict,
it either admits the pod to the node or rejects it.
Also, search the logs for occurrences associated with the Memory Manager, e.g. to find out information about `cgroups` and `cpuset.mems` updates.
Also, search the logs for occurrences associated with the Memory Manager,
e.g. to find out information about `cgroups` and `cpuset.mems` updates.
### Examine the memory manager state on a node
Let us first deploy a sample `Guaranteed` pod whose specification is as follows:
```yaml
apiVersion: v1
kind: Pod
@ -273,7 +350,9 @@ spec:
command: ["sleep","infinity"]
```
Next, let us log into the node where it was deployed and examine the state file in `/var/lib/kubelet/memory_manager_state`:
Next, let us log into the node where it was deployed and examine the state file in
`/var/lib/kubelet/memory_manager_state`:
```json
{
"policyName":"Static",
@ -352,34 +431,41 @@ It can be deduced from the state file that the pod was pinned to both NUMA nodes
],
```
Pinned term means that pod's memory consumption is constrained (through `cgroups` configuration) to these NUMA nodes.
Pinned term means that pod's memory consumption is constrained (through `cgroups` configuration)
to these NUMA nodes.
This automatically implies that Memory Manager instantiated a new group that comprises these two NUMA nodes, i.e. `0` and `1` indexed NUMA nodes.
This automatically implies that Memory Manager instantiated a new group that
comprises these two NUMA nodes, i.e. `0` and `1` indexed NUMA nodes.
Notice that the management of groups is handled in a relatively complex manner, and further elaboration is provided in Memory Manager KEP in [this][1] and [this][3] sections.
Notice that the management of groups is handled in a relatively complex manner, and
further elaboration is provided in Memory Manager KEP in [this][1] and [this][3] sections.
In order to analyse memory resources available in a group, the corresponding entries from NUMA nodes belonging to the group must be added up.
In order to analyse memory resources available in a group, the corresponding entries from
NUMA nodes belonging to the group must be added up.
For example, the total amount of free "conventional" memory in the group can be computed by adding up the free memory available at every NUMA node in the group, i.e., in the `"memory"` section of NUMA node `0` (`"free":0`) and NUMA node `1` (`"free":103739236352`). So, the total amount of free "conventional" memory in this group is equal to `0 + 103739236352` bytes.
For example, the total amount of free "conventional" memory in the group can be computed
by adding up the free memory available at every NUMA node in the group,
i.e., in the `"memory"` section of NUMA node `0` (`"free":0`) and NUMA node `1` (`"free":103739236352`).
So, the total amount of free "conventional" memory in this group is equal to `0 + 103739236352` bytes.
The line `"systemReserved":3221225472` indicates that the administrator of this node reserved `3221225472` bytes (i.e. `3Gi`) to serve kubelet and system processes at NUMA node `0`, by using `--reserved-memory` flag.
The line `"systemReserved":3221225472` indicates that the administrator of this node reserved
`3221225472` bytes (i.e. `3Gi`) to serve kubelet and system processes at NUMA node `0`,
by using `--reserved-memory` flag.
### Device plugin resource API
By employing the [API](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/), the information about reserved memory for each container can be retrieved, which is contained in protobuf `ContainerMemory` message. This information can be retrieved solely for pods in Guaranteed QoS class.
By employing the [API](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/),
the information about reserved memory for each container can be retrieved, which is contained
in protobuf `ContainerMemory` message.
This information can be retrieved solely for pods in Guaranteed QoS class.
## {{% heading "whatsnext" %}}
- [Memory Manager KEP: Design Overview][4]
- [Memory Manager KEP: Memory Maps at start-up (with examples)][5]
- [Memory Manager KEP: Memory Maps at runtime (with examples)][6]
- [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1]
- [Memory Manager KEP: The Concept of Node Map and Memory Maps][2]
- [Memory Manager KEP: How to enable the guaranteed memory allocation over many NUMA nodes?][3]
[1]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1769-memory-manager#simulation---how-the-memory-manager-works-by-examples
@ -388,3 +474,5 @@ By employing the [API](/docs/concepts/extend-kubernetes/compute-storage-net/devi
[4]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1769-memory-manager#design-overview
[5]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1769-memory-manager#memory-maps-at-start-up-with-examples
[6]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1769-memory-manager#memory-maps-at-runtime-with-examples

View File

@ -248,7 +248,7 @@ To try the gRPC liveness check, create a Pod using the command below.
In the example below, the etcd pod is configured to use gRPC liveness probe.
```shell
kubectl apply -f https://k8s.io/examples/pods/probe/content/en/examples/pods/probe/grpc-liveness.yaml
kubectl apply -f https://k8s.io/examples/pods/probe/grpc-liveness.yaml
```
After 15 seconds, view Pod events to verify that the liveness check has not failed:

View File

@ -165,6 +165,7 @@ and finally configure the `hostPath`:
```yaml
...
volumes:
- name: audit
hostPath:
path: /etc/kubernetes/audit-policy.yaml
@ -174,7 +175,6 @@ and finally configure the `hostPath`:
hostPath:
path: /var/log/kubernetes/audit/
type: DirectoryOrCreate
```
### Webhook backend

View File

@ -57,7 +57,7 @@ If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker
Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct.
* Have you pushed the image to the repository?
* Have you pushed the image to the registry?
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled.
#### My pod is crashing or otherwise unhealthy

View File

@ -5,57 +5,57 @@ content_type: task
<!-- overview -->
Kubernetes applications usually consist of multiple, separate services, each running in its own container. Developing and debugging these services on a remote Kubernetes cluster can be cumbersome, requiring you to [get a shell on a running container](/docs/tasks/debug-application-cluster/get-shell-running-container/) and running your tools inside the remote shell.
{{% thirdparty-content %}}
`telepresence` is a tool to ease the process of developing and debugging services locally, while proxying the service to a remote Kubernetes cluster. Using `telepresence` allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
Kubernetes applications usually consist of multiple, separate services, each running in its own container. Developing and debugging these services on a remote Kubernetes cluster can be cumbersome, requiring you to [get a shell on a running container](/docs/tasks/debug-application-cluster/get-shell-running-container/) in order to run debugging tools.
`telepresence` is a tool to ease the process of developing and debugging services locally while proxying the service to a remote Kubernetes cluster. Using `telepresence` allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
This document describes using `telepresence` to develop and debug services running on a remote cluster locally.
## {{% heading "prerequisites" %}}
* Kubernetes cluster is installed
* `kubectl` is configured to communicate with the cluster
* [Telepresence](https://www.telepresence.io/reference/install) is installed
* [Telepresence](https://www.telepresence.io/docs/latest/install/) is installed
<!-- steps -->
## Getting a shell on a remote cluster
## Connecting your local machine to a remote Kubernetes cluster
Open a terminal and run `telepresence` with no arguments to get a `telepresence` shell. This shell runs locally, giving you full access to your local filesystem.
After installing `telepresence`, run `telepresence connect` to launch its daemon and connect your local workstation to the cluster.
The `telepresence` shell can be used in a variety of ways. For example, write a shell script on your laptop, and run it directly from the shell in real time. You can do this on a remote shell as well, but you might not be able to use your preferred code editor, and the script is deleted when the container is terminated.
```
$ telepresence connect
Enter `exit` to quit and close the shell.
Launching Telepresence Daemon
...
Connected to context default (https://<cluster public IP>)
```
You can curl services using the Kubernetes syntax e.g. `curl -ik https://kubernetes.default`
## Developing or debugging an existing service
When developing an application on Kubernetes, you typically program or debug a single service. The service might require access to other services for testing and debugging. One option is to use the continuous deployment pipeline, but even the fastest deployment pipeline introduces a delay in the program or debug cycle.
Use the `--swap-deployment` option to swap an existing deployment with the Telepresence proxy. Swapping allows you to run a service locally and connect to the remote Kubernetes cluster. The services in the remote cluster can now access the locally running instance.
Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:REMOTE_PORT` command to create an "intercept" for rerouting remote service traffic.
To run telepresence with `--swap-deployment`, enter:
Where:
`telepresence --swap-deployment $DEPLOYMENT_NAME`
- `$SERVICE_NAME` is the name of your local service
- `$LOCAL_PORT` is the port that your service is listening on, on your local workstation
- `$REMOTE_PORT` is the port your service listens on in the cluster
where $DEPLOYMENT_NAME is the name of your existing deployment.
Running this command spawns a shell. In the shell, start your service. You can then make edits to the source code locally, save, and see the changes take effect immediately. You can also run your service in a debugger, or any other local development tool.
Running this command tells Telepresence to send remote traffic to your local service instead of the service in the remote Kubernetes cluster. Make edits to your service source code locally, save, and see the corresponding changes take effect immediately when you access your remote application. You can also run your local service using a debugger or any other local development tool.
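For example, if a workload named `example-service` listens on port 8080 in the cluster and your local copy of the service also listens on port 8080, the intercept might look like this (the name and ports below are placeholders):

```shell
# Reroute traffic destined for example-service to the local workstation
telepresence intercept example-service --port 8080:8080
```

When you are finished, the intercept can be removed with `telepresence leave example-service`.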
## How does Telepresence work?
Telepresence installs a traffic-agent sidecar next to your existing application's container running in the remote cluster. It then captures all traffic requests going into the Pod, and instead of forwarding this to the application in the remote cluster, it routes all traffic (when you create a [global intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept)) or a subset of the traffic (when you create a [personal intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept)) to your local development environment.
## {{% heading "whatsnext" %}}
If you're interested in a hands-on tutorial, check out [this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s) that walks through locally developing the Guestbook application on Google Kubernetes Engine.
Telepresence has [numerous proxying options](https://www.telepresence.io/reference/methods), depending on your situation.
For further reading, visit the [Telepresence website](https://www.telepresence.io).

View File

@ -67,8 +67,8 @@ Learn more about the metrics server in
[the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
### Summary API Source
The [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at node, volume, pod and container level, and omits
them in the [Summary API](https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go)
The [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at node, volume, pod and container level, and emits their statistics in
the [Summary API](https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go)
for consumers to read.
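For example, you can read a node's Summary API data directly through the API server's node proxy (replace `<node-name>` with the name of one of your nodes):

```shell
# Fetch the kubelet's Summary API output for a single node
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"
```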
Pre-1.23, these resources have been primarily gathered from [cAdvisor](https://github.com/google/cadvisor). However, in 1.23 with the

View File

@ -36,8 +36,7 @@ If you define args, but do not define a command, the default command is used
with your new arguments.
{{< note >}}
The `command` field corresponds to `entrypoint` in some container
runtimes. Refer to the [Notes](#notes) below.
The `command` field corresponds to `entrypoint` in some container runtimes.
{{< /note >}}
In this exercise, you create a Pod that runs one container. The configuration
@ -111,50 +110,9 @@ command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
```
## Notes
This table summarizes the field names used by Docker and Kubernetes.
| Description | Docker field name | Kubernetes field name |
|----------------------------------------|------------------------|-----------------------|
| The command run by the container | Entrypoint | command |
| The arguments passed to the command | Cmd | args |
When you override the default Entrypoint and Cmd, these rules apply:
* If you do not supply `command` or `args` for a Container, the defaults defined
in the Docker image are used.
* If you supply a `command` but no `args` for a Container, only the supplied
`command` is used. The default EntryPoint and the default Cmd defined in the Docker
image are ignored.
* If you supply only `args` for a Container, the default Entrypoint defined in
the Docker image is run with the `args` that you supplied.
* If you supply a `command` and `args`, the default Entrypoint and the default
Cmd defined in the Docker image are ignored. Your `command` is run with your
`args`.
Here are some examples:
| Image Entrypoint | Image Cmd | Container command | Container args | Command run |
|--------------------|------------------|---------------------|--------------------|------------------|
| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | &lt;not set&gt; | `[ep-1 foo bar]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | &lt;not set&gt; | `[ep-2]` |
| `[/ep-1]` | `[foo bar]` | &lt;not set&gt; | `[zoo boo]` | `[ep-1 zoo boo]` |
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |
## {{% heading "whatsnext" %}}
* Learn more about [configuring pods and containers](/docs/tasks/).
* Learn more about [running commands in a container](/docs/tasks/debug-application-cluster/get-shell-running-container/).
* See [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core).

View File

@ -28,7 +28,7 @@ This guide demonstrates how to configure the kubelet's image credential provider
## {{% heading "prerequisites" %}}
* The kubelet image credential provider is introduced in v1.20 as an alpha feature. As with other alpha features,
a feature gate `KubeletCredentialProviders` must be enabled on only the kubelet for the feature to work.
a feature gate `KubeletCredentialProviders` must be enabled on only the kubelet for the feature to work.
* A working implementation of a credential provider exec plugin. You can build your own plugin or use one provided by cloud providers.
<!-- steps -->
@ -41,17 +41,19 @@ every node in your cluster and stored in a known directory. The directory will b
## Configuring the Kubelet
In order to use this feature, the kubelet expects two flags to be set:
* `--image-credential-provider-config` - the path to the credential provider plugin config file.
* `--image-credential-provider-bin-dir` - the path to the directory where credential provider plugin binaries are located.
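For example, the kubelet on each node might be started with flags along these lines (the paths below are placeholders; use the locations where you deployed the configuration file and the plugin binaries):

```shell
# Example kubelet flags pointing at the credential provider config and binaries
--image-credential-provider-config=/etc/kubernetes/credential-providers/credential-provider-config.yaml
--image-credential-provider-bin-dir=/usr/local/bin/credential-providers
```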
### Configure a kubelet credential provider
The configuration file passed into `--image-credential-provider-config` is read by the kubelet to determine which exec plugins
should be invoked for which container images. Here's an example configuration file you may end up using if you are using the [ECR](https://aws.amazon.com/ecr/)-based plugin:
should be invoked for which container images. Here's an example configuration file you may end up using if you are using the
[ECR](https://aws.amazon.com/ecr/)-based plugin:
```yaml
kind: CredentialProviderConfig
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
# providers is a list of credential provider plugins that will be enabled by the kubelet.
# Multiple providers may match against a single image, in which case credentials
# from all providers will be returned to the kubelet. If multiple providers are called
@ -112,12 +114,17 @@ providers:
```
The `providers` field is a list of enabled plugins used by the kubelet. Each entry has a few required fields:
* `name`: the name of the plugin which MUST match the name of the executable binary that exists in the directory passed into `--image-credential-provider-bin-dir`.
* `matchImages`: a list of strings used to match against images in order to determine if this provider should be invoked. More on this below.
* `defaultCacheDuration`: the default duration the kubelet will cache credentials in-memory if a cache duration was not specified by the plugin.
* `apiVersion`: the api version that the kubelet and the exec plugin will use when communicating.
Each credential provider can also be given optional args and environment variables as well. Consult the plugin implementors to determine what set of arguments and environment variables are required for a given plugin.
* `name`: the name of the plugin which MUST match the name of the executable binary that exists
in the directory passed into `--image-credential-provider-bin-dir`.
* `matchImages`: a list of strings used to match against images in order to determine
if this provider should be invoked. More on this below.
* `defaultCacheDuration`: the default duration the kubelet will cache credentials in-memory
if a cache duration was not specified by the plugin.
* `apiVersion`: the API version that the kubelet and the exec plugin will use when communicating.
Each credential provider can also be given optional args and environment variables as well.
Consult the plugin implementors to determine what set of arguments and environment variables are required for a given plugin.
#### Configure image matching
@ -134,8 +141,15 @@ A match exists between an image name and a `matchImage` entry when all of the be
* If the imageMatch contains a port, then the port must match in the image as well.
Some example values of `matchImages` patterns are:
* `123456789.dkr.ecr.us-east-1.amazonaws.com`
* `*.azurecr.io`
* `gcr.io`
* `*.*.registry.io`
* `foo.registry.io:8080/path`
## {{% heading "whatsnext" %}}
* Read the details about `CredentialProviderConfig` in the
[kubelet configuration API (v1alpha1) reference](/docs/reference/config-api/kubelet-config.v1alpha1/).

View File

@ -86,12 +86,20 @@ metadata:
name: example-configmap-1-8mbdf7882g
```
To generate a ConfigMap from an env file, add an entry to the `envs` list in `configMapGenerator`. Here is an example of generating a ConfigMap with a data item from a `.env` file:
To generate a ConfigMap from an env file, add an entry to the `envs` list in `configMapGenerator`. This can also be used to set values from local environment variables by omitting the `=` and the value.
{{< note >}}
It's recommended to use the local environment variable population functionality sparingly - an overlay with a patch is often more maintainable. Setting values from the environment may be useful when they cannot easily be predicted, such as a git SHA.
{{< /note >}}
Here is an example of generating a ConfigMap with a data item from a `.env` file:
```shell
# Create a .env file
# BAZ will be populated from the local environment variable $BAZ
cat <<EOF >.env
FOO=Bar
BAZ
EOF
cat <<EOF >./kustomization.yaml
@ -105,7 +113,7 @@ EOF
The generated ConfigMap can be examined with the following command:
```shell
kubectl kustomize ./
BAZ=Qux kubectl kustomize ./
```
The generated ConfigMap is:
@ -113,10 +121,11 @@ The generated ConfigMap is:
```yaml
apiVersion: v1
data:
BAZ: Qux
FOO: Bar
kind: ConfigMap
metadata:
name: example-configmap-1-42cfbf598f
name: example-configmap-1-892ghb99c8
```
{{< note >}}
@ -748,8 +757,8 @@ Since the Service name may change as `namePrefix` or `nameSuffix` is added in th
not recommended to hard code the Service name in the command argument. For this usage, Kustomize can inject the Service name into containers through `vars`.
```shell
# Create a deployment.yaml file
cat <<EOF > deployment.yaml
# Create a deployment.yaml file (quoting the here doc delimiter)
cat <<'EOF' > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:

View File

@ -342,17 +342,16 @@ syscalls. Here seccomp has been instructed to error on any syscall by setting
ability to do anything meaningful. What you really want is to give workloads
only the privileges they need.
Clean up that Pod and Service before moving to the next section:
Clean up that Pod before moving to the next section:
```shell
kubectl delete service violation-pod --wait
kubectl delete pod violation-pod --wait --now
```
## Create Pod with seccomp profile that only allows necessary syscalls
If you take a look at the `fine-pod.json`, you will notice some of the syscalls
seen in the first example where the profile set `"defaultAction":
If you take a look at the `fine-grained.json` profile, you will notice some of the syscalls
seen in the syslog of the first example, where the profile set `"defaultAction":
"SCMP_ACT_LOG"`. Now the profile is setting `"defaultAction": "SCMP_ACT_ERRNO"`,
but explicitly allowing a set of syscalls in the `"action": "SCMP_ACT_ALLOW"`
block. Ideally, the container will run successfully and you will see no messages

View File

@ -2,3 +2,6 @@
title: Create a Cluster
weight: 10
---
Learn about Kubernetes {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}} and create a simple cluster using Minikube.

View File

@ -304,8 +304,8 @@ following:
## Clean up
Run `kind delete cluster -name psa-with-cluster-pss` and
`kind delete cluster -name psa-wo-cluster-pss` to delete the clusters you
Run `kind delete cluster --name psa-with-cluster-pss` and
`kind delete cluster --name psa-wo-cluster-pss` to delete the clusters you
created.
## {{% heading "whatsnext" %}}

View File

@ -3,7 +3,7 @@ kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
schedule: "* * * * *"
jobTemplate:
spec:
template:

View File

@ -160,10 +160,6 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
AllowPodAffinityNamespaceSelector: true,
}
pspValidationOptions := policy_validation.PodSecurityPolicyValidationOptions{
AllowEphemeralVolumeType: true,
}
// Enable CustomPodDNS for testing
// feature.DefaultFeatureGate.Set("CustomPodDNS=true")
switch t := obj.(type) {
@ -283,11 +279,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
gv := schema.GroupVersion{
Group: networking.GroupName,
Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
}
errors = networking_validation.ValidateIngressCreate(t, gv)
errors = networking_validation.ValidateIngressCreate(t)
case *networking.IngressClass:
/*
if t.Namespace == "" {
@ -301,7 +293,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
errors = networking_validation.ValidateIngressClass(t)
case *policy.PodSecurityPolicy:
errors = policy_validation.ValidatePodSecurityPolicy(t, pspValidationOptions)
errors = policy_validation.ValidatePodSecurityPolicy(t)
case *apps.ReplicaSet:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
@ -405,7 +397,7 @@ func TestExampleObjectSchemas(t *testing.T) {
},
"admin/dns": {
"busybox": {&api.Pod{}},
"dns-horizontal-autoscaler": {&apps.Deployment{}},
"dns-horizontal-autoscaler": {&api.ServiceAccount{}, &rbac.ClusterRole{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}},
"dnsutils": {&api.Pod{}},
},
"admin/logging": {
@ -453,7 +445,7 @@ func TestExampleObjectSchemas(t *testing.T) {
},
"admin/sched": {
"clusterrole": {&rbac.ClusterRole{}},
"my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}},
"my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &rbac.ClusterRoleBinding{}, &api.ConfigMap{}, &apps.Deployment{}},
"pod1": {&api.Pod{}},
"pod2": {&api.Pod{}},
"pod3": {&api.Pod{}},
@ -592,6 +584,7 @@ func TestExampleObjectSchemas(t *testing.T) {
},
"pods/probe": {
"exec-liveness": {&api.Pod{}},
"grpc-liveness": {&api.Pod{}},
"http-liveness": {&api.Pod{}},
"pod-with-http-healthcheck": {&api.Pod{}},
"pod-with-tcp-socket-healthcheck": {&api.Pod{}},
@ -621,7 +614,11 @@ func TestExampleObjectSchemas(t *testing.T) {
},
"pods/storage": {
"projected": {&api.Pod{}},
"projected-secret-downwardapi-configmap": {&api.Pod{}},
"projected-secrets-nondefault-permission-mode": {&api.Pod{}},
"projected-service-account-token": {&api.Pod{}},
"pv-claim": {&api.PersistentVolumeClaim{}},
"pv-duplicate": {&api.Pod{}},
"pv-pod": {&api.Pod{}},
"pv-volume": {&api.PersistentVolume{}},
"redis": {&api.Pod{}},

View File

@ -10,6 +10,7 @@ spec:
- name: token-vol
mountPath: "/service-account"
readOnly: true
serviceAccountName: default
volumes:
- name: token-vol
projected:

View File

@ -8,9 +8,11 @@ spec:
- name: test
image: nginx
volumeMounts:
- name: site-data
# a mount for site-data
- name: config
mountPath: /usr/share/nginx/html
subPath: html
# another mount for nginx config
- name: config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf

View File

@ -78,21 +78,32 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| December 2021 | 2021-12-10 | 2021-12-15 |
| January 2022 | 2021-01-14 | 2021-01-19 |
| February 2022 | 2021-02-11 | 2021-02-16 |
| March 2022 | 2021-03-11 | 2021-03-16 |
| January 2022 | 2022-01-14 | 2022-01-19 |
| February 2022 | 2022-02-11 | 2022-02-16 |
| March 2022 | 2022-03-11 | 2022-03-16 |
## Detailed Release History for Active Branches
### 1.23
**1.23** enters maintenance mode on **2022-12-28**.
End of Life for **1.23** is **2023-02-28**.
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
| 1.23.2 | 2022-01-14 | 2022-01-19 | |
| 1.23.1 | 2021-12-14 | 2021-12-16 | |
### 1.22
**1.22** enters maintenance mode on **2022-08-28**
End of Life for **1.22** is **2022-10-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
| 1.22.6 | 2022-01-14 | 2022-01-19 | |
| 1.22.5 | 2021-12-10 | 2021-12-15 | |
| 1.22.4 | 2021-11-12 | 2021-11-17 | |
| 1.22.3 | 2021-10-22 | 2021-10-27 | |
@ -105,8 +116,9 @@ End of Life for **1.22** is **2022-10-28**
End of Life for **1.21** is **2022-06-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| Patch Release | Cherry Pick Deadline | Target Date | Note |
| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- |
| 1.21.9 | 2022-01-14 | 2022-01-19 | |
| 1.21.8 | 2021-12-10 | 2021-12-15 | |
| 1.21.7 | 2021-11-12 | 2021-11-17 | |
| 1.21.6 | 2021-10-22 | 2021-10-27 | |
@ -122,8 +134,9 @@ End of Life for **1.21** is **2022-06-28**
End of Life for **1.20** is **2022-02-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
| Patch Release | Cherry Pick Deadline | Target Date | Note |
| ------------- | -------------------- | ----------- | ----------------------------------------------------------------------------------- |
| 1.20.15 | 2022-01-14 | 2022-01-19 | |
| 1.20.14 | 2021-12-10 | 2021-12-15 | |
| 1.20.13 | 2021-11-12 | 2021-11-17 | |
| 1.20.12 | 2021-10-22 | 2021-10-27 | |
@ -144,7 +157,7 @@ End of Life for **1.20** is **2022-02-28**
These releases are no longer supported.
| MINOR VERSION | FINAL PATCH RELEASE | EOL DATE | NOTE |
| Minor Version | Final Patch Release | EOL Date | Note |
| ------------- | ------------------- | ---------- | ---------------------------------------------------------------------- |
| 1.19 | 1.19.16 | 2021-10-28 | |
| 1.18 | 1.18.20 | 2021-06-18 | Created to resolve regression introduced in 1.18.19 |

View File

@ -89,6 +89,7 @@ GitHub Mentions: [@kubernetes/release-engineering](https://github.com/orgs/kuber
- Adolfo García Veytia ([@puerco](https://github.com/puerco))
- Carlos Panato ([@cpanato](https://github.com/cpanato))
- Marko Mudrinić ([@xmudrii](https://github.com/xmudrii))
- Nabarun Pal ([@palnabarun](https://github.com/palnabarun))
- Sascha Grunert ([@saschagrunert](https://github.com/saschagrunert))
- Stephen Augustus ([@justaugustus](https://github.com/justaugustus))
- Verónica López ([@verolop](https://github.com/verolop))
@ -135,7 +136,6 @@ GitHub Mentions: @kubernetes/release-engineering
- Jim Angel ([@jimangel](https://github.com/jimangel))
- Joyce Kung ([@thejoycekung](https://github.com/thejoycekung))
- Max Körbächer ([@mkorbi](https://github.com/mkorbi))
- Nabarun Pal ([@palnabarun](https://github.com/palnabarun))
- Seth McCombs ([@sethmccombs](https://github.com/sethmccombs))
- Taylor Dolezal ([@onlydole](https://github.com/onlydole))
- Wilson Husin ([@wilsonehusin](https://github.com/wilsonehusin))

View File

@ -28,9 +28,9 @@ The Kubernetes project maintains release branches for the most recent three mino
Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility.
Patch releases are cut from those branches at a [regular cadence](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence), plus additional urgent releases, when required.
The [Release Managers](https://git.k8s.io/sig-release/release-managers.md) group owns this decision.
The [Release Managers](/releases/release-managers/) group owns this decision.
For more information, see the Kubernetes [patch releases](https://git.k8s.io/sig-release/releases/patch-releases.md) page.
For more information, see the Kubernetes [patch releases](/releases/patch-releases/) page.
## Supported version skew

View File

@ -0,0 +1,179 @@
---
reviewers:
- electrocucaracha
- raelga
title: Controlando el Acceso a la API de Kubernetes
content_type: concept
---
<!-- overview -->
Esta página proporciona información sobre cómo controlar el acceso a la API de Kubernetes.
<!-- body -->
Los usuarios acceden a la [API de Kubernetes](/docs/concepts/overview/kubernetes-api/) usando `kubectl`,
bibliotecas de cliente, o haciendo peticiones REST. Usuarios y
[Kubernetes service accounts](/docs/tasks/configure-pod-container/configure-service-account/) pueden ser
autorizados para acceder a la API.
Cuando una petición llega a la API, pasa por varias etapas, están ilustradas en el
siguiente diagrama:
![Diagrama de pasos para una petición a la API de Kubernetes](/images/docs/admin/access-control-overview.svg)
## Seguridad en la capa de transporte
En un {{< glossary_tooltip term_id="cluster" text="cluster" >}} típico de Kubernetes, la API sirve peticiones en el puerto 443, protegida por TLS.
El {{< glossary_tooltip term_id="kube-apiserver" text="API Server" >}} presenta un certificado. Este certificado puede ser firmando usando
un certificado de autoridad privada (CA) o basado en una llave pública relacionada
generalmente a un CA reconocido.
Si el cluster usa un certificado de autoridad privado, se necesita copiar este certificado
CA configurado dentro de su `~/.kube/config` en el cliente, entonces se podrá
confiar en la conexión y estar seguro que no será comprometida.
El cliente puede presentar un certificado TLS de cliente en esta etapa.
## Autenticación
Una vez que se estableció la conexión TLS, las peticiones HTTP avanzan a la etapa de autenticación.
Esto se muestra en el paso 1 del diagrama.
El script de creación del cluster o el administrador del cluster puede configurar el {{< glossary_tooltip term_id="kube-apiserver" text="API Server" >}} para ejecutar
uno o más módulos de autenticación.
Los Autenticadores están descritos con más detalle en
[Authentication](/docs/reference/access-authn-authz/authentication/).
La entrada al paso de autenticación es la petición HTTP completa; aun así, típicamente solo
examina las cabeceras y/o el certificado del cliente.
Los módulos de autenticación incluyen certificado de cliente, contraseña, tokens planos,
tokens de inicio y JSON Web Tokens (usados para los service accounts).
Múltiples módulos de autenticación pueden ser especificados; en este caso, cada uno es probado secuencialmente,
hasta que uno de ellos tiene éxito.
Si la petición no puede ser autenticada, la misma es rechazada con un código HTTP 401.
Si la autenticación tiene éxito, el usuario es validado con el `username` específico, y el nombre de usuario
está disponible para los pasos siguientes. Algunos autenticadores
también proporcionan membresías de grupo al usuario, mientras que otros
no lo hacen.
Aunque Kubernetes utiliza los nombres de usuario para tomar decisiones durante el control de acceso y para registrar las peticiones de entrada, no tiene un objeto `User` ni tampoco almacena información sobre los usuarios en la API.
## Autorización
Después de autenticar la petición como proveniente de un usuario específico, la petición debe ser autorizada. Esto se muestra en el paso 2 del diagrama.
Una petición debe incluir el nombre del usuario solicitante, la acción solicitada y el objeto afectado por la acción. La petición es autorizada si existe una política que declare que el usuario tiene permisos para realizar la acción.
Por ejemplo, si el usuario Bob tiene la siguiente política, entonces puede leer pods solamente en el namespace `projectCaribou`:
```json
{
"apiVersion": "abac.authorization.kubernetes.io/v1beta1",
"kind": "Policy",
"spec": {
"user": "bob",
"namespace": "projectCaribou",
"resource": "pods",
"readonly": true
}
}
```
Si Bob hace la siguiente petición, será autorizada dado que tiene permitido leer los objetos en el namespace `projectCaribou` :
```json
{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"spec": {
"resourceAttributes": {
"namespace": "projectCaribou",
"verb": "get",
"group": "unicorn.example.org",
"resource": "pods"
}
}
}
```
En cambio, si Bob en su petición intenta escribir (`create` o `update`) en los objetos del namespace `projectCaribou`, la petición será denegada. Del mismo modo, si Bob hace una petición para leer (`get`) objetos en otro namespace como `projectFish`, la autorización también será denegada.
Las autorizaciones en Kubernetes requieren que se usen atributos REST comunes para interactuar con los sistemas de control existentes de toda la organización o del proveedor cloud. Es importante usar formatos REST porque esos sistemas de control pueden interactuar con otras APIs además de la API de Kubernetes.
Kubernetes soporta múltiples módulos de autorización, como el modo ABAC, el modo RBAC y el modo Webhook. Cuando un administrador crea un cluster, configura los módulos de autorización que deben ser usados con el API server. Si más de un módulo de autorización es configurado, Kubernetes verifica cada uno y, si alguno de ellos autoriza la petición, entonces la misma se ejecuta. Si todos los módulos deniegan la petición, entonces la misma es denegada (con un error HTTP con código 403).
Para leer más acerca de las autorizaciones en Kubernetes, incluyendo detalles sobre cómo crear políticas usando los módulos de autorización soportados, vea [Authorization](/docs/reference/access-authn-authz/authorization/).
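A modo de esquema ilustrativo (no forma parte del ejemplo ABAC anterior; los nombres de los objetos son hipotéticos), el mismo permiso de solo lectura de pods para Bob en el namespace `projectCaribou` podría expresarse con el módulo RBAC mediante un Role y un RoleBinding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: projectCaribou
  name: pod-reader
rules:
- apiGroups: [""]            # "" indica el grupo core de la API
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: projectCaribou
subjects:
- kind: User
  name: bob                  # el nombre de usuario autenticado
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```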
## Control de Admisión
Los módulos de Control de Admisión son módulos de software que solo pueden modificar o rechazar peticiones.
Adicionalmente a los atributos disponibles en los módulos de Autorización, los de
Control de Admisión pueden acceder al contenido del objeto que está siendo creado o modificado.
Los Controles de Admisión actúan en las peticiones que crean, modifican, borran o se conectan (proxy) a un objeto.
Cuando múltiples módulos de control de admisión son configurados, son llamados en orden.
Esto se muestra en el paso 3 del diagrama.
A diferencia de los módulos de Autorización y Autenticación, si uno de los módulos de control de admisión
rechaza la petición, entonces es inmediatamente rechazada.
Adicionalmente a rechazar objetos, los controles de admisión también permiten establecer
valores predeterminados complejos.
Los módulos de Control de Admisión disponibles están descritos en [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).
Cuando una petición pasa todos los controles de admisión, esta es validada usando las rutinas de validación
para el objeto API correspondiente y luego es escrita en el objeto.
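Como esquema mínimo e ilustrativo (el servicio, el namespace y el nombre del webhook son hipotéticos), un control de admisión dinámico de tipo validación se registra mediante un objeto `ValidatingWebhookConfiguration`:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-policy.example.com
webhooks:
- name: demo-policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: demo-webhook        # namespace hipotético
      name: demo-webhook-svc         # nombre de servicio hipotético
      path: /validate
    # caBundle: <CA en base64 que firma el certificado del webhook>
  failurePolicy: Fail
```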
## Puertos e IPs del API server
La discusión previa aplica a peticiones enviadas a un puerto seguro del servidor API
(el caso típico). El servidor API puede en realidad servir en 2 puertos:
Por defecto, la API de Kubernetes entrega HTTP en 2 puertos:
1. puerto `localhost`:
- pensado para pruebas y para el arranque inicial del sistema, así como para que otros componentes del nodo maestro
(scheduler, controller-manager) se comuniquen con la API
- no se usa TLS
- el puerto predeterminado es el `8080`
- la IP por defecto es localhost, la puede cambiar con el flag `--insecure-bind-address`.
- la petición no pasa por los mecanismos de autenticación ni autorización
- peticiones controladas por los módulos de control de admisión.
- protegidas por necesidad para tener acceso al host
2. “Puerto seguro”:
- usar siempre que sea posible
- usa TLS. Se configura el certificado con el flag `--tls-cert-file` y la clave con `--tls-private-key-file`.
- el puerto predeterminado es `6443`, se cambia con el flag `--secure-port`.
- la IP por defecto es la primera interfaz que no es localhost. Se cambia con el flag `--bind-address`.
- peticiones controladas por los módulos de autenticación y autorización.
- peticiones controladas por los módulos de control de admisión.
## {{% heading "whatsnext" %}}
En los siguientes enlaces, encontrará mucha más documentación sobre autenticación, autorización y el control de acceso a la API:
- [Authenticating](/docs/reference/access-authn-authz/authentication/)
- [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/)
- [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
- [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [Authorization](/docs/reference/access-authn-authz/authorization/)
- [Role Based Access Control](/docs/reference/access-authn-authz/rbac/)
- [Attribute Based Access Control](/docs/reference/access-authn-authz/abac/)
- [Node Authorization](/docs/reference/access-authn-authz/node/)
- [Webhook Authorization](/docs/reference/access-authn-authz/webhook/)
- [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/)
- including [CSR approval](/docs/reference/access-authn-authz/certificate-signing-requests/#approval-rejection)
and [certificate signing](/docs/reference/access-authn-authz/certificate-signing-requests/#signing)
- Service accounts
- [Developer guide](/docs/tasks/configure-pod-container/configure-service-account/)
- [Administration](/docs/reference/access-authn-authz/service-accounts-admin/)
- Cómo los pods pueden usar
[Secrets](/docs/concepts/configuration/secret/#service-accounts-automatically-create-and-attach-secrets-with-api-credentials)
para obtener credenciales para la API.

View File

@ -393,7 +393,7 @@ Los `ServiceTypes` de Kubernetes permiten especificar qué tipo de Service quier
Los valores `Type` y sus comportamientos son:
- `ClusterIP`: Expone el Service en una dirección IP interna del clúster. Al escoger este valor el Service solo es alcanzable desde el clúster. Este es el `ServiceType` por defecto.
- [`NodePort`](#nodeport): Expone el Service en cada IP del nodo en un puerto estático (el `NodePort`). Automáticamente se crea un Service `ClusterIP`, al cual enruta el `NodePort`del Service. Podrás alcanzar el Service `NodePort` desde fuera del clúster, haciendo una petición a `<NodeIP>:<NodePort>`.
- [`NodePort`](#tipo-nodeport): Expone el Service en cada IP del nodo en un puerto estático (el `NodePort`). Automáticamente se crea un Service `ClusterIP`, al cual enruta el `NodePort` del Service. Podrás alcanzar el Service `NodePort` desde fuera del clúster, haciendo una petición a `<NodeIP>:<NodePort>`.
- [`LoadBalancer`](#loadbalancer): Expone el Service externamente usando el balanceador de carga del proveedor de la nube. Se crean automáticamente Services `NodePort` y `ClusterIP`, a los cuales apuntará el balanceador externo.
- [`ExternalName`](#externalname): Mapea el Service al contenido del campo `externalName` (ej. `foo.bar.example.com`), al devolver un registro `CNAME` con su valor. No se configura ningún tipo de proxy.
@ -403,7 +403,7 @@ Los valores `Type` y sus comportamientos son:
También puedes usar un [Ingress](/docs/concepts/services-networking/ingress/) para exponer tu Service. Ingress no es un tipo de Service, pero actúa como el punto de entrada de tu clúster. Te permite consolidar tus reglas de enrutamiento en un único recurso, ya que puede exponer múltiples servicios bajo la misma dirección IP.
### Tipo NodePort {#nodeport}
### Tipo NodePort {#tipo-nodeport}
Si estableces el campo `type` a `NodePort`, el plano de control de Kubernetes asigna un puerto desde un rango especificado por la bandera `--service-node-port-range` (por defecto: 30000-32767).
Cada nodo es un proxy de ese puerto (el mismo número de puerto en cada nodo) hacia tu Service. Tu Service reporta al puerto asignado en el campo `.spec.ports[*].nodePort`
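Como esquema mínimo e ilustrativo (el nombre del Service, el selector y los puertos son valores hipotéticos), un Service de tipo `NodePort` se declara así:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mi-servicio
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - port: 80          # puerto del ClusterIP dentro del clúster
    targetPort: 8080  # puerto del contenedor
    nodePort: 30007   # opcional; si se omite, se asigna uno del rango 30000-32767
```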

View File

@ -44,7 +44,7 @@ de datos de uso de recursos de todo el clúster.
A partir de Kubernetes 1.8, el servidor de métricas se despliega por defecto como un objeto de
tipo [Deployment](https://github.com/docs/concepts/workloads/controllers/deployment/) en clústeres
creados con el script `kube-up.sh`. Si usas otro mecanismo de configuración de Kubernetes, puedes desplegarlo
usando los [yamls de despliegue](https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy)
usando los [yamls de despliegue](https://github.com/kubernetes-sigs/metrics-server/releases)
proporcionados. Está soportado a partir de Kubernetes 1.7 (más detalles al final).
El servidor reúne métricas de la Summary API, que es expuesta por el [Kubelet](/docs/admin/kubelet/) en cada nodo.

View File

@ -363,7 +363,7 @@ Les valeurs de `Type` et leurs comportements sont:
* `ClusterIP`: Expose le service sur une IP interne au cluster.
Le choix de cette valeur rend le service uniquement accessible à partir du cluster.
Il s'agit du `ServiceType` par défaut.
* [`NodePort`](#nodeport): Expose le service sur l'IP de chaque nœud sur un port statique (le `NodePort`).
* [`NodePort`](#type-nodeport): Expose le service sur l'IP de chaque nœud sur un port statique (le `NodePort`).
Un service `ClusterIP`, vers lequel le service `NodePort` est routé, est automatiquement créé.
Vous pourrez contacter le service `NodePort`, depuis l'extérieur du cluster, en demandant `<NodeIP>: <NodePort>`.
* [`LoadBalancer`](#loadbalancer): Expose le service en externe à l'aide de l'équilibreur de charge d'un fournisseur de cloud.
@ -378,7 +378,7 @@ Vous pouvez également utiliser [Ingress](/fr/docs/concepts/services-networking/
Ingress n'est pas un type de service, mais il sert de point d'entrée pour votre cluster.
Il vous permet de consolider vos règles de routage en une seule ressource car il peut exposer plusieurs services sous la même adresse IP.
### Type NodePort {#nodeport}
### Type NodePort {#type-nodeport}
Si vous définissez le champ `type` sur` NodePort`, le plan de contrôle Kubernetes alloue un port à partir d'une plage spécifiée par l'indicateur `--service-node-port-range` (par défaut: 30000-32767).
Chaque nœud assure le proxy de ce port (le même numéro de port sur chaque nœud) vers votre service.

View File

@ -29,7 +29,7 @@ Laman ini akan menjabarkan beberapa *add-ons* yang tersedia serta tautan instruk
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), yang berbasis dari [Tungsten Fabric](https://tungsten.io), merupakan sebuah proyek *open source* yang menyediakan virtualisasi jaringan *multi-cloud* serta platform manajemen *policy*. Contrail dan Tungsten Fabric terintegrasi dengan sistem orkestrasi lainnya seperti Kubernetes, OpenShift, OpenStack dan Mesos, serta menyediakan mode isolasi untuk mesin virtual (VM), kontainer/pod dan *bare metal*.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) merupakan penyedia jaringan *overlay* yang dapat digunakan pada Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) merupakan solusi jaringan yang mendukung multipel jaringan pada Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) merupakan sebuah multi *plugin* agar Kubernetes mendukung multipel jaringan secara bersamaan sehingga dapat menggunakan semua *plugin* CNI (contoh: Calico, Cilium, Contiv, Flannel), ditambah pula dengan SRIOV, DPDK, OVS-DPDK dan VPP pada *workload* Kubernetes.
* Multus merupakan sebuah multi *plugin* agar Kubernetes mendukung multipel jaringan secara bersamaan sehingga dapat menggunakan semua *plugin* CNI (contoh: Calico, Cilium, Contiv, Flannel), ditambah pula dengan SRIOV, DPDK, OVS-DPDK dan VPP pada *workload* Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) menyediakan integrasi antara VMware NSX-T dan orkestrator kontainer seperti Kubernetes, termasuk juga integrasi antara NSX-T dan platform CaaS/PaaS berbasis kontainer seperti *Pivotal Container Service* (PKS) dan OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) merupakan platform SDN yang menyediakan *policy-based* jaringan antara Kubernetes Pods dan non-Kubernetes *environment* dengan *monitoring* visibilitas dan keamanan.
* [Romana](http://romana.io) merupakan solusi jaringan *Layer* 3 untuk jaringan pod yang juga mendukung [*NetworkPolicy* API](/id/docs/concepts/services-networking/network-policies/). Instalasi Kubeadm *add-on* ini tersedia [di sini](https://github.com/romana/romana/tree/master/containerize).

View File

@ -195,10 +195,6 @@ Multus mendukung semua [plugin referensi](https://github.com/containernetworking
Platform Nuage menggunakan _overlay_ untuk menyediakan jaringan berbasis kebijakan yang mulus antara Kubernetes Pod-Pod dan lingkungan non-Kubernetes (VM dan server _bare metal_). Model abstraksi kebijakan Nuage dirancang dengan mempertimbangkan aplikasi dan membuatnya mudah untuk mendeklarasikan kebijakan berbutir halus untuk aplikasi. Mesin analisis _real-time_ platform memungkinkan pemantauan visibilitas dan keamanan untuk aplikasi Kubernetes.
### OpenVSwitch
[OpenVSwitch](https://www.openvswitch.org/) adalah cara yang agak lebih dewasa tetapi juga rumit untuk membangun jaringan _overlay_. Ini didukung oleh beberapa "Toko Besar" untuk jaringan.
### OVN (Open Virtual Networking)
OVN adalah solusi virtualisasi jaringan opensource yang dikembangkan oleh komunitas Open vSwitch. Ini memungkinkan seseorang membuat switch logis, router logis, ACL stateful, load-balancers dll untuk membangun berbagai topologi jaringan virtual. Proyek ini memiliki plugin dan dokumentasi Kubernetes spesifik di [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).

View File

@ -21,7 +21,7 @@ Dokumentasi ini terbuka. Jika Anda menemukan sesuatu yang tidak ada dalam daftar
- Tulis file konfigurasi Anda menggunakan YAML tidak dengan JSON. Meskipun format ini dapat digunakan secara bergantian di hampir semua skenario, YAML cenderung lebih ramah pengguna.
- Kelompokkan objek terkait ke dalam satu file yang memungkinkan. Satu file seringkali lebih mudah dikelola daripada beberapa file. Lihat pada [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/all-in-one/guestbook-all-in-one.yaml) sebagai contoh file sintaks ini.
- Kelompokkan objek terkait ke dalam satu file yang memungkinkan. Satu file seringkali lebih mudah dikelola daripada beberapa file. Lihat pada [guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/guestbook-all-in-one.yaml) sebagai contoh file sintaks ini.
- Perhatikan juga bahwa banyak perintah `kubectl` dapat dipanggil pada direktori. Misalnya, Anda dapat memanggil `kubectl apply` pada direktori file konfigurasi.

View File

@ -387,7 +387,7 @@ _Value_ dan perilaku dari tipe `Service` dijelaskan sebagai berikut:
* `ClusterIP`: Mengekspos `Service` ke _range_ alamat IP di dalam klaster. Apabila kamu memilih _value_ ini
`Service` yang kamu miliki hanya dapat diakses secara internal. tipe ini adalah
_default_ _value_ dari _ServiceType_.
* [`NodePort`](#nodeport): Mengekspos `Service` pada setiap IP *node* pada _port_ statis
* [`NodePort`](#type-nodeport): Mengekspos `Service` pada setiap IP *node* pada _port_ statis
atau _port_ yang sama. Sebuah `Service` `ClusterIP`, yang mana `Service` `NodePort` akan di-_route_
, dibuat secara otomatis. Kamu dapat mengakses `Service` dengan tipe ini,
dari luar klaster melalui `<NodeIP>:<NodePort>`.
@ -399,7 +399,7 @@ _Value_ dan perilaku dari tipe `Service` dijelaskan sebagai berikut:
catatan `CNAME` beserta _value_-nya. Tidak ada metode _proxy_ apa pun yang diaktifkan. Mekanisme ini
setidaknya membutuhkan `kube-dns` versi 1.7.
### Type NodePort {#nodeport}
### Type NodePort {#type-nodeport}
Jika kamu menerapkan _value_ `NodePort` pada _field_ _type_, master Kubernetes akan mengalokasikan
_port_ dari _range_ yang dispesifikasikan oleh penanda `--service-node-port-range` (secara _default_, 30000-32767)

View File

@ -28,7 +28,7 @@ I componenti aggiuntivi in ogni sezione sono ordinati alfabeticamente - l'ordine
* [Contiv](http://contiv.github.io) offre networking configurabile (L3 nativo con BGP, overlay con vxlan, L2 classico e Cisco-SDN / ACI) per vari casi d'uso e un ricco framework di policy. Il progetto Contiv è completamente [open source](http://github.com/contiv). Il [programma di installazione](http://github.com/contiv/install) fornisce sia opzioni di installazione basate su kubeadm che non su Kubeadm.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) è un provider di reti sovrapposte che può essere utilizzato con Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) è una soluzione di rete che supporta più reti in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) è un multi-plugin per il supporto di più reti in Kubernetes per supportare tutti i plugin CNI (es. Calico, Cilium, Contiv, Flannel), oltre a SRIOV, DPDK, OVS-DPDK e carichi di lavoro basati su VPP in Kubernetes.
* Multus è un multi-plugin per il supporto di più reti in Kubernetes per supportare tutti i plugin CNI (es. Calico, Cilium, Contiv, Flannel), oltre a SRIOV, DPDK, OVS-DPDK e carichi di lavoro basati su VPP in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) fornisce l'integrazione tra VMware NSX-T e orchestratori di contenitori come Kubernetes, oltre all'integrazione tra NSX-T e piattaforme CaaS / PaaS basate su container come Pivotal Container Service (PKS) e OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1/docs/kubernetes-1-installation.rst) è una piattaforma SDN che fornisce una rete basata su policy tra i pod di Kubernetes e non Kubernetes con visibilità e monitoraggio della sicurezza.
* [Romana](http://romana.io) è una soluzione di rete Layer 3 per pod network che supporta anche [API NetworkPolicy](/docs/concepts/services-networking/network-policies/). Dettagli di installazione del componente aggiuntivo di Kubeadm disponibili [qui](https://github.com/romana/romana/tree/master/containerize).

View File

@ -292,12 +292,6 @@ Nuage è stato progettato pensando alle applicazioni e semplifica la dichiarazio
applicazioni. Il motore di analisi in tempo reale della piattaforma consente la visibilità e il monitoraggio della
sicurezza per le applicazioni Kubernetes.
### OpenVSwitch
[OpenVSwitch](https://www.openvswitch.org/) è un po 'più maturo ma anche
modo complicato per costruire una rete di sovrapposizione. Questo è approvato da molti dei
"Grandi negozi" per il networking.
### OVN (Apri rete virtuale)
OVN è una soluzione di virtualizzazione della rete opensource sviluppata da

View File

@ -25,7 +25,7 @@ content_type: concept
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/)は、[Tungsten Fabric](https://tungsten.io)をベースにしている、オープンソースでマルチクラウドに対応したネットワーク仮想化およびポリシー管理プラットフォームです。ContrailおよびTungsten Fabricは、Kubernetes、OpenShift、OpenStack、Mesosなどのオーケストレーションシステムと統合されており、仮想マシン、コンテナ/Pod、ベアメタルのワークロードに隔離モードを提供します。
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually)は、Kubernetesで使用できるオーバーレイネットワークプロバイダーです。
* [Knitter](https://github.com/ZTE/Knitter/)は、1つのKubernetes Podで複数のネットワークインターフェイスをサポートするためのプラグインです。
* [Multus](https://github.com/Intel-Corp/multus-cni)は、すべてのCNIプラグイン(たとえば、Calico、Cilium、Contiv、Flannel)に加えて、SRIOV、DPDK、OVS-DPDK、VPPをベースとするKubernetes上のワークロードをサポートする、複数のネットワークサポートのためのマルチプラグインです。
* Multus は、すべてのCNIプラグイン(たとえば、Calico、Cilium、Contiv、Flannel)に加えて、SRIOV、DPDK、OVS-DPDK、VPPをベースとするKubernetes上のワークロードをサポートする、複数のネットワークサポートのためのマルチプラグインです。
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/)は、Open vSwitch(OVS)プロジェクトから生まれた仮想ネットワーク実装である[OVN(Open Virtual Network)](https://github.com/ovn-org/ovn/)をベースとする、Kubernetesのためのネットワークプロバイダです。OVN-Kubernetesは、OVSベースのロードバランサーおよびネットワークポリシーの実装を含む、Kubernetes向けのオーバーレイベースのネットワーク実装を提供します。
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin)は、クラウドネイティブベースのService function chaining(SFC)、Multiple OVNオーバーレイネットワーク、動的なサブネットの作成、動的な仮想ネットワークの作成、VLANプロバイダーネットワーク、Directプロバイダーネットワークを提供し、他のMulti-networkプラグインと付け替え可能なOVNベースのCNIコントローラープラグインです。
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in(NCP)は、VMware NSX-TとKubernetesなどのコンテナオーケストレーター間のインテグレーションを提供します。また、NSX-Tと、Pivotal Container Service(PKS)とOpenShiftなどのコンテナベースのCaaS/PaaSプラットフォームとのインテグレーションも提供します。

View File

@ -243,7 +243,7 @@ Lars Kellogg-Stedman.
### Multus (a Multi Network plugin)
[Multus](https://github.com/Intel-Corp/multus-cni) is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (eg. [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and 3rd party plugins (eg. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition to it, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [DPDK](https://github.com/Intel-Corp/sriov-cni), [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes with both cloud native and NFV based applications in Kubernetes.

View File

@ -30,7 +30,7 @@ Kubernetesは柔軟な設定が可能で、高い拡張性を持っています
ホスティングされたKubernetesサービスやマネージドなKubernetesでは、フラグと設定ファイルが常に変更できるとは限りません。変更可能な場合でも、通常はクラスターの管理者のみが変更できます。また、それらは将来のKubernetesバージョンで変更される可能性があり、設定変更にはプロセスの再起動が必要になるかもしれません。これらの理由により、この方法は他の選択肢が無いときにのみ利用するべきです。
[ResourceQuota](/docs/concepts/policy/resource-quotas/)、[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/)、[NetworkPolicy](/docs/concepts/services-networking/network-policies/)、そしてロールベースアクセス制御([RBAC](/docs/reference/access-authn-authz/rbac/))といった *ビルトインポリシーAPI* は、ビルトインのKubernetes APIです。APIは通常、ホスティングされたKubernetesサービスやマネージドなKubernetesで利用されます。これらは宣言的で、Podのような他のKubernetesリソースと同じ慣例に従っています。そのため、新しいクラスターの設定は繰り返し再利用することができ、アプリケーションと同じように管理することが可能です。さらに、安定版(stable)を利用している場合、他のKubernetes APIのような[定義済みのサポートポリシー](/docs/reference/deprecation-policy/)を利用することができます。これらの理由により、この方法は、適切な用途の場合、 *設定ファイル**フラグ* よりも好まれます。
[ResourceQuota](/ja/docs/concepts/policy/resource-quotas/)、[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/)、[NetworkPolicy](/ja/docs/concepts/services-networking/network-policies/)、そしてロールベースアクセス制御([RBAC](/ja/docs/reference/access-authn-authz/rbac/))といった *ビルトインポリシーAPI* は、ビルトインのKubernetes APIです。APIは通常、ホスティングされたKubernetesサービスやマネージドなKubernetesで利用されます。これらは宣言的で、Podのような他のKubernetesリソースと同じ慣例に従っています。そのため、新しいクラスターの設定は繰り返し再利用することができ、アプリケーションと同じように管理することが可能です。さらに、安定版(stable)を利用している場合、他のKubernetes APIのような[定義済みのサポートポリシー](/docs/reference/deprecation-policy/)を利用することができます。これらの理由により、この方法は、適切な用途の場合、 *設定ファイル**フラグ* よりも好まれます。
## 拡張
@ -115,7 +115,7 @@ Kubdernetesはいくつかのビルトイン認証方式をサポートしてい
[認証](/ja/docs/reference/access-authn-authz/authentication/)は、全てのリクエストのヘッダーまたは証明書情報を、リクエストを投げたクライアントのユーザー名にマッピングします。
Kubernetesはいくつかのビルトイン認証方式と、それらが要件に合わない場合、[認証Webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)を提供します。
Kubernetesはいくつかのビルトイン認証方式と、それらが要件に合わない場合、[認証Webhook](/ja/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)を提供します。
### 認可
@ -161,4 +161,4 @@ Kubernetesはいくつかのビルトイン認証方式と、それらが要件
* [ネットワークプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
* [デバイスプラグイン](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
* [kubectlプラグイン](/docs/tasks/extend-kubectl/kubectl-plugins/)について学ぶ
* [オペレーターパターン](/docs/concepts/extend-kubernetes/operator/)について学ぶ
* [オペレーターパターン](/ja/docs/concepts/extend-kubernetes/operator/)について学ぶ

View File

@ -118,7 +118,7 @@ CRDオブジェクトの名前は[DNSサブドメイン名](/ja/docs/concepts/ov
通常、Kubernetes APIの各リソースは、RESTリクエストとオブジェクトの永続的なストレージを管理するためのコードが必要です。メインのKubernetes APIサーバーは *Pod**Service* のようなビルトインのリソースを処理し、またカスタムリソースも[CRD](#customresourcedefinition)を通じて同じように管理することができます。
[アグリゲーションレイヤー](/ja/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)は、独自のスタンドアローンAPIサーバーを書き、デプロイすることで、カスタムリソースに特化した実装の提供を可能にします。メインのAPIサーバーが、処理したいカスタムリソースへのリクエストを委譲することで、他のクライアントからも利用できるようにします。
[アグリゲーションレイヤー](/ja/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)は、独自のAPIサーバーを書き、デプロイすることで、カスタムリソースに特化した実装の提供を可能にします。メインのAPIサーバーが、処理したいカスタムリソースへのリクエストを独自のAPIサーバーに委譲することで、他のクライアントからも利用できるようにします。
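参考までに、CRDでカスタムリソースを追加する場合の最小構成のスケッチを示します(グループ名やフィールドは説明用の仮のものです)。
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # 名前は <plural>.<group> の形式にする必要があります
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer
```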
## カスタムリソースの追加方法を選択する

View File

@ -26,7 +26,7 @@ Kubernetes上でワークロードを稼働させている人は、しばしば
Kubernetesは自動化のために設計されています。追加の作業、設定無しに、Kubernetesのコア機能によって多数のビルトインされた自動化機能が提供されます。
ワークロードのデプロイおよび稼働を自動化するためにKubernetesを使うことができます。 *さらに* Kubernetesがそれをどのように行うかの自動化も可能です。
Kubernetesの{{< glossary_tooltip text="コントローラー" term_id="controller" >}}コンセプトは、Kubernetesのソースコードを修正すること無く、クラスターの振る舞いを拡張することを可能にします。
Kubernetesの{{< glossary_tooltip text="オペレーターパターン" term_id="operator-pattern" >}}コンセプトは、Kubernetesのソースコードを修正すること無く、一つ以上のカスタムリソースに{{< glossary_tooltip text="カスタムコントローラー" term_id="controller" >}}をリンクすることで、クラスターの振る舞いを拡張することを可能にします。
オペレーターはKubernetes APIのクライアントで、[Custom Resource](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)にとっての、コントローラーのように振る舞います。
## オペレーターの例 {#example}

View File

@ -376,9 +376,19 @@ PersistentVolumeは、リソースプロバイダーがサポートする方法
アクセスモードは次の通りです。
* ReadWriteOnce ボリュームは単一のNodeで読み取り/書き込みとしてマウントできます
* ReadOnlyMany ボリュームは多数のNodeで読み取り専用としてマウントできます
* ReadWriteMany ボリュームは多数のNodeで読み取り/書き込みとしてマウントできます
`ReadWriteOnce`
: ボリュームは単一のNodeで読み取り/書き込みとしてマウントできます
`ReadOnlyMany`
: ボリュームは多数のNodeで読み取り専用としてマウントできます
`ReadWriteMany`
: ボリュームは多数のNodeで読み取り/書き込みとしてマウントできます
`ReadWriteOncePod`
: ボリュームは、単一のPodで読み取り/書き込みとしてマウントできます。クラスタ全体で1つのPodのみがそのPVCの読み取りまたは書き込みを行えるようにする場合は、ReadWriteOncePodアクセスモードを使用します。これは、CSIボリュームとKubernetesバージョン1.22以降でのみサポートされます。
これについてはブログ[Introducing Single Pod Access Mode for PersistentVolumes](/blog/2021/09/13/read-write-once-pod-access-mode-alpha/)に詳細が記載されています。
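参考までに、`ReadWriteOncePod` を指定するPersistentVolumeClaimの最小構成のスケッチを示します(名前やストレージ容量は説明用の仮の値です)。
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
  - ReadWriteOncePod   # このPVCを読み書きできるPodはクラスター全体で1つだけ
  resources:
    requests:
      storage: 1Gi
```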
CLIではアクセスモードは次のように略されます。

View File

@ -1,123 +1,98 @@
---
title: 大規模クラスタの構築
title: 大規模クラスタの構築
weight: 20
---
## サポート
At {{< param "version" >}}, Kubernetes supports clusters with up to 5000 nodes. More specifically, we support configurations that meet *all* of the following criteria:
クラスターはKubernetesのエージェントが動作する(物理もしくは仮想の){{< glossary_tooltip text="ノード" term_id="node" >}}の集合で、{{< glossary_tooltip text="コントロールプレーン" term_id="control-plane" >}}によって管理されます。
Kubernetes {{< param "version" >}} では、最大5000ノードから構成されるクラスターをサポートします。
具体的には、Kubernetesは次の基準を *全て* 満たす構成に対して適用できるように設計されています。
* No more than 110 pods per node
* No more than 5000 nodes
* No more than 150000 total pods
* No more than 300000 total containers
* 1ノードにつきPodが110個以上存在しない
* 5000ノード以上存在しない
* Podの総数が150000個以上存在しない
* コンテナの総数が300000個以上存在しない
ノードを追加したり削除したりすることによって、クラスターをスケールできます。
これを行う方法は、クラスターがどのようにデプロイされたかに依存します。
## 構築
## クラウドプロバイダーのリソースクォータ {#クォータの問題}
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by a "master" (the cluster-level control plane).
クラウドプロバイダーのクォータの問題に遭遇することを避けるため、多数のノードを使ったクラスターを作成するときには次のようなことを考慮してください。
Normally the number of nodes in a cluster is controlled by the value `NUM_NODES` in the platform-specific `config-default.sh` file (for example, see [GCE's `config-default.sh`](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/gce/config-default.sh)).
* 次のようなクラウドリソースの増加をリクエストする
* コンピューターインスタンス
* CPU
* ストレージボリューム
* 使用中のIPアドレス
* パケットフィルタリングのルールセット
* ロードバランサーの数
* ネットワークサブネット
* ログストリーム
* クラウドプロバイダーによる新しいインスタンスの作成に対するレート制限のため、バッチで新しいノードを立ち上げるようなクラスターのスケーリング操作を通すためには、バッチ間ですこし休止を入れます。
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run in to quota issues and fail to bring the cluster up.
When setting up a large Kubernetes cluster, the following issues must be considered.
## コントロールプレーンのコンポーネント
### クォータの問題
大きなクラスターでは、十分な計算とその他のリソースを持ったコントロールプレーンが必要になります。
To avoid running into cloud provider quota issues, when creating a cluster with many nodes, consider:
特に故障ゾーンあたり1つまたは2つのコントロールプレーンインスタンスを動かす場合、最初に垂直方向にインスタンスをスケールし、垂直方向のスケーリングの効果が低下するポイントに達したら水平方向にスケールします。
* Increase the quota for things like CPU, IPs, etc.
* In [GCE, for example,](https://cloud.google.com/compute/docs/resource-quotas) you'll want to increase the quota for:
* CPUs
* VM instances
* Total persistent disk reserved
* In-use IP addresses
* Firewall Rules
* Forwarding rules
* Routes
* Target pools
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
フォールトトレランスを備えるために、1つの故障ゾーンに対して最低1インスタンスを動かすべきです。
Kubernetesノードは、同一故障ゾーン内のコントロールプレーンエンドポイントに対して自動的にトラフィックが向かないようにします。
しかし、クラウドプロバイダーはこれを実現するための独自の機構を持っているかもしれません。
### Etcdのストレージ
例えばマネージドなロードバランサーを使うと、故障ゾーン _A_ にあるkubeletやPodから発生したトラフィックを、同じく故障ゾーン _A_ にあるコントロールプレーンホストに対してのみ送るように設定します。もし1つのコントロールプレーンホストまたは故障ゾーン _A_ のエンドポイントがオフラインになった場合、ゾーン _A_ にあるノードについてすべてのコントロールプレーンのトラフィックはゾーンを跨いで送信されます。それぞれのゾーンで複数のコントロールプレーンホストを動作させることは、結果としてほとんどありません。
To improve performance of large clusters, we store events in a separate dedicated etcd instance.
When creating a cluster, existing salt scripts:
## etcdストレージ
* start and configure additional etcd instance
* configure api-server to use it for storing events
大きなクラスターの性能を向上させるために、他の専用のetcdインスタンスにイベントオブジェクトを保存できます。
### マスターのサイズと構成要素
クラスターを作るときに、(カスタムツールを使って)以下のようなことができます。
On GCE/Google Kubernetes Engine, and AWS, `kube-up` automatically configures the proper VM size for your master depending on the number of nodes
in your cluster. On other providers, you will need to configure it manually. For reference, the sizes we use on GCE are
* 追加のetcdインスタンスを起動または設定する
* イベントを保存するために{{< glossary_tooltip term_id="kube-apiserver" text="APIサーバ" >}}を設定する
* 1-5 nodes: n1-standard-1
* 6-10 nodes: n1-standard-2
* 11-100 nodes: n1-standard-4
* 101-250 nodes: n1-standard-8
* 251-500 nodes: n1-standard-16
* more than 500 nodes: n1-standard-32
大きなクラスターのためにetcdを設定・管理する詳細については、[Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/)または[kubeadmを使用した高可用性etcdクラスターの作成](/ja/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)を見てください。
And the sizes we use on AWS are
* 1-5 nodes: m3.medium
* 6-10 nodes: m3.large
* 11-100 nodes: m3.xlarge
* 101-250 nodes: m3.2xlarge
* 251-500 nodes: c4.4xlarge
* more than 500 nodes: c4.8xlarge
## アドオンのリソース
{{< note >}}
On Google Kubernetes Engine, the size of the master node adjusts automatically based on the size of your cluster. For more information, see [this blog post](https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster-Management-Fees-on-Google-Kubernetes-Engine.html).
Kubernetesの[リソース制限](/ja/docs/concepts/configuration/manage-resources-containers/)は、メモリリークの影響やPodやコンテナが他のコンポーネントに与える他の影響を最小化することに役立ちます。
これらのリソース制限は、アプリケーションのワークロードに適用するのと同様に、{{< glossary_tooltip text="アドオン" term_id="addons" >}}のリソースにも適用されます。
On AWS, master node sizes are currently set at cluster startup time and do not change, even if you later scale your cluster up or down by manually removing or adding nodes or using a cluster autoscaler.
{{< /note >}}
### アドオンのリソース
To prevent memory leaks or other resource issues in [cluster addons](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://pr.k8s.io/10653/files) and [#10778](https://pr.k8s.io/10778/files)).
For example:
例えば、ロギングコンポーネントに対してCPUやメモリ制限を設定できます。
```yaml
...
containers:
- name: fluentd-cloud-logging
image: k8s.gcr.io/fluentd-gcp:1.16
image: fluent/fluentd-kubernetes-daemonset:v1
resources:
limits:
cpu: 100m
memory: 200Mi
```
Except for Heapster, these limits are static and are based on data we collected from addons running on 4-node clusters (see [#10335](https://issue.k8s.io/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](http://issue.k8s.io/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.
アドオンのデフォルト制限は、アドオンを小中規模のKubernetesクラスターで動作させたときの経験から得られたデータに基づきます。
大規模のクラスターで動作させる場合は、アドオンはデフォルト制限よりも多くのリソースを消費することが多いです。
これらの値を調整せずに大規模のクラスターをデプロイした場合、メモリー制限に達し続けるため、アドオンが継続的に停止されるかもしれません。
あるいは、CPUのタイムスライス制限により性能がでない状態で動作するかもしれません。
To avoid running into cluster addon resource issues, when creating a cluster with many nodes, consider the following:
クラスターのアドオンのリソース制限に遭遇しないために、多くのノードで構成されるクラスターを構築する場合は次のことを考慮します。
* Scale memory and CPU limits for each of the following addons, if used, as you scale up the size of cluster (there is one replica of each handling the entire cluster so memory and CPU usage tends to grow proportionally with size/load on cluster):
* [InfluxDB and Grafana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml)
* [kubedns, dnsmasq, and sidecar](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/kube-dns.yaml.in)
* [Kibana](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml)
* Scale number of replicas for the following addons, if used, along with the size of cluster (there are multiple replicas of each so increasing replicas should help handle increased load, but, since load per replica also increases slightly, also consider increasing CPU/memory limits):
* [elasticsearch](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml)
* Increase memory and CPU limits slightly for each of the following addons, if used, along with the size of cluster (there is one replica per node but CPU/memory usage increases slightly along with cluster load/size as well):
* [FluentD with ElasticSearch Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml)
* [FluentD with GCP Plugin](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml)
* いくつかのアドオンは垂直方向にスケールします - クラスターに1つのレプリカ、もしくは故障ゾーン全体にサービングされるものがあります。このようなアドオンでは、クラスターをスケールアウトしたときにリクエストと制限を増やす必要があります。
* 数多くのアドオンは、水平方向にスケールします - より多くのPod数を動作させることで性能を向上できます - ただし、とても大きなクラスターではCPUやメモリの制限も少し引き上げる必要があるかもしれません。VerticalPodAutoscalerは、提案されたリクエストや制限の数値を提供する `_recommender_` モードで動作可能です。
* いくつかのアドオンは{{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}によって制御され、1ノードに1つ複製される形で動作します: 例えばノードレベルのログアグリゲーターです。水平方向にスケールするアドオンの場合と同様に、CPUやメモリ制限を少し引き上げる必要があるかもしれません。
Heapster's resource limits are set dynamically based on the initial size of your cluster (see [#16185](http://issue.k8s.io/16185)
and [#22940](http://issue.k8s.io/22940)). If you find that Heapster is running
out of resources, you should adjust the formulas that compute heapster memory request (see those PRs for details).
For directions on how to detect if addon containers are hitting resource limits, see the
[Troubleshooting section of Compute Resources](/docs/concepts/configuration/manage-resources-containers/#troubleshooting).
## {{% heading "whatsnext" %}}
### 少数のノードの起動の失敗を許容する
`VerticalPodAutoscaler` は、リソースのリクエストやPodの制限についての管理を手助けするためにクラスターへデプロイ可能なカスタムリソースです。
`VerticalPodAutoscaler` やクラスターで致命的なアドオンを含むクラスターコンポーネントをスケールする方法についてさらに知りたい場合は[Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme)をご覧ください。
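参考までに、推奨値の算出のみを行うVerticalPodAutoscalerの最小構成のスケッチを示します(対象のDeployment名は説明用の仮のものです)。
```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: addon-recommender
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-addon        # 仮のDeployment名
  updatePolicy:
    updateMode: "Off"     # 推奨値のみ算出し、Podは変更しない
```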
For various reasons (see [#18969](https://github.com/kubernetes/kubernetes/issues/18969) for more details) running
`kube-up.sh` with a very large `NUM_NODES` may fail due to a very small number of nodes not coming up properly.
Currently you have two choices: restart the cluster (`kube-down.sh` and then `kube-up.sh` again), or before
running `kube-up.sh` set the environment variable `ALLOWED_NOTREADY_NODES` to whatever value you feel comfortable
with. This will allow `kube-up.sh` to succeed with fewer than `NUM_NODES` coming up. Depending on the
reason for the failure, those additional nodes may join later or the cluster may remain at a size of
`NUM_NODES - ALLOWED_NOTREADY_NODES`.
[cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#readme)は、クラスターで要求されるリソース水準を満たす正確なノード数で動作できるよう、いくつかのクラウドプロバイダーと統合されています。
[addon resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer#readme)は、クラスターのスケールが変化したときにアドオンの自動的なリサイズをお手伝いします。

Some files were not shown because too many files have changed in this diff.