Merge branch 'main' into 'dev-1.28'
commit 20b43d6095
@ -146,6 +146,7 @@ aliases:
- tengqm
- windsonsea
- xichengliudui
- ydFu
sig-docs-zh-reviews: # PR reviews for Chinese content
- asa3311
- chenrui333
@ -237,3 +238,17 @@ aliases:
- jimangel # Release Manager Associate
- jrsapi # Release Manager Associate
- salaxander # Release Manager Associate
# authoritative source: https://github.com/kubernetes/committee-security-response/blob/main/OWNERS_ALIASES
committee-security-response:
- cjcullen
- cji
- enj
- joelsmith
- micahhausler
- ritazh
- SaranBalaji90
- tabbysable
# authoritative source: https://github.com/kubernetes/sig-security/blob/main/OWNERS_ALIASES
sig-security-leads:
- IanColdwater
- tabbysable

@ -1,7 +1,6 @@
# Kubernetes Documentation

[](https://travis-ci.org/kubernetes/website)
[](https://github.com/kubernetes/website/releases/latest)
[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

Welcome! This repository contains all of the assets needed to build the [Kubernetes website and documentation](https://kubernetes.io/). We are glad that you want to contribute.

@ -56,7 +56,7 @@ Open up your browser to <http://localhost:1313> to view the website. As you make

## Running the website locally using Hugo

Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L11) file.

To build and test the site locally, run:
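The hunk ends just before the command itself; as a sketch of the usual local workflow (the `npm` step and the `make` target are assumptions based on the repository's tooling, not shown in this diff):

```shell
# Sketch: install JS dependencies, then build and serve the site locally with Hugo
npm ci
make serve   # assumed Makefile target that wraps `hugo server`
```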
@ -1 +1 @@
Subproject commit 7f83d75831813de516f88917f138c32d5f712e87
Subproject commit 55bce686224caba37f93e1e1eb53c0c9fc104ed4

@ -673,18 +673,19 @@ section#cncf {
width: 100%;
overflow: hidden;
clear: both;
display: flex;
justify-content: space-evenly;
flex-wrap: wrap;

h4 {
line-height: normal;
margin-bottom: 15px;
}

& > div:first-child {
float: left;
}

& > div:last-child {
float: right;
& > div {
background-color: #daeaf9;
border-radius: 20px;
padding: 25px;
}
}

@ -354,12 +354,48 @@ main {
word-break: break-word;
}

/* SCSS Related to the Metrics Table */

@media (max-width: 767px) { // for mobile devices, Display the names, Stability levels & types

table.metrics {
th:nth-child(n + 4),
td:nth-child(n + 4) {
display: none;
}

td.metric_type{
min-width: 7em;
}
td.metric_stability_level{
min-width: 6em;
}
}
}

table.metrics tbody{ // Tested dimensions to improve overall aesthetic of the table
tr {
td {
font-size: smaller;
}
td.metric_labels_varying{
min-width: 9em;
}
td.metric_type{
min-width: 9em;
}
td.metric_description{
min-width: 10em;
}

}
}

table.no-word-break td,
table.no-word-break code {
word-break: normal;
}
}

}

// blockquotes and callouts

@ -68,7 +68,7 @@ To install kubectl on Linux, there are the following options:

If the validation fails, `sha256` exits with a nonzero status and prints an error that may look like this:

```bash
```console
kubectl: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
```
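For context (not part of this hunk), the check that produces the output above is normally run against the downloaded binary and its checksum file; the file names here follow the kubectl install instructions and are assumptions:

```shell
# Validate the kubectl binary against the downloaded checksum file (sketch)
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
```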
@ -253,7 +253,7 @@ The following describes how the autocompletions for Fish and Zsh

If the validation fails, `sha256` exits with a nonzero status and prints an error that may look like this:

```bash
```console
kubectl-convert: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
```

@ -38,9 +38,9 @@ You can also use this tutorial if you have [Minikube locally](/docs/task

This tutorial provides a container image built from the following files:

{{< codenew language="js" file="minikube/server.js" >}}
{{% codenew language="js" file="minikube/server.js" %}}

{{< codenew language="conf" file="minikube/Dockerfile" >}}
{{% codenew language="conf" file="minikube/Dockerfile" %}}

For more information about the `docker build` command, read the [Docker documentation](https://docs.docker.com/engine/reference/commandline/build/).

@ -6,6 +6,8 @@ sitemap:
priority: 1.0
---

{{< site-searchbar >}}

{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.

@ -26,7 +26,7 @@ In this release, [IPVS-based in-cluster service load balancing](https://github.c

## Dynamic Kubelet Configuration Moves to Beta

This feature makes it possible for new Kubelet configurations to be rolled out in a live cluster. Currently, Kubelets are configured via command-line flags, which makes it difficult to update Kubelet configurations in a running cluster. With this beta feature, [users can configure Kubelets in a live cluster](/docs/tasks/administer-cluster/reconfigure-kubelet/) via the API server.
This feature makes it possible for new Kubelet configurations to be rolled out in a live cluster. Currently, Kubelets are configured via command-line flags, which makes it difficult to update Kubelet configurations in a running cluster. With this beta feature, users can configure Kubelets in a live cluster via the API server.

## Custom Resource Definitions Can Now Define Multiple Versions

@ -9,7 +9,7 @@ slug: scaling-kubernetes-networking-with-endpointslices

EndpointSlices are an exciting new API that provides a scalable and extensible alternative to the Endpoints API. EndpointSlices track IP addresses, ports, readiness, and topology information for Pods backing a Service.

In Kubernetes 1.19 this feature is enabled by default with kube-proxy reading from [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) instead of Endpoints. Although this will mostly be an invisible change, it should result in noticeable scalability improvements in large clusters. It also enables significant new features in future Kubernetes releases like [Topology Aware Routing](/docs/concepts/services-networking/service-topology/).
In Kubernetes 1.19 this feature is enabled by default with kube-proxy reading from [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) instead of Endpoints. Although this will mostly be an invisible change, it should result in noticeable scalability improvements in large clusters. It also enables significant new features in future Kubernetes releases like Topology Aware Routing.

## Scalability Limitations of the Endpoints API
With the Endpoints API, there was only one Endpoints resource for a Service. That meant that it needed to be able to store IP addresses and ports (network endpoints) for every Pod that was backing the corresponding Service. This resulted in huge API resources. To compound this problem, kube-proxy was running on every node and watching for any updates to Endpoints resources. If even a single network endpoint changed in an Endpoints resource, the whole object would have to be sent to each of those instances of kube-proxy.
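As an aside that is not part of the original post: you can list the slices backing a particular Service using the standard `kubernetes.io/service-name` label; the Service name below is a placeholder.

```shell
# List the EndpointSlices that back a Service named "my-service" (placeholder name)
kubectl get endpointslices -l kubernetes.io/service-name=my-service
```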
@ -55,7 +55,7 @@ What's next? We're developing a new built-in mechanism to help limit Pod privile

### TopologyKeys Deprecation
The Service field `topologyKeys` is now deprecated; all the component features that used this field were previously alpha, and are now also deprecated.
We've replaced `topologyKeys` with a way to implement topology-aware routing, called topology-aware hints. Topology-aware hints are an alpha feature in Kubernetes 1.21. You can read more details about the replacement feature in [Topology Aware Hints](/docs/concepts/services-networking/service-topology/); the related [KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/2433-topology-aware-hints/README.md) explains the context for why we switched.
We've replaced `topologyKeys` with a way to implement topology-aware routing, called topology-aware hints. Topology-aware hints are an alpha feature in Kubernetes 1.21. You can read more details about the replacement feature in [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/); the related [KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/2433-topology-aware-hints/README.md) explains the context for why we switched.

## Other Updates

@ -67,7 +67,7 @@ been deprecated. These removals have been superseded by newer, stable/generally

## API removals, deprecations, and other changes for Kubernetes 1.24

* [Dynamic kubelet configuration](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig` is used to enable the dynamic configuration of the kubelet. The `DynamicKubeletConfig` flag was deprecated in Kubernetes 1.22. In v1.24, this feature gate will be removed from the kubelet. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/). Refer to the ["Dynamic kubelet config is removed" KEP](https://github.com/kubernetes/enhancements/issues/281) for more information.
* [Dynamic kubelet configuration](https://github.com/kubernetes/enhancements/issues/281): `DynamicKubeletConfig` is used to enable the dynamic configuration of the kubelet. The `DynamicKubeletConfig` flag was deprecated in Kubernetes 1.22. In v1.24, this feature gate will be removed from the kubelet. Refer to the ["Dynamic kubelet config is removed" KEP](https://github.com/kubernetes/enhancements/issues/281) for more information.
* [Dynamic log sanitization](https://github.com/kubernetes/kubernetes/pull/107207): The experimental dynamic log sanitization feature is deprecated and will be removed in v1.24. This feature introduced a logging filter that could be applied to all Kubernetes system components logs to prevent various types of sensitive information from leaking via logs. Refer to [KEP-1753: Kubernetes system components logs sanitization](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#deprecation) for more information and an [alternative approach](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1753-logs-sanitization#alternatives=).
* [Removing Dockershim from kubelet](https://github.com/kubernetes/enhancements/issues/2221): the Container Runtime Interface (CRI) for Docker (i.e. Dockershim) is currently a built-in container runtime in the kubelet code base. It was deprecated in v1.20. As of v1.24, the kubelet will no longer have dockershim. Check out this blog on [what you need to do to be ready for v1.24](/blog/2022/03/31/ready-for-dockershim-removal/).
* [Storage capacity tracking for pod scheduling](https://github.com/kubernetes/enhancements/issues/1472): The CSIStorageCapacity API supports exposing currently available storage capacity via CSIStorageCapacity objects and enhances scheduling of pods that use CSI volumes with late binding. In v1.24, the CSIStorageCapacity API will be stable. The API graduating to stable initiates the deprecation of the v1beta1 CSIStorageCapacity API. Refer to the [Storage Capacity Constraints for Pod Scheduling KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1472-storage-capacity-tracking) for more information.

@ -7,7 +7,7 @@ slug: kubernetes-1-27-efficient-selinux-relabeling-beta

**Author:** Jan Šafránek (Red Hat)

# The problem
## The problem

On Linux with Security-Enhanced Linux (SELinux) enabled, it's traditionally
the container runtime that applies SELinux labels to a Pod and all its volumes.

@ -30,7 +30,7 @@ escapes the container boundary cannot access data of any other container on the
host. The container runtime still recursively relabels all pod volumes with this
random SELinux label.

# Improvement using mount options
## Improvement using mount options

If a Pod and its volume meet **all** of the following conditions, Kubernetes will
_mount_ the volume directly with the right SELinux label. Such mount will happen

@ -50,7 +50,9 @@ relabel any files on it.
applied by the container runtime by a recursive walk through the volume
(or its subPaths).

1. The Pod must have at least `seLinuxOptions.level` assigned in its [Pod Security Context](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) or all Pod containers must have it set in their [Security Contexts](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1).
1. The Pod must have at least `seLinuxOptions.level` assigned in its
[Pod Security Context](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
or all Pod containers must have it set in their [Security Contexts](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1).
Kubernetes will read the default `user`, `role` and `type` from the operating
system defaults (typically `system_u`, `system_r` and `container_t`).

@ -90,7 +92,7 @@ relabel any files on it.
set `seLinuxMount: true` will be recursively relabelled by the container
runtime.

## Mounting with SELinux context
### Mounting with SELinux context

When all aforementioned conditions are met, kubelet will
pass `-o context=<SELinux label>` mount option to the volume plugin or CSI
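Purely as an illustration of what such a context mount means at the OS level (this sketch is not from the post; the label value, device, and mount point are placeholders):

```shell
# Sketch: mounting a filesystem with an SELinux context applied at mount time
mount -o context="system_u:object_r:container_file_t:s0:c123,c456" <device> <mount-point>
```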
@ -105,7 +107,8 @@ value. Similarly, CIFS may need `-o context=<SELinux label>,nosharesock`.
It's up to the CSI driver vendor to test their CSI driver in a SELinux enabled
environment before setting `seLinuxMount: true` in the CSIDriver instance.

# How can I learn more?
## How can I learn more?

SELinux in containers: see excellent
[visual SELinux guide](https://opensource.com/business/13/11/selinux-policy-guide)
by Daniel J Walsh. Note that the guide is older than Kubernetes, it describes

@ -114,6 +117,7 @@ however, a similar concept is used for containers.

See a series of blog posts for details how exactly SELinux is applied to
containers by container runtimes:

* [How SELinux separates containers using Multi-Level Security](https://www.redhat.com/en/blog/how-selinux-separates-containers-using-multi-level-security)
* [Why you should be using Multi-Category Security for your Linux containers](https://www.redhat.com/en/blog/why-you-should-be-using-multi-category-security-your-linux-containers)

@ -0,0 +1,161 @@
---
layout: blog
title: "Spotlight on SIG CLI"
date: 2023-07-20
slug: sig-cli-spotlight-2023
canonicalUrl: https://www.kubernetes.dev/blog/2023/07/13/sig-cli-spotlight-2023/
---

**Author**: Arpit Agrawal

In the world of Kubernetes, managing containerized applications at
scale requires powerful and efficient tools. The command-line
interface (CLI) is an integral part of any developer or operator’s
toolkit, offering a convenient and flexible way to interact with a
Kubernetes cluster.

SIG CLI plays a crucial role in improving the [Kubernetes
CLI](https://github.com/kubernetes/community/tree/master/sig-cli)
experience by focusing on the development and enhancement of
`kubectl`, the primary command-line tool for Kubernetes.

In this SIG CLI Spotlight, Arpit Agrawal, SIG ContribEx-Comms team
member, talked with [Katrina Verey](https://github.com/KnVerey), Tech
Lead & Chair of SIG CLI, and [Maciej
Szulik](https://github.com/soltysh), SIG CLI Batch Lead, about SIG
CLI, current projects, challenges and how anyone can get involved.

So, whether you are a seasoned Kubernetes enthusiast or just getting
started, understanding the significance of SIG CLI will undoubtedly
enhance your Kubernetes journey.

## Introductions

**Arpit**: Could you tell us a bit about yourself, your role, and how
you got involved in SIG CLI?

**Maciej**: I’m one of the technical leads for SIG CLI. I have been working
on Kubernetes in multiple areas since 2014, and in 2018 I was
appointed a lead.

**Katrina**: I’ve been working with Kubernetes as an end-user since
2016, but it was only in late 2019 that I discovered how well SIG CLI
aligned with my experience from internal projects. I started regularly
attending meetings and made a few small PRs, and by 2021 I was working
more deeply with the
[Kustomize](https://github.com/kubernetes-sigs/kustomize) team
specifically. Later that year, I was appointed to my current roles as
subproject owner for Kustomize and KRM Functions, and as SIG CLI Tech
Lead and Chair.

## About SIG CLI

**Arpit**: Thank you! Could you share with us the purpose and goals of SIG CLI?

**Maciej**: Our
[charter](https://github.com/kubernetes/community/tree/master/sig-cli/)
has the most detailed description, but in a few words, we handle all CLI
tooling that helps you manage your Kubernetes manifests and interact
with your Kubernetes clusters.

**Arpit**: I see. And how does SIG CLI work to promote best practices
for CLI development and usage in the cloud native ecosystem?

**Maciej**: Within `kubectl`, we have several ongoing efforts that
try to encourage new contributors to align existing commands to new
standards. We publish several libraries which hopefully make it easier
to write CLIs that interact with Kubernetes APIs, such as cli-runtime
and
[kyaml](https://github.com/kubernetes-sigs/kustomize/tree/master/kyaml).

**Katrina**: We also maintain some interoperability specifications for
CLI tooling, such as the [KRM Functions
Specification](https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md)
(GA) and the new ApplySet
Specification
(alpha).

## Current projects and challenges

**Arpit**: Going through the README file, it’s clear SIG CLI has a
number of subprojects; could you highlight some important ones?

**Maciej**: The four most active subprojects that are, in my opinion,
worthy of your time investment would be:

* [`kubectl`](https://github.com/kubernetes/kubectl): the canonical Kubernetes CLI.
* [Kustomize](https://github.com/kubernetes-sigs/kustomize): a
  template-free customization tool for Kubernetes yaml manifest files.
* [KUI](https://kui.tools): a GUI interface to Kubernetes, think
  `kubectl` on steroids.
* [`krew`](https://github.com/kubernetes-sigs/krew): a plugin manager for `kubectl`.

**Arpit**: Are there any upcoming initiatives or developments that SIG
CLI is working on?

**Maciej**: There are always several initiatives we’re working on at
any given point in time. It’s best to join [one of our
calls](https://github.com/kubernetes/community/tree/master/sig-cli/#meetings)
to learn about the current ones.

**Katrina**: For major features, you can check out [our open
KEPs](https://www.kubernetes.dev/resources/keps/). For instance, in
1.27 we introduced alphas for [a new pruning mode in kubectl
apply](https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning/),
and for kubectl create plugins. Exciting ideas that are currently
under discussion include an interactive mode for `kubectl` delete
([KEP
3895](https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning))
and the `kuberc` user preferences file ([KEP
3104](https://kubernetes.io/blog/2023/05/09/introducing-kubectl-applyset-pruning)).

**Arpit**: Could you discuss any challenges that SIG CLI faces in its
efforts to improve CLIs for cloud-native technologies? What are the
future efforts to solve them?

**Katrina**: The biggest challenge we’re facing with every decision is
backwards compatibility and ensuring we don’t break existing users. It
frequently happens that fixing what's on the surface may seem
straightforward, but even fixing a bug could constitute a breaking
change for some users, which means we need to go through an extended
deprecation process to change it, or in some cases we can’t change it
at all. Another challenge is the need to balance customization with
usability in the flag sets we expose on our tools. For example, we get
many proposals for new flags that would certainly be useful to some
users, but not a large enough subset to justify the increased
complexity having them in the tool entails for everyone. The `kuberc`
proposal may help with some of these problems by giving individual
users the ability to set or override default values we can’t change,
and even create custom subcommands via aliases.

**Arpit**: With every new version release of Kubernetes, maintaining
consistency and integrity is surely challenging: how does the SIG CLI
team tackle it?

**Maciej**: This is mostly similar to the topic mentioned in the
previous question: every new change, especially to existing commands,
goes through a lot of scrutiny to ensure we don’t break existing
users. At any point in time we have to keep a reasonable balance
between features and not breaking users.

## Future plans and contribution

**Arpit**: How do you see the role of CLI tools in the cloud-native
ecosystem evolving in the future?

**Maciej**: I think that CLI tools were and will always be an
important piece of the ecosystem. Whether used by administrators on
remote machines that don’t have a GUI or in every CI/CD pipeline, they
are irreplaceable.

**Arpit**: Kubernetes is a community-driven project. Any
recommendation for anyone looking into getting involved in SIG CLI
work? Where should they start? Are there any prerequisites?

**Maciej**: There are no prerequisites other than a little bit of free
time on your hands and willingness to learn something new :-)

**Katrina**: A working knowledge of [Go](https://go.dev/) often helps,
but we also have areas in need of non-code contributions, such as the
[Kustomize docs consolidation
project](https://github.com/kubernetes-sigs/kustomize/issues/4338).

@ -0,0 +1,109 @@
---
layout: blog
title: "Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA"
date: 2023-08-16T10:00:00-08:00
slug: kubernetes-1-28-non-graceful-node-shutdown-GA
---

**Authors:** Xing Yang (VMware) and Ashutosh Kumar (Elastic)

The Kubernetes Non-Graceful Node Shutdown feature is now GA in Kubernetes v1.28.
It was introduced as
[alpha](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown)
in Kubernetes v1.24, and promoted to
[beta](https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/)
in Kubernetes v1.26.
This feature allows stateful workloads to restart on a different node if the
original node is shut down unexpectedly or ends up in a non-recoverable state
such as a hardware failure or an unresponsive OS.

## What is a Non-Graceful Node Shutdown

In a Kubernetes cluster, a node can be shut down in a planned graceful way or
unexpectedly because of reasons such as a power outage or something else external.
A node shutdown could lead to workload failure if the node is not drained
before the shutdown. A node shutdown can be either graceful or non-graceful.

The [Graceful Node Shutdown](https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/)
feature allows Kubelet to detect a node shutdown event, properly terminate the pods,
and release resources, before the actual shutdown.

When a node is shut down but not detected by Kubelet's Node Shutdown Manager,
this becomes a non-graceful node shutdown.
A non-graceful node shutdown is usually not a problem for stateless apps; however,
it is a problem for stateful apps.
A stateful application cannot function properly if its pods are stuck on the
shut-down node and are not restarting on a running node.

In the case of a non-graceful node shutdown, you can manually add an `out-of-service` taint on the Node.

```
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```

This taint triggers pods on the node to be forcefully deleted if there are no
matching tolerations on the pods. Persistent volumes attached to the shut-down node
will be detached, and new pods will be created successfully on a different running
node.

**Note:** Before applying the out-of-service taint, you must verify that a node is
already in a shutdown or power-off state (not in the middle of restarting).

Once all the workload pods that are linked to the out-of-service node have moved to
a new running node, and the shut-down node has been recovered, you should remove that
taint from the affected node.
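A sketch of that removal, using the same taint key and effect shown above (the trailing `-` removes the taint):

```shell
# Remove the out-of-service taint once the node has recovered (note the trailing "-")
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
```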
## What’s new in stable

With the promotion of the Non-Graceful Node Shutdown feature to stable, the
feature gate `NodeOutOfServiceVolumeDetach` is locked to true on
`kube-controller-manager` and cannot be disabled.

Metrics `force_delete_pods_total` and `force_delete_pod_errors_total` in the
Pod GC Controller are enhanced to account for all forceful pod deletions.
A reason is added to the metric to indicate whether the pod is forcefully deleted
because it is terminated, orphaned, terminating with the `out-of-service` taint,
or terminating and unscheduled.

A "reason" is also added to the metric `attachdetach_controller_forced_detaches`
in the Attach Detach Controller to indicate whether the force detach is caused by
the `out-of-service` taint or a timeout.

## What’s next?

This feature requires a user to manually add a taint to the node to trigger the
failover of workloads and remove the taint after the node is recovered.
In the future, we plan to find ways to automatically detect and fence nodes
that are shut down or failed and automatically fail over workloads to another node.

## How can I learn more?

Check out additional documentation on this feature
[here](https://kubernetes.io/docs/concepts/architecture/nodes/#non-graceful-node-shutdown).

## How to get involved?

We offer a huge thank you to all the contributors who helped with design,
implementation, and review of this feature and helped move it from alpha, beta, to stable:

* Michelle Au ([msau42](https://github.com/msau42))
* Derek Carr ([derekwaynecarr](https://github.com/derekwaynecarr))
* Danielle Endocrimes ([endocrimes](https://github.com/endocrimes))
* Baofa Fan ([carlory](https://github.com/carlory))
* Tim Hockin ([thockin](https://github.com/thockin))
* Ashutosh Kumar ([sonasingh46](https://github.com/sonasingh46))
* Hemant Kumar ([gnufied](https://github.com/gnufied))
* Yuiko Mouri ([YuikoTakada](https://github.com/YuikoTakada))
* Mrunal Patel ([mrunalp](https://github.com/mrunalp))
* David Porter ([bobbypage](https://github.com/bobbypage))
* Yassine Tijani ([yastij](https://github.com/yastij))
* Jing Xu ([jingxu97](https://github.com/jingxu97))
* Xing Yang ([xing-yang](https://github.com/xing-yang))

This feature is a collaboration between SIG Storage and SIG Node.
For those interested in getting involved with the design and development of any
part of the Kubernetes Storage system, join the Kubernetes Storage Special
Interest Group (SIG).
For those interested in getting involved with the design and development of the
components that support the controlled interactions between pods and host
resources, join the Kubernetes Node SIG.

@ -174,8 +174,6 @@ configure garbage collection:
* [Configuring cascading deletion of Kubernetes objects](/docs/tasks/administer-cluster/use-cascading-deletion/)
* [Configuring cleanup of finished Jobs](/docs/concepts/workloads/controllers/ttlafterfinished/)

<!-- * [Configuring unused container and image garbage collection](/docs/tasks/administer-cluster/reconfigure-kubelet/) -->

## {{% heading "whatsnext" %}}

* Learn more about [ownership of Kubernetes objects](/docs/concepts/overview/working-with-objects/owners-dependents/).

@ -42,16 +42,17 @@ Existence of kube-apiserver leases enables future capabilities that may require
each kube-apiserver.

You can inspect Leases owned by each kube-apiserver by checking for lease objects in the `kube-system` namespace
with the name `kube-apiserver-<sha256-hash>`. Alternatively you can use the label selector `k8s.io/component=kube-apiserver`:
with the name `kube-apiserver-<sha256-hash>`. Alternatively you can use the label selector `apiserver.kubernetes.io/identity=kube-apiserver`:

```shell
kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver
kubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver
```
```
NAME                                        HOLDER                                                                           AGE
kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a   kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4   5m33s
kube-apiserver-dz2dqprdpsgnm756t5rnov7yka   kube-apiserver-dz2dqprdpsgnm756t5rnov7yka_84f2a85d-37c1-4b14-b6b9-603e62e4896f   4m23s
kube-apiserver-fyloo45sdenffw2ugwaz3likua   kube-apiserver-fyloo45sdenffw2ugwaz3likua_c5ffa286-8a9a-45d4-91e7-61118ed58d2e   4m43s
apiserver-07a5ea9b9b072c4a5f3d1c3702        apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05       5m33s
apiserver-7be9e061c59d368b3ddaf1376e        apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f       4m23s
apiserver-1dfef752bcb36637d2763d1868        apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e       4m43s

```

The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. Each kube-apiserver should be
@ -60,24 +61,24 @@ will take over existing Leases using a new holder identity, as opposed to instan
hostname used by kube-apiserver by checking the value of the `kubernetes.io/hostname` label:

```shell
kubectl -n kube-system get lease kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a -o yaml
kubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml
```
```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  creationTimestamp: "2022-11-30T15:37:15Z"
  creationTimestamp: "2023-07-02T13:16:48Z"
  labels:
    k8s.io/component: kube-apiserver
    kubernetes.io/hostname: kind-control-plane
  name: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a
    apiserver.kubernetes.io/identity: kube-apiserver
    kubernetes.io/hostname: master-1
  name: apiserver-07a5ea9b9b072c4a5f3d1c3702
  namespace: kube-system
  resourceVersion: "18171"
  uid: d6c68901-4ec5-4385-b1ef-2d783738da6c
  resourceVersion: "334899"
  uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1
spec:
  holderIdentity: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4
  holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05
  leaseDurationSeconds: 3600
  renewTime: "2022-11-30T18:04:27.912073Z"
  renewTime: "2023-07-04T21:58:48.065888Z"
```

Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour.

@ -475,7 +475,7 @@ Message: Pod was terminated in response to imminent node shutdown.

### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown}

{{< feature-state state="alpha" for_k8s_version="v1.23" >}}
{{< feature-state state="beta" for_k8s_version="v1.24" >}}

To provide more flexibility during graceful node shutdown around the ordering
of pods during shutdown, graceful node shutdown honors the PriorityClass for

@ -90,10 +90,8 @@ installation instructions. The list does not try to be exhaustive.

* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard)
  is a dashboard web interface for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s)
  is a tool for graphically visualizing your containers, pods, services etc.
  Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/)
  or host the UI yourself.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a
  tool for visualizing your containers, Pods, Services and more.

## Infrastructure

@ -470,7 +470,7 @@ traffic, you can configure rules to block any health check requests
that originate from outside your cluster.
{{< /caution >}}

{{< codenew file="priority-and-fairness/health-for-strangers.yaml" >}}
{{% code file="priority-and-fairness/health-for-strangers.yaml" %}}

## Diagnostics

@ -39,7 +39,7 @@ Kubernetes captures logs from each container in a running Pod.
This example uses a manifest for a `Pod` with a container
that writes text to the standard output stream, once per second.

{{< codenew file="debug/counter-pod.yaml" >}}
{{% code file="debug/counter-pod.yaml" %}}

To run this pod, use the following command:

@ -71,11 +71,12 @@ You can use `kubectl logs --previous` to retrieve logs from a previous instantia
If your pod has multiple containers, specify which container's logs you want to access by
appending a container name to the command, with a `-c` flag, like so:

```console
```shell
kubectl logs counter -c count
```

See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs)
for more details.

### How nodes handle container logs

@ -98,23 +99,23 @@ The usual way to access this is by running `kubectl logs`.

You can configure the kubelet to rotate logs automatically.

If you configure rotation, the kubelet is responsible for rotating container logs and managing the logging directory structure.
If you configure rotation, the kubelet is responsible for rotating container logs and managing the
logging directory structure.
The kubelet sends this information to the container runtime (using CRI),
and the runtime writes the container logs to the given location.

You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
`containerLogMaxSize` and `containerLogMaxFiles`,
using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
These settings let you configure the maximum size for each log file and the maximum number of
files allowed for each container respectively.

When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file. The kubelet returns the content of the log file.

{{< note >}}
Only the contents of the latest log file are available through
`kubectl logs`.
Only the contents of the latest log file are available through `kubectl logs`.

For example, if a Pod writes 40 MiB of logs and the kubelet rotates logs
after 10 MiB, running `kubectl logs` returns at most 10MiB of data.
@ -149,9 +150,8 @@ If systemd is not present, the kubelet and container runtime write to `.log` fil
run the kubelet via a helper tool, `kube-log-runner`, and use that tool to redirect
kubelet logs to a directory that you choose.

You can also set a logging directory using the deprecated kubelet command line
argument `--log-dir`. However, the kubelet always directs your container runtime to
write logs into directories within `/var/log/pods`.
The kubelet always directs your container runtime to write logs into directories within
`/var/log/pods`.

For more information on `kube-log-runner`, read [System Logs](/docs/concepts/cluster-administration/system-logs/#klog).
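As a rough sketch of that redirection (the flag and paths are assumptions based on the linked System Logs page, not part of this hunk):

```shell
# Sketch: run the kubelet through kube-log-runner so the kubelet's own log output goes to a chosen file
kube-log-runner -log-file=/var/log/kubelet.log kubelet <kubelet-flags>
```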
@ -221,7 +221,8 @@ application containers on that node.
Because the logging agent must run on every node, it is recommended to run the agent
as a `DaemonSet`.

Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
Node-level logging creates only one agent per node and doesn't require any changes to the
applications running on the node.

Containers write to stdout and stderr, but with no agreed format. A node-level agent collects
these logs and forwards them for aggregation.

@ -231,7 +232,8 @@ these logs and forwards them for aggregation.
You can use a sidecar container in one of the following ways:

* The sidecar container streams application logs to its own `stdout`.
* The sidecar container runs a logging agent, which is configured to pick up logs from an application container.
* The sidecar container runs a logging agent, which is configured to pick up logs
  from an application container.

#### Streaming sidecar container

@ -253,7 +255,7 @@ For example, a pod runs a single container, and the container
writes to two different log files using two different formats. Here's a
manifest for the Pod:

{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
{{% code file="admin/logging/two-files-counter-pod.yaml" %}}

It is not recommended to write log entries with different formats to the same log
stream, even if you managed to redirect both components to the `stdout` stream of

@ -263,7 +265,7 @@ the logs to its own `stdout` stream.

Here's a manifest for a pod that has two sidecar containers:

{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
{{% code file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}

Now when you run this pod, you can access each log stream separately by
running the following commands:
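The commands themselves fall outside this hunk; as a sketch, assuming the pod and sidecar container names used by the referenced manifest (`counter`, `count-log-1`, `count-log-2`):

```shell
# Read each sidecar's stream separately (names assumed from the referenced manifest)
kubectl logs counter count-log-1
kubectl logs counter count-log-2
```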
@ -330,7 +332,7 @@ Here are two example manifests that you can use to implement a sidecar container
The first manifest contains a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/)
to configure fluentd.

{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
{{% code file="admin/logging/fluentd-sidecar-config.yaml" %}}

{{< note >}}
In the sample configurations, you can replace fluentd with any logging agent, reading

@ -340,16 +342,19 @@ from any source inside an application container.
The second manifest describes a pod that has a sidecar container running fluentd.
The pod mounts a volume where fluentd can pick up its configuration data.

{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
{{% code file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}

### Exposing logs directly from the application



Cluster-logging that exposes or pushes logs directly from every application is outside the scope of Kubernetes.
Cluster-logging that exposes or pushes logs directly from every application is outside the scope
of Kubernetes.

## {{% heading "whatsnext" %}}

* Read about [Kubernetes system logs](/docs/concepts/cluster-administration/system-logs/)
* Learn about [Traces For Kubernetes System Components](/docs/concepts/cluster-administration/system-traces/)
* Learn how to [customise the termination message](/docs/tasks/debug/debug-application/determine-reason-pod-failure/#customizing-the-termination-message) that Kubernetes records when a Pod fails
* Learn how to [customise the termination message](/docs/tasks/debug/debug-application/determine-reason-pod-failure/#customizing-the-termination-message)
  that Kubernetes records when a Pod fails

@ -22,7 +22,7 @@ Many applications require multiple resources to be created, such as a Deployment
Management of multiple resources can be simplified by grouping them together in the same file
(separated by `---` in YAML). For example:

{{< codenew file="application/nginx-app.yaml" >}}
{{% code file="application/nginx-app.yaml" %}}

Multiple resources can be created the same way as a single resource:
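The command itself is outside this hunk; as a sketch that follows the `k8s.io/examples` URL pattern used elsewhere in this diff (the exact URL is an assumption):

```shell
# Apply every resource in the multi-document manifest at once (URL assumed from the example file name)
kubectl apply -f https://k8s.io/examples/application/nginx-app.yaml
```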
@ -22,12 +22,10 @@ scheduler decisions).
klog is the Kubernetes logging library. [klog](https://github.com/kubernetes/klog)
generates log messages for the Kubernetes system components.

For more information about klog configuration, see the [Command line tool reference](/docs/reference/command-line-tools-reference/).

Kubernetes is in the process of simplifying logging in its components.
The following klog command line flags
[are deprecated](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
starting with Kubernetes 1.23 and will be removed in a future release:
starting with Kubernetes v1.23 and removed in Kubernetes v1.26:

- `--add-dir-header`
- `--alsologtostderr`

@ -96,13 +94,13 @@ klog output or structured logging.
The default formatting of structured log messages is as text, with a format that is backward
compatible with traditional klog:

```ini
```
<klog header> "<message>" <key1>="<value1>" <key2>="<value2>" ...
```

Example:

```ini
```
I1025 00:15:15.525108    1 controller_utils.go:116] "Pod status updated" pod="kube-system/kubedns" status="ready"
```

@ -245,6 +243,7 @@ in the application log provider. On both operating systems, logs are also availa

Provided you are authorized to interact with node objects, you can try out this alpha feature on all your nodes or
just a subset. Here is an example to retrieve the kubelet service logs from a node:

```shell
# Fetch kubelet logs from a node named node-1.example
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
@ -252,6 +251,7 @@ kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"

You can also fetch files, provided that the files are in a directory that the kubelet allows for log
fetches. For example, you can fetch a log from `/var/log` on a Linux node:

```shell
kubectl get --raw "/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>"
```
@ -273,6 +273,7 @@ Option | Description
`tailLines` | specify how many lines from the end of the log to retrieve; the default is to fetch the whole log

Example of a more complex query:

```shell
# Fetch kubelet logs from a node named node-1.example that have the word "error"
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error"
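# A similar query can combine the parameters from the table above; for example,
# limiting the same search to the last 100 lines (sketch, same node name as above):
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error&tailLines=100"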
@ -111,7 +111,7 @@ technique also lets you access a ConfigMap in a different namespace.

Here's an example Pod that uses values from `game-demo` to configure a Pod:

{{< codenew file="configmap/configure-pod.yaml" >}}
{{% code file="configmap/configure-pod.yaml" %}}

A ConfigMap doesn't differentiate between single line property values and
multi-line file-like values.

@ -216,8 +216,6 @@ data has the following advantages:
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
  closing watches for ConfigMaps marked as immutable.

This feature is controlled by the `ImmutableEphemeralVolumes`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
You can create an immutable ConfigMap by setting the `immutable` field to `true`.
For example:

@ -37,6 +37,23 @@ to others, please don't hesitate to file an issue or submit a PR.

- Put object descriptions in annotations, to allow better introspection.

{{< note >}}
There is a breaking change introduced in the [YAML 1.2](https://yaml.org/spec/1.2.0/#id2602744)
boolean values specification with respect to [YAML 1.1](https://yaml.org/spec/1.1/#id864510).
This is a known [issue](https://github.com/kubernetes/kubernetes/issues/34146) in Kubernetes.
YAML 1.2 only recognizes **true** and **false** as valid booleans, while YAML 1.1 also accepts
**yes**, **no**, **on**, and **off** as booleans. However, Kubernetes uses YAML
[parsers](https://github.com/kubernetes/kubernetes/issues/34146#issuecomment-252692024) that are
mostly compatible with YAML 1.1, which means that using **yes** or **no** instead of **true** or
**false** in a YAML manifest may cause unexpected errors or behaviors. To avoid this issue, it is
recommended to always use **true** or **false** for boolean values in YAML manifests, and to quote
any strings that may be confused with booleans, such as **"yes"** or **"no"**.

Besides booleans, there are additional specification changes between YAML versions. Please refer
to the [YAML Specification Changes](https://spec.yaml.io/main/spec/1.2.2/ext/changes) documentation
for a comprehensive list.
{{< /note >}}

## "Naked" Pods versus ReplicaSets, Deployments, and Jobs {#naked-pods-vs-replicasets-deployments-and-jobs}

- Don't use naked Pods (that is, Pods not bound to a [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) or

@ -135,4 +152,3 @@ to others, please don't hesitate to file an issue or submit a PR.
Deployments and Services.
See [Use a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
for an example.

File diff suppressed because it is too large

@ -367,6 +367,26 @@ DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
{{< glossary_tooltip term_id="volume" >}} in the device monitoring agent's
[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).

{{< note >}}

When accessing `/var/lib/kubelet/pod-resources/kubelet.sock` from a DaemonSet
or any other app deployed as a container on the host that mounts the socket as
a volume, it is a good practice to mount the directory `/var/lib/kubelet/pod-resources/`
instead of `/var/lib/kubelet/pod-resources/kubelet.sock`. This ensures
that after a kubelet restart, the container will be able to re-connect to this socket.

Container mounts are managed by an inode reference to the socket or directory,
depending on what was mounted. When the kubelet restarts, the socket is deleted
and a new socket is created, while the directory stays untouched.
So the original inode for the socket becomes unusable, while the inode for the
directory keeps working.

{{< /note >}}

Support for the `PodResourcesLister` service requires the `KubeletPodResources`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.

### `Get` gRPC endpoint {#grpc-endpoint-get}

{{< feature-state state="alpha" for_k8s_version="v1.27" >}}

@ -48,7 +48,7 @@ for an example control plane setup that runs across multiple machines.

{{< glossary_definition term_id="kube-controller-manager" length="all" >}}

Some types of these controllers are:
There are many different types of controllers. Some examples of them are:

* Node controller: Responsible for noticing and responding when nodes go down.
* Job controller: Watches for Job objects that represent one-off tasks, then creates

@ -56,6 +56,8 @@ Some types of these controllers are:
* EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
* ServiceAccount controller: Creates default ServiceAccounts for new namespaces.

The above is not an exhaustive list.

### cloud-controller-manager

{{< glossary_definition term_id="cloud-controller-manager" length="short" >}}

@ -138,4 +140,4 @@ Learn more about the following:
* Etcd's official [documentation](https://etcd.io/docs/).
* Several [container runtimes](/docs/setup/production-environment/container-runtimes/) in Kubernetes.
* Integrating with cloud providers using [cloud-controller-manager](/docs/concepts/architecture/cloud-controller/).
* [kubectl](/docs/reference/generated/kubectl/kubectl-commands) commands.
* [kubectl](/docs/reference/generated/kubectl/kubectl-commands) commands.

@ -77,7 +77,7 @@ request.

Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:

{{< codenew file="application/deployment.yaml" >}}
{{% code file="application/deployment.yaml" %}}

One way to create a Deployment using a `.yaml` file like the one above is to use the
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command

@ -120,6 +120,11 @@ satisfy the StatefulSet specification.
Different kinds of object can also have different `.status`; again, the API reference pages
detail the structure of that `.status` field, and its content for each different type of object.

{{< note >}}
See [Configuration Best Practices](/docs/concepts/configuration/overview/) for additional
information on writing YAML configuration files.
{{< /note >}}

## Server side field validation

Starting with Kubernetes v1.25, the API server offers server side

@ -54,12 +54,12 @@ A `LimitRange` does **not** check the consistency of the default values it appli

For example, you define a `LimitRange` with this manifest:

{{< codenew file="concepts/policy/limit-range/problematic-limit-range.yaml" >}}
{{% code file="concepts/policy/limit-range/problematic-limit-range.yaml" %}}

along with a Pod that declares a CPU resource request of `700m`, but not a limit:

{{< codenew file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" >}}
{{% code file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}}

then that Pod will not be scheduled, failing with an error similar to:
@ -69,7 +69,7 @@ Pod "example-conflict-with-limitrange-cpu" is invalid: spec.containers[0].resour

If you set both `request` and `limit`, then that new Pod will be scheduled successfully even with the same `LimitRange` in place:

{{< codenew file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" >}}
{{% code file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}}

## Example resource constraints

@ -687,7 +687,7 @@ plugins:

Then, create a resource quota object in the `kube-system` namespace:

{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
{{% code file="policy/priority-class-resourcequota.yaml" %}}

```shell
kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
@ -30,6 +30,7 @@ of terminating one or more Pods on Nodes.
* [Scheduler Performance Tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* [Resource Bin Packing for Extended Resources](/docs/concepts/scheduling-eviction/resource-bin-packing/)
* [Pod Scheduling Readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)
* [Descheduler](https://github.com/kubernetes-sigs/descheduler#descheduler-for-kubernetes)

## Pod Disruption

@ -36,8 +36,7 @@ specific Pods:

Like many other Kubernetes objects, nodes have
[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
Kubernetes also populates a standard set of labels on all nodes in a cluster. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/)
for a list of common node labels.
Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/) on all nodes in a cluster.

{{<note>}}
The value of these labels is cloud provider specific and is not guaranteed to be reliable.

@ -122,7 +121,7 @@ your Pod spec.

For example, consider the following Pod spec:

{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
{{% code file="pods/pod-with-node-affinity.yaml" %}}

In this example, the following rules apply:

@ -172,7 +171,7 @@ scheduling decision for the Pod.
|
|||
|
||||
For example, consider the following Pod spec:
|
||||
|
||||
{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}
|
||||
{{% code file="pods/pod-with-affinity-anti-affinity.yaml" %}}
|
||||
|
||||
If there are two possible nodes that match the
|
||||
`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the
|
||||
|
@ -288,7 +287,7 @@ spec.
|
|||
|
||||
Consider the following Pod spec:
|
||||
|
||||
{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
|
||||
{{% code file="pods/pod-with-pod-affinity.yaml" %}}
|
||||
|
||||
This example defines one Pod affinity rule and one Pod anti-affinity rule. The
|
||||
Pod affinity rule uses the "hard"
|
||||
|
@ -513,8 +512,8 @@ The following operators can only be used with `nodeAffinity`.
|
|||
|
||||
| Operator | Behaviour |
|
||||
| :------------: | :-------------: |
|
||||
| `Gt` | The supplied value will be parsed as an integer, and that integer is less than or equal to the integer that results from parsing the value of a label named by this selector |
|
||||
| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than or equal to the integer that results from parsing the value of a label named by this selector |
|
||||
| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
|
||||
| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
|
||||
|
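Because `pod-with-node-affinity.yaml` is only referenced above and not shown in this diff, here is a minimal node affinity sketch (label keys and values are assumptions) that also demonstrates the `Gt` operator from the table:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity-sketch
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
          - key: example.com/cpu-count   # hypothetical node label
            operator: Gt                 # label value is parsed as an integer and must be greater than 3
            values:
            - "3"
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:3.8
```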

{{<note>}}
@ -31,7 +31,7 @@ each schedulingGate can be removed in arbitrary order, but addition of a new sch

To mark a Pod not-ready for scheduling, you can create it with one or more scheduling gates like this:

{{< codenew file="pods/pod-with-scheduling-gates.yaml" >}}
{{% code file="pods/pod-with-scheduling-gates.yaml" %}}
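The referenced manifest is not reproduced in this diff; a minimal sketch of a gated Pod (the gate name is illustrative, not necessarily the one in `pod-with-scheduling-gates.yaml`) could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  schedulingGates:
  - name: example.com/foo    # hypothetical gate; the Pod stays unschedulable until this entry is removed
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```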

After the Pod's creation, you can check its state using:

@ -61,7 +61,7 @@ The output is:
To inform the scheduler that this Pod is ready for scheduling, you can remove its `schedulingGates` entirely
by re-applying a modified manifest:

{{< codenew file="pods/pod-without-scheduling-gates.yaml" >}}
{{% code file="pods/pod-without-scheduling-gates.yaml" %}}

You can check if the `schedulingGates` is cleared by running:
@ -64,7 +64,7 @@ tolerations:

Here's an example of a pod that uses tolerations:

{{< codenew file="pods/pod-with-toleration.yaml" >}}
{{% code file="pods/pod-with-toleration.yaml" %}}

The default value for `operator` is `Equal`.
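A minimal sketch of such a toleration (the key and effect are illustrative, not the exact contents of `pod-with-toleration.yaml`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-toleration-sketch
spec:
  containers:
  - name: nginx
    image: nginx:1.25
  tolerations:
  - key: "example-key"
    operator: "Exists"    # matches a taint with this key regardless of its value
    effect: "NoSchedule"
```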
@ -284,7 +284,7 @@ graph BT
If you want an incoming Pod to be evenly spread with existing Pods across zones, you
can use a manifest similar to:

{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
{{% code file="pods/topology-spread-constraints/one-constraint.yaml" %}}
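The referenced file is not shown in this diff; a rough sketch of a single zone-spreading constraint (labels are assumptions): spread Pods carrying `foo: bar` evenly across nodes that have a `zone` label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: zone               # assumes nodes are labelled zone=<some value>
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.8
```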

From that manifest, `topologyKey: zone` implies the even distribution will only be applied
to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label

@ -377,7 +377,7 @@ graph BT
You can combine two topology spread constraints to control the spread of Pods both
by node and by zone:

{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
{{% code file="pods/topology-spread-constraints/two-constraints.yaml" %}}

In this case, to match the first constraint, the incoming Pod can only be placed onto
nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be

@ -466,7 +466,7 @@ and you know that zone `C` must be excluded. In this case, you can compose a man
as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`.
Similarly, Kubernetes also respects `spec.nodeSelector`.

{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
{{% code file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}

## Implicit conventions
@ -58,7 +58,7 @@ IaaS Provider | Link |
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
Amazon Web Services | https://aws.amazon.com/security |
Google Cloud Platform | https://cloud.google.com/security |
Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety |
Huawei Cloud | https://www.huaweicloud.com/intl/en-us/securecenter/overallsafety |
IBM Cloud | https://www.ibm.com/cloud/security |
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
Oracle Cloud Infrastructure | https://www.oracle.com/security |
@ -9,10 +9,10 @@ weight: 60

<!-- overview -->

Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} is a key security control
to ensure that cluster users and workloads have only the access to resources required to
execute their roles. It is important to ensure that, when designing permissions for cluster
users, the cluster administrator understands the areas where privilege escalation could occur,
to reduce the risk of excessive access leading to security incidents.

The good practices laid out here should be read in conjunction with the general
@ -24,46 +24,46 @@ The good practices laid out here should be read in conjunction with the general

### Least privilege

Ideally, minimal RBAC rights should be assigned to users and service accounts. Only permissions
explicitly required for their operation should be used. While each cluster will be different,
some general rules that can be applied are:

- Assign permissions at the namespace level where possible. Use RoleBindings as opposed to
  ClusterRoleBindings to give users rights only within a specific namespace (see the sketch
  after this list).
- Avoid providing wildcard permissions when possible, especially to all resources.
  As Kubernetes is an extensible system, providing wildcard access gives rights
  not just to all object types that currently exist in the cluster, but also to all object types
  which are created in the future.
- Administrators should not use `cluster-admin` accounts except where specifically needed.
  Providing a low privileged account with
  [impersonation rights](/docs/reference/access-authn-authz/authentication/#user-impersonation)
  can avoid accidental modification of cluster resources.
- Avoid adding users to the `system:masters` group. Any user who is a member of this group
  bypasses all RBAC rights checks and will always have unrestricted superuser access, which cannot be
  revoked by removing RoleBindings or ClusterRoleBindings. As an aside, if a cluster is
  using an authorization webhook, membership of this group also bypasses that webhook (requests
  from users who are members of that group are never sent to the webhook)
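A minimal sketch of the namespace-scoped approach (all names are hypothetical): a Role granting read access to Pods in one namespace, bound with a RoleBinding rather than a ClusterRoleBinding:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team        # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: read-pods
subjects:
- kind: User
  name: jane                 # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```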
### Minimize distribution of privileged tokens

Ideally, pods shouldn't be assigned service accounts that have been granted powerful permissions
(for example, any of the rights listed under [privilege escalation risks](#privilege-escalation-risks)).
In cases where a workload requires powerful permissions, consider the following practices:

- Limit the number of nodes running powerful pods. Ensure that any DaemonSets you run
  are necessary and are run with least privilege to limit the blast radius of container escapes.
- Avoid running powerful pods alongside untrusted or publicly-exposed ones. Consider using
  [Taints and Toleration](/docs/concepts/scheduling-eviction/taint-and-toleration/),
  [NodeAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or
  [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
  to ensure pods don't run alongside untrusted or less-trusted Pods. Pay special attention to
  situations where less-trustworthy Pods are not meeting the **Restricted** Pod Security Standard.

### Hardening

Kubernetes defaults to providing access which may not be required in every cluster. Reviewing
the RBAC rights provided by default can provide opportunities for security hardening.
In general, changes should not be made to rights provided to `system:` accounts; some options
to harden cluster rights exist:

- Review bindings for the `system:unauthenticated` group and remove them where possible, as this gives
@ -76,7 +76,7 @@ to harden cluster rights exist:

### Periodic review

It is vital to periodically review the Kubernetes RBAC settings for redundant entries and
possible privilege escalations.
If an attacker is able to create a user account with the same name as a deleted user,
they can automatically inherit all the rights of the deleted user, especially the

@ -87,7 +87,7 @@ rights assigned to that user.
Within Kubernetes RBAC there are a number of privileges which, if granted, can allow a user or a service account
to escalate their privileges in the cluster or affect systems outside the cluster.

This section is intended to provide visibility of the areas where cluster operators
should take care, to ensure that they do not inadvertently allow for more access to clusters than intended.

### Listing secrets
@ -125,7 +125,7 @@ If someone - or some application - is allowed to create arbitrary PersistentVolu
includes the creation of `hostPath` volumes, which then means that a Pod would get access
to the underlying host filesystem(s) on the associated node. Granting that ability is a security risk.

There are many ways a container with unrestricted access to the host filesystem can escalate privileges, including
reading data from other containers, and abusing the credentials of system services, such as Kubelet.

You should only allow access to create PersistentVolume objects for:

@ -135,56 +135,56 @@ You should only allow access to create PersistentVolume objects for:
that are configured for automatic provisioning.
This is usually set up by the Kubernetes provider or by the operator when installing a CSI driver.

Where access to persistent storage is required, trusted administrators should create
PersistentVolumes, and constrained users should use PersistentVolumeClaims to access that storage.

### Access to `proxy` subresource of Nodes

Users with access to the proxy sub-resource of node objects have rights to the Kubelet API,
which allows for command execution on every pod on the node(s) to which they have rights.
This access bypasses audit logging and admission control, so care should be taken before
granting rights to this resource.

### Escalate verb

Generally, the RBAC system prevents users from creating clusterroles with more rights than the user possesses.
The exception to this is the `escalate` verb. As noted in the [RBAC documentation](/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update),
users with this right can effectively escalate their privileges.

### Bind verb

Similar to the `escalate` verb, granting users this right allows for the bypass of Kubernetes
in-built protections against privilege escalation, allowing users to create bindings to
roles with rights they do not already have.

### Impersonate verb

This verb allows users to impersonate and gain the rights of other users in the cluster.
Care should be taken when granting it, to ensure that excessive permissions cannot be gained
via one of the impersonated accounts.

### CSRs and certificate issuing

The CSR API allows for users with `create` rights to CSRs and `update` rights on `certificatesigningrequests/approval`
where the signer is `kubernetes.io/kube-apiserver-client` to create new client certificates
which allow users to authenticate to the cluster. Those client certificates can have arbitrary
names including duplicates of Kubernetes system components. This will effectively allow for privilege escalation.

### Token request

Users with `create` rights on `serviceaccounts/token` can create TokenRequests to issue
tokens for existing service accounts.

### Control admission webhooks

Users with control over `validatingwebhookconfigurations` or `mutatingwebhookconfigurations`
can control webhooks that can read any object admitted to the cluster, and in the case of
mutating webhooks, also mutate admitted objects.

## Kubernetes RBAC - denial of service risks {#denial-of-service-risks}

### Object creation denial-of-service {#object-creation-dos}

Users who have rights to create objects in a cluster may be able to create sufficiently large
objects to create a denial of service condition either based on the size or number of objects, as discussed in
[etcd used by Kubernetes is vulnerable to OOM attack](https://github.com/kubernetes/kubernetes/issues/107325). This may be
@ -97,6 +97,7 @@ For restricted LoadBalancer and ExternalIPs use, see
[CVE-2020-8554: Man in the middle using LoadBalancer or ExternalIPs](https://github.com/kubernetes/kubernetes/issues/97076)
and the [DenyServiceExternalIPs admission controller](/docs/reference/access-authn-authz/admission-controllers/#denyserviceexternalips)
for further information.

## Pod security

- [ ] RBAC rights to `create`, `update`, `patch`, `delete` workloads are only granted if necessary.
@ -153,23 +154,20 @@ Memory limit superior to request can expose the whole node to OOM issues.

### Enabling Seccomp

Seccomp can improve the security of your workloads by reducing the Linux kernel
syscall attack surface available inside containers. The seccomp filter mode
leverages BPF to create an allow or deny list of specific syscalls, named
profiles. Those seccomp profiles can be enabled on individual workloads,
[a security tutorial is available](/docs/tutorials/security/seccomp/). In
addition, the [Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator)
is a project to facilitate the management and use of seccomp in clusters.
Seccomp stands for secure computing mode and has been a feature of the Linux kernel since version 2.6.12.
It can be used to sandbox the privileges of a process, restricting the calls it is able to make
from userspace into the kernel. Kubernetes lets you automatically apply seccomp profiles loaded onto
a node to your Pods and containers.

For historical context, please note that Docker has been using
[a default seccomp profile](https://docs.docker.com/engine/security/seccomp/)
to only allow a restricted set of syscalls since 2016 from
[Docker Engine 1.10](https://www.docker.com/blog/docker-engine-1-10-security/),
but Kubernetes is still not confining workloads by default. The default seccomp
profile can be found [in containerd](https://github.com/containerd/containerd/blob/main/contrib/seccomp/seccomp_default.go)
as well. Fortunately, [Seccomp Default](/blog/2021/08/25/seccomp-default/), a
new alpha feature to use a default seccomp profile for all workloads can now be
enabled and tested.
Seccomp can improve the security of your workloads by reducing the Linux kernel syscall attack
surface available inside containers. The seccomp filter mode leverages BPF to create an allow or
deny list of specific syscalls, named profiles.

Since Kubernetes 1.27, you can enable the use of `RuntimeDefault` as the default seccomp profile
for all workloads. A [security tutorial](/docs/tutorials/security/seccomp/) is available on this
topic. In addition, the
[Kubernetes Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator)
is a project that facilitates the management and use of seccomp in clusters.
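As a per-workload illustration only (the cluster-wide `RuntimeDefault` default mentioned above is configured on the kubelet instead), a minimal sketch of applying the runtime's default seccomp profile to a single Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-default-sketch
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # use the container runtime's default seccomp profile
  containers:
  - name: app
    image: nginx:1.25
```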

{{< note >}}
Seccomp is only available on Linux nodes.
@ -26,18 +26,18 @@ implementing identity-based security policies.
Service accounts exist as ServiceAccount objects in the API server. Service
accounts have the following properties:

* **Namespaced:** Each service account is bound to a Kubernetes
  {{<glossary_tooltip text="namespace" term_id="namespace">}}. Every namespace
  gets a [`default` ServiceAccount](#default-service-accounts) upon creation.

* **Lightweight:** Service accounts exist in the cluster and are
  defined in the Kubernetes API. You can quickly create service accounts to
  enable specific tasks.

* **Portable:** A configuration bundle for a complex containerized workload
  might include service account definitions for the system's components. The
  lightweight nature of service accounts and the namespaced identities make
  the configurations portable.

Service accounts are different from user accounts, which are authenticated
human users in the cluster. By default, user accounts don't exist in the Kubernetes
@ -78,10 +78,10 @@ the following scenarios:

* Your Pods need to communicate with the Kubernetes API server, for example in
  situations such as the following:
  * Providing read-only access to sensitive information stored in Secrets.
  * Granting [cross-namespace access](#cross-namespace), such as allowing a
    Pod in namespace `example` to read, list, and watch for Lease objects in
    the `kube-node-lease` namespace.
* Your Pods need to communicate with an external service. For example, a
  workload Pod requires an identity for a commercially available cloud API,
  and the commercial provider allows configuring a suitable trust relationship.
@ -92,7 +92,6 @@ the following scenarios:
ServiceAccount identity of different Pods to group those Pods into different
contexts.

## How to use service accounts {#how-to-use}

To use a Kubernetes service account, you do the following:

@ -101,7 +100,7 @@ To use a Kubernetes service account, you do the following:
   client like `kubectl` or a manifest that defines the object.
1. Grant permissions to the ServiceAccount object using an authorization
   mechanism such as
   [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).
   [RBAC](/docs/reference/access-authn-authz/rbac/).
1. Assign the ServiceAccount object to Pods during Pod creation.
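A minimal sketch of those steps (all names are hypothetical): create a ServiceAccount, then reference it from a Pod via `spec.serviceAccountName`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod
  namespace: default
spec:
  serviceAccountName: build-robot   # the Pod runs with this ServiceAccount's identity
  containers:
  - name: main
    image: registry.k8s.io/pause:3.8
```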

If you're using the identity from an external service,
|
|||
|
||||
By default, Kubernetes provides the Pod
|
||||
with the credentials for an assigned ServiceAccount, whether that is the
|
||||
`default` ServiceAccount or a custom ServiceAccount that you specify.
|
||||
`default` ServiceAccount or a custom ServiceAccount that you specify.
|
||||
|
||||
To prevent Kubernetes from automatically injecting
|
||||
credentials for a specified ServiceAccount or the `default` ServiceAccount, set the
|
||||
|
@ -210,11 +209,11 @@ acting as a ServiceAccount tries to communicate with the Kubernetes API server,
the client includes an `Authorization: Bearer <token>` header with the HTTP
request. The API server checks the validity of that bearer token as follows:

1. Check the token signature.
1. Check whether the token has expired.
1. Check whether object references in the token claims are currently valid.
1. Check whether the token is currently valid.
1. Check the audience claims.
1. Checks the token signature.
1. Checks whether the token has expired.
1. Checks whether object references in the token claims are currently valid.
1. Checks whether the token is currently valid.
1. Checks the audience claims.

The TokenRequest API produces _bound tokens_ for a ServiceAccount. This
binding is linked to the lifetime of the client, such as a Pod, that is acting
@ -257,15 +256,15 @@ used in your application and nowhere else.
  [Webhook Token Authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
  to validate bearer tokens using your own validation service.
* Provide your own identities to Pods.
  * [Use the SPIFFE CSI driver plugin to provide SPIFFE SVIDs as X.509 certificate pairs to Pods](https://cert-manager.io/docs/projects/csi-driver-spiffe/).
    {{% thirdparty-content single="true" %}}
  * [Use a service mesh such as Istio to provide certificates to Pods](https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/).
* Authenticate from outside the cluster to the API server without using service account tokens:
  * [Configure the API server to accept OpenID Connect (OIDC) tokens from your identity provider](/docs/reference/access-authn-authz/authentication/#openid-connect-tokens).
  * Use service accounts or user accounts created using an external Identity
    and Access Management (IAM) service, such as from a cloud provider, to
    authenticate to your cluster.
  * [Use the CertificateSigningRequest API with client certificates](/docs/tasks/tls/managing-tls-in-a-cluster/).
  * [Configure the kubelet to retrieve credentials from an image registry](/docs/tasks/administer-cluster/kubelet-credential-provider/).
* Use a Device Plugin to access a virtual Trusted Platform Module (TPM), which
  then allows authentication using a private key.
@ -300,7 +300,7 @@ Below are the properties a user can specify in the `dnsConfig` field:

The following is an example Pod with custom DNS settings:

{{< codenew file="service/networking/custom-dns.yaml" >}}
{{% code file="service/networking/custom-dns.yaml" %}}
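The referenced file is not reproduced in this diff; a rough sketch of such a Pod (addresses and search domains are illustrative, not the exact contents of `custom-dns.yaml`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: test
    image: nginx:1.25
  dnsPolicy: "None"          # ignore cluster DNS settings and use dnsConfig only
  dnsConfig:
    nameservers:
    - 192.0.2.1              # documentation-range address
    searches:
    - ns1.svc.cluster-domain.example
    options:
    - name: ndots
      value: "2"
```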

When the Pod above is created, the container `test` gets the following contents
in its `/etc/resolv.conf` file:
@ -109,9 +109,8 @@ families for dual-stack, you can choose the address families by setting an optio
`.spec.ipFamilies`, on the Service.

{{< note >}}
The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a
Service that already exists. If you want to change `.spec.ipFamilies`, delete and recreate the
Service.
The `.spec.ipFamilies` field is conditionally mutable: you can add or remove a secondary
IP address family, but you cannot change the primary IP address family of an existing Service.
{{< /note >}}

You can set `.spec.ipFamilies` to any of the following array values:
|
|||
[headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors
|
||||
will behave in this same way.)
|
||||
|
||||
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
|
||||
{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
|
||||
1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When
|
||||
you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6
|
||||
|
@ -152,14 +151,14 @@ These examples demonstrate the behavior of various dual-stack Service configurat
|
|||
* On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy`
|
||||
behaves the same as `PreferDualStack`.
|
||||
|
||||
{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}}
|
||||
{{% code file="service/networking/dual-stack-preferred-svc.yaml" %}}
|
||||
|
||||
1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well
|
||||
as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and
|
||||
IPv4 address in `.spec.ClusterIPs`, `.spec.ClusterIP` is set to the IPv6 address because that is
|
||||
the first element in the `.spec.ClusterIPs` array, overriding the default.
|
||||
|
||||
{{< codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" >}}
|
||||
{{% code file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}
|
||||
|
||||
#### Dual-stack defaults on existing Services
|
||||
|
||||
|
@ -172,7 +171,7 @@ dual-stack.)
|
|||
`.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP
|
||||
will be stored in `.spec.ClusterIPs`.
|
||||
|
||||
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
|
||||
{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
|
||||
You can validate this behavior by using kubectl to inspect an existing service.
|
||||
|
||||
|
@ -212,7 +211,7 @@ dual-stack.)
|
|||
`--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to
|
||||
`None`.
|
||||
|
||||
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
|
||||
{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
|
||||
You can validate this behavior by using kubectl to inspect an existing headless service with selectors.
|
||||
|
||||
|
|
|
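As a minimal sketch of the dual-stack fields discussed above (selector and port are illustrative, not the contents of the referenced example files), a Service that prefers dual-stack with IPv6 as the primary family could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6                     # first entry determines the family of .spec.clusterIP
  - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
```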
@ -56,6 +56,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* The [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
  works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) webserver (as a proxy).
* The [ngrok Kubernetes Ingress Controller](https://github.com/ngrok/kubernetes-ingress-controller) is an open source controller for adding secure public access to your K8s services using the [ngrok platform](https://ngrok.com).
* The [OCI Native Ingress Controller](https://github.com/oracle/oci-native-ingress-controller#readme) is an Ingress controller for Oracle Cloud Infrastructure which allows you to manage the [OCI Load Balancer](https://docs.oracle.com/en-us/iaas/Content/Balance/home.htm).
* The [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) is based on [Pomerium](https://pomerium.com/), which offers context-aware access policy.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
@ -73,7 +73,7 @@ Make sure you review your Ingress controller's documentation to understand the c

A minimal Ingress resource example:

{{< codenew file="service/networking/minimal-ingress.yaml" >}}
{{% code file="service/networking/minimal-ingress.yaml" %}}
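A rough sketch of what such a minimal Ingress might contain (class, host-less rule, and backend names are assumptions, not necessarily the exact contents of `minimal-ingress.yaml`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example   # hypothetical IngressClass
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test              # hypothetical backend Service
            port:
              number: 80
```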

An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
The name of an Ingress object must be a valid
@ -140,7 +140,7 @@ setting with Service, and will fail validation if both are specified. A common
usage for a `Resource` backend is to ingress data to an object storage backend
with static assets.

{{< codenew file="service/networking/ingress-resource-backend.yaml" >}}
{{% code file="service/networking/ingress-resource-backend.yaml" %}}

After creating the Ingress above, you can view it with the following command:

@ -229,7 +229,7 @@ equal to the suffix of the wildcard rule.
| `*.foo.com` | `baz.bar.foo.com` | No match, wildcard only covers a single DNS label |
| `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label |

{{< codenew file="service/networking/ingress-wildcard-host.yaml" >}}
{{% code file="service/networking/ingress-wildcard-host.yaml" %}}

## Ingress class

@ -238,7 +238,7 @@ configuration. Each Ingress should specify a class, a reference to an
IngressClass resource that contains additional configuration including the name
of the controller that should implement the class.

{{< codenew file="service/networking/external-lb.yaml" >}}
{{% code file="service/networking/external-lb.yaml" %}}

The `.spec.parameters` field of an IngressClass lets you reference another
resource that provides configuration related to that IngressClass.

@ -369,7 +369,7 @@ configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the
`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the
default `IngressClass`:

{{< codenew file="service/networking/default-ingressclass.yaml" >}}
{{% code file="service/networking/default-ingressclass.yaml" %}}

## Types of Ingress

@ -379,7 +379,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
(see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a
*default backend* with no rules.

{{< codenew file="service/networking/test-ingress.yaml" >}}
{{% code file="service/networking/test-ingress.yaml" %}}

If you create it using `kubectl apply -f` you should be able to view the state
of the Ingress you added:

@ -411,7 +411,7 @@ down to a minimum. For example, a setup like:

It would require an Ingress such as:

{{< codenew file="service/networking/simple-fanout-example.yaml" >}}
{{% code file="service/networking/simple-fanout-example.yaml" %}}

When you create the Ingress with `kubectl apply -f`:

@ -456,7 +456,7 @@ Name-based virtual hosts support routing HTTP traffic to multiple host names at
The following Ingress tells the backing load balancer to route requests based on
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).

{{< codenew file="service/networking/name-virtual-host-ingress.yaml" >}}
{{% code file="service/networking/name-virtual-host-ingress.yaml" %}}

If you create an Ingress resource without any hosts defined in the rules, then any
web traffic to the IP address of your Ingress controller can be matched without a name based

@ -467,7 +467,7 @@ requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`,
and any traffic whose request host header doesn't match `first.bar.com`
and `second.bar.com` to `service3`.

{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}
{{% code file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}}

### TLS

@ -505,7 +505,7 @@ certificates would have to be issued for all the possible sub-domains. Therefore
section.
{{< /note >}}

{{< codenew file="service/networking/tls-example-ingress.yaml" >}}
{{% code file="service/networking/tls-example-ingress.yaml" %}}

{{< note >}}
There is a gap between TLS features supported by various Ingress
@ -83,7 +83,7 @@ reference for a full definition of the resource.

An example NetworkPolicy might look like this:

{{< codenew file="service/networking/networkpolicy.yaml" >}}
{{% code file="service/networking/networkpolicy.yaml" %}}

{{< note >}}
POSTing this to the API server for your cluster will have no effect unless your chosen networking
@ -212,7 +212,7 @@ in that namespace.
You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy
that selects all pods but does not allow any ingress traffic to those pods.

{{< codenew file="service/networking/network-policy-default-deny-ingress.yaml" >}}
{{% code file="service/networking/network-policy-default-deny-ingress.yaml" %}}

This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated
for ingress. This policy does not affect isolation for egress from any pod.
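A minimal sketch of such a default-deny policy: an empty `podSelector` selects every Pod in the namespace, and listing `Ingress` with no rules denies all incoming traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```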
@ -222,7 +222,7 @@ for ingress. This policy does not affect isolation for egress from any pod.
If you want to allow all incoming connections to all pods in a namespace, you can create a policy
that explicitly allows that.

{{< codenew file="service/networking/network-policy-allow-all-ingress.yaml" >}}
{{% code file="service/networking/network-policy-allow-all-ingress.yaml" %}}

With this policy in place, no additional policy or policies can cause any incoming connection to
those pods to be denied. This policy has no effect on isolation for egress from any pod.

@ -232,7 +232,7 @@ those pods to be denied. This policy has no effect on isolation for egress from
You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy
that selects all pods but does not allow any egress traffic from those pods.

{{< codenew file="service/networking/network-policy-default-deny-egress.yaml" >}}
{{% code file="service/networking/network-policy-default-deny-egress.yaml" %}}

This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
egress traffic. This policy does not change the ingress isolation behavior of any pod.

@ -242,7 +242,7 @@ egress traffic. This policy does not change the ingress isolation behavior of an
If you want to allow all connections from all pods in a namespace, you can create a policy that
explicitly allows all outgoing connections from pods in that namespace.

{{< codenew file="service/networking/network-policy-allow-all-egress.yaml" >}}
{{% code file="service/networking/network-policy-allow-all-egress.yaml" %}}

With this policy in place, no additional policy or policies can cause any outgoing connection from
those pods to be denied. This policy has no effect on isolation for ingress to any pod.

@ -252,7 +252,7 @@ those pods to be denied. This policy has no effect on isolation for ingress to
You can create a "default" policy for a namespace which prevents all ingress AND egress traffic by
creating the following NetworkPolicy in that namespace.

{{< codenew file="service/networking/network-policy-default-deny-all.yaml" >}}
{{% code file="service/networking/network-policy-default-deny-all.yaml" %}}

This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed
ingress or egress traffic.

@ -280,7 +280,7 @@ When writing a NetworkPolicy, you can target a range of ports instead of a singl

This is achievable with the usage of the `endPort` field, as the following example:

{{< codenew file="service/networking/networkpolicy-multiport-egress.yaml" >}}
{{% code file="service/networking/networkpolicy-multiport-egress.yaml" %}}

The above rule allows any Pod with label `role=db` on the namespace `default` to communicate
with any IP within the range `10.0.0.0/24` over TCP, provided that the target

@ -340,12 +340,8 @@ namespaces based on their labels.

## Targeting a Namespace by its name

{{< feature-state for_k8s_version="1.22" state="stable" >}}

The Kubernetes control plane sets an immutable label `kubernetes.io/metadata.name` on all
namespaces, provided that the `NamespaceDefaultLabelName`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
The value of the label is the namespace name.
namespaces, the value of the label is the namespace name.

While NetworkPolicy cannot target a namespace by its name with some object field, you can use the
standardized label to target a specific namespace.
@ -1,192 +0,0 @@
---
reviewers:
- johnbelamaric
- imroc
title: Topology-aware traffic routing with topology keys
content_type: concept
weight: 150
---

<!-- overview -->

{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}

{{< note >}}
This feature, specifically the alpha `topologyKeys` API, is deprecated since
Kubernetes v1.21.
[Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/),
introduced in Kubernetes v1.21, provides similar functionality.
{{</ note >}}

_Service Topology_ enables a service to route traffic based upon the Node
topology of the cluster. For example, a service can specify that traffic be
preferentially routed to endpoints that are on the same Node as the client, or
in the same availability zone.

<!-- body -->

## Topology-aware traffic routing

By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to
any backend address for the Service. Kubernetes 1.7 made it possible to
route "external" traffic to the Pods running on the same Node that received the
traffic. For `ClusterIP` Services, the equivalent same-node preference for
routing wasn't possible; nor could you configure your cluster to favor routing
to endpoints within the same zone.
By setting `topologyKeys` on a Service, you're able to define a policy for routing
traffic based upon the Node labels for the originating and destination Nodes.

The label matching between the source and destination lets you, as a cluster
operator, designate sets of Nodes that are "closer" and "farther" from one another.
You can define labels to represent whatever metric makes sense for your own
requirements.
In public clouds, for example, you might prefer to keep network traffic within the
same zone, because interzonal traffic has a cost associated with it (and intrazonal
traffic typically does not). Other common needs include being able to route traffic
to a local Pod managed by a DaemonSet, or directing traffic to Nodes connected to the
same top-of-rack switch for the lowest latency.

## Using Service Topology

If your cluster has the `ServiceTopology` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled, you can control Service traffic
routing by specifying the `topologyKeys` field on the Service spec. This field
is a preference-order list of Node labels which will be used to sort endpoints
when accessing this Service. Traffic will be directed to a Node whose value for
the first label matches the originating Node's value for that label. If there is
no backend for the Service on a matching Node, then the second label will be
considered, and so forth, until no labels remain.

If no match is found, the traffic will be rejected, as if there were no
backends for the Service at all. That is, endpoints are chosen based on the first
topology key with available backends. If this field is specified and all entries
have no backends that match the topology of the client, the service has no
backends for that client and connections should fail. The special value `"*"` may
be used to mean "any topology". This catch-all value, if used, only makes sense
as the last value in the list.

If `topologyKeys` is not specified or empty, no topology constraints will be applied.

Consider a cluster with Nodes that are labeled with their hostname, zone name,
and region name. Then you can set the `topologyKeys` values of a service to direct
traffic as follows.

* Only to endpoints on the same node, failing if no endpoint exists on the node:
  `["kubernetes.io/hostname"]`.
* Preferentially to endpoints on the same node, falling back to endpoints in the
  same zone, followed by the same region, and failing otherwise: `["kubernetes.io/hostname",
  "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`.
  This may be useful, for example, in cases where data locality is critical.
* Preferentially to the same zone, but fallback on any available endpoint if
  none are available within this zone:
  `["topology.kubernetes.io/zone", "*"]`.

## Constraints

* Service topology is not compatible with `externalTrafficPolicy=Local`, and
  therefore a Service cannot use both of these features. It is possible to use
  both features in the same cluster on different Services, only not on the same
  Service.

* Valid topology keys are currently limited to `kubernetes.io/hostname`,
  `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but will
  be generalized to other node labels in the future.

* Topology keys must be valid label keys and at most 16 keys may be specified.

* The catch-all value, `"*"`, must be the last value in the topology keys, if
  it is used.

## Examples

The following are common examples of using the Service Topology feature.

### Only Node Local Endpoints

A Service that only routes to node local endpoints. If no endpoints exist on the node, traffic is dropped:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
```

### Prefer Node Local Endpoints

A Service that prefers node local Endpoints but falls back to cluster wide endpoints if node local endpoints do not exist:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "*"
```

### Only Zonal or Regional Endpoints

A Service that prefers zonal then regional endpoints. If no endpoints exist in either, traffic is dropped.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"
```

### Prefer Node Local, Zonal, then Regional Endpoints

A Service that prefers node local, zonal, then regional endpoints but falls back to cluster wide endpoints.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"
    - "*"
```

## {{% heading "whatsnext" %}}

* Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/)
* Read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
@ -117,7 +117,8 @@ spec:
      targetPort: 9376
```

Applying this manifest creates a new Service named "my-service", which
Applying this manifest creates a new Service named "my-service" with the default
ClusterIP [service type](#publishing-services-service-types). The Service
targets TCP port 9376 on any Pod with the `app.kubernetes.io/name: MyApp` label.

Kubernetes assigns this Service an IP address (the _cluster IP_),

@ -618,7 +619,7 @@ can define your own (provider specific) annotations on the Service that specify

#### Load balancers with mixed protocol types

{{< feature-state for_k8s_version="v1.24" state="beta" >}}
{{< feature-state for_k8s_version="v1.26" state="stable" >}}

By default, for LoadBalancer type of Services, when there is more than one port defined, all
ports must have the same protocol, and the protocol must be one which is supported
@ -154,10 +154,9 @@ zone.

## Constraints

* Topology Aware Hints are not used when either `externalTrafficPolicy` or
  `internalTrafficPolicy` is set to `Local` on a Service. It is possible to use
  both features in the same cluster on different Services, just not on the same
  Service.
* Topology Aware Hints are not used when `internalTrafficPolicy` is set to `Local`
  on a Service. It is possible to use both features in the same cluster on different
  Services, just not on the same Service.

* This approach will not work well for Services that have a large proportion of
  traffic originating from a subset of zones. Instead this assumes that incoming
@ -117,7 +117,7 @@ balancing behavior:
| Session affinity | Ensures that connections from a particular client are passed to the same Pod each time. | Windows Server 2022 | Set `service.spec.sessionAffinity` to "ClientIP" |
| Direct Server Return (DSR) | Load balancing mode where the IP address fixups and the LBNAT occurs at the container vSwitch port directly; service traffic arrives with the source IP set as the originating pod IP. | Windows Server 2019 | Set the following flags in kube-proxy: `--feature-gates="WinDSR=true" --enable-dsr=true` |
| Preserve-Destination | Skips DNAT of service traffic, thereby preserving the virtual IP of the target service in packets reaching the backend Pod. Also disables node-node forwarding. | Windows Server, version 1903 | Set `"preserve-destination": "true"` in service annotations and enable DSR in kube-proxy. |
| IPv4/IPv6 dual-stack networking | Native IPv4-to-IPv4 in parallel with IPv6-to-IPv6 communications to, from, and within a cluster | Windows Server 2019 | See [IPv4/IPv6 dual-stack](#ipv4ipv6-dual-stack) |
| IPv4/IPv6 dual-stack networking | Native IPv4-to-IPv4 in parallel with IPv6-to-IPv6 communications to, from, and within a cluster | Windows Server 2019 | See [IPv4/IPv6 dual-stack](/docs/concepts/services-networking/dual-stack/#windows-support) |
| Client IP preservation | Ensures that source IP of incoming ingress traffic gets preserved. Also disables node-node forwarding. | Windows Server 2019 | Set `service.spec.externalTrafficPolicy` to "Local" and enable DSR in kube-proxy |
{{< /table >}}
@ -49,7 +49,7 @@ different purposes:
- [CSI ephemeral volumes](#csi-ephemeral-volumes):
  similar to the previous volume kinds, but provided by special
  [CSI drivers](https://github.com/container-storage-interface/spec/blob/master/spec.md)
  which specifically [support this feature](https://kubernetes-csi.github.io/docs/drivers.html)
  which specifically [support this feature](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html)
- [generic ephemeral volumes](#generic-ephemeral-volumes), which
  can be provided by all storage drivers that also support persistent volumes
@ -248,11 +248,10 @@ same namespace, so that these conflicts can't occur.

### Security

Enabling the GenericEphemeralVolume feature allows users to create
PVCs indirectly if they can create Pods, even if they do not have
permission to create PVCs directly. Cluster administrators must be
aware of this. If this does not fit their security model, they should
use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
Using generic ephemeral volumes allows users to create PVCs indirectly
if they can create Pods, even if they do not have permission to create PVCs directly.
Cluster administrators must be aware of this. If this does not fit their security model,
they should use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
that rejects objects like Pods that have a generic ephemeral volume.

The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota)
@ -88,7 +88,7 @@ check [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserv
|
|||
|
||||
A user creates, or in the case of dynamic provisioning, has already created,
|
||||
a PersistentVolumeClaim with a specific amount of storage requested and with
|
||||
certain access modes. A control loop in the master watches for new PVCs, finds
|
||||
certain access modes. A control loop in the control plane watches for new PVCs, finds
|
||||
a matching PV (if possible), and binds them together. If a PV was dynamically
|
||||
provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise,
|
||||
the user will always get at least what they asked for, but the volume may be in
|
||||
|
@ -185,7 +185,7 @@ another claim because the previous claimant's data remains on the volume.
|
|||
An administrator can manually reclaim the volume with the following steps.
|
||||
|
||||
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
|
||||
(such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
|
||||
(such as an AWS EBS or GCE PD volume) still exists after the PV is deleted.
|
||||
1. Manually clean up the data on the associated storage asset accordingly.
|
||||
1. Manually delete the associated storage asset.
|
||||
|
||||
|
@ -196,8 +196,7 @@ the same storage asset definition.
|
|||
|
||||
For volume plugins that support the `Delete` reclaim policy, deletion removes
|
||||
both the PersistentVolume object from Kubernetes, as well as the associated
|
||||
storage asset in the external infrastructure, such as an AWS EBS, GCE PD,
|
||||
Azure Disk, or Cinder volume. Volumes that were dynamically provisioned
|
||||
storage asset in the external infrastructure, such as an AWS EBS or GCE PD volume. Volumes that were dynamically provisioned
|
||||
inherit the [reclaim policy of their StorageClass](#reclaim-policy), which
|
||||
defaults to `Delete`. The administrator should configure the StorageClass
|
||||
according to users' expectations; otherwise, the PV must be edited or
|
||||
|
@ -368,15 +367,12 @@ to `Retain`, including cases where you are reusing an existing PV.
Support for expanding PersistentVolumeClaims (PVCs) is enabled by default. You can expand
the following types of volumes:

* azureDisk
* azureFile
* awsElasticBlockStore
* cinder (deprecated)
* azureFile (deprecated)
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* flexVolume (deprecated)
* gcePersistentDisk
* gcePersistentDisk (deprecated)
* rbd (deprecated)
* portworxVolume
* portworxVolume (deprecated)

You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.
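A minimal StorageClass sketch that permits expansion could look like this; the class name and provisioner are placeholders, not recommendations:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable             # hypothetical name
provisioner: csi.example.com   # placeholder CSI driver
# PVCs that reference this class can later request a larger size
allowVolumeExpansion: true
```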
|
||||
|
||||
|
@ -518,14 +514,8 @@ PersistentVolume types are implemented as plugins. Kubernetes currently supports
|
|||
The following types of PersistentVolume are deprecated.
|
||||
This means that support is still available but will be removed in a future Kubernetes release.
|
||||
|
||||
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
|
||||
(**deprecated** in v1.17)
|
||||
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
(**deprecated** in v1.19)
|
||||
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) - Azure File
|
||||
(**deprecated** in v1.21)
|
||||
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
|
||||
(**deprecated** in v1.18)
|
||||
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
|
||||
(**deprecated** in v1.23)
|
||||
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
|
||||
|
@ -541,6 +531,12 @@ This means that support is still available but will be removed in a future Kuber
|
|||
|
||||
Older versions of Kubernetes also supported the following in-tree PersistentVolume types:
|
||||
|
||||
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
|
||||
(**not available** in v1.27)
|
||||
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
(**not available** in v1.27)
|
||||
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
|
||||
(**not available** in v1.26)
|
||||
* `photonPersistentDisk` - Photon controller persistent disk.
|
||||
(**not available** starting v1.15)
|
||||
* [`scaleIO`](/docs/concepts/storage/volumes/#scaleio) - ScaleIO volume
|
||||
|
@ -672,11 +668,8 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
|
|||
|
||||
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
|
||||
| :--- | :---: | :---: | :---: | - |
|
||||
| AWSElasticBlockStore | ✓ | - | - | - |
|
||||
| AzureFile | ✓ | ✓ | ✓ | - |
|
||||
| AzureDisk | ✓ | - | - | - |
|
||||
| CephFS | ✓ | ✓ | ✓ | - |
|
||||
| Cinder | ✓ | - | ([if multi-attach volumes are available](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/features.md#multi-attach-volumes)) | - |
|
||||
| CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
|
||||
| FC | ✓ | ✓ | - | - |
|
||||
| FlexVolume | ✓ | ✓ | depends on the driver | - |
|
||||
|
@ -708,11 +701,9 @@ Current reclaim policies are:

* Retain -- manual reclamation
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
* Delete -- associated storage asset such as AWS EBS, GCE PD, Azure Disk,
  or OpenStack Cinder volume is deleted
* Delete -- associated storage asset such as AWS EBS or GCE PD volume is deleted

Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk,
and Cinder volumes support deletion.
Currently, only NFS and HostPath support recycling. AWS EBS and GCE PD volumes support deletion.
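For reference, the reclaim policy is set on the PersistentVolume itself. The following is a minimal NFS-backed sketch; the server and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-manual-reclaim   # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  # One of Retain, Recycle or Delete, as listed above
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com   # placeholder server
    path: /exports/data       # placeholder export path
```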
|
||||
|
||||
### Mount Options
|
||||
|
||||
|
@ -725,15 +716,13 @@ Not all Persistent Volume types support mount options.

The following volume types support mount options:

* `awsElasticBlockStore`
* `azureDisk`
* `azureFile`
* `cephfs` ( **deprecated** in v1.28)
* `cephfs` (**deprecated** in v1.28)
* `cinder` (**deprecated** in v1.18)
* `gcePersistentDisk`
* `gcePersistentDisk` (**deprecated** in v1.28)
* `iscsi`
* `nfs`
* `rbd` ( **deprecated** in v1.28)
* `rbd` (**deprecated** in v1.28)
* `vsphereVolume`

Mount options are not validated. If a mount option is invalid, the mount fails.
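Mount options are listed in the PersistentVolume spec, for example in this NFS sketch (the options shown are common NFS client settings; the server and path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-with-mount-options   # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    server: nfs.example.com   # placeholder
    path: /exports/data       # placeholder
```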
|
||||
|
@ -746,10 +735,8 @@ it will become fully deprecated in a future Kubernetes release.
|
|||
|
||||
{{< note >}}
|
||||
For most volume types, you do not need to set this field. It is automatically
|
||||
populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore),
|
||||
[GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and
|
||||
[Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You
|
||||
need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
|
||||
populated for [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) volume block types.
|
||||
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
|
||||
{{< /note >}}
|
||||
|
||||
A PV can specify node affinity to define constraints that limit what nodes this
|
||||
|
@ -960,15 +947,14 @@ network-attached storage. See
The following volume plugins support raw block volumes, including dynamic provisioning where
applicable:

* AWSElasticBlockStore
* AzureDisk
* CSI
* FC (Fibre Channel)
* GCEPersistentDisk
* GCEPersistentDisk (deprecated)
* iSCSI
* Local volume
* OpenStack Cinder
* RBD (deprecated)
* RBD (Ceph Block Device; deprecated)
* VsphereVolume
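A claim for a raw block volume only needs `volumeMode: Block`; everything else in this sketch (name, size) is a placeholder:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc   # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  # Request a raw block device instead of a mounted filesystem
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
```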
|
||||
|
||||
### PersistentVolume using a Raw Block Volume {#persistent-volume-using-a-raw-block-volume}
|
||||
|
|
|
@ -30,11 +30,11 @@ see the [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all
|
|||
|
||||
### Example configuration with a secret, a downwardAPI, and a configMap {#example-configuration-secret-downwardapi-configmap}
|
||||
|
||||
{{< codenew file="pods/storage/projected-secret-downwardapi-configmap.yaml" >}}
|
||||
{{% code file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}}
|
||||
|
||||
### Example configuration: secrets with a non-default permission mode set {#example-configuration-secrets-nondefault-permission-mode}
|
||||
|
||||
{{< codenew file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" >}}
|
||||
{{% code file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}}
|
||||
|
||||
Each projected volume source is listed in the spec under `sources`. The
|
||||
parameters are nearly the same with two exceptions:
|
||||
|
@ -49,7 +49,7 @@ parameters are nearly the same with two exceptions:
|
|||
You can inject the token for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
|
||||
into a Pod at a specified path. For example:
|
||||
|
||||
{{< codenew file="pods/storage/projected-service-account-token.yaml" >}}
|
||||
{{% code file="pods/storage/projected-service-account-token.yaml" %}}
|
||||
|
||||
The example Pod has a projected volume containing the injected service account
|
||||
token. Containers in this Pod can use that token to access the Kubernetes API
|
||||
|
|
|
@ -57,6 +57,17 @@ mountOptions:
|
|||
volumeBindingMode: Immediate
|
||||
```

### Default StorageClass

When a PVC does not specify a `storageClassName`, the default StorageClass is
used. The cluster can only have one default StorageClass. If more than one
default StorageClass is accidentally set, the newest default is used when the
PVC is dynamically provisioned.

For instructions on setting the default StorageClass, see
[Change the default StorageClass](/docs/tasks/administer-cluster/change-default-storage-class/).
Note that certain cloud providers may already define a default StorageClass.
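A StorageClass is marked as the default through the `storageclass.kubernetes.io/is-default-class` annotation; in this sketch the class name and provisioner are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard   # hypothetical name
  annotations:
    # Marks this class as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.example.com   # placeholder CSI driver
```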
|
||||
|
||||
### Provisioner
|
||||
|
||||
Each StorageClass has a provisioner that determines what volume plugin is used
|
||||
|
@ -64,11 +75,8 @@ for provisioning PVs. This field must be specified.
|
|||
|
||||
| Volume Plugin | Internal Provisioner | Config Example |
|
||||
| :------------------- | :------------------: | :-----------------------------------: |
|
||||
| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) |
|
||||
| AzureFile | ✓ | [Azure File](#azure-file) |
|
||||
| AzureDisk | ✓ | [Azure Disk](#azure-disk) |
|
||||
| CephFS | - | - |
|
||||
| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder) |
|
||||
| FC | - | - |
|
||||
| FlexVolume | - | - |
|
||||
| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) |
|
||||
|
@ -119,11 +127,8 @@ StorageClass has the field `allowVolumeExpansion` set to true.
|
|||
| Volume type | Required Kubernetes version |
|
||||
| :------------------- | :-------------------------- |
|
||||
| gcePersistentDisk | 1.11 |
|
||||
| awsElasticBlockStore | 1.11 |
|
||||
| Cinder | 1.11 |
|
||||
| rbd | 1.11 |
|
||||
| Azure File | 1.11 |
|
||||
| Azure Disk | 1.11 |
|
||||
| Portworx | 1.11 |
|
||||
| FlexVolume | 1.13 |
|
||||
| CSI | 1.14 (alpha), 1.16 (beta) |
|
||||
|
@ -167,9 +172,7 @@ and [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-tolera

The following plugins support `WaitForFirstConsumer` with dynamic provisioning:

- [AWSElasticBlockStore](#aws-ebs)
- [GCEPersistentDisk](#gce-pd)
- [AzureDisk](#azure-disk)
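For the plugins listed above, delayed binding is requested through `volumeBindingMode` on the StorageClass; this sketch assumes the in-tree GCE PD provisioner and a standard disk type:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware   # hypothetical name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
# Delay binding and provisioning until a Pod that uses the PVC is scheduled
volumeBindingMode: WaitForFirstConsumer
```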
|
||||
|
||||
The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:
|
||||
|
||||
|
@ -332,7 +335,13 @@ using `allowedTopologies`.

{{< note >}}
`zone` and `zones` parameters are deprecated and replaced with
[allowedTopologies](#allowed-topologies)
[allowedTopologies](#allowed-topologies). When
[GCE CSI Migration](/docs/concepts/storage/volumes/#gce-csi-migration) is
enabled, a GCE PD volume can be provisioned in a topology that does not match
any nodes, but any pod trying to use that volume will fail to schedule. With
legacy pre-migration GCE PD, in this case an error will be produced
instead at provisioning time. GCE CSI Migration is enabled by default beginning
from the Kubernetes 1.23 release.
{{< /note >}}
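For example, restricting provisioning to specific zones with `allowedTopologies` looks roughly like this; the zone names are placeholders:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-zonal   # hypothetical name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-central1-a   # placeholder zones
    - us-central1-b
```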
|
||||
|
||||
### NFS
|
||||
|
@ -360,27 +369,6 @@ Here are some examples:
|
|||
- [NFS Ganesha server and external provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
|
||||
- [NFS subdir external provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
|
||||
|
||||
### OpenStack Cinder
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: gold
|
||||
provisioner: kubernetes.io/cinder
|
||||
parameters:
|
||||
availability: nova
|
||||
```
|
||||
|
||||
- `availability`: Availability Zone. If not specified, volumes are generally
|
||||
round-robin-ed across all active zones where Kubernetes cluster has a node.
|
||||
|
||||
{{< note >}}
|
||||
{{< feature-state state="deprecated" for_k8s_version="v1.11" >}}
|
||||
This internal provisioner of OpenStack is deprecated. Please use
|
||||
[the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
|
||||
{{< /note >}}
|
||||
|
||||
### vSphere
|
||||
|
||||
There are two types of provisioners for vSphere storage classes:
|
||||
|
|
|
@ -74,10 +74,10 @@ used for provisioning VolumeSnapshots. This field must be specified.

### DeletionPolicy

Volume snapshot classes have a deletionPolicy. It enables you to configure what
happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to
is to be deleted. The deletionPolicy of a volume snapshot class can either be
`Retain` or `Delete`. This field must be specified.
Volume snapshot classes have a [deletionPolicy](/docs/concepts/storage/volume-snapshots/#delete).
It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot
object it is bound to is to be deleted. The deletionPolicy of a volume snapshot class can
either be `Retain` or `Delete`. This field must be specified.
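A minimal VolumeSnapshotClass sketch with an explicit deletionPolicy might look like the following; the class name and CSI driver are placeholders:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass         # hypothetical name
driver: hostpath.csi.k8s.io   # placeholder CSI driver
# The underlying storage snapshot is removed together with the VolumeSnapshotContent
deletionPolicy: Delete
```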
|
||||
|
||||
If the deletionPolicy is `Delete`, then the underlying storage snapshot will be
|
||||
deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`,
|
||||
|
|
|
@ -62,102 +62,31 @@ a different volume.
|
|||
|
||||
Kubernetes supports several types of volumes.
|
||||
|
||||
### awsElasticBlockStore (deprecated) {#awselasticblockstore}
|
||||
### awsElasticBlockStore (removed) {#awselasticblockstore}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.17" state="deprecated" >}}
|
||||
<!-- maintenance note: OK to remove all mention of awsElasticBlockStore once the v1.27 release of
|
||||
Kubernetes has gone out of support -->
|
||||
|
||||
An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS)
|
||||
[EBS volume](https://aws.amazon.com/ebs/) into your pod. Unlike
|
||||
`emptyDir`, which is erased when a pod is removed, the contents of an EBS
|
||||
volume are persisted and the volume is unmounted. This means that an
|
||||
EBS volume can be pre-populated with data, and that data can be shared between pods.
|
||||
Kubernetes {{< skew currentVersion >}} does not include an `awsElasticBlockStore` volume type.
|
||||
|
||||
{{< note >}}
|
||||
You must create an EBS volume by using `aws ec2 create-volume` or the AWS API before you can use it.
|
||||
{{< /note >}}
|
||||
The AWSElasticBlockStore in-tree storage driver was deprecated in the Kubernetes v1.19 release
|
||||
and then removed entirely in the v1.27 release.
|
||||
|
||||
There are some restrictions when using an `awsElasticBlockStore` volume:
|
||||
The Kubernetes project suggests that you use the [AWS EBS](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) third party
|
||||
storage driver instead.
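With that driver installed, dynamic provisioning is typically wired up through a StorageClass that names the CSI driver; this is a sketch only, and the class name is a placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-csi   # hypothetical name
# The AWS EBS CSI driver referenced above must be installed in the cluster
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```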
|
||||
|
||||
* the nodes on which pods are running must be AWS EC2 instances
|
||||
* those instances need to be in the same region and availability zone as the EBS volume
|
||||
* EBS only supports a single EC2 instance mounting a volume
|
||||
### azureDisk (removed) {#azuredisk}
|
||||
|
||||
#### Creating an AWS EBS volume
|
||||
<!-- maintenance note: OK to remove all mention of azureDisk once the v1.27 release of
|
||||
Kubernetes has gone out of support -->
|
||||
|
||||
Before you can use an EBS volume with a pod, you need to create it.
|
||||
Kubernetes {{< skew currentVersion >}} does not include an `azureDisk` volume type.
|
||||
|
||||
```shell
|
||||
aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
|
||||
```
|
||||
The AzureDisk in-tree storage driver was deprecated in the Kubernetes v1.19 release
|
||||
and then removed entirely in the v1.27 release.
|
||||
|
||||
Make sure the zone matches the zone you brought up your cluster in. Check that the size and EBS volume
|
||||
type are suitable for your use.
|
||||
|
||||
#### AWS EBS configuration example
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-ebs
|
||||
spec:
|
||||
containers:
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-ebs
|
||||
name: test-volume
|
||||
volumes:
|
||||
- name: test-volume
|
||||
# This AWS EBS volume must already exist.
|
||||
awsElasticBlockStore:
|
||||
volumeID: "<volume id>"
|
||||
fsType: ext4
|
||||
```
|
||||
|
||||
If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.
|
||||
|
||||
#### AWS EBS CSI migration
|
||||
|
||||
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
|
||||
|
||||
The `CSIMigration` feature for `awsElasticBlockStore`, when enabled, redirects
|
||||
all plugin operations from the existing in-tree plugin to the `ebs.csi.aws.com` Container
|
||||
Storage Interface (CSI) driver. In order to use this feature, the [AWS EBS CSI
|
||||
driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver)
|
||||
must be installed on the cluster.
|
||||
|
||||
#### AWS EBS CSI migration complete
|
||||
|
||||
{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
|
||||
|
||||
To disable the `awsElasticBlockStore` storage plugin from being loaded by the controller manager
|
||||
and the kubelet, set the `InTreePluginAWSUnregister` flag to `true`.
|
||||
|
||||
### azureDisk (deprecated) {#azuredisk}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.19" state="deprecated" >}}
|
||||
|
||||
The `azureDisk` volume type mounts a Microsoft Azure [Data Disk](https://docs.microsoft.com/en-us/azure/aks/csi-storage-drivers) into a pod.
|
||||
|
||||
For more details, see the [`azureDisk` volume plugin](https://github.com/kubernetes/examples/tree/master/staging/volumes/azure_disk/README.md).
|
||||
|
||||
#### azureDisk CSI migration
|
||||
|
||||
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
|
||||
|
||||
The `CSIMigration` feature for `azureDisk`, when enabled, redirects all plugin operations
|
||||
from the existing in-tree plugin to the `disk.csi.azure.com` Container
|
||||
Storage Interface (CSI) Driver. In order to use this feature, the
|
||||
[Azure Disk CSI Driver](https://github.com/kubernetes-sigs/azuredisk-csi-driver)
|
||||
must be installed on the cluster.
|
||||
|
||||
#### azureDisk CSI migration complete
|
||||
|
||||
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
|
||||
|
||||
To disable the `azureDisk` storage plugin from being loaded by the controller manager
|
||||
and the kubelet, set the `InTreePluginAzureDiskUnregister` flag to `true`.
|
||||
The Kubernetes project suggests that you use the [Azure Disk](https://github.com/kubernetes-sigs/azuredisk-csi-driver) third party
|
||||
storage driver instead.
|
||||
|
||||
### azureFile (deprecated) {#azurefile}
|
||||
|
||||
|
@ -210,51 +139,19 @@ You must have your own Ceph server running with the share exported before you ca
|
|||
|
||||
See the [CephFS example](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/) for more details.
|
||||
|
||||
### cinder (deprecated) {#cinder}
|
||||
### cinder (removed) {#cinder}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="deprecated" >}}
|
||||
<!-- maintenance note: OK to remove all mention of cinder once the v1.26 release of
|
||||
Kubernetes has gone out of support -->
|
||||
|
||||
{{< note >}}
|
||||
Kubernetes must be configured with the OpenStack cloud provider.
|
||||
{{< /note >}}
|
||||
Kubernetes {{< skew currentVersion >}} does not include a `cinder` volume type.
|
||||
|
||||
The `cinder` volume type is used to mount the OpenStack Cinder volume into your pod.
|
||||
The OpenStack Cinder in-tree storage driver was deprecated in the Kubernetes v1.11 release
|
||||
and then removed entirely in the v1.26 release.
|
||||
|
||||
#### Cinder volume configuration example
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-cinder
|
||||
spec:
|
||||
containers:
|
||||
- image: registry.k8s.io/test-webserver
|
||||
name: test-cinder-container
|
||||
volumeMounts:
|
||||
- mountPath: /test-cinder
|
||||
name: test-volume
|
||||
volumes:
|
||||
- name: test-volume
|
||||
# This OpenStack volume must already exist.
|
||||
cinder:
|
||||
volumeID: "<volume id>"
|
||||
fsType: ext4
|
||||
```
|
||||
|
||||
#### OpenStack CSI migration
|
||||
|
||||
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
|
||||
|
||||
The `CSIMigration` feature for Cinder is enabled by default since Kubernetes 1.21.
|
||||
It redirects all plugin operations from the existing in-tree plugin to the
|
||||
`cinder.csi.openstack.org` Container Storage Interface (CSI) Driver.
|
||||
[OpenStack Cinder CSI Driver](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
|
||||
must be installed on the cluster.
|
||||
|
||||
To disable the in-tree Cinder plugin from being loaded by the controller manager
|
||||
and the kubelet, you can enable the `InTreePluginOpenStackUnregister`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
The Kubernetes project suggests that you use the
|
||||
[OpenStack Cinder](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md)
|
||||
third party storage driver instead.
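As with the other removed in-tree drivers, provisioning is then configured through a StorageClass that names the external CSI driver; a rough sketch, with a placeholder class name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi   # hypothetical name
# Requires the external OpenStack Cinder CSI driver referenced above
provisioner: cinder.csi.openstack.org
volumeBindingMode: WaitForFirstConsumer
```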
|
||||
|
||||
### configMap
|
||||
|
||||
|
@ -1177,7 +1074,7 @@ persistent volume:
|
|||
`ControllerPublishVolume` and `ControllerUnpublishVolume` calls. This field is
|
||||
optional, and may be empty if no secret is required. If the Secret
|
||||
contains more than one secret, all secrets are passed.
|
||||
`nodeExpandSecretRef`: A reference to the secret containing sensitive
|
||||
* `nodeExpandSecretRef`: A reference to the secret containing sensitive
|
||||
information to pass to the CSI driver to complete the CSI
|
||||
`NodeExpandVolume` call. This field is optional, and may be empty if no
|
||||
secret is required. If the object contains more than one secret, all
|
||||
|
@ -1232,8 +1129,8 @@ For more information on how to develop a CSI driver, refer to the
|
|||
|
||||
CSI node plugins need to perform various privileged
|
||||
operations like scanning of disk devices and mounting of file systems. These operations
|
||||
differ for each host operating system. For Linux worker nodes, containerized CSI node
|
||||
node plugins are typically deployed as privileged containers. For Windows worker nodes,
|
||||
differ for each host operating system. For Linux worker nodes, containerized CSI node
|
||||
plugins are typically deployed as privileged containers. For Windows worker nodes,
|
||||
privileged operations for containerized CSI node plugins is supported using
|
||||
[csi-proxy](https://github.com/kubernetes-csi/csi-proxy), a community-managed,
|
||||
stand-alone binary that needs to be pre-installed on each Windows node.
|
||||
|
@ -1258,8 +1155,6 @@ are listed in [Types of Volumes](#volume-types).
|
|||
|
||||
The following in-tree plugins support persistent storage on Windows nodes:
|
||||
|
||||
* [`awsElasticBlockStore`](#awselasticblockstore)
|
||||
* [`azureDisk`](#azuredisk)
|
||||
* [`azureFile`](#azurefile)
|
||||
* [`gcePersistentDisk`](#gcepersistentdisk)
|
||||
* [`vsphereVolume`](#vspherevolume)
|
||||
|
|
|
@ -54,7 +54,7 @@ mounting/dismounting a volume to/from individual containers in a pod that needs
|
|||
persist data.
|
||||
|
||||
Volume management components are shipped as Kubernetes volume
|
||||
[plugin](/docs/concepts/storage/volumes/#types-of-volumes).
|
||||
[plugin](/docs/concepts/storage/volumes/#volume-types).
|
||||
The following broad classes of Kubernetes volume plugins are supported on Windows:
|
||||
|
||||
* [`FlexVolume plugins`](/docs/concepts/storage/volumes/#flexvolume)
|
||||
|
@ -65,8 +65,6 @@ The following broad classes of Kubernetes volume plugins are supported on Window
|
|||
|
||||
The following in-tree plugins support persistent storage on Windows nodes:
|
||||
|
||||
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore)
|
||||
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk)
|
||||
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile)
|
||||
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk)
|
||||
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume)
|
||||
|
|
|
@ -9,16 +9,16 @@ no_list: true
|
|||
{{< glossary_definition term_id="workload" length="short" >}}
|
||||
Whether your workload is a single component or several that work together, on Kubernetes you run
|
||||
it inside a set of [_pods_](/docs/concepts/workloads/pods).
|
||||
In Kubernetes, a `Pod` represents a set of running
|
||||
In Kubernetes, a Pod represents a set of running
|
||||
{{< glossary_tooltip text="containers" term_id="container" >}} on your cluster.
|
||||
|
||||
Kubernetes pods have a [defined lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/).
|
||||
For example, once a pod is running in your cluster then a critical fault on the
|
||||
{{< glossary_tooltip text="node" term_id="node" >}} where that pod is running means that
|
||||
all the pods on that node fail. Kubernetes treats that level of failure as final: you
|
||||
would need to create a new `Pod` to recover, even if the node later becomes healthy.
|
||||
would need to create a new Pod to recover, even if the node later becomes healthy.
|
||||
|
||||
However, to make life considerably easier, you don't need to manage each `Pod` directly.
|
||||
However, to make life considerably easier, you don't need to manage each Pod directly.
|
||||
Instead, you can use _workload resources_ that manage a set of pods on your behalf.
|
||||
These resources configure {{< glossary_tooltip term_id="controller" text="controllers" >}}
|
||||
that make sure the right number of the right kind of pod are running, to match the state
|
||||
|
@ -26,44 +26,51 @@ you specified.
|
|||
|
||||
Kubernetes provides several built-in workload resources:
|
||||
|
||||
* [`Deployment`](/docs/concepts/workloads/controllers/deployment/) and [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/)
|
||||
* [Deployment](/docs/concepts/workloads/controllers/deployment/) and [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
|
||||
(replacing the legacy resource
|
||||
{{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}}).
|
||||
`Deployment` is a good fit for managing a stateless application workload on your cluster,
|
||||
where any `Pod` in the `Deployment` is interchangeable and can be replaced if needed.
|
||||
* [`StatefulSet`](/docs/concepts/workloads/controllers/statefulset/) lets you
|
||||
Deployment is a good fit for managing a stateless application workload on your cluster,
|
||||
where any Pod in the Deployment is interchangeable and can be replaced if needed.
|
||||
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) lets you
|
||||
run one or more related Pods that do track state somehow. For example, if your workload
|
||||
records data persistently, you can run a `StatefulSet` that matches each `Pod` with a
|
||||
[`PersistentVolume`](/docs/concepts/storage/persistent-volumes/). Your code, running in the
|
||||
`Pods` for that `StatefulSet`, can replicate data to other `Pods` in the same `StatefulSet`
|
||||
records data persistently, you can run a StatefulSet that matches each Pod with a
|
||||
[PersistentVolume](/docs/concepts/storage/persistent-volumes/). Your code, running in the
|
||||
Pods for that StatefulSet, can replicate data to other Pods in the same StatefulSet
|
||||
to improve overall resilience.
|
||||
* [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) defines `Pods` that provide
|
||||
node-local facilities. These might be fundamental to the operation of your cluster, such
|
||||
as a networking helper tool, or be part of an
|
||||
{{< glossary_tooltip text="add-on" term_id="addons" >}}.
|
||||
Every time you add a node to your cluster that matches the specification in a `DaemonSet`,
|
||||
the control plane schedules a `Pod` for that `DaemonSet` onto the new node.
|
||||
* [`Job`](/docs/concepts/workloads/controllers/job/) and
|
||||
[`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/)
|
||||
define tasks that run to completion and then stop. Jobs represent one-off tasks, whereas
|
||||
`CronJobs` recur according to a schedule.
|
||||
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) defines Pods that provide
|
||||
facilities that are local to nodes.
|
||||
Every time you add a node to your cluster that matches the specification in a DaemonSet,
|
||||
the control plane schedules a Pod for that DaemonSet onto the new node.
|
||||
Each pod in a DaemonSet performs a job similar to a system daemon on a classic Unix / POSIX
|
||||
server. A DaemonSet might be fundamental to the operation of your cluster, such as
|
||||
a plugin to run [cluster networking](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model),
|
||||
it might help you to manage the node,
|
||||
or it could provide optional behavior that enhances the container platform you are running.
|
||||
* [Job](/docs/concepts/workloads/controllers/job/) and
|
||||
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/) provide different ways to
|
||||
define tasks that run to completion and then stop.
|
||||
You can use a [Job](/docs/concepts/workloads/controllers/job/) to
|
||||
define a task that runs to completion, just once. You can use a
|
||||
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/) to run
|
||||
the same Job multiple times according to a schedule.
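To make the list above concrete, a minimal Deployment sketch looks like the following; the name, labels, and replica count are placeholders, and the image matches the simple web server used elsewhere in these docs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web   # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```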
|
||||
|
||||
In the wider Kubernetes ecosystem, you can find third-party workload resources that provide
|
||||
additional behaviors. Using a
|
||||
[custom resource definition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/),
|
||||
you can add in a third-party workload resource if you want a specific behavior that's not part
|
||||
of Kubernetes' core. For example, if you wanted to run a group of `Pods` for your application but
|
||||
of Kubernetes' core. For example, if you wanted to run a group of Pods for your application but
|
||||
stop work unless _all_ the Pods are available (perhaps for some high-throughput distributed task),
|
||||
then you can implement or install an extension that does provide that feature.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
As well as reading about each resource, you can learn about specific tasks that relate to them:
|
||||
As well as reading about each API kind for workload management, you can read how to
|
||||
do specific tasks:
|
||||
|
||||
* [Run a stateless application using a `Deployment`](/docs/tasks/run-application/run-stateless-application-deployment/)
|
||||
* [Run a stateless application using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/)
|
||||
* Run a stateful application either as a [single instance](/docs/tasks/run-application/run-single-instance-stateful-application/)
|
||||
or as a [replicated set](/docs/tasks/run-application/run-replicated-stateful-application/)
|
||||
* [Run automated tasks with a `CronJob`](/docs/tasks/job/automated-tasks-with-cron-jobs/)
|
||||
* [Run automated tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
|
||||
|
||||
To learn about Kubernetes' mechanisms for separating code from configuration,
|
||||
visit [Configuration](/docs/concepts/configuration/).
|
||||
|
@ -76,6 +83,6 @@ for applications:
|
|||
removes Jobs once a defined time has passed since they completed.
|
||||
|
||||
Once your application is running, you might want to make it available on the internet as
|
||||
a [`Service`](/docs/concepts/services-networking/service/) or, for web application only,
|
||||
using an [`Ingress`](/docs/concepts/services-networking/ingress).
|
||||
a [Service](/docs/concepts/services-networking/service/) or, for web applications only,
|
||||
using an [Ingress](/docs/concepts/services-networking/ingress).
|
||||
|
||||
|
|
|
@ -1,5 +1,61 @@
|
|||
---
|
||||
title: "Workload Resources"
|
||||
weight: 20
|
||||
simple_list: true
|
||||
---
|
||||
|
||||
Kubernetes provides several built-in APIs for declarative management of your
|
||||
{{< glossary_tooltip text="workloads" term_id="workload" >}}
|
||||
and the components of those workloads.
|
||||
|
||||
Ultimately, your applications run as containers inside
|
||||
{{< glossary_tooltip term_id="Pod" text="Pods" >}}; however, managing individual
|
||||
Pods would be a lot of effort. For example, if a Pod fails, you probably want to
|
||||
run a new Pod to replace it. Kubernetes can do that for you.
|
||||
|
||||
You use the Kubernetes API to create a workload
|
||||
{{< glossary_tooltip text="object" term_id="object" >}} that represents a higher abstraction level
|
||||
than a Pod, and then the Kubernetes
|
||||
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} automatically manages
|
||||
Pod objects on your behalf, based on the specification for the workload object you defined.
|
||||
|
||||
The built-in APIs for managing workloads are:
|
||||
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/) (and, indirectly, [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)),
|
||||
the most common way to run an application on your cluster.
|
||||
Deployment is a good fit for managing a stateless application workload on your cluster, where
|
||||
any Pod in the Deployment is interchangeable and can be replaced if needed.
|
||||
(Deployments are a replacement for the legacy
|
||||
{{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}} API).
|
||||
|
||||
A [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) lets you
|
||||
manage one or more Pods – all running the same application code – where the Pods rely
|
||||
on having a distinct identity. This is different from a Deployment where the Pods are
|
||||
expected to be interchangeable.
|
||||
The most common use for a StatefulSet is to be able to make a link between its Pods and
|
||||
their persistent storage. For example, you can run a StatefulSet that associates each Pod
|
||||
with a [PersistentVolume](/docs/concepts/storage/persistent-volumes/). If one of the Pods
|
||||
in the StatefulSet fails, Kubernetes makes a replacement Pod that is connected to the
|
||||
same PersistentVolume.
|
||||
|
||||
A [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) defines Pods that provide
|
||||
facilities that are local to a specific {{< glossary_tooltip text="node" term_id="node" >}};
|
||||
for example, a driver that lets containers on that node access a storage system. You use a DaemonSet
|
||||
when the driver, or other node-level service, has to run on the node where it's useful.
|
||||
Each Pod in a DaemonSet performs a role similar to a system daemon on a classic Unix / POSIX
|
||||
server.
|
||||
A DaemonSet might be fundamental to the operation of your cluster,
|
||||
such as a plugin to let that node access
|
||||
[cluster networking](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model),
|
||||
it might help you to manage the node,
|
||||
or it could provide less essential facilities that enhance the container platform you are running.
|
||||
You can run DaemonSets (and their pods) across every node in your cluster, or across just a subset (for example,
|
||||
only install the GPU accelerator driver on nodes that have a GPU installed).
|
||||
|
||||
You can use a [Job](/docs/concepts/workloads/controllers/job/) and / or
|
||||
a [CronJob](/docs/concepts/workloads/controllers/cron-jobs/) to
|
||||
define tasks that run to completion and then stop. A Job represents a one-off task,
|
||||
whereas each CronJob repeats according to a schedule.
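As a rough illustration of that difference, the CronJob sketch below runs a one-off Job every night at 03:00; the name, image, and command are placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report   # hypothetical name
spec:
  schedule: "0 3 * * *"  # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox:1.36   # placeholder image
            command: ["sh", "-c", "date; echo generating report"]
```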
|
||||
|
||||
Other topics in this section:
|
||||
<!-- relies on simple_list: true in the front matter -->
|
||||
|
|
|
@ -5,7 +5,10 @@ reviewers:
|
|||
- janetkuo
|
||||
title: CronJob
|
||||
content_type: concept
|
||||
description: >-
|
||||
A CronJob starts one-time Jobs on a repeating schedule.
|
||||
weight: 80
|
||||
hide_summary: true # Listed separately in section index
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -38,7 +41,7 @@ length of a Job name is no more than 63 characters.
|
|||
|
||||
This example CronJob manifest prints the current time and a hello message every minute:
|
||||
|
||||
{{< codenew file="application/job/cronjob.yaml" >}}
|
||||
{{% code file="application/job/cronjob.yaml" %}}
|
||||
|
||||
([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
|
||||
takes you through this example in more detail).
|
||||
|
|
|
@ -6,8 +6,11 @@ reviewers:
|
|||
- janetkuo
|
||||
- kow3ns
|
||||
title: DaemonSet
|
||||
description: >-
|
||||
A DaemonSet defines Pods that provide node-local facilities. These might be fundamental to the operation of your cluster, such as a networking helper tool, or be part of an add-on.
|
||||
content_type: concept
|
||||
weight: 40
|
||||
hide_summary: true # Listed separately in section index
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -35,7 +38,7 @@ different flags and/or different memory and cpu requests for different hardware
|
|||
You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below
|
||||
describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
|
||||
|
||||
{{< codenew file="controllers/daemonset.yaml" >}}
|
||||
{{% code file="controllers/daemonset.yaml" %}}
|
||||
|
||||
Create a DaemonSet based on the YAML file:
|
||||
|
||||
|
|
|
@ -6,9 +6,11 @@ feature:
|
|||
title: Automated rollouts and rollbacks
|
||||
description: >
|
||||
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
|
||||
|
||||
description: >-
|
||||
A Deployment manages a set of Pods to run an application workload, usually one that doesn't maintain state.
|
||||
content_type: concept
|
||||
weight: 10
|
||||
hide_summary: true # Listed separately in section index
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -38,13 +40,9 @@ The following are typical use cases for Deployments:
|
|||
|
||||
## Creating a Deployment
|
||||
|
||||
Before creating a Deployment define an
|
||||
[environment variable](/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container)
|
||||
for a container.
|
||||
|
||||
The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:
|
||||
|
||||
{{< codenew file="controllers/nginx-deployment.yaml" >}}
|
||||
{{% code file="controllers/nginx-deployment.yaml" %}}
|
||||
|
||||
In this example:
|
||||
|
||||
|
|
|
@ -5,11 +5,14 @@ reviewers:
|
|||
- soltysh
|
||||
title: Jobs
|
||||
content_type: concept
|
||||
description: >-
|
||||
Jobs represent one-off tasks that run to completion and then stop.
|
||||
feature:
|
||||
title: Batch execution
|
||||
description: >
|
||||
In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
|
||||
weight: 50
|
||||
hide_summary: true # Listed separately in section index
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -36,7 +39,7 @@ see [CronJob](/docs/concepts/workloads/controllers/cron-jobs/).
|
|||
Here is an example Job config. It computes π to 2000 places and prints it out.
|
||||
It takes around 10s to complete.
|
||||
|
||||
{{< codenew file="controllers/job.yaml" >}}
|
||||
{{% code file="controllers/job.yaml" %}}
|
||||
|
||||
You can run the example with this command:
|
||||
|
||||
|
@ -472,7 +475,7 @@ container exit codes and the Pod conditions.
|
|||
|
||||
Here is a manifest for a Job that defines a `podFailurePolicy`:
|
||||
|
||||
{{< codenew file="/controllers/job-pod-failure-policy-example.yaml" >}}
|
||||
{{% code file="/controllers/job-pod-failure-policy-example.yaml" %}}
|
||||
|
||||
In the example above, the first rule of the Pod failure policy specifies that
|
||||
the Job should be marked failed if the `main` container fails with the 42 exit
|
||||
|
@ -909,32 +912,12 @@ mismatch.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
|
||||
|
||||
{{< note >}}
|
||||
The control plane doesn't track Jobs using finalizers, if the Jobs were created
|
||||
when the feature gate `JobTrackingWithFinalizers` was disabled, even after you
|
||||
upgrade the control plane to 1.26.
|
||||
{{< /note >}}
|
||||
|
||||
The control plane keeps track of the Pods that belong to any Job and notices if
|
||||
any such Pod is removed from the API server. To do that, the Job controller
|
||||
creates Pods with the finalizer `batch.kubernetes.io/job-tracking`. The
|
||||
controller removes the finalizer only after the Pod has been accounted for in
|
||||
the Job status, allowing the Pod to be removed by other controllers or users.
|
||||
|
||||
Jobs created before upgrading to Kubernetes 1.26 or before the feature gate
|
||||
`JobTrackingWithFinalizers` is enabled are tracked without the use of Pod
|
||||
finalizers.
|
||||
The Job {{< glossary_tooltip term_id="controller" text="controller" >}} updates
|
||||
the status counters for `succeeded` and `failed` Pods based only on the Pods
|
||||
that exist in the cluster. The control plane can lose track of the progress of
|
||||
the Job if Pods are deleted from the cluster.
|
||||
|
||||
You can determine if the control plane is tracking a Job using Pod finalizers by
|
||||
checking if the Job has the annotation
|
||||
`batch.kubernetes.io/job-tracking`. You should **not** manually add or remove
|
||||
this annotation from Jobs. Instead, you can recreate the Jobs to ensure they
|
||||
are tracked using Pod finalizers.
|
||||
|
||||
### Elastic Indexed Jobs
|
||||
|
||||
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
|
||||
|
|
|
@ -12,7 +12,11 @@ feature:
|
|||
kills containers that don't respond to your user-defined health check,
|
||||
and doesn't advertise them to clients until they are ready to serve.
|
||||
content_type: concept
|
||||
description: >-
|
||||
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
|
||||
Usually, you define a Deployment and let that Deployment manage ReplicaSets automatically.
|
||||
weight: 20
|
||||
hide_summary: true # Listed separately in section index
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -52,7 +56,7 @@ use a Deployment instead, and define your application in the spec section.
|
|||
|
||||
## Example
|
||||
|
||||
{{< codenew file="controllers/frontend.yaml" >}}
|
||||
{{% code file="controllers/frontend.yaml" %}}
|
||||
|
||||
Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will
|
||||
create the defined ReplicaSet and the Pods that it manages.
|
||||
|
@ -162,7 +166,7 @@ to owning Pods specified by its template-- it can acquire other Pods in the mann
|
|||
|
||||
Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:
|
||||
|
||||
{{< codenew file="pods/pod-rs.yaml" >}}
|
||||
{{% code file="pods/pod-rs.yaml" %}}
|
||||
|
||||
As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
|
||||
ReplicaSet, they will immediately be acquired by it.
|
||||
|
@ -377,7 +381,7 @@ A ReplicaSet can also be a target for
|
|||
a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
|
||||
the ReplicaSet we created in the previous example.
|
||||
|
||||
{{< codenew file="controllers/hpa-rs.yaml" >}}
|
||||
{{% code file="controllers/hpa-rs.yaml" %}}
|
||||
|
||||
Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
|
||||
create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
|
||||
|
|
|
@ -5,6 +5,9 @@ reviewers:
|
|||
title: ReplicationController
|
||||
content_type: concept
|
||||
weight: 90
|
||||
description: >-
|
||||
Legacy API for managing workloads that can scale horizontally.
|
||||
Superseded by the Deployment and ReplicaSet APIs.
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -19,7 +22,7 @@ always up and available.
|
|||
|
||||
<!-- body -->
|
||||
|
||||
## How a ReplicationController Works
|
||||
## How a ReplicationController works
|
||||
|
||||
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
|
||||
ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a
|
||||
|
@ -41,7 +44,7 @@ service, such as web servers.
|
|||
|
||||
This example ReplicationController config runs three copies of the nginx web server.
|
||||
|
||||
{{< codenew file="controllers/replication.yaml" >}}
|
||||
{{% code file="controllers/replication.yaml" %}}
|
||||
|
||||
Run the example job by downloading the example file and then running this command:
|
||||
|
||||
|
|
|
@ -8,7 +8,11 @@ reviewers:
|
|||
- smarterclayton
|
||||
title: StatefulSets
|
||||
content_type: concept
|
||||
description: >-
|
||||
A StatefulSet runs a group of Pods, and maintains a sticky identity for each of those Pods. This is useful for managing
|
||||
applications that need persistent storage or a stable, unique network identity.
|
||||
weight: 30
|
||||
hide_summary: true # Listed separately in section index
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
|
|
@ -32,10 +32,8 @@ for debugging if your cluster offers this.
|
|||
## What is a Pod?
|
||||
|
||||
{{< note >}}
|
||||
While Kubernetes supports more
|
||||
{{< glossary_tooltip text="container runtimes" term_id="container-runtime" >}}
|
||||
than just Docker, [Docker](https://www.docker.com/) is the most commonly known
|
||||
runtime, and it helps to describe Pods using some terminology from Docker.
|
||||
You need to install a [container runtime](/docs/setup/production-environment/container-runtimes/)
|
||||
into each node in the cluster so that Pods can run there.
|
||||
{{< /note >}}
|
||||
|
||||
The shared context of a Pod is a set of Linux namespaces, cgroups, and
|
||||
|
@ -48,7 +46,7 @@ A Pod is similar to a set of containers with shared namespaces and shared filesy
|
|||
|
||||
The following is an example of a Pod which consists of a container running the image `nginx:1.14.2`.
|
||||
|
||||
{{< codenew file="pods/simple-pod.yaml" >}}
|
||||
{{% code file="pods/simple-pod.yaml" %}}
|
||||
|
||||
To create the Pod shown above, run the following command:
|
||||
```shell
|
||||
|
@ -160,7 +158,7 @@ Kubernetes. In future, this list may be expanded.
|
|||
In Kubernetes v{{< skew currentVersion >}}, the value you set for this field has no
|
||||
effect on {{< glossary_tooltip text="scheduling" term_id="kube-scheduler" >}} of the pods.
|
||||
Setting the `.spec.os.name` helps to identify the pod OS
|
||||
authoratitively and is used for validation. The kubelet refuses to run a Pod where you have
|
||||
authoritatively and is used for validation. The kubelet refuses to run a Pod where you have
|
||||
specified a Pod OS, if this isn't the same as the operating system for the node where
|
||||
that kubelet is running.
|
||||
The [Pod security standards](/docs/concepts/security/pod-security-standards/) also use this
|
||||
|
@ -338,7 +336,7 @@ using the kubelet to supervise the individual [control plane components](/docs/c
|
|||
The kubelet automatically tries to create a {{< glossary_tooltip text="mirror Pod" term_id="mirror-pod" >}}
|
||||
on the Kubernetes API server for each static Pod.
|
||||
This means that the Pods running on a node are visible on the API server,
|
||||
but cannot be controlled from there.
|
||||
but cannot be controlled from there. See the guide [Create static Pods](/docs/tasks/configure-pod-container/static-pod) for more information.
|
||||
|
||||
{{< note >}}
|
||||
The `spec` of a static Pod cannot refer to other API objects
|
||||
|
|
|
@ -83,7 +83,7 @@ Here are some ideas for how to use init containers:
|
|||
* Wait for a {{< glossary_tooltip text="Service" term_id="service">}} to
|
||||
be created, using a shell one-line command like:
|
||||
```shell
|
||||
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
|
||||
for i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1
|
||||
```
|
||||
|
||||
* Register this Pod with a remote server from the downward API with a command like:
|
||||
|
|
|
@ -38,8 +38,8 @@ If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that no
|
|||
are [scheduled for deletion](#pod-garbage-collection) after a timeout period.
|
||||
|
||||
Pods do not, by themselves, self-heal. If a Pod is scheduled to a
|
||||
{{< glossary_tooltip text="node" term_id="node" >}} that then fails, the Pod is deleted; likewise, a Pod won't
|
||||
survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a
|
||||
{{< glossary_tooltip text="node" term_id="node" >}} that then fails, the Pod is deleted; likewise,
|
||||
a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a
|
||||
higher-level abstraction, called a
|
||||
{{< glossary_tooltip term_id="controller" text="controller" >}}, that handles the work of
|
||||
managing the relatively disposable Pod instances.
|
||||
|
@ -57,8 +57,8 @@ created anew.
|
|||
|
||||
{{< figure src="/images/docs/pod.svg" title="Pod diagram" class="diagram-medium" >}}
|
||||
|
||||
*A multi-container Pod that contains a file puller and a
|
||||
web server that uses a persistent volume for shared storage between the containers.*
|
||||
A multi-container Pod that contains a file puller and a
|
||||
web server that uses a persistent volume for shared storage between the containers.
|
||||
|
||||
## Pod phase
|
||||
|
||||
|
@ -91,9 +91,9 @@ A Pod is granted a term to terminate gracefully, which defaults to 30 seconds.
|
|||
You can use the flag `--force` to [terminate a Pod by force](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced).
|
||||
{{< /note >}}
|
||||
|
||||
Since Kubernetes 1.27, the kubelet transitions deleted pods, except for
|
||||
[static pods](/docs/tasks/configure-pod-container/static-pod/) and
|
||||
[force-deleted pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced)
|
||||
Since Kubernetes 1.27, the kubelet transitions deleted Pods, except for
|
||||
[static Pods](/docs/tasks/configure-pod-container/static-pod/) and
|
||||
[force-deleted Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced)
|
||||
without a finalizer, to a terminal phase (`Failed` or `Succeeded` depending on
|
||||
the exit statuses of the pod containers) before their deletion from the API server.
|
||||
|
||||
|
@ -219,13 +219,13 @@ status:
|
|||
...
|
||||
```
|
||||
|
||||
The Pod conditions you add must have names that meet the Kubernetes [label key format](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
|
||||
|
||||
The Pod conditions you add must have names that meet the Kubernetes
|
||||
[label key format](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
|
||||
|
||||
### Status for Pod readiness {#pod-readiness-status}
|
||||
|
||||
The `kubectl patch` command does not support patching object status.
|
||||
To set these `status.conditions` for the pod, applications and
|
||||
To set these `status.conditions` for the Pod, applications and
|
||||
{{< glossary_tooltip term_id="operator-pattern" text="operators">}} should use
|
||||
the `PATCH` action.
|
||||
You can use a [Kubernetes client library](/docs/reference/using-api/client-libraries/) to
|
||||
|
@ -259,12 +259,14 @@ the `PodReadyToStartContainers` condition in the `status.conditions` field of a
|
|||
The `PodReadyToStartContainers` condition is set to `False` by the Kubelet when it detects a
|
||||
Pod does not have a runtime sandbox with networking configured. This occurs in
|
||||
the following scenarios:
|
||||
* Early in the lifecycle of the Pod, when the kubelet has not yet begun to set up a sandbox for the Pod using the container runtime.
|
||||
* Later in the lifecycle of the Pod, when the Pod sandbox has been destroyed due
|
||||
to either:
|
||||
* the node rebooting, without the Pod getting evicted
|
||||
* for container runtimes that use virtual machines for isolation, the Pod
|
||||
sandbox virtual machine rebooting, which then requires creating a new sandbox and fresh container network configuration.
|
||||
|
||||
- Early in the lifecycle of the Pod, when the kubelet has not yet begun to set up a sandbox for
|
||||
the Pod using the container runtime.
|
||||
- Later in the lifecycle of the Pod, when the Pod sandbox has been destroyed due to either:
|
||||
- the node rebooting, without the Pod getting evicted
|
||||
- for container runtimes that use virtual machines for isolation, the Pod
|
||||
sandbox virtual machine rebooting, which then requires creating a new sandbox and
|
||||
fresh container network configuration.
|
||||
|
||||
The `PodReadyToStartContainers` condition is set to `True` by the kubelet after the
|
||||
successful completion of sandbox creation and network configuration for the Pod
|
||||
|
@ -281,16 +283,14 @@ condition to `True` before sandbox creation and network configuration starts.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.26" state="alpha" >}}
|
||||
|
||||
See [Pod Scheduling Readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/) for more information.
|
||||
See [Pod Scheduling Readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)
|
||||
for more information.

## Container probes

A _probe_ is a diagnostic
performed periodically by the
[kubelet](/docs/reference/command-line-tools-reference/kubelet/)
on a container. To perform a diagnostic,
the kubelet either executes code within the container, or makes
a network request.
A _probe_ is a diagnostic performed periodically by the [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
on a container. To perform a diagnostic, the kubelet either executes code within the container,
or makes a network request.
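For example, a container that exposes an HTTP health endpoint can be probed with a sketch like this; the image, path, and port are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http   # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/test-webserver   # placeholder image
    livenessProbe:
      # The kubelet makes a network request, as described above
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 10
```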
|
||||
|
||||
### Check mechanisms {#probe-check-methods}
|
||||
|
||||
|
@ -368,8 +368,6 @@ see [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod
|
|||
|
||||
#### When should you use a liveness probe?
|
||||
|
||||
{{< feature-state for_k8s_version="v1.0" state="stable" >}}
|
||||
|
||||
If the process in your container is able to crash on its own whenever it
|
||||
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
|
||||
probe; the kubelet will automatically perform the correct action in accordance
|
||||
|
@ -380,8 +378,6 @@ specify a liveness probe, and specify a `restartPolicy` of Always or OnFailure.
|
|||
|
||||
#### When should you use a readiness probe?
|
||||
|
||||
{{< feature-state for_k8s_version="v1.0" state="stable" >}}
|
||||
|
||||
If you'd like to start sending traffic to a Pod only when a probe succeeds,
|
||||
specify a readiness probe. In this case, the readiness probe might be the same
|
||||
as the liveness probe, but the existence of the readiness probe in the spec means
|
||||
|
@ -414,8 +410,6 @@ to stop.
|
|||
|
||||
#### When should you use a startup probe?
|
||||
|
||||
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
|
||||
|
||||
Startup probes are useful for Pods that have containers that take a long time to
|
||||
come into service. Rather than set a long liveness interval, you can configure
|
||||
a separate configuration for probing the container as it starts up, allowing
|
||||
|
@ -444,60 +438,69 @@ shutdown.
|
|||
Typically, the container runtime sends a TERM signal to the main process in each
|
||||
container. Many container runtimes respect the `STOPSIGNAL` value defined in the container
|
||||
image and send this instead of TERM.
|
||||
Once the grace period has expired, the KILL signal is sent to any remaining
|
||||
processes, and the Pod is then deleted from the
|
||||
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}. If the kubelet or the
|
||||
container runtime's management service is restarted while waiting for processes to terminate, the
|
||||
cluster retries from the start including the full original grace period.
|
||||
Once the grace period has expired, the KILL signal is sent to any remaining processes, and the Pod
|
||||
is then deleted from the {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}.
|
||||
If the kubelet or the container runtime's management service is restarted while waiting for
|
||||
processes to terminate, the cluster retries from the start including the full original grace period.
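As a minimal sketch of where these settings live in a Pod spec (names, image, and values are illustrative), a Pod can declare its own grace period and a `preStop` hook:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo          # illustrative name
spec:
  terminationGracePeriodSeconds: 60     # overrides the 30-second default
  containers:
  - name: app
    image: registry.example/app:1.0     # hypothetical image
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]   # illustrative: give the app time to drain
```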
|
||||
|
||||
An example flow:
|
||||
|
||||
1. You use the `kubectl` tool to manually delete a specific Pod, with the default grace period
|
||||
(30 seconds).
|
||||
|
||||
1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead"
|
||||
along with the grace period.
|
||||
If you use `kubectl describe` to check on the Pod you're deleting, that Pod shows up as
|
||||
"Terminating".
|
||||
If you use `kubectl describe` to check the Pod you're deleting, that Pod shows up as "Terminating".
|
||||
On the node where the Pod is running: as soon as the kubelet sees that a Pod has been marked
|
||||
as terminating (a graceful shutdown duration has been set), the kubelet begins the local Pod
|
||||
shutdown process.
|
||||
|
||||
1. If one of the Pod's containers has defined a `preStop`
|
||||
[hook](/docs/concepts/containers/container-lifecycle-hooks), the kubelet
|
||||
runs that hook inside of the container. If the `preStop` hook is still running after the
|
||||
grace period expires, the kubelet requests a small, one-off grace period extension of 2
|
||||
seconds.
|
||||
[hook](/docs/concepts/containers/container-lifecycle-hooks) and the `terminationGracePeriodSeconds`
|
||||
in the Pod spec is not set to 0, the kubelet runs that hook inside of the container.
|
||||
The default `terminationGracePeriodSeconds` setting is 30 seconds.
|
||||
|
||||
If the `preStop` hook is still running after the grace period expires, the kubelet requests
|
||||
a small, one-off grace period extension of 2 seconds.
|
||||
|
||||
{{< note >}}
|
||||
If the `preStop` hook needs longer to complete than the default grace period allows,
|
||||
you must modify `terminationGracePeriodSeconds` to suit this.
|
||||
{{< /note >}}
|
||||
|
||||
1. The kubelet triggers the container runtime to send a TERM signal to process 1 inside each
|
||||
container.
|
||||
{{< note >}}
|
||||
The containers in the Pod receive the TERM signal at different times and in an arbitrary
|
||||
order. If the order of shutdowns matters, consider using a `preStop` hook to synchronize.
|
||||
{{< /note >}}
|
||||
1. At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane evaluates whether to remove that shutting-down Pod from EndpointSlice (and Endpoints) objects, where those objects represent
|
||||
a {{< glossary_tooltip term_id="service" text="Service" >}} with a configured
|
||||
{{< glossary_tooltip text="selector" term_id="selector" >}}.
|
||||
|
||||
1. At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane
|
||||
evaluates whether to remove that shutting-down Pod from EndpointSlice (and Endpoints) objects,
|
||||
where those objects represent a {{< glossary_tooltip term_id="service" text="Service" >}}
|
||||
with a configured {{< glossary_tooltip text="selector" term_id="selector" >}}.
|
||||
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and other workload resources
|
||||
no longer treat the shutting-down Pod as a valid, in-service replica. Pods that shut down slowly
|
||||
should not continue to serve regular traffic and should start terminating and finish processing open connections.
|
||||
Some applications need to go beyond finishing open connections and need more graceful termination -
|
||||
for example: session draining and completion. Any endpoints that represent the terminating pods
|
||||
are not immediately removed from EndpointSlices,
|
||||
and a status indicating [terminating state](/docs/concepts/services-networking/endpoint-slices/#conditions)
|
||||
is exposed from the EndpointSlice API (and the legacy Endpoints API). Terminating
|
||||
endpoints always have their `ready` status
|
||||
as `false` (for backward compatibility with versions before 1.26),
|
||||
so load balancers will not use it for regular traffic.
|
||||
If traffic draining on terminating pod is needed, the actual readiness can be checked as a condition `serving`.
|
||||
You can find more details on how to implement connections draining
|
||||
in the tutorial [Pods And Endpoints Termination Flow](/docs/tutorials/services/pods-and-endpoint-termination-flow/)
|
||||
no longer treat the shutting-down Pod as a valid, in-service replica.
|
||||
|
||||
Pods that shut down slowly should not continue to serve regular traffic and should start
|
||||
terminating and finish processing open connections. Some applications need to go beyond
|
||||
finishing open connections and need more graceful termination, for example, session draining
|
||||
and completion.
|
||||
|
||||
Any endpoints that represent the terminating Pods are not immediately removed from
|
||||
EndpointSlices, and a status indicating [terminating state](/docs/concepts/services-networking/endpoint-slices/#conditions)
|
||||
is exposed from the EndpointSlice API (and the legacy Endpoints API).
|
||||
Terminating endpoints always have their `ready` status as `false` (for backward compatibility
|
||||
with versions before 1.26), so load balancers will not use them for regular traffic.
|
||||
|
||||
If traffic draining on a terminating Pod is needed, the actual readiness can be checked as a
|
||||
condition `serving`. You can find more details on how to implement connections draining in the
|
||||
tutorial [Pods And Endpoints Termination Flow](/docs/tutorials/services/pods-and-endpoint-termination-flow/)
|
||||
|
||||
{{< note >}}
|
||||
If you don't have the `EndpointSliceTerminatingCondition` feature gate enabled
|
||||
in your cluster (the gate is on by default from Kubernetes 1.22, and locked to default in 1.26), then the Kubernetes control
|
||||
plane removes a Pod from any relevant EndpointSlices as soon as the Pod's
|
||||
in your cluster (the gate is on by default from Kubernetes 1.22, and locked to default in 1.26),
|
||||
then the Kubernetes control plane removes a Pod from any relevant EndpointSlices as soon as the Pod's
|
||||
termination grace period _begins_. The behavior above is described when the
|
||||
feature gate `EndpointSliceTerminatingCondition` is enabled.
|
||||
{{< /note >}}
|
||||
|
@ -505,7 +508,7 @@ feature gate `EndpointSliceTerminatingCondition` is enabled.
|
|||
1. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends
|
||||
`SIGKILL` to any processes still running in any container in the Pod.
|
||||
The kubelet also cleans up a hidden `pause` container if that container runtime uses one.
|
||||
1. The kubelet transitions the pod into a terminal phase (`Failed` or `Succeeded` depending on
|
||||
1. The kubelet transitions the Pod into a terminal phase (`Failed` or `Succeeded` depending on
|
||||
the end state of its containers). This step is guaranteed since version 1.27.
|
||||
1. The kubelet triggers forcible removal of Pod object from the API server, by setting grace period
|
||||
to 0 (immediate deletion).
|
||||
|
@ -522,11 +525,12 @@ the `--grace-period=<seconds>` option which allows you to override the default a
|
|||
own value.
|
||||
|
||||
Setting the grace period to `0` forcibly and immediately deletes the Pod from the API
|
||||
server. If the pod was still running on a node, that forcible deletion triggers the kubelet to
|
||||
server. If the Pod was still running on a node, that forcible deletion triggers the kubelet to
|
||||
begin immediate cleanup.
|
||||
|
||||
{{< note >}}
|
||||
You must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions.
|
||||
You must specify an additional flag `--force` along with `--grace-period=0`
|
||||
in order to perform force deletions.
|
||||
{{< /note >}}
|
||||
|
||||
When a force deletion is performed, the API server does not wait for confirmation
|
||||
|
@ -536,7 +540,8 @@ name. On the node, Pods that are set to terminate immediately will still be give
|
|||
a small grace period before being force killed.
|
||||
|
||||
{{< caution >}}
|
||||
Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
|
||||
Immediate deletion does not wait for confirmation that the running resource has been terminated.
|
||||
The resource may continue to run on the cluster indefinitely.
|
||||
{{< /caution >}}
|
||||
|
||||
If you need to force-delete Pods that are part of a StatefulSet, refer to the task
|
||||
|
@ -549,21 +554,24 @@ For failed Pods, the API objects remain in the cluster's API until a human or
|
|||
{{< glossary_tooltip term_id="controller" text="controller" >}} process
|
||||
explicitly removes them.
|
||||
|
||||
The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of `Succeeded` or
|
||||
`Failed`), when the number of Pods exceeds the configured threshold
|
||||
(determined by `terminated-pod-gc-threshold` in the kube-controller-manager).
|
||||
The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up
|
||||
terminated Pods (with a phase of `Succeeded` or `Failed`), when the number of Pods exceeds the
|
||||
configured threshold (determined by `terminated-pod-gc-threshold` in the kube-controller-manager).
|
||||
This avoids a resource leak as Pods are created and terminated over time.
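For example, on clusters where the kube-controller-manager runs as a static Pod (a kubeadm-style layout is assumed here), the threshold might be set like the sketch below; the path and value are illustrative:

```yaml
# Fragment of a kube-controller-manager static Pod manifest, for example
# /etc/kubernetes/manifests/kube-controller-manager.yaml on kubeadm clusters.
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --terminated-pod-gc-threshold=500   # illustrative value; the upstream default is 12500
```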
|
||||
|
||||
Additionally, PodGC cleans up any Pods which satisfy any of the following conditions:
|
||||
1. are orphan pods - bound to a node which no longer exists,
|
||||
2. are unscheduled terminating pods,
|
||||
3. are terminating pods, bound to a non-ready node tainted with [`node.kubernetes.io/out-of-service`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-out-of-service), when the `NodeOutOfServiceVolumeDetach` feature gate is enabled.
|
||||
|
||||
1. are orphan Pods - bound to a node which no longer exists,
|
||||
1. are unscheduled terminating Pods,
|
||||
1. are terminating Pods, bound to a non-ready node tainted with
|
||||
[`node.kubernetes.io/out-of-service`](/docs/reference/labels-annotations-taints/#node-kubernetes-io-out-of-service),
|
||||
when the `NodeOutOfServiceVolumeDetach` feature gate is enabled.
|
||||
|
||||
When the `PodDisruptionConditions` feature gate is enabled, along with
|
||||
cleaning up the pods, PodGC will also mark them as failed if they are in a non-terminal
|
||||
phase. Also, PodGC adds a pod disruption condition when cleaning up an orphan
|
||||
pod (see also:
|
||||
[Pod disruption conditions](/docs/concepts/workloads/pods/disruptions#pod-disruption-conditions)).
|
||||
cleaning up the Pods, PodGC will also mark them as failed if they are in a non-terminal
|
||||
phase. Also, PodGC adds a Pod disruption condition when cleaning up an orphan Pod.
|
||||
See [Pod disruption conditions](/docs/concepts/workloads/pods/disruptions#pod-disruption-conditions)
|
||||
for more details.
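As a sketch of what that looks like on the affected Pod (the reason and message strings shown are illustrative, not guaranteed values):

```yaml
status:
  phase: Failed
  conditions:
  - type: DisruptionTarget
    status: "True"
    reason: DeletionByPodGC                  # assumed reason string for PodGC cleanup
    message: "PodGC: node no longer exists"  # illustrative message
    lastTransitionTime: "2023-01-01T00:00:00Z"
```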
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
@ -577,4 +585,4 @@ pod (see also:
|
|||
|
||||
* For detailed information about Pod and container status in the API, see
|
||||
the API reference documentation covering
|
||||
[`.status`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus) for Pod.
|
||||
[`status`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus) for Pod.
|
||||
|
|
|
@ -133,9 +133,8 @@ you will do a second commit. It is important to keep your changes separated into
|
|||
Go to `<k8s-base>` and run these scripts:
|
||||
|
||||
```shell
|
||||
hack/update-generated-swagger-docs.sh
|
||||
hack/update-openapi-spec.sh
|
||||
hack/update-generated-protobuf.sh
|
||||
./hack/update-codegen.sh
|
||||
./hack/update-openapi-spec.sh
|
||||
```
|
||||
|
||||
Run `git status` to see what was generated.
|
||||
|
@ -206,10 +205,8 @@ release-{{< skew prevMinorVersion >}} branch, the next step is to run these scri
|
|||
release-{{< skew prevMinorVersion >}} branch of your local environment.
|
||||
|
||||
```shell
|
||||
hack/update-generated-swagger-docs.sh
|
||||
hack/update-openapi-spec.sh
|
||||
hack/update-generated-protobuf.sh
|
||||
hack/update-api-reference-docs.sh
|
||||
./hack/update-codegen.sh
|
||||
./hack/update-openapi-spec.sh
|
||||
```
|
||||
|
||||
Now add a commit to your cherry-pick pull request that has the recently generated OpenAPI spec
|
||||
|
|
|
@ -361,6 +361,48 @@ extensive human review to meet minimum standards of quality.
|
|||
To ensure accuracy in grammar and meaning, members of your localization team
|
||||
should carefully review all machine-generated translations before publishing.
|
||||
|
||||
### Translating SVG images
|
||||
|
||||
The Kubernetes project recommends using vector (SVG) images where possible, as
|
||||
these are much easier for a localization team to edit. If you find a raster
|
||||
image that needs localizing, consider first redrawing the English version as
|
||||
a vector image, and then localize that.
|
||||
|
||||
When translating text within SVG (Scalable Vector Graphics) images, it's
|
||||
essential to follow certain guidelines to ensure accuracy and maintain
|
||||
consistency across different language versions. SVG images are commonly
|
||||
used in the Kubernetes documentation to illustrate concepts, workflows,
|
||||
and diagrams.
|
||||
|
||||
1. **Identifying translatable text**: Start by identifying the text elements
|
||||
within the SVG image that need to be translated. These elements typically
|
||||
include labels, captions, annotations, or any text that conveys information.
|
||||
|
||||
2. **Editing SVG files**: SVG files are XML-based, which means they can be
|
||||
edited using a text editor. However, it's important to note that most of the
|
||||
documentation images in Kubernetes already convert text to curves to avoid font
|
||||
compatibility issues. In such cases, it is recommended to use specialized SVG
|
||||
editing software, such as Inkscape. In such an editor, open the SVG file and locate
|
||||
the text elements that require translation.
|
||||
|
||||
3. **Translating the text**: Replace the original text with the translated
|
||||
version in the desired language. Ensure the translated text accurately conveys
|
||||
the intended meaning and fits within the available space in the image. The Open
|
||||
Sans font family should be used when working with languages that use the Latin
|
||||
alphabet. You can download the Open Sans typeface from here:
|
||||
[Open Sans Typeface](https://fonts.google.com/specimen/Open+Sans).
|
||||
|
||||
4. **Converting text to curves**: As already mentioned, to address font
|
||||
compatibility issues, it is recommended to convert the translated text to
|
||||
curves or paths. Converting text to curves ensures that the final image
|
||||
displays the translated text correctly, even if the user's system does not
|
||||
have the exact font used in the original SVG.
|
||||
|
||||
5. **Reviewing and testing**: After making the necessary translations and
|
||||
converting text to curves, save and review the updated SVG image to ensure
|
||||
the text is properly displayed and aligned. Check
|
||||
[Preview your changes locally](https://kubernetes.io/docs/contribute/new-content/open-a-pr/#preview-locally).
|
||||
|
||||
### Source files
|
||||
|
||||
Localizations must be based on the English files from a specific release
|
||||
|
|
|
@ -271,33 +271,36 @@ Renders to:
|
|||
{{< tab name="JSON File" include="podtemplate.json" />}}
|
||||
{{< /tabs >}}
|
||||
|
||||
### Source code files
|
||||
## Source code files
|
||||
|
||||
You can use the `{{</* codenew */>}}` shortcode to embed the contents of file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents of the sample file is generic and reusable, and you want the users to try it out themselves.
|
||||
You can use the `{{%/* code */%}}` shortcode to embed the contents of a file in a code block to allow users to download or copy its content to their clipboard. This shortcode is used when the contents of the sample file are generic and reusable, and you want the users to try it out themselves.
|
||||
|
||||
This shortcode takes in two named parameters: `language` and `file`. The mandatory parameter `file` is used to specify the path to the file being displayed. The optional parameter `language` is used to specify the programming language of the file. If the `language` parameter is not provided, the shortcode will attempt to guess the language based on the file extension.
|
||||
|
||||
For example:
|
||||
|
||||
```none
|
||||
{{</* codenew language="yaml" file="application/deployment-scale.yaml" */>}}
|
||||
{{%/* code language="yaml" file="application/deployment-scale.yaml" */%}}
|
||||
```
|
||||
|
||||
The output is:
|
||||
|
||||
{{< codenew language="yaml" file="application/deployment-scale.yaml" >}}
|
||||
{{% code language="yaml" file="application/deployment-scale.yaml" %}}
|
||||
|
||||
When adding a new sample file, such as a YAML file, create the file in one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for the page. In the markdown of your page, use the `codenew` shortcode:
|
||||
When adding a new sample file, such as a YAML file, create the file in one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for the page. In the markdown of your page, use the `code` shortcode:
|
||||
|
||||
```none
|
||||
{{</* codenew file="<RELATIVE-PATH>/example-yaml>" */>}}
|
||||
{{%/* code file="<RELATIVE-PATH>/example-yaml>" */%}}
|
||||
```
|
||||
where `<RELATIVE-PATH>` is the path to the sample file to include, relative to the `examples` directory. The following shortcode references a YAML file located at `/content/en/examples/configmap/configmaps.yaml`.
|
||||
|
||||
```none
|
||||
{{</* codenew file="configmap/configmaps.yaml" */>}}
|
||||
{{%/* code file="configmap/configmaps.yaml" */%}}
|
||||
```
|
||||
|
||||
The legacy `{{%/* codenew */%}}` shortcode is being replaced by `{{%/* code */%}}`.
|
||||
Use `{{%/* code */%}}` in new documentation.
|
||||
|
||||
## Third party content marker
|
||||
|
||||
Running Kubernetes requires third-party software. For example: you
|
||||
|
|
|
@ -124,22 +124,16 @@ one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for
|
|||
the topic. In your topic file, use the `codenew` shortcode:
|
||||
|
||||
```none
|
||||
{{</* codenew file="<RELPATH>/my-example-yaml>" */>}}
|
||||
{{%/* codenew file="<RELPATH>/my-example-yaml>" */%}}
|
||||
```
|
||||
where `<RELPATH>` is the path to the file to include, relative to the
|
||||
`examples` directory. The following Hugo shortcode references a YAML
|
||||
file located at `/content/en/examples/pods/storage/gce-volume.yaml`.
|
||||
|
||||
```none
|
||||
{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
|
||||
{{%/* codenew file="pods/storage/gce-volume.yaml" */%}}
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
To show raw Hugo shortcodes as in the above example and prevent Hugo
|
||||
from interpreting them, use C-style comments directly after the `<` and before
|
||||
the `>` characters. View the code for this page for an example.
|
||||
{{< /note >}}
|
||||
|
||||
## Showing how to create an API object from a configuration file
|
||||
|
||||
If you need to demonstrate how to create an API object based on a
|
||||
|
|
|
@ -118,6 +118,8 @@ the `admissionregistration.k8s.io/v1alpha1` API.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller allows all pods into the cluster. It is **deprecated** because
|
||||
its behavior is the same as if there were no admission controller at all.
|
||||
|
||||
|
@ -125,10 +127,14 @@ its behavior is the same as if there were no admission controller at all.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
Rejects all requests. AlwaysDeny is **deprecated** as it has no real meaning.
|
||||
|
||||
### AlwaysPullImages {#alwayspullimages}
|
||||
|
||||
**Type**: Mutating and Validating.
|
||||
|
||||
This admission controller modifies every new Pod to force the image pull policy to `Always`. This is useful in a
|
||||
multitenant cluster so that users can be assured that their private images can only be used by those
|
||||
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
|
||||
|
@ -139,6 +145,8 @@ required.
|
|||
|
||||
### CertificateApproval {#certificateapproval}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller observes requests to approve CertificateSigningRequest resources and performs additional
|
||||
authorization checks to ensure the approving user has permission to **approve** certificate requests with the
|
||||
`spec.signerName` requested on the CertificateSigningRequest resource.
|
||||
|
@ -148,6 +156,8 @@ information on the permissions required to perform different actions on Certific
|
|||
|
||||
### CertificateSigning {#certificatesigning}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller observes updates to the `status.certificate` field of CertificateSigningRequest resources
|
||||
and performs additional authorization checks to ensure the signing user has permission to **sign** certificate
|
||||
requests with the `spec.signerName` requested on the CertificateSigningRequest resource.
|
||||
|
@ -157,12 +167,16 @@ information on the permissions required to perform different actions on Certific
|
|||
|
||||
### CertificateSubjectRestriction {#certificatesubjectrestriction}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller observes creation of CertificateSigningRequest resources that have a `spec.signerName`
|
||||
of `kubernetes.io/kube-apiserver-client`. It rejects any request that specifies a 'group' (or 'organization attribute')
|
||||
of `system:masters`.
|
||||
|
||||
### DefaultIngressClass {#defaultingressclass}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This admission controller observes creation of `Ingress` objects that do not request any specific
|
||||
ingress class and automatically adds a default ingress class to them. This way, users that do not
|
||||
request any special ingress class do not need to care about them at all and they will get the
|
||||
|
@ -179,6 +193,8 @@ classes and how to mark one as default.
|
|||
|
||||
### DefaultStorageClass {#defaultstorageclass}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This admission controller observes creation of `PersistentVolumeClaim` objects that do not request any specific storage class
|
||||
and automatically adds a default storage class to them.
|
||||
This way, users that do not request any special storage class do not need to care about them at all and they
|
||||
|
@ -194,6 +210,8 @@ storage classes and how to mark a storage class as default.
|
|||
|
||||
### DefaultTolerationSeconds {#defaulttolerationseconds}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This admission controller sets the default forgiveness toleration for pods to tolerate
|
||||
the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters
|
||||
`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already
|
||||
|
@ -203,6 +221,8 @@ The default value for `default-not-ready-toleration-seconds` and `default-unreac
|
|||
|
||||
### DenyServiceExternalIPs
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller rejects all net-new usage of the `Service` field `externalIPs`. This
|
||||
feature is very powerful (allows network traffic interception) and not well
|
||||
controlled by policy. When enabled, users of the cluster may not create new
|
||||
|
@ -220,6 +240,8 @@ This admission controller is disabled by default.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="alpha" >}}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller mitigates the problem where the API server gets flooded by
|
||||
requests to store new Events. The cluster admin can specify event rate limits by:
|
||||
|
||||
|
@ -266,6 +288,8 @@ This admission controller is disabled by default.
|
|||
|
||||
### ExtendedResourceToleration {#extendedresourcetoleration}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This plug-in facilitates creation of dedicated nodes with extended resources.
|
||||
If operators want to create dedicated nodes with extended resources (like GPUs, FPGAs etc.), they are expected to
|
||||
[taint the node](/docs/concepts/scheduling-eviction/taint-and-toleration/#example-use-cases) with the extended resource
|
||||
|
@ -277,6 +301,8 @@ This admission controller is disabled by default.
|
|||
|
||||
### ImagePolicyWebhook {#imagepolicywebhook}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
The ImagePolicyWebhook admission controller allows a backend webhook to make admission decisions.
|
||||
|
||||
This admission controller is disabled by default.
|
||||
|
@ -439,6 +465,8 @@ In any case, the annotations are provided by the user and are not validated by K
|
|||
|
||||
### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller denies any pod that defines `AntiAffinity` topology key other than
|
||||
`kubernetes.io/hostname` in `requiredDuringSchedulingRequiredDuringExecution`.
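For reference, a hard Pod anti-affinity rule that uses the permitted topology key looks roughly like the sketch below; it uses the `requiredDuringSchedulingIgnoredDuringExecution` field of the current API, and the labels, names, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anti-affinity-demo              # illustrative name
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname   # other topology keys would be denied
        labelSelector:
          matchLabels:
            app: demo
  containers:
  - name: app
    image: registry.example/app:1.0     # hypothetical image
```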
|
||||
|
||||
|
@ -446,6 +474,8 @@ This admission controller is disabled by default.
|
|||
|
||||
### LimitRanger {#limitranger}
|
||||
|
||||
**Type**: Mutating and Validating.
|
||||
|
||||
This admission controller will observe the incoming request and ensure that it does not violate
|
||||
any of the constraints enumerated in the `LimitRange` object in a `Namespace`. If you are using
|
||||
`LimitRange` objects in your Kubernetes deployment, you MUST use this admission controller to
|
||||
|
@ -459,6 +489,8 @@ for more details.
|
|||
|
||||
### MutatingAdmissionWebhook {#mutatingadmissionwebhook}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This admission controller calls any mutating webhooks which match the request. Matching
|
||||
webhooks are called in serial; each one may modify the object if it desires.
|
||||
|
||||
|
@ -487,6 +519,8 @@ group/version via the `--runtime-config` flag, both are on by default.
|
|||
|
||||
### NamespaceAutoProvision {#namespaceautoprovision}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This admission controller examines all incoming requests on namespaced resources and checks
|
||||
whether the referenced namespace exists.
|
||||
It creates a namespace if it cannot be found.
|
||||
|
@ -495,11 +529,15 @@ a namespace prior to its usage.
|
|||
|
||||
### NamespaceExists {#namespaceexists}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller checks all requests on namespaced resources other than `Namespace` itself.
|
||||
If the namespace referenced from a request doesn't exist, the request is rejected.
|
||||
|
||||
### NamespaceLifecycle {#namespacelifecycle}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller enforces that a `Namespace` that is undergoing termination cannot have
|
||||
new objects created in it, and ensures that requests in a non-existent `Namespace` are rejected.
|
||||
This admission controller also prevents deletion of three system reserved namespaces `default`,
|
||||
|
@ -511,6 +549,8 @@ running this admission controller.
|
|||
|
||||
### NodeRestriction {#noderestriction}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller limits the `Node` and `Pod` objects a kubelet can modify. In order to be limited by this admission controller,
|
||||
kubelets must use credentials in the `system:nodes` group, with a username in the form `system:node:<nodeName>`.
|
||||
Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node.
|
||||
|
@ -543,6 +583,8 @@ permissions required to operate correctly.
|
|||
|
||||
### OwnerReferencesPermissionEnforcement {#ownerreferencespermissionenforcement}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller protects the access to the `metadata.ownerReferences` of an object
|
||||
so that only users with **delete** permission to the object can change it.
|
||||
This admission controller also protects the access to `metadata.ownerReferences[x].blockOwnerDeletion`
|
||||
|
@ -553,6 +595,8 @@ subresource of the referenced *owner* can change it.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller implements additional validations for checking incoming
|
||||
`PersistentVolumeClaim` resize requests.
|
||||
|
||||
|
@ -582,6 +626,8 @@ For more information about persistent volume claims, see [PersistentVolumeClaims
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This admission controller automatically attaches region or zone labels to PersistentVolumes
|
||||
as defined by the cloud provider (for example, Azure or GCP).
|
||||
It helps ensure the Pods and the PersistentVolumes mounted are in the same
|
||||
|
@ -597,6 +643,8 @@ This admission controller is disabled by default.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.5" state="alpha" >}}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller defaults and limits what node selectors may be used within a namespace
|
||||
by reading a namespace annotation and a global configuration.
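As a sketch, the per-namespace part is an annotation on the Namespace object; the selector value below is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                          # illustrative namespace
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: "env=production"   # illustrative selector
```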
|
||||
|
||||
|
@ -663,6 +711,8 @@ admission plugin, which allows preventing pods from running on specifically tain
|
|||
|
||||
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
The PodSecurity admission controller checks new Pods before they are
|
||||
admitted, and determines whether each Pod should be admitted based on the requested security context and the restrictions on permitted
|
||||
[Pod Security Standards](/docs/concepts/security/pod-security-standards/)
|
||||
|
@ -677,6 +727,8 @@ PodSecurity replaced an older admission controller named PodSecurityPolicy.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.7" state="alpha" >}}
|
||||
|
||||
**Type**: Mutating and Validating.
|
||||
|
||||
The PodTolerationRestriction admission controller verifies any conflict between tolerations of a
|
||||
pod and the tolerations of its namespace.
|
||||
It rejects the pod request if there is a conflict.
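As a rough sketch, the namespace-level inputs are annotations on the Namespace object; the annotation keys shown are the ones generally documented for this plugin, and the values are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apps-team                        # illustrative namespace
  annotations:
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated"}]'
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated"}]'
```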
|
||||
|
@ -707,12 +759,16 @@ This admission controller is disabled by default.
|
|||
|
||||
### Priority {#priority}
|
||||
|
||||
**Type**: Mutating and Validating.
|
||||
|
||||
The priority admission controller uses the `priorityClassName` field and populates the integer
|
||||
value of the priority.
|
||||
If the priority class is not found, the Pod is rejected.
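For example (names, image, and the priority value are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority                 # illustrative name
value: 1000000
---
apiVersion: v1
kind: Pod
metadata:
  name: priority-demo                 # illustrative name
spec:
  priorityClassName: high-priority    # resolved by this admission controller into .spec.priority
  containers:
  - name: app
    image: registry.example/app:1.0   # hypothetical image
```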
|
||||
|
||||
### ResourceQuota {#resourcequota}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller will observe the incoming request and ensure that it does not violate
|
||||
any of the constraints enumerated in the `ResourceQuota` object in a `Namespace`. If you are
|
||||
using `ResourceQuota` objects in your Kubernetes deployment, you MUST use this admission
|
||||
|
@ -723,6 +779,8 @@ and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for
|
|||
|
||||
### RuntimeClass {#runtimeclass}
|
||||
|
||||
**Type**: Mutating and Validating.
|
||||
|
||||
If you define a RuntimeClass with [Pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
|
||||
configured, this admission controller checks incoming Pods.
|
||||
When enabled, this admission controller rejects any Pod create requests
|
||||
|
@ -736,6 +794,8 @@ for more information.
|
|||
|
||||
### SecurityContextDeny {#securitycontextdeny}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
{{< feature-state for_k8s_version="v1.27" state="deprecated" >}}
|
||||
|
||||
{{< caution >}}
|
||||
|
@ -777,6 +837,8 @@ article details the PodSecurityPolicy historical context and the birth of the
|
|||
|
||||
### ServiceAccount {#serviceaccount}
|
||||
|
||||
**Type**: Mutating and Validating.
|
||||
|
||||
This admission controller implements automation for
|
||||
[serviceAccounts](/docs/tasks/configure-pod-container/configure-service-account/).
|
||||
The Kubernetes project strongly recommends enabling this admission controller.
|
||||
|
@ -785,6 +847,8 @@ You should enable this admission controller if you intend to make any use of Kub
|
|||
|
||||
### StorageObjectInUseProtection
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
The `StorageObjectInUseProtection` plugin adds the `kubernetes.io/pvc-protection` or `kubernetes.io/pv-protection`
|
||||
finalizers to newly created Persistent Volume Claims (PVCs) or Persistent Volumes (PV).
|
||||
In case a user deletes a PVC or PV the PVC or PV is not removed until the finalizer is removed
|
||||
|
@ -795,6 +859,8 @@ for more detailed information.
|
|||
|
||||
### TaintNodesByCondition {#taintnodesbycondition}
|
||||
|
||||
**Type**: Mutating.
|
||||
|
||||
This admission controller {{< glossary_tooltip text="taints" term_id="taint" >}} newly created
|
||||
Nodes as `NotReady` and `NoSchedule`. That tainting avoids a race condition that could cause Pods
|
||||
to be scheduled on new Nodes before their taints were updated to accurately reflect their reported
|
||||
|
@ -802,12 +868,16 @@ conditions.
|
|||
|
||||
### ValidatingAdmissionPolicy {#validatingadmissionpolicy}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
[This admission controller](/docs/reference/access-authn-authz/validating-admission-policy/) implements the CEL validation for incoming matched requests.
|
||||
It is enabled when both the `ValidatingAdmissionPolicy` feature gate and the `admissionregistration.k8s.io/v1alpha1` API group/version are enabled.
|
||||
If any of the ValidatingAdmissionPolicy fails, the request fails.
|
||||
|
||||
### ValidatingAdmissionWebhook {#validatingadmissionwebhook}
|
||||
|
||||
**Type**: Validating.
|
||||
|
||||
This admission controller calls any validating webhooks which match the request. Matching
|
||||
webhooks are called in parallel; if any of them rejects the request, the request
|
||||
fails. This admission controller only runs in the validation phase; the webhooks it calls may not
|
||||
|
|
|
@ -78,7 +78,7 @@ To allow creating a CertificateSigningRequest and retrieving any CertificateSign
|
|||
|
||||
For example:
|
||||
|
||||
{{< codenew file="access/certificate-signing-request/clusterrole-create.yaml" >}}
|
||||
{{% code file="access/certificate-signing-request/clusterrole-create.yaml" %}}
|
||||
|
||||
To allow approving a CertificateSigningRequest:
|
||||
|
||||
|
@ -88,7 +88,7 @@ To allow approving a CertificateSigningRequest:
|
|||
|
||||
For example:
|
||||
|
||||
{{< codenew file="access/certificate-signing-request/clusterrole-approve.yaml" >}}
|
||||
{{% code file="access/certificate-signing-request/clusterrole-approve.yaml" %}}
|
||||
|
||||
To allow signing a CertificateSigningRequest:
|
||||
|
||||
|
@ -96,7 +96,7 @@ To allow signing a CertificateSigningRequest:
|
|||
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/status`
|
||||
* Verbs: `sign`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`
|
||||
|
||||
{{< codenew file="access/certificate-signing-request/clusterrole-sign.yaml" >}}
|
||||
{{% code file="access/certificate-signing-request/clusterrole-sign.yaml" %}}
|
||||
|
||||
|
||||
## Signers
|
||||
|
|
|
@ -1240,7 +1240,7 @@ guidance for restricting this access in existing clusters.
|
|||
If you want new clusters to retain this level of access in the aggregated roles,
|
||||
you can create the following ClusterRole:
|
||||
|
||||
{{< codenew file="access/endpoints-aggregated.yaml" >}}
|
||||
{{% code file="access/endpoints-aggregated.yaml" %}}
|
||||
|
||||
## Upgrading from ABAC
|
||||
|
||||
|
|
|
@ -265,7 +265,7 @@ updates that Secret with that generated token data.
|
|||
|
||||
Here is a sample manifest for such a Secret:
|
||||
|
||||
{{< codenew file="secret/serviceaccount/mysecretname.yaml" >}}
|
||||
{{% code file="secret/serviceaccount/mysecretname.yaml" %}}
|
||||
|
||||
To create a Secret based on this example, run:
|
||||
|
||||
|
|
|
@ -316,11 +316,12 @@ Examples on escaping:
|
|||
|
||||
Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1].
|
||||
Concatenation on arrays with x-kubernetes-list-type uses the semantics of the list type:
|
||||
- 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and
|
||||
non-intersecting elements in `Y` are appended, retaining their partial order.
|
||||
- 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values
|
||||
are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with
|
||||
non-intersecting keys are appended, retaining their partial order.
|
||||
|
||||
- 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and
|
||||
non-intersecting elements in `Y` are appended, retaining their partial order.
|
||||
- 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values
|
||||
are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with
|
||||
non-intersecting keys are appended, retaining their partial order.
|
||||
|
||||
#### Validation expression examples
|
||||
|
||||
|
@ -359,7 +360,7 @@ resource to be evaluated.
|
|||
|
||||
Here is an example illustrating a few different uses for match conditions:
|
||||
|
||||
{{< codenew file="access/validating-admission-policy-match-conditions.yaml" >}}
|
||||
{{% code file="access/validating-admission-policy-match-conditions.yaml" %}}
|
||||
|
||||
Match conditions have access to the same CEL variables as validation expressions.
|
||||
|
||||
|
@ -368,8 +369,8 @@ the request is determined as follows:
|
|||
|
||||
1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the policy.
|
||||
2. Otherwise:
|
||||
- for [`failurePolicy: Fail`](#failure-policy), reject the request (without evaluating the policy).
|
||||
- for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the policy.
|
||||
- for [`failurePolicy: Fail`](#failure-policy), reject the request (without evaluating the policy).
|
||||
- for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the policy.
|
||||
|
||||
### Audit annotations
|
||||
|
||||
|
@ -377,7 +378,7 @@ the request is determined as follows:
|
|||
|
||||
For example, here is an admission policy with an audit annotation:
|
||||
|
||||
{{< codenew file="access/validating-admission-policy-audit-annotation.yaml" >}}
|
||||
{{% code file="access/validating-admission-policy-audit-annotation.yaml" %}}
|
||||
|
||||
When an API request is validated with this admission policy, the resulting audit event will look like:
|
||||
|
||||
|
@ -414,7 +415,7 @@ Unlike validations, message expression must evaluate to a string.
|
|||
For example, to better inform the user of the reason of denial when the policy refers to a parameter,
|
||||
we can have the following validation:
|
||||
|
||||
{{< codenew file="access/deployment-replicas-policy.yaml" >}}
|
||||
{{% code file="access/deployment-replicas-policy.yaml" %}}
|
||||
|
||||
After creating a params object that limits the replicas to 3 and setting up the binding,
|
||||
when we try to create a deployment with 5 replicas, we will receive the following message.
|
||||
|
|
|
@ -587,7 +587,7 @@ In the following table:
|
|||
|
||||
- `DynamicKubeletConfig`: Enable the dynamic configuration of kubelet. The
|
||||
feature is no longer supported outside of supported skew policy. The feature
|
||||
gate was removed from kubelet in 1.24. See [Reconfigure kubelet](/docs/tasks/administer-cluster/reconfigure-kubelet/).
|
||||
gate was removed from kubelet in 1.24.
|
||||
|
||||
- `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of
|
||||
volume topology and handle PV provisioning.
|
||||
|
@ -792,7 +792,6 @@ In the following table:
|
|||
A node is eligible for exclusion if labelled with "`node.kubernetes.io/exclude-from-external-load-balancers`".
|
||||
|
||||
- `ServiceTopology`: Enable service to route traffic based upon the Node topology of the cluster.
|
||||
See [ServiceTopology](/docs/concepts/services-networking/service-topology/) for more details.
|
||||
|
||||
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain Name(FQDN) as the
|
||||
hostname of a pod. See
|
||||
|
|
|
@ -189,7 +189,7 @@ For a reference to old feature gates that are removed, please refer to
|
|||
| `SidecarContainers` | `false` | Alpha | 1.28 | |
|
||||
| `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | 1.21 |
|
||||
| `SizeMemoryBackedVolumes` | `true` | Beta | 1.22 | |
|
||||
| `StableLoadBalancerNodeGet` | `true` | Beta | 1.27 | |
|
||||
| `StableLoadBalancerNodeSet` | `true` | Beta | 1.27 | |
|
||||
| `StatefulSetAutoDeletePVC` | `false` | Alpha | 1.23 | 1.26 |
|
||||
| `StatefulSetAutoDeletePVC` | `false` | Beta | 1.27 | |
|
||||
| `StatefulSetStartOrdinal` | `false` | Alpha | 1.26 | 1.26 |
|
||||
|
@ -214,7 +214,7 @@ For a reference to old feature gates that are removed, please refer to
|
|||
| `WinDSR` | `false` | Alpha | 1.14 | |
|
||||
| `WinOverlay` | `false` | Alpha | 1.14 | 1.19 |
|
||||
| `WinOverlay` | `true` | Beta | 1.20 | |
|
||||
| `WindowsHostNetwork` | `false` | Alpha | 1.26| |
|
||||
| `WindowsHostNetwork` | `true` | Alpha | 1.26| |
|
||||
{{< /table >}}
|
||||
|
||||
### Feature gates for graduated or deprecated features
|
||||
|
@ -748,7 +748,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
for more details.
|
||||
- `SizeMemoryBackedVolumes`: Enable kubelets to determine the size limit for
|
||||
memory-backed volumes (mainly `emptyDir` volumes).
|
||||
- `StableLoadBalancerNodeGet`: Enables less load balancer re-configurations by
|
||||
- `StableLoadBalancerNodeSet`: Enables less load balancer re-configurations by
|
||||
the service controller (KCCM) as an effect of changing node state.
|
||||
- `StatefulSetStartOrdinal`: Allow configuration of the start ordinal in a
|
||||
StatefulSet. See
|
||||
|
|
|
@ -361,10 +361,11 @@ kubelet [flags]
|
|||
APIListChunking=true|false (BETA - default=true)<br/>
|
||||
APIPriorityAndFairness=true|false (BETA - default=true)<br/>
|
||||
APIResponseCompression=true|false (BETA - default=true)<br/>
|
||||
APISelfSubjectReview=true|false (ALPHA - default=false)<br/>
|
||||
APISelfSubjectReview=true|false (BETA - default=true)<br/>
|
||||
APIServerIdentity=true|false (BETA - default=true)<br/>
|
||||
APIServerTracing=true|false (ALPHA - default=false)<br/>
|
||||
AggregatedDiscoveryEndpoint=true|false (ALPHA - default=false)<br/>
|
||||
APIServerTracing=true|false (BETA - default=true)<br/>
|
||||
AdmissionWebhookMatchConditions=true|false (ALPHA - default=false)<br/>
|
||||
AggregatedDiscoveryEndpoint=true|false (BETA - default=true)<br/>
|
||||
AllAlpha=true|false (ALPHA - default=false)<br/>
|
||||
AllBeta=true|false (BETA - default=false)<br/>
|
||||
AnyVolumeDataSource=true|false (BETA - default=true)<br/>
|
||||
|
@ -376,27 +377,29 @@ CSIMigrationPortworx=true|false (BETA - default=false)<br/>
|
|||
CSIMigrationRBD=true|false (ALPHA - default=false)<br/>
|
||||
CSINodeExpandSecret=true|false (BETA - default=true)<br/>
|
||||
CSIVolumeHealth=true|false (ALPHA - default=false)<br/>
|
||||
ComponentSLIs=true|false (ALPHA - default=false)<br/>
|
||||
CloudControllerManagerWebhook=true|false (ALPHA - default=false)<br/>
|
||||
CloudDualStackNodeIPs=true|false (ALPHA - default=false)<br/>
|
||||
ClusterTrustBundle=true|false (ALPHA - default=false)<br/>
|
||||
ComponentSLIs=true|false (BETA - default=true)<br/>
|
||||
ContainerCheckpoint=true|false (ALPHA - default=false)<br/>
|
||||
ContextualLogging=true|false (ALPHA - default=false)<br/>
|
||||
CronJobTimeZone=true|false (BETA - default=true)<br/>
|
||||
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)<br/>
|
||||
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br/>
|
||||
CustomResourceValidationExpressions=true|false (BETA - default=true)<br/>
|
||||
DisableCloudProviders=true|false (ALPHA - default=false)<br/>
|
||||
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)<br/>
|
||||
DownwardAPIHugePages=true|false (BETA - default=true)<br/>
|
||||
DynamicResourceAllocation=true|false (ALPHA - default=false)<br/>
|
||||
EventedPLEG=true|false (ALPHA - default=false)<br/>
|
||||
ElasticIndexedJob=true|false (BETA - default=true)<br/>
|
||||
EventedPLEG=true|false (BETA - default=false)<br/>
|
||||
ExpandedDNSConfig=true|false (BETA - default=true)<br/>
|
||||
ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)<br/>
|
||||
GRPCContainerProbe=true|false (BETA - default=true)<br/>
|
||||
GracefulNodeShutdown=true|false (BETA - default=true)<br/>
|
||||
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br/>
|
||||
HPAContainerMetrics=true|false (ALPHA - default=false)<br/>
|
||||
HPAContainerMetrics=true|false (BETA - default=true)<br/>
|
||||
HPAScaleToZero=true|false (ALPHA - default=false)<br/>
|
||||
HonorPVReclaimPolicy=true|false (ALPHA - default=false)<br/>
|
||||
IPTablesOwnershipCleanup=true|false (ALPHA - default=false)<br/>
|
||||
IPTablesOwnershipCleanup=true|false (BETA - default=true)<br/>
|
||||
InPlacePodVerticalScaling=true|false (ALPHA - default=false)<br/>
|
||||
InTreePluginAWSUnregister=true|false (ALPHA - default=false)<br/>
|
||||
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)<br/>
|
||||
InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)<br/>
|
||||
|
@ -405,66 +408,70 @@ InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)<br/>
|
|||
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br/>
|
||||
InTreePluginRBDUnregister=true|false (ALPHA - default=false)<br/>
|
||||
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)<br/>
|
||||
JobMutableNodeSchedulingDirectives=true|false (BETA - default=true)<br/>
|
||||
JobPodFailurePolicy=true|false (BETA - default=true)<br/>
|
||||
JobReadyPods=true|false (BETA - default=true)<br/>
|
||||
KMSv2=true|false (ALPHA - default=false)<br/>
|
||||
KMSv2=true|false (BETA - default=true)<br/>
|
||||
KubeletInUserNamespace=true|false (ALPHA - default=false)<br/>
|
||||
KubeletPodResources=true|false (BETA - default=true)<br/>
|
||||
KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)<br/>
|
||||
KubeletPodResourcesGet=true|false (ALPHA - default=false)<br/>
|
||||
KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)<br/>
|
||||
KubeletTracing=true|false (ALPHA - default=false)<br/>
|
||||
LegacyServiceAccountTokenTracking=true|false (ALPHA - default=false)<br/>
|
||||
KubeletTracing=true|false (BETA - default=true)<br/>
|
||||
LegacyServiceAccountTokenTracking=true|false (BETA - default=true)<br/>
|
||||
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)<br/>
|
||||
LogarithmicScaleDown=true|false (BETA - default=true)<br/>
|
||||
LoggingAlphaOptions=true|false (ALPHA - default=false)<br/>
|
||||
LoggingBetaOptions=true|false (BETA - default=true)<br/>
|
||||
MatchLabelKeysInPodTopologySpread=true|false (ALPHA - default=false)<br/>
|
||||
MatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)<br/>
|
||||
MaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br/>
|
||||
MemoryManager=true|false (BETA - default=true)<br/>
|
||||
MemoryQoS=true|false (ALPHA - default=false)<br/>
|
||||
MinDomainsInPodTopologySpread=true|false (BETA - default=false)<br/>
|
||||
MinimizeIPTablesRestore=true|false (ALPHA - default=false)<br/>
|
||||
MinDomainsInPodTopologySpread=true|false (BETA - default=true)<br/>
|
||||
MinimizeIPTablesRestore=true|false (BETA - default=true)<br/>
|
||||
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)<br/>
|
||||
MultiCIDRServiceAllocator=true|false (ALPHA - default=false)<br/>
|
||||
NetworkPolicyStatus=true|false (ALPHA - default=false)<br/>
|
||||
NewVolumeManagerReconstruction=true|false (BETA - default=true)<br/>
|
||||
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)<br/>
|
||||
NodeLogQuery=true|false (ALPHA - default=false)<br/>
|
||||
NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)<br/>
|
||||
NodeSwap=true|false (ALPHA - default=false)<br/>
|
||||
OpenAPIEnums=true|false (BETA - default=true)<br/>
|
||||
OpenAPIV3=true|false (BETA - default=true)<br/>
|
||||
PDBUnhealthyPodEvictionPolicy=true|false (ALPHA - default=false)<br/>
|
||||
PDBUnhealthyPodEvictionPolicy=true|false (BETA - default=true)<br/>
|
||||
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br/>
|
||||
PodDeletionCost=true|false (BETA - default=true)<br/>
|
||||
PodDisruptionConditions=true|false (BETA - default=true)<br/>
|
||||
PodHasNetworkCondition=true|false (ALPHA - default=false)<br/>
|
||||
PodSchedulingReadiness=true|false (ALPHA - default=false)<br/>
|
||||
PodSchedulingReadiness=true|false (BETA - default=true)<br/>
|
||||
ProbeTerminationGracePeriod=true|false (BETA - default=true)<br/>
|
||||
ProcMountType=true|false (ALPHA - default=false)<br/>
|
||||
ProxyTerminatingEndpoints=true|false (BETA - default=true)<br/>
|
||||
QOSReserved=true|false (ALPHA - default=false)<br/>
|
||||
ReadWriteOncePod=true|false (ALPHA - default=false)<br/>
|
||||
ReadWriteOncePod=true|false (BETA - default=true)<br/>
|
||||
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br/>
|
||||
RemainingItemCount=true|false (BETA - default=true)<br/>
|
||||
RetroactiveDefaultStorageClass=true|false (BETA - default=true)<br/>
|
||||
RotateKubeletServerCertificate=true|false (BETA - default=true)<br/>
|
||||
SELinuxMountReadWriteOncePod=true|false (ALPHA - default=false)<br/>
|
||||
SeccompDefault=true|false (BETA - default=true)<br/>
|
||||
ServerSideFieldValidation=true|false (BETA - default=true)<br/>
|
||||
SELinuxMountReadWriteOncePod=true|false (BETA - default=true)<br/>
|
||||
SecurityContextDeny=true|false (ALPHA - default=false)<br/>
|
||||
ServiceNodePortStaticSubrange=true|false (ALPHA - default=false)<br/>
|
||||
SizeMemoryBackedVolumes=true|false (BETA - default=true)<br/>
|
||||
StatefulSetAutoDeletePVC=true|false (ALPHA - default=false)<br/>
|
||||
StatefulSetStartOrdinal=true|false (ALPHA - default=false)<br/>
|
||||
StableLoadBalancerNodeSet=true|false (BETA - default=true)<br/>
|
||||
StatefulSetAutoDeletePVC=true|false (BETA - default=true)<br/>
|
||||
StatefulSetStartOrdinal=true|false (BETA - default=true)<br/>
|
||||
StorageVersionAPI=true|false (ALPHA - default=false)<br/>
|
||||
StorageVersionHash=true|false (BETA - default=true)<br/>
|
||||
TopologyAwareHints=true|false (BETA - default=true)<br/>
|
||||
TopologyManager=true|false (BETA - default=true)<br/>
|
||||
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br/>
|
||||
TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)<br/>
|
||||
TopologyManagerPolicyOptions=true|false (ALPHA - default=false)<br/>
|
||||
UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)<br/>
|
||||
ValidatingAdmissionPolicy=true|false (ALPHA - default=false)<br/>
|
||||
VolumeCapacityPriority=true|false (ALPHA - default=false)<br/>
|
||||
WatchList=true|false (ALPHA - default=false)<br/>
|
||||
WinDSR=true|false (ALPHA - default=false)<br/>
|
||||
WinOverlay=true|false (BETA - default=true)<br/>
|
||||
WindowsHostNetwork=true|false (ALPHA - default=true)<br/>
|
||||
WindowsHostNetwork=true|false (ALPHA - default=true)</p>
|
||||
(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
|
||||
</tr>
|
||||
|
||||
|
|
|
@ -218,7 +218,7 @@ configuration types to be used during a <code>kubeadm init</code> run.</p>
|
|||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">pathType</span>:<span style="color:#bbb"> </span>File<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">scheduler</span>:<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">extraArgs</span>:<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">address</span>:<span style="color:#bbb"> </span><span style="color:#d14">"10.100.0.1"</span><span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">bind-address</span>:<span style="color:#bbb"> </span><span style="color:#d14">"10.100.0.1"</span><span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">extraVolumes</span>:<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span>- <span style="color:#000;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d14">"some-volume"</span><span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">hostPath</span>:<span style="color:#bbb"> </span><span style="color:#d14">"/etc/some-path"</span><span style="color:#bbb">
|
||||
|
|
|
@ -221,7 +221,7 @@ anonymous:
    enabled: false
  webhook:
    enabled: true
    cacheTTL: "2m"
</pre></code></p>
</td>
</tr>
@ -0,0 +1,18 @@
---
title: Probe
id: probe
date: 2023-03-21
full_link: /docs/concepts/workloads/pods/pod-lifecycle/#container-probes

short_description: >
  A check performed periodically by the kubelet on a container in a Pod.

tags:
- tool
---
A check that the kubelet periodically performs against a container that is
running in a Pod; the result determines the container's state and health and informs the container's lifecycle.

<!--more-->

To learn more, read [container probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).
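Probes are declared per container in a Pod spec. A minimal sketch of an HTTP liveness probe; the image and endpoint are illustrative and not taken from the glossary entry:

```yaml
# The kubelet GETs / on port 80 every 10 seconds; repeated failures
# mark the container unhealthy and trigger a restart.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo       # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25       # illustrative image
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```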
@ -0,0 +1,15 @@
# See the OWNERS docs at https://go.k8s.io/owners

approvers:
- committee-security-response
- sig-security-leads

reviewers:
- committee-security-response
- sig-security-leads

labels:
- sig/security
- sig/docs
- area/security
- committee/security-response
@ -108,6 +108,6 @@ JSONPath regular expressions are not supported. If you want to match using regul
kubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'

# The following command achieves the desired result
kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).spec.containers[].image'
kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).metadata.name'
```
{{< /note >}}
@ -43,7 +43,7 @@ Starting from v1.9, this label is deprecated.

### app.kubernetes.io/instance

Type: Label

Example: `app.kubernetes.io/instance: "mysql-abcxzy"`
@ -180,6 +180,7 @@ There is no relation between the value of this label and object UID.
### applyset.kubernetes.io/is-parent-type (alpha) {#applyset-kubernetes-io-is-parent-type}

Type: Label

Example: `applyset.kubernetes.io/is-parent-type: "true"`

Used on: Custom Resource Definition (CRD)
@ -414,19 +415,19 @@ This label can have one of three values: `Reconcile`, `EnsureExists`, or `Ignore
- `Ignore`: Addon resources will be ignored. This mode is useful for add-ons that are not
  compatible with the add-on manager or that are managed by another controller.

For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md)
For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md).

### beta.kubernetes.io/arch (deprecated)

Type: Label

This label has been deprecated. Please use `kubernetes.io/arch` instead.
This label has been deprecated. Please use [`kubernetes.io/arch`](#kubernetes-io-arch) instead.

### beta.kubernetes.io/os (deprecated)

Type: Label

This label has been deprecated. Please use `kubernetes.io/os` instead.
This label has been deprecated. Please use [`kubernetes.io/os`](#kubernetes-io-os) instead.

### kube-aggregator.kubernetes.io/automanaged {#kube-aggregator-kubernetesio-automanaged}
@ -773,7 +774,7 @@ Kubernetes makes a few assumptions about the structure of zones and regions:

1. regions and zones are hierarchical: zones are strict subsets of regions and
   no zone can be in 2 regions
2) zone names are unique across regions; for example region "africa-east-1" might be comprised
2. zone names are unique across regions; for example region "africa-east-1" might be comprised
   of zones "africa-east-1a" and "africa-east-1b"

It should be safe to assume that topology labels do not change.
@ -811,7 +812,7 @@ Example: `volume.beta.kubernetes.io/storage-provisioner: "k8s.io/minikube-hostpa
Used on: PersistentVolumeClaim

This annotation has been deprecated since v1.23.
See [volume.kubernetes.io/storage-provisioner](#volume-kubernetes-io-storage-provisioner)
See [volume.kubernetes.io/storage-provisioner](#volume-kubernetes-io-storage-provisioner).

### volume.beta.kubernetes.io/storage-class (deprecated)
@ -1024,6 +1025,7 @@ Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassN
### storageclass.kubernetes.io/is-default-class

Type: Annotation

Example: `storageclass.kubernetes.io/is-default-class: "true"`

Used on: StorageClass
@ -1031,7 +1033,7 @@ Used on: StorageClass
When a single StorageClass resource has this annotation set to `"true"`, new PersistentVolumeClaim
resources without a class specified will be assigned this default class.
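To make that behaviour concrete, a sketch of a StorageClass marked as the cluster default; the name and provisioner are illustrative only:

```yaml
# Any new PersistentVolumeClaim that omits storageClassName is assigned this class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                                 # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner        # illustrative provisioner
```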

### alpha.kubernetes.io/provided-node-ip
### alpha.kubernetes.io/provided-node-ip (alpha) {#alpha-kubernetes-io-provided-node-ip}

Type: Annotation
@ -1096,8 +1098,7 @@ container.
{{< note >}}
This annotation is deprecated. You should use the
[`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container)
annotation instead.
Kubernetes versions 1.25 and newer ignore this annotation.
annotation instead. Kubernetes versions 1.25 and newer ignore this annotation.
{{< /note >}}

### endpoints.kubernetes.io/over-capacity
@ -1124,17 +1125,10 @@ Example: `batch.kubernetes.io/job-tracking: ""`

Used on: Jobs

The presence of this annotation on a Job indicates that the control plane is
The presence of this annotation on a Job used to indicate that the control plane is
[tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
The control plane uses this annotation to safely transition to tracking Jobs
using finalizers, while the feature is in development.
You should **not** manually add or remove this annotation.

{{< note >}}
Starting from Kubernetes 1.26, this annotation is deprecated.
Kubernetes 1.27 and newer will ignore this annotation and always track Jobs
using finalizers.
{{< /note >}}
Adding or removing this annotation no longer has an effect (Kubernetes v1.27 and later).
All Jobs are tracked with finalizers.

### job-name (deprecated) {#job-name}
@ -1355,7 +1349,7 @@ Type: Label

Example: `feature.node.kubernetes.io/network-sriov.capable: "true"`

Used on: Node

These labels are used by the Node Feature Discovery (NFD) component to advertise
features on a node. All built-in labels use the `feature.node.kubernetes.io` label
@ -1667,7 +1661,7 @@ the integration configures an internal load balancer.
If you use the [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/),
see [`service.beta.kubernetes.io/aws-load-balancer-scheme`](#service-beta-kubernetes-io-aws-load-balancer-scheme).

### service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules (beta) {#service-beta-kubernetes-io-aws-load-balancer-manage-backend-security-group-rules)
### service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules (beta) {#service-beta-kubernetes-io-aws-load-balancer-manage-backend-security-group-rules}

Example: `service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"`
@ -1693,7 +1687,7 @@ balancer to the value you set for _this_ annotation.
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-nlb-target-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-nlb-target-type)
### service.beta.kubernetes.io/aws-load-balancer-nlb-target-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-nlb-target-type}

Example: `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "true"`
@ -2075,10 +2069,10 @@ of the exposed advertise address/port endpoint for that API server instance.

Type: Annotation

Used on: ConfigMap

Example: `kubeadm.kubernetes.io/component-config.hash: 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae`

Used on: ConfigMap

Annotation that kubeadm places on ConfigMaps that it manages for configuring components.
It contains a hash (SHA-256) used to determine if the user has applied settings different
from the kubeadm defaults for a particular component.
@ -2129,4 +2123,3 @@ Taint that kubeadm previously applied on control plane nodes to allow only criti
workloads to schedule on them. Replaced by the
[`node-role.kubernetes.io/control-plane`](#node-role-kubernetes-io-control-plane-taint)
taint. kubeadm no longer sets or uses this deprecated taint.
@ -1,7 +1,7 @@
---
content_type: "reference"
title: Kubelet Device Manager API Versions
weight: 10
weight: 50
---

This page provides details of version compatibility between the Kubernetes
@ -0,0 +1,37 @@
---
content_type: "reference"
title: Node Labels Populated By The Kubelet
weight: 40
---

Kubernetes {{< glossary_tooltip text="nodes" term_id="node" >}} come pre-populated
with a standard set of {{< glossary_tooltip text="labels" term_id="label" >}}.

You can also set your own labels on nodes, either through the kubelet configuration or
using the Kubernetes API.

## Preset labels

The preset labels that Kubernetes sets on nodes are:

* [`kubernetes.io/arch`](/docs/reference/labels-annotations-taints/#kubernetes-io-arch)
* [`kubernetes.io/hostname`](/docs/reference/labels-annotations-taints/#kubernetes-io-hostname)
* [`kubernetes.io/os`](/docs/reference/labels-annotations-taints/#kubernetes-io-os)
* [`node.kubernetes.io/instance-type`](/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type)
  (if known to the kubelet – Kubernetes may not have this information to set the label)
* [`topology.kubernetes.io/region`](/docs/reference/labels-annotations-taints/#topologykubernetesioregion)
  (if known to the kubelet – Kubernetes may not have this information to set the label)
* [`topology.kubernetes.io/zone`](/docs/reference/labels-annotations-taints/#topologykubernetesiozone)
  (if known to the kubelet – Kubernetes may not have this information to set the label)

{{<note>}}
The value of these labels is cloud provider specific and is not guaranteed to be reliable.
For example, the value of `kubernetes.io/hostname` may be the same as the node name in some environments
and a different value in other environments.
{{</note>}}

## {{% heading "whatsnext" %}}

- See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/) for a list of common labels.
- Learn how to [add a label to a node](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
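The preset labels end up in the metadata of each Node object. A sketch of how that can look on a hypothetical worker node; the values are illustrative, and the instance-type and topology labels appear only when the kubelet knows them:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1                                  # hypothetical node name
  labels:
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: worker-1
    kubernetes.io/os: linux
    node.kubernetes.io/instance-type: m5.large    # illustrative value
    topology.kubernetes.io/region: us-east-1      # illustrative value
    topology.kubernetes.io/zone: us-east-1a       # illustrative value
```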
@ -379,14 +379,26 @@ to locate use of deprecated APIs.
* Update custom integrations and controllers to call the non-deprecated APIs
* Change YAML files to reference the non-deprecated APIs

You can use the `kubectl-convert` command (`kubectl convert` prior to v1.20)
to automatically convert an existing object:
You can use the `kubectl convert` command to automatically convert an existing object:

`kubectl-convert -f <file> --output-version <group>/<version>`.
`kubectl convert -f <file> --output-version <group>/<version>`.

For example, to convert an older Deployment to `apps/v1`, you can run:

`kubectl-convert -f ./my-deployment.yaml --output-version apps/v1`
`kubectl convert -f ./my-deployment.yaml --output-version apps/v1`

Note that this may use non-ideal default values. To learn more about a specific
This conversion may use non-ideal default values. To learn more about a specific
resource, check the Kubernetes [API reference](/docs/reference/kubernetes-api/).

{{< note >}}
The `kubectl convert` tool is not installed by default, although
in fact it once was part of `kubectl` itself. For more details, you can read the
[deprecation and removal issue](https://github.com/kubernetes/kubectl/issues/725)
for the built-in subcommand.

To learn how to set up `kubectl convert` on your computer, visit the page that is right for your
operating system:
[Linux](/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin),
[macOS](/docs/tasks/tools/install-kubectl-macos/#install-kubectl-convert-plugin), or
[Windows](/docs/tasks/tools/install-kubectl-windows/#install-kubectl-convert-plugin).
{{< /note >}}
@ -332,7 +332,7 @@ resource and its accompanying controller.

Say a user has defined a deployment with `replicas` set to the desired value:

{{< codenew file="application/ssa/nginx-deployment.yaml" >}}
{{% code file="application/ssa/nginx-deployment.yaml" %}}

And the user has created the deployment using Server-Side Apply like so:

@ -396,7 +396,7 @@ process than it sometimes does.

At this point the user may remove the `replicas` field from their configuration.

{{< codenew file="application/ssa/nginx-deployment-no-replicas.yaml" >}}
{{% code file="application/ssa/nginx-deployment-no-replicas.yaml" %}}

Note that whenever the HPA controller sets the `replicas` field to a new value,
the temporary field manager will no longer own any fields and will be
@ -14,9 +14,12 @@ It uses a tool called [`kOps`](https://github.com/kubernetes/kops).
* Fully automated installation
* Uses DNS to identify clusters
* Self-healing: everything runs in Auto-Scaling Groups
* Multiple OS support (Amazon Linux, Debian, Flatcar, RHEL, Rocky and Ubuntu) - see the [images.md](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)
* High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md)
* Can directly provision, or generate terraform manifests - see the [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)
* Multiple OS support (Amazon Linux, Debian, Flatcar, RHEL, Rocky and Ubuntu) - see the
  [images.md](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)
* High-Availability support - see the
  [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md)
* Can directly provision, or generate terraform manifests - see the
  [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)

## {{% heading "prerequisites" %}}
@ -24,7 +27,10 @@ It uses a tool called [`kOps`](https://github.com/kubernetes/kops).

* You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.

* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them. The IAM user will need [adequate permissions](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user).
* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html),
  generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)
  and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them.
  The IAM user will need [adequate permissions](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user).

<!-- steps -->
@ -34,7 +40,8 @@ It uses a tool called [`kOps`](https://github.com/kubernetes/kops).

#### Installation

Download kops from the [releases page](https://github.com/kubernetes/kops/releases) (it is also convenient to build from source):
Download kops from the [releases page](https://github.com/kubernetes/kops/releases)
(it is also convenient to build from source):

{{< tabs name="kops_installation" >}}
{{% tab name="macOS" %}}
@ -212,7 +219,8 @@ for production clusters!

### Explore other add-ons

See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, visualization, and control of your Kubernetes cluster.
See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons,
including tools for logging, monitoring, network policy, visualization, and control of your Kubernetes cluster.

## Cleanup
@ -221,6 +229,8 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl
## {{% heading "whatsnext" %}}

* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
* Learn more about `kOps` [advanced usage](https://kops.sigs.k8s.io/) for tutorials, best practices and advanced configuration options.
* Follow `kOps` community discussions on Slack: [community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors).
* Learn more about `kOps` [advanced usage](https://kops.sigs.k8s.io/) for tutorials,
  best practices and advanced configuration options.
* Follow `kOps` community discussions on Slack:
  [community discussions](https://github.com/kubernetes/kops#other-ways-to-communicate-with-the-contributors).
* Contribute to `kOps` by addressing or raising an issue [GitHub Issues](https://github.com/kubernetes/kops/issues).
@ -423,7 +423,7 @@ and `scp` using that other user instead.
The `admin.conf` file gives the user _superuser_ privileges over the cluster.
This file should be used sparingly. For normal users, it's recommended to
generate a unique credential to which you grant privileges. You can do
this with the `kubeadm alpha kubeconfig user --client-name <CN>`
this with the `kubeadm kubeconfig user --client-name <CN>`
command. That command will print out a KubeConfig file to STDOUT which you
should save to a file and distribute to your user. After that, grant
privileges by using `kubectl create (cluster)rolebinding`.
@ -9,15 +9,21 @@ min-kubernetes-server-version: 1.21

{{< feature-state for_k8s_version="v1.23" state="stable" >}}

Your Kubernetes cluster includes [dual-stack](/docs/concepts/services-networking/dual-stack/) networking, which means that cluster networking lets you use either address family. In a cluster, the control plane can assign both an IPv4 address and an IPv6 address to a single {{< glossary_tooltip text="Pod" term_id="pod" >}} or a {{< glossary_tooltip text="Service" term_id="service" >}}.
Your Kubernetes cluster includes [dual-stack](/docs/concepts/services-networking/dual-stack/)
networking, which means that cluster networking lets you use either address family.
In a cluster, the control plane can assign both an IPv4 address and an IPv6 address to a single
{{< glossary_tooltip text="Pod" term_id="pod" >}} or a {{< glossary_tooltip text="Service" term_id="service" >}}.

<!-- body -->

## {{% heading "prerequisites" %}}

You need to have installed the {{< glossary_tooltip text="kubeadm" term_id="kubeadm" >}} tool, following the steps from [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
You need to have installed the {{< glossary_tooltip text="kubeadm" term_id="kubeadm" >}} tool,
following the steps from [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).

For each server that you want to use as a {{< glossary_tooltip text="node" term_id="node" >}}, make sure it allows IPv6 forwarding. On Linux, you can set this by running `sysctl -w net.ipv6.conf.all.forwarding=1` as the root user on each server.
For each server that you want to use as a {{< glossary_tooltip text="node" term_id="node" >}},
make sure it allows IPv6 forwarding. On Linux, you can set this by running
`sysctl -w net.ipv6.conf.all.forwarding=1` as the root user on each server.

You need to have an IPv4 and an IPv6 address range to use. Cluster operators typically
use private address ranges for IPv4. For IPv6, a cluster operator typically chooses a global
@ -65,7 +71,9 @@ nodeRegistration:
    node-ip: 10.100.0.2,fd00:1:2:3::2
```

`advertiseAddress` in InitConfiguration specifies the IP address that the API Server will advertise it is listening on. The value of `advertiseAddress` equals the `--apiserver-advertise-address` flag of `kubeadm init`
`advertiseAddress` in InitConfiguration specifies the IP address that the API Server
will advertise it is listening on. The value of `advertiseAddress` equals the
`--apiserver-advertise-address` flag of `kubeadm init`.
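Putting the fragments above together, a dual-stack `kubeadm-config.yaml` might look roughly like this; the CIDRs and addresses are illustrative, not prescriptive:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # one IPv4 block and one IPv6 block each for Pods and for Services
  podSubnet: 10.244.0.0/16,2001:db8:42:0::/56
  serviceSubnet: 10.96.0.0/16,2001:db8:42:1::/112
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.100.0.1        # a single address; this setting is not dual-stack
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 10.100.0.2,fd00:1:2:3::2
```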

Run kubeadm to initiate the dual-stack control plane node:
|
|||
kubeadm init --config=kubeadm-config.yaml
|
||||
```
|
||||
|
||||
The kube-controller-manager flags `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` are set with default values. See [configure IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#configure-ipv4-ipv6-dual-stack).
|
||||
The kube-controller-manager flags `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6`
|
||||
are set with default values. See [configure IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack#configure-ipv4-ipv6-dual-stack).
|
||||
|
||||
{{< note >}}
|
||||
The `--apiserver-advertise-address` flag does not support dual-stack.
|
||||
|
@ -124,7 +133,9 @@ nodeRegistration:

```

`advertiseAddress` in JoinConfiguration.controlPlane specifies the IP address that the API Server will advertise it is listening on. The value of `advertiseAddress` equals the `--apiserver-advertise-address` flag of `kubeadm join`.
`advertiseAddress` in JoinConfiguration.controlPlane specifies the IP address that the
API Server will advertise it is listening on. The value of `advertiseAddress` equals
the `--apiserver-advertise-address` flag of `kubeadm join`.

```shell
kubeadm join --config=kubeadm-config.yaml
@ -157,7 +157,7 @@ For more information on version skews, see:
2. Download the Google Cloud public signing key:

   ```shell
   curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
   curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
   ```

3. Add the Kubernetes `apt` repository:
@ -6,11 +6,16 @@ weight: 30

<!-- overview -->

This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack, AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental) or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).
This quickstart helps to install a Kubernetes cluster hosted on GCE, Azure, OpenStack,
AWS, vSphere, Equinix Metal (formerly Packet), Oracle Cloud Infrastructure (Experimental)
or Baremetal with [Kubespray](https://github.com/kubernetes-sigs/kubespray).

Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks, [inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory), provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks.
Kubespray is a composition of [Ansible](https://docs.ansible.com/) playbooks,
[inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory),
provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration management tasks.

Kubespray provides:

* Highly available cluster.
* Composable (Choice of the network plugin for instance).
* Supports most popular Linux distributions:
@ -28,7 +33,8 @@ Kubespray provides:
  - Amazon Linux 2
* Continuous integration tests.

To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
To choose a tool which best fits your use case, read
[this comparison](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/comparisons.md) to
[kubeadm](/docs/reference/setup-tools/kubeadm/) and [kops](/docs/setup/production-environment/tools/kops/).

<!-- body -->
@ -44,8 +50,11 @@ Provision servers with the following [requirements](https://github.com/kubernete
* The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required. See ([Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/offline-environment.md))
* The target servers are configured to allow **IPv4 forwarding**.
* If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
* The **firewalls are not managed**, you'll need to implement your own rules the way you used to. in order to avoid any issue during deployment you should disable your firewall.
* If kubespray is run from non-root user account, correct privilege escalation method should be configured in the target servers. Then the `ansible_become` flag or command parameters `--become` or `-b` should be specified.
* The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
  In order to avoid any issue during deployment you should disable your firewall.
* If kubespray is run from a non-root user account, the correct privilege escalation method
  should be configured in the target servers. Then the `ansible_become` flag or command
  parameters `--become` or `-b` should be specified.

Kubespray provides the following utilities to help provision your environment:
@ -56,7 +65,10 @@ Kubespray provides the following utilities to help provision your environment:

### (2/5) Compose an inventory file

After you provision your servers, create an [inventory file for Ansible](https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html). You can do this manually or via a dynamic inventory script. For more information, see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
After you provision your servers, create an
[inventory file for Ansible](https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html).
You can do this manually or via a dynamic inventory script. For more information,
see "[Building your own inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory)".
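For orientation, a tiny inventory sketch in the YAML form Kubespray accepts; the host names, addresses, and even the group names are assumptions, so compare against the sample inventory shipped in the Kubespray repository:

```yaml
all:
  hosts:
    node1:
      ansible_host: 203.0.113.10      # illustrative address
    node2:
      ansible_host: 203.0.113.11      # illustrative address
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```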

### (3/5) Plan your cluster deployment
@ -74,24 +86,34 @@ Kubespray provides the ability to customize many aspects of the deployment:
* {{< glossary_tooltip term_id="cri-o" >}}
* Certificate generation methods

Kubespray customizations can be made to a [variable file](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html). If you are getting started with Kubespray, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes.
Kubespray customizations can be made to a
[variable file](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html).
If you are getting started with Kubespray, consider using the Kubespray
defaults to deploy your cluster and explore Kubernetes.

### (4/5) Deploy a Cluster

Next, deploy your cluster:

Cluster deployment using [ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).
Cluster deployment using
[ansible-playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).

```shell
ansible-playbook -i your/inventory/inventory.ini cluster.yml -b -v \
  --private-key=~/.ssh/private_key
```

Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md) for best results.
Large deployments (100+ nodes) may require
[specific adjustments](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/large-deployments.md)
for best results.

### (5/5) Verify the deployment

Kubespray provides a way to verify inter-pod connectivity and DNS resolve with [Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md). Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each over within the default namespace. Those pods mimic similar behavior as the rest of the workloads and serve as cluster health indicators.
Kubespray provides a way to verify inter-pod connectivity and DNS resolve with
[Netchecker](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/netcheck.md).
Netchecker ensures the netchecker-agents pods can resolve DNS requests and ping each
other within the default namespace. Those pods mimic similar behavior as the rest
of the workloads and serve as cluster health indicators.

## Cluster operations
@ -99,16 +121,20 @@ Kubespray provides additional playbooks to manage your cluster: _scale_ and _upg

### Scale your cluster

You can add worker nodes from your cluster by running the scale playbook. For more information, see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can remove worker nodes from your cluster by running the remove-node playbook. For more information, see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".
You can add worker nodes to your cluster by running the scale playbook. For more information,
see "[Adding nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#adding-nodes)".
You can remove worker nodes from your cluster by running the remove-node playbook. For more information,
see "[Remove nodes](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#remove-nodes)".

### Upgrade your cluster

You can upgrade your cluster by running the upgrade-cluster playbook. For more information, see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)".
You can upgrade your cluster by running the upgrade-cluster playbook. For more information,
see "[Upgrades](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/upgrades.md)".

## Cleanup

You can reset your nodes and wipe out all components installed with Kubespray via the [reset playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml).
You can reset your nodes and wipe out all components installed with Kubespray
via the [reset playbook](https://github.com/kubernetes-sigs/kubespray/blob/master/reset.yml).

{{< caution >}}
When running the reset playbook, be sure not to accidentally target your production cluster!
@ -116,7 +142,8 @@ When running the reset playbook, be sure not to accidentally target your product

## Feedback

* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/) (You can get your invite [here](https://slack.k8s.io/)).
* Slack Channel: [#kubespray](https://kubernetes.slack.com/messages/kubespray/)
  (You can get your invite [here](https://slack.k8s.io/)).
* [GitHub Issues](https://github.com/kubernetes-sigs/kubespray/issues).

## {{% heading "whatsnext" %}}
@ -23,7 +23,7 @@ In this exercise, you create a Pod that runs two Containers. The two containers
share a Volume that they can use to communicate. Here is the configuration file
for the Pod:

{{< codenew file="pods/two-container-pod.yaml" >}}
{{% code file="pods/two-container-pod.yaml" %}}

In the configuration file, you can see that the Pod has a Volume named
`shared-data`.
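The referenced manifest is not included here; as a sketch of the pattern it describes, two containers can share an `emptyDir` volume roughly like this (images and paths are illustrative, not necessarily those in `pods/two-container-pod.yaml`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}                      # scratch space both containers mount
  containers:
  - name: web
    image: nginx:1.25                 # illustrative image
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: writer
    image: debian:12                  # illustrative image
    command: ["/bin/sh", "-c"]
    args: ["echo Hello from the writer container > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
```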
@ -36,7 +36,7 @@ require a supported environment. If your environment does not support this, you
The backend is a simple hello greeter microservice. Here is the configuration
file for the backend Deployment:

{{< codenew file="service/access/backend-deployment.yaml" >}}
{{% code file="service/access/backend-deployment.yaml" %}}
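The backend manifest itself is not shown here; a hedged sketch of what such a Deployment typically looks like, with labels chosen to match the Service discussed later and an illustrative image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
      tier: backend
  template:
    metadata:
      labels:
        app: hello
        tier: backend
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-go-gke:1.0   # illustrative image
        ports:
        - name: http
          containerPort: 80
```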

Create the backend Deployment:
@ -97,7 +97,7 @@ the Pods that it routes traffic to.

First, explore the Service configuration file:

{{< codenew file="service/access/backend-service.yaml" >}}
{{% code file="service/access/backend-service.yaml" %}}

In the configuration file, you can see that the Service, named `hello`, routes
traffic to Pods that have the labels `app: hello` and `tier: backend`.
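A sketch of a Service with exactly that selector, to make the label-based routing concrete; the ports are assumptions rather than the contents of `service/access/backend-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:              # traffic goes only to Pods carrying both labels
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80             # port exposed by the Service
    targetPort: http     # named container port on the backend Pods
```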
@ -125,7 +125,7 @@ configuration file.
The Pods in the frontend Deployment run a nginx image that is configured
to proxy requests to the `hello` backend Service. Here is the nginx configuration file:

{{< codenew file="service/access/frontend-nginx.conf" >}}
{{% code file="service/access/frontend-nginx.conf" %}}

Similar to the backend, the frontend has a Deployment and a Service. An important
difference to notice between the backend and frontend services is that the

@ -133,9 +133,9 @@ configuration for the frontend Service has `type: LoadBalancer`, which means tha
the Service uses a load balancer provisioned by your cloud provider and will be
accessible from outside the cluster.

{{< codenew file="service/access/frontend-service.yaml" >}}
{{% code file="service/access/frontend-service.yaml" %}}

{{< codenew file="service/access/frontend-deployment.yaml" >}}
{{% code file="service/access/frontend-deployment.yaml" %}}

Create the frontend Deployment and Service:
@ -126,7 +126,7 @@ The following manifest defines an Ingress that sends traffic to your Service via

1. Create `example-ingress.yaml` from the following file:

   {{< codenew file="service/networking/example-ingress.yaml" >}}
   {{% code file="service/networking/example-ingress.yaml" %}}

1. Create the Ingress object by running the following command:
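The contents of `example-ingress.yaml` are not included here; a minimal Ingress that routes a host and path to a Service looks roughly like this (host, Service name and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: hello-world.example        # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                # hypothetical Service name
            port:
              number: 8080
```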
@ -26,7 +26,7 @@ provides load balancing for an application that has two running instances.

Here is the configuration file for the application Deployment:

{{< codenew file="service/access/hello-application.yaml" >}}
{{% code file="service/access/hello-application.yaml" %}}

1. Run a Hello World application in your cluster:
Create the application Deployment using the file above:
Some files were not shown because too many files have changed in this diff.