Merge remote-tracking branch 'upstream/main' into dev-1.27
commit 987edf7ef0

README-zh.md
@@ -60,14 +60,34 @@ cd website

<!--
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
-->

The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme).
Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:

```bash
# pull in the Docsy submodule
```

<!--
### Windows
```powershell
# fetch submodule dependencies
git submodule update --init --recursive --depth 1
```
-->
### Windows
```powershell
# fetch submodule dependencies
git submodule update --init --recursive --depth 1
```

<!--
### Linux / other Unix
```bash
# fetch submodule dependencies
make module-init
```
-->
### Linux / other Unix
```bash
# fetch submodule dependencies
make module-init
```

<!--
## Running the website using a container
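The hunk stops at the start of the container workflow. As a minimal sketch of where it goes next, assuming the `container-serve` Makefile target that kubernetes/website provides and Hugo's default port:

```bash
# build the site image and serve it from a container (requires a running
# Docker daemon); Hugo serves on http://localhost:1313 by default
make container-serve
```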
@@ -277,7 +277,7 @@ Pods can only reference image pull secrets in their own namespace

#### Referencing an imagePullSecrets on a Pod

Now you can create Pods that reference this Secret by adding an `imagePullSecrets` section to their Pod definition.

```shell
cat <<EOF > pod.yaml
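The hunk cuts the manifest off at the heredoc. For completeness, a sketch of how such a Pod definition typically continues (the pod name, image, and secret name below are placeholders, not values from this diff):

```shell
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo                        # placeholder name
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1 # placeholder private image
  imagePullSecrets:
    - name: myregistrykey          # the Secret created earlier
EOF
```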
@@ -11,7 +11,7 @@ Starting with Kubernetes 1.25, our container image registry has changed from k8s

## TL;DR: What you need to know about this change

* Container images for Kubernetes releases from 1.25 onward are no longer published to k8s.gcr.io, only to registry.k8s.io.
* Container images for Kubernetes releases from <del>1.25</del> 1.27 onward are not published to k8s.gcr.io, only to registry.k8s.io.
* In the upcoming December patch releases, the new registry domain default will be backported to all branches still in support (1.22, 1.23, 1.24).
* If you run in a restricted environment and apply strict domain/IP address access policies limited to k8s.gcr.io, the __image pulls will not function__ after the migration to this new registry. For these users, the recommended method is to mirror the release images to a private registry.
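For the restricted-environment case in the last bullet, the first step is usually to find which workloads still reference the legacy domain. One illustrative way, not a command from the announcement itself:

```shell
# list every container image in use across the cluster, then filter
# for references to the legacy registry
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[[:space:]]' '\n' | sort -u | grep k8s.gcr.io
```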
@@ -68,8 +68,15 @@ The image used by kubelet for the pod sandbox (`pause`) can be overridden by set
kubelet --pod-infra-container-image=k8s.gcr.io/pause:3.5
```
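After the registry migration, the same override would point at the new domain; a sketch (match the pause tag to your kubelet version):

```shell
kubelet --pod-infra-container-image=registry.k8s.io/pause:3.5
```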

## Legacy container registry freeze {#registry-freeze}

[k8s.gcr.io Image Registry Will Be Frozen From the 3rd of April 2023](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/) announces the freeze of the
legacy k8s.gcr.io image registry. Read that article for more details.

## Acknowledgments

__Change is hard__, and evolving our image-serving platform is needed to ensure a sustainable future for the project. We strive to make things better for everyone using Kubernetes. Many contributors from all corners of our community have been working long and hard to ensure we are making the best decisions possible, executing plans, and doing our best to communicate those plans.

Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine, Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra; Brian McQueen and Sergey Kanzhelev from SIG Node; Lubomir Ivanov from SIG Cluster Lifecycle; Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from SIG Release; Bob Killen and Kaslin Fields from SIG Contribex; and Tim Allclair from the Security Response Committee. Also a big thank you to our friends acting as liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson Jr. from Google.

_This article was updated on the 28th of February 2023._
@@ -31,8 +31,8 @@ files side by side to the artifacts for verifying their integrity.

[tarballs]: https://github.com/kubernetes/kubernetes/blob/release-1.26/CHANGELOG/CHANGELOG-1.26.md#downloads-for-v1260
[binaries]: https://gcsweb.k8s.io/gcs/kubernetes-release/release/v1.26.0/bin
[sboms]: https://storage.googleapis.com/kubernetes-release/release/v1.26.0/kubernetes-release.spdx
[provenance]: https://storage.googleapis.com/kubernetes-release/release/v1.26.0/provenance.json
[sboms]: https://dl.k8s.io/release/v1.26.0/kubernetes-release.spdx
[provenance]: https://dl.k8s.io/kubernetes-release/release/v1.26.0/provenance.json
[cosign]: https://github.com/sigstore/cosign

To verify an artifact, for example `kubectl`, you can download the
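The sentence is cut off by the hunk; roughly, the download-and-verify flow it introduces looks like this sketch with [cosign] (URLs follow the dl.k8s.io layout referenced above; exact flags vary by cosign version):

```shell
# fetch the binary plus its detached signature and signing certificate
URL=https://dl.k8s.io/release/v1.26.0/bin/linux/amd64
curl -sSfL "$URL/kubectl" -o kubectl
curl -sSfL "$URL/kubectl.sig" -o kubectl.sig
curl -sSfL "$URL/kubectl.cert" -o kubectl.cert

# verify the blob against its signature and certificate
cosign verify-blob kubectl --signature kubectl.sig --certificate kubectl.cert
```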
@@ -0,0 +1,76 @@
---
layout: blog
title: "Introducing KWOK: Kubernetes WithOut Kubelet"
date: 2023-03-01
slug: introducing-kwok
canonicalUrl: https://kubernetes.dev/blog/2023/03/01/introducing-kwok/
---

**Authors:** Shiming Zhang (DaoCloud), Wei Huang (Apple), Yibo Zhuang (Apple)

<img style="float: right; display: inline-block; margin-left: 2em; max-width: 15em;" src="/blog/2023/03/01/introducing-kwok/kwok.svg" alt="KWOK logo" />

Have you ever wondered how to set up a cluster of thousands of nodes in just seconds, how to simulate real nodes with a low resource footprint, and how to test your Kubernetes controller at scale without spending much on infrastructure?

If you answered "yes" to any of these questions, then you might be interested in KWOK, a toolkit that enables you to create a cluster of thousands of nodes in seconds.

## What is KWOK?

KWOK stands for Kubernetes WithOut Kubelet. So far, it provides two tools:

`kwok`
: `kwok` is the cornerstone of this project, responsible for simulating the lifecycle of fake nodes, pods, and other Kubernetes API resources.

`kwokctl`
: `kwokctl` is a CLI tool designed to streamline the creation and management of clusters, with nodes simulated by `kwok`.
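For a sense of what that looks like in practice, a minimal sketch (the cluster name, and the `kwok-<name>` kubeconfig context it implies, are illustrative defaults rather than anything specified in this post):

```shell
# spin up a simulated cluster; kwokctl also writes a kubeconfig context
# named kwok-<cluster-name>
kwokctl create cluster --name=demo

# talk to it with ordinary kubectl
kubectl --context=kwok-demo get nodes
```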
## Why use KWOK?

KWOK has several advantages:

- **Speed**: You can create and delete clusters and nodes almost instantly, without waiting for boot or provisioning.
- **Compatibility**: KWOK works with any tools or clients that are compliant with Kubernetes APIs, such as kubectl, helm, kui, etc.
- **Portability**: KWOK has no specific hardware or software requirements. You can run it using pre-built images once Docker or Nerdctl is installed. Alternatively, binaries are also available for all platforms and can be easily installed.
- **Flexibility**: You can configure different node types, labels, taints, capacities, and conditions, and you can configure different pod behaviors and statuses to test different scenarios and edge cases (see the sketch after this list).
- **Performance**: You can simulate thousands of nodes on your laptop without significant consumption of CPU or memory resources.
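As a taste of that flexibility, a trimmed sketch of a fake Node based on the example in the KWOK documentation (the name and label are illustrative; the `kwok.x-k8s.io/node: fake` annotation is the marker `kwok` watches for):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Node
metadata:
  name: kwok-node-0            # illustrative name
  labels:
    type: kwok                 # handy for selecting simulated nodes later
  annotations:
    kwok.x-k8s.io/node: fake   # tells kwok to adopt this Node and keep it Ready
EOF
```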
## What are the use cases?

KWOK can be used for various purposes:

- **Learning**: You can use KWOK to learn about Kubernetes concepts and features without worrying about resource waste or other consequences.
- **Development**: You can use KWOK to develop new features or tools for Kubernetes without access to a real cluster or requiring other components.
- **Testing** (a toy version of the first item is sketched after this list):
  - You can measure how well your application or controller scales with different numbers of nodes and/or pods.
  - You can generate high loads on your cluster by creating many pods or services with different resource requests or limits.
  - You can simulate node failures or network partitions by changing node conditions or randomly deleting nodes.
  - You can test how your controller interacts with other components or features of Kubernetes by enabling different feature gates or API versions.
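For instance, a toy version of the scale-testing idea, assuming a kwok-backed cluster whose fake nodes are schedulable:

```shell
# throw a few hundred Pods at the simulated nodes and watch them schedule;
# the deployment name and replica count are arbitrary
kubectl create deployment scale-test --image=nginx --replicas=500
kubectl get pods --watch
```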
## What are the limitations?

KWOK is not intended to replace other tools completely. It has some limitations that you should be aware of:

- **Functionality**: KWOK is not a kubelet and may exhibit different behaviors in areas such as pod lifecycle management, volume mounting, and device plugins. Its primary function is to simulate updates of node and pod status.
- **Accuracy**: It's important to note that KWOK doesn't accurately reflect the performance or behavior of real nodes under various workloads or environments. Instead, it approximates some behaviors using simple formulas.
- **Security**: KWOK does not enforce any security policies or mechanisms on simulated nodes. It assumes that all requests from the kube-apiserver are authorized and valid.

## Getting started

If you are interested in trying out KWOK, please check its [documents] for more details.

{{< figure src="/blog/2023/03/01/introducing-kwok/manage-clusters.svg" alt="Animation of a terminal showing kwokctl in use" caption="Using kwokctl to manage simulated clusters" >}}

## Getting involved

If you're interested in participating in future discussions or development related to KWOK, there are several ways to get involved:

- Slack: [#kwok] for general usage discussion, [#kwok-dev] for development discussion (visit [slack.k8s.io] for a workspace invitation)
- Open issues, PRs, or discussions in [sigs.k8s.io/kwok]

We welcome feedback and contributions from anyone who wants to join us in this exciting project.

[documents]: https://kwok.sigs.k8s.io/
[sigs.k8s.io/kwok]: https://sigs.k8s.io/kwok/
[#kwok]: https://kubernetes.slack.com/messages/kwok/
[#kwok-dev]: https://kubernetes.slack.com/messages/kwok-dev/
[slack.k8s.io]: https://slack.k8s.io/
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 70 70"><path fill="#326CE5" d="m7.907 20.247-.325 1.656-.033.006-1.56 6.763-.415 1.961-.063.086c0 .348-.26 1.093-.324 1.451-.089.508-.223.952-.34 1.465-.223.977-.481 1.984-.67 2.938-.192.968-.517 1.974-.671 2.909-.148.897-.6 2.053-.6 3.01v.03c0 1.74.865 2.698 1.68 3.645.407.474.797.986 1.186 1.478.398.505.811.975 1.2 1.491.76 1.007 1.607 1.973 2.39 2.965.773.978 1.576 2.031 2.384 2.97.41.476.796.983 1.186 1.478.39.494.819.974 1.189 1.474.364.492.802.982 1.188 1.473.36.457.815 1.083 1.201 1.492.408.432.784 1.015 1.184 1.479.42.488.785.997 1.198 1.493.5.6.71.843 1.39 1.274.48.305 1.318.598 2.094.598l24.56.02 1.063-.138c.075-.051.33-.117.455-.167.2-.08.28-.126.452-.198.322-.135.536-.31.797-.505.376-.28.94-.992 1.225-1.378.714-.972 1.595-1.9 2.308-2.87.715-.973 1.597-1.9 2.308-2.87.706-.962 1.55-1.922 2.302-2.874 1.524-1.93 3.09-3.807 4.616-5.739a50.63 50.63 0 0 1 1.151-1.422c.395-.464.752-1.006 1.154-1.45.362-.398.828-.993 1.127-1.447.261-.398.626-1.336.626-1.977v-.71c0-.004-.15-.618-.165-.692-.051-.237-.12-.508-.154-.704-.06-.336-.228-1.127-.332-1.414-.045-.125-.117-.53-.16-.698-.058-.233-.103-.466-.17-.688-.12-.403-.207-.995-.308-1.436-.114-.496-.22-.887-.32-1.397-.05-.263-.122-.4-.166-.691a8.807 8.807 0 0 0-.16-.728c-.13-.469-.639-2.428-.634-2.826l-.042-.09-.373-1.714-.04-.09-.733-3.072c0-.363-.235-.842-.266-1.272-.006-.076-.12-.494-.146-.623-.048-.241-.116-.389-.163-.636-.063-.333-.18-.988-.284-1.255-.11-.28-.196-.925-.3-1.268-.112-.376-.166-.853-.28-1.258-.105-.375-.213-.9-.296-1.272-.242-1.087-.408-1.402-.993-2.143-.49-.621-1.077-.932-1.83-1.277-.156-.072-.346-.176-.519-.25a8.253 8.253 0 0 1-.522-.247c-.312-.195-.732-.322-1.057-.51-.3-.173-.716-.326-1.047-.492a101 101 0 0 0-3.18-1.524 53.67 53.67 0 0 1-2.096-1.01c-.703-.355-1.398-.654-2.1-1.006-.704-.352-1.4-.66-2.101-1.007-.34-.168-.73-.36-1.066-.501-.315-.132-.776-.413-1.069-.5-.19-.056-.799-.385-1.042-.496a30.09 30.09 0 0 1-1.065-.503c-.696-.353-1.412-.66-2.12-1.016-.703-.353-1.395-.653-2.1-1.006-.719-.36-1.368-.7-2.437-.7h-.06c-.958 0-1.415.316-2.082.58a7.04 7.04 0 0 0-.899.432c-.227.142-.668.295-.934.427-1.205.595-2.415 1.134-3.619 1.736-2.398 1.2-4.844 2.27-7.24 3.47-1.207.607-2.41 1.142-3.618 1.737-.606.298-1.202.572-1.811.852-.225.104-.688.369-.898.434-.115.035-.812.368-.923.437-.214.133-.656.318-.901.43-.205.095-.73.384-.898.434-.502.149-1.192.813-1.51 1.182a4.13 4.13 0 0 0-.854 1.839c-.036.224-.471 2.074-.53 2.162z"/><path fill="#cfe3ef" stroke="#355386" d="M7.469 60.078c4.095-8.48 3.708-26.191 7.407-34.164 3.7-7.973 9.248-11.668 19.422-12.64 10.532-.436 18.728 4.667 22.196 13.807s3.326 26.11 6.759 32.462c-3.598-2.245-5.738-4.683-9.648-4.572-3.91.111-5.378 4.094-8.837 4.267-3.46.173-6.42-3.994-10.224-3.819-3.803.175-4.92 3.84-9.266 3.794-4.347-.047-5.702-3.82-9.14-3.693-3.437.127-5.757 1.425-8.669 4.558Z" style="stroke-width:1.00157;stroke-linecap:round;stroke-linejoin:round;stroke-dasharray:none;paint-order:normal"/><ellipse cx="41.148" cy="32.56" fill="#355386" rx="4.3" ry="6.67"/><ellipse cx="24.369" cy="32.56" fill="#355386" rx="4.3" ry="6.67"/><circle cx="41.559" cy="31.23" r="3" fill="#fff"/><circle cx="24.744" cy="31.23" r="3" fill="#fff"/><path d="M34.162 53.377V47.65h1.156v2.543l2.336-2.543h1.555l-2.156 2.23 2.273 3.497H37.83l-1.574-2.688-.938.957v1.73zM40.736 53.377 39.37 47.65h1.184l.863 3.934 1.047-3.934h1.375l1.004 4 .879-4h1.164l-1.39 5.727h-1.227l-1.141-4.282-1.137 4.282zM47.24 50.549q0-.875.262-1.47.195-.437.531-.784.34-.348.742-.516.535-.227 1.235-.227 1.265 
0 2.023.786.762.785.762 2.183 0 1.387-.754 2.172-.754.781-2.016.781-1.277 0-2.03-.777-.755-.781-.755-2.148zm1.192-.04q0 .973.449 1.477.449.5 1.14.5.692 0 1.133-.496.446-.5.446-1.496 0-.985-.434-1.469-.43-.484-1.145-.484-.714 0-1.152.492-.437.488-.437 1.476zM53.713 53.377V47.65h1.156v2.543l2.336-2.543h1.555l-2.157 2.23 2.274 3.497H57.38l-1.574-2.688-.938.957v1.73z" style="font-weight:700;font-size:8px;font-family:Sans,Arial;text-anchor:middle;fill:#355386"/></svg>
@@ -17,8 +17,6 @@ components.
The cloud-controller-manager is structured using a plugin
mechanism that allows different cloud providers to integrate their platforms with Kubernetes.

<!-- body -->

## Design

@@ -48,10 +46,10 @@ when new servers are created in your cloud infrastructure. The node controller o
hosts running inside your tenancy with the cloud provider. The node controller performs the following functions:

1. Update a Node object with the corresponding server's unique identifier obtained from the cloud provider API.
2. Annotating and labelling the Node object with cloud-specific information, such as the region the node
1. Annotating and labelling the Node object with cloud-specific information, such as the region the node
   is deployed into and the resources (CPU, memory, etc) that it has available.
3. Obtain the node's hostname and network addresses.
4. Verifying the node's health. In case a node becomes unresponsive, this controller checks with
1. Obtain the node's hostname and network addresses.
1. Verifying the node's health. In case a node becomes unresponsive, this controller checks with
   your cloud provider's API to see if the server has been deactivated / deleted / terminated.
   If the node has been deleted from the cloud, the controller deletes the Node object from your Kubernetes
   cluster.
@@ -88,13 +86,13 @@ to read and modify Node objects.

`v1/Node`:

- Get
- List
- Create
- Update
- Patch
- Watch
- Delete
- get
- list
- create
- update
- patch
- watch
- delete

### Route controller {#authorization-route-controller}
@@ -103,37 +101,42 @@ routes appropriately. It requires Get access to Node objects.

`v1/Node`:

- Get
- get

### Service controller {#authorization-service-controller}

The service controller listens to Service object Create, Update and Delete events and then configures Endpoints for those Services appropriately (for EndpointSlices, the kube-controller-manager manages these on demand).
The service controller watches for Service object **create**, **update** and **delete** events and then
configures Endpoints for those Services appropriately (for EndpointSlices, the
kube-controller-manager manages these on demand).

To access Services, it requires List, and Watch access. To update Services, it requires Patch and Update access.
To access Services, it requires **list** and **watch** access. To update Services, it requires
**patch** and **update** access.

To set up Endpoints resources for the Services, it requires access to **create**, **list**,
**get**, **watch**, and **update**.

`v1/Service`:

- List
- Get
- Watch
- Patch
- Update
- list
- get
- watch
- patch
- update

### Others {#authorization-miscellaneous}

The implementation of the core of the cloud controller manager requires access to create Event
objects, and to ensure secure operation, it requires access to create ServiceAccounts.

`v1/Event`:

- Create
- Patch
- Update
- create
- patch
- update

`v1/ServiceAccount`:

- Create
- create

The {{< glossary_tooltip term_id="rbac" text="RBAC" >}} ClusterRole for the cloud
controller manager looks like:
@@ -206,12 +209,21 @@ rules:

[Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager)
has instructions on running and managing the cloud controller manager.

To upgrade an HA control plane to use the cloud controller manager, see
[Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/).

Want to know how to implement your own cloud controller manager, or extend an existing project?

The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.21/cloud.go#L42-L69) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider).
The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in.
Specifically, it uses the `CloudProvider` interface defined in
[`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.26/cloud.go#L43-L69) from
[kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider).

The implementation of the shared controllers highlighted in this document (Node, Route, and Service),
and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core.
Implementations specific to cloud providers are outside the core of Kubernetes and implement the
`CloudProvider` interface.

For more information about developing plugins, see
[Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/).
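If you just want to read the interface the paragraph above points at without cloning the repository, one illustrative way is to slice the linked line range out of the raw file:

```shell
# print the CloudProvider interface region of cloud.go (line range taken
# from the link above; adjust if the file shifts between releases)
curl -sSfL https://raw.githubusercontent.com/kubernetes/cloud-provider/release-1.26/cloud.go \
  | sed -n '43,69p'
```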
@@ -11,7 +11,8 @@ aliases:

<!-- overview -->

This document catalogs the communication paths between the API server and the Kubernetes cluster.
This document catalogs the communication paths between the {{< glossary_tooltip term_id="kube-apiserver" text="API server" >}}
and the Kubernetes {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}}.
The intent is to allow users to customize their installation to harden the network configuration
such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud
provider).
@@ -30,28 +31,28 @@ enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/a
or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
are allowed.

Nodes should be provisioned with the public root certificate for the cluster such that they can
Nodes should be provisioned with the public root {{< glossary_tooltip text="certificate" term_id="certificate" >}} for the cluster such that they can
connect securely to the API server along with valid client credentials. A good approach is that the
client credentials provided to the kubelet are in the form of a client certificate. See
[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
for automated provisioning of kubelet client certificates.

Pods that wish to connect to the API server can do so securely by leveraging a service account so
{{< glossary_tooltip text="Pods" term_id="pod" >}} that wish to connect to the API server can do so securely by leveraging a service account so
that Kubernetes will automatically inject the public root certificate and a valid bearer token
into the pod when it is instantiated.
The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is
redirected (via `kube-proxy`) to the HTTPS endpoint on the API server.
redirected (via `{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}`) to the HTTPS endpoint on the API server.

The control plane components also communicate with the API server over the secure port.

As a result, the default operating mode for connections from the nodes and pods running on the
nodes to the control plane is secured by default and can run over untrusted and/or public
networks.
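You can see the virtual IP mentioned above in any cluster; an illustrative check, not part of the page itself:

```shell
# the ClusterIP shown here is the virtual address that kube-proxy
# redirects to the API server's HTTPS endpoint
kubectl get service kubernetes --namespace default
```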
## Control plane to node

There are two primary communication paths from the control plane (the API server) to the nodes.
The first is from the API server to the kubelet process which runs on each node in the cluster.
The first is from the API server to the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} process which runs on each node in the cluster.
The second is from the API server to any node, pod, or service through the API server's _proxy_
functionality.
@@ -89,7 +90,7 @@ connections **are not currently safe** to run over untrusted or public networks.

### SSH tunnels

Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this
Kubernetes supports [SSH tunnels](https://www.ssh.com/academy/ssh/tunneling) to protect the control plane to nodes communication paths. In this
configuration, the API server initiates an SSH tunnel to each node in the cluster (connecting to
the SSH server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or
service through the tunnel.
@@ -117,3 +118,12 @@ connections.
Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set
up the Konnectivity service in your cluster.

## {{% heading "whatsnext" %}}

* Read about the [Kubernetes control plane components](/docs/concepts/overview/components/#control-plane-components)
* Learn more about [Hubs and Spoke model](https://book.kubebuilder.io/multiversion-tutorial/conversion-concepts.html#hubs-spokes-and-other-wheel-metaphors)
* Learn how to [Secure a Cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
* [Set up Konnectivity service](/docs/tasks/extend-kubernetes/setup-konnectivity/)
* [Use Port Forwarding to Access Applications in a Cluster](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
* Learn how to [Fetch logs for Pods](/docs/tasks/debug/debug-application/debug-running-pod/#examine-pod-logs), [use kubectl port-forward](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod)
@@ -63,9 +63,10 @@ The name of a PriorityClass object must be a valid
and it cannot be prefixed with `system-`.

A PriorityClass object can have any 32-bit integer value smaller than or equal
to 1 billion. Larger numbers are reserved for critical system Pods that should
not normally be preempted or evicted. A cluster admin should create one
PriorityClass object for each such mapping that they want.
to 1 billion. This means that the range of values for a PriorityClass object is
from -2147483648 to 1000000000 inclusive. Larger numbers are reserved for
built-in PriorityClasses that represent critical system Pods. A cluster
admin should create one PriorityClass object for each such mapping that they want.
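A minimal sketch of such an object (name, value, and description are illustrative; the value simply has to fall inside the range described above):

```shell
kubectl apply -f - <<EOF
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority    # illustrative name
value: 1000000           # any 32-bit integer <= 1000000000
globalDefault: false
description: "Example class for workloads that should rarely be preempted."
EOF
```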
PriorityClass also has two optional fields: `globalDefault` and `description`.
The `globalDefault` field indicates that the value of this PriorityClass should
@@ -16,7 +16,7 @@ weight: 10

<!-- overview -->

{{< glossary_definition term_id="service" length="short" >}}
{{< glossary_definition term_id="service" length="short" prepend="In Kubernetes, a Service is" >}}

A key aim of Services in Kubernetes is that you don't need to modify your existing
application to use an unfamiliar service discovery mechanism.
@@ -238,11 +238,11 @@ work between Windows and Linux:
The following list documents differences between how Pod specifications work between Windows and Linux:

* `hostIPC` and `hostPID` - host namespace sharing is not possible on Windows
* `hostNetwork` - [see below](/docs/concepts/windows/intro#compatibility-v1-pod-spec-containers-hostnetwork)
* `hostNetwork` - [see below](#compatibility-v1-pod-spec-containers-hostnetwork)
* `dnsPolicy` - setting the Pod `dnsPolicy` to `ClusterFirstWithHostNet` is
  not supported on Windows because host networking is not provided. Pods always
  run with a container network.
* `podSecurityContext` [see below](/docs/concepts/windows/intro#compatibility-v1-pod-spec-containers-securitycontext)
* `podSecurityContext` [see below](#compatibility-v1-pod-spec-containers-securitycontext)
* `shareProcessNamespace` - this is a beta feature, and depends on Linux namespaces
  which are not implemented on Windows. Windows cannot share process namespaces or
  the container's root filesystem. Only the network can be shared.
@@ -45,8 +45,8 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up
In this example:

* A Deployment named `nginx-deployment` is created, indicated by the
  `.metadata.name` field. This name will become the basis for the ReplicaSets
  and Pods which are created later. See [Writing a Deployment Spec](#writing-a-deployment-spec)
  for more details.
* The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the created ReplicaSet finds which Pods to manage.
@@ -71,14 +71,12 @@ In this example:

Before you begin, make sure your Kubernetes cluster is up and running.
Follow the steps given below to create the above Deployment:

1. Create the Deployment by running the following command:

   ```shell
   kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
   ```

2. Run `kubectl get deployments` to check if the Deployment was created.

   If the Deployment is still being created, the output is similar to the following:
@@ -125,7 +123,7 @@ Follow the steps given below to create the above Deployment:
* `AGE` displays the amount of time that the application has been running.

Notice that the name of the ReplicaSet is always formatted as
`[DEPLOYMENT-NAME]-[HASH]`. This name will become the basis for the Pods
which are created.

The `HASH` string is the same as the `pod-template-hash` label on the ReplicaSet.
@@ -169,56 +167,56 @@ Follow the steps given below to update your Deployment:

1. Let's update the nginx Pods to use the `nginx:1.16.1` image instead of the `nginx:1.14.2` image.

   ```shell
   kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
   ```

   or use the following command:

   ```shell
   kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
   ```

   The output is similar to:

   ```
   deployment.apps/nginx-deployment image updated
   ```

   Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`:

   ```shell
   kubectl edit deployment/nginx-deployment
   ```

   The output is similar to:

   ```
   deployment.apps/nginx-deployment edited
   ```

2. To see the rollout status, run:

   ```shell
   kubectl rollout status deployment/nginx-deployment
   ```

   The output is similar to this:

   ```
   Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
   ```

   or

   ```
   deployment "nginx-deployment" successfully rolled out
   ```

Get more details on your updated Deployment:

* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
  The output is similar to this:

  ```ini
  NAME               READY   UP-TO-DATE   AVAILABLE   AGE
@@ -228,44 +226,44 @@ Get more details on your updated Deployment:
* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
  up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

  ```shell
  kubectl get rs
  ```

  The output is similar to this:

  ```
  NAME                          DESIRED   CURRENT   READY   AGE
  nginx-deployment-1564180365   3         3         3       6s
  nginx-deployment-2035384211   0         0         0       36s
  ```

* Running `get pods` should now show only the new Pods:

  ```shell
  kubectl get pods
  ```

  The output is similar to this:

  ```
  NAME                                READY   STATUS    RESTARTS   AGE
  nginx-deployment-1564180365-khku8   1/1     Running   0          14s
  nginx-deployment-1564180365-nacti   1/1     Running   0          14s
  nginx-deployment-1564180365-z9gth   1/1     Running   0          14s
  ```

  Next time you want to update these Pods, you only need to update the Deployment's Pod template again.

  Deployment ensures that only a certain number of Pods are down while they are being updated. By default,
  it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

  Deployment also ensures that only a certain number of Pods are created above the desired number of Pods.
  By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).

  For example, if you look at the above Deployment closely, you will see that it first creates a new Pod,
  then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of
  new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed.
  It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of
  a Deployment with 4 replicas, the number of Pods would be between 3 and 5.
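Those 75%/125% bounds come straight from the Deployment's rolling-update strategy fields; an illustrative way to read them back from the live object:

```shell
# prints maxUnavailable and maxSurge (both default to 25%)
kubectl get deployment nginx-deployment \
  -o jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}{" "}{.spec.strategy.rollingUpdate.maxSurge}{"\n"}'
```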
* Get details of your Deployment:

  ```shell
@@ -309,13 +307,13 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
  Normal  ScalingReplicaSet  19s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 1
  Normal  ScalingReplicaSet  19s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 3
  Normal  ScalingReplicaSet  14s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 0
  ```

  Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211)
  and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet
  (nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet
  to 2 and scaled up the new ReplicaSet to 2 so that at least 3 Pods were available and at most 4 Pods were created at all times.
  It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy.
  Finally, you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.

{{< note >}}
Kubernetes doesn't count terminating Pods when calculating the number of `availableReplicas`, which must be between
@@ -333,7 +331,7 @@ ReplicaSet is scaled to `.spec.replicas` and all old ReplicaSets is scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet
as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously
-- it will add it to its list of old ReplicaSets and start scaling it down.

For example, suppose you create a Deployment to create 5 replicas of `nginx:1.14.2`,
but then update the Deployment to create 5 replicas of `nginx:1.16.1`, when only 3
|
|||
|
||||
* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.161` instead of `nginx:1.16.1`:
|
||||
|
||||
```shell
|
||||
kubectl set image deployment/nginx-deployment nginx=nginx:1.161
|
||||
```
|
||||
```shell
|
||||
kubectl set image deployment/nginx-deployment nginx=nginx:1.161
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
deployment.apps/nginx-deployment image updated
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
deployment.apps/nginx-deployment image updated
|
||||
```
|
||||
|
||||
* The rollout gets stuck. You can verify it by checking the rollout status:
|
||||
|
||||
```shell
|
||||
kubectl rollout status deployment/nginx-deployment
|
||||
```
|
||||
```shell
|
||||
kubectl rollout status deployment/nginx-deployment
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
|
||||
```
|
||||
|
||||
* Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts,
|
||||
[read more here](#deployment-status).
|
||||
|
||||
* You see that the number of old replicas (`nginx-deployment-1564180365` and `nginx-deployment-2035384211`) is 2, and new replicas (nginx-deployment-3066724191) is 1.
|
||||
|
||||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1564180365 3 3 3 25s
|
||||
nginx-deployment-2035384211 0 0 0 36s
|
||||
nginx-deployment-3066724191 1 1 0 6s
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-1564180365 3 3 3 25s
|
||||
nginx-deployment-2035384211 0 0 0 36s
|
||||
nginx-deployment-3066724191 1 1 0 6s
|
||||
```
|
||||
|
||||
* Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop.
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
```shell
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-deployment-1564180365-70iae 1/1 Running 0 25s
|
||||
nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
|
||||
nginx-deployment-1564180365-hysrc 1/1 Running 0 25s
|
||||
nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx-deployment-1564180365-70iae 1/1 Running 0 25s
|
||||
nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s
|
||||
nginx-deployment-1564180365-hysrc 1/1 Running 0 25s
|
||||
nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified. Kubernetes by default sets the value to 25%.
|
||||
{{< /note >}}
|
||||
{{< note >}}
|
||||
The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified. Kubernetes by default sets the value to 25%.
|
||||
{{< /note >}}
|
||||
|
||||
* Get the description of the Deployment:
|
||||
```shell
|
||||
kubectl describe deployment
|
||||
```
|
||||
```shell
|
||||
kubectl describe deployment
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
Name: nginx-deployment
|
||||
Namespace: default
|
||||
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
|
||||
Labels: app=nginx
|
||||
Selector: app=nginx
|
||||
Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 25% max unavailable, 25% max surge
|
||||
Pod Template:
|
||||
Labels: app=nginx
|
||||
Containers:
|
||||
nginx:
|
||||
Image: nginx:1.161
|
||||
Port: 80/TCP
|
||||
Host Port: 0/TCP
|
||||
Environment: <none>
|
||||
Mounts: <none>
|
||||
Volumes: <none>
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing True ReplicaSetUpdated
|
||||
OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)
|
||||
NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
|
||||
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
|
||||
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
Name: nginx-deployment
|
||||
Namespace: default
|
||||
CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700
|
||||
Labels: app=nginx
|
||||
Selector: app=nginx
|
||||
Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 25% max unavailable, 25% max surge
|
||||
Pod Template:
|
||||
Labels: app=nginx
|
||||
Containers:
|
||||
nginx:
|
||||
Image: nginx:1.161
|
||||
Port: 80/TCP
|
||||
Host Port: 0/TCP
|
||||
Environment: <none>
|
||||
Mounts: <none>
|
||||
Volumes: <none>
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing True ReplicaSetUpdated
|
||||
OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created)
|
||||
NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created)
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2
|
||||
22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1
|
||||
21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3
|
||||
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0
|
||||
13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1
|
||||
```
|
||||
|
||||
To fix this, you need to rollback to a previous revision of Deployment that is stable.
|
||||
|
||||
|
@ -487,131 +485,131 @@ rolled back.
|
|||
Follow the steps given below to check the rollout history:
|
||||
|
||||
1. First, check the revisions of this Deployment:
|
||||
```shell
|
||||
kubectl rollout history deployment/nginx-deployment
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
deployments "nginx-deployment"
|
||||
REVISION CHANGE-CAUSE
|
||||
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml
|
||||
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
|
||||
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161
|
||||
```
|
||||
```shell
|
||||
kubectl rollout history deployment/nginx-deployment
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
deployments "nginx-deployment"
|
||||
REVISION CHANGE-CAUSE
|
||||
1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml
|
||||
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
|
||||
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161
|
||||
```
|
||||
|
||||
`CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. You can specify the`CHANGE-CAUSE` message by:
|
||||
`CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. You can specify the`CHANGE-CAUSE` message by:
|
||||
|
||||
* Annotating the Deployment with `kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"`
|
||||
* Manually editing the manifest of the resource.
|
||||
* Annotating the Deployment with `kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"`
|
||||
* Manually editing the manifest of the resource.
|
||||
|
||||
2. To see the details of each revision, run:
|
||||
```shell
|
||||
kubectl rollout history deployment/nginx-deployment --revision=2
|
||||
```
|
||||
```shell
|
||||
kubectl rollout history deployment/nginx-deployment --revision=2
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
deployments "nginx-deployment" revision 2
|
||||
Labels: app=nginx
|
||||
pod-template-hash=1159050644
|
||||
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
|
||||
Containers:
|
||||
nginx:
|
||||
Image: nginx:1.16.1
|
||||
Port: 80/TCP
|
||||
QoS Tier:
|
||||
cpu: BestEffort
|
||||
memory: BestEffort
|
||||
Environment Variables: <none>
|
||||
No volumes.
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
deployments "nginx-deployment" revision 2
|
||||
Labels: app=nginx
|
||||
pod-template-hash=1159050644
|
||||
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
|
||||
Containers:
|
||||
nginx:
|
||||
Image: nginx:1.16.1
|
||||
Port: 80/TCP
|
||||
QoS Tier:
|
||||
cpu: BestEffort
|
||||
memory: BestEffort
|
||||
Environment Variables: <none>
|
||||
No volumes.
|
||||
```
|
||||
|
||||
### Rolling Back to a Previous Revision
|
||||
Follow the steps given below to rollback the Deployment from the current version to the previous version, which is version 2.
|
||||
|
||||
1. Now you've decided to undo the current rollout and rollback to the previous revision:
|
||||
```shell
|
||||
kubectl rollout undo deployment/nginx-deployment
|
||||
```
|
||||
```shell
|
||||
kubectl rollout undo deployment/nginx-deployment
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
deployment.apps/nginx-deployment rolled back
|
||||
```
|
||||
Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`:
|
||||
The output is similar to this:
|
||||
```
|
||||
deployment.apps/nginx-deployment rolled back
|
||||
```
|
||||
Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`:
|
||||
|
||||
```shell
|
||||
kubectl rollout undo deployment/nginx-deployment --to-revision=2
|
||||
```
|
||||
```shell
|
||||
kubectl rollout undo deployment/nginx-deployment --to-revision=2
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
deployment.apps/nginx-deployment rolled back
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
deployment.apps/nginx-deployment rolled back
|
||||
```
|
||||
|
||||
For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout).
|
||||
For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout).
|
||||
|
||||
The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event
|
||||
for rolling back to revision 2 is generated from Deployment controller.
|
||||
The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event
|
||||
for rolling back to revision 2 is generated from Deployment controller.
|
||||
|
||||
2. Check if the rollback was successful and the Deployment is running as expected, run:
|
||||
```shell
|
||||
kubectl get deployment nginx-deployment
|
||||
```
|
||||
```shell
|
||||
kubectl get deployment nginx-deployment
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 30m
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 30m
|
||||
```
|
||||
3. Get the description of the Deployment:
|
||||
```shell
|
||||
kubectl describe deployment nginx-deployment
|
||||
```
|
||||
The output is similar to this:
|
||||
```
|
||||
Name: nginx-deployment
|
||||
Namespace: default
|
||||
CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500
|
||||
Labels: app=nginx
|
||||
Annotations: deployment.kubernetes.io/revision=4
|
||||
kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
|
||||
Selector: app=nginx
|
||||
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
|
||||
StrategyType: RollingUpdate
|
||||
MinReadySeconds: 0
|
||||
RollingUpdateStrategy: 25% max unavailable, 25% max surge
|
||||
Pod Template:
|
||||
Labels: app=nginx
|
||||
Containers:
|
||||
nginx:
|
||||
Image: nginx:1.16.1
|
||||
Port: 80/TCP
|
||||
Host Port: 0/TCP
|
||||
Environment: <none>
|
||||
Mounts: <none>
|
||||
Volumes: <none>
|
||||
Conditions:
|
||||
Type Status Reason
|
||||
---- ------ ------
|
||||
Available True MinimumReplicasAvailable
|
||||
Progressing True NewReplicaSetAvailable
|
||||
OldReplicaSets: <none>
|
||||
NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created)
|
||||
Events:
|
||||
Type Reason Age From Message
|
||||
---- ------ ---- ---- -------
|
||||
Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3
|
||||
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1
|
||||
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2
|
||||
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2
|
||||
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1
|
||||
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3
|
||||
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0
|
||||
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1
|
||||
Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2
|
||||
Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0
|
||||
```
## Scaling a Deployment
@ -658,26 +656,26 @@ For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surg
```

* You update to a new image which happens to be unresolvable from inside the cluster.

  ```shell
  kubectl set image deployment/nginx-deployment nginx=nginx:sometag
  ```

  The output is similar to this:

  ```
  deployment.apps/nginx-deployment image updated
  ```

* The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the
  `maxUnavailable` requirement that you mentioned above. Check out the rollout status:

  ```shell
  kubectl get rs
  ```

  The output is similar to this:

  ```
  NAME                          DESIRED   CURRENT   READY   AGE
  nginx-deployment-1989198191   5         5         0       9s
  nginx-deployment-618515232    8         8         8       1m
  ```

* Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas
  to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using
@ -741,103 +739,103 @@ apply multiple fixes in between pausing and resuming without triggering unnecess
```

* Pause by running the following command:

  ```shell
  kubectl rollout pause deployment/nginx-deployment
  ```

  The output is similar to this:

  ```
  deployment.apps/nginx-deployment paused
  ```

* Then update the image of the Deployment:

  ```shell
  kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
  ```

  The output is similar to this:

  ```
  deployment.apps/nginx-deployment image updated
  ```

* Notice that no new rollout started:

  ```shell
  kubectl rollout history deployment/nginx-deployment
  ```

  The output is similar to this:

  ```
  deployments "nginx"
  REVISION  CHANGE-CAUSE
  1         <none>
  ```

* Get the rollout status to verify that the existing ReplicaSet has not changed:

  ```shell
  kubectl get rs
  ```

  The output is similar to this:

  ```
  NAME               DESIRED   CURRENT   READY   AGE
  nginx-2142116321   3         3         3       2m
  ```

* You can make as many updates as you wish, for example, update the resources that will be used:

  ```shell
  kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
  ```

  The output is similar to this:

  ```
  deployment.apps/nginx-deployment resource requirements updated
  ```

  The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to
  the Deployment will not have any effect as long as the Deployment rollout is paused.

* Eventually, resume the Deployment rollout and observe a new ReplicaSet coming up with all the new updates:

  ```shell
  kubectl rollout resume deployment/nginx-deployment
  ```

  The output is similar to this:

  ```
  deployment.apps/nginx-deployment resumed
  ```

* Watch the status of the rollout until it's done.

  ```shell
  kubectl get rs -w
  ```

  The output is similar to this:

  ```
  NAME               DESIRED   CURRENT   READY   AGE
  nginx-2142116321   2         2         2       2m
  nginx-3926361531   2         2         0       6s
  nginx-3926361531   2         2         1       18s
  nginx-2142116321   1         2         2       2m
  nginx-2142116321   1         2         2       2m
  nginx-3926361531   3         2         1       18s
  nginx-3926361531   3         2         1       18s
  nginx-2142116321   1         1         1       2m
  nginx-3926361531   3         3         1       18s
  nginx-3926361531   3         3         2       19s
  nginx-2142116321   0         1         1       2m
  nginx-2142116321   0         1         1       2m
  nginx-2142116321   0         0         0       2m
  nginx-3926361531   3         3         3       20s
  ```

* Get the status of the latest rollout:

  ```shell
  kubectl get rs
  ```

  The output is similar to this:

  ```
  NAME               DESIRED   CURRENT   READY   AGE
  nginx-2142116321   0         0         0       2m
  nginx-3926361531   3         3         3       28s
  ```

{{< note >}}
You cannot roll back a paused Deployment until you resume it.
{{< /note >}}
@ -1084,9 +1082,9 @@ For general information about working with config files, see
configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.

When the control plane creates new Pods for a Deployment, the `.metadata.name` of the
Deployment is part of the basis for naming those Pods. The name of a Deployment must be a valid
[DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
value, but this can produce unexpected results for the Pod hostnames. For best compatibility,
the name should follow the more restrictive rules for a
[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
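For instance, a manifest fragment whose name already satisfies the stricter DNS label rules (lowercase alphanumerics and `-`, at most 63 characters); the name here is illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend   # valid DNS label: safe for generated Pod hostnames
```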
@ -1153,11 +1151,11 @@ the default value.
All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`.

{{< note >}}
This will only guarantee Pod termination previous to creation for upgrades. If you upgrade a Deployment, all Pods
of the old revision will be terminated immediately. Successful removal is awaited before any Pod of the new
revision is created. If you manually delete a Pod, the lifecycle is controlled by the ReplicaSet and the
replacement will be created immediately (even if the old Pod is still in a Terminating state). If you need an
"at most" guarantee for your Pods, you should consider using a
[StatefulSet](/docs/concepts/workloads/controllers/statefulset/).
{{< /note >}}
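For reference, a minimal manifest fragment selecting this strategy (the Deployment name is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # illustrative name
spec:
  strategy:
    type: Recreate   # kill all existing Pods before creating new ones
```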
@ -296,14 +296,14 @@ Your {{< glossary_tooltip text="container runtime" term_id="container-runtime" >
Any container in a pod can run in privileged mode to use operating system administrative capabilities
that would otherwise be inaccessible. This is available for both Windows and Linux.

### Linux privileged containers

In Linux, any container in a Pod can enable privileged mode using the `privileged` (Linux) flag
on the [security context](/docs/tasks/configure-pod-container/security-context/) of the
container spec. This is useful for containers that want to use operating system administrative
capabilities such as manipulating the network stack or accessing hardware devices.
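As a quick illustration, here is a minimal Pod manifest sketch that sets the flag; the pod and container names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo    # illustrative name
spec:
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sleep", "3600"]
    securityContext:
      privileged: true     # grants the container broad host-level capabilities
```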
### Windows privileged containers

{{< feature-state for_k8s_version="v1.26" state="stable" >}}
@ -459,12 +459,8 @@ Do | Don't
Update the title in the front matter of the page or blog post. | Use first level heading, as Hugo automatically converts the title in the front matter of the page into a first-level heading.
Use ordered headings to provide a meaningful high-level outline of your content. | Use headings level 4 through 6, unless it is absolutely necessary. If your content is that detailed, it may need to be broken into separate articles.
Use pound or hash signs (`#`) for non-blog post content. | Use underlines (`---` or `===`) to designate first-level headings.
Use sentence case for headings in the page body. For example, **Extend kubectl with plugins** | Use title case for headings in the page body. For example, **Extend Kubectl With Plugins**
Use title case for the page title in the front matter. For example, `title: Kubernetes API Server Bypass Risks` | Use sentence case for page titles in the front matter. For example, don't use `title: Kubernetes API server bypass risks`
{{< /table >}}

### Paragraphs
@ -89,6 +89,7 @@ operator to use or manage a cluster.
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/),
  [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) and
  [kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
* [kube-controller-manager configuration (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/)
* [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
* [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/)
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) and
@ -107,7 +107,7 @@ CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultI
{{< note >}}
The [`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) admission plugin is enabled
by default, but is only active if you enable the `ValidatingAdmissionPolicy`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) **and**
the `admissionregistration.k8s.io/v1alpha1` API.
{{< /note >}}
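As an illustration (assuming you can edit the kube-apiserver invocation directly), enabling both switches might look like:

```shell
# enable the feature gate and the alpha API group (other required flags omitted)
kube-apiserver \
  --feature-gates=ValidatingAdmissionPolicy=true \
  --runtime-config=admissionregistration.k8s.io/v1alpha1=true
```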
@ -2,7 +2,7 @@
---
title: Kubectl
id: kubectl
date: 2018-04-12
full_link: /docs/reference/kubectl/
short_description: >
  A command line tool for communicating with a Kubernetes cluster.
@ -5,14 +5,21 @@ date: 2018-04-12
full_link: /docs/concepts/services-networking/service/
short_description: >
  A way to expose an application running on a set of Pods as a network service.

aka:
tags:
- fundamental
- core-object
---
A method for exposing a network application that is running as one or more
{{< glossary_tooltip text="Pods" term_id="pod" >}} in your cluster.

<!--more-->

The set of Pods targeted by a Service is (usually) determined by a
{{< glossary_tooltip text="selector" term_id="selector" >}}. If more Pods are added or removed,
the set of Pods matching the selector will change. The Service makes sure that network traffic
can be directed to the current set of Pods for the workload.

Kubernetes Services either use IP networking (IPv4, IPv6, or both), or reference an external name in
the Domain Name System (DNS).

The Service abstraction enables other mechanisms, such as Ingress and Gateway.
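A minimal Service manifest sketch (the name, label, and ports here are illustrative, not part of the glossary entry):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # illustrative name
spec:
  selector:
    app: my-app           # targets Pods labeled app=my-app
  ports:
  - port: 80              # port exposed by the Service
    targetPort: 8080      # port the Pods listen on
```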
@ -0,0 +1,38 @@
---
title: CRI Pod & Container Metrics
content_type: reference
weight: 50
description: >-
  Collection of Pod & Container metrics via the CRI.
---

<!-- overview -->

{{< feature-state for_k8s_version="v1.23" state="alpha" >}}

The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) collects pod and
container metrics via [cAdvisor](https://github.com/google/cadvisor). As an alpha feature,
Kubernetes lets you configure the collection of pod and container
metrics via the {{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI). You
must enable the `PodAndContainerStatsFromCRI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and
use a compatible CRI implementation (containerd >= 1.6.0, CRI-O >= 1.23.0) to
use the CRI based collection mechanism.

<!-- body -->

## CRI Pod & Container Metrics

With `PodAndContainerStatsFromCRI` enabled, the kubelet polls the underlying container
runtime for pod and container stats instead of inspecting the host system directly using cAdvisor.
The benefits of relying on the container runtime for this information as opposed to direct
collection with cAdvisor include:

- Potential improved performance if the container runtime already collects this information
  during normal operations. In this case, the data can be re-used instead of being aggregated
  again by the kubelet.

- It further decouples the kubelet and the container runtime, allowing collection of metrics for
  container runtimes that don't run processes directly on the host with kubelet, where they are
  observable by cAdvisor (for example: container runtimes that use virtualization).
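One way to switch the gate on is through the kubelet configuration file; this is a sketch, not a complete configuration:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodAndContainerStatsFromCRI: true   # alpha gate; off by default
```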
@ -37,17 +37,11 @@ kubelet endpoint, and not `/stats/summary`.
## Summary metrics API source {#summary-api-source}

By default, Kubernetes fetches node summary metrics data using an embedded
[cAdvisor](https://github.com/google/cadvisor) that runs within the kubelet. If you
enable the `PodAndContainerStatsFromCRI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
in your cluster, and you use a container runtime that supports statistics access via
{{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI), then
the kubelet [fetches Pod- and container-level metric data using CRI](/docs/reference/instrumentation/cri-pod-container-metrics), and not via cAdvisor.
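If you want to inspect the summary data yourself, one option (assuming `kubectl` access to the node proxy subresource) is:

```shell
# <node-name> is a placeholder for one of your node names
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"
```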
## {{% heading "whatsnext" %}}
@ -41,7 +41,7 @@ echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc
```

### A note on `--all-namespaces`

Appending `--all-namespaces` happens frequently enough that you should be aware of the shorthand for `--all-namespaces`:

`kubectl -A`
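For example, these two commands are equivalent:

```shell
kubectl get pods --all-namespaces
kubectl get pods -A
```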
@ -431,6 +431,17 @@ Used on: PersistentVolumeClaim
This annotation has been deprecated.

### volume.beta.kubernetes.io/storage-class (deprecated)

Example: `volume.beta.kubernetes.io/storage-class: "example-class"`

Used on: PersistentVolume, PersistentVolumeClaim

This annotation can be used for a PersistentVolume (PV) or PersistentVolumeClaim (PVC) to specify the name of a [StorageClass](/docs/concepts/storage/storage-classes/). When both the `storageClassName` attribute and the `volume.beta.kubernetes.io/storage-class` annotation are specified, the annotation `volume.beta.kubernetes.io/storage-class` takes precedence over the `storageClassName` attribute.

This annotation has been deprecated. Instead, set the [`storageClassName` field](/docs/concepts/storage/persistent-volumes/#class)
for the PersistentVolumeClaim or PersistentVolume.

### volume.beta.kubernetes.io/mount-options (deprecated) {#mount-options}

Example: `volume.beta.kubernetes.io/mount-options: "ro,soft"`
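As a sketch, a PVC using the modern field instead of the annotation (the names and sizes are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                  # illustrative name
spec:
  storageClassName: example-class    # preferred over the deprecated annotation
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```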
@ -14,4 +14,4 @@ Kubernetes documentation, including:
* [Node Metrics Data](/docs/reference/instrumentation/node-metrics).

* [CRI Pod & Container Metrics](/docs/reference/instrumentation/cri-pod-container-metrics).
@ -1,6 +1,6 @@
---
title: "Access Applications in a Cluster"
description: Configure load balancing, port forwarding, or set up firewall or DNS configurations to access applications in a cluster.
weight: 100
---
@ -1,6 +1,7 @@
---
title: Access Services Running on Clusters
content_type: task
weight: 140
---

<!-- overview -->
@ -1,7 +1,7 @@
---
title: Communicate Between Containers in the Same Pod Using a Shared Volume
content_type: task
weight: 120
---

<!-- overview -->
@ -1,6 +1,6 @@
---
title: Configure DNS for a Cluster
weight: 130
content_type: concept
---
@ -1,7 +1,7 @@
---
title: Set up Ingress on Minikube with the NGINX Ingress Controller
content_type: task
weight: 110
min-kubernetes-server-version: 1.19
---
@ -18,7 +18,7 @@ manually through [`easyrsa`](https://github.com/OpenVPN/easy-rsa), [`openssl`](h
1. Download, unpack, and initialize the patched version of `easyrsa3`.

   ```shell
   curl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz
   tar xzf easy-rsa.tar.gz
   cd easy-rsa-master/easyrsa3
   ./easyrsa init-pki
@ -8,7 +8,7 @@ weight: 50
<!-- overview -->

The `dockershim` component of Kubernetes allows the use of Docker as Kubernetes's
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
Kubernetes' built-in `dockershim` component was removed in release v1.24.
@ -40,11 +40,11 @@ dependency on Docker:
1. Third-party tools that perform the above-mentioned privileged operations. See
   [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
   for more information.
1. Make sure there are no indirect dependencies on dockershim behavior.
   This is an edge case and unlikely to affect your application. Some tooling may be configured
   to react to Docker-specific behaviors, for example, raise an alert on specific metrics or search for
   a specific log message as part of troubleshooting instructions.
   If you have such tooling configured, test the behavior on a test
   cluster before migration.

## Dependency on Docker explained {#role-of-dockershim}
@ -74,7 +74,7 @@ before to check on these containers is no longer available.
You cannot get container information using `docker ps` or `docker inspect`
commands. As you cannot list containers, you cannot get logs, stop containers,
or execute something inside a container using `docker exec`.
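If your nodes use a CRI runtime, the CRI-focused `crictl` tool covers parts of this workflow, though its flags and output differ from the Docker CLI; the container IDs below are placeholders:

```shell
crictl ps                           # list running containers
crictl logs <container-id>          # fetch container logs
crictl exec -it <container-id> sh   # run a command inside a container
```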
{{< note >}}
@ -161,7 +161,7 @@ kubectl config set-context prod --namespace=production \
  --user=lithe-cocoa-92103_kubernetes
```

By default, the above commands add two contexts that are saved into the file
`.kube/config`. You can now view the contexts and switch between the two
new request contexts, depending on which namespace you wish to work against.
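For example, to switch to the `prod` context created above:

```shell
kubectl config use-context prod
```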
@ -82,7 +82,7 @@ policies using an example application.
## Deploying Cilium for Production Use

For detailed instructions around deploying Cilium for production, see the
[Cilium Kubernetes Installation Guide](https://docs.cilium.io/en/stable/network/kubernetes/concepts/).
This documentation includes detailed requirements, instructions and example
production DaemonSet files.
@ -1,6 +1,6 @@
---
title: "Managing Secrets"
weight: 60
description: Managing confidential settings data using Secrets.
---
@ -1,6 +1,6 @@
---
title: "Configure Pods and Containers"
description: Perform common configuration tasks for Pods and containers.
weight: 30
---
@ -2,7 +2,7 @@
title: Assign Pods to Nodes using Node Affinity
min-kubernetes-server-version: v1.10
content_type: task
weight: 160
---

<!-- overview -->
@ -1,7 +1,7 @@
---
title: Assign Pods to Nodes
content_type: task
weight: 150
---

<!-- overview -->
@ -1,7 +1,6 @@
---
title: Attach Handlers to Container Lifecycle Events
content_type: task
weight: 180
---

<!-- overview -->
@ -1,7 +1,7 @@
---
title: Configure GMSA for Windows Pods and containers
content_type: task
weight: 30
---

<!-- overview -->
@ -1,7 +1,7 @@
---
title: Configure Liveness, Readiness and Startup Probes
content_type: task
weight: 140
---

<!-- overview -->
@ -1,7 +1,7 @@
---
title: Configure a Pod to Use a PersistentVolume for Storage
content_type: task
weight: 90
---

<!-- overview -->
@ -12,27 +12,24 @@ for storage.
Here is a summary of the process:

1. You, as cluster administrator, create a PersistentVolume backed by physical
   storage. You do not associate the volume with any Pod.

1. You, now taking the role of a developer / cluster user, create a
   PersistentVolumeClaim that is automatically bound to a suitable
   PersistentVolume.

1. You create a Pod that uses the above PersistentVolumeClaim for storage.

## {{% heading "prerequisites" %}}

* You need to have a Kubernetes cluster that has only one Node, and the
  {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}
  command-line tool must be configured to communicate with your cluster. If you
  do not already have a single-node cluster, you can create one by using
  [Minikube](https://minikube.sigs.k8s.io/docs/).

* Familiarize yourself with the material in
  [Persistent Volumes](/docs/concepts/storage/persistent-volumes/).

<!-- steps -->
@ -50,7 +47,6 @@ In your shell on that Node, create a `/mnt/data` directory:
sudo mkdir /mnt/data
```

In the `/mnt/data` directory, create an `index.html` file:

```shell
@ -71,6 +67,7 @@ cat /mnt/data/index.html
```

The output should be:

```
Hello from Kubernetes storage
```
@ -116,8 +113,10 @@ kubectl get pv task-pv-volume
The output shows that the PersistentVolume has a `STATUS` of `Available`. This
means it has not yet been bound to a PersistentVolumeClaim.

```
NAME             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO           Retain          Available           manual                  4s
```

## Create a PersistentVolumeClaim
@ -132,7 +131,9 @@ Here is the configuration file for the PersistentVolumeClaim:
Create the PersistentVolumeClaim:

```shell
kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml
```

After you create the PersistentVolumeClaim, the Kubernetes control plane looks
for a PersistentVolume that satisfies the claim's requirements. If the control
@ -147,8 +148,10 @@ kubectl get pv task-pv-volume
Now the output shows a `STATUS` of `Bound`.

```
NAME             CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO           Retain          Bound    default/task-pv-claim   manual                  2m
```

Look at the PersistentVolumeClaim:
@ -159,8 +162,10 @@ kubectl get pvc task-pv-claim
The output shows that the PersistentVolumeClaim is bound to your PersistentVolume,
`task-pv-volume`.

```
NAME            STATUS   VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO           manual         30s
```

## Create a Pod
@ -206,15 +211,16 @@ curl http://localhost/
The output shows the text that you wrote to the `index.html` file on the
hostPath volume:

```
Hello from Kubernetes storage
```

If you see that message, you have successfully configured a Pod to
use storage from a PersistentVolumeClaim.

## Clean up

Delete the Pod, the PersistentVolumeClaim and the PersistentVolume:

```shell
kubectl delete pod task-pv-pod
@ -242,8 +248,8 @@ You can now close the shell to your Node.
You can perform two volume mounts on your nginx container:

- `/usr/share/nginx/html` for the static website
- `/etc/nginx/nginx.conf` for the default config

<!-- discussion -->
@ -256,6 +262,7 @@ with a GID. Then the GID is automatically added to any Pod that uses the
PersistentVolume.

Use the `pv.beta.kubernetes.io/gid` annotation as follows:

```yaml
apiVersion: v1
kind: PersistentVolume
@ -264,6 +271,7 @@ metadata:
  annotations:
    pv.beta.kubernetes.io/gid: "1234"
```

When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID
is applied to all containers in the Pod in the same way that GIDs specified in the
Pod's security context are. Every GID, whether it originates from a PersistentVolume
@ -275,12 +283,8 @@ When a Pod consumes a PersistentVolume, the GIDs associated with the
PersistentVolume are not present on the Pod resource itself.
{{< /note >}}

## {{% heading "whatsnext" %}}

* Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/).
* Read the [Persistent Storage design document](https://git.k8s.io/design-proposals-archive/storage/persistent-storage.md).
@ -290,7 +294,3 @@ PersistentVolume are not present on the Pod resource itself.
* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core)
* [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)
@ -1,7 +1,7 @@
---
title: Configure a Pod to Use a ConfigMap
content_type: task
weight: 190
card:
  name: tasks
  weight: 50
@ -9,61 +9,92 @@ card:
<!-- overview -->

Many applications rely on configuration which is used during either application initialization or runtime.
Most times, there is a requirement to adjust values assigned to configuration parameters.
ConfigMaps are a Kubernetes mechanism that lets you inject configuration data into application
{{< glossary_tooltip text="pods" term_id="pod" >}}.

The ConfigMap concept allows you to decouple configuration artifacts from image content to
keep containerized applications portable. For example, you can download and run the same
{{< glossary_tooltip text="container image" term_id="image" >}} to spin up containers for
the purposes of local development, system test, or running a live end-user workload.

This page provides a series of usage examples demonstrating how to create ConfigMaps and
configure Pods using data stored in ConfigMaps.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

You need to have the `wget` tool installed. If you have a different tool
such as `curl`, and you do not have `wget`, you will need to adapt the
step that downloads example data.

<!-- steps -->

## Create a ConfigMap

You can use either `kubectl create configmap` or a ConfigMap generator in `kustomization.yaml`
to create a ConfigMap.

### Create a ConfigMap using `kubectl create configmap`

Use the `kubectl create configmap` command to create ConfigMaps from
[directories](#create-configmaps-from-directories), [files](#create-configmaps-from-files),
or [literal values](#create-configmaps-from-literal-values):

```shell
kubectl create configmap <map-name> <data-source>
```

where \<map-name> is the name you want to assign to the ConfigMap and \<data-source> is the
directory, file, or literal value to draw the data from.
The name of a ConfigMap object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

When you are creating a ConfigMap based on a file, the key in the \<data-source> defaults to
the basename of the file, and the value defaults to the file content.

You can use [`kubectl describe`](/docs/reference/generated/kubectl/kubectl-commands/#describe) or
[`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) to retrieve information
about a ConfigMap.

#### Create a ConfigMap from a directory {#create-configmaps-from-directories}

You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same
directory. When you are creating a ConfigMap based on a directory, kubectl identifies files
whose filename is a valid key in the directory and packages each of those files into the new
ConfigMap. Any directory entries except regular files are ignored (for example: subdirectories,
symlinks, devices, pipes, and more).

{{< note >}}
Each filename being used for ConfigMap creation must consist of only acceptable characters,
which are: letters (`A` to `Z` and `a` to `z`), digits (`0` to `9`), `-`, `_`, or `.`.
If you use `kubectl create configmap` with a directory where any of the file names contains
an unacceptable character, the `kubectl` command may fail.

The `kubectl` command does not print an error when it encounters an invalid filename.
{{< /note >}}

Create the local directory:

```shell
# Create the local directory
mkdir -p configure-pod-container/configmap/
```

Now, download the sample configuration and create the ConfigMap:

```shell
# Download the sample files into `configure-pod-container/configmap/` directory
wget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties
wget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties

# Create the ConfigMap
kubectl create configmap game-config --from-file=configure-pod-container/configmap/
```

The above command packages each file, in this case, `game.properties` and `ui.properties`
in the `configure-pod-container/configmap/` directory into the game-config ConfigMap. You can
display details of the ConfigMap using the following command:

```shell
kubectl describe configmaps game-config
@ -95,7 +126,8 @@ allow.textmode=true
how.nice.to.look=fairlyNice
```

The `game.properties` and `ui.properties` files in the `configure-pod-container/configmap/`
directory are represented in the `data` section of the ConfigMap.

```shell
kubectl get configmaps game-config -o yaml
@ -106,7 +138,7 @@ The output is similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2022-02-18T18:52:05Z
  name: game-config
  namespace: default
  resourceVersion: "516"
@ -129,7 +161,8 @@ data:
#### Create ConfigMaps from files

You can use `kubectl create configmap` to create a ConfigMap from an individual file, or from
multiple files.

For example,
@ -164,7 +197,8 @@ secret.code.allowed=true
secret.code.lives=30
```

You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple
data sources.

```shell
kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties --from-file=configure-pod-container/configmap/ui.properties
@ -203,9 +237,6 @@ allow.textmode=true
how.nice.to.look=fairlyNice
```

When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary data sources can be combined in one ConfigMap.
If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run `kubectl get configmap -o jsonpath='{.binaryData}' <name>`.

Use the option `--from-env-file` to create a ConfigMap from an env-file, for example:

```shell
@ -234,18 +265,18 @@ kubectl create configmap game-config-env-file \
  --from-env-file=configure-pod-container/configmap/game-env-file.properties
```

would produce a ConfigMap. View the ConfigMap:

```shell
kubectl get configmap game-config-env-file -o yaml
```

The output is similar to:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2019-12-27T18:36:28Z
  name: game-config-env-file
  namespace: default
  resourceVersion: "809965"
@ -276,7 +307,7 @@ where the output is similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2019-12-27T18:38:34Z
  name: config-multi-env-files
  namespace: default
  resourceVersion: "810136"
@ -292,13 +323,15 @@ data:
#### Define the key to use when creating a ConfigMap from a file

You can define a key other than the file name to use in the `data` section of your ConfigMap
when using the `--from-file` argument:

```shell
kubectl create configmap game-config-3 --from-file=<my-key-name>=<path-to-file>
```

where `<my-key-name>` is the key you want to use in the ConfigMap and `<path-to-file>` is the
location of the data source file you want the key to represent.

For example:
@ -316,7 +349,7 @@ where the output is similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2022-02-18T18:54:22Z
  name: game-config-3
  namespace: default
  resourceVersion: "530"
@ -334,13 +367,15 @@ data:
#### Create ConfigMaps from literal values

You can use `kubectl create configmap` with the `--from-literal` argument to define a literal
value from the command line:

```shell
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
```

You can pass in multiple key-value pairs. Each pair provided on the command line is represented
as a separate entry in the `data` section of the ConfigMap.

```shell
kubectl get configmaps special-config -o yaml
@ -351,7 +386,7 @@ The output is similar to this:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2022-02-18T19:14:38Z
  name: special-config
  namespace: default
  resourceVersion: "651"
@ -362,26 +397,33 @@ data:
```

### Create a ConfigMap from generator

You can also create a ConfigMap from generators and then apply it to create the object
in the cluster's API server.
You should specify the generators in a `kustomization.yaml` file within a directory.

#### Generate ConfigMaps from files

For example, to generate a ConfigMap from files `configure-pod-container/configmap/game.properties`:

```shell
# Create a kustomization.yaml file with ConfigMapGenerator
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: game-config-4
  labels:
    game-config: config-4
  files:
  - configure-pod-container/configmap/game.properties
EOF
```

Apply the kustomization directory to create the ConfigMap object:

```shell
kubectl apply -k .
```

```
configmap/game-config-4-m9dm2f92bt created
```
@ -389,14 +431,21 @@ You can check that the ConfigMap was created like this:
```shell
kubectl get configmap
```

```
NAME                       DATA   AGE
game-config-4-m9dm2f92bt   1      37s
```

and also:

```shell
kubectl describe configmaps/game-config-4-m9dm2f92bt
```

```
Name:         game-config-4-m9dm2f92bt
Namespace:    default
Labels:       game-config=config-4
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"game.properties":"enemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.p...
@ -414,10 +463,11 @@ secret.code.lives=30
Events: <none>
```

Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This
ensures that a new ConfigMap is generated each time the content is modified.

#### Define the key to use when generating a ConfigMap from a file

You can define a key other than the file name to use in the ConfigMap generator.
For example, to generate a ConfigMap from files `configure-pod-container/configmap/game.properties`
with the key `game-special-key`:
@ -427,6 +477,8 @@ with the key `game-special-key`
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: game-config-5
  labels:
    game-config: config-5
  files:
  - game-special-key=configure-pod-container/configmap/game.properties
EOF
@ -435,28 +487,51 @@ EOF
Apply the kustomization directory to create the ConfigMap object.

```shell
kubectl apply -k .
```

```
configmap/game-config-5-m67dt67794 created
```

#### Generate ConfigMaps from literals

This example shows you how to create a `ConfigMap` from two literal key/value pairs:
`special.type=charm` and `special.how=very`, using Kustomize and kubectl. To achieve
this, you can specify the `ConfigMap` generator. Create (or replace)
`kustomization.yaml` so that it has the following contents:

```yaml
---
# kustomization.yaml contents for creating a ConfigMap from literals
configMapGenerator:
- name: special-config-2
  literals:
  - special.how=very
  - special.type=charm
```

Apply the kustomization directory to create the ConfigMap object:

```shell
kubectl apply -k .
```

```
configmap/special-config-2-c92b5mmcf2 created
```

## Interim cleanup

Before proceeding, clean up some of the ConfigMaps you made:

```bash
kubectl delete configmap special-config
kubectl delete configmap env-config
kubectl delete configmap -l 'game-config in (config-4,config-5)'
```

Now that you have learned to define ConfigMaps, you can move on to the next
section, and learn how to use these objects with Pods.

---
## Define container environment variables using ConfigMap data

### Define a container environment variable with data from a single ConfigMap
@ -467,7 +542,8 @@ configmap/special-config-2-c92b5mmcf2 created
   kubectl create configmap special-config --from-literal=special.how=very
   ```

2. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY`
   environment variable in the Pod specification.

   {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}}
@ -481,11 +557,12 @@ configmap/special-config-2-c92b5mmcf2 created
### Define container environment variables with data from multiple ConfigMaps

As with the previous example, create the ConfigMaps first.
Here is the manifest you will use:

{{< codenew file="configmap/configmaps.yaml" >}}

* Create the ConfigMap:

  ```shell
  kubectl create -f https://kubernetes.io/examples/configmap/configmaps.yaml
@ -503,6 +580,11 @@ configmap/special-config-2-c92b5mmcf2 created
Now, the Pod's output includes environment variables `SPECIAL_LEVEL_KEY=very` and `LOG_LEVEL=INFO`.

Once you're happy to move on, delete that Pod:

```shell
kubectl delete pod dapi-test-pod --now
```

## Configure all key-value pairs in a ConfigMap as container environment variables

* Create a ConfigMap containing multiple key-value pairs.
@ -515,7 +597,8 @@ configmap/special-config-2-c92b5mmcf2 created
  kubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.yaml
  ```

* Use `envFrom` to define all of the ConfigMap's data as container environment variables. The
  key from the ConfigMap becomes the environment variable name in the Pod.

  {{< codenew file="pods/pod-configmap-envFrom.yaml" >}}
@ -524,35 +607,47 @@ configmap/special-config-2-c92b5mmcf2 created
  ```shell
  kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-envFrom.yaml
  ```

  Now, the Pod's output includes environment variables `SPECIAL_LEVEL=very` and
  `SPECIAL_TYPE=charm`.

  Once you're happy to move on, delete that Pod:

  ```shell
  kubectl delete pod dapi-test-pod --now
  ```

## Use ConfigMap-defined environment variables in Pod commands

You can use ConfigMap-defined environment variables in the `command` and `args` of a container
using the `$(VAR_NAME)` Kubernetes substitution syntax.

For example, the following Pod manifest:

{{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}}

Create that Pod, by running:

```shell
kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valueFrom.yaml
```

That pod produces the following output from the `test-container` container:

```
very charm
```

Once you're happy to move on, delete that Pod:

```shell
kubectl delete pod dapi-test-pod --now
```

## Add ConfigMap data to a Volume

As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you create
a ConfigMap using `--from-file`, the filename becomes a key stored in the `data` section of
the ConfigMap. The file contents become the key's value.

The examples in this section refer to a ConfigMap named `special-config`:

{{< codenew file="configmap/configmap-multikeys.yaml" >}}
@ -565,8 +660,9 @@ kubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.y

### Populate a Volume with data stored in a ConfigMap

Add the ConfigMap name under the `volumes` section of the Pod specification.
This adds the ConfigMap data to the directory specified as `volumeMounts.mountPath` (in this
case, `/etc/config`). The `command` section lists directory files with names that match the
keys in the ConfigMap.

{{< codenew file="pods/pod-configmap-volume.yaml" >}}

@ -583,14 +679,20 @@ SPECIAL_LEVEL

SPECIAL_TYPE
```

Text data is exposed as files using the UTF-8 character encoding. To use some other
character encoding, use `binaryData`
(see [ConfigMap object](/docs/concepts/configuration/configmap/#configmap-object) for more details).

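As a minimal sketch (the ConfigMap name and payload below are illustrative, not from the examples repository), a ConfigMap carrying non-UTF-8 content looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-binary-config  # hypothetical name
binaryData:
  # values under binaryData are base64-encoded bytes, decoded as-is when mounted
  app.bin: SGVsbG8gaW4gYmluYXJ5IGZvcm0=
```
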
{{< note >}}
If there are any files in the `/etc/config` directory of that container image, the volume
mount will make those files from the image inaccessible.
{{< /note >}}

Once you're happy to move on, delete that Pod:

```shell
kubectl delete pod dapi-test-pod --now
```

### Add ConfigMap data to a specific path in the Volume

Use the `path` field to specify the desired file path for specific ConfigMap items.

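If you don't have the referenced example to hand, a sketch of the idea follows (it reuses the `special-config` ConfigMap from above; the target path `keys` and the busybox image tag are illustrative). The `items` list projects only the `SPECIAL_LEVEL` key, writing it to `/etc/config/keys`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: registry.k8s.io/busybox
      command: ["/bin/sh", "-c", "cat /etc/config/keys"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
          # project only SPECIAL_LEVEL, and write it to /etc/config/keys
          - key: SPECIAL_LEVEL
            path: keys
  restartPolicy: Never
```

With the `special-config` ConfigMap shown earlier, the container prints `very`.
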
@ -614,24 +716,63 @@ very

Like before, all previous files in the `/etc/config/` directory will be deleted.
{{< /caution >}}

Delete that Pod:

```shell
kubectl delete pod dapi-test-pod --now
```

### Project keys to specific paths and file permissions

You can project keys to specific paths and specific permissions on a per-file
basis. The
[Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod)
guide explains the syntax.

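The shape of it, as a sketch of a Pod's `volumes` entry rather than a complete manifest (names and the permission value are illustrative):

```yaml
volumes:
  - name: config-volume
    configMap:
      name: special-config
      items:
        - key: SPECIAL_LEVEL
          path: keys
          mode: 0400  # this projected file is readable only by the owner
```
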
### Optional references

A ConfigMap reference may be marked _optional_. If the ConfigMap is non-existent, the mounted
volume will be empty. If the ConfigMap exists, but the referenced key is non-existent, the path
will be absent beneath the mount point. See [Optional ConfigMaps](#optional-configmaps) for more
details.

### Mounted ConfigMaps are updated automatically

When a mounted ConfigMap is updated, the projected content is eventually updated too.
This applies in the case where an optionally referenced ConfigMap comes into
existence after a pod has started.

The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However,
it uses its local TTL-based cache for getting the current value of the ConfigMap. As a
result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the pod can be as long as the kubelet sync period (1
minute by default) plus the TTL of the ConfigMaps cache (1 minute by default) in the kubelet. You
can trigger an immediate refresh by updating one of the pod's annotations.

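For example, a sketch of that trigger (the annotation key here is made up; the point is that changing any pod annotation prompts the kubelet to re-sync):

```shell
kubectl annotate pod dapi-test-pod refresh-trigger="$(date +%s)" --overwrite
```
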
{{< note >}}
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath)
volume will not receive ConfigMap updates.
{{< /note >}}

<!-- discussion -->

## Understanding ConfigMaps and Pods

The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed
in pods or provide the configurations for system components such as controllers. ConfigMap is
similar to [Secrets](/docs/concepts/configuration/secret/), but provides a means of working
with strings that don't contain sensitive information. Users and system components alike can
store configuration data in ConfigMap.

{{< note >}}
ConfigMaps should reference properties files, not replace them. Think of the ConfigMap as
representing something similar to the Linux `/etc` directory and its contents. For example,
if you create a [Kubernetes Volume](/docs/concepts/storage/volumes/) from a ConfigMap, each
data item in the ConfigMap is represented by an individual file in the volume.
{{< /note >}}

The ConfigMap's `data` field contains the configuration data. As shown in the example below,
this can be simple (like individual properties defined using `--from-literal`) or complex
(like configuration files or JSON blobs defined using `--from-file`).

```yaml
apiVersion: v1

@ -651,33 +792,24 @@ data:

property.3=value-3
```

When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts
these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary
data sources can be combined in one ConfigMap.

If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run
`kubectl get configmap -o jsonpath='{.binaryData}' <name>`.

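For instance (the ConfigMap name and input file here are hypothetical):

```shell
# a non-UTF-8 input such as an image ends up under binaryData
kubectl create configmap binary-demo --from-file=logo.png

# inspect the binaryData keys and their base64 values
kubectl get configmap -o jsonpath='{.binaryData}' binary-demo
```
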
Pods can load data from a ConfigMap that uses either `data` or `binaryData`.

## Optional ConfigMaps

You can mark a reference to a ConfigMap as _optional_ in a Pod specification.
If the ConfigMap doesn't exist, the configuration for which it provides data in the Pod
(for example: environment variable, mounted volume) will be empty.
If the ConfigMap exists, but the referenced key is non-existent, the data is also empty.

For example, the following Pod specification marks an environment variable from a ConfigMap
as optional:

```yaml
apiVersion: v1

@ -688,7 +820,7 @@ spec:

  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: ["/bin/sh", "-c", "env"]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:

@ -704,8 +836,9 @@ If you run this pod, and there is a ConfigMap named `a-config` but that ConfigMa

a key named `akey`, the output is also empty. If you do set a value for `akey` in the `a-config`
ConfigMap, this pod prints that value and then terminates.

You can also mark the volumes and files provided by a ConfigMap as optional. Kubernetes always
creates the mount paths for the volume, even if the referenced ConfigMap or key doesn't exist. For
example, the following Pod specification marks a volume that references a ConfigMap as optional:

```yaml
apiVersion: v1

@ -716,7 +849,7 @@ spec:

  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: ["/bin/sh", "-c", "ls /etc/config"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config

@ -730,17 +863,70 @@ spec:

### Mounted ConfigMaps are updated automatically

When a mounted ConfigMap is updated, the projected content is eventually updated too.
This applies in the case where an optionally referenced ConfigMap comes into existence after
a pod has started.

The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it
uses its local TTL-based cache for getting the current value of the ConfigMap. As a result,
the total delay from the moment when the ConfigMap is updated to the moment when new keys
are projected to the pod can be as long as the kubelet sync period (1 minute by default) plus
the TTL of the ConfigMaps cache (1 minute by default) in the kubelet.

{{< note >}}
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath)
volume will not receive ConfigMap updates.
{{< /note >}}

## Restrictions

- You must create the `ConfigMap` object before you reference it in a Pod
  specification. Alternatively, mark the ConfigMap reference as `optional` in the Pod spec (see
  [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist
  and you don't mark the reference as `optional`, the Pod won't start. Similarly, references
  to keys that don't exist in the ConfigMap will also prevent the Pod from starting, unless
  you mark the key references as `optional`.

- If you use `envFrom` to define environment variables from ConfigMaps, keys that are considered
  invalid will be skipped. The pod will be allowed to start, but the invalid names will be
  recorded in the event log (`InvalidEnvironmentVariableNames`). The log message lists each
  skipped key. For example:

  ```shell
  kubectl get events
  ```

  The output is similar to this:

  ```
  LASTSEEN FIRSTSEEN COUNT NAME          KIND  SUBOBJECT  TYPE      REASON                            SOURCE                MESSAGE
  0s       0s        1     dapi-test-pod Pod              Warning   InvalidEnvironmentVariableNames   {kubelet, 127.0.0.1}  Keys [1badkey, 2alsobad] from the EnvFrom configMap default/myconfig were skipped since they are considered invalid environment variable names.
  ```

- ConfigMaps reside in a specific {{< glossary_tooltip term_id="namespace" >}}.
  Pods can only refer to ConfigMaps that are in the same namespace as the Pod.

- You can't use ConfigMaps for
  {{< glossary_tooltip text="static pods" term_id="static-pod" >}}, because the
  kubelet does not support this.

## {{% heading "cleanup" %}}

Delete the ConfigMaps and Pods that you made:

```bash
kubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \
  configmaps/game-config-env-file
kubectl delete pod dapi-test-pod --now

# You might already have removed the next set
kubectl delete configmaps/special-config configmaps/env-config
kubectl delete configmap -l 'game-config in (config-4,config-5)'
```

If you created a directory `configure-pod-container` and no longer need it, you should remove that too,
or move it into the trash can / deleted files location.

## {{% heading "whatsnext" %}}

* Follow a real world example of
  [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/).

@ -1,22 +1,18 @@

---
title: Configure Pod Initialization
content_type: task
weight: 170
---

<!-- overview -->

This page shows how to use an Init Container to initialize a Pod before an
application Container runs.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

## Create a Pod that has an Init Container

@ -37,55 +33,63 @@ shared Volume at `/work-dir`, and the application container mounts the shared

Volume at `/usr/share/nginx/html`. The init container runs the following command
and then terminates:

```shell
wget -O /work-dir/index.html http://info.cern.ch
```

Notice that the init container writes the `index.html` file in the root directory
of the nginx server.

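If you don't have the example manifest open, its shape is roughly the following (a sketch reconstructed from the description above, not a verbatim copy of `pods/init-containers.yaml`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
    # runs to completion before the app container starts
    - name: install
      image: busybox:1.28
      command: ["wget", "-O", "/work-dir/index.html", "http://info.cern.ch"]
      volumeMounts:
        - name: workdir
          mountPath: /work-dir
  containers:
    # serves the file the init container downloaded
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  volumes:
    # emptyDir volume shared by both containers
    - name: workdir
      emptyDir: {}
```
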
Create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/init-containers.yaml
```

Verify that the nginx container is running:

```shell
kubectl get pod init-demo
```

The output shows that the nginx container is running:

```
NAME        READY     STATUS    RESTARTS   AGE
init-demo   1/1       Running   0          1m
```

Get a shell into the nginx container running in the init-demo Pod:

```shell
kubectl exec -it init-demo -- /bin/bash
```

In your shell, send a GET request to the nginx server:

```
root@nginx:~# apt-get update
root@nginx:~# apt-get install curl
root@nginx:~# curl localhost
```

The output shows that nginx is serving the web page that was written by the init container:

```html
<html><head></head><body><header>
  <title>http://info.cern.ch</title>
</header>

<h1>http://info.cern.ch - home of the first website</h1>
...
<li><a href="http://info.cern.ch/hypertext/WWW/TheProject.html">Browse the first website</a></li>
...
```

## {{% heading "whatsnext" %}}

* Learn more about
  [communicating between Containers running in the same Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/).
* Learn more about [Init Containers](/docs/concepts/workloads/pods/init-containers/).
* Learn more about [Volumes](/docs/concepts/storage/volumes/).
* Learn more about [Debugging Init Containers](/docs/tasks/debug/debug-application/debug-init-containers/).

@ -4,7 +4,7 @@ reviewers:

- pmorie
title: Configure a Pod to Use a Projected Volume for Storage
content_type: task
weight: 100
---

<!-- overview -->

@ -1,7 +1,7 @@

---
title: Configure RunAsUserName for Windows pods and containers
content_type: task
weight: 40
---

<!-- overview -->

@ -5,7 +5,7 @@ reviewers:

- thockin
title: Configure Service Accounts for Pods
content_type: task
weight: 120
---

Kubernetes offers two distinct ways for clients that run within your

@ -1,7 +1,7 @@

---
title: Configure a Pod to Use a Volume for Storage
content_type: task
weight: 80
---

<!-- overview -->

@ -1,7 +1,7 @@

---
title: Create a Windows HostProcess Pod
content_type: task
weight: 50
min-kubernetes-server-version: 1.23
---

@ -4,6 +4,7 @@ reviewers:

- tallclair
- liggitt
content_type: task
weight: 240
---

Kubernetes provides a built-in [admission controller](/docs/reference/access-authn-authz/admission-controllers/#podsecurity)

@ -4,6 +4,7 @@ reviewers:

- tallclair
- liggitt
content_type: task
weight: 250
---

Namespaces can be labeled to enforce the [Pod Security Standards](/docs/concepts/security/pod-security-standards). The three policies

@ -1,7 +1,7 @@

---
title: Assign Extended Resources to a Container
content_type: task
weight: 70
---

<!-- overview -->

@ -5,6 +5,7 @@ reviewers:

- liggitt
content_type: task
min-kubernetes-server-version: v1.22
weight: 260
---

<!-- overview -->

@ -1,7 +1,6 @@

---
title: Pull an Image from a Private Registry
content_type: task
weight: 130
---

<!-- overview -->

@ -1,7 +1,7 @@

---
title: Configure Quality of Service for Pods
content_type: task
weight: 60
---

@ -5,7 +5,7 @@ reviewers:

- thockin
title: Configure a Security Context for a Pod or Container
content_type: task
weight: 110
---

<!-- overview -->

@ -442,7 +442,7 @@ To assign SELinux labels, the SELinux security module must be loaded on the host

{{< feature-state for_k8s_version="v1.25" state="alpha" >}}

By default, the container runtime recursively assigns the SELinux label to all
files on all Pod volumes. To speed up this process, Kubernetes can change the
SELinux label of a volume instantly by using a mount option
`-o context=<label>`.

@ -5,7 +5,7 @@ reviewers:

- yujuhong
- dchen1107
content_type: task
weight: 200
---

<!-- overview -->

@ -2,7 +2,7 @@

reviewers:
- jsafrane
title: Create static Pods
weight: 220
content_type: task
---

@ -3,7 +3,7 @@ reviewers:

- cdrage
title: Translate a Docker Compose File to Kubernetes Resources
content_type: task
weight: 230
---

<!-- overview -->

@ -2,7 +2,7 @@

title: Use a User Namespace With a Pod
reviewers:
content_type: task
weight: 210
min-kubernetes-server-version: v1.25
---

@ -1,7 +1,7 @@

---
title: "Monitoring, Logging, and Debugging"
description: Set up monitoring and logging to troubleshoot a cluster, or debug a containerized application.
weight: 40
reviewers:
- brendandburns
- davidopp

@ -1,6 +1,6 @@

---
title: "Extend Kubernetes"
description: Understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment.
weight: 110
---

@ -1,6 +1,6 @@

---
title: "Inject Data Into Applications"
description: Specify configuration and other data for the Pods that run your workload.
weight: 70
---

@ -1,6 +1,6 @@

---
title: "Run Jobs"
description: Run Jobs using parallel processing.
weight: 90
---

@ -1,5 +1,5 @@

---
title: "Manage Kubernetes Objects"
description: Declarative and imperative paradigms for interacting with the Kubernetes API.
weight: 50
---

@ -1,6 +1,6 @@

---
title: "Networking"
description: Learn how to configure networking for your cluster.
weight: 140
---

@ -1,6 +1,6 @@

---
title: "Run Applications"
description: Run and manage both stateless and stateful applications.
weight: 80
---

@ -27,15 +27,18 @@ libraries can automatically discover the API server and authenticate.

From within a Pod, the recommended ways to connect to the Kubernetes API are:

- For a Go client, use the official
  [Go client library](https://github.com/kubernetes/client-go/).
  The `rest.InClusterConfig()` function handles API host discovery and authentication automatically.
  See [an example here](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go).

- For a Python client, use the official
  [Python client library](https://github.com/kubernetes-client/python/).
  The `config.load_incluster_config()` function handles API host discovery and authentication automatically.
  See [an example here](https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py).

- There are a number of other libraries available; please refer to the
  [Client Libraries](/docs/reference/using-api/client-libraries/) page.

In each case, the service account credentials of the Pod are used to communicate
securely with the API server.

@ -50,7 +53,7 @@ Service named `kubernetes` in the `default` namespace so that pods may reference

{{< note >}}
Kubernetes does not guarantee that the API server has a valid certificate for
the hostname `kubernetes.default.svc`;
however, the control plane **is** expected to present a valid certificate for the
hostname or IP address that `$KUBERNETES_SERVICE_HOST` represents.
{{< /note >}}

@ -80,7 +83,7 @@ in the Pod can use it directly.

### Without using a proxy

It is possible to avoid using the kubectl proxy by passing the authentication token
directly to the API server. The internal certificate secures the connection.

```shell
# Point to the internal API server hostname

@ -107,9 +110,7 @@ The output will be similar to this:

```json
{
  "kind": "APIVersions",
  "versions": ["v1"],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",

@ -14,21 +14,18 @@ that your application experiences, allowing for higher availability

while permitting the cluster administrator to manage the cluster's
nodes.

## {{% heading "prerequisites" %}}

{{< version-check >}}

- You are the owner of an application running on a Kubernetes cluster that requires
  high availability.
- You should know how to deploy [Replicated Stateless Applications](/docs/tasks/run-application/run-stateless-application-deployment/)
  and/or [Replicated Stateful Applications](/docs/tasks/run-application/run-replicated-stateful-application/).
- You should have read about [Pod Disruptions](/docs/concepts/workloads/pods/disruptions/).
- You should confirm with your cluster owner or service provider that they respect
  Pod Disruption Budgets.

<!-- steps -->

## Protecting an Application with a PodDisruptionBudget

@ -38,8 +35,6 @@ nodes.

1. Create a PDB definition as a YAML file.
1. Create the PDB object from the YAML file.

<!-- discussion -->

## Identify an Application to Protect

@ -61,29 +56,28 @@ You can also use PDBs with pods which are not controlled by one of the above

controllers, or arbitrary groups of pods, but there are some restrictions,
described in [Arbitrary Controllers and Selectors](#arbitrary-controllers-and-selectors).

## Think about how your application reacts to disruptions

Decide how many instances can be down at the same time for a short period
due to a voluntary disruption.

- Stateless frontends:
  - Concern: don't reduce serving capacity by more than 10%.
  - Solution: use PDB with minAvailable 90%, for example (see the sketch after this list).
- Single-instance Stateful Application:
  - Concern: do not terminate this application without talking to me.
  - Possible Solution 1: Do not use a PDB and tolerate occasional downtime.
  - Possible Solution 2: Set PDB with maxUnavailable=0. Have an understanding
    (outside of Kubernetes) that the cluster operator needs to consult you before
    termination. When the cluster operator contacts you, prepare for downtime,
    and then delete the PDB to indicate readiness for disruption. Recreate afterwards.
- Multiple-instance Stateful application such as Consul, ZooKeeper, or etcd:
  - Concern: Do not reduce number of instances below quorum, otherwise writes fail.
  - Possible Solution 1: set maxUnavailable to 1 (works with varying scale of application).
  - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5). (Allows more disruptions at once.)
- Restartable Batch Job:
  - Concern: Job needs to complete in case of voluntary disruption.
  - Possible solution: Do not create a PDB. The Job controller will create a replacement pod.

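A sketch of that stateless-frontend case (the name and labels are illustrative, and percentages are quoted strings in YAML):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb      # hypothetical name
spec:
  minAvailable: "90%"     # keep at least 90% of matching pods available
  selector:
    matchLabels:
      app: frontend       # must match your frontend pods' labels
```
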
### Rounding logic when specifying percentages
|
||||
|
||||
|
@ -92,27 +86,29 @@ Values for `minAvailable` or `maxUnavailable` can be expressed as integers or as

- When you specify an integer, it represents a number of Pods. For instance, if you set `minAvailable` to 10, then 10
  Pods must always be available, even during a disruption.
- When you specify a percentage by setting the value to a string representation of a percentage (e.g. `"50%"`), it represents a percentage of
  total Pods. For instance, if you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available during a
  disruption.

When you specify the value as a percentage, it may not map to an exact number of Pods. For example, if you have 7 Pods and
you set `minAvailable` to `"50%"`, it's not immediately obvious whether that means 3 Pods or 4 Pods must be available.
Kubernetes rounds up to the nearest integer, so in this case, 4 Pods must be available. When you specify the value
`maxUnavailable` as a percentage, Kubernetes rounds up the number of Pods that may be disrupted. Thereby a disruption
can exceed your defined `maxUnavailable` percentage. You can examine the
[code](https://github.com/kubernetes/kubernetes/blob/23be9587a0f8677eb8091464098881df939c44a9/pkg/controller/disruption/disruption.go#L539)
that controls this behavior.

## Specifying a PodDisruptionBudget

A `PodDisruptionBudget` has three fields:

- A label selector `.spec.selector` to specify the set of
  pods to which it applies. This field is required.
- `.spec.minAvailable` which is a description of the number of pods from that
  set that must still be available after the eviction, even in the absence
  of the evicted pod. `minAvailable` can be either an absolute number or a percentage.
- `.spec.maxUnavailable` (available in Kubernetes 1.7 and higher) which is a description
  of the number of pods from that set that can be unavailable after the eviction.
  It can be either an absolute number or a percentage.

{{< note >}}
The behavior for an empty selector differs between the policy/v1beta1 and policy/v1 APIs for

@ -120,8 +116,8 @@ PodDisruptionBudgets. For policy/v1beta1 an empty selector matches zero pods, wh

for policy/v1 an empty selector matches every pod in the namespace.
{{< /note >}}

You can specify only one of `maxUnavailable` and `minAvailable` in a single `PodDisruptionBudget`.
`maxUnavailable` can only be used to control the eviction of pods
that have an associated controller managing them. In the examples below, "desired replicas"
is the `scale` of the controller managing the pods being selected by the
`PodDisruptionBudget`.

@ -130,20 +126,22 @@ Example 1: With a `minAvailable` of 5, evictions are allowed as long as they lea

5 or more [healthy](#healthiness-of-a-pod) pods among those selected by the PodDisruptionBudget's `selector`.

Example 2: With a `minAvailable` of 30%, evictions are allowed as long as at least 30%
of the number of desired replicas are healthy.

Example 3: With a `maxUnavailable` of 5, evictions are allowed as long as there are at most 5
unhealthy replicas among the total number of desired replicas.

Example 4: With a `maxUnavailable` of 30%, evictions are allowed as long as the number of
unhealthy replicas does not exceed 30% of the total number of desired replicas, rounded up to
the nearest integer. If the total number of desired replicas is just one, that single replica
is still allowed for disruption, leading to an effective unavailability of 100%.

In typical usage, a single budget would be used for a collection of pods managed by
a controller—for example, the pods in a single ReplicaSet or StatefulSet.

{{< note >}}
A disruption budget does not truly guarantee that the specified
number/percentage of pods will always be up. For example, a node that hosts a
pod from the collection may fail when the collection is at the minimum size
specified in the budget, thus bringing the number of available pods from the
collection below the specified size. The budget can only protect against

@ -156,7 +154,7 @@ object such as ReplicaSet, then you cannot successfully drain a Node running one

If you try to drain a Node where an unevictable Pod is running, the drain never completes. This is permitted as per the
semantics of `PodDisruptionBudget`.

You can find examples of pod disruption budgets defined below. They match pods with the label
`app: zookeeper`.

Example PDB Using minAvailable:

@ -246,8 +244,8 @@ on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/

A PodDisruptionBudget guarding an application ensures that the number of healthy pods
(`.status.currentHealthy`) does not fall below the number specified in `.status.desiredHealthy`,
by disallowing eviction of healthy pods.
By using `.spec.unhealthyPodEvictionPolicy`, you can also define the criteria when unhealthy pods
should be considered for eviction. The default behavior when no policy is specified corresponds
to the `IfHealthyBudget` policy.

Policies:

@ -287,6 +285,6 @@ You can use a PDB with pods controlled by another type of controller, by an

- only an integer value can be used with `.spec.minAvailable`, not a percentage.

You can use a selector which selects a subset or superset of the pods belonging to a built-in
controller. The eviction API will disallow eviction of any pod covered by multiple PDBs,
so most users will want to avoid overlapping selectors. One reasonable use of overlapping
PDBs is when pods are being transitioned from one PDB to another.

@ -14,14 +14,9 @@ weight: 60

This task shows you how to delete a {{< glossary_tooltip term_id="StatefulSet" >}}.

## {{% heading "prerequisites" %}}

- This task assumes you have an application running on your cluster represented by a StatefulSet.

<!-- steps -->

@ -82,13 +77,6 @@ In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; su

If you find that some pods in your StatefulSet are stuck in the 'Terminating' or 'Unknown' states
for an extended period of time, you may need to manually intervene to forcefully delete the pods
from the apiserver. This is a potentially dangerous task. Refer to
[Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/) for details.

## {{% heading "whatsnext" %}}

Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).

@ -10,24 +10,33 @@ weight: 70

---

<!-- overview -->

This page shows how to delete Pods which are part of a
{{< glossary_tooltip text="stateful set" term_id="StatefulSet" >}},
and explains the considerations to keep in mind when doing so.

## {{% heading "prerequisites" %}}

- This is a fairly advanced task and has the potential to violate some of the properties
  inherent to StatefulSet.
- Before proceeding, make yourself familiar with the considerations enumerated below.

<!-- steps -->

## StatefulSet considerations

In normal operation of a StatefulSet, there is **never** a need to force delete a StatefulSet Pod.
The [StatefulSet controller](/docs/concepts/workloads/controllers/statefulset/) is responsible for
creating, scaling and deleting members of the StatefulSet. It tries to ensure that the specified
number of Pods from ordinal 0 through N-1 are alive and ready. StatefulSet ensures that, at any time,
there is at most one Pod with a given identity running in a cluster. This is referred to as
*at most one* semantics provided by a StatefulSet.

Manual force deletion should be undertaken with caution, as it has the potential to violate the
at most one semantics inherent to StatefulSet. StatefulSets may be used to run distributed and
clustered applications which have a need for a stable network identity and stable storage.
These applications often have configuration which relies on an ensemble of a fixed number of
members with fixed identities. Having multiple members with the same identity can be disastrous
and may lead to data loss (e.g. split brain scenario in quorum-based systems).

## Delete Pods

@ -51,19 +60,33 @@ Pods may also enter these states when the user attempts graceful deletion of a P

on an unreachable Node.
The only ways in which a Pod in such a state can be removed from the apiserver are as follows:

- The Node object is deleted (either by you, or by the
  [Node Controller](/docs/concepts/architecture/nodes/#node-controller)).
- The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry
  from the apiserver.
- Force deletion of the Pod by the user.

The recommended best practice is to use the first or second approach. If a Node is confirmed
to be dead (e.g. permanently disconnected from the network, powered down, etc.), then delete
the Node object. If the Node is suffering from a network partition, then try to resolve this
or wait for it to resolve. When the partition heals, the kubelet will complete the deletion
of the Pod and free up its name in the apiserver.

Normally, the system completes the deletion once the Pod is no longer running on a Node, or
the Node is deleted by an administrator. You may override this by force deleting the Pod.

### Force Deletion

Force deletions **do not** wait for confirmation from the kubelet that the Pod has been terminated.
Irrespective of whether a force deletion is successful in killing a Pod, it will immediately
free up the name from the apiserver. This would let the StatefulSet controller create a replacement
Pod with that same identity; this can lead to the duplication of a still-running Pod,
and if said Pod can still communicate with the other members of the StatefulSet,
will violate the at most one semantics that StatefulSet is designed to guarantee.

When you force delete a StatefulSet pod, you are asserting that the Pod in question will never
again make contact with other Pods in the StatefulSet and its name can be safely freed up for a
replacement to be created.

If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:

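The full command is elided by the hunk boundary below; for reference, its usual form is the following sketch, with your Pod's name substituted (the next hunk confirms that kubectl <= 1.4 omits `--force`):

```shell
kubectl delete pods <pod> --grace-period=0 --force
```
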
@ -77,7 +100,8 @@ If you're using any version of kubectl <= 1.4, you should omit the `--force` opt

```shell
kubectl delete pods <pod> --grace-period=0
```

If even after these commands the pod is stuck in the `Unknown` state, use the following command to
remove the pod from the cluster:

```shell
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
```

@ -85,11 +109,6 @@ kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'

Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.

## {{% heading "whatsnext" %}}

Learn more about [debugging a StatefulSet](/docs/tasks/debug/debug-application/debug-statefulset/).

@ -79,17 +79,17 @@ Kubernetes implements horizontal pod autoscaling as a control loop that runs int

(and the default interval is 15 seconds).

Once during each period, the controller manager queries the resource utilization against the
metrics specified in each HorizontalPodAutoscaler definition. The controller manager
finds the target resource defined by the `scaleTargetRef`,
then selects the pods based on the target resource's `.spec.selector` labels, and obtains the metrics from either the resource metrics API (for per-pod resource metrics),
or the custom metrics API (for all other metrics).

- For per-pod resource metrics (like CPU), the controller fetches the metrics
  from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler.
  Then, if a target utilization value is set, the controller calculates the utilization
  value as a percentage of the equivalent
  [resource request](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits)
  on the containers in each Pod. If a target raw value is set, the raw metric values are used directly.
  The controller then takes the mean of the utilization or the raw value (depending on the type
  of target specified) across all targeted Pods, and produces a ratio used to scale
  the number of desired replicas.

@ -99,10 +99,10 @@ or the custom metrics API (for all other metrics).

not take any action for that metric. See the [algorithm details](#algorithm-details) section below
for more information about how the autoscaling algorithm works.

- For per-pod custom metrics, the controller functions similarly to per-pod resource metrics,
  except that it works with raw values, not utilization values.

- For object metrics and external metrics, a single metric is fetched, which describes
  the object in question. This metric is compared to the target
  value, to produce a ratio as above. In the `autoscaling/v2` API
  version, this value can optionally be divided by the number of Pods before the

@ -110,7 +110,7 @@ or the custom metrics API (for all other metrics).

The common use for HorizontalPodAutoscaler is to configure it to fetch metrics from
{{< glossary_tooltip text="aggregated APIs" term_id="aggregation-layer" >}}
(`metrics.k8s.io`, `custom.metrics.k8s.io`, or `external.metrics.k8s.io`). The `metrics.k8s.io` API is
usually provided by an add-on named Metrics Server, which needs to be launched separately.
For more information about resource metrics, see
[Metrics Server](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server).

@ -137,7 +137,7 @@ desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricVal

For example, if the current metric value is `200m`, and the desired value
is `100m`, the number of replicas will be doubled, since `200.0 / 100.0 ==
2.0`. If the current value is instead `50m`, you'll halve the number of
replicas, since `50.0 / 100.0 == 0.5`. The control plane skips any scaling
action if the ratio is sufficiently close to 1.0 (within a globally-configurable
tolerance, 0.1 by default).

@ -156,7 +156,7 @@ If a particular Pod is missing metrics, it is set aside for later; Pods

with missing metrics will be used to adjust the final scaling amount.

When scaling on CPU, if any pod has yet to become ready (it's still
initializing, or possibly is unhealthy) _or_ the most recent metric point for
the pod was before it became ready, that pod is set aside as well.

Due to technical constraints, the HorizontalPodAutoscaler controller

@ -165,7 +165,7 @@ determining whether to set aside certain CPU metrics. Instead, it

considers a Pod "not yet ready" if it's unready and transitioned to
ready within a short, configurable window of time since it started.
This value is configured with the `--horizontal-pod-autoscaler-initial-readiness-delay` flag, and its default is 30
seconds. Once a pod has become ready, it considers any transition to
ready to be the first if it occurred within a longer, configurable time
since it started. This value is configured with the `--horizontal-pod-autoscaler-cpu-initialization-period` flag, and its
default is 5 minutes.

@ -175,7 +175,7 @@ calculated using the remaining pods not set aside or discarded from above.

If there were any missing metrics, the control plane recomputes the average more
conservatively, assuming those pods were consuming 100% of the desired
value in case of a scale down, and 0% in case of a scale up. This dampens
the magnitude of any potential scale.

Furthermore, if any not-yet-ready pods were present, and the workload would have

@ -184,12 +184,12 @@ the controller conservatively assumes that the not-yet-ready pods are consuming
of the desired metric, further dampening the magnitude of a scale up.

After factoring in the not-yet-ready pods and missing metrics, the
controller recalculates the usage ratio. If the new ratio reverses the scale
direction, or is within the tolerance, the controller doesn't take any scaling
action. In other cases, the new ratio is used to decide any change to the
number of Pods.

Note that the _original_ value for the average utilization is reported
back via the HorizontalPodAutoscaler status, without factoring in the
not-yet-ready pods or missing metrics, even when the new usage ratio is
used.

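To see the utilization value the controller is reporting, you can inspect the HPA's status; for example (`<hpa-name>` is a placeholder):

```shell
# Shows current metrics, conditions, and recent scaling events
kubectl describe hpa <hpa-name>
```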
@ -203,7 +203,7 @@ can be fetched, scaling is skipped. This means that the HPA is still capable
of scaling up if one or more metrics give a `desiredReplicas` greater than
the current value.

Finally, right before HPA scales the target, the scale recommendation is recorded. The
controller considers all recommendations within a configurable window choosing the
highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
This means that scaledowns will occur gradually, smoothing out the impact of rapidly

@ -212,7 +212,7 @@ fluctuating metric values.
## API Object

The Horizontal Pod Autoscaler is an API resource in the Kubernetes
`autoscaling` API group. The current stable version can be found in
the `autoscaling/v2` API version which includes support for scaling on
memory and custom metrics. The new fields introduced in
`autoscaling/v2` are preserved as annotations when working with

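As a concrete illustration, a minimal `autoscaling/v2` object might look like the following sketch; the Deployment name `php-apache` and the 50% CPU target are placeholders, not values this page prescribes:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
EOF
```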
@ -227,10 +227,8 @@ More details about the API object can be found at
When managing the scale of a group of replicas using the HorizontalPodAutoscaler,
it is possible that the number of replicas keeps fluctuating frequently due to the
dynamic nature of the metrics evaluated. This is sometimes referred to as _thrashing_,
or _flapping_. It's similar to the concept of _hysteresis_ in cybernetics.

## Autoscaling during rolling update

@ -316,7 +314,6 @@ Once you have rolled out the container name change to the workload resource, tid
the old container name from the HPA specification.
{{< /note >}}

## Scaling on custom metrics

{{< feature-state for_k8s_version="v1.23" state="stable" >}}

@ -344,20 +341,20 @@ overall maximum that you configured).
## Support for metrics APIs

By default, the HorizontalPodAutoscaler controller retrieves metrics from a series of APIs. In order for it to access these
APIs, cluster administrators must ensure that:

- The [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) is enabled.

- The corresponding APIs are registered:

  - For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-sigs/metrics-server).
    It can be launched as a cluster add-on.

  - For custom metrics, this is the `custom.metrics.k8s.io` API. It's provided by "adapter" API servers offered by metrics solution vendors.
    Check with your metrics pipeline to see if there is a Kubernetes metrics adapter available.

  - For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters provided above.

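If you are unsure which of these APIs are registered in your cluster, you can query the aggregated API list directly; a quick check might look like this (the output depends on which adapters are installed):

```shell
kubectl get apiservices | grep -E 'metrics.k8s.io|custom.metrics.k8s.io|external.metrics.k8s.io'

# Probe the resource metrics API itself (requires metrics-server or equivalent)
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```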
For more information on these different metrics paths and how they differ, please see the relevant design proposals for
[the HPA V2](https://git.k8s.io/design-proposals-archive/autoscaling/hpa-v2.md),

@ -537,14 +534,14 @@ Finally, you can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command for creating a HorizontalPodAutoscaler object.
For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for ReplicaSet _foo_, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.

## Implicit maintenance-mode deactivation

You can implicitly deactivate the HPA for a target without the
need to change the HPA configuration itself. If the target's desired replica count
is set to 0, and the HPA's minimum replica count is greater than 0, the HPA
stops adjusting the target (and sets the `ScalingActive` Condition on itself
to `false`) until you reactivate it by manually adjusting the target's desired
replica count or HPA's minimum replica count.

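For example, a sketch of parking a Deployment this way (`foo` is a placeholder name):

```shell
# Scale the target to zero; with minReplicas > 0 the HPA goes dormant
kubectl scale deployment/foo --replicas=0

# Reactivate autoscaling later by restoring a nonzero replica count
kubectl scale deployment/foo --replicas=1
```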
|
@ -553,7 +550,7 @@ replica count or HPA's minimum replica count.
|
|||
|
||||
When an HPA is enabled, it is recommended that the value of `spec.replicas` of
|
||||
the Deployment and / or StatefulSet be removed from their
|
||||
{{< glossary_tooltip text="manifest(s)" term_id="manifest" >}}. If this isn't done, any time
|
||||
{{< glossary_tooltip text="manifest(s)" term_id="manifest" >}}. If this isn't done, any time
|
||||
a change to that object is applied, for example via `kubectl apply -f
|
||||
deployment.yaml`, this will instruct Kubernetes to scale the current number of Pods
|
||||
to the value of the `spec.replicas` key. This may not be
|
||||
|
@ -562,9 +559,9 @@ desired and could be troublesome when an HPA is active.
Keep in mind that the removal of `spec.replicas` may incur a one-time
degradation of Pod counts as the default value of this key is 1 (reference
[Deployment Replicas](/docs/concepts/workloads/controllers/deployment#replicas)).
Upon the update, all Pods except 1 will begin their termination procedures. Any
deployment application afterwards will behave as normal and respect a rolling
update configuration as desired. You can avoid this degradation by choosing one of the following two
methods based on how you are modifying your deployments:

{{< tabs name="fix_replicas_instructions" >}}

@ -572,10 +569,10 @@ methods based on how you are modifying your deployments:
1. `kubectl apply edit-last-applied deployment/<deployment_name>`
2. In the editor, remove `spec.replicas`. When you save and exit the editor, `kubectl`
   applies the update. No changes to Pod counts happen at this step.
3. You can now remove `spec.replicas` from the manifest. If you use source code management,
   also commit your changes or take whatever other steps for revising the source code
   are appropriate for how you track updates.
4. From here on out you can run `kubectl apply -f deployment.yaml`

{{% /tab %}}

@ -595,8 +592,8 @@ cluster-level autoscaler such as [Cluster Autoscaler](https://github.com/kuberne
For more information on HorizontalPodAutoscaler:

- Read a [walkthrough example](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for horizontal pod autoscaling.
- Read documentation for [`kubectl autoscale`](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
- If you would like to write your own custom metrics adapter, check out the
  [boilerplate](https://github.com/kubernetes-sigs/custom-metrics-apiserver) to get started.
- Read the [API reference](/docs/reference/kubernetes-api/workload-resources/horizontal-pod-autoscaler-v2/) for HorizontalPodAutoscaler.

@ -26,30 +26,24 @@ on general patterns for running stateful applications in Kubernetes.
## {{% heading "prerequisites" %}}

- {{< include "task-tutorial-prereqs.md" >}}
- {{< include "default-storage-class-prereqs.md" >}}
- This tutorial assumes you are familiar with
  [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
  and [StatefulSets](/docs/concepts/workloads/controllers/statefulset/),
  as well as other core concepts like [Pods](/docs/concepts/workloads/pods/),
  [Services](/docs/concepts/services-networking/service/), and
  [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
- Some familiarity with MySQL helps, but this tutorial aims to present
  general patterns that should be useful for other systems.
- You are using the default namespace or another namespace that does not contain any conflicting objects.

## {{% heading "objectives" %}}

- Deploy a replicated MySQL topology with a StatefulSet.
- Send MySQL client traffic.
- Observe resistance to downtime.
- Scale the StatefulSet up and down.

<!-- lessoncontent -->

@ -377,7 +371,7 @@ no new Pods may schedule there, and then evicts any existing Pods.
Replace `<node-name>` with the name of the Node you found in the last step.

{{< caution >}}
Draining a Node can impact other workloads and applications
running on the same node. Only perform the following step in a test
cluster.
{{< /caution >}}

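A drain invocation for this step typically looks like the following; the flags shown are commonly required for nodes running DaemonSet-managed and standalone Pods:

```shell
kubectl drain <node-name> --force --delete-emptydir-data --ignore-daemonsets
```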
@ -492,11 +486,8 @@ kubectl delete pvc data-mysql-3
kubectl delete pvc data-mysql-4
```

## {{% heading "cleanup" %}}

1. Cancel the `SELECT @@server_id` loop by pressing **Ctrl+C** in its terminal,
   or running the following from another terminal:

@ -536,17 +527,11 @@ kubectl delete pvc data-mysql-4
Some dynamic provisioners (such as those for EBS and PD) also release the
underlying resources upon deleting the PersistentVolumes.

## {{% heading "whatsnext" %}}

- Learn more about [scaling a StatefulSet](/docs/tasks/run-application/scale-stateful-set/).
- Learn more about [debugging a StatefulSet](/docs/tasks/debug/debug-application/debug-statefulset/).
- Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/).
- Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
- Look in the [Helm Charts repository](https://artifacthub.io/)
  for other stateful application examples.

@ -13,22 +13,19 @@ weight: 50
---

<!-- overview -->

This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to increasing or decreasing the number of replicas.

## {{% heading "prerequisites" %}}

- StatefulSets are only available in Kubernetes version 1.5 or later.
  To check your version of Kubernetes, run `kubectl version`.

- Not all stateful applications scale nicely. If you are unsure about whether to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/) or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information.

- You should perform scaling only when you are confident that your stateful application
  cluster is completely healthy.

<!-- steps -->

## Scaling StatefulSets

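A common way to change the replica count is shown in the sketch below; `web` is a placeholder StatefulSet name:

```shell
kubectl scale statefulsets web --replicas=5

# Alternatively, patch the replica count in place:
kubectl patch statefulsets web -p '{"spec":{"replicas":3}}'
```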
@ -91,11 +88,6 @@ to reason about scaling operations at the application level in these cases, and
perform scaling only when you are sure that your stateful application cluster is
completely healthy.

## {{% heading "whatsnext" %}}

- Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/).

@ -1,6 +1,6 @@
---
title: "TLS"
weight: 120
description: Understand how to protect traffic within your cluster using Transport Layer Security (TLS).
---

@ -84,7 +84,7 @@ weight: 10
<div class="row">
<div class="col-md-8">

<p>Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports <a href="/docs/tasks/run-application/horizontal-pod-autoscale/">autoscaling</a> of Pods, but it is outside of the scope of this tutorial. Scaling to zero is also possible, and it will terminate all Pods of the specified Deployment.</p>

<p>Running multiple instances of an application will require a way to distribute the traffic to all of them. Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment. Services will continuously monitor the running Pods using endpoints, to ensure the traffic is sent only to available Pods.</p>

@ -30,6 +30,11 @@ Install the following on your workstation:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/docs/tasks/tools/)

This tutorial demonstrates what you can configure for a Kubernetes cluster that you fully
control. If you are learning how to configure Pod Security Admission for a managed cluster
where you are not able to configure the control plane, read
[Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss).

## Choose the right Pod Security Standard to apply

[Pod Security Admission](/docs/concepts/security/pod-security-admission/)

@ -42,22 +47,22 @@ that are most appropriate for your configuration, do the following:
1. Create a cluster with no Pod Security Standards applied:

```shell
kind create cluster --name psa-wo-cluster-pss
```
The output is similar to:
```
Creating cluster "psa-wo-cluster-pss" ...
 ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-wo-cluster-pss"
You can now use your cluster with:

kubectl cluster-info --context kind-psa-wo-cluster-pss

Thanks for using kind! 😊
```

@ -72,7 +77,7 @@ that are most appropriate for your configuration, do the following:
Kubernetes control plane is running at https://127.0.0.1:61350

CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

@ -82,7 +87,7 @@ that are most appropriate for your configuration, do the following:
kubectl get ns
```
The output is similar to this:
```
NAME                 STATUS   AGE
default              Active   9m30s
kube-node-lease      Active   9m32s

@ -99,8 +104,9 @@ that are most appropriate for your configuration, do the following:
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=privileged
```

The output is similar to:
```
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled

|
namespace/local-path-storage labeled
```
2. Baseline
```shell
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=baseline
```

The output is similar to:
```
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled

@ -123,15 +130,16 @@ that are most appropriate for your configuration, do the following:
Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged
namespace/kube-system labeled
namespace/local-path-storage labeled
```

3. Restricted
```shell
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=restricted
```

The output is similar to:
```
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled

|
```
mkdir -p /tmp/pss
cat <<EOF > /tmp/pss/cluster-level-pss.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:

@ -212,7 +220,7 @@ following:
1. Configure the API server to consume this file during cluster creation:

```
cat <<EOF > /tmp/pss/cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:

@ -255,22 +263,22 @@ following:
these Pod Security Standards:

```shell
kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml
```
The output is similar to this:
```
Creating cluster "psa-with-cluster-pss" ...
 ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-with-cluster-pss"
You can now use your cluster with:

kubectl cluster-info --context kind-psa-with-cluster-pss

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
```

@ -281,36 +289,21 @@ following:
The output is similar to this:
```
Kubernetes control plane is running at https://127.0.0.1:63855

CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
1. Create a Pod in the default namespace:

```shell
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
```

The pod is started normally, but the output includes a warning:
```
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
```

## Clean up

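Cleanup for this tutorial typically removes the test Pod and the kind clusters created above; adjust the names to whatever you actually created:

```shell
kubectl delete pod nginx
kind delete cluster --name psa-wo-cluster-pss
kind delete cluster --name psa-with-cluster-pss
```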
@ -31,14 +31,14 @@ Install the following on your workstation:
1. Create a `KinD` cluster as follows:

```shell
kind create cluster --name psa-ns-level
```

The output is similar to this:

```
Creating cluster "psa-ns-level" ...
 ✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️

@ -80,11 +80,12 @@ The output is similar to this:
namespace/example created
```

## Enable Pod Security Standards checking for that namespace

1. Enable Pod Security Standards on this namespace using labels supported by
built-in Pod Security Admission. In this step you will configure a check to
warn on Pods that don't meet the latest version of the _baseline_ pod
security standard.

```shell
kubectl label --overwrite ns example \

@ -92,8 +93,8 @@ namespace/example created
pod-security.kubernetes.io/warn-version=latest
```

2. You can configure multiple pod security standard checks on any namespace, using labels.
The following command will `enforce` the `baseline` Pod Security Standard, but
`warn` and `audit` for `restricted` Pod Security Standards as per the latest
version (default value).

@ -107,41 +108,24 @@ namespace/example created
pod-security.kubernetes.io/audit-version=latest
```

## Verify the Pod Security Standard enforcement

1. Create a baseline Pod in the `example` namespace:

```shell
kubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml
```

The Pod does start OK; the output includes a warning. For example:

```
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
```

1. Create a baseline Pod in the `default` namespace:

```shell
kubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml
```
Output is similar to this:

@ -149,9 +133,9 @@ namespace/example created
pod/nginx created
```

The Pod Security Standards enforcement and warning settings were applied only
to the `example` namespace. You could create the same Pod in the `default`
namespace with no warnings.

## Clean up

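Cleanup here typically removes the namespace and the kind cluster created earlier in this tutorial; the names match those used above:

```shell
kubectl delete namespace example
kind delete cluster --name psa-ns-level
```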
@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80

@ -51,11 +51,12 @@ nodes:
# default None
propagation: None
EOF
kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml
kubectl cluster-info --context kind-psa-with-cluster-pss

# Wait for 15 seconds (arbitrary) for the ServiceAccount Admission Controller to be available
sleep 15
cat <<EOF |
apiVersion: v1
kind: Pod
metadata:

@ -67,4 +68,17 @@ spec:
ports:
- containerPort: 80
EOF
kubectl apply -f -

# Await input
sleep 1
( bash -c 'true' 2>/dev/null && bash -c 'read -p "Press any key to continue... " -n1 -s' ) || \
( printf "Press Enter to continue... " && read ) 1>&2

# Clean up
printf "\n\nCleaning up:\n" 1>&2
set -e
kubectl delete pod --all -n example --now
kubectl delete ns example
kind delete cluster --name psa-with-cluster-pss
rm -f /tmp/pss/cluster-config.yaml

@ -1,11 +1,11 @@
#!/bin/sh
kind create cluster --name psa-ns-level
kubectl cluster-info --context kind-psa-ns-level
# Wait for 15 seconds (arbitrary) for ServiceAccount Admission Controller to be available
sleep 15

# Create and label the namespace
kubectl create ns example || exit 1 # if namespace exists, don't do the next steps
kubectl label --overwrite ns example \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=latest \

@ -13,7 +13,9 @@ kubectl label --overwrite ns example \
pod-security.kubernetes.io/warn-version=latest \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/audit-version=latest

# Try running a Pod
cat <<EOF |
apiVersion: v1
kind: Pod
metadata:

@ -25,4 +27,16 @@ spec:
ports:
- containerPort: 80
EOF
kubectl apply -n example -f -

# Await input
sleep 1
( bash -c 'true' 2>/dev/null && bash -c 'read -p "Press any key to continue... " -n1 -s' ) || \
( printf "Press Enter to continue... " && read ) 1>&2

# Clean up
printf "\n\nCleaning up:\n" 1>&2
set -e
kubectl delete pod --all -n example --now
kubectl delete ns example
kind delete cluster --name psa-ns-level

@ -57,7 +57,7 @@ Existen los siguientes métodos para instalar kubectl en Windows:
- Using PowerShell, you can automate the verification with the `-eq` operator to get a `True` or `False` result:

```powershell
$(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256)
```

1. Add the binary to your `PATH`.

@ -50,8 +50,7 @@ brew install bash-completion@2
As indicated in the output of this command, add the following to your `~/.bash_profile` file:

```bash
brew_etc="$(brew --prefix)/etc" && [[ -r "${brew_etc}/profile.d/bash_completion.sh" ]] && . "${brew_etc}/profile.d/bash_completion.sh"
```

Reload your shell and verify that bash-completion v2 is correctly installed with `type _init_completion`.

@ -45,7 +45,7 @@ Voici le fichier de configuration du Pod :
The output looks like this:

```console
NAME    READY   STATUS    RESTARTS   AGE
redis   1/1     Running   0          13s
```

@ -73,7 +73,7 @@ Voici le fichier de configuration du Pod :
The output looks like this:

```console
USER   PID  %CPU  %MEM    VSZ   RSS  TTY  STAT  START  TIME  COMMAND
redis    1   0.1   0.1  33308  3828  ?    Ssl   00:46  0:00  redis-server *:6379
root    12   0.0   0.0  20228  3020  ?    Ss    00:47  0:00  /bin/bash

@ -91,7 +91,7 @@ Voici le fichier de configuration du Pod :
1. In your original terminal, watch the changes to the Redis Pod. Eventually,
   you will see something like this:

```console
NAME    READY   STATUS      RESTARTS   AGE
redis   1/1     Running     0          13s
redis   0/1     Completed   0          6m

@ -0,0 +1,112 @@
---
title: Define Environment Variables for a Container
content_type: task
weight: 20
---

<!-- overview -->

This page shows how to define environment variables for a container
in a Kubernetes Pod.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->

## Define an environment variable for a container

When you create a Pod, you can set environment variables for the
containers that run in the Pod.
To set them, use the `env` or `envFrom` field
in the configuration file.

In this exercise, you create a Pod that runs one container. The configuration file for the Pod defines an environment variable named `DEMO_GREETING` whose value is `"Hello from the environment"`. Here is the configuration file for the Pod:

{{< codenew file="pods/inject/envars.yaml" >}}

1. Create a Pod based on that file:

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/envars.yaml
   ```

1. List the Pods:

   ```shell
   kubectl get pods -l purpose=demonstrate-envars
   ```

   The output is similar to this:

   ```
   NAME         READY   STATUS    RESTARTS   AGE
   envar-demo   1/1     Running   0          9s
   ```

1. List the environment variables in the container:

   ```shell
   kubectl exec envar-demo -- printenv
   ```

   The output is similar to this:

   ```
   NODE_VERSION=4.4.2
   EXAMPLE_SERVICE_PORT_8080_TCP_ADDR=10.3.245.237
   HOSTNAME=envar-demo
   ...
   DEMO_GREETING=Hello from the environment
   DEMO_FAREWELL=Such a sweet sorrow
   ```

{{< note >}}
Environment variables set with the `env` or `envFrom` fields
override any environment variables specified in the container image.
{{< /note >}}

{{< note >}}
An environment variable can reference another variable;
however, the order of declaration matters. A variable that references
another must be declared after the variable it references.
Also, it is recommended to avoid circular references.
{{< /note >}}

## Use environment variables in the configuration

Environment variables that you define in a Pod's configuration can be used elsewhere in the configuration, for example in the commands and arguments for the containers.
In the example below, the environment variables `GREETING`, `HONORIFIC`, and
`NAME` are set to `Warm greetings to`, `The Most
Honorable`, and `Kubernetes`, respectively. Those variables are then used as arguments
for the `env-print-demo` container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: print-greeting
spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: GREETING
      value: "Warm greetings to"
    - name: HONORIFIC
      value: "The Most Honorable"
    - name: NAME
      value: "Kubernetes"
    command: ["echo"]
    args: ["$(GREETING) $(HONORIFIC) $(NAME)"]
```

Once the Pod is created, the command `echo Warm greetings to The Most Honorable Kubernetes` runs in the container.

## {{% heading "whatsnext" %}}

* Learn more about [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
* Learn how to [use Secrets as environment variables](/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
* See the reference documentation for [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core).

@ -0,0 +1,355 @@
---
title: Distribute Sensitive Data Securely Using Secrets
content_type: task
weight: 50
min-kubernetes-server-version: v1.6
---

<!-- overview -->

This page shows how to inject sensitive data, such as passwords and encryption keys, into Pods.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

### Encode your data in base64 format

Suppose you have two pieces of sensitive data: a username `my-app` and a
password `39528$vdg7Jb`. First, use a tool that can encode your data
in base64 format. Here is an example using the base64 program:

```shell
echo -n 'my-app' | base64
echo -n '39528$vdg7Jb' | base64
```

The output shows that the base64 representation of the username is `bXktYXBw`,
and the base64 representation of the password is `Mzk1MjgkdmRnN0pi`.

{{< caution >}}
Use a local tool trusted by your operating system
to reduce the security risks of using an external tool.
{{< /caution >}}

<!-- steps -->

## Create a Secret

Here is a configuration file you can use to create a Secret
that holds your username and password:

{{< codenew file="pods/inject/secret.yaml" >}}

1. Create the Secret:

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml
   ```

1. View information about the Secret:

   ```shell
   kubectl get secret test-secret
   ```

   Output:

   ```
   NAME          TYPE     DATA   AGE
   test-secret   Opaque   2      1m
   ```

1. View more detailed information about the Secret:

   ```shell
   kubectl describe secret test-secret
   ```

   Output:

   ```
   Name:         test-secret
   Namespace:    default
   Labels:       <none>
   Annotations:  <none>

   Type:   Opaque

   Data
   ====
   password:   13 bytes
   username:   7 bytes
   ```

### Create a Secret using kubectl

If you want to skip the encoding step, you can create the same Secret
with the `kubectl create secret` command. For example:

```shell
kubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'
```

This approach is more convenient. The more explicit way
shown earlier demonstrates how Secrets work under the hood.

## Create a Pod that has access to the secret data through a Volume

Here is a configuration file for creating a Pod:

{{< codenew file="pods/inject/secret-pod.yaml" >}}

1. Create the Pod:

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml
   ```

1. Verify that the Pod is running:

   ```shell
   kubectl get pod secret-test-pod
   ```

   Output:
   ```
   NAME              READY   STATUS    RESTARTS   AGE
   secret-test-pod   1/1     Running   0          42m
   ```

1. Get a shell into the container that is running in your Pod:
   ```shell
   kubectl exec -i -t secret-test-pod -- /bin/bash
   ```

1. The secret data is exposed to the container through a Volume mounted at
   `/etc/secret-volume`.

   In your shell, list the files in the `/etc/secret-volume` directory:
   ```shell
   # Run this inside the container
   ls /etc/secret-volume
   ```
   The output shows two files, one for each piece of secret data:
   ```
   password username
   ```

1. Still in the shell, display the contents of the
   `username` and `password` files:
   ```shell
   # Run this inside the container
   echo "$( cat /etc/secret-volume/username )"
   echo "$( cat /etc/secret-volume/password )"
   ```
   The output should be your username and password:
   ```
   my-app
   39528$vdg7Jb
   ```

You can then modify your image or your command line so that the program
looks for files in the directory given by the `mountPath` field.
Each key in the Secret's `data` map is exposed as a file under that directory.

### Mount the Secret data on specific paths

You can control the paths where the Secret data is mounted.
Use the `.spec.volumes[].secret.items` field to change the
target path of each piece of data:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      items:
      - key: username
        path: my-group/my-username
```

Here is what happens when you deploy this Pod:

* The `username` key of the `mysecret` Secret is mounted in the container at
  `/etc/foo/my-group/my-username` instead of `/etc/foo/username`.
* The `password` key of the Secret is not mounted in the container.

If you explicitly list keys using the `.spec.volumes[].secret.items` field,
keep the following in mind:

* Only the keys listed in the `items` field are mounted.
* To mount all keys of the Secret, all of them must be
  listed in the `items` field.
* All listed keys must exist in the Secret.
  Otherwise, the volume is not created.

### Apply POSIX permissions to the data

You can set POSIX file access permissions for a single Secret key. If you don't configure any, the permissions default to `0644`.
You can also set default permissions for the whole Secret, and override them per key if needed.

For example, you can set a default mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      defaultMode: 0400
```

The Secret is mounted at `/etc/foo`; all files created by the secret volume
have permissions of `0400`.

{{< note >}}
If you define a Pod using JSON, note that the
JSON specification does not support octal notation, and it will
interpret the value `0400` as the _decimal_ value `400`.
In JSON, use decimal notation for the `defaultMode` field instead.
If you use YAML, you can use octal notation
for `defaultMode`.
{{< /note >}}

## Define environment variables with Secrets

You can mount Secret data as environment variables in your containers.

If a container already consumes a Secret through environment variables,
an update to that Secret is not reflected in the container until
it is restarted. There are third-party solutions
that can restart containers when a Secret changes.

### Define an environment variable from a single Secret

* Define an environment variable and its value inside a Secret:

  ```shell
  kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
  ```

* Assign the value of `backend-username` defined in the Secret
  to the `SECRET_USERNAME` environment variable in the Pod configuration.

  {{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}}

* Create the Pod:

  ```shell
  kubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml
  ```

* In a shell session, display the contents of the
  `SECRET_USERNAME` environment variable:

  ```shell
  kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'
  ```

  The output is:
  ```
  backend-admin
  ```

### Define environment variables from multiple Secrets

* As before, first create the Secrets:

  ```shell
  kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
  kubectl create secret generic db-user --from-literal=db-username='db-admin'
  ```

* Define the environment variables in the Pod configuration.

  {{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}}

* Create the Pod:

  ```shell
  kubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml
  ```

* In a shell, list the container's environment variables:

  ```shell
  kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'
  ```
  The output is:
  ```
  DB_USERNAME=db-admin
  BACKEND_USERNAME=backend-admin
  ```

## Configure all key-value pairs in a Secret as environment variables

{{< note >}}
This feature is only available in Kubernetes
v1.6 and later.
{{< /note >}}

* Create a Secret containing several key-value pairs:

  ```shell
  kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
  ```

* Use `envFrom` to define all of the Secret's data as environment
  variables. The keys of the Secret become the names of the environment
  variables inside the Pod.

  {{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}

* Create the Pod:

  ```shell
  kubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml
  ```

* In your shell, display the `username` and `password` environment variables:

  ```shell
  kubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
  ```

  The output is:
  ```
  username: my-app
  password: 39528$vdg7Jb
  ```

### References

* [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
* [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)
* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)

## {{% heading "whatsnext" %}}

* Learn more about [Secrets](/docs/concepts/configuration/secret/).
* Learn more about [Volumes](/docs/concepts/storage/volumes/).

@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"

@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
  name: envvars-multiple-secrets
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: BACKEND_USERNAME
      valueFrom:
        secretKeyRef:
          name: backend-user
          key: backend-username
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-user
          key: db-username

@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    envFrom:
    - secretRef:
        name: test-secret

@ -0,0 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
  name: env-single-secret
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: backend-user
          key: backend-username

@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
  name: secret-envars-test-pod
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: password

@ -0,0 +1,18 @@
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    # name must match the volume name below
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret

@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: bXktYXBw
  password: Mzk1MjgkdmRnN0pi

@ -70,7 +70,8 @@ Contoh:
  },
  {
    "type": "portmap",
    "capabilities": {"portMappings": true},
    "externalSetMarkChain": "KUBE-MARK-MASQ"
  }
]
}

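For context, here is a sketch of how such a `portmap` entry sits inside a complete CNI config list; the file name, network name, and bridge settings are illustrative assumptions, not part of the original example:

```shell
cat <<EOF > /etc/cni/net.d/10-mynet.conflist
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"}
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "externalSetMarkChain": "KUBE-MARK-MASQ"
    }
  ]
}
EOF
```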
@ -298,7 +298,7 @@ repliche nginx da 3 a 1, fare:
```shell
$ kubectl scale deployment/my-nginx --replicas=1
deployment.apps/my-nginx scaled
```

Now you have just one pod managed by the Deployment.

@ -1,6 +1,6 @@
---
title: "Scheduling and Eviction"
weight: 95
description: >
  In Kubernetes, scheduling refers to matching Pods you want to run to Nodes so that the kubelet can run them.
  Eviction is the process of proactively terminating one or more Pods on resource-starved Nodes.

@ -1,7 +1,7 @@
---
title: API-initiated Eviction
content_type: concept
weight: 110
---

{{< glossary_definition term_id="api-eviction" length="short" >}} </br>

@ -1,7 +1,7 @@
---
title: Scheduling Pods onto Nodes
content_type: concept
weight: 20
---

@ -1,7 +1,7 @@
---
title: Scheduler Performance Tuning
content_type: concept
weight: 70
---

<!-- overview -->

@ -1,7 +1,7 @@
---
title: Scheduling Framework
content_type: concept
weight: 60
---

<!-- overview -->
