Merge pull request #28408 from tengqm/fix-links-1

Fix some links in the tasks section
pull/28248/head^2
Kubernetes Prow Robot 2021-06-14 16:24:01 -07:00 committed by GitHub
commit e5b5f45e7c
4 changed files with 63 additions and 76 deletions


@ -7,7 +7,6 @@ card:
weight: 40
---
<!-- overview -->
This page shows how to configure access to multiple clusters by using
@ -21,20 +20,15 @@ a *kubeconfig file*. This is a generic way of referring to configuration files.
It does not mean that there is a file named `kubeconfig`.
{{< /note >}}
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
To check that {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} is installed,
run `kubectl version --client`. The kubectl version should be
[within one minor version](/docs/setup/release/version-skew-policy/#kubectl) of your
[within one minor version](/releases/version-skew-policy/#kubectl) of your
cluster's API server.
<!-- steps -->
## Define clusters, users, and contexts
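As a compact sketch of what this section builds up step by step, the cluster, user, and context entries in `config-demo` can also be added with `kubectl config` commands; the server address, credential file names, and the `developer` user and `frontend` namespace below are placeholders, shown only to illustrate the shape of the flags:

```shell
# Add a cluster entry to config-demo (server and CA file are placeholders).
kubectl config --kubeconfig=config-demo set-cluster development \
  --server=https://1.2.3.4 --certificate-authority=fake-ca-file

# Add a user entry (certificate and key paths are placeholders).
kubectl config --kubeconfig=config-demo set-credentials developer \
  --client-certificate=fake-cert-file --client-key=fake-key-file

# Tie the cluster, a namespace, and the user together in a context.
kubectl config --kubeconfig=config-demo set-context dev-frontend \
  --cluster=development --namespace=frontend --user=developer
```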
@ -186,7 +180,7 @@ kubectl config --kubeconfig=config-demo view --minify
The output shows configuration information associated with the `dev-frontend` context:
```shell
```yaml
apiVersion: v1
clusters:
- cluster:
@ -238,7 +232,6 @@ kubectl config --kubeconfig=config-demo use-context dev-storage
View configuration associated with the new current context, `dev-storage`.
```shell
kubectl config --kubeconfig=config-demo view --minify
```
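If you only want the name of the context currently in use, rather than the full minified view, `kubectl config current-context` gives a one-line answer:

```shell
# Print just the name of the current context recorded in config-demo.
kubectl config --kubeconfig=config-demo current-context
```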
@ -247,7 +240,7 @@ kubectl config --kubeconfig=config-demo view --minify
In your `config-exercise` directory, create a file named `config-demo-2` with this content:
```shell
```yaml
apiVersion: v1
kind: Config
preferences: {}
@ -269,13 +262,17 @@ current value of your `KUBECONFIG` environment variable, so you can restore it l
For example:
### Linux
```shell
export KUBECONFIG_SAVED=$KUBECONFIG
```
### Windows PowerShell
```shell
```powershell
$Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG
```
The `KUBECONFIG` environment variable is a list of paths to configuration files. The list is
colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you have
a `KUBECONFIG` environment variable, familiarize yourself with the configuration files
@ -284,11 +281,14 @@ in the list.
Temporarily append two paths to your `KUBECONFIG` environment variable. For example:
### Linux
```shell
export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2
```
### Windows PowerShell
```shell
```powershell
$Env:KUBECONFIG=("config-demo;config-demo-2")
```
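With both paths on `KUBECONFIG`, a quick way to confirm that kubectl now sees contexts from both files, without printing the whole merged configuration, is:

```shell
# List every context across the files referenced by KUBECONFIG;
# the context currently in use (if any) is marked with '*'.
kubectl config get-contexts
```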
@ -303,7 +303,7 @@ environment variable. In particular, notice that the merged information has the
`dev-ramp-up` context from the `config-demo-2` file and the three contexts from
the `config-demo` file:
```shell
```yaml
contexts:
- context:
cluster: development
@ -347,11 +347,14 @@ If you have a `$HOME/.kube/config` file, and it's not already listed in your
For example:
### Linux
```shell
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config
```
### Windows PowerShell
```shell
```powershell
$Env:KUBECONFIG="$Env:KUBECONFIG;$HOME\.kube\config"
```
@ -367,23 +370,19 @@ kubectl config view
Return your `KUBECONFIG` environment variable to its original value. For example:
### Linux
```shell
export KUBECONFIG=$KUBECONFIG_SAVED
```
### Windows PowerShell
```shell
```powershell
$Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED
```
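To confirm the restore took effect, you can print the variable again; for example, on Linux:

```shell
# Should show the original value that was saved in KUBECONFIG_SAVED.
echo $KUBECONFIG
```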
## {{% heading "whatsnext" %}}
* [Organizing Cluster Access Using kubeconfig Files](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config)


@ -78,3 +78,4 @@ telemetry agents on the node, make sure to check with the vendor of the agent wh
We keep the work-in-progress version of the migration instructions for various telemetry and security agent vendors
in a [Google doc](https://docs.google.com/document/d/1ZFi4uKit63ga5sxEiZblfb-c23lFhvy6RXVPikS8wf0/edit#).
Please contact the vendor for up-to-date instructions on migrating from dockershim.


@ -17,33 +17,27 @@ itself. Unless resources are set aside for these system daemons, pods and system
daemons compete for resources and lead to resource starvation issues on the
node.
The `kubelet` exposes a feature named `Node Allocatable` that helps to reserve
The `kubelet` exposes a feature named 'Node Allocatable' that helps to reserve
compute resources for system daemons. Kubernetes recommends that cluster
administrators configure `Node Allocatable` based on their workload density
administrators configure 'Node Allocatable' based on their workload density
on each node.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
Your Kubernetes server must be at or later than version 1.17 to use
the kubelet command line option `--reserved-cpus` to set an
[explicitly reserved CPU list](#explicitly-reserved-cpu-list).
<!-- steps -->
## Node Allocatable
![node capacity](/images/docs/node-capacity.svg)
`Allocatable` on a Kubernetes node is defined as the amount of compute resources
'Allocatable' on a Kubernetes node is defined as the amount of compute resources
that are available for pods. The scheduler does not over-subscribe
`Allocatable`. `CPU`, `memory` and `ephemeral-storage` are supported as of now.
'Allocatable'. 'CPU', 'memory' and 'ephemeral-storage' are supported as of now.
Node Allocatable is exposed as part of the `v1.Node` object in the API and as part
of `kubectl describe node` in the CLI.
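For example, the values a node currently reports can be read straight from its status; `my-node` is a placeholder for an actual node name:

```shell
# Print the Allocatable resources recorded in the node's status.
kubectl get node my-node -o jsonpath='{.status.allocatable}'

# Or view Capacity and Allocatable side by side in the describe output.
kubectl describe node my-node
```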
@ -97,8 +91,7 @@ flag.
It is recommended that the Kubernetes system daemons are placed under a top-level
control group (for example, `runtime.slice` on systemd machines). Each
system daemon should ideally run within its own child control group. Refer to
[this
doc](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md#recommended-cgroups-setup)
[the design proposal](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md#recommended-cgroups-setup)
for more details on recommended control group hierarchy.
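A minimal sketch of how these two flags fit together, assuming the Kubernetes daemons already run under a top-level cgroup such as `runtime.slice`; the reservation values and cgroup path are illustrative only, not recommendations:

```shell
# Illustrative kubelet flags only; pick values based on profiling your own nodes.
kubelet --kube-reserved=cpu=100m,memory=100Mi,ephemeral-storage=1Gi \
        --kube-reserved-cgroup=/runtime.slice
```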
Note that Kubelet **does not** create `--kube-reserved-cgroup` if it doesn't
@ -109,7 +102,6 @@ exist. Kubelet will fail if an invalid cgroup is specified.
- **Kubelet Flag**: `--system-reserved=[cpu=100m][,][memory=100Mi][,][ephemeral-storage=1Gi][,][pid=1000]`
- **Kubelet Flag**: `--system-reserved-cgroup=`
`system-reserved` is meant to capture resource reservation for OS system daemons
like `sshd`, `udev`, etc. `system-reserved` should reserve `memory` for the
`kernel` too since `kernel` memory is not accounted to pods in Kubernetes at this time.
@ -127,13 +119,14 @@ kubelet flag.
It is recommended that the OS system daemons are placed under a top-level
control group (for example, `system.slice` on systemd machines).
Note that Kubelet **does not** create `--system-reserved-cgroup` if it doesn't
exist. Kubelet will fail if an invalid cgroup is specified.
Note that `kubelet` **does not** create `--system-reserved-cgroup` if it doesn't
exist. `kubelet` will fail if an invalid cgroup is specified.
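A corresponding sketch for OS daemons, again with illustrative values and assuming a pre-existing `system.slice` cgroup as described above:

```shell
# Illustrative kubelet flags only; system.slice must already exist on the node.
kubelet --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi \
        --system-reserved-cgroup=/system.slice
```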
### Explicitly Reserved CPU List
{{< feature-state for_k8s_version="v1.17" state="stable" >}}
- **Kubelet Flag**: `--reserved-cpus=0-3`
**Kubelet Flag**: `--reserved-cpus=0-3`
`reserved-cpus` is meant to define an explicit CPU set for OS system daemons and
Kubernetes system daemons. `reserved-cpus` is for systems that do not intend to
@ -154,14 +147,15 @@ For example: in Centos, you can do this using the tuned toolset.
### Eviction Thresholds
- **Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]`
**Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]`
Memory pressure at the node level leads to system OOMs, which affect the entire
node and all pods running on it. Nodes can go offline temporarily until memory
has been reclaimed. To avoid (or reduce the probability of) system OOMs, the kubelet
provides [`Out of Resource`](/docs/tasks/administer-cluster/out-of-resource/) management. Evictions are
provides [out of resource](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
management. Evictions are
supported for `memory` and `ephemeral-storage` only. By reserving some memory via the
`--eviction-hard` flag, the `kubelet` attempts to `evict` pods whenever memory
`--eviction-hard` flag, the `kubelet` attempts to evict pods whenever memory
availability on the node drops below the reserved value. Hypothetically, if
system daemons did not exist on a node, pods could not use more than `capacity -
eviction-hard`. For this reason, resources reserved for evictions are not
@ -169,17 +163,17 @@ available for pods.
### Enforcing Node Allocatable
- **Kubelet Flag**: `--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
**Kubelet Flag**: `--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
The scheduler treats `Allocatable` as the available `capacity` for pods.
The scheduler treats 'Allocatable' as the available `capacity` for pods.
`kubelet` enforce `Allocatable` across pods by default. Enforcement is performed
`kubelet` enforces 'Allocatable' across pods by default. Enforcement is performed
by evicting pods whenever the overall usage across all pods exceeds
`Allocatable`. More details on eviction policy can be found
[here](/docs/tasks/administer-cluster/out-of-resource/#eviction-policy). This enforcement is controlled by
'Allocatable'. More details on eviction policy can be found
on the [node pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
page. This enforcement is controlled by
specifying the `pods` value for the kubelet flag `--enforce-node-allocatable`.
Optionally, `kubelet` can be made to enforce `kube-reserved` and
`system-reserved` by specifying `kube-reserved` and `system-reserved` values in
the same flag. Note that to enforce `kube-reserved` or `system-reserved`,
@ -188,10 +182,10 @@ respectively.
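Putting the flags from this page together, a hedged sketch of a kubelet command line that reserves resources, keeps a hard memory eviction threshold, and enforces 'Allocatable' only on pods (the starting point suggested in the guidelines below); all values are illustrative:

```shell
# Illustrative only: reservations plus pod-level Allocatable enforcement.
kubelet --kube-reserved=cpu=100m,memory=100Mi,ephemeral-storage=1Gi \
        --system-reserved=cpu=100m,memory=100Mi,ephemeral-storage=1Gi \
        --eviction-hard='memory.available<500Mi' \
        --enforce-node-allocatable=pods
```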
## General Guidelines
System daemons are expected to be treated similar to `Guaranteed` pods. System
System daemons are expected to be treated similarly to 'Guaranteed' pods. System
daemons can burst within their bounding control groups, and this behavior needs
to be managed as part of Kubernetes deployments. For example, `kubelet` should
have its own control group and share `Kube-reserved` resources with the
have its own control group and share `kube-reserved` resources with the
container runtime. However, Kubelet cannot burst and use up all available Node
resources if `kube-reserved` is enforced.
@ -200,9 +194,9 @@ to critical system services being CPU starved, OOM killed, or unable
to fork on the node. The
recommendation is to enforce `system-reserved` only if a user has profiled their
nodes exhaustively to come up with precise estimates and is confident in their
ability to recover if any process in that group is oom_killed.
ability to recover if any process in that group is oom-killed.
* To begin with enforce `Allocatable` on `pods`.
* To begin with, enforce 'Allocatable' on `pods`.
* Once adequate monitoring and alerting are in place to track kube system
daemons, attempt to enforce `kube-reserved` based on usage heuristics.
* If absolutely necessary, enforce `system-reserved` over time.
@ -212,8 +206,6 @@ more features are added. Over time, kubernetes project will attempt to bring
down utilization of node system daemons, but that is not a priority as of now.
So expect a drop in `Allocatable` capacity in future releases.
<!-- discussion -->
## Example Scenario
@ -225,15 +217,15 @@ Here is an example to illustrate Node Allocatable computation:
* `--system-reserved` is set to `cpu=500m,memory=1Gi,ephemeral-storage=1Gi`
* `--eviction-hard` is set to `memory.available<500Mi,nodefs.available<10%`
Under this scenario, `Allocatable` will be `14.5 CPUs`, `28.5Gi` of memory and
Under this scenario, 'Allocatable' will be 14.5 CPUs, 28.5Gi of memory and
`88Gi` of local storage.
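The arithmetic behind those numbers follows the pattern `Allocatable = Node Capacity - kube-reserved - system-reserved - hard eviction thresholds`. The node capacity and `--kube-reserved` values assumed below are illustrative, chosen only so the sums match the figures above:

```shell
# Assumed for illustration: node capacity of 16 CPUs, 32Gi memory, 100Gi local
# storage, with --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi.
#
#            capacity - kube-reserved - system-reserved - eviction-hard = Allocatable
#   CPU:        16    -      1        -      0.5        -      0        =  14.5 CPUs
#   Memory:     32Gi  -      2Gi      -      1Gi        -      0.5Gi    =  28.5Gi
#   Storage:   100Gi  -      1Gi      -      1Gi        -     10Gi      =  88Gi
#                                                        (10% of nodefs)
```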
The scheduler ensures that the total memory `requests` across all pods on this node does
not exceed `28.5Gi` and storage doesn't exceed `88Gi`.
Kubelet evicts pods whenever the overall memory usage across pods exceeds `28.5Gi`,
or if overall disk usage exceeds `88Gi` If all processes on the node consume as
much CPU as they can, pods together cannot consume more than `14.5 CPUs`.
not exceed 28.5Gi and storage doesn't exceed 88Gi.
Kubelet evicts pods whenever the overall memory usage across pods exceeds 28.5Gi,
or if overall disk usage exceeds 88Gi. If all processes on the node consume as
much CPU as they can, pods together cannot consume more than 14.5 CPUs.
If `kube-reserved` and/or `system-reserved` is not enforced and system daemons
exceed their reservation, `kubelet` evicts pods whenever the overall node memory
usage is higher than `31.5Gi` or `storage` is greater than `90Gi`
usage is higher than 31.5Gi or storage usage is greater than 90Gi.


@ -7,35 +7,36 @@ weight: 10
---
<!-- overview -->
This page shows how to perform a rolling update on a DaemonSet.
## {{% heading "prerequisites" %}}
* The DaemonSet rolling update feature is only supported in Kubernetes version 1.6 or later.
<!-- steps -->
## DaemonSet Update Strategy
DaemonSet has two update strategy types:
* OnDelete: With `OnDelete` update strategy, after you update a DaemonSet template, new
* `OnDelete`: With `OnDelete` update strategy, after you update a DaemonSet template, new
DaemonSet pods will *only* be created when you manually delete old DaemonSet
pods. This is the same behavior as DaemonSet in Kubernetes version 1.5 or
earlier.
* RollingUpdate: This is the default update strategy.
* `RollingUpdate`: This is the default update strategy.
With `RollingUpdate` update strategy, after you update a
DaemonSet template, old DaemonSet pods will be killed, and new DaemonSet pods
will be created automatically, in a controlled fashion. At most one pod of the DaemonSet will be running on each node during the whole update process.
will be created automatically, in a controlled fashion. At most one pod of
the DaemonSet will be running on each node during the whole update process.
## Performing a Rolling Update
To enable the rolling update feature of a DaemonSet, you must set its
`.spec.updateStrategy.type` to `RollingUpdate`.
You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default
to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well.
You may want to set
[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable)
(default to 1) and
[`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds)
(default to 0) as well.
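For an existing DaemonSet that still uses `OnDelete`, one way to switch strategies in place is a patch; the `fluentd-elasticsearch` DaemonSet in `kube-system` is the example used later on this page:

```shell
# Switch the update strategy of an existing DaemonSet to RollingUpdate.
kubectl patch ds/fluentd-elasticsearch -n kube-system \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'

# Verify which strategy is now in effect.
kubectl get ds/fluentd-elasticsearch -n kube-system \
  -o jsonpath='{.spec.updateStrategy.type}'
```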
### Creating a DaemonSet with `RollingUpdate` update strategy
@ -143,7 +144,7 @@ causes:
The rollout is stuck because new DaemonSet pods can't be scheduled on at least one
node. This is possible when the node is
[running out of resources](/docs/tasks/administer-cluster/out-of-resource/).
[running out of resources](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
When this happens, find the nodes that don't have the DaemonSet pods scheduled on them
by comparing the output of `kubectl get nodes` and the output of:
@ -184,14 +185,8 @@ Delete DaemonSet from a namespace :
kubectl delete ds fluentd-elasticsearch -n kube-system
```
## {{% heading "whatsnext" %}}
* See [Task: Performing a rollback on a
DaemonSet](/docs/tasks/manage-daemon/rollback-daemon-set/)
* See [Concepts: Creating a DaemonSet to adopt existing DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)
* See [Performing a rollback on a DaemonSet](/docs/tasks/manage-daemon/rollback-daemon-set/)
* See [Creating a DaemonSet to adopt existing DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)