Improve Node, Taints, and Tolerations concepts
- Use glossary shortcodes in Node concept
  Add glossary tooltips to help new readers take in unfamiliar concepts.
- Move minion hint to glossary
  The page about Node need not mention the former name (minion): it has been
  many releases since the name change. Instead, add a hint to the full
  glossary definition.
- Use note shortcodes where appropriate
- Order node management section first in concept page
- Drop list of components that act on Nodes
  With Operators and CustomResourceDefinitions now common, plus the cluster
  API, it's less easy to give a definitive list of components that interact
  with Node objects.
- Tidy old mentions of GA features for Node
- Give node tainting by condition its own section
- Introduce toleration concept before using it
- Mention version in TopologyManager feature state
- Other rewording
- Tidy Node condition table
- Explain SchedulingDisabled synthesized condition
- Drop details of supported versions for NodeRestriction
  Assume that cluster version is v1.13 or later

pull/17509/head
parent ad4505dade
commit c5abd7704a
|
@ -9,32 +9,132 @@ weight: 10
|
|||
|
||||
{{% capture overview %}}
|
||||
|
||||
A node is a worker machine in Kubernetes, previously known as a `minion`. A node
|
||||
may be a VM or physical machine, depending on the cluster. Each node contains
|
||||
the services necessary to run [pods](/docs/concepts/workloads/pods/pod/) and is managed by the master
|
||||
components. The services on a node include the [container runtime](/docs/concepts/overview/components/#container-runtime), kubelet and kube-proxy. See
|
||||
[The Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) section in the
|
||||
architecture design doc for more details.
|
||||
Kubernetes runs your workload by placing containers into Pods to run on _Nodes_.
|
||||
A node may be a virtual or physical machine, depending on the cluster. Each node
|
||||
contains the services necessary to run
|
||||
{{< glossary_tooltip text="Pods" term_id="pod" >}}, managed by the
|
||||
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
|
||||
|
||||
Typically you have several nodes in a cluster; in a learning or resource-limited
|
||||
environment, you might have just one.
|
||||
|
||||
The [components](/docs/concepts/overview/components/#node-components) on a node include the
|
||||
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, a
|
||||
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, and the
|
||||
{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Node Status
|
||||
## Management
|
||||
|
||||
A node's status contains the following information:
|
||||
There are two main ways to have Nodes added to the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}:
|
||||
|
||||
1. The kubelet on a node self-registers to the control plane
|
||||
2. You, or another human user, manually add a Node object
|
||||
|
||||
After you create a Node object, or the kubelet on a node self-registers, the
|
||||
control plane checks whether the new Node object is valid. For example, if you
|
||||
try to create a Node from the following JSON manifest:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Node",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "10.240.79.157",
|
||||
"labels": {
|
||||
"name": "my-first-k8s-node"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Kubernetes creates a Node object internally (the representation). Kubernetes checks
|
||||
that a kubelet has registered to the API server with a name that matches the `metadata.name`
|
||||
field of the Node. If the node is healthy (if all necessary services are running),
|
||||
it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
|
||||
until it becomes healthy.
|
||||
|
||||
{{< note >}}
|
||||
Kubernetes keeps the object for the invalid Node and continues checking to see whether
|
||||
it becomes healthy.
|
||||
|
||||
You, or a {{< glossary_tooltip term_id="controller" text="controller">}}, must explicitly
|
||||
delete the Node object to stop that health checking.
|
||||
{{< /note >}}
|
||||
|
||||
The name of a Node object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
### Self-registration of Nodes
|
||||
|
||||
When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to
|
||||
register itself with the API server. This is the preferred pattern, used by most distros.
|
||||
|
||||
For self-registration, the kubelet is started with the following options:
|
||||
|
||||
- `--kubeconfig` - Path to credentials to authenticate itself to the API server.
|
||||
- `--cloud-provider` - How to talk to a {{< glossary_tooltip text="cloud provider" term_id="cloud-provider" >}} to read metadata about itself.
|
||||
- `--register-node` - Automatically register with the API server.
|
||||
- `--register-with-taints` - Register the node with the given list of {{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).
|
||||
|
||||
No-op if `register-node` is false.
|
||||
- `--node-ip` - IP address of the node.
|
||||
- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
|
||||
- `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
|
||||
|
||||
When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
|
||||
[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
|
||||
kubelets are only authorized to create/modify their own Node resource.
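As an illustration of what self-registration produces: if a kubelet were started with `--register-with-taints=example.com/special=true:NoSchedule` (the taint key, value, and node name below are placeholders for this sketch), the Node object it registers would carry that taint in its spec, roughly like this:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-01              # placeholder node name
spec:
  taints:
  - key: example.com/special   # placeholder key taken from --register-with-taints
    value: "true"
    effect: NoSchedule
```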
|
||||
|
||||
### Manual Node administration
|
||||
|
||||
You can create and modify Node objects using
|
||||
{{< glossary_tooltip text="kubectl" term_id="kubectl" >}}.
|
||||
|
||||
When you want to create Node objects manually, set the kubelet flag `--register-node=false`.
|
||||
|
||||
You can modify Node objects regardless of the setting of `--register-node`.
|
||||
For example, you can set labels on an existing Node, or mark it unschedulable.
|
||||
|
||||
You can use labels on Nodes in conjunction with node selectors on Pods to control
|
||||
scheduling. For example, you can constrain a Pod to only be eligible to run on
|
||||
a subset of the available nodes.
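For example, here is a minimal sketch of a Pod that can only be scheduled onto Nodes carrying an assumed `disktype: ssd` label (the label key and value are illustrative, not a Kubernetes convention):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd   # assumed label; you could add it with `kubectl label nodes <node-name> disktype=ssd`
  containers:
  - name: nginx
    image: nginx
```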
|
||||
|
||||
Marking a node as unschedulable prevents the scheduler from placing new pods onto
|
||||
that Node, but does not affect existing Pods on the Node. This is useful as a
|
||||
preparatory step before a node reboot or other maintenance.
|
||||
|
||||
To mark a Node unschedulable, run:
|
||||
|
||||
```shell
|
||||
kubectl cordon $NODENAME
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Pods that are part of a {{< glossary_tooltip term_id="daemonset" >}} tolerate
|
||||
being run on an unschedulable Node. DaemonSets typically provide node-local services
|
||||
that should run on the Node even if it is being drained of workload applications.
|
||||
{{< /note >}}
|
||||
|
||||
## Node status
|
||||
|
||||
A Node's status contains the following information:
|
||||
|
||||
* [Addresses](#addresses)
|
||||
* [Conditions](#condition)
|
||||
* [Capacity and Allocatable](#capacity)
|
||||
* [Info](#info)
|
||||
|
||||
Node status and other details about a node can be displayed using the following command:
|
||||
You can use `kubectl` to view a Node's status and other details:
|
||||
|
||||
```shell
|
||||
kubectl describe node <insert-node-name-here>
|
||||
```
|
||||
Each section is described in detail below.
|
||||
|
||||
Each section of the output is described below.
|
||||
|
||||
### Addresses
|
||||
|
||||
|
@ -49,15 +149,23 @@ The usage of these fields varies depending on your cloud provider or bare metal
|
|||
|
||||
The `conditions` field describes the status of all `Running` nodes. Examples of conditions include:
|
||||
|
||||
| Node Condition | Description |
|
||||
|----------------|-------------|
|
||||
| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
|
||||
| `MemoryPressure` | `True` if pressure exists on the node memory -- that is, if the node memory is low; otherwise `False` |
|
||||
| `PIDPressure` | `True` if pressure exists on the processes -- that is, if there are too many processes on the node; otherwise `False` |
|
||||
| `DiskPressure` | `True` if pressure exists on the disk size -- that is, if the disk capacity is low; otherwise `False` |
|
||||
| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
|
||||
{{< table caption = "Node conditions, and a description of when each condition applies." >}}
|
||||
| Node Condition | Description |
|
||||
|----------------------|-------------|
|
||||
| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
|
||||
| `DiskPressure` | `True` if pressure exists on the disk size--that is, if the disk capacity is low; otherwise `False` |
|
||||
| `MemoryPressure` | `True` if pressure exists on the node memory--that is, if the node memory is low; otherwise `False` |
|
||||
| `PIDPressure` | `True` if pressure exists on the processes--that is, if there are too many processes on the node; otherwise `False` |
|
||||
| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
|
||||
{{< /table >}}
|
||||
|
||||
The node condition is represented as a JSON object. For example, the following response describes a healthy node.
|
||||
{{< note >}}
|
||||
If you use command-line tools to print details of a cordoned Node, the Condition includes
|
||||
`SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,
|
||||
cordoned nodes are marked Unschedulable in their spec.
|
||||
{{< /note >}}
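For reference, this is roughly what the relevant part of a cordoned Node looks like (all other fields omitted from this trimmed sketch):

```yaml
# trimmed view of a Node that has been cordoned
spec:
  unschedulable: true
```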
|
||||
|
||||
The node condition is represented as a JSON object. For example, the following structure describes a healthy node:
|
||||
|
||||
```json
|
||||
"conditions": [
|
||||
|
@ -72,20 +180,24 @@ The node condition is represented as a JSON object. For example, the following r
|
|||
]
|
||||
```
|
||||
|
||||
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the [kube-controller-manager](/docs/admin/kube-controller-manager/)), all the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the apiserver is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
|
||||
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout` (an argument passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
|
||||
|
||||
In versions of Kubernetes prior to 1.5, the node controller would [force delete](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods)
|
||||
these unreachable pods from the apiserver. However, in 1.5 and higher, the node controller does not force delete pods until it is
|
||||
confirmed that they have stopped running in the cluster. You can see the pods that might be running on an unreachable node as being in
|
||||
the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has
|
||||
permanently left a cluster, the cluster administrator may need to delete the node object by hand. Deleting the node object from
|
||||
Kubernetes causes all the Pod objects running on the node to be deleted from the apiserver, and frees up their names.
|
||||
The node controller does not force delete pods until it is confirmed that they have stopped
|
||||
running in the cluster. You can see the pods that might be running on an unreachable node as
|
||||
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
|
||||
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
|
||||
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
|
||||
all the Pod objects running on the node to be deleted from the API server, and frees up their
|
||||
names.
|
||||
|
||||
The node lifecycle controller automatically creates
|
||||
[taints](/docs/concepts/configuration/taint-and-toleration/) that represent conditions.
|
||||
The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
|
||||
Pods can also have tolerations which let them tolerate a Node's taints.
|
||||
|
||||
See [Taint Nodes by Condition](/docs/concepts/configuration/taint-and-toleration/#taint-nodes-by-condition)
|
||||
for more details.
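As a sketch of how a Pod can opt in to tolerating one of these condition-based taints for a limited time (the 120-second value is purely illustrative):

```yaml
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 120   # stay bound for up to two minutes after the taint appears
```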
|
||||
|
||||
### Capacity and Allocatable {#capacity}
|
||||
|
||||
Describes the resources available on the node: CPU, memory and the maximum
|
||||
|
@ -104,48 +216,10 @@ on a Node.
|
|||
Describes general information about the node, such as kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), and OS name.
|
||||
This information is gathered by Kubelet from the node.
|
||||
|
||||
## Management
|
||||
### Node controller
|
||||
|
||||
Unlike [pods](/docs/concepts/workloads/pods/pod/) and [services](/docs/concepts/services-networking/service/),
|
||||
a node is not inherently created by Kubernetes: it is created externally by cloud
|
||||
providers like Google Compute Engine, or it exists in your pool of physical or virtual
|
||||
machines. So when Kubernetes creates a node, it creates
|
||||
an object that represents the node. After creation, Kubernetes
|
||||
checks whether the node is valid or not. For example, if you try to create
|
||||
a node from the following content:
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "Node",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "10.240.79.157",
|
||||
"labels": {
|
||||
"name": "my-first-k8s-node"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Kubernetes creates a node object internally (the representation), and
|
||||
validates the node by health checking based on the `metadata.name` field. If the node is valid -- that is, if all necessary
|
||||
services are running -- it is eligible to run a pod. Otherwise, it is
|
||||
ignored for any cluster activity until it becomes valid.
|
||||
The name of a Node object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
{{< note >}}
|
||||
Kubernetes keeps the object for the invalid node and keeps checking to see whether it becomes valid.
|
||||
You must explicitly delete the Node object to stop this process.
|
||||
{{< /note >}}
|
||||
|
||||
Currently, there are three components that interact with the Kubernetes node
|
||||
interface: node controller, kubelet, and kubectl.
|
||||
|
||||
### Node Controller
|
||||
|
||||
The node controller is a Kubernetes master component which manages various
|
||||
aspects of nodes.
|
||||
The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a
|
||||
Kubernetes control plane component that manages various aspects of nodes.
|
||||
|
||||
The node controller has multiple roles in a node's life. The first is assigning a
|
||||
CIDR block to the node when it is registered (if CIDR assignment is turned on).
|
||||
|
@ -168,6 +242,7 @@ checks the state of each node every `--node-monitor-period` seconds.
|
|||
#### Heartbeats
|
||||
|
||||
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
|
||||
|
||||
There are two forms of heartbeats: updates of `NodeStatus` and the
|
||||
[Lease object](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#lease-v1-coordination-k8s-io).
|
||||
Each Node has an associated Lease object in the `kube-node-lease`
|
||||
|
@ -188,13 +263,7 @@ a Lease object.
|
|||
|
||||
#### Reliability
|
||||
|
||||
In Kubernetes 1.4, we updated the logic of the node controller to better handle
|
||||
cases when a large number of nodes have problems with reaching the master
|
||||
(e.g. because the master has networking problems). Starting with 1.4, the node
|
||||
controller looks at the state of all nodes in the cluster when making a
|
||||
decision about pod eviction.
|
||||
|
||||
In most cases, node controller limits the eviction rate to
|
||||
In most cases, node controller limits the eviction rate to
|
||||
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
|
||||
from more than 1 node per 10 seconds.
|
||||
|
||||
|
@ -220,62 +289,12 @@ completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
|
|||
case, the node controller assumes that there's some problem with master
|
||||
connectivity and stops all evictions until some connectivity is restored.
|
||||
|
||||
Starting in Kubernetes 1.6, the NodeController is also responsible for evicting
|
||||
pods that are running on nodes with `NoExecute` taints, when the pods do not tolerate
|
||||
the taints. Additionally, as an alpha feature that is disabled by default, the
|
||||
NodeController is responsible for adding taints corresponding to node problems like
|
||||
node unreachable or not ready. See [this documentation](/docs/concepts/configuration/taint-and-toleration/)
|
||||
for details about `NoExecute` taints and the alpha feature.
|
||||
The node controller is also responsible for evicting pods running on nodes with
|
||||
`NoExecute` taints, unless those pods tolerate that taint.
|
||||
The node controller also adds {{< glossary_tooltip text="taints" term_id="taint" >}}
|
||||
corresponding to node problems like node unreachable or not ready. This means
|
||||
that the scheduler won't place Pods onto unhealthy nodes.
|
||||
|
||||
Starting in version 1.8, the node controller can be made responsible for creating taints that represent
|
||||
Node conditions. This is an alpha feature of version 1.8.
|
||||
|
||||
### Self-Registration of Nodes
|
||||
|
||||
When the kubelet flag `--register-node` is true (the default), the kubelet will attempt to
|
||||
register itself with the API server. This is the preferred pattern, used by most distros.
|
||||
|
||||
For self-registration, the kubelet is started with the following options:
|
||||
|
||||
- `--kubeconfig` - Path to credentials to authenticate itself to the apiserver.
|
||||
- `--cloud-provider` - How to talk to a cloud provider to read metadata about itself.
|
||||
- `--register-node` - Automatically register with the API server.
|
||||
- `--register-with-taints` - Register the node with the given list of taints (comma separated `<key>=<value>:<effect>`). No-op if `register-node` is false.
|
||||
- `--node-ip` - IP address of the node.
|
||||
- `--node-labels` - Labels to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) in 1.13+).
|
||||
- `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.
|
||||
|
||||
When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
|
||||
[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
|
||||
kubelets are only authorized to create/modify their own Node resource.
|
||||
|
||||
#### Manual Node Administration
|
||||
|
||||
A cluster administrator can create and modify node objects.
|
||||
|
||||
If the administrator wishes to create node objects manually, set the kubelet flag
|
||||
`--register-node=false`.
|
||||
|
||||
The administrator can modify node resources (regardless of the setting of `--register-node`).
|
||||
Modifications include setting labels on the node and marking it unschedulable.
|
||||
|
||||
Labels on nodes can be used in conjunction with node selectors on pods to control scheduling,
|
||||
e.g. to constrain a pod to only be eligible to run on a subset of the nodes.
|
||||
|
||||
Marking a node as unschedulable prevents new pods from being scheduled to that
|
||||
node, but does not affect any existing pods on the node. This is useful as a
|
||||
preparatory step before a node reboot, etc. For example, to mark a node
|
||||
unschedulable, run this command:
|
||||
|
||||
```shell
|
||||
kubectl cordon $NODENAME
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Pods created by a DaemonSet controller bypass the Kubernetes scheduler
|
||||
and do not respect the unschedulable attribute on a node. This assumes that daemons belong on
|
||||
the machine even if it is being drained of applications while it prepares for a reboot.
|
||||
{{< /note >}}
|
||||
|
||||
{{< caution >}}
|
||||
`kubectl cordon` marks a node as 'unschedulable', which has the side effect of the service
|
||||
|
@ -285,34 +304,40 @@ eligible for, effectively removing incoming load balancer traffic from the cordo
|
|||
|
||||
### Node capacity
|
||||
|
||||
The capacity of the node (number of cpus and amount of memory) is part of the node object.
|
||||
Normally, nodes register themselves and report their capacity when creating the node object. If
|
||||
you are doing [manual node administration](#manual-node-administration), then you need to set node
|
||||
capacity when adding a node.
|
||||
Node objects track information about the Node's resource capacity (for example: the amount
|
||||
of memory available, and the number of CPUs).
|
||||
Nodes that [self register](#self-registration-of-nodes) report their capacity during
|
||||
registration. If you [manually](#manual-node-administration) add a Node, then
|
||||
you need to set the node's capacity information when you add it.
|
||||
|
||||
The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
|
||||
checks that the sum of the requests of containers on the node is no greater than the node capacity. It
|
||||
includes all containers started by the kubelet, but not containers started directly by the [container runtime](/docs/concepts/overview/components/#container-runtime) nor any process running outside of the containers.
|
||||
The Kubernetes {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} ensures that
|
||||
there are enough resources for all the Pods on a Node. The scheduler checks that the sum
|
||||
of the requests of containers on the node is no greater than the node's capacity.
|
||||
That sum of requests includes all containers managed by the kubelet, but excludes any
|
||||
containers started directly by the container runtime, and also excludes any
|
||||
processes running outside of the kubelet's control.
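For illustration, the capacity and allocatable figures are recorded in the Node's status; the values in this trimmed fragment are made up:

```yaml
status:
  capacity:
    cpu: "4"
    memory: 16Gi
    pods: "110"
  allocatable:      # capacity minus resources reserved for system components
    cpu: 3800m
    memory: 15Gi
    pods: "110"
```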
|
||||
|
||||
If you want to explicitly reserve resources for non-Pod processes, follow this tutorial to
|
||||
{{< note >}}
|
||||
If you want to explicitly reserve resources for non-Pod processes, see
|
||||
[reserve resources for system daemons](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
|
||||
{{< /note >}}
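A minimal sketch of what such a reservation can look like in a kubelet configuration file (the amounts are placeholders, not recommendations; see the linked task page for the authoritative settings):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:     # reserved for OS daemons such as sshd
  cpu: 500m
  memory: 1Gi
kubeReserved:       # reserved for Kubernetes system daemons such as the kubelet
  cpu: 500m
  memory: 1Gi
```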
|
||||
|
||||
## Node topology
|
||||
|
||||
{{< feature-state state="alpha" >}}
|
||||
{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
|
||||
|
||||
If you have enabled the `TopologyManager`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then
|
||||
the kubelet can use topology hints when making resource assignment decisions.
|
||||
|
||||
## API Object
|
||||
|
||||
Node is a top-level resource in the Kubernetes REST API. More details about the
|
||||
API object can be found at:
|
||||
[Node API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
|
||||
See [Control Topology Management Policies on a Node](/docs/tasks/administer-cluster/topology-manager/)
|
||||
for more information.
|
||||
|
||||
{{% /capture %}}
|
||||
{{% capture whatsnext %}}
|
||||
* Read about [node components](/docs/concepts/overview/components/#node-components)
|
||||
* Read about node-level topology: [Control Topology Management Policies on a node](/docs/tasks/administer-cluster/topology-manager/)
|
||||
* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
|
||||
* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
|
||||
* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
|
||||
section of the architecture design document.
|
||||
* Read about [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/).
|
||||
* Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).
|
||||
{{% /capture %}}
|
||||
|
|
|
@ -10,16 +10,17 @@ weight: 40
|
|||
|
||||
|
||||
{{% capture overview %}}
|
||||
Node affinity, described [here](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity),
|
||||
is a property of *pods* that *attracts* them to a set of nodes (either as a
|
||||
preference or a hard requirement). Taints are the opposite -- they allow a
|
||||
*node* to *repel* a set of pods.
|
||||
[_Node affinity_](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
|
||||
is a property of {{< glossary_tooltip text="Pods" term_id="pod" >}} that *attracts* them to
|
||||
a set of {{< glossary_tooltip text="nodes" term_id="node" >}} (either as a preference or a
|
||||
hard requirement). _Taints_ are the opposite -- they allow a node to repel a set of pods.
|
||||
|
||||
_Tolerations_ are applied to pods, and allow (but do not require) the pods to schedule
|
||||
onto nodes with matching taints.
|
||||
|
||||
Taints and tolerations work together to ensure that pods are not scheduled
|
||||
onto inappropriate nodes. One or more taints are applied to a node; this
|
||||
marks that the node should not accept any pods that do not tolerate the taints.
|
||||
Tolerations are applied to pods, and allow (but do not require) the pods to schedule
|
||||
onto nodes with matching taints.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
@ -65,12 +66,12 @@ Here’s an example of a pod that uses tolerations:
|
|||
|
||||
{{< codenew file="pods/pod-with-toleration.yaml" >}}
|
||||
|
||||
The default value for `operator` is `Equal`.
|
||||
|
||||
A toleration "matches" a taint if the keys are the same and the effects are the same, and:
|
||||
|
||||
* the `operator` is `Exists` (in which case no `value` should be specified), or
|
||||
* the `operator` is `Equal` and the `value`s are equal
|
||||
|
||||
`Operator` defaults to `Equal` if not specified.
|
||||
* the `operator` is `Equal` and the `value`s are equal.
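For example, the following toleration uses `Exists` and therefore matches any taint with the key `example-key` and the `NoSchedule` effect, whatever the taint's value (the key is a placeholder):

```yaml
tolerations:
- key: "example-key"
  operator: "Exists"
  effect: "NoSchedule"
```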
|
||||
|
||||
{{< note >}}
|
||||
|
||||
|
@ -204,7 +205,7 @@ when there are node problems, which is described in the next section.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
|
||||
|
||||
Earlier we mentioned the `NoExecute` taint effect, which affects pods that are already
|
||||
The `NoExecute` taint effect, mentioned above, affects pods that are already
|
||||
running on the node as follows
|
||||
|
||||
* pods that do not tolerate the taint are evicted immediately
|
||||
|
@ -213,9 +214,8 @@ running on the node as follows
|
|||
* pods that tolerate the taint with a specified `tolerationSeconds` remain
|
||||
bound for the specified amount of time
|
||||
|
||||
In addition, Kubernetes 1.6 introduced alpha support for representing node
|
||||
problems. In other words, the node controller automatically taints a node when
|
||||
certain condition is true. The following taints are built in:
|
||||
The node controller automatically taints a Node when certain conditions
|
||||
are true. The following taints are built in:
|
||||
|
||||
* `node.kubernetes.io/not-ready`: Node is not ready. This corresponds to
|
||||
the NodeCondition `Ready` being "`False`".
|
||||
|
@ -236,19 +236,18 @@ with `NoExecute` effect. If the fault condition returns to normal the kubelet or
|
|||
controller can remove the relevant taint(s).
|
||||
|
||||
{{< note >}}
|
||||
To maintain the existing [rate limiting](/docs/concepts/architecture/nodes/)
|
||||
behavior of pod evictions due to node problems, the system actually adds the taints
|
||||
in a rate-limited way. This prevents massive pod evictions in scenarios such
|
||||
as the master becoming partitioned from the nodes.
|
||||
The control plane limits the rate at which it adds new taints to nodes. This rate limiting
|
||||
manages the number of evictions that are triggered when many nodes become unreachable at
|
||||
once (for example: if there is a network disruption).
|
||||
{{< /note >}}
|
||||
|
||||
The feature, in combination with `tolerationSeconds`, allows a pod
|
||||
to specify how long it should stay bound to a node that has one or both of these problems.
|
||||
You can specify `tolerationSeconds` for a Pod to define how long that Pod stays bound
|
||||
to a failing or unresponsive Node.
|
||||
|
||||
For example, an application with a lot of local state might want to stay
|
||||
bound to node for a long time in the event of network partition, in the hope
|
||||
For example, you might want to keep an application with a lot of local state
|
||||
bound to a node for a long time in the event of network partition, hoping
|
||||
that the partition will recover and thus the pod eviction can be avoided.
|
||||
The toleration the pod would use in that case would look like
|
||||
The toleration you set for that Pod might look like:
|
||||
|
||||
```yaml
|
||||
tolerations:
|
||||
|
@ -258,20 +257,15 @@ tolerations:
|
|||
tolerationSeconds: 6000
|
||||
```
|
||||
|
||||
Note that Kubernetes automatically adds a toleration for
|
||||
`node.kubernetes.io/not-ready` with `tolerationSeconds=300`
|
||||
unless the pod configuration provided
|
||||
by the user already has a toleration for `node.kubernetes.io/not-ready`.
|
||||
Likewise it adds a toleration for
|
||||
`node.kubernetes.io/unreachable` with `tolerationSeconds=300`
|
||||
unless the pod configuration provided
|
||||
by the user already has a toleration for `node.kubernetes.io/unreachable`.
|
||||
{{< note >}}
|
||||
Kubernetes automatically adds a toleration for
|
||||
`node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable`
|
||||
with `tolerationSeconds=300`,
|
||||
unless you, or a controller, set those tolerations explicitly.
|
||||
|
||||
These automatically-added tolerations ensure that
|
||||
the default pod behavior of remaining bound for 5 minutes after one of these
|
||||
problems is detected is maintained.
|
||||
The two default tolerations are added by the [DefaultTolerationSeconds
|
||||
admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds).
|
||||
These automatically-added tolerations mean that Pods remain bound to
|
||||
Nodes for 5 minutes after one of these problems is detected.
|
||||
{{< /note >}}
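In other words, a Pod that does not set its own tolerations for these keys ends up with tolerations roughly equivalent to:

```yaml
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```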
|
||||
|
||||
[DaemonSet](/docs/concepts/workloads/controllers/daemonset/) pods are created with
|
||||
`NoExecute` tolerations for the following taints with no `tolerationSeconds`:
|
||||
|
@ -287,9 +281,8 @@ The node lifecycle controller automatically creates taints corresponding to
|
|||
Node conditions with `NoSchedule` effect.
|
||||
Similarly the scheduler does not check Node conditions; instead the scheduler checks taints. This assures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
|
||||
|
||||
Starting in Kubernetes 1.8, the DaemonSet controller automatically adds the
|
||||
following `NoSchedule` tolerations to all daemons, to prevent DaemonSets from
|
||||
breaking.
|
||||
The DaemonSet controller automatically adds the following `NoSchedule`
|
||||
tolerations to all daemons, to prevent DaemonSets from breaking.
|
||||
|
||||
* `node.kubernetes.io/memory-pressure`
|
||||
* `node.kubernetes.io/disk-pressure`
|
||||
|
@ -299,3 +292,10 @@ breaking.
|
|||
|
||||
Adding these tolerations ensures backward compatibility. You can also add
|
||||
arbitrary tolerations to DaemonSets.
|
||||
|
||||
{{% /capture %}}
|
||||
{{% capture whatsnext %}}
|
||||
* Read about [out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) and how you can configure it
|
||||
* Read about [pod priority](/docs/concepts/configuration/pod-priority-preemption/)
|
||||
|
||||
{{% /capture %}}
|
||||
|
|
|
@ -83,7 +83,7 @@ Node components run on every node, maintaining running pods and providing the Ku
|
|||
|
||||
{{< glossary_definition term_id="kube-proxy" length="all" >}}
|
||||
|
||||
### Container Runtime
|
||||
### Container runtime
|
||||
|
||||
{{< glossary_definition term_id="container-runtime" length="all" >}}
|
||||
|
||||
|
|
|
@ -15,3 +15,5 @@ tags:
|
|||
<!--more-->
|
||||
|
||||
A worker node may be a VM or physical machine, depending on the cluster. It has local daemons or services necessary to run {{< glossary_tooltip text="Pods" term_id="pod" >}} and is managed by the control plane. The daemons on a node include {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}, and a container runtime implementing the {{< glossary_tooltip text="CRI" term_id="cri" >}} such as {{< glossary_tooltip term_id="docker" >}}.
|
||||
|
||||
In early Kubernetes versions, Nodes were called “Minions”.
|
||||
|
|