Update links (2)

pull/28797/head
Jihoon Seo 2021-07-09 11:02:29 +09:00
parent 20eea3b75b
commit 374bb0547a
5 changed files with 12 additions and 11 deletions

View File

@@ -104,7 +104,7 @@ Master and Worker nodes should be protected from overload and resource exhaustio
Resource consumption by the control plane will correlate with the number of pods and the pod churn rate. Very large and very small clusters will benefit from non-default [settings](/docs/reference/command-line-tools-reference/kube-apiserver/) of kube-apiserver request throttling and memory. Having these too high can lead to request limit exceeded and out of memory errors.
-On worker nodes, [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](/docs/tasks/administer-cluster/out-of-resource/) conditions can be configured.
+On worker nodes, [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](/docs/concepts/scheduling-eviction/node-pressure-eviction/) conditions can be configured.
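The kubelet settings this paragraph points at can be sketched as follows. This is only an illustrative KubeletConfiguration fragment; the reservation and eviction quantities are placeholder assumptions, not values taken from the linked pages:

```yaml
# Illustrative KubeletConfiguration fragment; all quantities are placeholders.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:            # capacity set aside for OS daemons (feeds Node Allocatable)
  cpu: 500m
  memory: 512Mi
kubeReserved:              # capacity set aside for the kubelet and container runtime
  cpu: 500m
  memory: 512Mi
evictionHard:              # node-pressure eviction thresholds
  memory.available: "200Mi"
  nodefs.available: "10%"
```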
## Security

View File

@@ -353,7 +353,7 @@ the removal of the lowest priority Pods is not sufficient to allow the scheduler
to schedule the preemptor Pod, or if the lowest priority Pods are protected by
`PodDisruptionBudget`.
-The kubelet uses Priority to determine pod order for [out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
+The kubelet uses Priority to determine pod order for [node-pressure eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
You can use the QoS class to estimate the order in which pods are most likely
to get evicted. The kubelet ranks pods for eviction based on the following factors:
@@ -361,10 +361,10 @@ to get evicted. The kubelet ranks pods for eviction based on the following facto
1. Pod Priority
1. Amount of resource usage relative to requests
-See [evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
+See [Pod selection for kubelet eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction)
for more details.
-kubelet out-of-resource eviction does not evict Pods when their
+kubelet node-pressure eviction does not evict Pods when their
usage does not exceed their requests. If a Pod with lower priority is not
exceeding its requests, it won't be evicted. Another Pod with higher priority
that exceeds its requests may be evicted.
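As a rough sketch of how those ranking inputs show up in manifests, a Pod can reference a PriorityClass and keep its usage bounded by explicit requests. The names, priority value, and quantities below are made-up placeholders, not content from the page being edited:

```yaml
# Illustrative only: a PriorityClass plus a Pod that references it.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: example-high-priority      # hypothetical name
value: 100000
globalDefault: false
description: "Pods that should be among the last candidates for eviction."
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app                # hypothetical name
spec:
  priorityClassName: example-high-priority
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:                    # a Pod whose usage stays within its requests
        cpu: "250m"                # is not evicted for node pressure
        memory: "128Mi"
      limits:
        cpu: "250m"
        memory: "128Mi"
```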

View File

@@ -267,7 +267,7 @@ This ensures that DaemonSet pods are never evicted due to these problems.
## Taint Nodes by Condition
The control plane, using the node {{<glossary_tooltip text="controller" term_id="controller">}},
-automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
+automatically creates taints with a `NoSchedule` effect for [node conditions](/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions).
The scheduler checks taints, not node conditions, when it makes scheduling
decisions. This ensures that node conditions don't directly affect scheduling.
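A toleration for one of these condition taints looks roughly like the sketch below; the Pod name is a placeholder, while `node.kubernetes.io/memory-pressure` is the key the node controller adds for the memory-pressure condition:

```yaml
# Illustrative Pod (placeholder name) tolerating the memory-pressure condition taint.
apiVersion: v1
kind: Pod
metadata:
  name: example-pressure-tolerant-pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
  tolerations:
  - key: "node.kubernetes.io/memory-pressure"
    operator: "Exists"
    effect: "NoSchedule"
```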
@@ -298,7 +298,7 @@ arbitrary tolerations to DaemonSets.
## {{% heading "whatsnext" %}}
-* Read about [out of resource handling](/docs/concepts/scheduling-eviction/out-of-resource/) and how you can configure it
-* Read about [pod priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
+* Read about [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/) and how you can configure it
+* Read about [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)

View File

@@ -31,7 +31,7 @@ an application. Examples are:
- cloud provider or hypervisor failure makes VM disappear
- a kernel panic
- the node disappears from the cluster due to cluster network partition
-- eviction of a pod due to the node being [out-of-resources](/docs/tasks/administer-cluster/out-of-resource/).
+- eviction of a pod due to the node being [out-of-resources](/docs/concepts/scheduling-eviction/node-pressure-eviction/).
Except for the out-of-resources condition, all these conditions
should be familiar to most users; they are not specific

View File

@@ -57,7 +57,7 @@
/docs/admin/node-conformance.md /docs/admin/node-conformance/ 301
/docs/admin/node-conformance/ /docs/setup/best-practices/node-conformance/ 301
/docs/admin/node-problem/ /docs/tasks/debug-application-cluster/monitor-node-health/ 301
-/docs/admin/out-of-resource/ /docs/tasks/administer-cluster/out-of-resource/ 301
+/docs/admin/out-of-resource/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301
/docs/admin/rescheduler/ /docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ 301
/docs/admin/resourcequota/* /docs/concepts/policy/resource-quotas/ 301
/docs/admin/resourcequota/limitstorageconsumption/ /docs/tasks/administer-cluster/limit-storage-consumption/ 301
@@ -128,7 +128,8 @@
/docs/concepts/scheduling/scheduling-framework/ /docs/concepts/scheduling-eviction/scheduling-framework/ 301
/id/docs/concepts/scheduling/scheduling-framework/ /id/docs/concepts/scheduling-eviction/scheduling-framework/ 301
/docs/concepts/scheduling-eviction/eviction-policy/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301
-/docs/concepts/scheduling-eviction/pod-eviction/ /docs/concepts/scheduling-eviction/ 301
+/docs/concepts/scheduling-eviction/out-of-resource/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301
+/docs/concepts/scheduling-eviction/pod-eviction/ /docs/concepts/scheduling-eviction/#pod-disruption 301
/docs/concepts/service-catalog/ /docs/concepts/extend-kubernetes/service-catalog/ 301
/docs/concepts/services-networking/networkpolicies/ /docs/concepts/services-networking/network-policies/ 301
/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ /docs/tasks/network/customize-hosts-file-for-pods/ 301
@@ -265,7 +266,7 @@
/docs/tasks/administer-cluster/overview/ /docs/concepts/cluster-administration/ 301
/docs/tasks/administer-cluster/quota-memory-cpu-namespace/ /docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/ 301
/docs/tasks/administer-cluster/quota-pod-namespace/ /docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/ 301
-/docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/tasks/administer-cluster/out-of-resource/ 301
+/docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301
/docs/tasks/administer-cluster/out-of-resource/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301
/docs/tasks/administer-cluster/romana-network-policy/ /docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/ 301
/docs/tasks/administer-cluster/running-cloud-controller.md /docs/tasks/administer-cluster/running-cloud-controller/ 301