diff --git a/_redirects b/_redirects
index 8d57712722..fbd6069238 100644
--- a/_redirects
+++ b/_redirects
@@ -217,7 +217,7 @@
 /docs/tasks/administer-cluster/default-cpu-request-limit/ /docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit/ 301
 /docs/tasks/administer-cluster/default-memory-request-limit/ /docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit/ 301
 /docs/tasks/administer-cluster/developing-cloud-controller-manager.md /docs/tasks/administer-cluster/developing-cloud-controller-manager/ 301
-/docs/tasks/administer-cluster/out-of-resource/memory-available.sh /docs/concepts/cluster-administration/out-of-resource/memory-available.sh 301
+/docs/tasks/administer-cluster/out-of-resource/memory-available.sh /docs/tasks/administer-cluster/memory-available.sh 301
 /docs/tasks/administer-cluster/overview/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301
 /docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/tasks/administer-cluster/out-of-resource/ 301
 /docs/tasks/administer-cluster/running-cloud-controller.md /docs/tasks/administer-cluster/running-cloud-controller/ 301
diff --git a/docs/concepts/cluster-administration/out-of-resource/memory-available.sh b/docs/tasks/administer-cluster/memory-available.sh
similarity index 100%
rename from docs/concepts/cluster-administration/out-of-resource/memory-available.sh
rename to docs/tasks/administer-cluster/memory-available.sh
diff --git a/docs/tasks/administer-cluster/out-of-resource.md b/docs/tasks/administer-cluster/out-of-resource.md
index 7461c7329a..2c45de8bca 100644
--- a/docs/tasks/administer-cluster/out-of-resource.md
+++ b/docs/tasks/administer-cluster/out-of-resource.md
@@ -370,10 +370,3 @@ to prevent system OOMs, and promote eviction of workloads so cluster state can r
 The Pod eviction may evict more Pods than needed due to stats collection timing gap. This can be
 mitigated by adding the ability to get root container stats on an on-demand basis
 [(https://github.com/google/cadvisor/issues/1247)](https://github.com/google/cadvisor/issues/1247) in the future.
-
-### How kubelet ranks Pods for eviction in response to inode exhaustion
-
-At this time, it is not possible to know how many inodes were consumed by a particular container. If the `kubelet` observes
-inode exhaustion, it evicts Pods by ranking them by quality of service. The following issue has been opened in cadvisor
-to track per container inode consumption [(https://github.com/google/cadvisor/issues/1422)](https://github.com/google/cadvisor/issues/1422) which would allow us to rank Pods
-by inode consumption. For example, this would let us identify a container that created large numbers of 0 byte files, and evict that Pod over others.
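
Note on the `_redirects` hunk above: each rule follows the Netlify-style `source destination status` format, and the changed entry keeps the old `.sh` URL working by pointing it at the file's new location with a 301. As a rough, hypothetical sketch of how such a rule table is interpreted (not part of the site build; `parse_redirects` and `resolve` are illustrative names only):

```python
# Hypothetical sketch: interpreting Netlify-style `_redirects` rules of the
# form "<source> <destination> [status]". Not part of this PR or the site build.
from typing import List, Optional, Tuple

Rule = Tuple[str, str, str]  # (source, destination, status)

def parse_redirects(text: str) -> List[Rule]:
    rules: List[Rule] = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        parts = line.split()
        status = parts[2] if len(parts) > 2 else "301"
        rules.append((parts[0], parts[1], status))
    return rules

def resolve(path: str, rules: List[Rule]) -> Optional[Tuple[str, str]]:
    # First exact match wins; returns (destination, status) or None.
    for source, destination, status in rules:
        if path == source:
            return destination, status
    return None

rules = parse_redirects(
    "/docs/tasks/administer-cluster/out-of-resource/memory-available.sh "
    "/docs/tasks/administer-cluster/memory-available.sh 301"
)
print(resolve("/docs/tasks/administer-cluster/out-of-resource/memory-available.sh", rules))
# ('/docs/tasks/administer-cluster/memory-available.sh', '301')
```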