From 71b92cc0363a9ed48482983b80a590c9b34afefc Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Mon, 1 Jan 2018 12:34:27 +0800
Subject: [PATCH] Fix out-of-resource future work

Closes: #2896

According to the report, google/cadvisor#1422 has been closed.
Moreover, the related issue was fixed in google/cadvisor#1489 and
merged a long time ago. We can safely remove the known issue now.
---
 _redirects                                       | 2 +-
 .../administer-cluster}/memory-available.sh      | 0
 docs/tasks/administer-cluster/out-of-resource.md | 7 -------
 3 files changed, 1 insertion(+), 8 deletions(-)
 rename docs/{concepts/cluster-administration/out-of-resource => tasks/administer-cluster}/memory-available.sh (100%)

diff --git a/_redirects b/_redirects
index 8d57712722..fbd6069238 100644
--- a/_redirects
+++ b/_redirects
@@ -217,7 +217,7 @@
 /docs/tasks/administer-cluster/default-cpu-request-limit/ /docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit/ 301
 /docs/tasks/administer-cluster/default-memory-request-limit/ /docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit/ 301
 /docs/tasks/administer-cluster/developing-cloud-controller-manager.md /docs/tasks/administer-cluster/developing-cloud-controller-manager/ 301
-/docs/tasks/administer-cluster/out-of-resource/memory-available.sh /docs/concepts/cluster-administration/out-of-resource/memory-available.sh 301
+/docs/tasks/administer-cluster/out-of-resource/memory-available.sh /docs/tasks/administer-cluster/memory-available.sh 301
 /docs/tasks/administer-cluster/overview/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301
 /docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/tasks/administer-cluster/out-of-resource/ 301
 /docs/tasks/administer-cluster/running-cloud-controller.md /docs/tasks/administer-cluster/running-cloud-controller/ 301
diff --git a/docs/concepts/cluster-administration/out-of-resource/memory-available.sh b/docs/tasks/administer-cluster/memory-available.sh
similarity index 100%
rename from docs/concepts/cluster-administration/out-of-resource/memory-available.sh
rename to docs/tasks/administer-cluster/memory-available.sh
diff --git a/docs/tasks/administer-cluster/out-of-resource.md b/docs/tasks/administer-cluster/out-of-resource.md
index 7461c7329a..2c45de8bca 100644
--- a/docs/tasks/administer-cluster/out-of-resource.md
+++ b/docs/tasks/administer-cluster/out-of-resource.md
@@ -370,10 +370,3 @@ to prevent system OOMs, and promote eviction of workloads so cluster state can r
 The Pod eviction may evict more Pods than needed due to stats collection timing gap. This can be mitigated by adding
 the ability to get root container stats on an on-demand basis
 [(https://github.com/google/cadvisor/issues/1247)](https://github.com/google/cadvisor/issues/1247) in the future.
-
-### How kubelet ranks Pods for eviction in response to inode exhaustion
-
-At this time, it is not possible to know how many inodes were consumed by a particular container. If the `kubelet` observes
-inode exhaustion, it evicts Pods by ranking them by quality of service. The following issue has been opened in cadvisor
-to track per container inode consumption [(https://github.com/google/cadvisor/issues/1422)](https://github.com/google/cadvisor/issues/1422) which would allow us to rank Pods
-by inode consumption. For example, this would let us identify a container that created large numbers of 0 byte files, and evict that Pod over others.