Fix out-of-resource future work

Closes: #2896

According to the report, google/cadvisor#1422 has been closed.
The related issue was fixed in google/cadvisor#1489, which was
merged a long time ago, so we can safely remove the known issue now.
pull/6807/head
Qiming Teng 2018-01-01 12:34:27 +08:00
parent 53e05358be
commit 71b92cc036
3 changed files with 1 addition and 8 deletions


@@ -217,7 +217,7 @@
/docs/tasks/administer-cluster/default-cpu-request-limit/ /docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit/ 301
/docs/tasks/administer-cluster/default-memory-request-limit/ /docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit/ 301
/docs/tasks/administer-cluster/developing-cloud-controller-manager.md /docs/tasks/administer-cluster/developing-cloud-controller-manager/ 301
-/docs/tasks/administer-cluster/out-of-resource/memory-available.sh /docs/concepts/cluster-administration/out-of-resource/memory-available.sh 301
+/docs/tasks/administer-cluster/out-of-resource/memory-available.sh /docs/tasks/administer-cluster/memory-available.sh 301
/docs/tasks/administer-cluster/overview/ /docs/concepts/cluster-administration/cluster-administration-overview/ 301
/docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/tasks/administer-cluster/out-of-resource/ 301
/docs/tasks/administer-cluster/running-cloud-controller.md /docs/tasks/administer-cluster/running-cloud-controller/ 301
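For context, each line in this hunk is a redirect rule of the form `source-path destination-path HTTP-status`; the file name is not shown above, so treating it as a Netlify-style `_redirects` file is an assumption. The change simply points the old `memory-available.sh` URL at its new location under `/docs/tasks/administer-cluster/`:

```
# assumed redirect rule format: <old path> <new path> <HTTP status code>
/docs/tasks/administer-cluster/out-of-resource/memory-available.sh  /docs/tasks/administer-cluster/memory-available.sh  301
```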


@@ -370,10 +370,3 @@ to prevent system OOMs, and promote eviction of workloads so cluster state can r
The Pod eviction may evict more Pods than needed due to stats collection timing gap. This can be mitigated by adding
the ability to get root container stats on an on-demand basis [(https://github.com/google/cadvisor/issues/1247)](https://github.com/google/cadvisor/issues/1247) in the future.
-### How kubelet ranks Pods for eviction in response to inode exhaustion
-At this time, it is not possible to know how many inodes were consumed by a particular container. If the `kubelet` observes
-inode exhaustion, it evicts Pods by ranking them by quality of service. The following issue has been opened in cadvisor
-to track per container inode consumption [(https://github.com/google/cadvisor/issues/1422)](https://github.com/google/cadvisor/issues/1422) which would allow us to rank Pods
-by inode consumption. For example, this would let us identify a container that created large numbers of 0 byte files, and evict that Pod over others.
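To make the removed ranking description concrete, here is a minimal Go sketch of ordering Pods for eviction purely by quality-of-service class, which is what the passage above describes for the case where per-container inode usage is unknown. The `pod` struct, `QOSClass` values, and `rankForInodeEviction` helper are illustrative assumptions, not kubelet's actual code.

```go
package main

import (
	"fmt"
	"sort"
)

// QOSClass mirrors the three Kubernetes quality-of-service classes.
type QOSClass string

const (
	BestEffort QOSClass = "BestEffort"
	Burstable  QOSClass = "Burstable"
	Guaranteed QOSClass = "Guaranteed"
)

// qosRank assigns an eviction priority: lower rank is evicted first.
// The ordering (BestEffort before Burstable before Guaranteed) follows
// the general QoS-based eviction rule; it is hard-coded here only for
// illustration.
var qosRank = map[QOSClass]int{
	BestEffort: 0,
	Burstable:  1,
	Guaranteed: 2,
}

// pod is a hypothetical, trimmed-down stand-in for a real Pod object.
type pod struct {
	Name string
	QOS  QOSClass
}

// rankForInodeEviction sorts Pods so the earliest entries are the first
// eviction candidates. Because per-container inode usage is not known
// (the gap tracked in google/cadvisor#1422), QoS class is the only
// signal used here.
func rankForInodeEviction(pods []pod) {
	sort.SliceStable(pods, func(i, j int) bool {
		return qosRank[pods[i].QOS] < qosRank[pods[j].QOS]
	})
}

func main() {
	pods := []pod{
		{Name: "db", QOS: Guaranteed},
		{Name: "batch-job", QOS: BestEffort},
		{Name: "web", QOS: Burstable},
	}
	rankForInodeEviction(pods)
	for _, p := range pods {
		fmt.Printf("%s (%s)\n", p.Name, p.QOS)
	}
	// batch-job prints first: the BestEffort Pod is the first eviction candidate.
}
```

If per-container inode accounting ever becomes available (the gap google/cadvisor#1422 was opened to close), the comparator could break ties within a QoS class by inode consumption, so a Pod that created large numbers of 0 byte files would be evicted ahead of others.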