From 508f111b607372bb810e27d0bd1ecbd7f041f0bd Mon Sep 17 00:00:00 2001
From: Priyanshu Ahlawat <84102724+PriyanshuAhlawat@users.noreply.github.com>
Date: Wed, 6 Apr 2022 05:48:56 +0530
Subject: [PATCH] Update resource-metrics-pipeline.md (#32467)

* Update resource-metrics-pipeline.md

* Update resource-metrics-pipeline.md

* Update content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md

Co-authored-by: Tim Bannister

Co-authored-by: Tim Bannister
---
 .../resource-metrics-pipeline.md              | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
index c2818940e2..12a692b2f4 100644
--- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
+++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
@@ -2,7 +2,7 @@
 reviewers:
 - fgrzadkowski
 - piosz
-title: Resource metrics pipeline 
+title: Resource metrics pipeline
 content_type: concept
 ---
 
@@ -77,7 +77,7 @@ The architecture components, from right to left in the figure, consist of the fo
 * [Metrics API](#metrics-api): Kubernetes API supporting access to CPU and memory used for
   workload autoscaling. To make this work in your cluster, you need an API extension server that
   provides the Metrics API.
-  
+
 {{< note >}}
 cAdvisor supports reading metrics from cgroups, which works with typical container runtimes on Linux.
 If you use a container runtime that uses another resource isolation mechanism, for example
@@ -85,14 +85,15 @@ The architecture components, from right to left in the figure, consist of the fo
 [CRI Container Metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md)
 in order for metrics to be available to the kubelet.
 {{< /note >}}
-  
+
 ## Metrics API
 
+{{< feature-state for_k8s_version="1.8" state="beta" >}}
 The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage
 for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to K8s
-autoscaler components. 
+autoscaler components.
 
 Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier
 reading:
@@ -201,7 +202,7 @@ Memory is reported as the working set, measured in bytes, at the instant the met
 
 In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under
 memory pressure. However, calculation of the working set varies by host OS, and generally makes
-heavy use of heuristics to produce an estimate. 
+heavy use of heuristics to produce an estimate.
 
 The Kubernetes model for a container's working set expects that the container runtime counts
 anonymous memory associated with the container in question. The working set metric typically also
@@ -264,4 +265,3 @@ curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary
 The summary API `/stats/summary` endpoint will be replaced by the `/metrics/resource` endpoint
 beginning with metrics-server 0.6.x.
 {{< /note >}}
-
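As an aside to this patch: the Metrics API that the changed page documents reports CPU as a Kubernetes quantity string in nanocores (for example `263442n`) and memory as a binary-suffixed quantity (for example `241468Ki`). A minimal sketch of decoding such quantity strings is below; the sample values are invented for illustration, and the helper names `parse_cpu` and `parse_memory` are not from the page or from any Kubernetes client library.

```python
# Sketch: decoding the resource quantity strings a Metrics API response carries.
# Sample values are illustrative, not taken from a real cluster.

def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity such as '263442n', '250m', or '2' to cores."""
    suffixes = {"n": 1e-9, "u": 1e-6, "m": 1e-3}
    if quantity and quantity[-1] in suffixes:
        return float(quantity[:-1]) * suffixes[quantity[-1]]
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Convert a memory quantity such as '241468Ki' or '512Mi' to bytes."""
    suffixes = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}
    for suffix, factor in suffixes.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)

# Shape of one NodeMetrics item, heavily trimmed; usage values invented.
node_metrics = {
    "metadata": {"name": "minikube"},
    "usage": {"cpu": "263442n", "memory": "241468Ki"},
}

cpu_cores = parse_cpu(node_metrics["usage"]["cpu"])
memory_bytes = parse_memory(node_metrics["usage"]["memory"])
print(f"{node_metrics['metadata']['name']}: {cpu_cores:.6f} cores, {memory_bytes} bytes")
```

This only covers the suffixes that typically appear in metrics-server output; the full Kubernetes quantity grammar admits more forms (decimal SI suffixes like `k` and `M`, exponents) that a real client library handles.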