Update resource-metrics-pipeline.md (#32467)
* Update resource-metrics-pipeline.md
* Update resource-metrics-pipeline.md
* Update content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md

Co-authored-by: Tim Bannister <tim@scalefactory.com>
parent 3cadb66eb8
commit 508f111b60
@@ -2,7 +2,7 @@
reviewers:
- fgrzadkowski
- piosz
title: Resource metrics pipeline
content_type: concept
---
@@ -77,7 +77,7 @@ The architecture components, from right to left in the figure, consist of the following:
* [Metrics API](#metrics-api): Kubernetes API supporting access to CPU and memory used for
workload autoscaling. To make this work in your cluster, you need an API extension server that
provides the Metrics API.
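
One way to check that such an extension API server is in place is to query the registered
APIService. This is a minimal sketch, assuming a typical metrics-server installation that
registers the `v1beta1.metrics.k8s.io` APIService:

```shell
# Assumes metrics-server (or another provider) registers v1beta1.metrics.k8s.io;
# the AVAILABLE column reports True once the Metrics API is being served.
kubectl get apiservices.apiregistration.k8s.io v1beta1.metrics.k8s.io
```
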
{{< note >}}
cAdvisor supports reading metrics from cgroups, which works with typical container runtimes on Linux.
If you use a container runtime that uses another resource isolation mechanism, for example
@@ -85,14 +85,15 @@ The architecture components, from right to left in the figure, consist of the following:
[CRI Container Metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md)
in order for metrics to be available to the kubelet.
{{< /note >}}

<!-- body -->

## Metrics API
{{< feature-state for_k8s_version="1.8" state="beta" >}}

The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage
for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to Kubernetes
autoscaler components.
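
These are the same metrics that back the `kubectl top` subcommands; as a quick sketch, assuming
metrics-server is already running in the cluster:

```shell
# Both commands read CPU and memory usage from the Metrics API.
kubectl top node
kubectl top pod --all-namespaces
```
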
Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier
reading:
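
A request of that shape (a sketch, assuming metrics-server is serving the
`metrics.k8s.io/v1beta1` API group) looks like:

```shell
# Fetch metrics for the minikube node from the Metrics API
# and pretty-print the JSON response with jq.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '.'
```
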
@@ -201,7 +202,7 @@ Memory is reported as the working set, measured in bytes, at the instant the metric was collected.
In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under
memory pressure. However, calculation of the working set varies by host OS, and generally makes
heavy use of heuristics to produce an estimate.

The Kubernetes model for a container's working set expects that the container runtime counts
anonymous memory associated with the container in question. The working set metric typically also
@@ -264,4 +265,3 @@ curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary
The summary API `/stats/summary` endpoint will be replaced by the `/metrics/resource` endpoint
beginning with metrics-server 0.6.x.
{{< /note >}}
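
For comparison, here is a sketch of reading the newer endpoint through the same API server proxy
used in the summary API example above, assuming `kubectl proxy` is listening on port 8080 and the
node is named `minikube`:

```shell
# The kubelet serves /metrics/resource in Prometheus text format;
# the node proxy subresource forwards this request to the kubelet.
curl http://localhost:8080/api/v1/nodes/minikube/proxy/metrics/resource
```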