From 65539f9e67b2399b1a4b6bf90b734b7890fb2520 Mon Sep 17 00:00:00 2001
From: David Ashpole
Date: Tue, 20 Jul 2021 12:13:22 -0700
Subject: [PATCH] add documentation for API Server tracing

---
 .../cluster-administration/system-traces.md | 66 +++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 content/en/docs/concepts/cluster-administration/system-traces.md

diff --git a/content/en/docs/concepts/cluster-administration/system-traces.md b/content/en/docs/concepts/cluster-administration/system-traces.md
new file mode 100644
index 0000000000..1f63b13588
--- /dev/null
+++ b/content/en/docs/concepts/cluster-administration/system-traces.md
@@ -0,0 +1,66 @@
---
title: Traces For Kubernetes System Components
reviewers:
- logicalhan
- lilic
content_type: concept
weight: 60
---

{{< feature-state for_k8s_version="v1.22" state="alpha" >}}

System component traces record the latency of and relationships between operations in the cluster.

Kubernetes components emit traces using the [OpenTelemetry Protocol](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp.md#opentelemetry-protocol-specification) with the gRPC exporter and can be collected and routed to tracing backends using an [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector).

## Trace Collection

For a complete guide to collecting traces and using the collector, see [Getting Started with the OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/). However, there are a few things to note that are specific to Kubernetes components.

By default, Kubernetes components export traces using the gRPC exporter for OTLP on the [IANA OpenTelemetry port](https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=opentelemetry), 4317. As an example, if the collector is running as a sidecar to a Kubernetes component, the following receiver configuration will collect spans and log them to standard output:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  # Replace this exporter with the exporter for your backend
  logging:
    logLevel: debug
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
```

## Component traces

### kube-apiserver traces

The kube-apiserver generates spans for incoming HTTP requests and for outgoing requests to webhooks, etcd, and re-entrant API requests. It propagates the [W3C Trace Context](https://www.w3.org/TR/trace-context/) with outgoing requests but does not make use of the trace context attached to incoming requests, because the kube-apiserver is often a public endpoint.

#### Enabling tracing in the kube-apiserver

To enable tracing, enable the `APIServerTracing` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver. In addition, provide the kube-apiserver with a tracing configuration file using `--tracing-config-file=<path-to-config>`. This is an example configuration that records spans for 1 in 10000 requests and uses the default OpenTelemetry endpoint:

```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
# default value
#endpoint: localhost:4317
samplingRatePerMillion: 100
```
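For reference, here is a minimal sketch of how these two flags could be wired into a kubeadm-style static Pod manifest for the kube-apiserver. Only the tracing-related fields are shown; the config file path `/etc/kubernetes/tracing-config.yaml`, the volume name, and the image tag are placeholders chosen for this example, not values required by Kubernetes:

```yaml
# Sketch: tracing-related excerpt of a kube-apiserver static Pod manifest,
# e.g. /etc/kubernetes/manifests/kube-apiserver.yaml on a kubeadm control plane.
# All other flags, mounts, and settings of a real manifest are omitted.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.22.0  # placeholder image tag
    command:
    - kube-apiserver
    # Turn on the alpha APIServerTracing feature gate.
    - --feature-gates=APIServerTracing=true
    # Point the kube-apiserver at the TracingConfiguration file shown above.
    - --tracing-config-file=/etc/kubernetes/tracing-config.yaml
    volumeMounts:
    # Make the tracing configuration file visible inside the container.
    - name: tracing-config
      mountPath: /etc/kubernetes/tracing-config.yaml
      readOnly: true
  volumes:
  - name: tracing-config
    hostPath:
      path: /etc/kubernetes/tracing-config.yaml
      type: File
```

With `endpoint` left commented out in the TracingConfiguration, the kube-apiserver sends spans to an OTLP receiver on `localhost:4317`, which matches the sidecar or host-local collector setup described under Trace Collection above.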
## Stability

Tracing instrumentation is still under active development and may change in a variety of ways, including span names, attached attributes, instrumented endpoints, and so on. Until this feature graduates to stable, there are no guarantees of backwards compatibility for tracing instrumentation.

## {{% heading "whatsnext" %}}

* Read about [Getting Started with the OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/)