Add cluster-level logging overview
parent c66c8b8df5
commit 2b386047e8
@@ -84,8 +84,11 @@ toc:
- title: Monitoring, Logging, and Debugging Containers
section:
- docs/user-guide/monitoring.md
- docs/getting-started-guides/logging.md
- docs/getting-started-guides/logging-elasticsearch.md
- title: Logging
section:
- docs/user-guide/logging/overview.md
- docs/user-guide/logging/stackdriver.md
- docs/user-guide/logging/elasticsearch.md
- docs/user-guide/getting-into-containers.md
- docs/user-guide/connecting-to-applications-proxy.md
- docs/user-guide/connecting-to-applications-port-forward.md
@@ -16,7 +16,6 @@ toc:
section:
- docs/user-guide/debugging-pods-and-replication-controllers.md
- docs/user-guide/introspection-and-debugging.md
- docs/user-guide/logging.md
- docs/user-guide/application-troubleshooting.md
- docs/admin/cluster-troubleshooting.md
- docs/user-guide/debugging-services.md
@@ -91,15 +91,8 @@ about containers in a central database, and provides a UI for browsing that data

#### Cluster-level Logging

[Container Logging](/docs/user-guide/monitoring) saves container logs
to a central log store with search/browsing interface. There are two
implementations:

* [Cluster-level logging to Google Cloud Logging](
/docs/user-guide/logging/#cluster-level-logging-to-google-cloud-logging)

* [Cluster-level Logging with Elasticsearch and Kibana](
/docs/getting-started-guides/logging-elasticsearch/)
A [Cluster-level logging](/docs/user-guide/logging/overview) mechanism is responsible for
saving container logs to a central log store with search/browsing interface.

## Node components
@@ -61,7 +61,8 @@ project](/docs/admin/salt).
* **DNS Integration with SkyDNS** ([dns.md](/docs/admin/dns)):
Resolving a DNS name directly to a Kubernetes service.

* **Logging** with [Kibana](/docs/user-guide/logging)
* [**Cluster-level logging**](/docs/user-guide/logging/overview)
Saving container logs to a central log store with search/browsing interface.

## Multi-tenant support
@@ -46,7 +46,7 @@ wget -q -O - https://get.k8s.io | bash

Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.

By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/getting-started-guides/logging), while `heapster` provides [monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) services.
By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/user-guide/logging/overview), while `heapster` provides [monitoring](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/README.md) services.

The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
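As a quick sanity check after the script finishes, you can list the system pods and look for the fluentd and heapster addons mentioned in the hunk above. The command is standard; the pod names, versions, and timings in the sample output are illustrative only and depend on your cluster size and Kubernetes version:

```shell
$ kubectl get pods --namespace=kube-system
NAME                                              READY     STATUS    RESTARTS   AGE
fluentd-cloud-logging-kubernetes-minion-group-x   1/1       Running   0          5m
heapster-v1.2.0-abcde                             1/1       Running   0          5m
kube-dns-v20-fghij                                3/3       Running   0          5m
```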
@@ -826,11 +826,9 @@ Notes for setting up each cluster service are given below:
* [Setup instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/)
* [Admin Guide](/docs/admin/dns/)
* Cluster-level Logging
* Multiple implementations with different storage backends and UIs.
* [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-elasticsearch/)
* [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/fluentd-gcp/).
* Both require running fluentd on each node.
* [User Guide](/docs/user-guide/logging/)
* [Cluster-level Logging Overview](/docs/user-guide/logging/overview)
* [Cluster-level Logging with Elasticsearch](/docs/user-guide/logging/elasticsearch)
* [Cluster-level Logging with Stackdriver Logging](/docs/user-guide/logging/stackdriver)
* Container Resource Monitoring
* [Setup instructions](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-monitoring/)
* GUI
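Since both addons in the hunk above work by running one fluentd agent per node, a simple way to verify a deployed addon is to list the fluentd pods along with the nodes they landed on. Pod names, ages, and addresses below are illustrative:

```shell
# Expect one fluentd agent pod per node (names and IPs are illustrative).
$ kubectl get pods --namespace=kube-system -o wide | grep fluentd
fluentd-cloud-logging-node-1   1/1   Running   0   10m   10.240.0.3   node-1
fluentd-cloud-logging-node-2   1/1   Running   0   10m   10.240.0.4   node-2
```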
@@ -172,7 +172,7 @@ $ kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```

See [Logging](/docs/user-guide/logging) for more information.
See [Logging Overview](/docs/user-guide/logging/overview) for more information.
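For parity with `docker logs -f`, `kubectl logs` can also stream output with the `-f` flag; for example, using the pod name from the snippet above:

```shell
# Stream (follow) the log output of a running pod, analogous to `docker logs -f`.
$ kubectl logs -f nginx-app-zibvs
```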

#### docker stop and docker rm

@@ -22,7 +22,7 @@ The following topics in the Kubernetes User Guide can help you run applications
1. [Managing deployments](/docs/user-guide/managing-deployments/)
1. [Application introspection and debugging](/docs/user-guide/introspection-and-debugging/)
1. [Using the Kubernetes web user interface](/docs/user-guide/ui/)
1. [Logging](/docs/user-guide/logging/)
1. [Logging](/docs/user-guide/logging/overview/)
1. [Monitoring](/docs/user-guide/monitoring/)
1. [Getting into containers via `exec`](/docs/user-guide/getting-into-containers/)
1. [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy/)
@@ -347,7 +347,7 @@ status:

Learn about additional debugging tools, including:

* [Logging](/docs/user-guide/logging)
* [Logging](/docs/user-guide/logging/overview)
* [Monitoring](/docs/user-guide/monitoring)
* [Getting into containers via `exec`](/docs/user-guide/getting-into-containers)
* [Connecting to containers via proxies](/docs/user-guide/connecting-to-applications-proxy)
@@ -1,40 +0,0 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Makefile for launching synthetic logging sources (any platform)
# and for reporting the forwarding rules for the
# Elasticsearch and Kibana pods for the GCE platform.
# For examples of how to observe the ingested logs please
# see the appropriate getting started guide e.g.
# Google Cloud Logging: http://kubernetes.io/docs/getting-started-guides/logging/
# With Elasticsearch and Kibana logging: http://kubernetes.io/docs/getting-started-guides/logging-elasticsearch/

.PHONY: up down logger-up logger-down logger10-up logger10-down

up: logger-up logger10-up

down: logger-down logger10-down

logger-up:
	kubectl create -f synthetic_0_25lps.yaml

logger-down:
	kubectl delete pod synthetic-logger-0.25lps-pod

logger10-up:
	kubectl create -f synthetic_10lps.yaml

logger10-down:
	kubectl delete pod synthetic-logger-10lps-pod
@@ -1,3 +0,0 @@
assignees:
- mikedanese
@@ -1,12 +0,0 @@
This directory contains two [pod](https://kubernetes.io/docs/user-guide/pods) specifications which can be used as synthetic
logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml)
describes a pod that just emits a log message once every 4 seconds. The pod specification in
[synthetic_10lps.yaml](synthetic_10lps.yaml)
describes a pod that just emits 10 log lines per second.

See [logging document](https://kubernetes.io/docs/user-guide/logging/) for more details about logging. To observe the ingested log lines when using Google Cloud Logging please see the getting
started instructions
at [Cluster Level Logging to Google Cloud Logging](https://kubernetes.io/docs/getting-started-guides/logging).
To observe the ingested log lines when using Elasticsearch and Kibana please see the getting
started instructions
at [Cluster Level Logging with Elasticsearch and Kibana](https://kubernetes.io/docs/getting-started-guides/logging-elasticsearch).
@@ -1,30 +0,0 @@
# This pod specification creates an instance of a synthetic logger. The logger
# is simply a program that writes out the hostname of the pod, a count which increments
# by one on each iteration (to help notice missing log enteries) and the date using
# a long format (RFC-3339) to nano-second precision. This program logs at a frequency
# of 0.25 lines per second. The shellscript program is given directly to bash as -c argument
# and could have been written out as:
#   i="0"
#   while true
#   do
#     echo -n "`hostname`: $i: "
#     date --rfc-3339 ns
#     sleep 4
#     i=$[$i+1]
#   done
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: synth-logging-source
  name: synthetic-logger-0.25lps-pod
spec:
  containers:
  - name: synth-lgr
    image: ubuntu:14.04
    args:
    - bash
    - -c
    - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
      4; i=$[$i+1]; done'
@@ -1,30 +0,0 @@
# This pod specification creates an instance of a synthetic logger. The logger
# is simply a program that writes out the hostname of the pod, a count which increments
# by one on each iteration (to help notice missing log enteries) and the date using
# a long format (RFC-3339) to nano-second precision. This program logs at a frequency
# of 0.25 lines per second. The shellscript program is given directly to bash as -c argument
# and could have been written out as:
#   i="0"
#   while true
#   do
#     echo -n "`hostname`: $i: "
#     date --rfc-3339 ns
#     sleep 4
#     i=$[$i+1]
#   done
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: synth-logging-source
  name: synthetic-logger-10lps-pod
spec:
  containers:
  - name: synth-lgr
    image: ubuntu:14.04
    args:
    - bash
    - -c
    - 'i="0"; while true; do echo -n "`hostname`: $i: "; date --rfc-3339 ns; sleep
      0.1; i=$[$i+1]; done'
@@ -1,80 +0,0 @@
---
assignees:
- mikedanese
title: Retrieving Logs
---

This page is designed to help you use logs to troubleshoot issues with your Kubernetes solution.

## Logging by Kubernetes Components

Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/logging.md).

## Examining the logs of running containers

The logs of a running container may be fetched using the command `kubectl logs`. For example, given
this pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml), which has a container which writes out some text to standard
output every second. (You can find different pod specifications [here](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/logging-demo/).)

{% include code.html language="yaml" file="counter-pod.yaml" k8slink="/examples/blog-logging/counter-pod.yaml" %}

we can run the pod:

```shell
$ kubectl create -f ./counter-pod.yaml
pods/counter
```

and then fetch the logs:

```shell
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
2: Tue Jun 2 21:37:33 UTC 2015
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
```

If a pod has more than one container then you need to specify which container's log files should
be fetched e.g.

```shell
$ kubectl logs kube-dns-v3-7r1l9 etcd
2015/06/23 00:43:10 etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2015/06/23 00:43:10 etcdserver: compacted log at index 30003
2015/06/23 00:43:10 etcdserver: saved snapshot at index 30003
2015/06/23 02:05:42 etcdserver: start to snapshot (applied: 40004, lastsnap: 30003)
2015/06/23 02:05:42 etcdserver: compacted log at index 40004
2015/06/23 02:05:42 etcdserver: saved snapshot at index 40004
2015/06/23 03:28:31 etcdserver: start to snapshot (applied: 50005, lastsnap: 40004)
2015/06/23 03:28:31 etcdserver: compacted log at index 50005
2015/06/23 03:28:31 etcdserver: saved snapshot at index 50005
2015/06/23 03:28:56 filePurge: successfully removed file default.etcd/member/wal/0000000000000000-0000000000000000.wal
2015/06/23 04:51:03 etcdserver: start to snapshot (applied: 60006, lastsnap: 50005)
2015/06/23 04:51:03 etcdserver: compacted log at index 60006
2015/06/23 04:51:03 etcdserver: saved snapshot at index 60006
...
```

## Cluster level logging to Google Cloud Logging

The getting started guide [Cluster Level Logging to Google Cloud Logging](/docs/getting-started-guides/logging)
explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/)
and shows how to query the ingested logs.

## Cluster level logging with Elasticsearch and Kibana

The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](/docs/getting-started-guides/logging-elasticsearch)
describes how to ingest cluster level logs into Elasticsearch and view them using Kibana.

## Ingesting Application Log Files

Cluster level logging only collects the standard output and standard error output of the applications
running in containers. The guide [Collecting log files from within containers with Fluentd and sending them to the Google Cloud Logging service](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud logging.

## Known issues

Kubernetes does log rotation for Kubernetes components and docker containers. The command `kubectl logs` currently only read the latest logs, not all historical ones.
@@ -1,18 +1,18 @@
---
assignees:
- lavalamp
- satnam6502
- crassirostris
- piosz
title: Logging with Elasticsearch and Kibana
---

On the Google Compute Engine (GCE) platform, the default logging support targets
[Google Cloud Logging](https://cloud.google.com/logging/) as described in the
[Logging](/docs/getting-started-guides/logging) getting-started guide. Here we
describe how to set up a cluster to ingest logs into
[Elasticsearch](https://github.com/elastic/elasticsearch) and view
them using [Kibana](https://github.com/elastic/kibana) as an alternative to
Google Cloud Logging when running on GCE (note that this will not work as
written for Google Container Engine).
[Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
in [Logging With Stackdriver Logging](/docs/user-guide/logging/stackdriver).

This article describes how to set up a cluster to ingest logs into
[Elasticsearch](https://www.elastic.co/products/elasticsearch), and view
them using [Kibana](https://www.elastic.co/products/kibana), as an alternative to
Stackdriver Logging when running on GCE. Note that Elasticsearch and Kibana do not work with Kubernetes clusters hosted on Google Container Engine.

To use Elasticsearch and Kibana for cluster logging, you should set the
following environment variable as shown below when creating your cluster with
@@ -0,0 +1,107 @@
---
assignees:
- crassirostris
- piosz
title: Logging Overview
---

Application and system logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most widely adopted logging method for containerized applications is to write to the standard output and standard error streams.

However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers; this concept is called __cluster-level logging__. Cluster-level logging requires a separate back-end to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.

In this document, you can find:

* A basic demonstration of logging in Kubernetes using the standard output stream
* A detailed description of the node logging architecture in Kubernetes
* Guidance for implementing cluster-level logging in Kubernetes

The guidance for cluster-level logging assumes that a logging back-end is present inside or outside of your cluster. If you're not interested in having cluster-level logging, you might still find the description of how logs are stored and handled on the node to be useful.

## Basic logging in Kubernetes

In this section, you can see an example of basic logging in Kubernetes that outputs data to the standard output stream. This demonstration uses a [pod specification](/docs/user-guide/logging/counter-pod.yaml) with a container that writes some text to standard output once per second.

{% include code.html language="yaml" file="counter-pod.yaml" %}
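The included manifest is not rendered in this diff view. A minimal sketch of such a counter pod, written locally so the commands below can use it, might look like the following; the image and the exact loop are assumptions for illustration, not necessarily the contents of the real `counter-pod.yaml`:

```shell
# Sketch of a counter pod spec; image and loop details are assumptions.
cat <<'EOF' > counter-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    args: [bash, -c, 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
EOF
```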

To run this pod, use the following command:

```shell
$ kubectl create -f counter-pod.yaml
pod "counter" created
```

To fetch the logs, use the `kubectl logs` command, as follows:

```shell
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
2: Tue Jun 2 21:37:33 UTC 2015
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
...
```

You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with the `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/user-guide/kubectl/kubectl_logs) for more details.
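For example, using the counter pod above, the two variants look like this (the container name placeholder is only needed for multi-container pods):

```shell
# Logs from the previous, crashed instantiation of the container.
$ kubectl logs --previous counter
# In a multi-container pod, name the container explicitly.
$ kubectl logs <pod-name> -c <container-name>
```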

## Logging at the node level

![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)

Everything a containerized application writes to `stdout` and `stderr` is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in json format.

**Note:** The Docker json logging driver treats each line as a separate message. When using the Docker logging driver, there is no direct support for multi-line messages; if you need them, you have to handle them at the logging agent level or higher.
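For reference, a line produced by the json-file driver looks roughly like the following. The path and content are illustrative only; the actual file name embeds the container ID and lives on the node's filesystem:

```shell
# Illustrative only: inspect a container's json-file log on the node.
$ sudo tail -n 1 /var/lib/docker/containers/<container-id>/<container-id>-json.log
{"log":"0: Tue Jun 2 21:37:31 UTC 2015\n","stream":"stdout","time":"2015-06-02T21:37:31.000000000Z"}
```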

By default, if a container restarts, kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.

An important consideration in node-level logging is implementing log rotation, so that logs don't consume all available storage on the node. Kubernetes uses the [`logrotate`](http://www.linuxcommand.org/man_pages/logrotate8.html) tool to implement log rotation.

Kubernetes performs log rotation daily, or if the log file grows beyond 10MB in size. Each rotation belongs to a single container; if the container repeatedly fails or the pod is evicted, all previous rotations for the container are lost. By default, Kubernetes keeps up to five logging rotations per container.

The Kubernetes logging configuration differs depending on the node type. For example, you can find detailed information for GCI in the corresponding [configure helper](https://github.com/kubernetes/kubernetes/blob/{{page.githubbranch}}/cluster/gce/gci/configure-helper.sh#L96).

When you run [`kubectl logs`](/docs/user-guide/kubectl/kubectl_logs), as in the basic logging example, the kubelet on the node handles the request and reads directly from the log file, returning the contents in the response. Note that `kubectl logs` **only returns the last rotation**; you must manually extract prior rotations, if desired.

### System components logs

Kubernetes system components use a different logging mechanism than the application containers in pods. Components such as `kube-proxy` (among others) use the [glog](https://godoc.org/github.com/golang/glog) logging library. You can find the conventions for logging severity for those components in the [development docs on logging](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/logging.md).

System components write directly to log files in the `/var/log` directory in the node's host filesystem. Like container logs, system component logs are rotated daily and based on size. However, system component logs have a higher size retention: by default, they store 100MB.
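On a typical node you can see these files directly; the listing below is illustrative, and which files exist depends on the node's role and image (master components only appear on the master node):

```shell
# System component logs live under /var/log on the node (illustrative listing).
$ ls /var/log/kube*.log
/var/log/kube-proxy.log  /var/log/kubelet.log
```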

## Cluster-level logging architectures

While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider:

* You can use a node-level logging agent that runs on every node.
* You can include a dedicated sidecar container for logging in an application pod.
* You can push logs directly to a back-end from within an application.

### Using a node logging agent

![Using a node level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)

You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a back-end. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.

Because the logging agent must run on every node, it's common to implement it as either a DaemonSet replica, a manifest pod, or a dedicated native process on the node. However, the latter two approaches are deprecated and highly discouraged.

Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, since it creates only one agent per node and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.

Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) for use with Google Cloud Platform, and [Elasticsearch](/docs/user-guide/logging/elasticsearch). You can find more information and instructions in the dedicated documents. Both use [fluentd](http://www.fluentd.org/) with custom configuration as an agent on the node.
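A minimal sketch of a node-agent DaemonSet is shown below. The image, labels, and mount paths are assumptions for illustration only; they are not the packaged addon manifests linked above, which should be used for a real deployment:

```shell
# Sketch only: a fluentd-style node agent deployed as a DaemonSet.
cat <<'EOF' | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: logging-agent
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluentd:v0.12   # illustrative image; substitute a real agent image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
EOF
```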

### Using a sidecar container with the logging agent

![Using a sidecar container with the logging agent](/images/docs/user-guide/logging/logging-with-sidecar.png)

You can implement cluster-level logging by including a dedicated logging agent _for each application_ on your cluster. You can include this logging agent as a "sidecar" container in the pod spec for each application; the sidecar container should contain only the logging agent.

The concrete implementation of the logging agent, the interface between the agent and the application, and the interface between the logging agent and the logs back-end are completely up to you. For an example implementation, see the [fluentd sidecar container](https://github.com/kubernetes/contrib/tree/b70447aa59ea14468f4cd349760e45b6a0a9b15d/logging/fluentd-sidecar-gcp) for the Stackdriver logging backend.

**Note:** Using a sidecar container for logging may lead to significant resource consumption.
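A sketch of the sidecar pattern is shown below, assuming the application writes a log file into a shared `emptyDir` volume and a hypothetical agent image tails it. The image names and paths are placeholders; a real agent would also need its own configuration:

```shell
# Sketch only: application container plus a logging-agent sidecar sharing a volume.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: ubuntu:14.04
    args: [bash, -c, 'while true; do date >> /var/log/app/app.log; sleep 1; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-agent
    image: fluent/fluentd:v0.12   # hypothetical agent image; replace with a configured one
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}
EOF
```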

### Exposing logs directly from the application

![Exposing logs directly from the application](/images/docs/user-guide/logging/logging-from-application.png)

You can implement cluster-level logging by exposing or pushing logs directly from every application itself; however, the implementation for such a logging mechanism is outside the scope of Kubernetes.
@@ -1,14 +1,17 @@
---
assignees:
- lavalamp
- satnam6502
title: Logging
- crassirostris
- piosz
title: Logging with Stackdriver Logging
---

A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application which is composed of many pods which may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes **cluster level logging** services.
Before reading this page, it's recommended to familiarize yourself with the [overview of logging in Kubernetes](/docs/user-guide/logging/overview).

Cluster level logging for Kubernetes allows us to collect logs which persist beyond the lifetime of the pod's container images or the lifetime of the pod or even cluster. In this article we assume that a Kubernetes cluster has been created with cluster level logging support for sending logs to Google Cloud Logging. After a cluster has been created you will have a collection of system pods running in the `kube-system` namespace that support monitoring,
logging and DNS resolution for names of Kubernetes services:
This article assumes that you have created a Kubernetes cluster with cluster-level logging support for sending logs to Stackdriver Logging. You can do this either by selecting the "Enable Stackdriver Logging" checkbox in the create cluster dialog in [GKE](https://cloud.google.com/container-engine/), or by setting the `KUBE_LOGGING_DESTINATION` flag to `gcp` when manually starting a cluster using `kube-up.sh`.
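For the `kube-up.sh` route, the flag mentioned above is an ordinary environment variable read by the cluster scripts; a typical invocation from a Kubernetes release checkout (node logging is usually enabled alongside it) looks like:

```shell
# Bring up a cluster with node-level logging shipped to Stackdriver Logging.
KUBE_ENABLE_NODE_LOGGING=true KUBE_LOGGING_DESTINATION=gcp ./cluster/kube-up.sh
```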

## Overview

After creation, your cluster has a collection of system pods running in the `kube-system` namespace that support monitoring, logging, and DNS resolution for Kubernetes service names. You can see these system pods by running the following command:

```shell
$ kubectl get pods --namespace=kube-system
@@ -25,15 +28,14 @@ Here is the same information in a picture which shows how the pods might be plac

![image](/images/blog-logging/diagrams/cloud-logging.png)

This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Stackdriver Logging. A pod which provides the
[cluster DNS service](/docs/admin/dns) runs on one of the nodes and a pod which provides monitoring support runs on another node.

To help explain how cluster level logging works let's start off with a synthetic log generator pod specification [counter-pod.yaml](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/blog-logging/counter-pod.yaml):
To help explain how cluster-level logging works, consider the following synthetic log generator pod specification [counter-pod.yaml](/docs/user-guide/logging/counter-pod.yaml):

{% include code.html language="yaml" file="counter-pod.yaml" k8slink="/examples/blog-logging/counter-pod.yaml" %}
{% include code.html language="yaml" file="counter-pod.yaml" %}

This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default
namespace.
This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default namespace.

```shell
$ kubectl create -f examples/blog-logging/counter-pod.yaml
@@ -54,7 +56,7 @@ One of the nodes is now running the counter pod:

![image](/images/blog-logging/diagrams/27gf-counter.png)

When the pod status changes to `Running` we can use the kubectl logs command to view the output of this counter pod.
When the pod status changes to `Running` we can use the `kubectl logs` command to view the output of this counter pod.

```shell
$ kubectl logs counter
@@ -79,7 +81,13 @@ root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
```

If, for any reason, the image in this pod is killed off and then restarted by Kubernetes, or the pod was evicted from the node, logs for the container are lost.

Try deleting the currently running counter container:

```shell
$ kubectl delete pod counter
@@ -108,15 +116,9 @@ $ kubectl logs counter
8: Tue Jun 2 21:51:48 UTC 2015
```

We've lost the log lines from the first invocation of the container in this pod! Ideally, we want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted we would still like to preserve all the log lines that were ever emitted by the containers in the pod. But don't fear, this is the functionality provided by cluster level logging in Kubernetes. When a cluster is created, the standard output and standard error output of each container can be ingested using a [Fluentd](http://www.fluentd.org/) agent running on each node into either [Google Cloud Logging](https://cloud.google.com/logging/docs/) or into Elasticsearch and viewed with Kibana.
As expected, the log lines from the first invocation of the container in this pod have been lost. However, you'll likely want to preserve all the log lines from each invocation of each container in the pod. Furthermore, even if the pod is restarted, you might still want to preserve all the log lines that were ever emitted by the containers in the pod. This is exactly the functionality provided by cluster-level logging in Kubernetes.

When a Kubernetes cluster is created with logging to Google Cloud Logging enabled, the system creates a pod called `fluentd-cloud-logging` on each node of the cluster to collect Docker container logs. These pods were shown at the start of this blog article in the response to the first get pods command.

This log collection pod has a specification which looks something like this:

{% include code.html language="yaml" file="fluentd-gcp.yaml" k8slink="/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml" %}

This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to a directory inside the container which has the same path. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from the logs directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
## Viewing logs

We can click on the Logs item under the Monitoring section of the Google Developer Console and select the logs for the counter container, which will be called kubernetes.counter_default_count. This identifies the name of the pod (counter), the namespace (default) and the name of the container (count) for which the log collection occurred. Using this name we can select just the logs for our counter container from the drop down menu:

@@ -128,7 +130,7 @@ When we view the logs in the Developer Console we observe the logs for both invo

Note the first container counted to 108 and then it was terminated. When the next container image restarted the counting process resumed from 0. Similarly if we deleted the pod and restarted it we would capture the logs for all instances of the containers in the pod whenever the pod was running.

Logs ingested into Google Cloud Logging may be exported to various other destinations including [Google Cloud Storage](https://cloud.google.com/storage/) buckets and [BigQuery](https://cloud.google.com/bigquery/). Use the Exports tab in the Cloud Logging console to specify where logs should be streamed to. You can also follow this link to the
Logs ingested into Stackdriver Logging may be exported to various other destinations including [Google Cloud Storage](https://cloud.google.com/storage/) buckets and [BigQuery](https://cloud.google.com/bigquery/). Use the Exports tab in the Cloud Logging console to specify where logs should be streamed to. You can also follow this link to the
[settings tab](https://pantheon.corp.google.com/project/_/logs/settings).

We could query the ingested logs from BigQuery using the SQL query which reports the counter log lines showing the newest lines first:

@@ -165,6 +167,6 @@ $ cat 21\:00\:00_21\:59\:59_S0.json | jq '.structPayload.log'
...
```

This page has touched briefly on the underlying mechanisms that support gathering cluster level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) and sending them to the Google Cloud Logging service.
This page has touched briefly on the underlying mechanisms that support gathering cluster-level logs on a Kubernetes deployment. The approach here only works for gathering the standard output and standard error output of the processes running in the pod's containers. To gather other logs that are stored in files one can use a sidecar container to gather the required files as described at the page [Collecting log files within containers with Fluentd](https://github.com/kubernetes/contrib/blob/master/logging/fluentd-sidecar-gcp/README.md) and sending them to the Stackdriver Logging service.

Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html)
Some of the material in this section also appears in the blog article [Cluster-level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html)
@@ -220,7 +220,7 @@ The specification of a pre-stop hook is similar to that of probes, but without t

## Termination message

In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be display using [`kubectl`](/docs/user-guide/kubectl/) or the [UI](/docs/user-guide/ui), in addition to general [log collection](/docs/user-guide/logging). It is possible to specify a `terminationMessagePath` where a container will write its 'death rattle'?, such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.
In order to achieve a reasonably high level of availability, especially for actively developed applications, it's important to debug failures quickly. Kubernetes can speed debugging by surfacing causes of fatal errors in a way that can be displayed using [`kubectl`](/docs/user-guide/kubectl/) or the [UI](/docs/user-guide/ui), in addition to general [log collection](/docs/user-guide/logging/overview). It is possible to specify a `terminationMessagePath` where a container will write its 'death rattle', such as assertion failure messages, stack traces, exceptions, and so on. The default path is `/dev/termination-log`.

Here is a toy example:

@@ -76,7 +76,7 @@ Kubernetes satisfies a number of common needs of applications running in produc
* [load balancing](/docs/user-guide/services/),
* [rolling updates](/docs/user-guide/update-demo/),
* [resource monitoring](/docs/user-guide/monitoring/),
* [log access and ingestion](/docs/user-guide/logging/),
* [log access and ingestion](/docs/user-guide/logging/overview/),
* [support for introspection and debugging](/docs/user-guide/introspection-and-debugging/), and
* [identity and authorization](/docs/admin/authorization/).
(Four new binary image files are added by this commit — 19 KiB, 20 KiB, 37 KiB, and 25 KiB — presumably the logging architecture diagrams referenced from docs/user-guide/logging/overview.md; they are not shown in this diff.)
@@ -301,10 +301,6 @@ func TestExampleObjectSchemas(t *testing.T) {
		"namespace": {&api.Namespace{}},
		"valid-pod": {&api.Pod{}},
	},
	"../docs/user-guide/logging-demo": {
		"synthetic_0_25lps": {&api.Pod{}},
		"synthetic_10lps": {&api.Pod{}},
	},
	"../docs/user-guide/node-selection": {
		"pod": {&api.Pod{}},
		"pod-with-node-affinity": {&api.Pod{}},