Merge pull request #23016 from tengqm/links-tasks-1
Tune links in tasks section (1/2)
commit d9e22e3838
@@ -12,4 +12,4 @@ show how to do individual tasks. A task page shows how to do a
single thing, typically by giving a short sequence of steps.

If you would like to write a task page, see
[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/).
[Creating a Documentation Pull Request](/docs/contribute/new-content/open-a-pr/).
@@ -375,7 +375,7 @@ different audit policies.

### Use fluentd to collect and distribute audit events from log file

[Fluentd](http://www.fluentd.org/) is an open source data collector for unified logging layer.
[Fluentd](https://www.fluentd.org/) is an open source data collector for unified logging layer.
In this example, we will use fluentd to split audit events by different namespaces.

{{< note >}}
@@ -503,7 +503,7 @@ different users into different files.
bin/logstash -f /etc/logstash/config --path.settings /etc/logstash/
```

1. create a [kubeconfig file](/docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/) for kube-apiserver webhook audit backend
1. create a [kubeconfig file](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) for kube-apiserver webhook audit backend

cat <<EOF > /etc/kubernetes/audit-webhook-kubeconfig
apiVersion: v1
@@ -537,9 +537,5 @@ plugin which supports full-text search and analytics.

## {{% heading "whatsnext" %}}

Visit [Auditing with Falco](/docs/tasks/debug-application-cluster/falco).
Learn about [Mutating webhook auditing annotations](/docs/reference/access-authn-authz/extensible-admission-controllers/#mutating-webhook-auditing-annotations).
@@ -10,10 +10,7 @@ content_type: concept

This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
This is *not* a guide for people who want to debug their cluster. For that you should check out
[this guide](/docs/admin/cluster-troubleshooting).
[this guide](/docs/tasks/debug-application-cluster/debug-cluster).

<!-- body -->
@@ -46,7 +43,8 @@ there are insufficient resources of one type or another that prevent scheduling
your pod. Reasons include:

* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster, in this case
you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See [Compute Resources document](/docs/user-guide/compute-resources/#my-pods-are-pending-with-event-message-failedscheduling) for more information.
you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See
[Compute Resources document](/docs/concepts/configuration/manage-resources-containers/) for more information.

* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be
scheduled. In most cases, `hostPort` is unnecessary, try using a Service object to expose your Pod. If you do require
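The resource-exhaustion case in the hunk above boils down to a feasibility check: a pod stays Pending when no node has enough unreserved CPU or memory left to cover its requests. A minimal sketch of that check, in illustrative Python (names and units are assumptions, not the real scheduler):

```python
# Illustrative only: a pod fits on a node when, for every resource it
# requests, the requests already placed on the node plus the new request
# do not exceed the node's allocatable amount.
def fits(node_allocatable, existing_requests, pod_request):
    """Return True if the pod's requests fit in what the node has left."""
    for resource, requested in pod_request.items():
        used = sum(p.get(resource, 0) for p in existing_requests)
        if used + requested > node_allocatable.get(resource, 0):
            return False
    return True

node = {"cpu": 2000, "memory": 4 * 1024**3}       # 2 CPU in millicores, 4Gi
running = [{"cpu": 1500, "memory": 2 * 1024**3}]  # requests already placed
print(fits(node, running, {"cpu": 600, "memory": 1024**3}))  # False: CPU exhausted
print(fits(node, running, {"cpu": 400, "memory": 1024**3}))  # True
```

When the first call fails, the remedies are exactly the ones the bullet lists: delete Pods, lower requests, or add nodes.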
@@ -161,13 +159,13 @@ check:
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP.
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.

## {{% heading "whatsnext" %}}

If none of the above solves your problem, follow the instructions in
[Debugging Service document](/docs/tasks/debug-application-cluster/debug-service/)
to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are
actually serving; you have DNS working, iptables rules installed, and kube-proxy
does not seem to be misbehaving.

If none of the above solves your problem, follow the instructions in [Debugging Service document](/docs/user-guide/debugging-services) to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving.

You may also visit [troubleshooting document](/docs/troubleshooting/) for more information.
You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.
@@ -10,10 +10,7 @@ content_type: concept
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](/docs/tasks/debug-application-cluster/debug-application) for tips on application debugging.
You may also visit [troubleshooting document](/docs/troubleshooting/) for more information.
You may also visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/) for more information.

<!-- body -->
@@ -15,10 +15,8 @@ content_type: task

This page shows how to investigate problems related to the execution of
Init Containers. The example command lines below refer to the Pod as
`<pod-name>` and the Init Containers as `<init-container-1>` and
`<init-container-2>`.

`<pod-name>` and the Init Containers as `<init-container-1>` and
`<init-container-2>`.

## {{% heading "prerequisites" %}}
@@ -26,11 +24,9 @@ Init Containers. The example command lines below refer to the Pod as
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

* You should be familiar with the basics of
[Init Containers](/docs/concepts/abstractions/init-containers/).
[Init Containers](/docs/concepts/workloads/pods/init-containers/).
* You should have [Configured an Init Container](/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container/).

<!-- steps -->

## Checking the status of Init Containers
@@ -9,8 +9,6 @@ content_type: task

This page shows how to debug Pods and ReplicationControllers.

## {{% heading "prerequisites" %}}
@@ -20,8 +18,6 @@ This page shows how to debug Pods and ReplicationControllers.
{{< glossary_tooltip text="Pods" term_id="pod" >}} and with
Pods' [lifecycles](/docs/concepts/workloads/pods/pod-lifecycle/).

<!-- steps -->

## Debugging Pods
@@ -51,9 +47,9 @@ can not schedule your pod. Reasons include:
You may have exhausted the supply of CPU or Memory in your cluster. In this
case you can try several things:

* [Add more nodes](/docs/admin/cluster-management/#resizing-a-cluster) to the cluster.
* [Add more nodes](/docs/tasks/administer-cluster/cluster-management/#resizing-a-cluster) to the cluster.

* [Terminate unneeded pods](/docs/user-guide/pods/single-container/#deleting_a_pod)
* [Terminate unneeded pods](/docs/concepts/workloads/pods/#pod-termination)
to make room for pending pods.

* Check that the pod is not larger than your nodes. For example, if all
@@ -13,9 +13,6 @@ Deployment (or other workload controller) and created a Service, but you
get no response when you try to access it. This document will hopefully help
you to figure out what's going wrong.

<!-- body -->

## Running commands in a Pod
@@ -658,7 +655,7 @@ This might sound unlikely, but it does happen and it is supposed to work.
This can happen when the network is not properly configured for "hairpin"
traffic, usually when `kube-proxy` is running in `iptables` mode and Pods
are connected with bridge network. The `Kubelet` exposes a `hairpin-mode`
[flag](/docs/admin/kubelet/) that allows endpoints of a Service to loadbalance
[flag](/docs/reference/command-line-tools-reference/kubelet/) that allows endpoints of a Service to loadbalance
back to themselves if they try to access their own Service VIP. The
`hairpin-mode` flag must either be set to `hairpin-veth` or
`promiscuous-bridge`.
@@ -724,15 +721,13 @@ Service is not working. Please let us know what is going on, so we can help
investigate!

Contact us on
[Slack](/docs/troubleshooting/#slack) or
[Slack](/docs/tasks/debug-application-cluster/troubleshooting/#slack) or
[Forum](https://discuss.kubernetes.io) or
[GitHub](https://github.com/kubernetes/kubernetes).

## {{% heading "whatsnext" %}}

Visit [troubleshooting document](/docs/troubleshooting/) for more information.
Visit [troubleshooting document](/docs/tasks/debug-application-cluster/troubleshooting/)
for more information.
@@ -12,19 +12,13 @@ content_type: task
---

<!-- overview -->

This task shows you how to debug a StatefulSet.

## {{% heading "prerequisites" %}}

* You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster.
* You should have a StatefulSet running that you want to investigate.

<!-- steps -->

## Debugging a StatefulSet
@@ -37,18 +31,12 @@ kubectl get pods -l app=myapp
```

If you find that any Pods listed are in `Unknown` or `Terminating` state for an extended period of time,
refer to the [Deleting StatefulSet Pods](/docs/tasks/manage-stateful-set/delete-pods/) task for
refer to the [Deleting StatefulSet Pods](/docs/tasks/run-application/delete-stateful-set/) task for
instructions on how to deal with them.
You can debug individual Pods in a StatefulSet using the
[Debugging Pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/) guide.

## {{% heading "whatsnext" %}}

Learn more about [debugging an init-container](/docs/tasks/debug-application-cluster/debug-init-containers/).
@@ -10,7 +10,7 @@ title: Logging Using Elasticsearch and Kibana

On the Google Compute Engine (GCE) platform, the default logging support targets
[Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
in the [Logging With Stackdriver Logging](/docs/user-guide/logging/stackdriver).
in the [Logging With Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver).

This article describes how to set up a cluster to ingest logs into
[Elasticsearch](https://www.elastic.co/products/elasticsearch) and view
@@ -90,7 +90,8 @@ Elasticsearch, and is part of a service named `kibana-logging`.

The Elasticsearch and Kibana services are both in the `kube-system` namespace
and are not directly exposed via a publicly reachable IP address. To reach them,
follow the instructions for [Accessing services running in a cluster](/docs/concepts/cluster-administration/access-cluster/#accessing-services-running-on-the-cluster).
follow the instructions for
[Accessing services running in a cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster).

If you try accessing the `elasticsearch-logging` service in your browser, you'll
see a status page that looks something like this:
@@ -102,7 +103,7 @@ like. See [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasti
for more details on how to do so.

Alternatively, you can view your cluster's logs using Kibana (again using the
[instructions for accessing a service running in the cluster](/docs/user-guide/accessing-the-cluster/#accessing-services-running-on-the-cluster)).
[instructions for accessing a service running in the cluster](/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)).
The first time you visit the Kibana URL you will be presented with a page that
asks you to configure your view of the ingested logs. Select the option for
timeseries values and select `@timestamp`. On the following page select the
@@ -317,8 +317,8 @@ After some time, Stackdriver Logging agent pods will be restarted with the new c
### Changing fluentd parameters

Fluentd configuration is stored in the `ConfigMap` object. It is effectively a set of configuration
files that are merged together. You can learn about fluentd configuration on the [official
site](http://docs.fluentd.org).
files that are merged together. You can learn about fluentd configuration on the
[official site](https://docs.fluentd.org).

Imagine you want to add a new parsing logic to the configuration, so that fluentd can understand
default Python logging format. An appropriate fluentd filter looks similar to this:
@@ -356,7 +356,7 @@ using [guide above](#changing-daemonset-parameters).
### Adding fluentd plugins

Fluentd is written in Ruby and allows to extend its capabilities using
[plugins](http://www.fluentd.org/plugins). If you want to use a plugin, which is not included
[plugins](https://www.fluentd.org/plugins). If you want to use a plugin, which is not included
in the default Stackdriver Logging container image, you have to build a custom image. Imagine
you want to add Kafka sink for messages from a particular container for additional processing.
You can re-use the default [container image sources](https://git.k8s.io/contrib/fluentd/fluentd-gcp-image)
@@ -13,9 +13,6 @@ are available in Kubernetes through the Metrics API. These metrics can be either
by user, for example by using `kubectl top` command, or used by a controller in the cluster, e.g.
Horizontal Pod Autoscaler, to make decisions.

<!-- body -->

## The Metrics API
@@ -41,11 +38,19 @@ The API requires metrics server to be deployed in the cluster. Otherwise it will

### CPU

CPU is reported as the average usage, in [CPU cores](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu), over a period of time. This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The kubelet chooses the window for the rate calculation.
CPU is reported as the average usage, in
[CPU cores](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu),
over a period of time. This value is derived by taking a rate over a cumulative CPU counter
provided by the kernel (in both Linux and Windows kernels).
The kubelet chooses the window for the rate calculation.

### Memory

Memory is reported as the working set, in bytes, at the instant the metric was collected. In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate. It includes all anonymous (non-file-backed) memory since kubernetes does not support swap. The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
Memory is reported as the working set, in bytes, at the instant the metric was collected.
In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure.
However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
It includes all anonymous (non-file-backed) memory since kubernetes does not support swap.
The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.

## Metrics Server
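The CPU paragraph in the hunk above describes a rate over a cumulative counter. As an illustrative aside (this is not kubelet code, and the names are assumptions), the derivation is just the counter delta divided by the window length:

```python
# Sketch: average cores = delta of cumulative CPU usage (nanoseconds of
# CPU time consumed) divided by the sampling window (also in nanoseconds).
def cpu_cores_used(counter_start_ns, counter_end_ns, window_seconds):
    """Average cores used over the window, from two counter samples."""
    delta_ns = counter_end_ns - counter_start_ns
    return delta_ns / (window_seconds * 1e9)

# A container that consumed 15 CPU-seconds during a 30-second window
# averaged half a core.
print(cpu_cores_used(0, 15_000_000_000, 30))  # 0.5
```

The kernel only ever exposes the cumulative counter; choosing the window (and hence the smoothing) is what the kubelet does, per the text above.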
@@ -54,9 +59,12 @@ It is deployed by default in clusters created by `kube-up.sh` script
as a Deployment object. If you use a different Kubernetes setup mechanism you can deploy it using the provided
[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file.

Metric server collects metrics from the Summary API, exposed by [Kubelet](/docs/admin/kubelet/) on each node.
Metric server collects metrics from the Summary API, exposed by
[Kubelet](/docs/reference/command-line-tools-reference/kubelet/) on each node.

Metrics Server is registered with the main API server through
[Kubernetes aggregator](/docs/concepts/api-extension/apiserver-aggregation/).
[Kubernetes aggregator](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).

Learn more about the metrics server in
[the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).

Learn more about the metrics server in [the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
@@ -10,29 +10,32 @@ title: Tools for Monitoring Resources
To scale an application and provide a reliable service, you need to
understand how the application behaves when it is deployed. You can examine
application performance in a Kubernetes cluster by examining the containers,
[pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and
[pods](/docs/concepts/workloads/pods/),
[services](/docs/concepts/services-networking/service/), and
the characteristics of the overall cluster. Kubernetes provides detailed
information about an application's resource usage at each of these levels.
This information allows you to evaluate your application's performance and
where bottlenecks can be removed to improve overall performance.

<!-- body -->

In Kubernetes, application monitoring does not depend on a single monitoring solution. On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or [full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics.
In Kubernetes, application monitoring does not depend on a single monitoring solution.
On new clusters, you can use [resource metrics](#resource-metrics-pipeline) or
[full metrics](#full-metrics-pipeline) pipelines to collect monitoring statistics.

## Resource metrics pipeline

The resource metrics pipeline provides a limited set of metrics related to
cluster components such as the [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale) controller, as well as the `kubectl top` utility.
cluster components such as the
[Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/)
controller, as well as the `kubectl top` utility.
These metrics are collected by the lightweight, short-term, in-memory
[metrics-server](https://github.com/kubernetes-incubator/metrics-server) and
are exposed via the `metrics.k8s.io` API.

metrics-server discovers all nodes on the cluster and
queries each node's
[kubelet](/docs/reference/command-line-tools-reference/kubelet) for CPU and
[kubelet](/docs/reference/command-line-tools-reference/kubelet/) for CPU and
memory usage. The kubelet acts as a bridge between the Kubernetes master and
the nodes, managing the pods and containers running on a machine. The kubelet
translates each pod into its constituent containers and fetches individual
@@ -11,15 +11,14 @@ title: Troubleshooting
Sometimes things go wrong. This guide is aimed at making them right. It has
two sections:

* [Troubleshooting your application](/docs/tasks/debug-application-cluster/debug-application/) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
* [Troubleshooting your cluster](/docs/tasks/debug-application-cluster/debug-cluster/) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy.
* [Troubleshooting your application](/docs/tasks/debug-application-cluster/debug-application/) - Useful
for users who are deploying code into Kubernetes and wondering why it is not working.
* [Troubleshooting your cluster](/docs/tasks/debug-application-cluster/debug-cluster/) - Useful
for cluster administrators and people whose Kubernetes cluster is unhappy.

You should also check the known issues for the [release](https://github.com/kubernetes/kubernetes/releases)
you're using.

<!-- body -->

## Getting help
@@ -37,12 +36,12 @@ accomplish commonly used tasks, and [Tutorials](/docs/tutorials/) are more
comprehensive walkthroughs of real-world, industry-specific, or end-to-end
development scenarios. The [Reference](/docs/reference/) section provides
detailed documentation on the [Kubernetes API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
and command-line interfaces (CLIs), such as [`kubectl`](/docs/user-guide/kubectl-overview/).
and command-line interfaces (CLIs), such as [`kubectl`](/docs/reference/kubectl/overview/).

You may also find the Stack Overflow topics relevant:

* [Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes)
* [Google Kubernetes Engine](http://stackoverflow.com/questions/tagged/google-container-engine)
* [Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes)
* [Google Kubernetes Engine](https://stackoverflow.com/questions/tagged/google-container-engine)

## Help! My question isn't covered! I need help now!
@@ -50,15 +49,16 @@ You may also find the Stack Overflow topics relevant:

Someone else from the community may have already asked a similar question or may
be able to help with your problem. The Kubernetes team will also monitor
[posts tagged Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes).
If there aren't any existing questions that help, please [ask a new one](http://stackoverflow.com/questions/ask?tags=kubernetes)!
[posts tagged Kubernetes](https://stackoverflow.com/questions/tagged/kubernetes).
If there aren't any existing questions that help, please
[ask a new one](https://stackoverflow.com/questions/ask?tags=kubernetes)!

### Slack

The Kubernetes team hangs out on Slack in the `#kubernetes-users` channel. You
can participate in discussion with the Kubernetes team [here](https://kubernetes.slack.com).
Slack requires registration, but the Kubernetes team is open invitation to
anyone to register [here](http://slack.kubernetes.io). Feel free to come and ask
anyone to register [here](https://slack.kubernetes.io). Feel free to come and ask
any and all questions.

Once registered, browse the growing list of channels for various subjects of
@@ -1005,7 +1005,7 @@ template:

* [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
* [Imperative Management of Kubernetes Objects Using Configuration Files](/docs/tasks/manage-kubernetes-objects/imperative-config/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
@@ -167,7 +167,7 @@ kubectl create --edit -f /tmp/srv.yaml

* [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/)
* [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
@@ -150,7 +150,7 @@ template:

* [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
* [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
@@ -832,7 +832,7 @@ deployment.apps "dev-my-nginx" deleted

* [Kustomize](https://github.com/kubernetes-sigs/kustomize)
* [Kubectl Book](https://kubectl.docs.kubernetes.io)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
* [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl-commands/)
* [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
@@ -229,7 +229,7 @@ The `kubectl patch` command has a `type` parameter that you can set to one of th
</table>

For a comparison of JSON patch and JSON merge patch, see
[JSON Patch and JSON Merge Patch](http://erosb.github.io/post/json-patch-vs-merge-patch/).
[JSON Patch and JSON Merge Patch](https://erosb.github.io/post/json-patch-vs-merge-patch/).

The default value for the `type` parameter is `strategic`. So in the preceding exercise, you
did a strategic merge patch.
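For readers comparing the patch types mentioned in the hunk above: JSON merge patch (RFC 7386, the `merge` type) has semantics simple enough to sketch in a few lines. This is an illustrative aside, not code from the page being patched; the behavior matches the RFC (objects merge recursively, `null` deletes a member, any non-object value, including a list, replaces the target wholesale — which is why merge patch cannot append to a list):

```python
# Minimal JSON merge patch (RFC 7386) sketch.
def json_merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch                   # non-objects replace the target outright
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)      # null removes the member
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

spec = {"replicas": 2, "paused": True}
print(json_merge_patch(spec, {"replicas": 3, "paused": None}))  # {'replicas': 3}
```

Strategic merge patch, the Kubernetes default discussed above, layers list-merge rules from the API types on top of this; JSON patch (`json` type) instead takes an explicit list of operations.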
@@ -37,13 +37,23 @@ You can perform a graceful pod deletion with the following command:
kubectl delete pods <pod>
```

For the above to lead to graceful termination, the Pod **must not** specify a `pod.Spec.TerminationGracePeriodSeconds` of 0. The practice of setting a `pod.Spec.TerminationGracePeriodSeconds` of 0 seconds is unsafe and strongly discouraged for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod [shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination) before the kubelet deletes the name from the apiserver.
For the above to lead to graceful termination, the Pod **must not** specify a
`pod.Spec.TerminationGracePeriodSeconds` of 0. The practice of setting a
`pod.Spec.TerminationGracePeriodSeconds` of 0 seconds is unsafe and strongly discouraged
for StatefulSet Pods. Graceful deletion is safe and will ensure that the Pod
[shuts down gracefully](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
before the kubelet deletes the name from the apiserver.

Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a [timeout](/docs/admin/node/#node-condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable.
The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a
[timeout](/docs/concepts/architecture/nodes/#node-condition).
Pods may also enter these states when the user attempts graceful deletion of a Pod
on an unreachable Node.
The only ways in which a Pod in such a state can be removed from the apiserver are as follows:

* The Node object is deleted (either by you, or by the [Node Controller](/docs/admin/node)).<br/>
* The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.<br/>
* Force deletion of the Pod by the user.
* The Node object is deleted (either by you, or by the [Node Controller](/docs/concepts/architecture/nodes/)).
* The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.
* Force deletion of the Pod by the user.

The recommended best practice is to use the first or second approach. If a Node is confirmed to be dead (e.g. permanently disconnected from the network, powered down, etc), then delete the Node object. If the Node is suffering from a network partition, then try to resolve this or wait for it to resolve. When the partition heals, the kubelet will complete the deletion of the Pod and free up its name in the apiserver.
@@ -15,11 +15,9 @@ Horizontal Pod Autoscaler automatically scales the number of pods
in a replication controller, deployment, replica set or stateful set based on observed CPU utilization
(or, with beta support, on some other, application-provided metrics).

This document walks you through an example of enabling Horizontal Pod Autoscaler for the php-apache server. For more information on how Horizontal Pod Autoscaler behaves, see the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/).

This document walks you through an example of enabling Horizontal Pod Autoscaler for the php-apache server.
For more information on how Horizontal Pod Autoscaler behaves, see the
[Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/).

## {{% heading "prerequisites" %}}
@ -459,12 +457,12 @@ HorizontalPodAutoscaler.
|
|||
## Appendix: Quantities
|
||||
|
||||
All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using
|
||||
a special whole-number notation known in Kubernetes as a *quantity*. For example,
|
||||
a special whole-number notation known in Kubernetes as a
|
||||
{{< glossary_tooltip term_id="quantity" text="quantity">}}. For example,
|
||||
the quantity `10500m` would be written as `10.5` in decimal notation. The metrics APIs
|
||||
will return whole numbers without a suffix when possible, and will generally return
|
||||
quantities in milli-units otherwise. This means you might see your metric value fluctuate
|
||||
between `1` and `1500m`, or `1` and `1.5` when written in decimal notation. See the
[glossary entry on quantities](/docs/reference/glossary?core-object=true#term-quantity) for more information.
between `1` and `1500m`, or `1` and `1.5` when written in decimal notation.
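The milli-unit convention described here can be made concrete with a small converter; `quantity_to_float` is a hypothetical helper for illustration only, not part of any Kubernetes API:

```python
def quantity_to_float(q: str) -> float:
    """Convert a plain or milli-suffixed quantity string to decimal.

    Only the whole-number and milli ("m") forms returned by the metrics
    APIs are handled; this is an illustrative sketch, not a full parser
    for the Kubernetes quantity grammar.
    """
    if q.endswith("m"):
        return int(q[:-1]) / 1000.0
    return float(q)

print(quantity_to_float("10500m"))  # 10.5
print(quantity_to_float("1500m"))   # 1.5
print(quantity_to_float("1"))       # 1.0
```

This mirrors the fluctuation described above: the same underlying value may arrive as `1` or as `1500m` depending on whether a suffix-free whole number is possible.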
## Appendix: Other possible scenarios
@ -33,7 +33,7 @@ on general patterns for running stateful applications in Kubernetes.
* This tutorial assumes you are familiar with
[PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
and [StatefulSets](/docs/concepts/workloads/controllers/statefulset/),
as well as other core concepts like [Pods](/docs/concepts/workloads/pods/pod/),
as well as other core concepts like [Pods](/docs/concepts/workloads/pods/),
[Services](/docs/concepts/services-networking/service/), and
[ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/).
* Some familiarity with MySQL helps, but this tutorial aims to present
@ -297,7 +297,7 @@ running while you force a Pod out of the Ready state.
### Break the Readiness Probe
The [readiness probe](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes)
The [readiness probe](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes)
for the `mysql` container runs the command `mysql -h 127.0.0.1 -e 'SELECT 1'`
to make sure the server is up and able to execute queries.
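One way to force that probe to fail, in the spirit of the tutorial this hunk edits, is to hide the `mysql` client binary inside the container so the probe command can no longer run; the Pod name `mysql-2` here is an assumption for illustration:

```shell
# Make the readiness probe fail by renaming the mysql client binary
# that the probe command depends on.
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off

# Watch the Pod drop out of the Ready state as probes start failing.
kubectl get pod mysql-2

# Undo the change so the Pod can become Ready again.
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
```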
@ -8,14 +8,13 @@ content_type: task
<!-- overview -->
{{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}}
Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes cluster. Up to date information on this process can be found at the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md) repo.
Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes cluster.
Up to date information on this process can be found at the
[kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md) repo.
## {{% heading "prerequisites" %}}
* Understand the key concepts of [Service Catalog](/docs/concepts/service-catalog/).
* Understand the key concepts of [Service Catalog](/docs/concepts/extend-kubernetes/service-catalog/).
* Service Catalog requires a Kubernetes cluster running version 1.7 or higher.
* You must have a Kubernetes cluster with cluster DNS enabled.
* If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled.
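A Helm-based install of the kind described above might look like the following sketch; the chart repository URL, release name, and namespace are assumptions, so verify them against the kubernetes-sigs/service-catalog instructions linked earlier:

```shell
# Add the Service Catalog chart repository (URL is an assumption;
# check the upstream install docs for the current location).
helm repo add svc-cat https://kubernetes-sigs.github.io/service-catalog
helm repo update

# Install Service Catalog into its own namespace.
kubectl create namespace catalog
helm install catalog svc-cat/catalog --namespace catalog
```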
@ -19,7 +19,7 @@ Service Catalog itself can work with any kind of managed service, not just Googl
## {{% heading "prerequisites" %}}
* Understand the key concepts of [Service Catalog](/docs/concepts/service-catalog/).
* Understand the key concepts of [Service Catalog](/docs/concepts/extend-kubernetes/service-catalog/).
* Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`.
* Install the [cfssl](https://github.com/cloudflare/cfssl) tool needed for generating SSL artifacts.
* Service Catalog requires Kubernetes version 1.7+.
@ -11,13 +11,18 @@ card:
---
<!-- overview -->
The Kubernetes command-line tool, [kubectl](/docs/user-guide/kubectl/), allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete list of kubectl operations, see [Overview of kubectl](/docs/reference/kubectl/overview/).
The Kubernetes command-line tool, [kubectl](/docs/reference/kubectl/kubectl/), allows
you to run commands against Kubernetes clusters.
You can use kubectl to deploy applications, inspect and manage cluster resources,
and view logs. For a complete list of kubectl operations, see
[Overview of kubectl](/docs/reference/kubectl/overview/).
## {{% heading "prerequisites" %}}
You must use a kubectl version that is within one minor version difference of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl helps avoid unforeseen issues.
You must use a kubectl version that is within one minor version difference of your cluster.
For example, a v1.2 client should work with v1.1, v1.2, and v1.3 master.
Using the latest version of kubectl helps avoid unforeseen issues.
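The one-minor-version skew rule described here can be sketched as a small check; `within_skew` is a hypothetical helper written for illustration, not part of kubectl:

```python
def within_skew(client: str, server: str, max_skew: int = 1) -> bool:
    """Return True if client and server versions (e.g. "1.2") have the
    same major version and differ by at most max_skew minor versions."""
    c_major, c_minor = (int(p) for p in client.split("."))
    s_major, s_minor = (int(p) for p in server.split("."))
    return c_major == s_major and abs(c_minor - s_minor) <= max_skew

# A v1.2 client works with v1.1, v1.2, and v1.3 masters, but not v1.4.
print(within_skew("1.2", "1.1"))  # True
print(within_skew("1.2", "1.3"))  # True
print(within_skew("1.2", "1.4"))  # False
```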
<!-- steps -->