Merge pull request #41624 from my-git9/local-debugging2

Tweak line wrappings in local-debugging.md debug-pods.md
Kubernetes Prow Robot 2023-06-20 05:10:22 -07:00 committed by GitHub
commit 5f7f496c4f
2 changed files with 80 additions and 43 deletions

debug-pods.md

@@ -9,16 +9,16 @@ weight: 10
<!-- overview -->
This guide is to help users debug applications that are deployed into Kubernetes
and not behaving correctly. This is *not* a guide for people who want to debug their cluster.
For that you should check out [this guide](/docs/tasks/debug/debug-cluster).
<!-- body -->
## Diagnosing the problem
The first step in troubleshooting is triage. What is the problem?
Is it your Pods, your Replication Controller or your Service?
* [Debugging Pods](#debugging-pods)
* [Debugging Replication Controllers](#debugging-replication-controllers)
@@ -26,36 +26,43 @@ your Service?
### Debugging Pods
The first step in debugging a Pod is taking a look at it. Check the current
state of the Pod and recent events with the following command:
```shell
kubectl describe pods ${POD_NAME}
```
Look at the state of the containers in the pod. Are they all `Running`?
Have there been recent restarts?
Continue debugging depending on the state of the pods.
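As a quick complement to `kubectl describe`, you can check phase, readiness, and
restart counts at a glance (a minimal sketch, assuming the Pod is in your current namespace):

```shell
# Show status and restart counts for the Pod.
kubectl get pod ${POD_NAME} -o wide
```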
#### My pod stays pending
If a Pod is stuck in `Pending`, it means that it cannot be scheduled onto a node.
Generally this is because there are insufficient resources of one type or another
that prevent scheduling. Look at the output of the `kubectl describe ...` command above.
There should be messages from the scheduler about why it cannot schedule your Pod.
Reasons include:
* **You don't have enough resources**: You may have exhausted the supply of CPU
or Memory in your cluster; in this case you need to delete Pods, adjust resource
requests, or add new nodes to your cluster. See the [Compute Resources document](/docs/concepts/configuration/manage-resources-containers/)
for more information.
* **You are using `hostPort`**: When you bind a Pod to a `hostPort`, there are a
limited number of places that Pod can be scheduled. In most cases, `hostPort`
is unnecessary; try using a Service object to expose your Pod. If you do require
`hostPort`, then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
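For the resource-exhaustion case above, one way to see what is left on your nodes
is to compare each node's allocatable capacity with the requests of the Pods already
scheduled on it (a sketch; `<node-name>` is a placeholder for any name returned by
`kubectl get nodes`):

```shell
# List the nodes, then inspect one to compare its allocatable capacity
# with the resources already requested by scheduled Pods.
kubectl get nodes
kubectl describe node <node-name>
```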
#### My pod stays waiting
If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node,
but it can't run on that machine. Again, the information from `kubectl describe ...`
should be informative. The most common cause of `Waiting` pods is a failure to pull the image.
There are three things to check:
* Make sure that you have the name of the image correct.
* Have you pushed the image to the registry?
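For example, a manual pull from a machine that can reach your registry is a quick
way to rule out image problems (a sketch; substitute the exact image reference from
your Pod spec):

```shell
# Try pulling the image exactly as it is written in the Pod spec.
docker pull <registry>/<image>:<tag>
```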
@@ -64,8 +71,9 @@ Again, the information from `kubectl describe ...` should be informative.
#### My pod is crashing or otherwise unhealthy
Once your pod has been scheduled, the methods described in
[Debug Running Pods](/docs/tasks/debug/debug-application/debug-running-pod/)
are available for debugging.
#### My pod is running but not doing what I told it to do
@@ -92,25 +100,27 @@ The next thing to check is whether the pod on the apiserver
matches the pod you meant to create (e.g. in a yaml file on your local machine).
For example, run `kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml` and then
manually compare the original pod description, `mypod.yaml` with the one you got
back from apiserver, `mypod-on-apiserver.yaml`. There will typically be some
lines on the "apiserver" version that are not on the original version. This is
expected. However, if there are lines on the original that are not on the apiserver
version, then this may indicate a problem with your pod spec.
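Put together, the comparison could look like this (assuming your local manifest is
`mypod.yaml`, as above):

```shell
# Save the Pod as the API server sees it, then diff it against the local manifest.
kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml
diff mypod.yaml mypod-on-apiserver.yaml
```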
### Debugging Replication Controllers
Replication controllers are fairly straightforward. They can either create Pods or they can't.
If they can't create pods, then please refer to the
[instructions above](#debugging-pods) to debug your pods.
You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events
related to the replication controller.
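For example (a sketch; `${CONTROLLER_NAME}` is the name of your replication controller):

```shell
# Check desired vs. current replicas, then look at recent events.
kubectl get rc ${CONTROLLER_NAME}
kubectl describe rc ${CONTROLLER_NAME}
```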
### Debugging Services
Services provide load balancing across a set of pods. There are several common problems that can make Services
not work properly. The following instructions should help debug Service problems.
First, verify that there are endpoints for the service. For every Service object,
the apiserver makes an `endpoints` resource available.
You can view this resource with:
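For example, with `${SERVICE_NAME}` standing in for the name of your Service:

```shell
kubectl get endpoints ${SERVICE_NAME}
```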
@@ -124,8 +134,8 @@ IP addresses in the Service's endpoints.
#### My service is missing endpoints
If you are missing endpoints, try listing pods using the labels that the Service uses.
Imagine that you have a Service where the labels are:
```yaml
...
```
@@ -141,7 +151,7 @@ You can use:
```shell
kubectl get pods --selector=name=nginx,type=frontend
```
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
Verify that the pod's `containerPort` matches up with the Service's `targetPort`.
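One way to compare the two is to print both values with JSONPath output (a sketch;
substitute your own Service and Pod names):

```shell
# Print the Service's targetPort(s) and the Pod's containerPort(s) side by side.
kubectl get service ${SERVICE_NAME} -o jsonpath='{.spec.ports[*].targetPort}{"\n"}'
kubectl get pod ${POD_NAME} -o jsonpath='{.spec.containers[*].ports[*].containerPort}{"\n"}'
```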
#### Network traffic is not forwarded
@@ -157,4 +167,3 @@ actually serving; you have DNS working, iptables rules installed, and kube-proxy
does not seem to be misbehaving.
You may also visit [troubleshooting document](/docs/tasks/debug/) for more information.

local-debugging.md

@@ -7,11 +7,20 @@ content_type: task
{{% thirdparty-content %}}
Kubernetes applications usually consist of multiple, separate services,
each running in its own container. Developing and debugging these services
on a remote Kubernetes cluster can be cumbersome, requiring you to
[get a shell on a running container](/docs/tasks/debug/debug-application/get-shell-running-container/)
in order to run debugging tools.
`telepresence` is a tool to ease the process of developing and debugging
services locally while proxying the service to a remote Kubernetes cluster.
Using `telepresence` allows you to use custom tools, such as a debugger and
IDE, for a local service and provides the service full access to ConfigMap,
secrets, and the services running on the remote cluster.
This document describes using `telepresence` to develop and debug services
running on a remote cluster locally.
## {{% heading "prerequisites" %}}
@@ -24,7 +33,8 @@ This document describes using `telepresence` to develop and debug services
## Connecting your local machine to a remote Kubernetes cluster
After installing `telepresence`, run `telepresence connect` to launch
its Daemon and connect your local workstation to the cluster.
```
$ telepresence connect
```
@@ -38,9 +48,14 @@ You can curl services using the Kubernetes syntax e.g. `curl -ik https://kuberne
## Developing or debugging an existing service
When developing an application on Kubernetes, you typically program
or debug a single service. The service might require access to other
services for testing and debugging. One option is to use the continuous
deployment pipeline, but even the fastest deployment pipeline introduces
a delay in the program or debug cycle.
Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT`
command to create an "intercept" for rerouting remote service traffic.
Where:
@@ -48,14 +63,27 @@ Where:
- `$LOCAL_PORT` is the port that your service is running on your local workstation
- And `$REMOTE_PORT` is the port your service listens to in the cluster
Running this command tells Telepresence to send remote traffic to your
local service instead of the service in the remote Kubernetes cluster.
Make edits to your service source code locally, save, and see the corresponding
changes when accessing your remote application take effect immediately.
You can also run your local service using a debugger or any other local development tool.
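For example, a hypothetical invocation (the name `example-service` and the port
numbers are placeholders, not values from this page):

```shell
# Reroute traffic that would hit example-service in the cluster to a process
# listening on local port 8080; the value after the colon is the port the
# service listens to in the cluster.
telepresence intercept example-service --port 8080:8080
```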
## How does Telepresence work?
Telepresence installs a traffic-agent sidecar next to your existing
application's container running in the remote cluster. It then captures
all traffic requests going into the Pod, and instead of forwarding this
to the application in the remote cluster, it routes all traffic (when you
create a [global intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept))
or a subset of the traffic (when you create a
[personal intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept))
to your local development environment.
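If you want to see what Telepresence has set up, the CLI can report its state
(a sketch; output depends on your cluster and any active intercepts):

```shell
# List workloads that can be intercepted and any active intercepts.
telepresence list
# Show the state of the local daemon and the cluster connection.
telepresence status
```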
## {{% heading "whatsnext" %}}
If you're interested in a hands-on tutorial, check out
[this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s)
that walks through locally developing the Guestbook application on Google Kubernetes Engine.
For further reading, visit the [Telepresence website](https://www.telepresence.io).