Merge pull request #27919 from howieyuen/debug

[zh] resync task files [2]
pull/27985/head
Kubernetes Prow Robot 2021-05-13 20:24:05 -07:00 committed by GitHub
commit 4dfabc001b
7 changed files with 48 additions and 58 deletions


@@ -82,22 +82,34 @@ Each request can be recorded with an associated _stage_. The defined stages are:
- `ResponseComplete` - When the response body has been completed and no more bytes will be sent.
- `Panic` - Generated when a panic occurs.
<!--
The configuration of an
[audit Event](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)
is different from the
[Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core)
API object.
-->
{{< note >}}
The configuration of an
[audit Event](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)
is different from the
[Event](/zh/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core)
API object.
{{< /note >}}
<!--
The audit logging feature increases the memory consumption of the API server
because some context required for auditing is stored for each request.
Additionally, memory consumption depends on the audit logging configuration.
-->
{{< note >}}
The audit logging feature increases the memory consumption of the API server,
because some context required for auditing is stored for each request.
Additionally, memory consumption depends on the audit logging configuration.
{{< /note >}}
<!--
## Audit Policy
Audit policy defines rules about what events should be recorded and what data
they should include. The audit policy object structure is defined in the
[`audit.k8s.io` API group][auditing-api]. When an event is processed, it's
[`audit.k8s.io` API group](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy).
When an event is processed, it's
compared against the list of rules in order. The first matching rule sets the
_audit level_ of the event. The defined audit levels are:
-->
@@ -105,7 +117,7 @@ _audit level_ of the event. The defined audit levels are:
Audit policy defines rules about what events should be recorded and what data they should include.
The audit policy object structure is defined in the
[`audit.k8s.io` API group](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go)
[`audit.k8s.io` API group](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy).
When an event is processed, it's compared against the list of rules in order.
The first matching rule sets the _audit level_ of the event. The defined audit levels are:
@@ -158,12 +170,18 @@ rules:
If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS as a starting point. You can check the
[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
script, which generates the audit policy file. You can see most of the audit policy file by looking directly at the script.
You can also refer to the [`Policy` configuration reference](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
for details about the fields defined.
-->
If you're crafting your own audit profile, you can use the audit profile for Google Container-Optimized OS
as a starting point. You can check the
[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)
script, which generates an audit policy file. You can see most of the audit policy file by looking directly at the script.
You can also refer to the [`Policy` configuration reference](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)
for details about the fields defined.
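As a concrete illustration, here is a minimal policy sketch that records every request at the `Metadata` level; the rule set is deliberately tiny and is a starting point rather than a recommended production profile.

```yaml
# Minimal audit policy sketch: log request metadata for everything,
# and skip the chatty RequestReceived stage entirely.
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # Catch-all rule; place more specific rules above it.
  - level: Metadata
```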
<!--
## Audit backends
@@ -173,10 +191,8 @@ Out of the box, the kube-apiserver provides two backends:
- Log backend, which writes events into the filesystem
- Webhook backend, which sends events to an external HTTP API
In both cases, audit events structure is defined by the API in the
`audit.k8s.io` API group. For Kubernetes {{< param "fullversion" >}}, that
API is at version
[`v1`](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go).
In all cases, audit events follow a structure defined by the Kubernetes API in the
[`audit.k8s.io` API group](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event).
-->
## Audit backends {#audit-backends}
@@ -186,9 +202,9 @@ API is at version
- Log backend, which writes events to the filesystem
- Webhook backend, which sends events to an external HTTP API
In both cases, the audit event structure is defined by the API in the `audit.k8s.io` API group.
For Kubernetes {{< param "fullversion" >}}, the current version of that API is
[`v1`](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go).
In all cases, audit events follow the structure defined by the Kubernetes API in the
[`audit.k8s.io` API group](/zh/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event).
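To sketch how the backends are wired up (the file paths here are only illustrative): the log backend is enabled with `--audit-log-path`, the webhook backend with `--audit-webhook-config-file`, and both consume the policy given via `--audit-policy-file`.

```shell
# Illustrative kube-apiserver flags for the log backend (other flags omitted):
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit/audit.log
```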
<!--
In case of patches, the request body is a JSON array with patch operations, not a JSON object


@@ -257,8 +257,7 @@ The message tells us that there were not enough resources for the Pod on any of the
The message portion tells us that no node had enough resources for the Pod.
<!--
To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas.
(Or you could just leave the one Pod pending, which is harmless.)
To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas. (Or you could leave the one Pod pending, which is harmless.)
-->
To correct this situation, you can use `kubectl scale` to update your Deployment to specify four or fewer replicas.
(Or you could leave the one Pod pending, which is harmless.)

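A sketch of the scale-down (assuming, for illustration, that the Deployment in question is named `nginx-deployment`):

```shell
# Reduce the replica count so every Pod can be scheduled on the available nodes.
kubectl scale deployment/nginx-deployment --replicas=4
```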

@@ -265,42 +265,18 @@ kubectl get pods --selector=name=nginx,type=frontend
```
<!--
If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
have the right ports exposed. If your service has a `containerPort` specified, but the Pods that are
selected don't have that port listed, then they won't be added to the endpoints list.
Verify that the pod's `containerPort` matches up with the Service's `targetPort`
-->
If the list of Pods matches expectations but the Endpoints are still empty, it's possible that you
don't have the right ports exposed. If your Service specifies a `containerPort` but the selected
Pods don't list that port, those Pods will not be added to the Endpoints list.
Verify that the Pods' `containerPort` matches the Service's `targetPort`.
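A sketch of that relationship; the names and the port number 9376 below are only illustrative:

```yaml
# The Service's targetPort must match a containerPort that the selected
# Pods actually expose; Kubernetes does not remap ports for you.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    name: nginx
    type: frontend
  ports:
    - port: 80          # port clients use when talking to the Service
      targetPort: 9376  # must equal the Pods' containerPort
```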
<!--
#### Network traffic is not forwarded
If you can connect to the service, but the connection is immediately dropped, and there are endpoints
in the endpoints list, it's likely that the proxy can't contact your pods.
There are three things to
check:
* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods).
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP.
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.
Please see [debugging service](/docs/tasks/debug-application-cluster/debug-service/) for more information.
-->
#### Network traffic is not forwarded

If you can connect to the service but the connection is immediately dropped, and there are entries
in the Endpoints list, it's likely that the proxy can't contact your Pods.

There are three things to check:

* Are your Pods working correctly? Look at the restart count and [debug the Pods](#debugging-pods).
* Can you connect to your Pods directly? Get the IP address of a Pod and try connecting to that IP (see the sketch after this list).
* Is your application serving on the port you configured? Kubernetes doesn't do port remapping, so if your application serves on port 8080, the `containerPort` field needs to be 8080.

See [debugging Services](/zh/docs/tasks/debug-application-cluster/debug-service/) for more information.
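A rough way to work through the last two checks (a sketch: the label comes from this page's examples, while the Pod IP and port 8080 are placeholders for your own values):

```shell
# List the Pod IPs behind the Service.
kubectl get pods -l name=nginx -o wide

# From another Pod in the cluster, try one of those Pod IPs on the port the
# application is actually configured to listen on (the IP and 8080 are examples).
wget -qO- http://10.244.1.7:8080
```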
## {{% heading "whatsnext" %}}


@@ -87,7 +87,7 @@ case you can try several things:
will never be scheduled.
You can check node capacities with the `kubectl get nodes -o <format>`
command. Here are some example command lines that extract just the necessary
command. Here are some example command lines that extract the necessary
information:
-->
#### Insufficient resources


@@ -143,7 +143,7 @@ kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
```
This section uses the `pause` container image in examples because it does not
contain userland debugging utilities, but this method works with all container
contain debugging utilities, but this method works with all container
images.
-->
## Example of debugging using ephemeral containers {#ephemeral-container-example}
@@ -162,7 +162,7 @@ kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
```
{{< note >}}
The examples in this section use the `pause` container image because it does not contain any userland debugging utilities, but this method works with all container images.
The examples in this section use the `pause` container image because it does not contain debugging utilities, but this method works with all container images.
{{< /note >}}
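The natural next step is to attach an ephemeral container that does carry debugging tools; a sketch of what that can look like, using `busybox` purely as an example image:

```shell
# Add an interactive busybox ephemeral container to the running Pod;
# --target lets it join the process namespace of the named container
# (when the container runtime supports that).
kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
```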
<!--


@@ -29,15 +29,15 @@ a Deployment (or other workload controller) runs the Pods and a Service was created
## Running commands in a Pod
For many steps here you will want to see what a Pod running in the cluster
sees. The simplest way to do this is to run an interactive alpine Pod:
sees. The simplest way to do this is to run an interactive busybox Pod:
-->
## Running commands in a Pod

For many steps here you will want to see what a Pod running in the cluster sees.
The simplest way to do this is to run an interactive alpine Pod:
The simplest way to do this is to run an interactive busybox Pod:
```none
$ kubectl run -it --rm --restart=Never alpine --image=alpine sh
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
```
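From the shell that this opens you can probe the cluster directly; for example (a sketch, using the `hostnames` Service that the rest of this walkthrough sets up):

```shell
# Inside the busybox Pod: does DNS resolve the Service name?
nslookup hostnames

# Does anything answer on the Service's port?
wget -qO- hostnames
```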
<!--
@@ -161,13 +161,13 @@ kubectl get pods -l app=hostnames \
```
<!--
The example container used for this walk-through simply serves its own hostname
The example container used for this walk-through serves its own hostname
via HTTP on port 9376, but if you are debugging your own app, you'll want to
use whatever port number your Pods are listening on.
From within a pod:
-->
The example container used for this walkthrough serves its own hostname via HTTP on port 9376,
but if you are debugging your own app, you'll want to use whatever port number your Pods are listening on.

From within a Pod:
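One form this check can take (a sketch; replace the addresses with the Pod IPs reported by `kubectl get pods -o wide`):

```shell
# From within a Pod: query each hostnames Pod directly on its serving port.
for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
    wget -qO- "$ep"
done
```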
@@ -260,9 +260,9 @@ service/hostnames exposed
```
<!--
And read it back, just to be sure:
And read it back:
-->
Rerun the query command to confirm there are no problems:
Rerun the query command:
```shell
kubectl get svc hostnames
@@ -608,14 +608,13 @@ Earlier you saw that the Pods were running. You can re-check that:
kubectl get pods -l app=hostnames
```
```none
NAME READY STATUS RESTARTS AGE
NAME READY STATUS RESTARTS AGE
hostnames-632524106-bbpiw 1/1 Running 0 1h
hostnames-632524106-ly40y 1/1 Running 0 1h
hostnames-632524106-tlaok 1/1 Running 0 1h
```
<!--
The `-l app=hostnames` argument is a label selector - just like our Service
has.
The `-l app=hostnames` argument is a label selector configured on the Service.
The "AGE" column says that these Pods are about an hour old, which implies that
they are running fine and not crashing.
@@ -627,7 +626,7 @@ If the restart count is high, read more about how to [debug pods](/docs/tasks/de
Inside the Kubernetes system is a control loop which evaluates the selector of
every Service and saves the results into a corresponding Endpoints object.
-->
The `-l app=hostnames` argument is a label selector - the same one defined in our Service.
The `-l app=hostnames` argument is the label selector configured on the Service.
The "AGE" column says that these Pods have been running for about an hour, which implies that they are running fine and not crashing.
@@ -899,7 +898,7 @@ iptables-save | grep hostnames
```
<!--
There should be 2 rules for each port of your Service (just one in this
There should be 2 rules for each port of your Service (only one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST".
Almost nobody should be using the "userspace" mode any more, so you won't spend


@@ -493,13 +493,13 @@ a running cluster in the [Deploying section](#deploying).
### Changing `DaemonSet` parameters {#changing-daemonset-parameters}
<!--
When you have the Stackdriver Logging `DaemonSet` in your cluster, you can just modify the
`template` field in its spec, daemonset controller will update the pods for you. For example,
let's assume you've just installed the Stackdriver Logging as described above. Now you want to
When you have the Stackdriver Logging `DaemonSet` in your cluster, you can modify the
`template` field in its spec. The DaemonSet controller manages the pods for you.
For example, assume you've installed the Stackdriver Logging as described above. Now you want to
change the memory limit to give fluentd more memory to safely process more logs.
-->
When you have the Stackdriver Logging `DaemonSet` in your cluster, you only need to modify the
`template` field in its spec, and the daemonset controller will update the Pods for you.
`template` field in its spec. The DaemonSet controller will manage the Pods for you.
For example, assume you have already installed the Stackdriver Logging mechanism as described above.
Now you want to change the memory limit to give fluentd more memory so that it can safely process more logs.
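One way to apply that kind of change without hand-editing the manifest is `kubectl set resources` against the DaemonSet's Pod template; the DaemonSet and container names below are assumptions, so check yours with `kubectl -n kube-system get ds` first.

```shell
# Raise the fluentd container's memory limit; the DaemonSet controller
# then rolls the updated template out to its Pods.
kubectl -n kube-system set resources daemonset/fluentd-gcp-v3.2.0 \
  -c fluentd-gcp --limits=memory=1Gi
```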