[zh] Tune translation for debug service task

parent 664464806c
commit badeab324b

@@ -1,19 +1,14 @@
---
content_type: concept
title: 调试 Service
---

<!--
reviewers:
- thockin
- bowei
content_type: concept
title: Debug Services
-->

<!-- overview -->

@@ -24,9 +19,9 @@ Deployment (or other workload controller) and created a Service, but you
get no response when you try to access it. This document will hopefully help
you to figure out what's going wrong.
-->
对于新安装的 Kubernetes,经常出现的问题是 Service 无法正常运行。你已经通过
Deployment(或其他工作负载控制器)运行了 Pod,并创建 Service,但是
当你尝试访问它时,没有任何响应。此文档有望对你有所帮助并找出问题所在。

<!-- body -->

@@ -36,33 +31,26 @@ Deployment(或其他工作负载控制器)运行了 Pod,并创建 Service
For many steps here you will want to see what a Pod running in the cluster
sees. The simplest way to do this is to run an interactive alpine Pod:
-->
## 在 Pod 中运行命令

对于这里的许多步骤,你可能希望知道运行在集群中的 Pod 看起来是什么样的。
最简单的方法是运行一个交互式的 alpine Pod:

```none
$ kubectl run -it --rm --restart=Never alpine --image=alpine sh
If you don't see a command prompt, try pressing enter.
```

<!--
If you already have a running Pod that you prefer to use, you can run a
command in it using:
-->
如果你已经有了想使用的正在运行的 Pod,则可以用以下方式在其中运行命令:

```shell
kubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>
```

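例如,假设你有一个正在运行的名为 alpine 的 Pod(Pod 名称与命令仅作示意):

```shell
# 在名为 alpine 的 Pod 中查看其 DNS 配置(单容器 Pod 可省略 -c)
kubectl exec alpine -- cat /etc/resolv.conf
```
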
<!--
## Setup

@@ -70,13 +58,16 @@ For the purposes of this walk-through, let's run some Pods. Since you're
probably debugging your own Service you can substitute your own details, or you
can follow along and get a second data point.
-->
## 设置 {#setup}

为了完成本次实践的任务,我们先运行几个 Pod。
由于你可能正在调试自己的 Service,你可以使用自己的信息进行替换,
或者你也可以跟着教程并开始下面的步骤来获得第二个数据点。

```shell
kubectl create deployment hostnames --image=k8s.gcr.io/serve_hostname
```

```none
deployment.apps/hostnames created
```

@@ -88,9 +79,11 @@ Let's scale the deployment to 3 replicas.
-->
`kubectl` 命令将打印创建或变更的资源的类型和名称,它们可以在后续命令中使用。
让我们将这个 Deployment 的副本数扩至 3。

```shell
kubectl scale deployment hostnames --replicas=3
```

```none
deployment.apps/hostnames scaled
```

@@ -98,7 +91,7 @@ deployment.apps/hostnames scaled
<!--
Note that this is the same as if you had the Deployment with the following YAML:
-->
请注意这与你使用以下 YAML 方式启动 Deployment 类似:

```yaml
apiVersion: apps/v1

@@ -127,14 +120,14 @@ The label "app" is automatically set by `kubectl create deployment` to the name
You can confirm your Pods are running:
-->

"app" 标签是 `kubectl create deployment` 根据 Deployment 名称自动设置的。

确认你的 Pod 处于运行状态:

```shell
kubectl get pods -l app=hostnames
```

```none
NAME                        READY     STATUS    RESTARTS   AGE
hostnames-632524106-bbpiw   1/1       Running   0          2m

@@ -146,19 +139,19 @@ hostnames-632524106-tlaok   1/1       Running   0          2m
You can also confirm that your Pods are serving. You can get the list of
Pod IP addresses and test them directly.
-->
你还可以确认你的 Pod 是否正在提供服务。你可以获取 Pod IP 地址列表并直接对其进行测试。

```shell
kubectl get pods -l app=hostnames \
    -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
```

```none
10.244.0.5
10.244.0.6
10.244.0.7
```

<!--
The example container used for this walk-through simply serves its own hostname
via HTTP on port 9376, but if you are debugging your own app, you'll want to

@@ -166,15 +159,17 @@ use whatever port number your Pods are listening on.
From within a pod:
-->
用于本教程的示例容器仅通过 HTTP 在端口 9376 上提供其自己的主机名,
但是如果要调试自己的应用程序,则需要使用你的 Pod 正在侦听的端口号。

在 Pod 内运行:

```shell
for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
    wget -qO- $ep
done
```

<!--
This should produce something like:
-->

@@ -196,10 +191,11 @@ there.
Assuming everything has gone to plan so far, you can start to investigate why
your Service doesn't work.
-->
如果此时你没有收到期望的响应,则你的 Pod 状态可能不健康,或者可能没有在你认为正确的端口上进行监听。
你可能会发现 `kubectl logs` 命令对于查看正在发生的事情很有用,
或者你可能需要通过 `kubectl exec` 直接进入 Pod 中并从那里进行调试。

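例如,可以这样查看某个 hostnames Pod 的日志(Pod 名称仅作示意):

```shell
# 输出指定 Pod 的容器日志,加上 -f 可以持续跟踪输出
kubectl logs hostnames-632524106-bbpiw
```
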
假设到目前为止一切都已按计划进行,那么你可以开始调查为何你的 Service 无法正常工作。

<!--
## Does the Service exist?

@@ -216,15 +212,17 @@ something like:
细心的读者会注意到我们实际上尚未创建 Service,这是有意而为之。
这一步有时会被遗忘,这是首先要检查的步骤。

那么,如果我尝试访问不存在的 Service 会怎样?假设你有另一个 Pod 通过名称访问该 Service,你将得到类似结果:

```shell
wget -O- hostnames
```

```none
Resolving hostnames (hostnames)... failed: Name or service not known.
wget: unable to resolve host address 'hostnames'
```

<!--
The first thing to check is whether that Service actually exists:
-->

@@ -233,6 +231,7 @@ The first thing to check is whether that Service actually exists:
```shell
kubectl get svc hostnames
```

```none
No resources found.
Error from server (NotFound): services "hostnames" not found

@@ -242,10 +241,12 @@ Error from server (NotFound): services "hostnames" not found
Let's create the Service. As before, this is for the walk-through - you can
use your own Service's details here.
-->
让我们创建 Service。和以前一样,这是演练的一部分,你可以在此处使用自己的 Service 的信息。

```shell
kubectl expose deployment hostnames --port=80 --target-port=9376
```

```none
service/hostnames exposed
```

@@ -258,6 +259,7 @@ And read it back, just to be sure:
```shell
kubectl get svc hostnames
```

```none
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
hostnames   ClusterIP   10.0.1.175   <none>        80/TCP    5s

@@ -268,9 +270,9 @@ Now you know that the Service exists.
As before, this is the same as if you had started the `Service` with YAML:
-->
现在你知道了 Service 确实存在。

同前,此步骤效果与通过 YAML 方式启动 `Service` 一样:

```yaml
apiVersion: v1

@@ -286,13 +288,14 @@ spec:
    port: 80
    targetPort: 9376
```

<!--
In order to highlight the full range of configuration, the Service you created
here uses a different port number than the Pods. For many real-world
Services, these values might be the same.
-->
为了突出配置范围的完整性,你在此处创建的 Service 使用的端口号与 Pod 不同。
对于许多真实的 Service,这些值可以是相同的。

<!--
## Does the Service work by DNS name?

@@ -307,24 +310,29 @@ From a Pod in the same Namespace:
|
通常客户端通过 DNS 名称来匹配到 Service。
|
||||||
|
|
||||||
从相同命名空间下的 Pod 中运行以下命令:
|
从相同命名空间下的 Pod 中运行以下命令:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
nslookup hostnames
|
nslookup hostnames
|
||||||
```
|
```
|
||||||
|
|
||||||
```none
|
```none
|
||||||
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
||||||
|
|
||||||
Name: hostnames
|
Name: hostnames
|
||||||
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
|
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
|
||||||
```
|
```
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
If this fails, perhaps your Pod and Service are in different
|
If this fails, perhaps your Pod and Service are in different
|
||||||
Namespaces, try a namespace-qualified name (again, from within a Pod):
|
Namespaces, try a namespace-qualified name (again, from within a Pod):
|
||||||
-->
|
-->
|
||||||
如果失败,那么您的 Pod 和 Service 可能位于不同的命名空间中,请尝试使用限定命名空间的名称(同样在 Pod 内运行):
|
如果失败,那么你的 Pod 和 Service 可能位于不同的命名空间中,
|
||||||
|
请尝试使用限定命名空间的名称(同样在 Pod 内运行):
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
nslookup hostnames.default
|
nslookup hostnames.default
|
||||||
```
|
```
|
||||||
|
|
||||||
```none
|
```none
|
||||||
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
|
||||||
|
|
||||||
|
@ -337,11 +345,13 @@ If this works, you'll need to adjust your app to use a cross-namespace name, or
|
||||||
run your app and Service in the same Namespace. If this still fails, try a
fully-qualified name:
-->
如果成功,那么需要调整你的应用,使用跨命名空间的名称去访问它,
或者在相同的命名空间中运行应用和 Service。如果仍然失败,请尝试一个完全限定的名称:

```shell
nslookup hostnames.default.svc.cluster.local
```

```none
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

@@ -361,17 +371,19 @@ You can also try this from a `Node` in the cluster:
10.0.0.10 is the cluster's DNS Service IP, yours might be different.
{{< /note >}}
-->
注意这里的后缀:"default.svc.cluster.local"。"default" 是我们正在操作的命名空间。
"svc" 表示这是一个 Service。"cluster.local" 是你的集群域,在你自己的集群中可能会有所不同。

你也可以在集群中的节点上尝试此操作:

{{< note >}}
10.0.0.10 是集群的 DNS 服务 IP,你的可能有所不同。
{{< /note >}}

```shell
nslookup hostnames.default.svc.cluster.local 10.0.0.10
```

```none
Server:    10.0.0.10
Address:   10.0.0.10#53

@@ -385,15 +397,17 @@ If you are able to do a fully-qualified name lookup but not a relative one, you
need to check that your `/etc/resolv.conf` file in your Pod is correct. From
within a Pod:
-->
如果你能够使用完全限定的名称查找,但不能使用相对名称,则需要检查你 Pod 中的
`/etc/resolv.conf` 文件是否正确。在 Pod 中运行以下命令:

```shell
cat /etc/resolv.conf
```

<!--
You should see something like:
-->
你应该可以看到类似这样的输出:

```
nameserver 10.0.0.10

@@ -420,12 +434,19 @@ The `options` line must set `ndots` high enough that your DNS client library
considers search paths at all. Kubernetes sets this to 5 by default, which is
high enough to cover all of the DNS names it generates.
-->
`nameserver` 行必须指示你的集群的 DNS Service,
它是通过 `--cluster-dns` 标志传递到 kubelet 的。

`search` 行必须包含一个适当的后缀,以便查找 Service 名称。
在本例中,它查找本地命名空间(`default.svc.cluster.local`)中的服务和
所有命名空间(`svc.cluster.local`)中的服务,最后在集群(`cluster.local`)中查找
服务的名称。根据你自己的安装情况,可能会有额外的记录(最多 6 条)。
集群后缀是通过 `--cluster-domain` 标志传递给 `kubelet` 的。
本文中,我们假定后缀是 “cluster.local”。
你的集群配置可能不同,这种情况下,你应该在上面的所有命令中更改它。

`options` 行必须设置足够高的 `ndots`,以便 DNS 客户端库考虑搜索路径。
在默认情况下,Kubernetes 将这个值设置为 5,这个值足够高,足以覆盖它生成的所有 DNS 名称。

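举例来说(仅作示意),在上述 `search` 与 `ndots:5` 的配置下,解析相对名称 `hostnames` 时,DNS 客户端会依次尝试各个搜索后缀,直到得到答案:

```none
hostnames.default.svc.cluster.local
hostnames.svc.cluster.local
hostnames.cluster.local
```
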
<!--
### Does any Service work by DNS name? {#does-any-service-exist-in-dns}

@@ -436,7 +457,9 @@ Service should always work. From within a Pod:
-->
### 是否存在 Service 能通过 DNS 名称访问?{#does-any-service-exist-in-dns}

如果上面的方式仍然失败,DNS 查找不到你需要的 Service,你可以后退一步,
看看还有什么其它东西没有正常工作。
Kubernetes 主 Service 应该一直是工作的。在 Pod 中运行如下命令:

```shell
nslookup kubernetes.default

@@ -453,18 +476,21 @@ Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
If this fails, please see the [kube-proxy](#is-the-kube-proxy-working) section
of this document, or even go back to the top of this document and start over,
but instead of debugging your own Service, debug the DNS Service.
-->
如果失败,你可能需要转到本文的 [kube-proxy](#is-the-kube-proxy-working) 节,
或者甚至回到文档的顶部重新开始,但不是调试你自己的 Service,而是调试 DNS Service。

<!--
## Does the Service work by IP?

Assuming you have confirmed that DNS works, the next thing to test is whether your
Service works by its IP address. From a Pod in your cluster, access the
Service's IP (from `kubectl get` above).
-->

## Service 能够通过 IP 访问么?

假设你已经确认 DNS 工作正常,那么接下来要测试的是你的 Service 能否通过它的 IP 正常访问。
从集群中的一个 Pod,尝试访问 Service 的 IP(从上面的 `kubectl get` 命令获取)。

```shell
for i in $(seq 1 3); do

@@ -487,7 +513,7 @@ hostnames-632524106-tlaok
If your Service is working, you should get correct responses. If not, there
are a number of things that could be going wrong. Read on.
-->
如果 Service 状态是正常的,你应该得到正确的响应。如果没有,有很多可能出错的地方,请继续阅读。

<!--
## Is the Service defined correctly?

@@ -498,11 +524,13 @@ and verify it:
-->
## Service 的配置是否正确?

这听起来可能很愚蠢,但你应该两次甚至三次检查你的 Service 配置是否正确,并且与你的 Pod 匹配。
查看你的 Service 配置并验证它:

```shell
kubectl get service hostnames -o json
```

```json
{
    "kind": "Service",

@@ -546,10 +574,10 @@ kubectl get service hostnames -o json
* If you meant to use a named port, do your Pods expose a port with the same name?
* Is the port's `protocol` correct for your Pods?
-->
* 你想要访问的 Service 端口是否在 `spec.ports[]` 中列出?
* `targetPort` 对你的 Pod 来说正确吗(许多 Pod 使用与 Service 不同的端口)?
* 如果你想使用数值型端口,那么它的类型是一个数值(9376)还是字符串 “9376”?
* 如果你想使用名称型端口,那么你的 Pod 是否暴露了一个同名端口?
* 端口的 `protocol` 和 Pod 的是否对应?

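要快速核对这些字段,可以借助 `kubectl` 的 JSONPath 输出(命令仅作示意):

```shell
# 输出 Service 定义的端口、目标端口和协议
kubectl get service hostnames -o jsonpath='{.spec.ports[*].port} {.spec.ports[*].targetPort} {.spec.ports[*].protocol}'
```
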
<!--

@@ -561,9 +589,10 @@ actually being selected by the Service.
Earlier you saw that the Pods were running. You can re-check that:
-->
## Service 有 Endpoints 吗?

如果你已经走到了这一步,你已经确认你的 Service 被正确定义,并能通过 DNS 解析。
现在,让我们检查一下,你运行的 Pod 确实是被 Service 选中的。

早些时候,我们已经看到 Pod 是运行状态。我们可以再检查一下:

@@ -590,17 +619,20 @@ If the restart count is high, read more about how to [debug pods](/docs/tasks/de
||||||
Inside the Kubernetes system is a control loop which evaluates the selector of
|
Inside the Kubernetes system is a control loop which evaluates the selector of
|
||||||
every Service and saves the results into a corresponding Endpoints object.
|
every Service and saves the results into a corresponding Endpoints object.
|
||||||
-->
|
-->
|
||||||
`-l app=hostnames` 参数是一个标签选择器 - 和我们 Service 中的一样。
|
`-l app=hostnames` 参数是一个标签选择算符 - 和我们 Service 中定义的一样。
|
||||||
|
|
||||||
"AGE" 列表明这些 Pod 已经启动一个小时了,这意味着它们运行良好,而不是崩溃。
|
"AGE" 列表明这些 Pod 已经启动一个小时了,这意味着它们运行良好,而未崩溃。
|
||||||
|
|
||||||
"RESTARTS" 列表明 Pod 没有经常崩溃或重启。经常性崩溃可能导致间歇性连接问题。如果重启数过大,通过[调试 pod](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods)了解更多。
|
"RESTARTS" 列表明 Pod 没有经常崩溃或重启。经常性崩溃可能导致间歇性连接问题。
|
||||||
|
如果重启次数过大,通过[调试 pod](/zh/docs/tasks/debug-application-cluster/debug-application/#debugging-pods)
|
||||||
|
了解相关技术。
|
||||||
|
|
||||||
在 Kubernetes 系统中有一个控制循环,它评估每个 Service 的选择器,并将结果保存到 Endpoints 对象中。
|
在 Kubernetes 系统中有一个控制回路,它评估每个 Service 的选择算符,并将结果保存到 Endpoints 对象中。
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
kubectl get endpoints hostnames
|
kubectl get endpoints hostnames
|
||||||
|
```
|
||||||
|
```
|
||||||
NAME ENDPOINTS
|
NAME ENDPOINTS
|
||||||
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
|
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
|
||||||
```
|
```
|
||||||
|
@ -614,7 +646,12 @@ other error, such as the Service selecting for `app=hostnames`, but the
|
||||||
Deployment specifying `run=hostnames`, as in versions previous to 1.18, where
the `kubectl run` command could have been also used to create a Deployment.
-->
这证实 Endpoints 控制器已经为你的 Service 找到了正确的 Pod。
如果 `ENDPOINTS` 列的值为 `<none>`,则应检查 Service 的 `spec.selector` 字段,
以及你实际想选择的 Pod 的 `metadata.labels` 的值。
常见的错误是输入错误或其他错误,例如 Service 想选择 `app=hostnames`,但是
Deployment 指定的是 `run=hostnames`。在 1.18 之前的版本中,`kubectl run`
也可以被用来创建 Deployment。

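要对比两者,可以分别查看 Service 的选择算符和 Pod 的标签(命令仅作示意):

```shell
# 查看 Service 的选择算符
kubectl get service hostnames -o jsonpath='{.spec.selector}'
# 查看 Pod 的标签
kubectl get pods -l app=hostnames --show-labels
```
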
<!--
## Are the Pods working?

@@ -633,8 +670,8 @@ From within a Pod:
-->
## Pod 正常工作吗?

至此,你知道你的 Service 已存在,并且已匹配到你的 Pod。在本实验的开始,你已经检查了 Pod 本身。
让我们再次检查 Pod 是否确实在工作:你可以绕过 Service 机制并直接转到 Pod,如上面的 Endpoints 所示。

{{< note >}}
这些命令使用的是 Pod 端口(9376),而不是 Service 端口(80)。

@@ -652,6 +689,7 @@ done
This should produce something like:
-->
输出应该类似这样:

```
hostnames-632524106-bbpiw
hostnames-632524106-ly40y

@@ -663,7 +701,8 @@ You expect each Pod in the Endpoints list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
Pods), you should investigate what's happening there.
-->
你希望 Endpoints 列表中的每个 Pod 都返回自己的主机名。
如果情况并非如此(或者你自己的 Pod 的正确行为不是这样),你应该调查这里发生了什么。

<!--
## Is the kube-proxy working?

@@ -680,9 +719,12 @@ will have to investigate whatever implementation of Services you are using.
-->
## kube-proxy 正常工作吗?

如果你到达这里,则说明你的 Service 正在运行,拥有 Endpoints,Pod 真正在提供服务。
此时,整个 Service 代理机制是可疑的。让我们一步一步地确认它没问题。

Service 的默认实现(在大多数集群上应用的)是 kube-proxy。
这是一个在每个节点上运行的程序,负责配置用于提供 Service 抽象的机制之一。
如果你的集群不使用 kube-proxy,则以下各节将不适用,你将必须检查你正在使用的 Service 的实现方式。

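顺便一提,如果不确定 kube-proxy 当前运行的代理模式,可以在节点上查询它的只读状态端点(这里假设 kube-proxy 使用默认的 10249 指标端口,仅作示意):

```shell
# 返回当前代理模式,例如 iptables 或 ipvs
curl localhost:10249/proxyMode
```
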
<!--
## Is the kube-proxy working?

@@ -692,7 +734,7 @@ Node, you should get something like the below:
-->
### kube-proxy 正常运行吗?

确认 `kube-proxy` 正在节点上运行。在节点上直接运行,你将会得到类似以下的输出:

```shell
ps auxw | grep kube-proxy

@@ -708,7 +750,10 @@ depends on your Node OS. On some OSes it is a file, such as
/var/log/kube-proxy.log, while other OSes use `journalctl` to access logs. You
should see something like:
-->
下一步,确认它并没有出现明显的失败,比如连接主节点失败。要做到这一点,你必须查看日志。
访问日志的方式取决于你节点的操作系统。
在某些操作系统上日志是一个文件,如 /var/log/kube-proxy.log,
而其他操作系统使用 `journalctl` 访问日志。

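在使用 systemd 的节点上,通常可以这样查看(命令仅作示意):

```shell
# 若 kube-proxy 以 systemd 服务方式运行,按单元名查询其日志
journalctl -u kube-proxy
```
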
你应该看到输出类似于:

```none
I1027 22:14:53.995134    5063 server.go:200] Running in resource-only container "/kube-proxy"

@@ -734,9 +779,12 @@ installing Kubernetes from scratch. If this is the case, you need to manually
install the `conntrack` package (e.g. `sudo apt install conntrack` on Ubuntu)
and then retry.
-->
如果你看到有关无法连接主节点的错误消息,则应再次检查节点配置和安装步骤。

`kube-proxy` 无法正确运行的可能原因之一是找不到所需的 `conntrack` 二进制文件。
在一些 Linux 系统上,这是可能发生的,这取决于你如何安装集群,
例如,你是从头开始一步步手动安装 Kubernetes。如果是这样的话,你需要手动安装
`conntrack` 包(例如,在 Ubuntu 上使用 `sudo apt install conntrack`),然后重试。

<!--
Kube-proxy can run in one of a few modes. In the log listed above, the

@@ -744,20 +792,24 @@ line `Using iptables Proxier` indicates that kube-proxy is running in
||||||
"iptables" mode. The most common other mode is "ipvs". The older "userspace"
|
"iptables" mode. The most common other mode is "ipvs". The older "userspace"
|
||||||
mode has largely been replaced by these.
|
mode has largely been replaced by these.
|
||||||
|
|
||||||
|
-->
|
||||||
|
Kube-proxy 可以以若干模式之一运行。在上述日志中,`Using iptables Proxier`
|
||||||
|
行表示 kube-proxy 在 "iptables" 模式下运行。
|
||||||
|
最常见的另一种模式是 "ipvs"。先前的 "userspace" 模式已经被这些所代替。
|
||||||
|
|
||||||
|
<!--
|
||||||
#### Iptables mode
|
#### Iptables mode
|
||||||
|
|
||||||
In "iptables" mode, you should see something like the following on a Node:
|
In "iptables" mode, you should see something like the following on a Node:
|
||||||
-->
|
-->
|
||||||
Kube-proxy 可以在这些模式之一中运行。在上述日志中,`Using iptables Proxier` 行表示 kube-proxy 在 "iptables" 模式下运行。最常见的另一种模式是 "ipvs"。先前的 "userspace"
|
|
||||||
模式已经被这些所代替。
|
|
||||||
|
|
||||||
#### Iptables 模式
|
#### Iptables 模式
|
||||||
|
|
||||||
在 "iptables" 模式中, 您应该可以在节点上看到如下输出:
|
在 "iptables" 模式中, 你应该可以在节点上看到如下输出:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
iptables-save | grep hostnames
|
iptables-save | grep hostnames
|
||||||
```
|
```
|
||||||
|
|
||||||
```none
|
```none
|
||||||
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
|
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
|
||||||
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
|
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
|
||||||
|
@ -777,25 +829,25 @@ one `KUBE-SVC-<hash>` chain. For each Pod endpoint, there should be a small
|
||||||
number of rules in that `KUBE-SVC-<hash>` and one `KUBE-SEP-<hash>` chain with
a small number of rules in it. The exact rules will vary based on your exact
config (including node-ports and load-balancers).
-->
对于每个 Service 的每个端口,应有 1 条 `KUBE-SERVICES` 规则、一个 `KUBE-SVC-<hash>` 链。
对于每个 Pod 末端,在那个 `KUBE-SVC-<hash>` 链中应该有一些规则与之对应,还应该
有一个 `KUBE-SEP-<hash>` 链与之对应,其中包含为数不多的几条规则。
实际的规则数量可能会根据你实际的配置(包括 NodePort 和 LoadBalancer 服务)有所不同。

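如果想进一步查看某条链中的具体规则,可以在节点上按链名查询(链名中的哈希值因集群而异,以下仅作示意):

```shell
# 列出 nat 表中某个 KUBE-SVC 链的规则(链名仅为示例)
iptables -t nat -L KUBE-SVC-NWV5X2332I4OT4T3 -n
```
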
<!--
#### IPVS mode

In "ipvs" mode, you should see something like the following on a Node:
-->
#### IPVS 模式

在 "ipvs" 模式中,你应该在节点上看到如下输出:

```shell
ipvsadm -ln
```

```none
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

@@ -813,22 +865,26 @@ load-balancer IPs, kube-proxy will create a virtual server. For each Pod
endpoint, it will create corresponding real servers. In this example, service
hostnames(`10.0.1.175:80`) has 3 endpoints(`10.244.0.5:9376`,
`10.244.0.6:9376`, `10.244.0.7:9376`).
-->
对于每个 Service 的每个端口,以及 NodePort、外部 IP 和负载均衡器的 IP,
kube-proxy 都会创建一个虚拟服务器。
对于每个 Pod 末端,它将创建相应的真实服务器。
在此示例中,服务 hostnames(`10.0.1.175:80`)拥有 3 个末端(`10.244.0.5:9376`、
`10.244.0.6:9376` 和 `10.244.0.7:9376`)。

<!--
#### Userspace mode

In rare cases, you may be using "userspace" mode. From your Node:
-->
#### Userspace 模式

在极少数情况下,你可能会用到 "userspace" 模式。在你的节点上运行:

```shell
iptables-save | grep hostnames
```

```none
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577

@@ -840,23 +896,26 @@ example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST".
Almost nobody should be using the "userspace" mode any more, so you won't spend
more time on it here.
-->
对于 Service(本例中只有一个)的每个端口,应当有 2 条规则:
一条 "KUBE-PORTALS-CONTAINER" 和一条 "KUBE-PORTALS-HOST" 规则。

几乎没有人应该再使用 "userspace" 模式,因此这里不再花更多时间讨论它。

<!--
### Is kube-proxy proxying?

Assuming you do see one the above cases, try again to access your Service by
IP from one of your Nodes:
-->
### kube-proxy 是否在执行代理操作?

假设你确实遇到上述情况之一,请重试从节点上通过 IP 访问你的 Service:

```shell
curl 10.0.1.175:80
```

```none
hostnames-632524106-bbpiw
```

@@ -869,13 +928,16 @@ Look back at the `iptables-save` output above, and extract the
port number that `kube-proxy` is using for your Service. In the above
examples it is "48577". Now connect to that:
-->
如果失败,并且你正在使用用户空间代理,则可以尝试直接访问代理。
如果你使用的是 iptables 代理,请跳过本节。

回顾上面的 `iptables-save` 输出,并提取 `kube-proxy` 为你的 Service 所使用的端口号。
在上面的例子中,端口号是 “48577”。现在试着连接它:

```shell
curl localhost:48577
```

```none
hostnames-632524106-tlaok
```

@@ -883,7 +945,7 @@ hostnames-632524106-tlaok
<!--
If this still fails, look at the `kube-proxy` logs for specific lines like:
-->
如果这步操作仍然失败,请查看 `kube-proxy` 日志中的特定行,如:

```none
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]

@@ -892,7 +954,10 @@ Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9
<!--
If you don't see those, try restarting `kube-proxy` with the `-v` flag set to 4, and
then look at the logs again.
-->
如果你没有看到这些,请尝试将 `-v` 标志设置为 4 并重新启动 `kube-proxy`,然后再查看日志。

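如果 kube-proxy 是以 DaemonSet 方式部署的(例如 kubeadm 搭建的集群),可以通过编辑 DaemonSet 来调整这一标志(做法仅作示意):

```shell
# 编辑 kube-system 命名空间中的 kube-proxy DaemonSet,
# 在容器启动参数中加入或修改 --v=4,保存后 Pod 会自动重建
kubectl -n kube-system edit daemonset kube-proxy
```
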
<!--
### Edge case: A Pod fails to reach itself via the Service IP {#a-pod-fails-to-reach-itself-via-the-service-ip}

This might sound unlikely, but it does happen and it is supposed to work.

@@ -905,26 +970,28 @@ back to themselves if they try to access their own Service VIP. The
`hairpin-mode` flag must either be set to `hairpin-veth` or
`promiscuous-bridge`.
-->
### 边缘案例:Pod 无法通过 Service IP 连接到它本身 {#a-pod-fails-to-reach-itself-via-the-service-ip}

这听起来似乎不太可能,但是确实可能发生,并且本应正常工作。

如果网络没有针对“发夹模式(Hairpin)”流量正确配置,
通常当 `kube-proxy` 以 `iptables` 模式运行,并且 Pod 与桥接网络连接时,就会发生这种情况。
`kubelet` 提供了 `hairpin-mode` [标志](/zh/docs/reference/command-line-tools-reference/kubelet/),
使得 Service 的末端在尝试访问自己的 Service VIP 时,能够把流量负载均衡回它们自身。
`hairpin-mode` 标志必须被设置为 `hairpin-veth` 或者 `promiscuous-bridge`。

<!--
The common steps to trouble shoot this are as follows:

* Confirm `hairpin-mode` is set to `hairpin-veth` or `promiscuous-bridge`.
  You should see something like the below. `hairpin-mode` is set to
  `promiscuous-bridge` in the following example.
-->
诊断此类问题的常见步骤如下:

* 确认 `hairpin-mode` 被设置为 `hairpin-veth` 或 `promiscuous-bridge`。
  你应该可以看到类似下面的输出。本例中 `hairpin-mode` 被设置为 `promiscuous-bridge`。

```shell
ps auxw | grep kubelet

@@ -942,9 +1009,12 @@ match `--hairpin-mode` flag due to compatibility. Check if there is any log
lines with key word `hairpin` in kubelet.log. There should be log lines
indicating the effective hairpin mode, like something below.
-->
* 确认有效的 `hairpin-mode`。要做到这一点,你必须查看 kubelet 日志。
  访问日志的方式取决于节点的操作系统。在一些操作系统上,它是一个文件,如 /var/log/kubelet.log,
  而其他操作系统则使用 `journalctl` 访问日志。请注意,出于兼容性考虑,
  实际生效的 `hairpin-mode` 可能与 `--hairpin-mode` 标志不一致。要确认这一点,
  可以在 kubelet.log 中检查是否有带关键字 `hairpin` 的日志行,其中应该有
  日志行指示实际生效的 `hairpin-mode`,就像下面这样。

```none
I0629 00:51:43.648698    3252 kubelet.go:380] Hairpin mode set to "promiscuous-bridge"

@@ -955,7 +1025,8 @@ I0629 00:51:43.648698    3252 kubelet.go:380] Hairpin mode set to "promiscuous-b
the permission to operate in `/sys` on node. If everything works properly,
you should see something like:
-->
* 如果有效的发夹模式是 `hairpin-veth`,要保证 `kubelet` 有操作节点上 `/sys` 的权限。
  如果一切正常,你将会看到如下输出:

```shell
for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done

@@ -972,7 +1043,8 @@ for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; don
has the permission to manipulate linux bridge on node. If `cbr0` bridge is
used and configured properly, you should see:
-->
* 如果有效的发夹模式是 `promiscuous-bridge`,要保证 `kubelet` 有操作节点上
  Linux 网桥的权限。如果 `cbr0` 网桥正在被使用且被正确设置,你将会看到如下输出:

```shell
ifconfig cbr0 |grep PROMISC

@@ -983,7 +1055,10 @@ UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
<!--
* Seek help if none of above works out.
-->
* 如果以上步骤都不能解决问题,请寻求帮助。

<!--
## Seek help

If you get this far, something very strange is happening. Your Service is

@@ -996,28 +1071,23 @@ Contact us on
[Slack](/docs/troubleshooting/#slack) or
[Forum](https://discuss.kubernetes.io) or
[GitHub](https://github.com/kubernetes/kubernetes).
-->

## 寻求帮助

如果你走到这一步,那么就真的是奇怪的事情发生了。你的 Service 正在运行,有 Endpoints 存在,
你的 Pod 也确实在提供服务。你的 DNS 正常,`iptables` 规则已经安装,`kube-proxy` 看起来也正常。
然而 Service 还是没有正常工作。这种情况下,请告诉我们,以便我们可以帮助调查!

通过
[Slack](/zh/docs/tasks/debug-application-cluster/troubleshooting/#slack) 或者
[Forum](https://discuss.kubernetes.io) 或者
[GitHub](https://github.com/kubernetes/kubernetes)
联系我们。

## {{% heading "whatsnext" %}}

<!--
Visit [troubleshooting document](/docs/troubleshooting/) for more information.
-->
访问[故障排查文档](/zh/docs/tasks/debug-application-cluster/troubleshooting/)获取更多信息。