Merge pull request #34237 from howieyuen/zh-34221-concepts-5

[zh]Resync concepts files after zh language renaming(concepts-5)
pull/34261/head
Kubernetes Prow Robot 2022-06-12 23:54:10 -07:00 committed by GitHub
commit 06273bf27f
7 changed files with 439 additions and 197 deletions


@ -30,7 +30,7 @@ Add-ons 扩展了 Kubernetes 的功能。
* [Calico](https://docs.projectcalico.org/latest/getting-started/kubernetes/) is a secure L3 networking and network policy provider.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
@ -40,7 +40,7 @@ Add-ons 扩展了 Kubernetes 的功能。
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is OVN based CNI controller plugin to provide cloud native based Service function chaining(SFC), Multiple OVN overlay networking, dynamic subnet creation, dynamic creation of virtual networks, VLAN Provider network, Direct provider network and pluggable with other Multi-network plugins, ideal for edge based cloud native workloads in Multi-cluster networking
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* **Romana** is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details available [here](https://github.com/romana/romana/tree/master/containerize).
* [Romana](https://github.com/romana) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy](/docs/concepts/services-networking/network-policies/) API.
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
-->
## 网络和网络策略
@ -54,7 +54,7 @@ Add-ons 扩展了 Kubernetes 的功能。
* [Cilium](https://github.com/cilium/cilium) 是一个 L3 网络和网络策略插件,能够透明的实施 HTTP/API/L7 策略。
同时支持路由routing和覆盖/封装overlay/encapsulation模式。
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) 使 Kubernetes 无缝连接到一种 CNI 插件,
例如Flannel、Calico、Canal、Romana 或者 Weave。
例如Flannel、Calico、Canal 或者 Weave。
* [Contiv](https://contivpp.io/) 为各种用例和丰富的策略框架提供可配置的网络
(使用 BGP 的本机 L3、使用 vxlan 的覆盖、标准 L2 和 Cisco-SDN/ACI
Contiv 项目完全[开源](https://github.com/contiv)。
@ -84,9 +84,8 @@ Add-ons 扩展了 Kubernetes 的功能。
CaaS / PaaS 平台例如 Pivotal Container ServicePKS和 OpenShift之间的集成。
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst)
是一个 SDN 平台,可在 Kubernetes Pods 和非 Kubernetes 环境之间提供基于策略的联网,并具有可视化和安全监控。
* Romana 是一个 pod 网络的第三层解决方案,并支持
[NetworkPolicy API](/zh/docs/concepts/services-networking/network-policies/)。
Kubeadm add-on 安装细节可以在[这里](https://github.com/romana/romana/tree/master/containerize)找到。
* [Romana](https://github.com/romana) 是一个 Pod 网络的第三层解决方案,并支持
[NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/) API。
* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)
提供网络和网络策略,能在网络分区的两侧继续工作,并且不需要外部数据库。
@ -129,7 +128,7 @@ Add-ons 扩展了 Kubernetes 的功能。
运行虚拟机的 add-ons。通常运行在裸机集群上。
* [节点问题检测器](https://github.com/kubernetes/node-problem-detector) 在 Linux 节点上运行,
并将系统问题报告为[事件](/docs/reference/kubernetes-api/cluster-resources/event-v1/)
或[节点状况](/zh/docs/concepts/architecture/nodes/#condition)。
或[节点状况](/zh-cn/docs/concepts/architecture/nodes/#condition)。
<!--
## Legacy Add-ons


@ -584,7 +584,7 @@ opinions of the proper content of these objects.
就可能出现抖动。
<!--
Each `kube-apiserver` makes an inital maintenance pass over the
Each `kube-apiserver` makes an initial maintenance pass over the
mandatory and suggested configuration objects, and after that does
periodic maintenance (once per minute) of those objects.


@ -212,6 +212,7 @@ Operator.
{{% thirdparty-content %}}
* [Charmed Operator Framework](https://juju.is/)
* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk)
* [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
* [kubebuilder](https://book.kubebuilder.io/)
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (dotnet operator SDK)
@ -226,6 +227,7 @@ you implement yourself
{{% thirdparty-content %}}
* [Charmed Operator Framework](https://juju.is/)
* [Java Operator SDK](https://github.com/java-operator-sdk/java-operator-sdk)
* [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
* [kubebuilder](https://book.kubebuilder.io/)
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (dotnet operator SDK)


@ -41,8 +41,7 @@ Resource quotas work like this:
资源配额的工作方式如下:
<!--
- Different teams work in different namespaces. Currently this is voluntary, but
support for making this mandatory via ACLs is planned.
- Different teams work in different namespaces. This can be enforced with [RBAC](/docs/reference/access-authn-authz/rbac/).
- The administrator creates one ResourceQuota for each namespace.
- Users create resources (pods, services, etc.) in the namespace, and the quota system
tracks usage to ensure it does not exceed hard resource limits defined in a ResourceQuota.
@ -53,8 +52,7 @@ Resource quotas work like this:
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
-->
- 不同的团队可以在不同的命名空间下工作,目前这是非约束性的,在未来的版本中可能会通过
ACL (Access Control List 访问控制列表) 来实现强制性约束。
- 不同的团队可以在不同的命名空间下工作。这可以通过 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。
- 集群管理员可以为每个命名空间创建一个或多个 ResourceQuota 对象。
- 当用户在命名空间下创建资源(如 Pod、Service 等Kubernetes 的配额系统会
跟踪集群的资源使用情况,以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。
@ -65,14 +63,14 @@ Resource quotas work like this:
提示: 可使用 `LimitRanger` 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。
若想避免这类问题,请参考
[演练](/zh/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)示例。
[演练](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)示例。
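A minimal sketch of this workflow, using a hypothetical namespace `team-a` and made-up limits (neither appears in this page):

```shell
# Hypothetical namespace and limits, for illustration only.
kubectl create namespace team-a

kubectl apply -n team-a -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
spec:
  hard:
    pods: "20"
EOF

# Inspect current usage against the hard limits; a request that would push
# usage past them is rejected by the API server with HTTP 403 FORBIDDEN.
kubectl describe resourcequota team-a-quota -n team-a
```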
<!--
The name of a ResourceQuota object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
-->
ResourceQuota 对象的名称必须是合法的
[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
[DNS 子域名](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。
<!--
Examples of policies that could be created using namespaces and quotas are:
@ -130,7 +128,7 @@ that can be requested in a given namespace.
## 计算资源配额
用户可以对给定命名空间下的可被请求的
[计算资源](/zh/docs/concepts/configuration/manage-resources-containers/)
[计算资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/)
总量进行限制。
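For instance, a quota that caps the aggregate CPU and memory requests and limits in a namespace could look like the sketch below (the values are illustrative, reusing the hypothetical `team-a` namespace from above):

```shell
kubectl apply -n team-a -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF
```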
<!--
@ -168,7 +166,7 @@ In addition to the resources mentioned above, in release 1.10, quota support for
### 扩展资源的资源配额
除上述资源外,在 Kubernetes 1.10 版本中,还添加了对
[扩展资源](/zh/docs/concepts/configuration/manage-resources-containers/#extended-resources)
[扩展资源](/zh-cn/docs/concepts/configuration/manage-resources-containers/#extended-resources)
的支持。
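A sketch of an extended-resource quota; the resource name `nvidia.com/gpu` is only an example — substitute whatever extended resource your cluster actually advertises. For extended resources, only the `requests.` prefix is used, since overcommit is not allowed for them:

```shell
kubectl apply -n team-a -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
spec:
  hard:
    requests.nvidia.com/gpu: "4"
EOF
```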
<!--
@ -202,7 +200,7 @@ In addition, you can limit consumption of storage resources based on associated
-->
## 存储资源配额
用户可以对给定命名空间下的[存储资源](/zh/docs/concepts/storage/persistent-volumes/)
用户可以对给定命名空间下的[存储资源](/zh-cn/docs/concepts/storage/persistent-volumes/)
总量进行限制。
此外还可以根据相关的存储类Storage Class来限制存储资源的消耗。
@ -218,9 +216,9 @@ In addition, you can limit consumption of storage resources based on associated
| 资源名称 | 描述 |
| --------------------- | ----------------------------------------------------------- |
| `requests.storage` | 所有 PVC存储资源的需求总量不能超过该值。 |
| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 |
| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | 在所有与 `<storage-class-name>` 相关的持久卷申领中,存储请求的总和不能超过该值。 |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 |
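A sketch combining several of the keys above; the storage class name `gold` is illustrative:

```shell
kubectl apply -n team-a -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: 100Gi
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
EOF
```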
<!--
For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
@ -258,7 +256,7 @@ Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/)
-->
如果所使用的是 CRI 容器运行时,容器日志会被计入临时存储配额。
这可能会导致存储配额耗尽的 Pods 被意外地驱逐出节点。
参考[日志架构](/zh/docs/concepts/cluster-administration/logging/)
参考[日志架构](/zh-cn/docs/concepts/cluster-administration/logging/)
了解详细信息。
{{< /note >}}
@ -343,7 +341,7 @@ The following types are supported:
| 资源名称 | 描述 |
| ------------------------------- | ------------------------------------------------- |
| `configmaps` | 在该命名空间中允许存在的 ConfigMap 总数上限。 |
| `persistentvolumeclaims` | 在该命名空间中允许存在的 [PVC](/zh/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 的总数上限。 |
| `persistentvolumeclaims` | 在该命名空间中允许存在的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 的总数上限。 |
| `pods` | 在该命名空间中允许存在的非终止状态的 Pod 总数上限。Pod 终止状态等价于 Pod 的 `.status.phase in (Failed, Succeeded)` 为真。 |
| `replicationcontrollers` | 在该命名空间中允许存在的 ReplicationController 总数上限。 |
| `resourcequotas` | 在该命名空间中允许存在的 ResourceQuota 总数上限。 |
@ -396,8 +394,8 @@ Resources specified on the quota outside of the allowed set results in a validat
| `NotTerminating` | 匹配所有 `spec.activeDeadlineSeconds` 是 nil 的 Pod。 |
| `BestEffort` | 匹配所有 QoS 是 BestEffort 的 Pod。 |
| `NotBestEffort` | 匹配所有 QoS 不是 BestEffort 的 Pod。 |
| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pods。 |
| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间 [(反)亲和性条件](/zh/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 |
| `PriorityClass` | 匹配所有引用了所指定的[优先级类](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption)的 Pods。 |
| `CrossNamespacePodAffinity` | 匹配那些设置了跨名字空间 [(反)亲和性条件](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node)的 Pod。 |
<!--
The `BestEffort` scope restricts a quota to tracking the following resource:
@ -485,7 +483,7 @@ Pods can be created at a specific [priority](/docs/concepts/scheduling-eviction/
You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
field in the quota spec.
-->
Pod 可以创建为特定的[优先级](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)。
Pod 可以创建为特定的[优先级](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority)。
通过使用配额规约中的 `scopeSelector` 字段,用户可以根据 Pod 的优先级控制其系统资源消耗。
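A sketch of such a scoped quota, assuming a PriorityClass named `high` already exists in the cluster; it only tracks Pods that reference that priority class:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high-priority
spec:
  hard:
    cpu: "10"
    memory: 20Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
EOF
```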
<!--
@ -1065,7 +1063,7 @@ and it is to be created in a namespace other than `kube-system`.
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
-->
- 查看[资源配额设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)
- 查看[如何使用资源配额的详细示例](/zh/docs/tasks/administer-cluster/quota-api-object/)。
- 查看[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)。
- 阅读[优先级类配额支持的设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)。
了解更多信息。
- 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)

Image file changed (10 KiB); diff suppressed.

Image file changed (8.9 KiB); diff suppressed.


@ -1,122 +1,206 @@
---
title: 使用 Source IP
title: 使用源 IP
content_type: tutorial
min-kubernetes-server-version: v1.5
---
<!--
title: Using Source IP
content_type: tutorial
min-kubernetes-server-version: v1.5
-->
<!-- overview -->
Kubernetes 集群中运行的应用通过 Service 抽象来互相查找、通信和与外部世界沟通。本文介绍被发送到不同类型 Services 的数据包源 IP 的变化过程,你可以根据你的需求改变这些行为。
<!--
Applications running in a Kubernetes cluster find and communicate with each
other, and the outside world, through the Service abstraction. This document
explains what happens to the source IP of packets sent to different types
of Services, and how you can toggle this behavior according to your needs.
-->
运行在 Kubernetes 集群中的应用程序通过 Service 抽象发现彼此并相互通信,它们也用 Service 与外部世界通信。
本文解释了发送到不同类型 Service 的数据包的源 IP 会发生什么情况,以及如何根据需要切换此行为。
## {{% heading "prerequisites" %}}
<!--
### Terminology
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
## 术语表
This document makes use of the following terms:
-->
## 术语表 {#terminology}
本文使用了下列术语:
* [NAT](https://en.wikipedia.org/wiki/Network_address_translation): 网络地址转换
* [Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT): 替换数据包的源 IP, 通常为节点的 IP
* [Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT): 替换数据包的目的 IP, 通常为 Pod 的 IP
* [VIP](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个虚拟 IP, 例如分配给每个 Kubernetes Service 的 IP
* [Kube-proxy](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies): 一个网络守护程序,在每个节点上协调 Service VIP 管理
{{< comment >}}
<!--
If localizing this section, link to the equivalent Wikipedia pages for
the target localization.
-->
如果本地化此部分,请链接到目标本地化的等效 Wikipedia 页面。
{{< /comment >}}
<!--
[NAT](https://en.wikipedia.org/wiki/Network_address_translation)
: network address translation
## 准备工作
[Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
: replacing the source IP on a packet; in this page, that usually means replacing with the IP address of a node.
[Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
: replacing the destination IP on a packet; in this page, that usually means replacing with the IP address of a {{< glossary_tooltip term_id="pod" >}}
你必须拥有一个正常工作的 Kubernetes 1.5 集群来运行此文档中的示例。该示例使用一个简单的 nginx webserver通过一个HTTP消息头返回它接收到请求的源IP。你可以像下面这样创建它
[VIP](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: a virtual IP address, such as the one assigned to every {{< glossary_tooltip text="Service" term_id="service" >}} in Kubernetes
```console
[kube-proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: a network daemon that orchestrates Service VIP management on every node
-->
[NAT](https://zh.wikipedia.org/wiki/%E7%BD%91%E7%BB%9C%E5%9C%B0%E5%9D%80%E8%BD%AC%E6%8D%A2)
: 网络地址转换
[Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
: 替换数据包上的源 IP在本页面中这通常意味着替换为节点的 IP 地址
[Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
: 替换数据包上的目标 IP在本页面中这通常意味着替换为 {{<glossary_tooltip text="Pod" term_id="pod" >}} 的 IP 地址
[VIP](/zh-cn/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: 一个虚拟 IP 地址,例如分配给 Kubernetes 中每个 {{<glossary_tooltip text="Service" term_id="service" >}} 的 IP 地址
[Kube-proxy](/zh-cn/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: 一个网络守护程序,在每个节点上协调 Service VIP 管理
<!--
### Prerequisites
-->
## 先决条件 {#prerequisites}
{{< include "task-tutorial-prereqs.md" >}}
<!--
The examples use a small nginx webserver that echoes back the source
IP of requests it receives through an HTTP header. You can create it as follows:
-->
示例使用一个小型 nginx Web 服务器,服务器通过 HTTP 标头返回它接收到的请求的源 IP。
你可以按如下方式创建它:
```shell
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
```
输出结果为
<!--
The output is:
-->
输出为:
```
deployment.apps/source-ip-app created
```
## {{% heading "objectives" %}}
* 通过多种类型的 Services 暴露一个简单应用
* 理解每种 Service 类型如何处理源 IP NAT
* 理解保留源IP所涉及的折中
<!--
* Expose a simple application through various types of Services
* Understand how each Service type handles source IP NAT
* Understand the tradeoffs involved in preserving source IP
-->
* 通过多种类型的 Service 暴露一个简单应用
* 了解每种 Service 类型如何处理源 IP NAT
* 了解保留源 IP 所涉及的权衡
<!-- lessoncontent -->
<!--
## Source IP for Services with `Type=ClusterIP`
-->
## `Type=ClusterIP` 类型 Service 的源 IP {#source-ip-for-services-with-type-clusterip}
## Type=ClusterIP 类型 Services 的 Source IP
如果你的 kube-proxy 运行在 [iptables 模式](/zh/docs/user-guide/services/#proxy-mode-iptables)下,从集群内部发送到 ClusterIP 的包永远不会进行源地址 NAT这从 Kubernetes 1.2 开始是默认选项。Kube-proxy 通过一个 `proxyMode` endpoint 暴露它的模式。
<!--
Packets sent to ClusterIP from within the cluster are never source NAT'd if
you're running kube-proxy in
[iptables mode](/docs/concepts/services-networking/service/#proxy-mode-iptables),
(the default). You can query the kube-proxy mode by fetching
`http://localhost:10249/proxyMode` on the node where kube-proxy is running.
-->
如果你在 [iptables 模式](/zh-cn/docs/concepts/services-networking/service/#proxy-mode-iptables)(默认)下运行
kube-proxy则从集群内发送到 ClusterIP 的数据包永远不会进行源 NAT。
你可以通过在运行 kube-proxy 的节点上获取 `http://localhost:10249/proxyMode` 来查询 kube-proxy 模式。
```console
kubectl get nodes
```
输出结果与以下结果类似:
<!--
The output is similar to this:
-->
输出类似于:
```
NAME STATUS ROLES AGE VERSION
NAME STATUS ROLES AGE VERSION
kubernetes-node-6jst Ready <none> 2h v1.13.0
kubernetes-node-cx31 Ready <none> 2h v1.13.0
kubernetes-node-jj1t Ready <none> 2h v1.13.0
```
从其中一个节点中得到代理模式
```console
kubernetes-node-6jst $ curl localhost:10249/proxyMode
<!--
Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):
-->
在其中一个节点上获取代理模式kube-proxy 监听 10249 端口):
```shell
# 在要查询的节点上的 shell 中运行
curl http://localhost:10249/proxyMode
```
输出结果为:
<!--
The output is:
-->
输出为:
```
iptables
```
你可以通过在source IP应用上创建一个Service来测试源IP保留。
```console
<!--
You can test source IP preservation by creating a Service over the source IP app:
-->
你可以通过在源 IP 应用程序上创建 Service 来测试源 IP 保留:
```shell
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
```
输出结果为:
<!--
The output is:
-->
输出为:
```
service/clusterip exposed
```
```console
```shell
kubectl get svc clusterip
```
输出结果与以下结果类似:
<!--
The output is similar to this:
-->
输出类似于:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterip ClusterIP 10.0.170.92 <none> 80/TCP 51s
```
从相同集群中的一个 pod 访问这个 `ClusterIP`
<!--
And hitting the `ClusterIP` from a pod in the same cluster:
-->
并从同一集群中的 Pod 中访问 `ClusterIP`
```shell
kubectl run busybox -it --image=busybox:1.28 --restart=Never --rm
```
输出结果与以下结果类似:
<!--
The output is similar to this:
-->
输出类似于:
```
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
```
然后你可以在 Pod 内运行命令:
<!--
You can then run a command inside that Pod:
-->
然后,你可以在该 Pod 中运行命令:
```shell
# 在终端内使用"kubectl run"执行
# 从 “kubectl run” 的终端中运行
ip addr
```
```
@ -134,10 +218,12 @@ ip addr
valid_lft forever preferred_lft forever
```
然后使用 `wget` 去请求本地 Web 服务器
<!--
…then use `wget` to query the local webserver
-->
然后使用 `wget` 查询本地 Web 服务器:
```shell
# 用名为 "clusterip" 的服务的 IPv4 地址替换 "10.0.170.92"
# 将 “10.0.170.92” 替换为 Service 中名为 “clusterip” 的 IPv4 地址
wget -qO - 10.0.170.92
```
```
@ -147,259 +233,414 @@ command=GET
...
```
无论客户端 pod 和 服务端 pod 是否在相同的节点上client_address 始终是客户端 pod 的 IP 地址。
<!--
The `client_address` is always the client pod's IP address, whether the client pod and server pod are in the same node or in different nodes.
-->
`client_address` 始终是客户端 Pod 的 IP 地址,不管客户端 Pod 和服务器 Pod 位于同一节点还是不同节点。
<!--
## Source IP for Services with `Type=NodePort`
## Type=NodePort 类型 Services 的 Source IP
Packets sent to Services with
[`Type=NodePort`](/docs/concepts/services-networking/service/#type-nodeport)
are source NAT'd by default. You can test this by creating a `NodePort` Service:
-->
## `Type=NodePort` 类型 Service 的源 IP {#source-ip-for-services-with-type-nodeport}
从 Kubernetes 1.5 开始,发送给类型为 [Type=NodePort](/zh/docs/user-guide/services/#nodeport) Services 的数据包默认进行源地址 NAT。你可以通过创建一个 `NodePort` Service 来进行测试:
```console
默认情况下,发送到 [`Type=NodePort`](/zh-cn/docs/concepts/services-networking/service/#type-nodeport)
的 Service 的数据包会经过源 NAT 处理。你可以通过创建一个 `NodePort` 的 Service 来测试这点:
```shell
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
```
输出结果为:
<!--
The output is:
-->
输出为:
```
service/nodeport exposed
```
```console
```shell
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }')
```
如果你的集群运行在一个云服务上,你可能需要为上面报告的 `nodes:nodeport` 开启一条防火墙规则。
现在,你可以通过上面分配的节点端口从外部访问这个 Service。
<!--
If you're running on a cloud provider, you may need to open up a firewall-rule
for the `nodes:nodeport` reported above.
Now you can try reaching the Service from outside the cluster through the node
port allocated above.
-->
如果你在云供应商上运行,你可能需要为上面报告的 `nodes:nodeport` 打开防火墙规则。
现在你可以尝试通过上面分配的节点端口从集群外部访问 Service。
```console
```shell
for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
```
输出结果与以下结果类似:
<!--
The output is similar to:
-->
输出类似于:
```
client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3
```
<!--
Note that these are not the correct client IPs, they're cluster internal IPs. This is what happens:
* Client sends packet to `node2:nodePort`
* `node2` replaces the source IP address (SNAT) in the packet with its own IP address
* `node2` replaces the destination IP on the packet with the pod IP
* packet is routed to node 1, and then to the endpoint
* the pod's reply is routed back to node2
* the pod's reply is sent back to the client
Visually:
{{< figure src="/docs/images/tutor-service-nodePort-fig01.svg" alt="source IP nodeport figure 01" class="diagram-large" caption="Figure. Source IP Type=NodePort using SNAT" link="https://mermaid.live/edit#pako:eNqNkV9rwyAUxb-K3LysYEqS_WFYKAzat9GHdW9zDxKvi9RoMIZtlH732ZjSbE970cu5v3s86hFqJxEYfHjRNeT5ZcUtIbXRaMNN2hZ5vrYRqt52cSXV-4iMSuwkZiYtyX739EqWaahMQ-V1qPxDVLNOvkYrO6fj2dupWMR2iiT6foOKdEZoS5Q2hmVSStoH7w7IMqXUVOefWoaG3XVftHbGeZYVRbH6ZXJ47CeL2-qhxvt_ucTe1SUlpuMN6CX12XeGpLdJiaMMFFr0rdAyvvfxjHEIDbbIgcVSohKDCRy4PUV06KQIuJU6OA9MCdMjBTEEt_-2NbDgB7xAGy3i97VJPP0ABRmcqg" >}}
-->
请注意,这些并不是正确的客户端 IP它们是集群的内部 IP。这是所发生的事情
* 客户端发送数据包到 `node2:nodePort`
* `node2` 使用它自己的 IP 地址替换数据包的源 IP 地址SNAT
* `node2` 使用 pod IP 地址替换数据包的目的 IP 地址
* 数据包被路由到 node 1然后交给 endpoint
* `node2` 将数据包上的目标 IP 替换为 Pod IP
* 数据包被路由到 node1然后到端点
* Pod 的回复被路由回 node2
* Pod 的回复被发送回给客户端
用图表示:
{{< figure src="/zh-cn/docs/images/tutor-service-nodePort-fig01.svg" alt="图 1源 IP NodePort" class="diagram-large" caption="如图。使用 SNAT 的源 IPType=NodePort" link="https://mermaid.live/edit#pako:eNqNkV9rwyAUxb-K3LysYEqS_WFYKAzat9GHdW9zDxKvi9RoMIZtlH732ZjSbE970cu5v3s86hFqJxEYfHjRNeT5ZcUtIbXRaMNN2hZ5vrYRqt52cSXV-4iMSuwkZiYtyX739EqWaahMQ-V1qPxDVLNOvkYrO6fj2dupWMR2iiT6foOKdEZoS5Q2hmVSStoH7w7IMqXUVOefWoaG3XVftHbGeZYVRbH6ZXJ47CeL2-qhxvt_ucTe1SUlpuMN6CX12XeGpLdJiaMMFFr0rdAyvvfxjHEIDbbIgcVSohKDCRy4PUV06KQIuJU6OA9MCdMjBTEEt_-2NbDgB7xAGy3i97VJPP0ABRmcqg" >}}
{{< mermaid >}}
graph LR;
client(client)-->node2[节点 2];
node2-->client;
node2-. SNAT .->node1[节点 1];
node1-. SNAT .->node2;
node1-->endpoint(端点);
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
class node1,node2,endpoint k8s;
class client plain;
{{</ mermaid >}}
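If you want to see where this SNAT comes from, you can inspect the NAT rules that kube-proxy programs on a node while running in iptables mode. The chain name below is a kube-proxy implementation detail and may differ between versions:

```shell
# Run on a node, as root. If $NODEPORT is not set in this shell, substitute
# the node port number printed earlier.
sudo iptables -t nat -S KUBE-POSTROUTING                # masquerade (SNAT) rules
sudo iptables -t nat -S | grep -w "$NODEPORT"           # nat rules for the node port
```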
为了防止这种情况发生Kubernetes 提供了一个特性来保留客户端的源 IP 地址[(点击此处查看可用特性)](/zh/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。设置 `service.spec.externalTrafficPolicy` 的值为 `Local`,请求就只会被代理到本地 endpoints 而不会被转发到其它节点。这样就保留了最初的源 IP 地址。如果没有本地 endpoints发送到这个节点的数据包将会被丢弃。这样在应用到数据包的任何包处理规则下你都能依赖这个正确的 source-ip 使数据包通过并到达 endpoint。
<!--
To avoid this, Kubernetes has a feature to
[preserve the client source IP](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip).
If you set `service.spec.externalTrafficPolicy` to the value `Local`,
kube-proxy only proxies proxy requests to local endpoints, and does not
forward traffic to other nodes. This approach preserves the original
source IP address. If there are no local endpoints, packets sent to the
node are dropped, so you can rely on the correct source-ip in any packet
processing rules you might apply a packet that make it through to the
endpoint.
-->
为避免这种情况Kubernetes 有一个特性可以[保留客户端源 IP](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)。
如果将 `service.spec.externalTrafficPolicy` 设置为 `Local`
kube-proxy 只会将代理请求代理到本地端点,而不会将流量转发到其他节点。
这种方法保留了原始源 IP 地址。如果没有本地端点,则发送到该节点的数据包将被丢弃,
因此对于任何最终送达端点的数据包,你都可以在所应用的数据包处理规则中依赖其正确的源 IP。
<!--
Set the `service.spec.externalTrafficPolicy` field as follows:
-->
设置 `service.spec.externalTrafficPolicy` 字段如下:
```console
```shell
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```
输出结果为:
<!--
The output is:
-->
输出为:
```
service/nodeport patched
```
<!--
Now, re-run the test:
-->
现在,重新运行测试:
```console
```shell
for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
```
输出结果为:
<!--
The output is similar to:
-->
输出类似于:
```
client_address=104.132.1.79
client_address=198.51.100.79
```
<!--
Note that you only got one reply, with the *right* client IP, from the one node on which the endpoint pod
is running.
请注意,你只从 endpoint pod 运行的那个节点得到了一个回复,这个回复有*正确的*客户端 IP。
This is what happens:
* client sends packet to `node2:nodePort`, which doesn't have any endpoints
* packet is dropped
* client sends packet to `node1:nodePort`, which *does* have endpoints
* node1 routes packet to endpoint with the correct source IP
Visually:
{{< figure src="/docs/images/tutor-service-nodePort-fig02.svg" alt="source IP nodeport figure 02" class="diagram-large" caption="Figure. Source IP Type=NodePort preserves client source IP address" link="" >}}
-->
请注意,你只从运行端点 Pod 的节点得到了回复,这个回复有**正确的**客户端 IP。
这是发生的事情:
* 客户端发送数据包到 `node2:nodePort`,它没有任何 endpoints
* 客户端将数据包发送到没有任何端点的 `node2:nodePort`
* 数据包被丢弃
* 客户端发送数据包到 `node1:nodePort`,它*有*endpoints
* node1 使用正确的源 IP 地址将数据包路由到 endpoint
* 客户端发送数据包到 `node1:nodePort`,它**确实**有端点
* node1 使用正确的源 IP 地址将数据包路由到端点
用图表示:
{{< mermaid >}}
graph TD;
client --> node1[节点 1];
client(client) --x node2[节点 2];
node1 --> endpoint(端点);
endpoint --> node1;
classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
class node1,node2,endpoint k8s;
class client plain;
{{</ mermaid >}}
{{< figure src="/zh-cn/docs/images/tutor-service-nodePort-fig02.svg" alt="图 2源 IP NodePort" class="diagram-large" caption="如图。源 IPType=NodePort保存客户端源 IP 地址" link="" >}}
<!--
## Source IP for Services with `Type=LoadBalancer`
## Type=LoadBalancer 类型 Services 的 Source IP
Packets sent to Services with
[`Type=LoadBalancer`](/docs/concepts/services-networking/service/#loadbalancer)
are source NAT'd by default, because all schedulable Kubernetes nodes in the
`Ready` state are eligible for load-balanced traffic. So if packets arrive
at a node without an endpoint, the system proxies it to a node *with* an
endpoint, replacing the source IP on the packet with the IP of the node (as
described in the previous section).
-->
## `Type=LoadBalancer` 类型 Service 的 Source IP {#source-ip-for-services-with-type-loadbalancer}
默认情况下,发送到 [`Type=LoadBalancer`](/zh-cn/docs/concepts/services-networking/service/#loadbalancer)
的 Service 的数据包经过源 NAT处理因为所有处于 `Ready` 状态的可调度 Kubernetes
节点对于负载均衡的流量都是符合条件的。
因此,如果数据包到达一个没有端点的节点,系统会将其代理到一个**带有**端点的节点,用该节点的 IP 替换数据包上的源 IP如上一节所述
从Kubernetes1.5开始,发送给类型为 [Type=LoadBalancer](/zh/docs/user-guide/services/#type-nodeport) Services 的数据包默认进行源地址 NAT这是因为所有处于 `Ready` 状态的可调度 Kubernetes 节点对于负载均衡的流量都是符合条件的。所以如果数据包到达一个没有 endpoint 的节点,系统将把这个包代理到*有* endpoint 的节点,并替换数据包的源 IP 为节点的 IP如前面章节所述
<!--
You can test this by exposing the source-ip-app through a load balancer:
-->
你可以通过负载均衡器上暴露 source-ip-app 进行测试:
你可以通过在一个 loadbalancer 上暴露这个 source-ip-app 来进行测试。
```console
```shell
kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
```
输出结果为:
<!--
The output is:
-->
输出为:
```
service/loadbalancer exposed
```
打印Service的IPs
```console
<!--
Print out the IP addresses of the Service:
-->
打印 Service 的 IP 地址:
```shell
kubectl get svc loadbalancer
```
输出结果与以下结果类似:
<!--
The output is similar to this:
-->
输出类似于:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
loadbalancer LoadBalancer 10.0.65.118 104.198.149.140 80/TCP 5m
loadbalancer LoadBalancer 10.0.65.118 203.0.113.140 80/TCP 5m
```
```console
curl 104.198.149.140
<!--
Next, send a request to this Service's external-ip:
-->
接下来,发送请求到此 Service 的外部 IPExternal-IP
```shell
curl 203.0.113.140
```
输出结果与以下结果类似:
<!--
The output is similar to this:
-->
输出类似于:
```
CLIENT VALUES:
client_address=10.240.0.5
...
```
<!--
However, if you're running on Google Kubernetes Engine/GCE, setting the same `service.spec.externalTrafficPolicy`
field to `Local` forces nodes *without* Service endpoints to remove
themselves from the list of nodes eligible for loadbalanced traffic by
deliberately failing health checks.
然而,如果你的集群运行在 Google Kubernetes Engine/GCE 上,可以通过设置 service.spec.externalTrafficPolicy 字段值为 Local ,故意导致健康检查失败来强制使没有 endpoints 的节点把自己从负载均衡流量的可选节点列表中删除。
Visually:
![Source IP with externalTrafficPolicy](/images/docs/sourceip-externaltrafficpolicy.svg)
-->
然而,如果你在 Google Kubernetes Engine/GCE 上运行,
将相同的 `service.spec.externalTrafficPolicy` 字段设置为 `Local`
故意导致健康检查失败,从而强制没有端点的节点把自己从负载均衡流量的可选节点列表中删除。
用图表示:
![Source IP with externalTrafficPolicy](/images/docs/sourceip-externaltrafficpolicy.svg)
![具有 externalTrafficPolicy 的源 IP](/images/docs/sourceip-externaltrafficpolicy.svg)
<!--
You can test this by setting the annotation:
-->
你可以通过设置注解进行测试:
你可以设置 annotation 来进行测试:
```console
```shell
kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```
<!--
You should immediately see the `service.spec.healthCheckNodePort` field allocated
by Kubernetes:
-->
你应该能够立即看到 Kubernetes 分配的 `service.spec.healthCheckNodePort` 字段:
```console
```shell
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
```
输出结果与以下结果类似:
```
<!--
The output is similar to this:
-->
输出类似于:
```yaml
healthCheckNodePort: 32122
```
<!--
The `service.spec.healthCheckNodePort` field points to a port on every node
serving the health check at `/healthz`. You can test this:
-->
`service.spec.healthCheckNodePort` 字段指向每个在 `/healthz`
路径上提供健康检查的节点的端口。你可以这样测试:
`service.spec.healthCheckNodePort` 字段指向每个节点在 `/healthz` 路径上提供的用于健康检查的端口。你可以这样测试:
```console
```shell
kubectl get pod -o wide -l run=source-ip-app
```
输出结果与以下结果类似:
<!--
The output is similar to this:
-->
输出类似于:
```
NAME READY STATUS RESTARTS AGE IP NODE
source-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-node-6jst
```
使用 curl 命令发送请求到每个节点的 `/healthz` 路径。
```console
kubernetes-node-6jst $ curl localhost:32122/healthz
<!--
Use `curl` to fetch the `/healthz` endpoint on various nodes:
-->
使用 `curl` 获取各个节点上的 `/healthz` 端点:
```shell
# 在你选择的节点上本地运行
curl localhost:32122/healthz
```
输出结果与以下结果类似:
```
1 Service Endpoints found
```
```console
kubernetes-node-jj1t $ curl localhost:32122/healthz
<!--
On a different node you might get a different result:
-->
在不同的节点上,你可能会得到不同的结果:
```shell
# 在你选择的节点上本地运行
curl localhost:32122/healthz
```
输出结果与以下结果类似:
```
No Service Endpoints Found
```
<!--
A controller running on the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} is
responsible for allocating the cloud load balancer. The same controller also
allocates HTTP health checks pointing to this port/path on each node. Wait
about 10 seconds for the 2 nodes without endpoints to fail health checks,
then use `curl` to query the IPv4 address of the load balancer:
-->
在{{<glossary_tooltip text="控制平面" term_id="control-plane" >}}上运行的控制器负责分配云负载均衡器。
同一个控制器还在每个节点上分配指向此端口/路径的 HTTP 健康检查。
等待大约 10 秒,让 2 个没有端点的节点健康检查失败,然后使用 `curl` 查询负载均衡器的 IPv4 地址:
主节点运行的 service 控制器负责分配 cloud loadbalancer。在这样做的同时它也会分配指向每个节点的 HTTP 健康检查的 port/path。等待大约 10 秒钟之后,没有 endpoints 的两个节点的健康检查会失败,然后 curl 负载均衡器的 ip
```console
curl 104.198.149.140
```shell
curl 203.0.113.140
```
输出结果与以下结果类似:
<!--
The output is similar to this:
-->
输出类似于:
```
CLIENT VALUES:
client_address=104.132.1.79
client_address=198.51.100.79
...
```
<!--
## Cross-platform support
__跨平台支持__
Only some cloud providers offer support for source IP preservation through
Services with `Type=LoadBalancer`.
The cloud provider you're running on might fulfill the request for a loadbalancer
in a few different ways:
-->
## 跨平台支持 {#cross-platform-support}
只有部分云提供商为 `Type=LoadBalancer` 的 Service 提供保存源 IP 的支持。
你正在运行的云提供商可能会以几种不同的方式满足对负载均衡器的请求:
从 Kubernetes 1.5 开始,通过类型为 Type=LoadBalancer 的 Services 进行源 IP 保存的支持仅在一部分 cloudproviders 中实现GCP and Azure。你的集群运行的 cloudprovider 可能以某些不同的方式满足 loadbalancer 的要求:
<!--
1. With a proxy that terminates the client connection and opens a new connection
to your nodes/endpoints. In such cases the source IP will always be that of the
cloud LB, not that of the client.
2. With a packet forwarder, such that requests from the client sent to the
loadbalancer VIP end up at the node with the source IP of the client, not
an intermediate proxy.
-->
1. 使用终止客户端连接并打开到你的节点/端点的新连接的代理。
在这种情况下,源 IP 将始终是云 LB 的源 IP而不是客户端的源 IP。
1. 使用一个代理终止客户端连接并打开一个到你的 nodes/endpoints 的新连接。在这种情况下,源 IP 地址将永远是云负载均衡器的地址而不是客户端的。
2. 使用一个包转发器,因此从客户端发送到负载均衡器 VIP 的请求在拥有客户端源 IP 地址的节点终止,而不被中间代理。
第一类负载均衡器必须使用一种它和后端之间约定的协议来和真实的客户端 IP 通信,例如 HTTP [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For) 头,或者 [proxy 协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)。
第二类负载均衡器可以通过简单的在保存于 Service 的 `service.spec.healthCheckNodePort` 字段上创建一个 HTTP 健康检查点来使用上面描述的特性。
2. 使用数据包转发器,这样客户端发送到负载均衡器 VIP
的请求最终会到达具有客户端源 IP 的节点,而不是中间代理。
<!--
Load balancers in the first category must use an agreed upon
protocol between the loadbalancer and backend to communicate the true client IP
such as the HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2)
or [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
headers, or the
[proxy protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
Load balancers in the second category can leverage the feature described above
by creating an HTTP health check pointing at the port stored in
the `service.spec.healthCheckNodePort` field on the Service.
-->
第一类负载均衡器必须使用负载均衡器和后端之间商定的协议来传达真实的客户端 IP
例如 HTTP [转发](https://tools.ietf.org/html/rfc7239#section-5.2)或
[X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
表头,或[代理协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)。
第二类负载均衡器可以通过创建指向存储在 Service 上的 `service.spec.healthCheckNodePort`
字段中的端口的 HTTP 健康检查来利用上述功能。
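With a category 1 (proxying) load balancer, you can check whether the original client IP is being passed along in a header. The echoserver used in this tutorial prints the request headers it receives, so a query like the sketch below surfaces any `Forwarded` / `X-Forwarded-For` header; whether such a header is present, and its exact name, depends entirely on your provider:

```shell
# Replace 203.0.113.140 with your load balancer's external IP.
curl -s http://203.0.113.140/ | grep -i forwarded
```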
## {{% heading "cleanup" %}}
删除服务:
<!--
Delete the Services:
-->
删除 Service
```console
$ kubectl delete svc -l app=source-ip-app
```
<!--
Delete the Deployment, ReplicaSet and Pod:
-->
删除 Deployment、ReplicaSet 和 Pod
```console
$ kubectl delete deployment source-ip-app
```
## {{% heading "whatsnext" %}}
* 进一步学习 [通过 services 连接应用](/zh/docs/concepts/services-networking/connect-applications-service/)
<!--
* Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/)
* Read how to [Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
-->
* 详细了解[通过 Service 连接应用程序](/zh-cn/docs/concepts/services-networking/connect-applications-service/)
* 阅读如何[创建外部负载均衡器](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/)