diff --git a/content/zh-cn/docs/images/sourceip-externaltrafficpolicy.svg b/content/zh-cn/docs/images/sourceip-externaltrafficpolicy.svg
new file mode 100644
index 0000000000..c983406d57
--- /dev/null
+++ b/content/zh-cn/docs/images/sourceip-externaltrafficpolicy.svg
@@ -0,0 +1,455 @@
+
+
diff --git a/content/zh-cn/docs/tutorials/services/source-ip.md b/content/zh-cn/docs/tutorials/services/source-ip.md
index 9b8ab74dbd..ae6ef88dce 100644
--- a/content/zh-cn/docs/tutorials/services/source-ip.md
+++ b/content/zh-cn/docs/tutorials/services/source-ip.md
@@ -93,6 +93,7 @@ kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4
The output is:
-->
输出为:
+
```
deployment.apps/source-ip-app created
```
@@ -133,6 +134,7 @@ kubectl get nodes
The output is similar to this:
-->
输出类似于:
+
```
NAME                   STATUS   ROLES    AGE   VERSION
kubernetes-node-6jst   Ready    <none>   2h    v1.13.0
@@ -144,14 +146,19 @@ kubernetes-node-jj1t Ready 2h v1.13.0
Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):
-->
在其中一个节点上获取代理模式(kube-proxy 监听 10249 端口):
+
+
```shell
-# 在要查询的节点上的 shell 中运行
+# 在要查询的节点上的 Shell 中运行
curl http://localhost:10249/proxyMode
```
输出为:
+
```
iptables
```
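
上面的 `/proxyMode` 端点返回的是一个纯文本的模式字符串。下面是对该响应做分支判断的一个最小示意(其中 `mode=iptables` 是代替实际 curl 输出的假设值;`ipvs` 和 `nftables` 是 kube-proxy 可能报告的其他模式):

```shell
# 代替 `curl http://localhost:10249/proxyMode` 实际输出的假设值
mode=iptables

# 根据报告的代理模式进行分支
case "$mode" in
  iptables|ipvs|nftables) echo "kube-proxy mode: $mode" ;;
  *) echo "unrecognized mode: $mode" ;;
esac
```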
@@ -160,6 +167,7 @@ iptables
You can test source IP preservation by creating a Service over the source IP app:
-->
你可以通过在源 IP 应用程序上创建 Service 来测试源 IP 保留:
+
```shell
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
```
@@ -167,9 +175,11 @@ kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port
The output is:
-->
输出为:
+
```
service/clusterip exposed
```
+
```shell
kubectl get svc clusterip
```
@@ -177,6 +187,7 @@ kubectl get svc clusterip
The output is similar to:
-->
输出类似于:
+
```
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
clusterip   ClusterIP   10.0.170.92   <none>        80/TCP    51s
@@ -186,6 +197,7 @@ clusterip ClusterIP 10.0.170.92 80/TCP 51s
And hitting the `ClusterIP` from a pod in the same cluster:
-->
并从同一集群中的 Pod 中访问 `ClusterIP`:
+
```shell
kubectl run busybox -it --image=busybox:1.28 --restart=Never --rm
```
@@ -193,6 +205,7 @@ kubectl run busybox -it --image=busybox:1.28 --restart=Never --rm
The output is similar to this:
-->
输出类似于:
+
```
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
@@ -201,6 +214,10 @@ If you don't see a command prompt, try pressing enter.
You can then run a command inside that Pod:
-->
然后,你可以在该 Pod 中运行命令:
+
+
```shell
# 从 “kubectl run” 的终端中运行
ip addr
@@ -224,10 +241,15 @@ ip addr
…then use `wget` to query the local webserver
-->
然后使用 `wget` 查询本地 Web 服务器:
+
+
```shell
# 将 “10.0.170.92” 替换为 Service 中名为 “clusterip” 的 IPv4 地址
wget -qO - 10.0.170.92
```
+
```
CLIENT VALUES:
client_address=10.244.3.8
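
echoserver 的响应以 `client_address=` 行的形式报告源地址。下面是从一段捕获的响应中抽取该字段的小示意(`response` 是上面输出的样例,并非实时结果):

```shell
# 上面响应正文的样例(已截断,假设值)
response='CLIENT VALUES:
client_address=10.244.3.8
command=GET'

# 只抽取服务器端看到的源地址
echo "$response" | grep '^client_address=' | cut -d= -f2
```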
@@ -251,6 +273,7 @@ are source NAT'd by default. You can test this by creating a `NodePort` Service:
默认情况下,发送到 [`Type=NodePort`](/zh-cn/docs/concepts/services-networking/service/#type-nodeport)
的 Service 的数据包会经过源 NAT 处理。你可以通过创建一个 `NodePort` 的 Service 来测试这点:
+
```shell
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
```
@@ -258,6 +281,7 @@ kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=
The output is:
-->
输出为:
+
```
service/nodeport exposed
```
@@ -283,6 +307,7 @@ for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
The output is similar to:
-->
输出类似于:
+
```
client_address=10.180.1.1
client_address=10.240.0.5
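
上面循环得到的 `client_address` 都是节点内部地址,而不是真实客户端 IP。下面的示意用样例数据统计不同源地址的个数,说明每个节点都把流量 SNAT 成了自己的地址(`addresses` 是上面输出的样例,并非实时结果):

```shell
# 上面输出的样例数据(假设值)
addresses='client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3'

# 统计 SNAT 之后出现的不同源地址个数
echo "$addresses" | cut -d= -f2 | sort -u | wc -l
```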
@@ -344,6 +369,7 @@ kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
The output is:
-->
输出为:
+
```
service/nodeport patched
```
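
上面的 `kubectl patch` 发送的是一个 JSON merge patch。下面的示意只用基础 shell 工具在本地检查该载荷中的字段取值,不访问集群:

```shell
# 上面 kubectl patch 命令所发送的 JSON merge patch 载荷
patch='{"spec":{"externalTrafficPolicy":"Local"}}'

# 在应用之前,用 sed 抽取 externalTrafficPolicy 的取值做个粗略检查
echo "$patch" | sed 's/.*"externalTrafficPolicy":"\([^"]*\)".*/\1/'
```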
@@ -474,7 +500,7 @@ Visually:
用图表示:
-![具有 externalTrafficPolicy 的源 IP](/images/docs/sourceip-externaltrafficpolicy.svg)
+![具有 externalTrafficPolicy 的源 IP](/zh-cn/docs/images/sourceip-externaltrafficpolicy.svg)