---
title: Using Source IP
content_template: templates/tutorial
min-kubernetes-server-version: v1.5
---

{{% capture overview %}}

Applications running in a Kubernetes cluster find and communicate with each
other, and the outside world, through the Service abstraction. This document
explains what happens to the source IP of packets sent to different types
of Services, and how you can toggle this behavior according to your needs.

{{% /capture %}}

{{% capture prerequisites %}}

### Terminology

This document makes use of the following terms:

{{< comment >}}
If localizing this section, link to the equivalent Wikipedia pages for
the target localization.
{{< /comment >}}

[NAT](https://en.wikipedia.org/wiki/Network_address_translation)
: network address translation

[Source NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT)
: replacing the source IP on a packet; in this page, that usually means replacing it with the IP address of a node.

[Destination NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT)
: replacing the destination IP on a packet; in this page, that usually means replacing it with the IP address of a {{< glossary_tooltip term_id="pod" >}}

[VIP](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: a virtual IP address, such as the one assigned to every {{< glossary_tooltip text="Service" term_id="service" >}} in Kubernetes

[kube-proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)
: a network daemon that orchestrates Service VIP management on every node

### Prerequisites

{{< include "task-tutorial-prereqs.md" >}}

The examples use a small nginx webserver that echoes back the source
IP of requests it receives through an HTTP header. You can create it as follows:

```shell
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
```

The output is:

```
deployment.apps/source-ip-app created
```
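
As an optional check before continuing, you can confirm that the pod behind the Deployment is running. This is a sketch that assumes the default namespace and relies on the `app=source-ip-app` label that `kubectl create deployment` applies:

```shell
# Optional sanity check; assumes the default namespace.
# The label below is the one kubectl create deployment applies automatically.
kubectl get pods -l app=source-ip-app
```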

{{% /capture %}}

{{% capture objectives %}}

* Expose a simple application through various types of Services
* Understand how each Service type handles source IP NAT
* Understand the tradeoffs involved in preserving source IP

{{% /capture %}}

{{% capture lessoncontent %}}

## Source IP for Services with `Type=ClusterIP`

Packets sent to a ClusterIP from within the cluster are never source NAT'd if
you're running kube-proxy in
[iptables mode](/docs/concepts/services-networking/service/#proxy-mode-iptables)
(the default). You can query the kube-proxy mode by fetching
`http://localhost:10249/proxyMode` on the node where kube-proxy is running.

```shell
kubectl get nodes
```

The output is similar to this:

```
NAME                   STATUS   ROLES    AGE   VERSION
kubernetes-node-6jst   Ready    <none>   2h    v1.13.0
kubernetes-node-cx31   Ready    <none>   2h    v1.13.0
kubernetes-node-jj1t   Ready    <none>   2h    v1.13.0
```

Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):

```shell
# Run this in a shell on the node you want to query.
curl http://localhost:10249/proxyMode
```

The output is:

```
iptables
```

You can test source IP preservation by creating a Service over the source IP app:

```shell
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
```

The output is:

```
service/clusterip exposed
```

```shell
kubectl get svc clusterip
```

The output is similar to:

```
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
clusterip   ClusterIP   10.0.170.92   <none>        80/TCP    51s
```

And hitting the `ClusterIP` from a pod in the same cluster:

```shell
kubectl run busybox -it --image=busybox --restart=Never --rm
```

The output is similar to this:

```
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
```

You can then run a command inside that Pod:

```shell
# Run this inside the terminal from "kubectl run"
ip addr
```

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue
    link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff
    inet 10.244.3.8/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::188a:84ff:feb0:26a5/64 scope link
       valid_lft forever preferred_lft forever
```

…then use `wget` to query the local webserver:

```shell
# Replace "10.0.170.92" with the IPv4 address of the Service named "clusterip"
wget -qO - 10.0.170.92
```

```
CLIENT VALUES:
client_address=10.244.3.8
command=GET
...
```

The `client_address` is always the client pod's IP address, whether the client pod and the server pod are on the same node or on different nodes.
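
If you want to peek at the mechanism behind this, one way — a hedged sketch that assumes iptables-mode kube-proxy and root access on a node — is to inspect the NAT rules kube-proxy programs, using the example cluster IP from above:

```shell
# Run in a shell on a node (requires root). KUBE-SERVICES is kube-proxy's
# entry chain in the nat table; replace 10.0.170.92 with your Service's
# cluster IP. You should see a jump to a per-Service chain that DNATs to
# pod IPs, with no SNAT applied for in-cluster clients.
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.0.170.92
```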

## Source IP for Services with `Type=NodePort`

Packets sent to Services with
[`Type=NodePort`](/docs/concepts/services-networking/service/#nodeport)
are source NAT'd by default. You can test this by creating a `NodePort` Service:

```shell
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
```

The output is:

```
service/nodeport exposed
```

```shell
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalIP")].address }')
```
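
If your nodes have no `ExternalIP` addresses (common on bare-metal or local clusters), a variant of the same query collects the `InternalIP` addresses instead; you would then run the test below from a machine that can reach those addresses:

```shell
# Hedged variant for clusters whose nodes expose only InternalIP addresses.
NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }')
```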

If you're running on a cloud provider, you may need to open up a firewall rule
for the `nodes:nodeport` combination reported above.
Now you can try reaching the Service from outside the cluster through the node
port allocated above.

```shell
for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
```

The output is similar to:

```
client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3
```

Note that these are not the correct client IPs; they're cluster-internal IPs. This is what happens:

* client sends packet to `node2:nodePort`
* `node2` replaces the source IP address (SNAT) in the packet with its own IP address
* `node2` replaces the destination IP on the packet with the pod IP
* packet is routed to node 1, and then to the endpoint
* the pod's reply is routed back to node2
* the pod's reply is sent back to the client

Visually:

```
          client
             \ ^
              \ \
               v \
   node 1 <--- node 2
    | ^   SNAT
    | |   --->
    v |
 endpoint
```
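
Under iptables-mode kube-proxy, the SNAT step in this diagram is implemented with a `MASQUERADE` rule. As a rough check — a sketch assuming root access on a node — you can look at the chain kube-proxy uses at post-routing:

```shell
# KUBE-POSTROUTING is where iptables-mode kube-proxy masquerades (SNATs)
# packets it has marked for SNAT.
sudo iptables -t nat -L KUBE-POSTROUTING -n -v
```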

To avoid this, Kubernetes has a feature to
[preserve the client source IP](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip).
If you set `service.spec.externalTrafficPolicy` to the value `Local`,
kube-proxy only proxies requests to local endpoints, and does not
forward traffic to other nodes. This approach preserves the original
source IP address. If there are no local endpoints, packets sent to the
node are dropped, so you can rely on the correct source IP in any packet
processing rules you might apply to packets that make it through to the
endpoint.

Set the `service.spec.externalTrafficPolicy` field as follows:

```shell
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

The output is:

```
service/nodeport patched
```
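
If you manage the Service declaratively, the same field can be set in a manifest instead of patching. A minimal sketch — the selector assumes the `app=source-ip-app` label that `kubectl create deployment` applied:

```shell
# Declarative equivalent of the patch above (same names and ports as the
# examples on this page; adjust to your app).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: source-ip-app
  ports:
  - port: 80
    targetPort: 8080
EOF
```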

Now, re-run the test:

```shell
for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
```

The output is similar to:

```
client_address=198.51.100.79
```

Note that you only got one reply, with the *right* client IP, from the one node on which the endpoint pod
is running.

This is what happens:

* client sends packet to `node2:nodePort`, which doesn't have any endpoints
* packet is dropped
* client sends packet to `node1:nodePort`, which *does* have endpoints
* node1 routes packet to endpoint with the correct source IP

Visually:

```
        client
       ^ /   \
      / /     \
     / v       X
   node 1     node 2
    ^ |
    | |
    | v
 endpoint
```

## Source IP for Services with `Type=LoadBalancer`

Packets sent to Services with
[`Type=LoadBalancer`](/docs/concepts/services-networking/service/#loadbalancer)
are source NAT'd by default, because all schedulable Kubernetes nodes in the
`Ready` state are eligible for load-balanced traffic. So if packets arrive
at a node without an endpoint, the system proxies them to a node *with* an
endpoint, replacing the source IP on the packet with the IP of the node (as
described in the previous section).

You can test this by exposing the source-ip-app through a load balancer:

```shell
kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
```

The output is:

```
service/loadbalancer exposed
```

Print out the IP addresses of the Service:

```shell
kubectl get svc loadbalancer
```

The output is similar to this:

```
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)   AGE
loadbalancer   LoadBalancer   10.0.65.118   203.0.113.140   80/TCP    5m
```

Next, send a request to this Service's external IP:

```shell
curl 203.0.113.140
```

The output is similar to this:

```
CLIENT VALUES:
client_address=10.240.0.5
...
```

However, if you're running on Google Kubernetes Engine/GCE, setting the same `service.spec.externalTrafficPolicy`
field to `Local` forces nodes *without* Service endpoints to remove
themselves from the list of nodes eligible for load-balanced traffic by
deliberately failing health checks.

Visually:

```
                      client
                        |
                      lb VIP
                     / ^
                    v /
health check --->   node 1   node 2 <--- health check
        200  <---   ^ |             ---> 500
                    | V
                 endpoint
```

You can test this by setting the field:

```shell
kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

You should immediately see the `service.spec.healthCheckNodePort` field allocated
by Kubernetes:

```shell
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
```

The output is similar to this:

```yaml
  healthCheckNodePort: 32122
```
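
Alternatively, a structured query avoids grepping through YAML; this sketch uses the standard `jsonpath` output format:

```shell
# Same information via jsonpath instead of grep.
kubectl get svc loadbalancer -o jsonpath='{.spec.healthCheckNodePort}'
```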

The `service.spec.healthCheckNodePort` field points to a port on every node
serving the health check at `/healthz`. You can test this:

```shell
kubectl get pod -o wide -l app=source-ip-app
```

The output is similar to this:

```
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE
source-ip-app-826191075-qehz4   1/1     Running   0          20h   10.180.1.136   kubernetes-node-6jst
```

Use `curl` to fetch the `/healthz` endpoint on various nodes:

```shell
# Run this locally on a node you choose
curl localhost:32122/healthz
```

```
1 Service Endpoints found
```

On a different node you might get a different result:

```shell
# Run this locally on a node you choose
curl localhost:32122/healthz
```

```
No Service Endpoints Found
```

A controller running on the
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} is
responsible for allocating the cloud load balancer. The same controller also
allocates HTTP health checks pointing to this port/path on each node. Wait
about 10 seconds for the 2 nodes without endpoints to fail health checks,
then use `curl` to query the IPv4 address of the load balancer:

```shell
curl 203.0.113.140
```

The output is similar to this:

```
CLIENT VALUES:
client_address=198.51.100.79
...
```

## Cross-platform support

Only some cloud providers offer support for source IP preservation through
Services with `Type=LoadBalancer`.
The cloud provider you're running on might fulfill the request for a load balancer
in a few different ways:

1. With a proxy that terminates the client connection and opens a new connection
   to your nodes/endpoints. In such cases the source IP will always be that of the
   cloud LB, not that of the client.

2. With a packet forwarder, such that requests from the client sent to the
   load balancer VIP end up at the node with the source IP of the client, not
   an intermediate proxy.

Load balancers in the first category must use an agreed-upon
protocol between the load balancer and backend to communicate the true client IP,
such as the HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2)
or [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
headers, or the
[proxy protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
Load balancers in the second category can leverage the feature described above
by creating an HTTP health check pointing at the port stored in
the `service.spec.healthCheckNodePort` field on the Service.
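
You can probe which category your load balancer falls into using the echo server, since it prints back the request headers it receives. A hedged check, reusing the example external IP from above:

```shell
# If the load balancer is a terminating proxy (first category), the true
# client IP typically arrives in a header rather than as client_address.
curl -s 203.0.113.140 | grep -i forwarded
```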

{{% /capture %}}

{{% capture cleanup %}}

Delete the Services:

```shell
kubectl delete svc -l app=source-ip-app
```

Delete the Deployment, ReplicaSet and Pod:

```shell
kubectl delete deployment source-ip-app
```

{{% /capture %}}

{{% capture whatsnext %}}

* Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/)
* Read how to [Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)

{{% /capture %}}