Remove references to kube-proxy userspace mode
parent 98973fdcba
commit 37ee1e335c
@ -46,11 +46,6 @@ The support of multihomed SCTP associations requires that the CNI plugin can sup
NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.

{{< note >}}
The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
{{< /note >}}

### `TCP` {#protocol-tcp}

You can use TCP for any kind of Service, and it's the default network protocol.
@ -61,63 +61,6 @@ Note that the kube-proxy starts up in different modes, which are determined by i
- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup.
  For example, if your operating system doesn't allow you to run iptables commands,
  the standard kernel kube-proxy implementation will not work.
  Likewise, if you have an operating system which doesn't support `netsh`,
  it will not run in Windows userspace mode.

### User space proxy mode {#proxy-mode-userspace}

{{< feature-state for_k8s_version="v1.23" state="deprecated" >}}

This (legacy) mode uses iptables to install interception rules, and then performs
traffic forwarding with the assistance of the kube-proxy tool.
The kube-proxy watches the Kubernetes control plane for the addition, modification
and removal of Service and EndpointSlice objects. For each Service, the kube-proxy
opens a port (randomly chosen) on the local node. Any connections to this _proxy port_
are proxied to one of the Service's backend Pods (as reported via
EndpointSlices). The kube-proxy takes the `sessionAffinity` setting of the Service into
account when deciding which backend Pod to use.

The user-space proxy installs iptables rules which capture traffic to the
Service's `clusterIP` (which is virtual) and `port`. Those rules redirect that
traffic to the proxy port, which proxies it on to the backend Pod.

By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.

{{< figure src="/images/docs/services-userspace-overview.svg" title="Services overview diagram for userspace proxy" class="diagram-medium" >}}

#### Example {#packet-processing-userspace}

As an example, consider the image processing application described [earlier](#example)
in the page.
When the backend Service is created, the Kubernetes control plane assigns a virtual
IP address, for example 10.0.0.1. Assuming the Service port is 1234, the
Service is observed by all of the kube-proxy instances in the cluster.
When a proxy sees a new Service, it opens a new random port, establishes an
iptables redirect from the virtual IP address to this new port, and starts accepting
connections on it.

When a client connects to the Service's virtual IP address, the iptables
rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.

This means that Service owners can choose any port they want without risk of
collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing.
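
For this example, the interception rules installed by the userspace proxy would look roughly like the following sketch. The `KUBE-PORTALS-CONTAINER` and `KUBE-PORTALS-HOST` chain names are the ones kube-proxy creates in this mode, but the proxy port 36000 and the node IP 10.240.0.4 are invented for illustration:

```none
# Traffic from Pods (the "container" path) is redirected to the local proxy port
-A KUBE-PORTALS-CONTAINER -d 10.0.0.1/32 -p tcp -m tcp --dport 1234 -j REDIRECT --to-ports 36000
# Traffic originating on the host itself is DNATed to the node's own IP and proxy port
-A KUBE-PORTALS-HOST -d 10.0.0.1/32 -p tcp -m tcp --dport 1234 -j DNAT --to-destination 10.240.0.4:36000
```

Either way, the packet lands on the kube-proxy process itself, which then copies the traffic to the chosen backend in userspace.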

#### Scaling challenges {#scaling-challenges-userspace}

Using the userspace proxy for VIPs works at small to medium scale, but will
not scale to very large clusters with thousands of Services. The
[original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107)
has more details on this.

Using the userspace proxy obscures the source IP address of a packet accessing
a Service.
This makes some kinds of network filtering (firewalling) impossible. The iptables
proxy mode does not obscure in-cluster source IPs, but it does still impact
clients coming through a load balancer or node-port.

### `iptables` proxy mode {#proxy-mode-iptables}

@ -135,7 +78,7 @@ is handled by Linux netfilter without the need to switch between userspace and t
kernel space. This approach is also likely to be more reliable.

If kube-proxy is running in iptables mode and the first Pod that's selected
does not respond, the connection fails. This is different from the old `userspace`
mode: in that scenario, kube-proxy would detect that the connection to the first
Pod had failed and would automatically retry with a different backend Pod.

@ -148,7 +91,8 @@ having traffic sent via kube-proxy to a Pod that's known to have failed.

#### Example {#packet-processing-iptables}

Again, consider the image processing application described [earlier](#example).
When the backend Service is created, the Kubernetes control plane assigns a virtual
IP address, for example 10.0.0.1. For this example, assume that the
Service port is 1234.

@ -162,10 +106,7 @@ endpoint rules redirect traffic (using destination NAT) to the backends.

When a client connects to the Service's virtual IP address the iptables rule kicks in.
A backend is chosen (either based on session affinity or randomly) and packets are
redirected to the backend without rewriting the client IP address.

This same basic flow executes when traffic comes in through a node-port or
through a load-balancer, though in those cases the client IP address does get altered.
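
The ruleset kube-proxy builds for this follows a chain-per-Service, chain-per-endpoint layout. A simplified sketch for the example Service 10.0.0.1:1234 with two backends (the chain-name suffixes `EXAMPLE`, `BACKEND1`, and `BACKEND2` stand in for the hashes kube-proxy actually generates; the Pod addresses are illustrative):

```none
# Entry point: match the Service's virtual IP and port
-A KUBE-SERVICES -d 10.0.0.1/32 -p tcp -m tcp --dport 1234 -j KUBE-SVC-EXAMPLE
# Per-Service chain: pick a backend at random (50/50 for two endpoints)
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-BACKEND1
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-BACKEND2
# Per-endpoint chains: DNAT to the chosen Pod, leaving the source IP intact
-A KUBE-SEP-BACKEND1 -p tcp -m tcp -j DNAT --to-destination 10.244.0.5:9376
-A KUBE-SEP-BACKEND2 -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9376
```

Because only the destination address is rewritten, in-cluster clients keep their source IP all the way to the backend Pod.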
@ -527,7 +527,6 @@ should see something like:

```none
I1027 22:14:53.995134 5063 server.go:200] Running in resource-only container "/kube-proxy"
I1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.
I1027 22:14:53.999055 5063 server.go:255] Tearing down userspace rules. Errors here are acceptable.
I1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns-tcp" to [10.244.1.3:53]
I1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns" to [10.244.1.3:53]
I1027 22:14:54.038209 5063 proxier.go:352] Setting endpoints for "default/kubernetes:https" to [10.240.0.2:443]
```

@ -549,8 +548,7 @@ and then retry.

Kube-proxy can run in one of a few modes. In the log listed above, the
line `Using iptables Proxier` indicates that kube-proxy is running in
"iptables" mode. The most common other mode is "ipvs".

#### Iptables mode

@ -602,24 +600,6 @@ endpoint, it will create corresponding real servers. In this example, service
hostnames(`10.0.1.175:80`) has 3 endpoints(`10.244.0.5:9376`,
`10.244.0.6:9376`, `10.244.0.7:9376`).

#### Userspace mode

In rare cases, you may be using "userspace" mode. From your Node:

```shell
iptables-save | grep hostnames
```
```none
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
```

There should be 2 rules for each port of your Service (only one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST".

Almost nobody should be using the "userspace" mode any more, so you won't spend
more time on it here.

### Is kube-proxy proxying?

Assuming you do see one of the above cases, try again to access your Service by
@ -632,20 +612,6 @@ curl 10.0.1.175:80
hostnames-632524106-bbpiw
```

If this fails and you are using the userspace proxy, you can try accessing the
proxy directly. If you are using the iptables proxy, skip this section.

Look back at the `iptables-save` output above, and extract the
port number that `kube-proxy` is using for your Service. In the above
examples it is "48577". Now connect to that:

```shell
curl localhost:48577
```
```none
hostnames-632524106-tlaok
```

If this still fails, look at the `kube-proxy` logs for specific lines like: