Update service.md
parent 03eb37c54e · commit ef8908fbb1
@@ -117,7 +117,7 @@ subsets:
- port: 9376
```

**NOTE:** Endpoint IPs may not be loopback (127.0.0.0/8), link-local
(169.254.0.0/16), or link-local multicast (224.0.0.0/24).

Accessing a `Service` without a selector works the same as if it had a selector.
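For illustration, a minimal sketch of a `Service` without a selector (the name and ports here are assumed, not from the surrounding example) would be paired with a manually created `Endpoints` object like the one above:

```yaml
kind: Service
apiVersion: v1
metadata:
  # Illustrative name; a manually created Endpoints object must use the same name.
  name: my-service
spec:
  # Note: no selector field, so Kubernetes creates no Endpoints automatically.
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```
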

@@ -151,13 +151,11 @@ its pods, add appropriate selectors or endpoints and change the service `type`.

Every node in a Kubernetes cluster runs a `kube-proxy`. `kube-proxy` is
responsible for implementing a form of virtual IP for `Services` of type other
than `ExternalName`.

In Kubernetes v1.0, `Services` were a "layer 4" (TCP/UDP over IP) construct and
the proxy was purely in userspace. In Kubernetes v1.1, the `Ingress` API was
added (beta) to represent "layer 7" (HTTP) services, and an iptables proxy was
added as well; it became the default operating mode in Kubernetes v1.2. In
Kubernetes v1.9-alpha, an ipvs proxy was added.

### Proxy-mode: userspace

@@ -169,37 +167,20 @@ will be proxied to one of the `Service`'s backend `Pods` (as reported in

`SessionAffinity` of the `Service`. Lastly, it installs iptables rules which
capture traffic to the `Service`'s `clusterIP` (which is virtual) and `Port`
and redirects that traffic to the proxy port, which proxies to the backend `Pod`.

The net result is that any traffic bound for the `Service`'s IP:Port is proxied
to an appropriate backend without the clients knowing anything about Kubernetes
or `Services` or `Pods`.

By default, the choice of backend is round robin.
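As an illustrative sketch (not kube-proxy's actual code), round-robin selection over a hypothetical endpoint list looks like:

```python
from itertools import cycle

# Hypothetical backend endpoints for a Service; a round-robin proxy
# rotates through them in order, wrapping around at the end.
endpoints = ["10.0.0.1:9376", "10.0.0.2:9376", "10.0.0.3:9376"]
picker = cycle(endpoints)

# Six picks make two full passes: each backend is chosen twice, in order.
choices = [next(picker) for _ in range(6)]
```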

*(diagram: userspace proxy overview)*

### Proxy-mode: iptables

In this mode, kube-proxy watches the Kubernetes master for the addition and
removal of `Service` and `Endpoints` objects. For each `Service`, it installs
iptables rules which capture traffic to the `Service`'s `clusterIP` (which is
virtual) and `Port` and redirects that traffic to one of the `Service`'s
backend sets. For each `Endpoints` object, it installs iptables rules which
select a backend `Pod`. By default, the choice of backend is random.
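A uniform random choice can be built from a chain of probabilistic iptables rules (e.g. via the `statistic` match), where rule i of n fires with probability 1/(n-i) and traffic that misses falls through to the next rule. A sketch of the arithmetic, with the endpoint count assumed:

```python
# Sketch: with n backends, rule i (0-based) matches with probability
# 1/(n - i); unmatched traffic falls through to rule i+1. The overall
# probability of landing on any given backend works out to 1/n.
def rule_probabilities(n):
    return [1.0 / (n - i) for i in range(n)]

def overall_probabilities(n):
    overall, remaining = [], 1.0
    for p in rule_probabilities(n):
        overall.append(remaining * p)  # reach this rule AND match it
        remaining *= 1.0 - p           # fall through to the next rule
    return overall
```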
As with the userspace proxy, the net result is that any traffic bound for the
`Service`'s IP:Port is proxied to an appropriate backend without the clients
knowing anything about Kubernetes or `Services` or `Pods`. Because iptables
need not switch between userspace and kernelspace, this mode should be
faster and more reliable than the userspace proxy. However, unlike the
userspace proxier, the iptables proxier cannot automatically retry another
`Pod` if the one it initially selects does not respond, so it depends on

@@ -231,12 +212,21 @@ options for load balancing algorithm, such as:

- nq: never queue

**Note:** ipvs mode assumes IPVS kernel modules are installed on the node
before running kube-proxy. When kube-proxy starts with ipvs proxy mode,
kube-proxy validates whether IPVS modules are installed on the node; if they
are not, kube-proxy falls back to iptables proxy mode.
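To see whether the relevant modules are present on a node before enabling ipvs mode, one common check (module names assumed from typical IPVS installations, not mandated by this document) is:

```shell
# List currently loaded IPVS-related kernel modules (output varies by kernel).
lsmod | grep -e ip_vs -e nf_conntrack

# Load the commonly required modules if they are missing (requires root).
modprobe -- ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
```
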

*(diagram: ipvs proxy overview)*

In any of these proxy modes, any traffic bound for the `Service`'s IP:Port is
proxied to an appropriate backend without the clients knowing anything
about Kubernetes or `Services` or `Pods`. Client-IP based session affinity
can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"`
(the default is `"None"`), and you can set the max session sticky time by
setting the field `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds`
if you have already set `service.spec.sessionAffinity` to `"ClientIP"`
(the default is `"10800"`).

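For example, a `Service` manifest (name and selector are illustrative) that enables client-IP affinity and shortens the sticky time to one hour might look like:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service           # illustrative name
spec:
  selector:
    app: MyApp               # illustrative selector
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600   # default is 10800 (3 hours)
```
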
## Multi-Port Services
Many `Services` need to expose more than one port. For this case, Kubernetes

@@ -708,6 +698,10 @@ work, and the client IP is not altered.

This same basic flow executes when traffic comes in through a node-port or
through a load-balancer, though in those cases the client IP does get altered.

#### Ipvs

iptables operations slow down dramatically in large-scale clusters, e.g. with
10,000 Services. IPVS is designed for load balancing and is based on in-kernel
hash tables, so an IPVS-based kube-proxy delivers consistent performance even
with a large number of Services. An IPVS-based kube-proxy also supports more
sophisticated load balancing algorithms (least conns, locality, weighted,
persistence).
## API Object
Service is a top-level resource in the Kubernetes REST API. More details about the