reword pod lifecycle, cleanup

parent 9ec9ebb67c
commit 6a401072b3
@@ -24,8 +24,8 @@ and can load-balance across them.

 ## Motivation

-Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are mortal.
-They are born and when they die, they are not resurrected.
+Kubernetes {{< glossary_tooltip term_id="pod" text="Pods" >}} are created and destroyed
+to match the state of your cluster. Pods are nonpermanent resources.
 If you use a {{< glossary_tooltip term_id="deployment" >}} to run your app,
 it can create and destroy Pods dynamically.

@@ -45,9 +45,9 @@ Enter _Services_.

 In Kubernetes, a Service is an abstraction which defines a logical set of Pods
 and a policy by which to access them (sometimes this pattern is called
 a micro-service). The set of Pods targeted by a Service is usually determined
-by a {{< glossary_tooltip text="selector" term_id="selector" >}}
-(see [below](#services-without-selectors) for why you might want a Service
-_without_ a selector).
+by a {{< glossary_tooltip text="selector" term_id="selector" >}}.
+To learn about other ways to define Service endpoints,
+see [Services _without_ selectors](#services-without-selectors).

 For example, consider a stateless image-processing backend which is running with
 3 replicas. Those replicas are fungible—frontends do not care which backend
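
The selector-based targeting described in this hunk is declared on the Service spec. A minimal sketch (the name, label, and port numbers are illustrative assumptions, echoing the `targetPort: 9376` example that appears later on the page, not content from this commit):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp          # Pods carrying this label become the Service's backends
  ports:
    - protocol: TCP
      port: 80          # port exposed by the Service
      targetPort: 9376  # port the backend Pods listen on
```
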
@@ -129,12 +129,12 @@ Services most commonly abstract access to Kubernetes Pods, but they can also

 abstract other kinds of backends.
 For example:

-* You want to have an external database cluster in production, but in your
-  test environment you use your own databases.
-* You want to point your Service to a Service in a different
-  {{< glossary_tooltip term_id="namespace" >}} or on another cluster.
-* You are migrating a workload to Kubernetes. Whilst evaluating the approach,
-  you run only a proportion of your backends in Kubernetes.
+* You want to have an external database cluster in production, but in your
+  test environment you use your own databases.
+* You want to point your Service to a Service in a different
+  {{< glossary_tooltip term_id="namespace" >}} or on another cluster.
+* You are migrating a workload to Kubernetes. While evaluating the approach,
+  you run only a proportion of your backends in Kubernetes.

 In any of these scenarios you can define a Service _without_ a Pod selector.
 For example:

@@ -151,7 +151,7 @@ spec:

       targetPort: 9376
 ```

-Because this Service has no selector, the corresponding Endpoint object is *not*
+Because this Service has no selector, the corresponding Endpoint object is not
 created automatically. You can manually map the Service to the network address and port
 where it's running, by adding an Endpoint object manually:

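
The manual Endpoints mapping this hunk refers to pairs the Service name with explicit addresses. A minimal sketch, assuming the Service is called `my-service` and the backend listens at 192.0.2.42:9376 (the address is illustrative, not taken from this commit):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service     # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.42 # network address of the backend
    ports:
      - port: 9376     # port the backend listens on
```
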
@@ -188,6 +188,7 @@ selectors and uses DNS names instead. For more information, see the

 [ExternalName](#externalname) section later in this document.

 ### EndpointSlices

 {{< feature-state for_k8s_version="v1.17" state="beta" >}}

 EndpointSlices are an API resource that can provide a more scalable alternative
@@ -204,9 +205,8 @@ described in detail in [EndpointSlices](/docs/concepts/services-networking/endpo

 {{< feature-state for_k8s_version="v1.19" state="beta" >}}

-The AppProtocol field provides a way to specify an application protocol to be
-used for each Service port. The value of this field is mirrored by corresponding
-Endpoints and EndpointSlice resources.
+The `AppProtocol` field provides a way to specify an application protocol for each Service port.
+The value of this field is mirrored by corresponding Endpoints and EndpointSlice resources.

 ## Virtual IPs and service proxies

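
In a manifest, the field reworded in the hunk above is set per port as `appProtocol`. A minimal sketch under that assumption (the Service name, protocol value, and port numbers are illustrative, not part of this commit):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP      # transport protocol
      appProtocol: http  # application protocol, mirrored into Endpoints and EndpointSlices
      port: 80
      targetPort: 9376
```
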
@@ -224,20 +224,19 @@ resolution?

 There are a few reasons for using proxying for Services:

-* There is a long history of DNS implementations not respecting record TTLs,
-  and caching the results of name lookups after they should have expired.
-* Some apps do DNS lookups only once and cache the results indefinitely.
-* Even if apps and libraries did proper re-resolution, the low or zero TTLs
-  on the DNS records could impose a high load on DNS that then becomes
-  difficult to manage.
+* There is a long history of DNS implementations not respecting record TTLs,
+  and caching the results of name lookups after they should have expired.
+* Some apps do DNS lookups only once and cache the results indefinitely.
+* Even if apps and libraries did proper re-resolution, the low or zero TTLs
+  on the DNS records could impose a high load on DNS that then becomes
+  difficult to manage.

 ### User space proxy mode {#proxy-mode-userspace}

 In this mode, kube-proxy watches the Kubernetes master for the addition and
 removal of Service and Endpoint objects. For each Service it opens a
 port (randomly chosen) on the local node. Any connections to this "proxy port"
-are
-proxied to one of the Service's backend Pods (as reported via
+are proxied to one of the Service's backend Pods (as reported via
 Endpoints). kube-proxy takes the `SessionAffinity` setting of the Service into
 account when deciding which backend Pod to use.

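
The `SessionAffinity` setting mentioned at the end of this hunk is configured on the Service itself. A minimal sketch, assuming client-IP affinity with the default timeout (the selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  sessionAffinity: ClientIP    # route a given client to the same backend Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # how long the affinity sticks
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```
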
@@ -255,7 +254,7 @@ In this mode, kube-proxy watches the Kubernetes control plane for the addition a

 removal of Service and Endpoint objects. For each Service, it installs
 iptables rules, which capture traffic to the Service's `clusterIP` and `port`,
 and redirect that traffic to one of the Service's
-backend sets. For each Endpoint object, it installs iptables rules which
+backend sets. For each Endpoint object, it installs iptables rules which
 select a backend Pod.

 By default, kube-proxy in iptables mode chooses a backend at random.

@@ -298,12 +297,12 @@ higher throughput of network traffic.

 IPVS provides more options for balancing traffic to backend Pods;
 these are:

-- `rr`: round-robin
-- `lc`: least connection (smallest number of open connections)
-- `dh`: destination hashing
-- `sh`: source hashing
-- `sed`: shortest expected delay
-- `nq`: never queue
+* `rr`: round-robin
+* `lc`: least connection (smallest number of open connections)
+* `dh`: destination hashing
+* `sh`: source hashing
+* `sed`: shortest expected delay
+* `nq`: never queue

 {{< note >}}
 To run kube-proxy in IPVS mode, you must make IPVS available on
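
One way to choose among the IPVS schedulers listed in this hunk is through the kube-proxy configuration file. A minimal sketch, assuming kube-proxy is started with a `--config` file rather than individual flags (the scheduler value shown is illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"       # run kube-proxy in IPVS mode
ipvs:
  scheduler: "rr"  # one of the schedulers listed above, here round-robin
```
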
@@ -389,7 +388,7 @@ compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see

 and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
 where the Service name is upper-cased and dashes are converted to underscores.

-For example, the Service `"redis-master"` which exposes TCP port 6379 and has been
+For example, the Service `redis-master` which exposes TCP port 6379 and has been
 allocated cluster IP address 10.0.0.11, produces the following environment
 variables:

@@ -423,19 +422,19 @@ Services and creates a set of DNS records for each one. If DNS has been enabled

 throughout your cluster then all Pods should automatically be able to resolve
 Services by their DNS name.

-For example, if you have a Service called `"my-service"` in a Kubernetes
-Namespace `"my-ns"`, the control plane and the DNS Service acting together
-create a DNS record for `"my-service.my-ns"`. Pods in the `"my-ns"` Namespace
+For example, if you have a Service called `my-service` in a Kubernetes
+namespace `my-ns`, the control plane and the DNS Service acting together
+create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
 should be able to find it by simply doing a name lookup for `my-service`
-(`"my-service.my-ns"` would also work).
+(`my-service.my-ns` would also work).

-Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names
+Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
 will resolve to the cluster IP assigned for the Service.

 Kubernetes also supports DNS SRV (Service) records for named ports. If the
-`"my-service.my-ns"` Service has a port named `"http"` with the protocol set to
+`my-service.my-ns` Service has a port named `http` with the protocol set to
 `TCP`, you can do a DNS SRV query for `_http._tcp.my-service.my-ns` to discover
-the port number for `"http"`, as well as the IP address.
+the port number for `http`, as well as the IP address.

 The Kubernetes DNS server is the only way to access `ExternalName` Services.
 You can find more information about `ExternalName` resolution in
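
The SRV record discussed in this hunk is derived from a named port on the Service. A minimal sketch, assuming the port is named `http` and exposed on 80 (the port numbers and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-ns
spec:
  selector:
    app: MyApp
  ports:
    - name: http       # yields the SRV record _http._tcp.my-service.my-ns
      protocol: TCP
      port: 80
      targetPort: 9376
```
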
@@ -467,9 +466,9 @@ For headless Services that do not define selectors, the endpoints controller doe

 not create `Endpoints` records. However, the DNS system looks for and configures
 either:

-* CNAME records for [`ExternalName`](#externalname)-type Services.
-* A records for any `Endpoints` that share a name with the Service, for all
-  other types.
+* CNAME records for [`ExternalName`](#externalname)-type Services.
+* A records for any `Endpoints` that share a name with the Service, for all
+  other types.

 ## Publishing Services (ServiceTypes) {#publishing-services-service-types}

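
The headless Services discussed in the hunk above are declared by setting `.spec.clusterIP` to `None`. A minimal sketch with a selector (the name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None    # headless: no cluster IP and no kube-proxy load balancing
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```
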
@@ -481,26 +480,26 @@ The default is `ClusterIP`.

 `Type` values and their behaviors are:

-* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
-  makes the Service only reachable from within the cluster. This is the
-  default `ServiceType`.
-* [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
-  (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service
-  routes, is automatically created. You'll be able to contact the `NodePort` Service,
-  from outside the cluster,
-  by requesting `<NodeIP>:<NodePort>`.
-* [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
-  provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external
-  load balancer routes, are automatically created.
-* [`ExternalName`](#externalname): Maps the Service to the contents of the
-  `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
+* `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value
+  makes the Service only reachable from within the cluster. This is the
+  default `ServiceType`.
+* [`NodePort`](#nodeport): Exposes the Service on each Node's IP at a static port
+  (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service
+  routes, is automatically created. You'll be able to contact the `NodePort` Service,
+  from outside the cluster,
+  by requesting `<NodeIP>:<NodePort>`.
+* [`LoadBalancer`](#loadbalancer): Exposes the Service externally using a cloud
+  provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external
+  load balancer routes, are automatically created.
+* [`ExternalName`](#externalname): Maps the Service to the contents of the
+  `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record
-  with its value. No proxying of any kind is set up.
-  {{< note >}}You need either `kube-dns` version 1.7 or CoreDNS version 0.0.8 or higher
-  to use the `ExternalName` type.
-  {{< /note >}}
+  with its value. No proxying of any kind is set up.
+  {{< note >}}
+  You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the `ExternalName` type.
+  {{< /note >}}

-You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource as it can expose multiple services under the same IP address.
+You can also use [Ingress](/docs/concepts/services-networking/ingress/) to expose your Service. Ingress is not a Service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules
+into a single resource as it can expose multiple services under the same IP address.

 ### Type NodePort {#nodeport}

@@ -509,7 +508,6 @@ allocates a port from a range specified by `--service-node-port-range` flag (def

 Each node proxies that port (the same port number on every Node) into your Service.
 Your Service reports the allocated port in its `.spec.ports[*].nodePort` field.

 If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10.
 This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node.

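
Besides the `--nodeport-addresses` flag described in this hunk, the equivalent setting can be carried in the kube-proxy configuration file. A sketch under that assumption (the CIDR ranges shown are illustrative, taken from the flag example above):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
nodePortAddresses:   # IP ranges kube-proxy should consider as local to this node
  - "10.0.0.0/8"
  - "192.0.2.0/25"
```
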
@@ -530,6 +528,7 @@ Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`

 and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)

 For example:

 ```yaml
 apiVersion: v1
 kind: Service
@@ -606,19 +605,21 @@ Specify the assigned IP address as loadBalancerIP. Ensure that you have updated

 {{< /note >}}

 #### Internal load balancer

 In a mixed environment it is sometimes necessary to route traffic from Services inside the same
 (virtual) network address block.

 In a split-horizon DNS environment you would need two Services to be able to route both external and internal traffic to your endpoints.

-You can achieve this by adding one the following annotations to a Service.
-The annotation to add depends on the cloud Service provider you're using.
+To set an internal load balancer, add one of the following annotations to your Service
+depending on the cloud Service provider you're using.

 {{< tabs name="service_tabs" >}}
 {{% tab name="Default" %}}
 Select one of the tabs.
 {{% /tab %}}
 {{% tab name="GCP" %}}

 ```yaml
 [...]
 metadata:
@@ -627,8 +628,10 @@ metadata:

     cloud.google.com/load-balancer-type: "Internal"
 [...]
 ```

 {{% /tab %}}
 {{% tab name="AWS" %}}

 ```yaml
 [...]
 metadata:
@@ -637,8 +640,10 @@ metadata:

     service.beta.kubernetes.io/aws-load-balancer-internal: "true"
 [...]
 ```

 {{% /tab %}}
 {{% tab name="Azure" %}}

 ```yaml
 [...]
 metadata:
@@ -647,8 +652,10 @@ metadata:

     service.beta.kubernetes.io/azure-load-balancer-internal: "true"
 [...]
 ```

 {{% /tab %}}
 {{% tab name="IBM Cloud" %}}

 ```yaml
 [...]
 metadata:
@@ -657,8 +664,10 @@ metadata:

     service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
 [...]
 ```

 {{% /tab %}}
 {{% tab name="OpenStack" %}}

 ```yaml
 [...]
 metadata:
@@ -667,8 +676,10 @@ metadata:

     service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
 [...]
 ```

 {{% /tab %}}
 {{% tab name="Baidu Cloud" %}}

 ```yaml
 [...]
 metadata:
@@ -677,8 +688,10 @@ metadata:

     service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
 [...]
 ```

 {{% /tab %}}
 {{% tab name="Tencent Cloud" %}}

 ```yaml
 [...]
 metadata:
@@ -686,8 +699,10 @@ metadata:

     service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx
 [...]
 ```

 {{% /tab %}}
 {{% tab name="Alibaba Cloud" %}}

 ```yaml
 [...]
 metadata:
@@ -695,10 +710,10 @@ metadata:

     service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
 [...]
 ```

 {{% /tab %}}
 {{< /tabs >}}

 #### TLS support on AWS {#ssl-support-on-aws}

 For partial TLS / SSL support on clusters running on AWS, you can add three
@@ -823,7 +838,6 @@ to the value of `"true"`. The annotation

 `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
 also be used to set maximum time, in seconds, to keep the existing connections open before deregistering the instances.

 ```yaml
 metadata:
   name: my-service
@@ -991,6 +1005,7 @@ spec:

   type: ExternalName
   externalName: my.database.example.com
 ```

 {{< note >}}
 ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName
 is intended to specify a canonical DNS name. To hardcode an IP address, consider using
@@ -1173,7 +1188,7 @@ of the Service.

 {{< note >}}
 You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service
-to expose HTTP / HTTPS Services.
+to expose HTTP/HTTPS Services.
 {{< /note >}}

 ### PROXY protocol

@@ -1189,6 +1204,7 @@ incoming connection, similar to this example

 ```
 PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n
 ```

 followed by the data from the client.

 ### SCTP

@@ -1227,13 +1243,8 @@ SCTP is not supported on Windows based nodes.

 The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
 {{< /warning >}}

 ## {{% heading "whatsnext" %}}

 * Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
 * Read about [Ingress](/docs/concepts/services-networking/ingress/)
 * Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)