Reword “Create an External Load Balancer” task

- general cleanup
- update sample output
- use more tooltips
- avoid specifying specific cloud providers

The website repo doesn't maintain a definitive list of cloud providers that
pass Kubernetes conformance tests. It's certainly more than just the AWS and
GCP that the previous revision named.
pull/27182/head
Tim Bannister 2021-03-23 23:30:03 +00:00
parent 552ac504a1
commit 39f2c3860d
1 changed file with 85 additions and 85 deletions


content_type: task
weight: 80
---
<!-- overview -->
This page shows how to create an external load balancer.
{{< note >}}
This feature is only available for cloud providers or environments which support external load balancers.
{{< /note >}}
When creating a {{< glossary_tooltip text="Service" term_id="service" >}}, you have
the option of automatically creating a cloud load balancer. This provides an
externally-accessible IP address that sends traffic to the correct port on your cluster
nodes,
_provided your cluster runs in a supported environment and is configured with
the correct cloud load balancer provider package_.
You can also use an {{< glossary_tooltip term_id="ingress" >}} in place of a Service.
For more information, check the [Ingress](/docs/concepts/services-networking/ingress/)
documentation.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
Your cluster must be running in a cloud or other environment that already has support
for configuring external load balancers.
<!-- steps -->
## Create a Service

### Create a Service from a manifest
To create an external load balancer, add the following line to your
Service manifest:
```yaml
type: LoadBalancer
```
Your manifest might then look like:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
```
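You can apply a manifest like this with `kubectl apply` (the file name here is
just an assumption for illustration):

```bash
# Create or update the Service defined in the manifest
kubectl apply -f example-service.yaml
```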
### Create a Service using kubectl
You can alternatively create the service with the `kubectl expose` command and
its `--type=LoadBalancer` flag:
```bash
kubectl expose deployment example --port=8765 --target-port=9376 \
--name=example-service --type=LoadBalancer
```
This command creates a new Service using the same selectors as the referenced
resource (in the case of the example above, a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}} named `example`).
For more information, including optional flags, refer to the
[`kubectl expose` reference](/docs/reference/generated/kubectl/kubectl-commands/#expose).
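Cloud providers can take a minute or two to provision the load balancer. One way
to watch for the address assignment (a sketch, reusing the `example-service` name
from above):

```bash
# Watch the Service until the EXTERNAL-IP column changes from <pending>
# to the address allocated by the cloud provider
kubectl get service example-service --watch
```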
## Finding your IP address

You can find the IP address created for your service by getting the service
information through `kubectl`:

```bash
kubectl describe services example-service
```
which should produce output similar to:
```bash
Name:                     example-service
Namespace:                default
Labels:                   app=example
Annotations:              <none>
Selector:                 app=example
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.3.22.96
IPs:                      10.3.22.96
LoadBalancer Ingress:     192.0.2.89
Port:                     <unset>  8765/TCP
TargetPort:               9376/TCP
NodePort:                 <unset>  30593/TCP
Endpoints:                172.17.0.3:9376
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```
The load balancer's IP address is listed next to `LoadBalancer Ingress`.
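If you want just the address, one way (a sketch; the field path comes from the
Service API's `status.loadBalancer` stanza) is to read it with JSONPath:

```bash
# Print only the external IP address recorded in the Service status
kubectl get service example-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```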
{{< note >}}
If you are running your service on Minikube, you can find the assigned IP address and port with:

```bash
minikube service example-service --url
```
{{< /note >}}
## Preserving the client source IP
By default, the source IP seen in the target container is *not the original
source IP* of the client. To enable preservation of the client IP, the following
fields can be configured in the `.spec` of the Service:
* `.spec.externalTrafficPolicy` - denotes if this Service desires to route
  external traffic to node-local or cluster-wide endpoints. There are two available
  options: `Cluster` (default) and `Local`. `Cluster` obscures the client source
  IP and may cause a second hop to another node, but should have good overall
  load-spreading. `Local` preserves the client source IP and avoids a second hop
  for LoadBalancer and NodePort type Services, but risks potentially imbalanced
  traffic spreading.
* `.spec.healthCheckNodePort` - specifies the health check node port
  (numeric port number) for the service. If you don't specify
  `healthCheckNodePort`, the service controller allocates a port from your
  cluster's NodePort range.
  You can configure that range by setting an API server command line option,
  `--service-node-port-range`. The Service will use the user-specified
  `healthCheckNodePort` value if you specify it, provided that the
  Service `type` is set to LoadBalancer and `externalTrafficPolicy` is set
  to `Local`.
Setting `externalTrafficPolicy` to `Local` in the Service manifest
activates this feature. For example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  externalTrafficPolicy: Local
  type: LoadBalancer
```
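After the Service is created, you can check which node port the service
controller allocated for health checks (a sketch, again assuming the
`example-service` name):

```bash
# Show the automatically allocated health check node port
kubectl get service example-service \
  -o jsonpath='{.spec.healthCheckNodePort}'
```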
### Caveats and limitations when preserving source IPs

Load balancing services from some cloud providers do not let you configure different weights for each target.

With each target weighted equally in terms of sending traffic to Nodes, external
traffic is not equally load balanced across different Pods. The external load balancer
is unaware of the number of Pods on each node that are used as a target.

Where `NumServicePods << NumNodes` or `NumServicePods >> NumNodes`, a fairly close-to-equal
distribution will be seen, even without weights.

Internal pod to pod traffic should behave similar to ClusterIP services, with equal probability across all pods.
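As a concrete illustration (the numbers are invented for the example): with two
nodes, where node A runs three of the Service's Pods and node B runs one, a load
balancer that splits external traffic evenly between the nodes delivers roughly
17% of requests to each Pod on node A, while the single Pod on node B receives
about 50%.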
## Garbage collecting load balancers
{{< feature-state for_k8s_version="v1.17" state="stable" >}}
Kubernetes adds the finalizer `service.kubernetes.io/load-balancer-cleanup` to
Services of type LoadBalancer.
The finalizer will only be removed after the load balancer resource is cleaned up.
This prevents dangling load balancer resources even in corner cases such as the
service controller crashing.
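To confirm that the finalizer is set (a sketch using the example Service):

```bash
# List any finalizers on the Service; a LoadBalancer Service should show
# service.kubernetes.io/load-balancer-cleanup
kubectl get service example-service \
  -o jsonpath='{.metadata.finalizers}'
```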
## External load balancer providers
It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.

When the Service `type` is set to LoadBalancer, Kubernetes provides functionality equivalent to `type` equals ClusterIP to pods
within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the nodes
hosting the relevant Kubernetes pods. The Kubernetes control plane automates the creation of the external load balancer,
health checks (if needed), and packet filtering rules (if needed). Once the cloud provider allocates an IP address for the load
balancer, the control plane looks up that external IP address and populates it into the Service object.
## {{% heading "whatsnext" %}}
* Read about [Service](/docs/concepts/services-networking/service/)
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)