Reword “Create an External Load Balancer” task

- general cleanup
- update sample output
- use more tooltips
- avoid specifying specific cloud providers

The website repo doesn't maintain a definitive list of cloud providers that
pass Kubernetes conformance tests. It's certainly more than AWS and GCP as
the previous revision stated.
pull/27182/head
Tim Bannister 2021-03-23 23:30:03 +00:00
parent 552ac504a1
commit 39f2c3860d
1 changed file with 85 additions and 85 deletions


@@ -4,47 +4,44 @@ content_type: task
weight: 80
---

<!-- overview -->

This page shows how to create an external load balancer.

When creating a {{< glossary_tooltip text="Service" term_id="service" >}}, you have
the option of automatically creating a cloud load balancer. This provides an
externally-accessible IP address that sends traffic to the correct port on your cluster
nodes, _provided your cluster runs in a supported environment and is configured with
the correct cloud load balancer provider package_.

You can also use an {{< glossary_tooltip term_id="ingress" >}} in place of a Service.
For more information, check the [Ingress](/docs/concepts/services-networking/ingress/)
documentation.
## {{% heading "prerequisites" %}} ## {{% heading "prerequisites" %}}
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} {{< include "task-tutorial-prereqs.md" >}}
Your cluster must be running in a cloud or other environment that already has support
for configuring external load balancers.
<!-- steps --> <!-- steps -->

## Create a Service

### Create a Service from a manifest

To create an external load balancer, add the following line to your
Service manifest:

```yaml
type: LoadBalancer
```

Your manifest might then look like:

```yaml
apiVersion: v1
@@ -60,19 +57,19 @@ spec:
  type: LoadBalancer
```
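
If you save a manifest like the one above to a file, for example `example-service.yaml` (the filename here is only an illustration), you can create the Service with `kubectl apply`:

```bash
# Create the Service (and its cloud load balancer) from the manifest file
kubectl apply -f example-service.yaml
```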

### Create a Service using kubectl

You can alternatively create the service with the `kubectl expose` command and
its `--type=LoadBalancer` flag:

```bash
kubectl expose deployment example --port=8765 --target-port=9376 \
        --name=example-service --type=LoadBalancer
```

This command creates a new Service using the same selectors as the referenced
resource (in the case of the example above, a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}} named `example`).

For more information, including optional flags, refer to the
[`kubectl expose` reference](/docs/reference/generated/kubectl/kubectl-commands/#expose).
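
Whichever way you create it, you can quickly confirm that the Service has type `LoadBalancer` and see whether an external address has been assigned yet; while the cloud provider is still provisioning the load balancer, the `EXTERNAL-IP` column shows `<pending>`:

```bash
# Watch the Service until EXTERNAL-IP changes from <pending> to an address
kubectl get service example-service --watch
```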

@@ -86,59 +83,63 @@ information through `kubectl`:

```bash
kubectl describe services example-service
```

which should produce output similar to:

```
Name:                     example-service
Namespace:                default
Labels:                   app=example
Annotations:              <none>
Selector:                 app=example
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.3.22.96
IPs:                      10.3.22.96
LoadBalancer Ingress:     192.0.2.89
Port:                     <unset>  8765/TCP
TargetPort:               9376/TCP
NodePort:                 <unset>  30593/TCP
Endpoints:                172.17.0.3:9376
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```

The load balancer's IP address is listed next to `LoadBalancer Ingress`.

{{< note >}}
If you are running your service on Minikube, you can find the assigned IP address and port with:

```bash
minikube service example-service --url
```
{{< /note >}}
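
If you only need the address itself, for example in a script, you can also read it from the Service's status. The jsonpath expression below assumes your cloud provider reports an IP address; some providers report a hostname instead, under `.status.loadBalancer.ingress[0].hostname`:

```bash
# Print the external address recorded in the Service status
kubectl get service example-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```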

## Preserving the client source IP

By default, the source IP seen in the target container is *not the original
source IP* of the client. To enable preservation of the client IP, the following
fields can be configured in the `.spec` of the Service:

* `.spec.externalTrafficPolicy` - denotes if this Service desires to route
  external traffic to node-local or cluster-wide endpoints. There are two available
  options: `Cluster` (default) and `Local`. `Cluster` obscures the client source
  IP and may cause a second hop to another node, but should have good overall
  load-spreading. `Local` preserves the client source IP and avoids a second hop
  for LoadBalancer and NodePort type Services, but risks potentially imbalanced
  traffic spreading.

* `.spec.healthCheckNodePort` - specifies the health check node port
  (numeric port number) for the service. If you don't specify
  `healthCheckNodePort`, the service controller allocates a port from your
  cluster's NodePort range.
  You can configure that range by setting an API server command line option,
  `--service-node-port-range`. The Service will use the user-specified
  `healthCheckNodePort` value if you specify it, provided that the
  Service `type` is set to LoadBalancer and `externalTrafficPolicy` is set
  to `Local`.

Setting `externalTrafficPolicy` to `Local` in the Service manifest
activates this feature. For example:

```yaml
apiVersion: v1
@@ -155,7 +156,20 @@ spec:
  type: LoadBalancer
```
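
Once a `Local` Service like this exists, you can check which health check node port the service controller allocated. This assumes the Service is named `example-service`, as in the earlier examples; the field is only populated for LoadBalancer Services with `externalTrafficPolicy: Local`:

```bash
# Print the allocated health check node port
kubectl get service example-service \
  -o jsonpath='{.spec.healthCheckNodePort}'
```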

### Caveats and limitations when preserving source IPs

Load balancing services from some cloud providers do not let you configure different weights for each target.

With each target weighted equally in terms of sending traffic to Nodes, external
traffic is not equally load balanced across different Pods. The external load balancer
is unaware of the number of Pods on each node that are used as a target.

Where `NumServicePods << NumNodes` or `NumServicePods >> NumNodes`, a fairly close-to-equal
distribution will be seen, even without weights.

Internal pod-to-pod traffic should behave similarly to ClusterIP Services, with equal probability across all Pods.

## Garbage collecting load balancers

{{< feature-state for_k8s_version="v1.17" state="stable" >}}

@@ -172,32 +186,18 @@ The finalizer will only be removed after the load balancer resource is cleaned up.
This prevents dangling load balancer resources even in corner cases such as the
service controller crashing.
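
If you want to see this protection in place, you can inspect the Service's metadata; on clusters with cloud load balancer support you would typically see a finalizer such as `service.kubernetes.io/load-balancer-cleanup` (shown here as an illustrative check, not an exhaustive list):

```bash
# List any finalizers set on the Service
kubectl get service example-service \
  -o jsonpath='{.metadata.finalizers}'
```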

## External load balancer providers

It is important to note that the datapath for this functionality is provided by a load balancer external to the Kubernetes cluster.

When the Service `type` is set to LoadBalancer, Kubernetes provides functionality equivalent to `type` equals ClusterIP to pods
within the cluster and extends it by programming the (external to Kubernetes) load balancer with entries for the nodes
hosting the relevant Kubernetes pods. The Kubernetes control plane automates the creation of the external load balancer,
health checks (if needed), and packet filtering rules (if needed). Once the cloud provider allocates an IP address for the load
balancer, the control plane looks up that external IP address and populates it into the Service object.
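
As a quick end-to-end check of that datapath, you can send a request to the provisioned address on the Service port. The address and port below are the placeholder values from the sample `kubectl describe` output earlier on this page, so substitute your own:

```bash
# 192.0.2.89 and 8765 are the example address and Service port shown above
curl http://192.0.2.89:8765/
```
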
## {{% heading "whatsnext" %}}
* Read about [Service](/docs/concepts/services-networking/service/)
* Read about [Ingress](/docs/concepts/services-networking/ingress/)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)