minor changes

Moved text under the command and changed wording based on some feedback. With regards to mentioning cloud providers, I think a lot of this documentation in general assumes a public cloud like Google, so it may be worthwhile to mention a specific private cloud where this isn't implemented.
parent 636fa3f84f
commit 08577c385a
@@ -22,12 +22,13 @@ $ kubectl run my-nginx --image=nginx --replicas=2 --port=80
 deployment "my-nginx" created
 ```
 
-To expose your service to the public internet, run the following. Note, the type, LoadBalancer, is highly dependant upon the underlying platform that Kubernetes is running on. Type LoadBalancer may work in public cloud environments just fine but may require some additional configuration in a private cloud environment (ie. OpenStack).
+To expose your service to the public internet, run the following:
 
 ```shell
 $ kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
 service "my-nginx" exposed
 ```
+Note: The type, LoadBalancer, is highly dependent upon the underlying platform that Kubernetes is running on. If your cloud provider doesn't have a load balancer implementation (e.g. OpenStack) for Kubernetes, you can simply use the allocated [nodePort](link to nodeport service) as a rudimentary form of load balancing across your endpoints.
 
 You can see that they are running by:
 
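The nodePort fallback described in the added note can be exercised directly. A minimal sketch, assuming the `my-nginx` service from the example above; the node IP and port shown are placeholders, not real output:

```shell
# Illustrative only: the service name matches the example, but <node-ip> and
# <node-port> are placeholders you would substitute with your own values.

# Find the nodePort that Kubernetes allocated for the service:
$ kubectl describe service my-nginx | grep NodePort

# Any node in the cluster forwards that port to the service's endpoints, so
# spreading requests across node IPs gives a rudimentary form of load balancing:
$ curl http://<node-ip>:<node-port>/
```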