From 08577c385a27a32056c876307986428689f9e6b9 Mon Sep 17 00:00:00 2001
From: Chris Riviere
Date: Mon, 21 Nov 2016 12:08:25 -0800
Subject: [PATCH] minor changes

Moved text under the command and changed wording based on some feedback.

With regard to mentioning cloud providers: I think a lot of this
documentation assumes a public cloud like Google, so it may be worthwhile
to mention a specific private cloud where this isn't implemented.
---
 docs/user-guide/quick-start.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/user-guide/quick-start.md b/docs/user-guide/quick-start.md
index fcb3b079b9..6e0920f973 100644
--- a/docs/user-guide/quick-start.md
+++ b/docs/user-guide/quick-start.md
@@ -22,12 +22,13 @@
 $ kubectl run my-nginx --image=nginx --replicas=2 --port=80
 deployment "my-nginx" created
 ```
 
-To expose your service to the public internet, run the following. Note, the type, LoadBalancer, is highly dependant upon the underlying platform that Kubernetes is running on. Type LoadBalancer may work in public cloud environments just fine but may require some additional configuration in a private cloud environment (ie. OpenStack).
+To expose your service to the public internet, run the following:
 
 ```shell
 $ kubectl expose deployment my-nginx --target-port=80 --type=LoadBalancer
 service "my-nginx" exposed
 ```
+Note: The LoadBalancer type is highly dependent on the underlying platform that Kubernetes is running on. If your cloud provider doesn't have a load balancer implementation for Kubernetes (e.g. OpenStack), you can simply use the allocated [nodePort](link to nodeport service) as a rudimentary form of load balancing across your endpoints.
 
 You can see that they are running by:
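
A minimal sketch of the nodePort fallback the added note describes, assuming the my-nginx service created in the snippet above and a reachable node address (the angle-bracket values are placeholders, not output from the patch):

```shell
# A service of type LoadBalancer also allocates a nodePort on every node;
# look up the port that was assigned (by default in the 30000-32767 range).
$ kubectl get service my-nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Any node's address then forwards that port to the nginx endpoints.
$ curl http://<node-ip>:<allocated-node-port>
```

This path relies only on kube-proxy forwarding the node port on each node, so it works even when no cloud load balancer is provisioned.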