Clean up hello-node for easier copy-pasting and better consistency.

The general theme of this change is to make things easy to copy and paste,
which I found to be useful in "Kubernetes the Hard Way".

A summary of the changes:
- Removed '$' in front of commands for easier copy-pasting
- Named the Docker container for easier copy-pasting
- Used `gcloud config set compute/zone us-central1-a` so later commands don't need an explicit `--zone` flag
- Capitalized Kubernetes jargon: e.g., Pods/Deployments
- Fixed some JS quote-style inconsistencies (" vs ')
- Added some logging so that `kubectl logs <POD-NAME>` is useful
pull/1017/head
Josh Hoak 2016-08-11 19:36:36 -07:00
parent 38b464ae58
commit b1d2df067b

Remember the project ID; it will be referred to later in this codelab as `$PROJECT_ID`.
Make sure you have a Linux terminal available; you will use it to control your cluster via the command line. You can use [Google Cloud Shell](https://console.cloud.google.com?cloudshell=true), which has the software this codelab uses pre-installed, so you can skip most of the environment configuration steps below.
It may be helpful to store your project ID in a variable, as many commands below use it:
```shell
export PROJECT_ID="your-project-id"
```
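If you also want `gcloud` itself to default to this project (optional, but it saves passing `--project` to later commands), you can point the SDK at it:
```shell
gcloud config set project $PROJECT_ID
```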
The first step is to write the application. Save this code in a file called `server.js`:
```javascript
var http = require('http');
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};
var www = http.createServer(handleRequest);
www.listen(8080);
```
Now run this simple command:
```shell
node server.js
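# With the server running, you can check it from a second terminal; it should
# print "Hello World!":
#
#   curl http://localhost:8080
#
# Stop the server with Ctrl-C when you are done.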
```

Now there is a trusted source for getting an image of your containerized app.
Let's try your image out with Docker:
```shell
docker run -d -p 8080:8080 --name hello_tutorial gcr.io/$PROJECT_ID/hello-node:v1
```
Visit your app in the browser, or use `curl` or `wget` if you'd like:
```shell
curl http://localhost:8080
```
You should see `Hello World!`

**Note:** *If you receive a `Connection refused` message from Docker for Mac, ensure you are using the latest version of Docker (1.12 or later). Alternatively, if you are using Docker Toolbox on OS X, make sure you are using the VM's IP and not localhost:*
```shell
curl "http://$(docker-machine ip YOUR-VM-MACHINE-NAME):8080"
```
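Because the app now logs each request it receives, you can also peek at the container's output with `docker logs`, using the name we gave the container above:
```shell
docker logs hello_tutorial
```
You should see a `Received request for URL: /` line for each request you made.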
Let's now stop the container. You can list the running Docker containers with:
```shell
docker ps
```
You should see something like:
```shell
CONTAINER ID        IMAGE                              COMMAND                  NAMES
c5b6d4b9f36d        gcr.io/$PROJECT_ID/hello-node:v1   "/bin/sh -c 'node ser"   hello_tutorial
```
Now stop the running container with:
```shell
docker stop hello_tutorial
```
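Optionally, remove the stopped container as well, so the `hello_tutorial` name is free if you want to run the image again later:
```shell
docker rm hello_tutorial
```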
Now that the image works as intended and is tagged with your `$PROJECT_ID`, we can push it to the [Google Container Registry](https://cloud.google.com/tools/container-registry/), a private repository for your Docker images, accessible from every Google Cloud project (but also from outside Google Cloud Platform):
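A minimal sketch of the push step, assuming the `gcloud docker` helper that this generation of the tutorial relied on (newer Cloud SDK releases use `gcloud docker --` or a Docker credential helper instead):
```shell
gcloud docker push gcr.io/$PROJECT_ID/hello-node:v1
```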
If all goes well, you should be able to see the container image listed in the console:
![image](/images/hellonode/image_10.png)
## Create your Kubernetes Cluster

A cluster consists of a Master API server and a set of worker VMs called Nodes.
First, choose a [Google Cloud Project zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones) to run
your service. For this tutorial, we will be using **us-central1-a**. This is
configured on the command line via:
```shell
gcloud config set compute/zone us-central1-a
```
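To double-check the setting (optional), you can print the active `gcloud` configuration:
```shell
gcloud config list
```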
Now, create a cluster via the `gcloud` command line tool:
```shell
gcloud container clusters create hello-world
```
Alternatively, you can create a cluster via the [Google Cloud Console](https://console.cloud.google.com): *Compute > Container Engine > Container Clusters > New container cluster*. Set the name to **hello-world**, leaving all other options default.
You should get a Kubernetes cluster with three nodes, ready to receive your container image! (This may take a couple of minutes.)
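You can also check on the cluster from the command line (the exact output columns may vary with your `gcloud` version):
```shell
gcloud container clusters list
```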
![image](/images/hellonode/image_11.png)
It's now time to deploy your own containerized application to the Kubernetes cluster! First, make sure `kubectl` is configured to use the cluster you just created:
```shell
gcloud container clusters get-credentials hello-world
```
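To confirm that `kubectl` is now pointed at the new cluster, you can list its Nodes:
```shell
kubectl get nodes
```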
**The rest of this document requires both the Kubernetes client and server version to be 1.3. Run `kubectl version` to see your current versions.** For 1.2 see [this document](https://github.com/kubernetes/kubernetes.github.io/blob/release-1.2/docs/hellonode.md).
## Create your Pod
A Kubernetes **[Pod](/docs/user-guide/pods/)** is a group of containers, tied together for the purposes of administration and networking. It can contain a single container or multiple containers.
Create a Pod with the `kubectl run` command:
```shell
kubectl run hello-node --image=gcr.io/$PROJECT_ID/hello-node:v1 --port=8080
```
The `kubectl run` command created a **[Deployment](/docs/user-guide/deployments/)** object. Deployments are the recommended way to manage the creation and scaling of Pods. In this example, a new Deployment manages a single Pod replica running the *hello-node:v1* image.
To view the Deployment we just created, run:
```shell
kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           3m
```
To view the Pod created by the Deployment, run:
```shell
kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
hello-node-714049816-ztzrb   1/1       Running   0          6m
```
To view the stdout / stderr from a Pod, run the following (the logs will probably be empty at this point, since nothing has sent a request to the app yet):
```shell
kubectl logs <POD-NAME>
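# Tip: if you don't have the Pod name handy, you can look it up by the
# `run=hello-node` label that `kubectl run` applies to the Pods it creates
# (assuming the default labelling behaviour):
kubectl get pods -l run=hello-node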
```

At this point you should have your container running under the control of Kubernetes, but we still have to make it accessible to the outside world.
## Allow external traffic
By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. In order to make the `hello-node` container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes **[Service](/docs/user-guide/services/)**.
From our development machine, we can expose the Pod to the public internet using the `kubectl expose` command combined with the `--type="LoadBalancer"` flag. The flag is needed for the creation of an externally accessible IP:
```shell
kubectl expose deployment hello-node --type="LoadBalancer"
```

The Kubernetes master creates the load balancer and the related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.
To find the IP addresses associated with the Service, run:
```shell
kubectl get services hello-node
NAME         CLUSTER_IP    EXTERNAL_IP   PORT(S)    AGE
hello-node   10.3.246.12                 8080/TCP   23s
```
The `EXTERNAL_IP` may take several minutes to become available and visible. If the `EXTERNAL_IP` is missing, wait a few minutes and try again.
```shell
kubectl get services hello-node
NAME         CLUSTER_IP    EXTERNAL_IP     PORT(S)    AGE
hello-node   10.3.246.12   23.251.159.72   8080/TCP   2m
```
Note there are 2 IP addresses listed, both serving port 8080. `CLUSTER_IP` is only visible inside your cloud virtual network. `EXTERNAL_IP` is externally accessible. In this example, the external IP address is 23.251.159.72.
You should now be able to reach the service by pointing your browser to `http://EXTERNAL_IP:8080`, or by running `curl http://EXTERNAL_IP:8080`.
![image](/images/hellonode/image_12.png)
Assuming you've sent requests to your new webservice via the browser or curl,
you should now be able to see some logs by running:
```shell
kubectl logs <POD-NAME>
```
## Scale up your website
One of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity for your application; you can simply tell the Deployment to manage a new number of replicas for your Pod:
```shell
kubectl scale deployment hello-node --replicas=4
```
You now have four replicas of your application, each running independently on the cluster, with the load balancer you created earlier serving traffic to all of them.
```shell
kubectl get deployment
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   4         4         4            3           40m
```
```shell
kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
hello-node-714049816-g4azy   1/1       Running   0          1m
hello-node-714049816-rk0u6   1/1       Running   0          1m
```

## Roll out an upgrade to your website

As always, the application you deployed to production requires bug fixes or additional features.
First, let's modify the application. On the development machine, edit `server.js` and update the response message:
```javascript
response.end("Hello Kubernetes World!");
response.end('Hello Kubernetes World!');
```
We can now build and publish a new container image to the registry with an incremented tag, as sketched below.
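A minimal sketch of that step, assuming the same Dockerfile and image name as before (now tagged `v2`) and the same `gcloud docker` push helper mentioned earlier:
```shell
docker build -t gcr.io/$PROJECT_ID/hello-node:v2 .
gcloud docker push gcr.io/$PROJECT_ID/hello-node:v2
```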
To update the image label for our running container, we will need to edit the existing *hello-node* Deployment and change the image from `gcr.io/$PROJECT_ID/hello-node:v1` to `gcr.io/$PROJECT_ID/hello-node:v2`. To do this, we will use the `kubectl set image` command.
```shell
kubectl set image deployment/hello-node hello-node=gcr.io/$PROJECT_ID/hello-node:v2
```
This updates the Deployment with the new image, causing new Pods to be created with the new image and old Pods to be deleted.
```shell
kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   4         5         4            3           1h
```
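You can also watch the rollout complete from the command line (assuming your `kubectl` version includes the `rollout` subcommand):
```shell
kubectl rollout status deployment/hello-node
```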
## Observe the Kubernetes Web UI (optional)
Kubernetes comes with a graphical web user interface that is enabled by default with your clusters.
This user interface allows you to get started quickly and enables some of the functionality found in the CLI as a more approachable and discoverable way of interacting with the system.
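At the time of this tutorial, one common way to reach the dashboard was through the API server proxy (a sketch; the exact UI path has changed in later Kubernetes versions):
```shell
kubectl proxy
# then, on the same machine, open http://localhost:8001/ui in a browser
```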
Enjoy the Kubernetes graphical dashboard and use it for deploying containerized applications, as well as for monitoring and managing your clusters!
Learn more about the web interface by taking the [Dashboard tour](/docs/user-guide/ui/).
## Cleaning it Up
That's it for the demo! So you don't leave this all running and incur charges, let's learn how to tear things down.
First, delete the Service and Deployment:

```shell
kubectl delete service,deployment hello-node
```
Delete your cluster:
```shell
gcloud container clusters delete hello-world
```
You should see:
```
The following clusters will be deleted.
- [hello-world] in [us-central1-a]
Do you want to continue (Y/n)?
Deleting cluster hello-world...done.
Deleted [https://container.googleapis.com/v1/projects/<$PROJECT_ID>/zones/us-central1-a/clusters/hello-world].
```
This deletes the Google Compute Engine instances that are running the cluster.
Finally, delete the Docker registry storage bucket hosting your image(s) by using `gsutil`, which should have been installed as part of the `gcloud` installation. For more information on `gsutil`, see [the gsutil documentation](https://cloud.google.com/storage/docs/gsutil).
To list the images we created earlier in the tutorial:
```shell
gsutil ls
```
You should see:
```shell
gs://artifacts.<$PROJECT_ID>.appspot.com/
```
And then to remove all the images under this path, run:
```shell
gsutil rm -r gs://artifacts.<$PROJECT_ID>.appspot.com/
```
You can also delete the entire Google Cloud project, but note that you must first disable billing on the project. Additionally, deleting a project will only happen after the current billing cycle ends.
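If you do decide to delete the whole project, that can also be done from the command line, assuming a Cloud SDK recent enough to include the `projects` command group (and keeping the billing caveat above in mind):
```shell
gcloud projects delete $PROJECT_ID
```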