Consolidate YAML files [part-5] (#9258)

* Consolidate YAML files [part-5]

This PR relocates the YAML files used in the stateless application
guestbook tutorial. It also fixes the list numbering and code block
problems on that page.

Note that neither code blocks nor note callouts render well inside a
numbered list, so this PR moves them out of such lists.

* Update examples_test.go
pull/9261/head
Qiming 2018-07-03 04:06:18 +08:00 committed by k8s-ci-robot
parent 59e626e056
commit ea11ae29ac
8 changed files with 207 additions and 193 deletions


@ -28,12 +28,12 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
Download the following configuration files:
1. [redis-master-deployment.yaml](/examples/application/guestbook/redis-master-deployment.yaml)
1. [redis-master-service.yaml](/examples/application/guestbook/redis-master-service.yaml)
1. [redis-slave-deployment.yaml](/examples/application/guestbook/redis-slave-deployment.yaml)
1. [redis-slave-service.yaml](/examples/application/guestbook/redis-slave-service.yaml)
1. [frontend-deployment.yaml](/examples/application/guestbook/frontend-deployment.yaml)
1. [frontend-service.yaml](/examples/application/guestbook/frontend-service.yaml)
{{% /capture %}}
@ -47,27 +47,34 @@ The guestbook application uses Redis to store its data. It writes its data to a
The manifest file, included below, specifies a Deployment controller that runs a single replica Redis master Pod.
{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}}
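The linked manifest is included via the shortcode above; for orientation, here is a minimal sketch of what such a single-replica Redis master Deployment might look like (the API version, image, labels, and port are illustrative assumptions, not the exact contents of the shipped file):

```yaml
apiVersion: extensions/v1beta1   # apps/v1 in newer clusters
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1                    # run a single Redis master Pod
  template:
    metadata:
      labels:                    # the Service selector matches these labels
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis             # illustrative image; the shipped file pins a specific tag
        ports:
        - containerPort: 6379    # default Redis port
```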
1. Launch a terminal window in the directory you downloaded the manifest files.

1. Apply the Redis Master Deployment from the `redis-master-deployment.yaml` file:

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
   ```

1. Query the list of Pods to verify that the Redis Master Pod is running:

   ```shell
   kubectl get pods
   ```

   The response should be similar to this:

   ```shell
   NAME                            READY     STATUS    RESTARTS   AGE
   redis-master-1068406935-3lswp   1/1       Running   0          28s
   ```

1. Run the following command to view the logs from the Redis Master Pod:

   ```shell
   kubectl logs -f POD-NAME
   ```
{{< note >}}
**Note:** Replace POD-NAME with the name of your Pod.
{{< /note >}}
@ -76,29 +83,32 @@ The manifest file, included below, specifies a Deployment controller that runs a
The guestbook application needs to communicate with the Redis master to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
{{< codenew file="application/guestbook/redis-master-service.yaml" >}}
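The Service routes traffic by label selection. A minimal sketch of such a Service manifest, assuming the same illustrative labels as the Deployment (not necessarily the exact shipped file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379          # port the Service exposes
    targetPort: 6379    # port the Redis container listens on
  selector:             # routes traffic to Pods carrying these labels
    app: redis
    role: master
    tier: backend
```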
1. Apply the Redis Master Service from the `redis-master-service.yaml` file:

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
   ```

1. Query the list of Services to verify that the Redis Master Service is running:

   ```shell
   kubectl get service
   ```

   The response should be similar to this:

   ```shell
   NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
   kubernetes     10.0.0.1     <none>        443/TCP    1m
   redis-master   10.0.0.151   <none>        6379/TCP   8s
   ```

{{< note >}}
**Note:** This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
{{< /note >}}
## Start up the Redis Slaves
@ -110,55 +120,55 @@ Deployments scale based off of the configurations set in the manifest file. In t
If no replicas are running, this Deployment starts the two replicas on your container cluster. Conversely, if more than two replicas are running, it scales down until two replicas are running.
{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}}
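The key difference from the master manifest is the replica count. A sketch of the relevant fragment (field values are illustrative, not the exact shipped file):

```yaml
# illustrative fragment of a redis-slave Deployment spec
spec:
  replicas: 2           # the Deployment keeps exactly two slave Pods running,
                        # creating or deleting Pods until the observed count matches
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
```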
1. Apply the Redis Slave Deployment from the `redis-slave-deployment.yaml` file:

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
   ```

1. Query the list of Pods to verify that the Redis Slave Pods are running:

   ```shell
   kubectl get pods
   ```

   The response should be similar to this:

   ```shell
   NAME                            READY     STATUS              RESTARTS   AGE
   redis-master-1068406935-3lswp   1/1       Running             0          1m
   redis-slave-2005841000-fpvqc    0/1       ContainerCreating   0          6s
   redis-slave-2005841000-phfv9    0/1       ContainerCreating   0          6s
   ```
### Creating the Redis Slave Service
The guestbook application needs to communicate with Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
{{< codenew file="application/guestbook/redis-slave-service.yaml" >}}
1. Apply the Redis Slave Service from the `redis-slave-service.yaml` file:

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
   ```

1. Query the list of Services to verify that the Redis Slave Service is running:

   ```shell
   kubectl get services
   ```

   The response should be similar to this:

   ```shell
   NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
   kubernetes     10.0.0.1     <none>        443/TCP    2m
   redis-master   10.0.0.151   <none>        6379/TCP   1m
   redis-slave    10.0.0.223   <none>        6379/TCP   6s
   ```
## Set up and Expose the Guestbook Frontend
@ -166,28 +176,28 @@ The guestbook application has a web frontend serving the HTTP requests written i
### Creating the Guestbook Frontend Deployment
{{< codenew file="application/guestbook/frontend-deployment.yaml" >}}

1. Apply the frontend Deployment from the `frontend-deployment.yaml` file:

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
   ```

1. Query the list of Pods to verify that the three frontend replicas are running:

   ```shell
   kubectl get pods -l app=guestbook -l tier=frontend
   ```

   The response should be similar to this:

   ```shell
   NAME                        READY     STATUS    RESTARTS   AGE
   frontend-3823415956-dsvc5   1/1       Running   0          54s
   frontend-3823415956-k22zn   1/1       Running   0          54s
   frontend-3823415956-w9gbt   1/1       Running   0          54s
   ```
### Creating the Frontend Service
@ -199,29 +209,29 @@ If you want guests to be able to access your guestbook, you must configure the f
**Note:** Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use one, delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
{{< /note >}}
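To make the note above concrete, the relevant fragment of a frontend Service manifest might look like this (illustrative; see the shipped `frontend-service.yaml` for the actual contents):

```yaml
# illustrative fragment of a frontend Service spec
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```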
{{< codenew file="application/guestbook/frontend-service.yaml" >}}

1. Apply the frontend Service from the `frontend-service.yaml` file:

   ```shell
   kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
   ```

1. Query the list of Services to verify that the frontend Service is running:

   ```shell
   kubectl get services
   ```

   The response should be similar to this:

   ```shell
   NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
   frontend       10.0.0.112   <none>        80:31323/TCP   6s
   kubernetes     10.0.0.1     <none>        443/TCP        4m
   redis-master   10.0.0.151   <none>        6379/TCP       2m
   redis-slave    10.0.0.223   <none>        6379/TCP       1m
   ```
### Viewing the Frontend Service via `NodePort`
@ -229,17 +239,17 @@ If you deployed this application to Minikube or a local cluster, you need to fin
1. Run the following command to get the IP address for the frontend Service:

   ```shell
   minikube service frontend --url
   ```

   The response should be similar to this:

   ```
   http://192.168.99.100:31323
   ```

1. Copy the IP address, and load the page in your browser to view your guestbook.
### Viewing the Frontend Service via `LoadBalancer`
@ -247,18 +257,18 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y
1. Run the following command to get the IP address for the frontend Service:

   ```shell
   kubectl get service frontend
   ```

   The response should be similar to this:

   ```
   NAME       CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
   frontend   10.51.242.136   109.197.92.229   80:32372/TCP   1m
   ```

1. Copy the external IP address, and load the page in your browser to view your guestbook.
## Scale the Web Frontend
@ -266,52 +276,52 @@ Scaling up or down is easy because your servers are defined as a Service that us
1. Run the following command to scale up the number of frontend Pods:
   ```shell
   kubectl scale deployment frontend --replicas=5
   ```

1. Query the list of Pods to verify the number of frontend Pods running:

   ```shell
   kubectl get pods
   ```

   The response should look similar to this:

   ```shell
   NAME                            READY     STATUS    RESTARTS   AGE
   frontend-3823415956-70qj5       1/1       Running   0          5s
   frontend-3823415956-dsvc5       1/1       Running   0          54m
   frontend-3823415956-k22zn       1/1       Running   0          54m
   frontend-3823415956-w9gbt       1/1       Running   0          54m
   frontend-3823415956-x2pld       1/1       Running   0          5s
   redis-master-1068406935-3lswp   1/1       Running   0          56m
   redis-slave-2005841000-fpvqc    1/1       Running   0          55m
   redis-slave-2005841000-phfv9    1/1       Running   0          55m
   ```

1. Run the following command to scale down the number of frontend Pods:

   ```shell
   kubectl scale deployment frontend --replicas=2
   ```

1. Query the list of Pods to verify the number of frontend Pods running:

   ```shell
   kubectl get pods
   ```

   The response should look similar to this:

   ```shell
   NAME                            READY     STATUS    RESTARTS   AGE
   frontend-3823415956-k22zn       1/1       Running   0          1h
   frontend-3823415956-w9gbt       1/1       Running   0          1h
   redis-master-1068406935-3lswp   1/1       Running   0          1h
   redis-slave-2005841000-fpvqc    1/1       Running   0          1h
   redis-slave-2005841000-phfv9    1/1       Running   0          1h
   ```
{{% /capture %}}
@ -320,36 +330,36 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
1. Run the following commands to delete all Pods, Deployments, and Services:

   ```shell
   kubectl delete deployment -l app=redis
   kubectl delete service -l app=redis
   kubectl delete deployment -l app=guestbook
   kubectl delete service -l app=guestbook
   ```

   The responses should be:

   ```
   deployment "redis-master" deleted
   deployment "redis-slave" deleted
   service "redis-master" deleted
   service "redis-slave" deleted
   deployment "frontend" deleted
   service "frontend" deleted
   ```

1. Query the list of Pods to verify that no Pods are running:

   ```shell
   kubectl get pods
   ```

   The response should be this:

   ```
   No resources found.
   ```
{{% /capture %}}
{{% capture whatsnext %}}


@ -463,6 +463,18 @@ func TestExampleObjectSchemas(t *testing.T) {
"nginx-with-request": {&extensions.Deployment{}},
"shell-demo": {&api.Pod{}},
},
"examples/application/guestbook": {
"frontend-deployment": {&extensions.Deployment{}},
"frontend-service": {&api.Service{}},
"redis-master-deployment": {&extensions.Deployment{}},
"redis-master-service": {&api.Service{}},
"redis-slave-deployment": {&extensions.Deployment{}},
"redis-slave-service": {&api.Service{}},
},
"docs/tasks/run-application": {
"deployment-patch-demo": {&extensions.Deployment{}},
"hpa-php-apache": {&autoscaling.HorizontalPodAutoscaler{}},
},
"examples/debug": {
"counter-pod": {&api.Pod{}},
"event-exporter": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.Deployment{}},
@ -507,14 +519,6 @@ func TestExampleObjectSchemas(t *testing.T) {
"mysql-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}},
"wordpress-deployment": {&api.Service{}, &api.PersistentVolumeClaim{}, &extensions.Deployment{}},
},
"docs/tutorials/stateless-application/guestbook": {
"frontend-deployment": {&extensions.Deployment{}},
"frontend-service": {&api.Service{}},
"redis-master-deployment": {&extensions.Deployment{}},
"redis-master-service": {&api.Service{}},
"redis-slave-deployment": {&extensions.Deployment{}},
"redis-slave-service": {&api.Service{}},
},
}
// Note a key in the following map has to be complete relative path